Disentangled graph collaborative filtering. X Wang, H Jin, A Zhang, X He, T Xu, TS Chua. Proceedings of the 43rd international ACM SIGIR conference on research and …, 2020. Cited by: 477.
Harnessing the power of LLMs in practice: A survey on ChatGPT and beyond. J Yang, H Jin, R Tang, X Han, Q Feng, H Jiang, S Zhong, B Yin, X Hu. ACM Transactions on Knowledge Discovery from Data 18 (6), 1-32, 2024. Cited by: 368.
LLM Maybe LongLM: Self-extend LLM context window without tuning. H Jin, X Han, J Yang, Z Jiang, Z Liu, CY Chang, H Chen, X Hu. arXiv preprint arXiv:2401.01325, 2024. Cited by: 28.
KIVI: A tuning-free asymmetric 2bit quantization for KV cache. Z Liu, J Yuan, H Jin, S Zhong, Z Xu, V Braverman, B Chen, X Hu. arXiv preprint arXiv:2402.02750, 2024. Cited by: 16.
Retiring ΔDP: New Distribution-Level Metrics for Demographic Parity. X Han, Z Jiang, H Jin, Z Liu, N Zou, Q Wang, X Hu. arXiv preprint arXiv:2301.13443, 2023. Cited by: 15.
Weight perturbation can help fairness under distribution shift. Z Jiang, X Han, H Jin, G Wang, N Zou, X Hu. arXiv preprint arXiv:2303.03300, 2023. Cited by: 10.
Chasing fairness under distribution shift: A model weight perturbation approach. ZS Jiang, X Han, H Jin, G Wang, R Chen, N Zou, X Hu. Advances in Neural Information Processing Systems 36, 2024. Cited by: 5.
GrowLength: Accelerating LLMs pretraining by progressively growing training length. H Jin, X Han, J Yang, Z Jiang, CY Chang, X Hu. arXiv preprint arXiv:2310.00576, 2023. Cited by: 5.
Towards mitigating dimensional collapse of representations in collaborative filtering. H Chen, V Lai, H Jin, Z Jiang, M Das, X Hu. Proceedings of the 17th ACM International Conference on Web Search and Data …, 2024. Cited by: 2.
Exposing Model Theft: A Robust and Transferable Watermark for Thwarting Model Extraction Attacks. R Tang, H Jin, M Du, C Wigington, R Jain, X Hu. Proceedings of the 32nd ACM International Conference on Information and …, 2023. Cited by: 1.
Was my model stolen? Feature sharing for robust and transferable watermarks. R Tang, H Jin, C Wigington, M Du, R Jain, X Hu. 2021. Cited by: 1.
KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches. J Yuan, H Liu, YN Chuang, S Li, G Wang, D Le, H Jin, V Chaudhary, Z Xu, ... arXiv preprint arXiv:2407.01527, 2024.
Secured Weight Release for Large Language Models via Taylor Expansion. G Wang, YN Chuang, R Tang, S Zhong, J Yuan, H Jin, Z Liu, ...