MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? R. Zhang, D. Jiang, Y. Zhang, H. Lin, Z. Guo, P. Qiu, A. Zhou, P. Lu, K.-W. Chang, et al. ECCV 2024.
Plug-and-Play: An Efficient Post-training Pruning Method for Large Language Models. Y. Zhang, H. Bai, H. Lin, J. Zhao, L. Hou, C. V. Cannistraci. The Twelfth International Conference on Learning Representations (ICLR), 2024.
MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-wise Pruning Error Metric. H. Lin, H. Bai, Z. Liu, L. Hou, M. Sun, L. Song, Y. Wei, Z. Sun. CVPR 2024.
IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact. R. Liu, H. Bai, H. Lin, Y. Li, H. Gao, Z. Xu, L. Hou, J. Yao, C. Yuan. Findings of ACL 2024.
Rotation and Permutation for Advanced Outlier Management and Efficient Quantization of LLMs. H. Lin, H. Xu, Y. Wu, J. Cui, Y. Zhang, L. Mou, L. Song, Z. Sun, Y. Wei. arXiv preprint arXiv:2406.01721, 2024.