Towards Understanding Regularization in Batch Normalization. P Luo*, X Wang*, W Shao*, Z Peng (*equal contribution). ICLR 2019. Cited by 244.
GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest. S Zhang, P Sun, S Chen, M Xiao, W Shao, W Zhang, K Chen, P Luo. arXiv preprint arXiv:2307.03601, 2023. Cited by 107.
LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models. P Xu, W Shao, K Zhang, P Gao, S Liu, M Lei, F Meng, S Huang, Y Qiao, et al. arXiv preprint arXiv:2306.09265, 2023. Cited by 98.
SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-Modal Large Language Models. Z Lin, C Liu, R Zhang, P Gao, L Qiu, H Xiao, H Qiu, C Lin, W Shao, et al. arXiv preprint arXiv:2311.07575, 2023. Cited by 90.
What Makes for End-to-End Object Detection? P Sun, Y Jiang, E Xie, W Shao, Z Yuan, C Wang, P Luo. ICML 2021, pp. 9934-9944. Cited by 84.
SSN: Learning Sparse Switchable Normalization via SparsestMax. W Shao, J Li, J Ren, R Zhang, X Wang, P Luo. International Journal of Computer Vision (IJCV), 2019. Cited by 70.
SSN: Learning Sparse Switchable Normalization via SparsestMax. W Shao*, T Meng*, J Li, R Zhang, Y Li, X Wang, et al. CVPR 2019 (arXiv:1903.03793). Cited by 70.
OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models. W Shao, M Chen, Z Zhang, P Xu, L Zhao, Z Li, K Zhang, P Gao, Y Qiao, et al. arXiv preprint arXiv:2308.13137, 2023. Cited by 59.
ImageBind-LLM: Multi-Modality Instruction Tuning. J Han, R Zhang, W Shao, P Gao, P Xu, H Xiao, K Zhang, C Liu, S Wen, et al. arXiv preprint arXiv:2309.03905, 2023. Cited by 53.
Rethinking the Pruning Criteria for Convolutional Neural Network. Z Huang, W Shao, X Wang, L Lin, P Luo. Advances in Neural Information Processing Systems (NeurIPS) 34, pp. 16305-16318, 2021. Cited by 45.
Differentiable Learning-to-Group Channels via Groupable Convolutional Neural Networks. Z Zhang, J Li, W Shao, Z Peng, R Zhang, X Wang, et al. ICCV 2019. Cited by 43.
SPHINX-X: Scaling Data and Parameters for a Family of Multi-Modal Large Language Models. P Gao, R Zhang, C Liu, L Qiu, S Huang, W Lin, S Zhao, S Geng, Z Lin, et al. arXiv preprint arXiv:2402.05935, 2024. Cited by 30.
Differentiable Dynamic Quantization with Mixed Precision and Adaptive Resolution. Z Zhang, W Shao, J Gu, X Wang, P Luo. ICML 2021. Cited by 28.
Differentiable Dynamic Normalization for Learning Deep Representation. P Luo, Z Peng, W Shao, R Zhang, J Ren, L Wu. ICML 2019, http://proceedings.mlr.press/v97/luo19a.html. Cited by 26.
Learning Efficient Detector with Semi-Supervised Adaptive Distillation. S Tang, L Feng, W Shao, Z Kuang, W Zhang, Y Chen. BMVC 2019 (arXiv:1901.00366). Cited by 22.
Tiny LVLM-eHub: Early Multimodal Experiments with Bard. W Shao, Y Hu, P Gao, M Lei, K Zhang, F Meng, P Xu, S Huang, H Li, et al. arXiv preprint arXiv:2308.03729, 2023. Cited by 21.
Convolution-Weight-Distribution Assumption: Rethinking the Criteria of Channel Pruning. Z Huang*, W Shao*, X Wang, L Lin, P Luo. NeurIPS 2021 (arXiv:2004.11627, 2020). Cited by 20.
Not All Models Are Equal: Predicting Model Transferability in a Self-Challenging Fisher Space. W Shao, X Zhao, Y Ge, Z Zhang, L Yang, X Wang, Y Shan, P Luo. ECCV 2022 (arXiv:2207.03036). Cited by 19.
Channel Equilibrium Networks for Learning Deep Representation. W Shao, S Tang, X Pan, P Tan, X Wang, P Luo. ICML 2020 (arXiv:2003.00214). Cited by 19.
DiffRate: Differentiable Compression Rate for Efficient Vision Transformers. M Chen, W Shao, P Xu, M Lin, K Zhang, F Chao, R Ji, Y Qiao, P Luo. ICCV 2023 (arXiv:2305.17997). Cited by 17.