Be your own teacher: Improve the performance of convolutional neural networks via self distillation. L Zhang, J Song, A Gao, J Chen, C Bao, K Ma. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019. Cited by: 902.
Improve Object Detection with Feature-based Knowledge Distillation: Towards Accurate and Efficient Detectors. L Zhang, K Ma. The Ninth International Conference on Learning Representations (ICLR), 2021. Cited by: 191.
Self-distillation: Towards efficient and compact neural networks. L Zhang, C Bao, K Ma. IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (8), 4388-4403, 2021. Cited by: 165.
Non-structured DNN weight pruning—Is it beneficial in any platform? X Ma, S Lin, S Ye, Z He, L Zhang, G Yuan, SH Tan, Z Li, D Fan, X Qian, et al. IEEE Transactions on Neural Networks and Learning Systems 33 (9), 4930-4944, 2021. Cited by: 100.
Wavelet Knowledge Distillation: Towards Efficient Image-to-Image Translation. L Zhang, X Chen, X Tu, P Wan, N Xu, K Ma. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. Cited by: 75.
SCAN: A scalable neural networks framework towards compact and efficient models. L Zhang, Z Tan, J Song, J Chen, C Bao, K Ma. Advances in Neural Information Processing Systems (NeurIPS), 2019. Cited by: 74.
Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning? R Dong, Z Qi, L Zhang, J Zhang, J Sun, Z Ge, L Yi, K Ma. International Conference on Learning Representations (ICLR), 2023. Cited by: 66.
Fine-grained emotion classification of Chinese microblogs based on graph convolution networks. Y Lai, L Zhang, D Han, R Zhou, G Wang. World Wide Web 23, 2771-2787, 2020. Cited by: 65.
Task-oriented feature distillation. L Zhang, Y Shi, Z Shi, K Ma, C Bao. Advances in Neural Information Processing Systems (NeurIPS) 33, 14759-14771, 2020. Cited by: 48.
Auxiliary training: Towards accurate and robust models. L Zhang, M Yu, T Chen, Z Shi, C Bao, K Ma. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. Cited by: 44.
StructADMM: A systematic, high-efficiency framework of structured weight pruning for DNNs. T Zhang, S Ye, K Zhang, X Ma, N Liu, L Zhang, J Tang, K Ma, X Lin, et al. arXiv preprint arXiv:1807.11091, 2018. Cited by: 35.
Contrastive Deep Supervision. L Zhang, X Chen, J Zhang, R Dong, K Ma. European Conference on Computer Vision (ECCV), 2022. Cited by: 34.
PointDistiller: Structured knowledge distillation towards efficient and compact 3D detection. L Zhang, R Dong, HS Tai, K Ma. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. Cited by: 32.
Non-structured DNN weight pruning considered harmful. Y Wang, S Ye, Z He, X Ma, L Zhang, S Lin, G Yuan, SH Tan, Z Li, D Fan, et al. arXiv preprint arXiv:1907.02124, 2019. Cited by: 14.
Finding the Task-Optimal Low-Bit Sub-Distribution in Deep Neural Networks. R Dong, Z Tan, M Wu, L Zhang, K Ma. International Conference on Machine Learning (ICML), 2022. Cited by: 11.
Region-aware knowledge distillation for efficient image-to-image translation. L Zhang, X Chen, R Dong, K Ma. The 34th British Machine Vision Conference (BMVC), 2023. Cited by: 10.
SMART: Screen-based gesture recognition on commodity mobile devices. Z Liao, Z Luo, Q Huang, L Zhang, F Wu, Q Zhang, Y Wang. Proceedings of the 27th Annual International Conference on Mobile Computing and Networking (MobiCom), 2021. Cited by: 10.
A Good Data Augmentation Policy Is Not All You Need: A Multi-Task Learning Perspective. L Zhang, K Ma. IEEE Transactions on Circuits and Systems for Video Technology, 2022. Cited by: 9.
Structured knowledge distillation for accurate and efficient object detection. L Zhang, K Ma. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023. Cited by: 8.
Multi-frequency representation enhancement with privilege information for video super-resolution. F Li, L Zhang, Z Liu, J Lei, Z Li. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023. Cited by: 6.