More convnets in the 2020s: Scaling up kernels beyond 51x51 using sparsity S Liu, T Chen, X Chen, X Chen, Q Xiao, B Wu, M Pechenizkiy, D Mocanu, ... ICLR2023, The International Conference on Learning Representations, 2023 | 136 | 2023 |
Do we actually need dense over-parameterization? In-time over-parameterization in sparse training S Liu, L Yin, DC Mocanu, M Pechenizkiy ICML2021, International Conference on Machine Learning, 2021 | 111 | 2021 |
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration S Liu, T Chen, X Chen, Z Atashgahi, L Yin, H Kou, L Shen, M Pechenizkiy, ... NeurIPS2021, Advances in Neural Information Processing Systems, 2021 | 109 | 2021 |
Sparse evolutionary deep learning with over one million artificial neurons on commodity hardware S Liu, DC Mocanu, ARR Matavalam, Y Pei, M Pechenizkiy Neural Computing and Applications 33, 2589-2604, 2021 | 92 | 2021 |
The unreasonable effectiveness of random pruning: Return of the most naive baseline for sparse training S Liu, T Chen, X Chen, L Shen, DC Mocanu, Z Wang, M Pechenizkiy ICLR2022, The International Conference on Learning Representations, 2022 | 89 | 2022 |
Selfish sparse RNN training S Liu, DC Mocanu, Y Pei, M Pechenizkiy ICML2021, International Conference on Machine Learning, 2021 | 53* | 2021 |
Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity S Liu, T Chen, Z Atashgahi, X Chen, G Sokar, E Mocanu, M Pechenizkiy, ... ICLR2022, The International Conference on Learning Representations, 2022 | 51 | 2021 |
Topological Insights into Sparse Neural Networks S Liu, T Van der Lee, A Yaman, Z Atashgahi, D Ferraro, G Sokar, ... ECML2020, European Conference on Machine Learning, 2020 | 34 | 2020 |
Efficient and effective training of sparse recurrent neural networks S Liu, I Ni’mah, V Menkovski, DC Mocanu, M Pechenizkiy Neural Computing and Applications 33, 9625-9636, 2021 | 30 | 2021 |
Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers T Chen, Z Zhang, A Jaiswal, S Liu, Z Wang ICLR2023, The International Conference on Learning Representations, 2023 | 25 | 2023 |
Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! S Liu, T Chen, Z Zhang, X Chen, T Huang, A Jaiswal, Z Wang ICLR2023, The International Conference on Learning Representations, 2023 | 24 | 2023 |
Achieving personalized federated learning with sparse local models T Huang, S Liu, L Shen, F He, W Lin, D Tao arXiv preprint arXiv:2201.11380, 2022 | 24 | 2022 |
A Brain-inspired Algorithm for Training Highly Sparse Neural Networks Z Atashgahi, J Pieterse, S Liu, DC Mocanu, R Veldhuis, M Pechenizkiy Machine Learning Journal (ECML-PKDD 2022 journal track), 2019 | 24* | 2019 |
Revisiting pruning at initialization through the lens of Ramanujan graph DNM Hoang, S Liu, R Marculescu, Z Wang ICLR2023, The International Conference on Learning Representations, 2023 | 21 | 2023 |
The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter A Jaiswal, S Liu, T Chen, Z Wang NeurIPS2023, 37th Annual Conference on Neural Information Processing Systems, 2023 | 20 | 2023 |
Dynamic Sparse Network for Time Series Classification: Learning What to “See” Q Xiao, B Wu, Y Zhang, S Liu, M Pechenizkiy, E Mocanu, DC Mocanu NeurIPS2022, 36th Annual Conference on Neural Information Processing Systems, 2022 | 20 | 2022 |
Adamerging: Adaptive model merging for multi-task learning E Yang, Z Wang, L Shen, S Liu, G Guo, X Wang, D Tao ICLR2024, The International Conference on Learning Representations, 2024 | 17* | 2024 |
Ten lessons we have learned in the new "sparseland": A short handbook for sparse neural network researchers S Liu, Z Wang arXiv preprint arXiv:2302.02596, 2023 | 17 | 2023 |
Outlier Weighed Layerwise sparsity (OWL): A missing secret sauce for pruning LLMs to high sparsity L Yin, Y Wu, Z Zhang, CY Hsieh, Y Wang, Y Jia, G Li, A Jaiswal, ... ICML2024, International Conference on Machine Learning, 2024 | 16 | 2024 |
Dynamic sparse no training: Training-free fine-tuning for sparse LLMs Y Zhang, L Zhao, M Lin, Y Sun, Y Yao, X Han, J Tanner, S Liu, R Ji ICLR2024, The International Conference on Learning Representations, 2024 | 16 | 2024 |