DSD²: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free? V. Quétu, E. Tartaglione. Proceedings of the AAAI Conference on Artificial Intelligence 38 (13), 14749 …, 2024. Cited by 7*.
Can Unstructured Pruning Reduce the Depth in Deep Neural Networks? Z. Liao, V. Quétu, V. T. Nguyen, E. Tartaglione. Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023. Cited by 5.
Can we avoid Double Descent in Deep Neural Networks? V. Quétu, E. Tartaglione. 2023 IEEE International Conference on Image Processing (ICIP), 1625-1629, 2023. Cited by 4*.
The Simpler The Better: An Entropy-Based Importance Metric To Reduce Neural Networks' Depth. V. Quétu, Z. Liao, E. Tartaglione. arXiv preprint arXiv:2404.18949, 2024. Cited by 2.
Sparse Double Descent in Vision Transformers: real or phantom threat? V. Quétu, M. Milovanović, E. Tartaglione. International Conference on Image Analysis and Processing, 490-502, 2023. Cited by 2.
NEPENTHE: Entropy-Based Pruning as a Neural Network Depth's Reducer. Z. Liao, V. Quétu, V. T. Nguyen, E. Tartaglione. arXiv preprint arXiv:2404.16890, 2024. Cited by 1.
Disentangling private classes through regularization. E. Tartaglione, F. Gennari, V. Quétu, M. Grangetto. Neurocomputing 554, 126612, 2023. Cited by 1.
LaCoOT: Layer Collapse through Optimal Transport. V. Quétu, N. Hezbri, E. Tartaglione. arXiv preprint arXiv:2406.08933, 2024.
The Quest of Finding the Antidote to Sparse Double Descent. V. Quétu, M. Milovanović. SCEFA Workshop in conjunction with ECML PKDD 2023, 2023.