Victor Quétu
LTCI, Télécom Paris, Institut Polytechnique de Paris
Verified email at telecom-paris.fr
Title
Cited by
Year
DSD²: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?
V Quétu, E Tartaglione
Proceedings of the AAAI Conference on Artificial Intelligence 38 (13), 14749 …, 2024
Cited by 7* · 2024
Can Unstructured Pruning Reduce the Depth in Deep Neural Networks?
Z Liao, V Quétu, VT Nguyen, E Tartaglione
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
Cited by 5 · 2023
Can we avoid Double Descent in Deep Neural Networks?
V Quétu, E Tartaglione
2023 IEEE International Conference on Image Processing (ICIP), 1625-1629, 2023
Cited by 4* · 2023
The Simpler The Better: An Entropy-Based Importance Metric To Reduce Neural Networks' Depth
V Quétu, Z Liao, E Tartaglione
arXiv preprint arXiv:2404.18949, 2024
Cited by 2 · 2024
Sparse Double Descent in Vision Transformers: real or phantom threat?
V Quétu, M Milovanović, E Tartaglione
International Conference on Image Analysis and Processing, 490-502, 2023
Cited by 2 · 2023
NEPENTHE: Entropy-Based Pruning as a Neural Network Depth's Reducer
Z Liao, V Quétu, VT Nguyen, E Tartaglione
arXiv preprint arXiv:2404.16890, 2024
Cited by 1 · 2024
Disentangling private classes through regularization
E Tartaglione, F Gennari, V Quétu, M Grangetto
Neurocomputing 554, 126612, 2023
Cited by 1 · 2023
LaCoOT: Layer Collapse through Optimal Transport
V Quétu, N Hezbri, E Tartaglione
arXiv preprint arXiv:2406.08933, 2024
2024
The Quest of Finding the Antidote to Sparse Double Descent
V Quétu, M Milovanović
SCEFA Workshop in conjunction with ECML PKDD 2023, 2023
2023