Yatin Dandi
Verified email at iitk.ac.in
Title · Cited by · Year
How two-layer neural networks learn, one (giant) step at a time
Y Dandi, F Krzakala, B Loureiro, L Pesce, L Stephan
arXiv preprint arXiv:2305.18270, 2023
25 · 2023
Implicit gradient alignment in distributed and federated learning
Y Dandi, L Barba, M Jaggi
Proceedings of the AAAI Conference on Artificial Intelligence 36 (6), 6454-6462, 2022
21 · 2022
Data-heterogeneity-aware mixing for decentralized learning
Y Dandi, A Koloskova, M Jaggi, SU Stich
arXiv preprint arXiv:2204.06477, 2022
17 · 2022
Universality laws for Gaussian mixtures in generalized linear models
Y Dandi, L Stephan, F Krzakala, B Loureiro, L Zdeborová
Advances in Neural Information Processing Systems 36, 2024
16 · 2024
Sampling with flows, diffusion, and autoregressive neural networks from a spin-glass perspective
D Ghio, Y Dandi, F Krzakala, L Zdeborová
Proceedings of the National Academy of Sciences 121 (27), e2311810121, 2024
11 · 2024
The benefits of reusing batches for gradient descent in two-layer networks: Breaking the curse of information and leap exponents
Y Dandi, E Troiani, L Arnaboldi, L Pesce, L Zdeborová, F Krzakala
arXiv preprint arXiv:2402.03220, 2024
10 · 2024
Jointly trained image and video generation using residual vectors
Y Dandi, A Das, S Singhal, V Namboodiri, P Rai
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2020
8 · 2020
Asymptotics of feature learning in two-layer networks after one gradient-step
H Cui, L Pesce, Y Dandi, F Krzakala, YM Lu, L Zdeborová, B Loureiro
arXiv preprint arXiv:2402.04980, 2024
6 · 2024
Generalized Adversarially Learned Inference
Y Dandi, H Bharadhwaj, A Kumar, P Rai
AAAI, 2021
6 · 2020
Maximally-stable local optima in random graphs and spin glasses: Phase transitions and universality
Y Dandi, D Gamarnik, L Zdeborová
arXiv preprint arXiv:2305.03591, 2023
4 · 2023
Understanding Layer-wise Contributions in Deep Neural Networks through Spectral Analysis
Y Dandi, A Jacot
arXiv preprint arXiv:2111.03972, 2021
3 · 2021
Repetita iuvant: Data repetition allows SGD to learn high-dimensional multi-index functions
L Arnaboldi, Y Dandi, F Krzakala, L Pesce, L Stephan
arXiv preprint arXiv:2405.15459, 2024
2 · 2024
Model-Agnostic Learning to Meta-Learn
A Devos, Y Dandi
NeurIPS pre-registration workshop, 2020
2 · 2020
Online Learning and Information Exponents: On The Importance of Batch size, and Time/Complexity Tradeoffs
L Arnaboldi, Y Dandi, F Krzakala, B Loureiro, L Pesce, L Stephan
arXiv preprint arXiv:2406.02157, 2024
1 · 2024
Fundamental limits of weak learnability in high-dimensional multi-index models
E Troiani, Y Dandi, L Defilippis, L Zdeborová, B Loureiro, F Krzakala
arXiv preprint arXiv:2405.15480, 2024
2024
A Gentle Introduction to Gradient-Based Optimization and Variational Inequalities for Machine Learning
NS Wadia, Y Dandi, MI Jordan
arXiv preprint arXiv:2309.04877, 2023
2023
NeurInt: Learning to Interpolate through Neural ODEs
A Bose, A Das, Y Dandi, P Rai
arXiv preprint arXiv:2111.04123, 2021
2021
How Two-Layer Networks Learn, One (Giant) Step at a Time
Y Dandi, F Krzakala, B Loureiro, L Pesce, L Stephan
Learning from setbacks: the impact of adversarial initialization on generalization performance
K Ravichandran, Y Dandi, S Karp, F Mignacco
NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning
Group ID U13825 Affiliated authors Alarcon, Angeles
F Behrens, L Biggio, LA Clarte, HC Cui, G Dalle, Y Dandi, O Duranthon, ...
Articles 1–20