Chaoyue Liu
Purdue University, ECE department
Verified email at purdue.edu - Homepage
Title · Cited by · Year
Loss landscapes and optimization in over-parameterized non-linear systems and neural networks
C Liu, L Zhu, M Belkin
Applied and Computational Harmonic Analysis 59, 85-116, 2022
224 · 2022
On the linearity of large non-linear models: when and why the tangent kernel is constant
C Liu, L Zhu, M Belkin
Advances in Neural Information Processing Systems 33, 15954-15964, 2020
160 · 2020
Accelerating sgd with momentum for over-parameterized learning
C Liu, M Belkin
International Conference on Learning Representations, 2020
110* · 2020
Toward a theory of optimization for over-parameterized systems of non-linear equations: the lessons of deep learning
C Liu, L Zhu, M Belkin
arXiv preprint arXiv:2003.00307, 2020
90 · 2020
Quadratic models for understanding catapult dynamics of neural networks
L Zhu, C Liu, A Radhakrishnan, M Belkin
The Twelfth International Conference on Learning Representations, 2024
19* · 2024
Clustering with Bregman divergences: an asymptotic analysis
C Liu, M Belkin
Advances in neural information processing systems 29, 2016
18 · 2016
Catapults in SGD: spikes in the training loss and their impact on generalization through feature learning
L Zhu, C Liu, A Radhakrishnan, M Belkin
The Forty-first International Conference on Machine Learning (ICML), 2024
11 · 2024
Aiming towards the minimizers: fast convergence of SGD for overparametrized problems
C Liu, D Drusvyatskiy, M Belkin, D Davis, YA Ma
Conference on Neural Information Processing Systems (NeurIPS), 2023
8 · 2023
Transition to linearity of general neural networks with directed acyclic graph architecture
L Zhu, C Liu, M Belkin
Advances in neural information processing systems 35, 5363-5375, 2022
6 · 2022
Two-Sided Wasserstein Procrustes Analysis.
K Jin, C Liu, C Xia
IJCAI, 3515-3521, 2021
5 · 2021
Parametrized accelerated methods free of condition number
C Liu, M Belkin
arXiv preprint arXiv:1802.10235, 2018
5 · 2018
ReLU soothes the NTK condition number and accelerates optimization for wide neural networks
C Liu, L Hui
arXiv preprint arXiv:2305.08813, 2023
4 · 2023
Transition to Linearity of Wide Neural Networks is an Emerging Property of Assembling Weak Models
C Liu, L Zhu, M Belkin
International Conference on Learning Representations, 2022
4 · 2022
On emergence of clean-priority learning in early stopped neural networks
C Liu, A Abedsoltan, M Belkin
arXiv preprint arXiv:2306.02533, 2023
2 · 2023
Otda: a unsupervised optimal transport framework with discriminant analysis for keystroke inference
K Jin, C Liu, C Xia
2020 IEEE Conference on Communications and Network Security (CNS), 1-9, 2020
1 · 2020
On the Predictability of Fine-grained Cellular Network Throughput using Machine Learning Models
O Basit, P Dinh, I Khan, ZJ Kong, YC Hu, D Koutsonikolas, M Lee, C Liu
IEEE MASS, 2024
— · 2024
SGD batch saturation for training wide neural networks
C Liu, D Drusvyatskiy, M Belkin, D Davis, Y Ma
NeurIPS Optimization for Machine Learning workshop, 2023
— · 2023
Understanding and Accelerating the Optimization of Modern Machine Learning
C Liu
The Ohio State University, 2021
— · 2021
Articles 1–18