Universality laws for high-dimensional learning with random features

H Hu, YM Lu - IEEE Transactions on Information Theory, 2022 - ieeexplore.ieee.org
We prove a universality theorem for learning with random features. Our result shows that, in
terms of training and generalization errors, a random feature model with a nonlinear …

Neural networks as kernel learners: The silent alignment effect

A Atanasov, B Bordelon, C Pehlevan - arXiv preprint arXiv:2111.00034, 2021 - arxiv.org
Neural networks in the lazy training regime converge to kernel machines. Can neural
networks in the rich feature learning regime learn a kernel machine with a data-dependent …

Learning gaussian mixtures with generalized linear models: Precise asymptotics in high-dimensions

B Loureiro, G Sicuro, C Gerbelot… - Advances in …, 2021 - proceedings.neurips.cc
Generalised linear models for multi-class classification problems are one of the fundamental
building blocks of modern machine learning tasks. In this manuscript, we characterise the …

Generalization error rates in kernel regression: The crossover from the noiseless to noisy regime

H Cui, B Loureiro, F Krzakala… - Advances in Neural …, 2021 - proceedings.neurips.cc
In this manuscript we consider Kernel Ridge Regression (KRR) under the Gaussian design.
Exponents for the decay of the excess generalization error of KRR have been reported in …

Graph-based approximate message passing iterations

C Gerbelot, R Berthier - Information and Inference: A Journal of …, 2023 - academic.oup.com
Approximate message passing (AMP) algorithms have become an important element of high-
dimensional statistical inference, mostly due to their adaptability and concentration …

Fluctuations, bias, variance & ensemble of learners: Exact asymptotics for convex losses in high-dimension

B Loureiro, C Gerbelot, M Refinetti… - International …, 2022 - proceedings.mlr.press
From the sampling of data to the initialisation of parameters, randomness is ubiquitous in
modern Machine Learning practice. Understanding the statistical fluctuations engendered …

Sharp global convergence guarantees for iterative nonconvex optimization with random data

KA Chandrasekher, A Pananjady… - The Annals of …, 2023 - projecteuclid.org
The Annals of Statistics 2023, Vol. 51, No. 1, 179–210. https://doi.org/10.1214/22-AOS2246 …

Precise asymptotic analysis of deep random feature models

D Bosch, A Panahi, B Hassibi - The Thirty Sixth Annual …, 2023 - proceedings.mlr.press
We provide exact asymptotic expressions for the performance of regression by an $L$-layer
deep random feature (RF) model, where the input is mapped through multiple random …

An introduction to machine learning: a perspective from statistical physics

A Decelle - Physica A: Statistical Mechanics and its Applications, 2022 - Elsevier
The recent progress in Machine Learning has opened the door to actual applications of
learning algorithms, but also to new research directions, both in the field of Machine Learning …

Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model

A Bodin, N Macris - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Recent evidence has shown the existence of a so-called double-descent and even triple-
descent behavior for the generalization error of deep-learning models. This important …