Universality laws for high-dimensional learning with random features
We prove a universality theorem for learning with random features. Our result shows that, in
terms of training and generalization errors, a random feature model with a nonlinear …
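For readers unfamiliar with the setting of the entry above, a minimal random-feature ridge regression can be sketched as follows (the dimensions, ReLU nonlinearity, linear teacher, and ridge penalty are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 200, 50, 100           # samples, input dimension, number of random features
lam = 1e-2                        # ridge regularization (assumed)

# Synthetic data from a noisy linear teacher (illustrative assumption)
X = rng.standard_normal((n, d)) / np.sqrt(d)
w_star = rng.standard_normal(d)
y = X @ w_star + 0.1 * rng.standard_normal(n)

# Random feature map: fixed Gaussian weights followed by a nonlinearity (here ReLU)
F = rng.standard_normal((d, p)) / np.sqrt(d)
Z = np.maximum(X @ F, 0.0)        # n x p matrix of nonlinear random features

# Ridge regression on the random features (only the readout weights are trained)
a = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

train_err = np.mean((Z @ a - y) ** 2)
```

Universality results of the kind stated above concern the high-dimensional limit where n, d, and p grow proportionally; this sketch only fixes the finite-size objects involved.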
Neural networks as kernel learners: The silent alignment effect
Neural networks in the lazy training regime converge to kernel machines. Can neural
networks in the rich feature learning regime learn a kernel machine with a data-dependent …
Learning gaussian mixtures with generalized linear models: Precise asymptotics in high-dimensions
Generalised linear models for multi-class classification problems are one of the fundamental
building blocks of modern machine learning tasks. In this manuscript, we characterise the …
Generalization error rates in kernel regression: The crossover from the noiseless to noisy regime
In this manuscript we consider Kernel Ridge Regression (KRR) under the Gaussian design.
Exponents for the decay of the excess generalization error of KRR have been reported in …
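The KRR estimator discussed in the entry above has a compact closed form; a small sketch under an assumed RBF kernel and toy Gaussian inputs (not the paper's precise spectral setup):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 100, 20, 1e-3         # samples, input dimension, ridge parameter (assumed)

X = rng.standard_normal((n, d))   # Gaussian design, as in the entry above
y = np.tanh(X[:, 0]) + 0.05 * rng.standard_normal(n)   # toy noisy target (assumption)

# RBF kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2d))
sq = np.sum(X ** 2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * d))

# KRR dual coefficients: alpha = (K + lam I)^{-1} y
alpha = np.linalg.solve(K + lam * np.eye(n), y)

# Prediction at a fresh test point
x_new = rng.standard_normal(d)
k_new = np.exp(-np.sum((X - x_new) ** 2, axis=1) / (2 * d))
y_pred = k_new @ alpha
```

The decay exponents the entry refers to describe how the excess generalization error of this estimator scales with n; the sketch only fixes the estimator itself.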
Graph-based approximate message passing iterations
C Gerbelot, R Berthier - Information and Inference: A Journal of …, 2023 - academic.oup.com
Approximate message passing (AMP) algorithms have become an important element of high-
dimensional statistical inference, mostly due to their adaptability and concentration …
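A prototypical AMP iteration of the kind surveyed above can be sketched for noiseless sparse recovery with soft thresholding (the model, threshold rule, and dimensions are illustrative assumptions, not the paper's graph-based framework):

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding denoiser."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(2)
n, m = 500, 250                                        # signal dimension, measurements
x0 = rng.standard_normal(n) * (rng.random(n) < 0.1)    # 10%-sparse signal (assumption)
A = rng.standard_normal((m, n)) / np.sqrt(m)           # i.i.d. Gaussian sensing matrix
y = A @ x0                                             # noiseless measurements

x, z = np.zeros(n), y.copy()
for _ in range(30):
    tau = np.linalg.norm(z) / np.sqrt(m)               # empirical residual scale as threshold
    x_new = soft_threshold(x + A.T @ z, tau)
    # Onsager correction term, proportional to the fraction of active coordinates;
    # this is the term that distinguishes AMP from plain iterative thresholding
    onsager = (n / m) * np.mean(np.abs(x_new) > 0)
    z = y - A @ x_new + onsager * z
    x = x_new

mse = np.mean((x - x0) ** 2)
```

The concentration properties mentioned in the entry above are what make the per-iteration behavior of such recursions exactly trackable via state evolution in the high-dimensional limit.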
Fluctuations, bias, variance & ensemble of learners: Exact asymptotics for convex losses in high-dimension
From the sampling of data to the initialisation of parameters, randomness is ubiquitous in
modern Machine Learning practice. Understanding the statistical fluctuations engendered …
Sharp global convergence guarantees for iterative nonconvex optimization with random data
KA Chandrasekher, A Pananjady… - The Annals of …, 2023 - projecteuclid.org
The Annals of Statistics, 2023, Vol. 51, No. 1, 179–210. https://doi.org/10.1214/22-AOS2246 …
Precise asymptotic analysis of deep random feature models
We provide exact asymptotic expressions for the performance of regression by an $ L-$
layer deep random feature (RF) model, where the input is mapped through multiple random …
An introduction to machine learning: a perspective from statistical physics
A Decelle - Physica A: Statistical Mechanics and its Applications, 2022 - Elsevier
The recent progresses in Machine Learning opened the door to actual applications of
learning algorithms but also to new research directions both in the field of Machine Learning …
Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model
Recent evidence has shown the existence of a so-called double-descent and even triple-
descent behavior for the generalization error of deep-learning models. This important …