A farewell to the bias-variance tradeoff? An overview of the theory of overparameterized machine learning
Y Dar, V Muthukumar, RG Baraniuk - arXiv preprint arXiv:2109.02355, 2021 - arxiv.org
The rapid recent progress in machine learning (ML) has raised a number of scientific
questions that challenge the longstanding dogma of the field. One of the most important …
Modeling the influence of data structure on learning in neural networks: The hidden manifold model
Understanding the reasons for the success of deep neural networks trained using stochastic
gradient-based methods is a key open problem for the nascent theory of deep learning. The …
A model of double descent for high-dimensional binary linear classification
We consider a model for logistic regression where only a subset of the features is used
for training a linear classifier over training samples. The classifier is obtained by running …
Label-imbalanced and group-sensitive classification under overparameterization
The goal in label-imbalanced and group-sensitive classification is to optimize relevant
metrics such as balanced error and equal opportunity. Classical methods, such as weighted …
Classifying high-dimensional gaussian mixtures: Where kernel methods fail and neural networks succeed
A recent series of theoretical works showed that the dynamics of neural networks with a
certain initialisation are well-captured by kernel methods. Concurrent empirical work …
Neural networks trained with SGD learn distributions of increasing complexity
The uncanny ability of over-parameterised neural networks to generalise well has been
explained using various "simplicity biases". These theories postulate that neural networks …
Learning gaussian mixtures with generalized linear models: Precise asymptotics in high-dimensions
Generalised linear models for multi-class classification problems are one of the fundamental
building blocks of modern machine learning tasks. In this manuscript, we characterise the …
Are Gaussian data all you need? The extents and limits of universality in high-dimensional generalized linear estimation
In this manuscript we consider the problem of generalized linear estimation on Gaussian
mixture data with labels given by a single-index model. Our first result is a sharp asymptotic …
Precise statistical analysis of classification accuracies for adversarial training
A Javanmard, M Soltanolkotabi - The Annals of Statistics, 2022 - projecteuclid.org
The Annals of Statistics 2022, Vol. 50, No. 4, 2127–2156. https://doi.org/10.1214/22-AOS2180 …
Universality laws for gaussian mixtures in generalized linear models
A recent line of work in high-dimensional statistics working under the Gaussian mixture
hypothesis has led to a number of results in the context of empirical risk minimization …