Spectral universality in regularized linear regression with nearly deterministic sensing matrices

R Dudeja, S Sen, YM Lu - IEEE Transactions on Information …, 2024 - ieeexplore.ieee.org
It has been observed that the performance of many high-dimensional estimation problems
is universal with respect to the underlying sensing (or design) matrices. Specifically, matrices …

Classification of heavy-tailed features in high dimensions: a superstatistical approach

U Adomaityte, G Sicuro, P Vivo - Advances in Neural …, 2023 - proceedings.neurips.cc
We characterise the learning of a mixture of two clouds of data points with generic centroids
via empirical risk minimisation in the high dimensional regime, under the assumptions of …

High-dimensional robust regression under heavy-tailed data: Asymptotics and universality

U Adomaityte, L Defilippis, B Loureiro… - Journal of Statistical …, 2024 - iopscience.iop.org
We investigate the high-dimensional properties of robust regression estimators in the
presence of heavy-tailed contamination of both the covariates and response functions. In …

High-dimensional learning of narrow neural networks

H Cui - arXiv preprint arXiv:2409.13904, 2024 - arxiv.org
Recent years have been marked by the fast-paced diversification and increasing ubiquity of
machine learning applications. Yet, a firm theoretical understanding of the surprising …

Injectivity of ReLU networks: perspectives from statistical physics

A Maillard, AS Bandeira, D Belius, I Dokmanić… - Applied and …, 2025 - Elsevier
When can the input of a ReLU neural network be inferred from its output? In other words,
when is the network injective? We consider a single layer, $x \mapsto \mathrm{ReLU}(Wx)$, with a random …

Asymptotics of Learning with Deep Structured (Random) Features

D Schröder, D Dmitriev, H Cui, B Loureiro - arXiv preprint arXiv …, 2024 - arxiv.org
For a large class of feature maps we provide a tight asymptotic characterisation of the test
error associated with learning the readout layer, in the high-dimensional limit where the …

Gaussian universality for approximately polynomial functions of high-dimensional data

KH Huang, M Austern, P Orbanz - arXiv preprint arXiv:2403.10711, 2024 - arxiv.org
We establish an invariance principle for polynomial functions of $n$ independent high-
dimensional random vectors, and also show that the obtained rates are nearly optimal. Both …

Fitting an ellipsoid to random points: predictions using the replica method

A Maillard, D Kunisky - IEEE Transactions on Information …, 2024 - ieeexplore.ieee.org
We consider the problem of fitting a centered ellipsoid to $n$ standard Gaussian random
vectors in $\mathbb{R}^d$, as $n, d \to \infty$ with $n/d^2 \to \alpha$. It has been conjectured that this problem is, with high probability …

On How Iterative Magnitude Pruning Discovers Local Receptive Fields in Fully Connected Neural Networks

WT Redman, Z Wang, A Ingrosso, S Goldt - arXiv preprint arXiv …, 2024 - arxiv.org
Since its use in the Lottery Ticket Hypothesis, iterative magnitude pruning (IMP) has become
a popular method for extracting sparse subnetworks that can be trained to high performance …

Finite-size correction and variance of the mutual information of random linear estimation with non-Gaussian priors: A replica calculation

TG Tsironis, AL Moustakas - Physical Review E, 2024 - APS
Random linear vector channels have been known to increase the transmission of
information in several communications systems. For Gaussian priors, the statistics of a key …