Spectral universality in regularized linear regression with nearly deterministic sensing matrices
It has been observed that the performance of many high-dimensional estimation problems
is universal with respect to the underlying sensing (or design) matrices. Specifically, matrices …
Classification of heavy-tailed features in high dimensions: a superstatistical approach
We characterise the learning of a mixture of two clouds of data points with generic centroids
via empirical risk minimisation in the high-dimensional regime, under the assumptions of …
High-dimensional robust regression under heavy-tailed data: Asymptotics and universality
U Adomaityte, L Defilippis, B Loureiro… - Journal of Statistical …, 2024 - iopscience.iop.org
We investigate the high-dimensional properties of robust regression estimators in the
presence of heavy-tailed contamination of both the covariates and response functions. In …
High-dimensional learning of narrow neural networks
H Cui - arXiv preprint arXiv:2409.13904, 2024 - arxiv.org
Recent years have been marked by the fast-paced diversification and increasing ubiquity of
machine learning applications. Yet, a firm theoretical understanding of the surprising …
Injectivity of ReLU networks: perspectives from statistical physics
When can the input of a ReLU neural network be inferred from its output? In other words,
when is the network injective? We consider a single layer, x ↦ ReLU(Wx), with a random …
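The injectivity question above can be illustrated with a small numerical sketch (a hypothetical toy, not the paper's analysis): whenever an input x0 satisfies Wx0 < 0 coordinate-wise, both x0 and 2·x0 map to the zero vector under x ↦ ReLU(Wx), so the layer cannot be injective. Dimensions and seed below are arbitrary choices.

```python
import numpy as np

# Toy demonstration of non-injectivity of a single ReLU layer x -> ReLU(W x).
rng = np.random.default_rng(0)
n, m = 3, 4                         # input and output dimensions (arbitrary)
relu = lambda z: np.maximum(z, 0.0)

# Construct a generic W whose rows are all negatively aligned with some x0,
# by flipping row signs: then W @ x0 < 0 in every coordinate.
x0 = rng.standard_normal(n)
W = rng.standard_normal((m, n))
W *= -np.sign(W @ x0)[:, None]

# Two distinct inputs on the same negative ray collapse to the same output.
y1 = relu(W @ x0)
y2 = relu(W @ (2 * x0))
assert np.allclose(y1, 0.0) and np.allclose(y1, y2)  # collision: not injective
```

The statistical-physics question in the paper is then at what aspect ratio m/n such collision regions disappear for a random W, making the layer injective with high probability.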
Asymptotics of Learning with Deep Structured (Random) Features
For a large class of feature maps we provide a tight asymptotic characterisation of the test
error associated with learning the readout layer, in the high-dimensional limit where the …
Gaussian universality for approximately polynomial functions of high-dimensional data
We establish an invariance principle for polynomial functions of $n$ independent high-dimensional
random vectors, and also show that the obtained rates are nearly optimal. Both …
Fitting an ellipsoid to random points: predictions using the replica method
A Maillard, D Kunisky - IEEE Transactions on Information …, 2024 - ieeexplore.ieee.org
We consider the problem of fitting a centered ellipsoid to $n$ standard Gaussian random
vectors in $\mathbb{R}^d$, as $n, d \to \infty$ with $n/d^2 \to \alpha$. It has been conjectured that this problem is, with high probability …
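A parameter-counting sketch clarifies why the problem is feasible for small n (a minimal illustration under assumed notation, not the paper's replica calculation): fitting an ellipsoid means finding a symmetric matrix S with x_i^T S x_i = 1 for every point, and each point imposes one linear constraint on the d(d+1)/2 free entries of S, so for n below that count the linear system is generically solvable (before asking whether S is positive semidefinite).

```python
import numpy as np

# Ellipsoid-fitting feasibility by parameter counting: solve x_i^T S x_i = 1
# for a symmetric S, given n standard Gaussian points x_i in R^d.
rng = np.random.default_rng(1)
d, n = 6, 15                        # n < d*(d+1)//2 = 21 free entries of S
X = rng.standard_normal((n, d))

# Row i of A is the flattened outer product x_i x_i^T, so A @ vec(S) = 1.
A = np.einsum("ni,nj->nij", X, X).reshape(n, d * d)
s, *_ = np.linalg.lstsq(A, np.ones(n), rcond=None)
S = s.reshape(d, d)
S = (S + S.T) / 2                   # the quadratic form only sees the symmetric part

# Every constraint is met essentially exactly in this underconstrained regime.
residual = np.max(np.abs(np.einsum("ni,ij,nj->n", X, S, X) - 1.0))
assert residual < 1e-8
```

The hard regime studied in the paper is the scaling n ~ d^2, where this naive counting argument no longer settles feasibility.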
On How Iterative Magnitude Pruning Discovers Local Receptive Fields in Fully Connected Neural Networks
Since its use in the Lottery Ticket Hypothesis, iterative magnitude pruning (IMP) has become
a popular method for extracting sparse subnetworks that can be trained to high performance …
Finite-size correction and variance of the mutual information of random linear estimation with non-Gaussian priors: A replica calculation
TG Tsironis, AL Moustakas - Physical Review E, 2024 - APS
Random linear vector channels have been known to increase the transmission of
information in several communications systems. For Gaussian priors, the statistics of a key …