Cleaning large correlation matrices: tools from random matrix theory

J Bun, JP Bouchaud, M Potters - Physics Reports, 2017 - Elsevier
This review covers recent results concerning the estimation of large covariance matrices
using tools from Random Matrix Theory (RMT). We introduce several RMT methods and …
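
A minimal sketch of the eigenvalue-clipping idea this line of work builds on, assuming standardized data and the Marchenko–Pastur upper edge as the noise threshold; the function name and the trace-preserving averaging rule are illustrative choices, not the paper's specific estimator:

```python
import numpy as np

def clip_correlation(X):
    """Eigenvalue clipping: keep eigenvalues above the Marchenko-Pastur
    edge, average the rest (trace-preserving). X is an (n, p) data matrix."""
    n, p = X.shape
    q = p / n
    # Sample correlation matrix of standardized columns.
    Z = (X - X.mean(0)) / X.std(0)
    C = Z.T @ Z / n
    lam, V = np.linalg.eigh(C)
    edge = (1 + np.sqrt(q)) ** 2           # MP upper edge for pure noise
    noise = lam <= edge
    lam_clean = lam.copy()
    if noise.any():
        lam_clean[noise] = lam[noise].mean()   # preserve the trace
    C_clean = (V * lam_clean) @ V.T
    # Re-normalize the diagonal to 1 so the result stays a correlation matrix.
    d = np.sqrt(np.diag(C_clean))
    return C_clean / np.outer(d, d)

# Usage: noise-only data should be cleaned toward the identity.
rng = np.random.default_rng(0)
C_hat = clip_correlation(rng.standard_normal((500, 100)))
```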

Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning

CH Martin, MW Mahoney - Journal of Machine Learning Research, 2021 - jmlr.org
Random Matrix Theory (RMT) is applied to analyze the weight matrices of Deep Neural
Networks (DNNs), including both production quality, pre-trained models such as AlexNet …
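
A rough sketch of the kind of spectral diagnostic this paper applies: compute the empirical spectral density (ESD) of a layer's correlation matrix W^T W / N and estimate a power-law tail exponent. The Gaussian W below is a stand-in for a trained layer, and the Hill estimator is one crude choice of tail fit, not the paper's fitting procedure:

```python
import numpy as np

def esd(W):
    """Empirical spectral density of a weight matrix W (out x in):
    eigenvalues of the correlation matrix X = W^T W / N."""
    N = W.shape[0]
    return np.linalg.eigvalsh(W.T @ W / N)

def hill_alpha(lam, k=50):
    """Crude Hill estimator of the power-law tail exponent of the ESD;
    heavier tails (smaller alpha) are read as stronger implicit regularization."""
    tail = np.sort(lam)[-k:]
    return 1 + k / np.sum(np.log(tail / tail[0]))

rng = np.random.default_rng(1)
W = rng.standard_normal((512, 256))       # stand-in for a trained layer
lam = esd(W)
print(f"max eigenvalue {lam.max():.2f}, tail exponent ~{hill_alpha(lam):.2f}")
```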

An introduction to recent advances in high/infinite dimensional statistics

A Goia, P Vieu - Journal of Multivariate Analysis, 2016 - Elsevier

Sample covariance matrices and high-dimensional data analysis

J Yao, S Zheng, ZD Bai - Cambridge UP, New York, 2015 - researchgate.net
In a multivariate analysis problem, we are given a sample x_1, x_2, …, x_n of random
observations of dimension p. Statistical methods such as Principal Components Analysis …
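
A short illustration of the high-dimensional phenomenon this book studies: when p/n is not small, sample covariance eigenvalues spread away from the population ones, filling the Marchenko–Pastur interval even when the true covariance is the identity. The dimensions below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 400, 200                       # aspect ratio q = p/n = 0.5
X = rng.standard_normal((n, p))       # population covariance = I_p
S = X.T @ X / n                       # sample covariance matrix
lam = np.linalg.eigvalsh(S)
q = p / n
print(f"sample eigenvalues in [{lam.min():.2f}, {lam.max():.2f}]")
print(f"MP support          [{(1 - np.sqrt(q))**2:.2f}, {(1 + np.sqrt(q))**2:.2f}]")
```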

The two-to-infinity norm and singular subspace geometry with applications to high-dimensional statistics

J Cape, M Tang, CE Priebe - 2019 - projecteuclid.org
The singular value matrix decomposition plays a ubiquitous role throughout statistics and
related fields. Myriad applications including clustering, classification, and dimensionality …
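
For concreteness, the two-to-infinity norm of a matrix is its maximum Euclidean row norm, ||A||_{2→∞} = max_i ||A_i||_2. A minimal sketch of why it is informative for singular subspaces: for an orthonormal basis U_k of a leading subspace, ||U_k||_{2→∞} measures coherence and can be far smaller than the spectral norm ||U_k||_2 = 1 (the matrix sizes below are arbitrary):

```python
import numpy as np

def two_to_inf(A):
    """Two-to-infinity norm: the maximum Euclidean row norm,
    ||A||_{2->inf} = max_i ||A_i||_2."""
    return np.linalg.norm(A, axis=1).max()

rng = np.random.default_rng(3)
M = rng.standard_normal((1000, 50))
U, _, _ = np.linalg.svd(M, full_matrices=False)
Uk = U[:, :5]                          # top-5 left singular subspace
print(f"||U_k||_2->inf = {two_to_inf(Uk):.3f}  vs  ||U_k||_2 = 1")
```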

A theory of non-linear feature learning with one gradient step in two-layer neural networks

B Moniri, D Lee, H Hassani, E Dobriban - arXiv preprint arXiv:2310.07891, 2023 - arxiv.org
Feature learning is thought to be one of the fundamental reasons for the success of deep
neural networks. It is rigorously known that in two-layer fully-connected neural networks …
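
A toy experiment in the spirit of this paper's setting: take one gradient step on the first-layer weights of a two-layer network (second layer frozen), then refit the second layer by ridge, and compare against the pure random-features baseline. The single-index target, step size, widths, and regularization are arbitrary stand-ins, not the paper's scaling regime:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, m, eta = 2000, 50, 400, 5.0
beta = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = np.tanh(X @ beta)                          # hypothetical single-index target

W = rng.standard_normal((m, d)) / np.sqrt(d)   # random first-layer weights
a = rng.choice([-1.0, 1.0], m) / np.sqrt(m)    # second layer, frozen for the step

def fit_ridge(Phi, y, lam=1e-3):
    """Refit the second layer on features Phi by ridge regression."""
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

def insample_mse(W):
    Phi = np.tanh(X @ W.T)
    return np.mean((Phi @ fit_ridge(Phi, y) - y) ** 2)

# One gradient step on W (squared loss, second layer a frozen).
Phi = np.tanh(X @ W.T)
r = Phi @ a - y
grad = ((r[:, None] * a[None, :]) * (1 - Phi**2)).T @ X / n
W1 = W - eta * grad

print(f"random features MSE {insample_mse(W):.4f} -> one-step MSE {insample_mse(W1):.4f}")
```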

Asymptotic independence of spiked eigenvalues and linear spectral statistics for large sample covariance matrices

Z Zhang, S Zheng, G Pan, PS Zhong - The Annals of Statistics, Vol. 50, No. 4, 2205–2230, 2022 - projecteuclid.org
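
A small Monte Carlo in the spirit of the paper's result: under a spiked covariance model, the spiked (largest) sample eigenvalue and a linear spectral statistic of the bulk should be nearly uncorrelated. Using the sum of log bulk eigenvalues as the LSS is one illustrative choice, not the paper's exact statistic:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, reps = 800, 200, 200
spike = 5.0                                # population spike above the BBP threshold
tops, lss = [], []
for _ in range(reps):
    X = rng.standard_normal((n, p))
    X[:, 0] *= np.sqrt(spike)              # rank-one spiked covariance
    lam = np.linalg.eigvalsh(X.T @ X / n)  # ascending eigenvalues
    tops.append(lam[-1])                   # spiked eigenvalue
    lss.append(np.sum(np.log(lam[:-1])))   # linear spectral statistic of the bulk
print(f"empirical corr(top eigenvalue, LSS) = {np.corrcoef(tops, lss)[0, 1]:+.3f}")
```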

Large sample techniques for statistics

J Jiang - 2010 - Springer
A quote from the preface of the first edition: “Large-sample techniques provide solutions to
many practical problems; they simplify our solutions to difficult, sometimes intractable …

What causes the test error? Going beyond bias-variance via ANOVA

L Lin, E Dobriban - Journal of Machine Learning Research, 2021 - jmlr.org
Modern machine learning methods are often overparametrized, allowing adaptation to the
data at a fine level. This can seem puzzling; in the worst case, such models do not need to …
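
A toy version of the variance decomposition this paper formalizes: treat the training sample and the algorithm's initialization as two random factors, and split the variance of a prediction into main effects plus an interaction, as in a balanced two-way ANOVA. The random-features ridge estimator below is a stand-in, and the seed offset is just to keep the two randomness sources distinct:

```python
import numpy as np

rng = np.random.default_rng(6)
d, n, m = 20, 50, 100
beta = rng.standard_normal(d) / np.sqrt(d)
x_test = rng.standard_normal(d)

def predict(seed_data, seed_init):
    """Random-features ridge fit: randomness from the training sample
    (seed_data) and from the feature initialization (seed_init)."""
    rd = np.random.default_rng(seed_data)
    ri = np.random.default_rng(10_000 + seed_init)  # offset: streams never coincide
    X = rd.standard_normal((n, d))
    y = X @ beta + 0.1 * rd.standard_normal(n)
    W = ri.standard_normal((m, d)) / np.sqrt(d)
    Phi = np.tanh(X @ W.T)
    a = np.linalg.solve(Phi.T @ Phi + 1e-2 * np.eye(m), Phi.T @ y)
    return np.tanh(W @ x_test) @ a

# Balanced two-factor ANOVA of the prediction at one test point.
K = 20
F = np.array([[predict(i, j) for j in range(K)] for i in range(K)])
total = F.var()
v_data = F.mean(axis=1).var()             # main effect of the training sample
v_init = F.mean(axis=0).var()             # main effect of the initialization
print(f"total {total:.4f} = data {v_data:.4f} + init {v_init:.4f} "
      f"+ interaction {total - v_data - v_init:.4f}")
```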

Ridge regression: Structure, cross-validation, and sketching

S Liu, E Dobriban - arXiv preprint arXiv:1910.02373, 2019 - arxiv.org
We study the following three fundamental problems about ridge regression: (1) what is the
structure of the estimator? (2) how to correctly use cross-validation to choose the …
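
On the cross-validation question, a standard identity worth having in hand: for ridge regression the exact leave-one-out CV error can be computed from a single fit via the hat matrix, with no refitting. A minimal sketch (the data and regularization grid below are arbitrary):

```python
import numpy as np

def ridge_loocv(X, y, lam):
    """Exact leave-one-out CV error for ridge via the hat-matrix shortcut:
    err_i = (y_i - yhat_i) / (1 - H_ii)."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - H @ y
    return np.mean((resid / (1 - np.diag(H))) ** 2)

rng = np.random.default_rng(7)
n, p = 200, 50
X = rng.standard_normal((n, p))
y = X @ (rng.standard_normal(p) / np.sqrt(p)) + 0.5 * rng.standard_normal(n)
lams = np.logspace(-2, 2, 9)
best = min(lams, key=lambda lm: ridge_loocv(X, y, lm))
print(f"LOOCV-chosen lambda = {best:.3g}")
```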