Statistical learning with sparsity
T Hastie, R Tibshirani… - Monographs on statistics …, 2015 - api.taylorfrancis.com
In this monograph, we have attempted to summarize the actively developing field of
statistical learning with sparsity. A sparse statistical model is one having only a small …
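As a concrete illustration of what the monograph above means by a sparse statistical model, here is a minimal sketch (not taken from the book) of fitting a lasso, the canonical example it builds on: the L1 penalty drives most coefficients to exactly zero. The data, dimensions, and alpha value are illustrative assumptions.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k = 100, 50, 5                      # n samples, p features, only k truly nonzero
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:k] = 2.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)

model = Lasso(alpha=0.1).fit(X, y)        # alpha controls how aggressively coefficients are zeroed
print("nonzero coefficients:", int(np.sum(model.coef_ != 0)))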
Low-rank matrix factorization for deep neural network training with high-dimensional output targets
While Deep Neural Networks (DNNs) have achieved tremendous success for large
vocabulary continuous speech recognition (LVCSR) tasks, training of these networks is …
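A hedged sketch of the general technique the title points to, not necessarily the paper's exact recipe: replace the large hidden-to-output weight matrix of a DNN with a product of two low-rank factors, cutting the output-layer parameter count from h*k to h*r + r*k when r is much smaller than h and k. The layer sizes and rank below are illustrative assumptions, written with PyTorch.

import torch.nn as nn

h, k, r = 1024, 10000, 256    # hidden size, output targets, assumed rank

dense_output = nn.Linear(h, k)                 # h*k + k parameters
low_rank_output = nn.Sequential(
    nn.Linear(h, r, bias=False),               # h*r parameters
    nn.Linear(r, k),                           # r*k + k parameters
)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense_output), "parameters vs", count(low_rank_output))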
Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
PL Loh, MJ Wainwright - Advances in Neural Information …, 2013 - proceedings.neurips.cc
We establish theoretical results concerning all local optima of various regularized M-
estimators, where both loss and penalty functions are allowed to be nonconvex. Our results …
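For a sense of the objects this kind of result covers, here is a hedged sketch of one standard nonconvex penalty from this literature, the minimax concave penalty (MCP): it matches the L1 penalty near zero but levels off, which reduces bias on large coefficients at the cost of convexity. The lambda and gamma values are arbitrary illustrative choices.

import numpy as np

def mcp(t, lam=1.0, gamma=3.0):
    """Minimax concave penalty, applied elementwise to the coefficients t."""
    t = np.abs(t)
    return np.where(t <= gamma * lam,
                    lam * t - t**2 / (2 * gamma),   # quadratic taper near zero
                    gamma * lam**2 / 2)             # constant beyond gamma*lam

print(mcp(np.array([0.0, 0.5, 2.0, 10.0])))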
A Unified Framework for High-Dimensional Analysis of M-Estimators with Decomposable Regularizers
SN Negahban, P Ravikumar, MJ Wainwright, B Yu - Statistical Science, 2012, Vol. 27, No. 4, 538–557, DOI: 10.1214/12-STS400
Training data influence analysis and estimation: A survey
Z Hammoudeh, D Lowd - Machine Learning, 2024 - Springer
Good models require good training data. For overparameterized deep models, the causal
relationship between training data and model predictions is increasingly opaque and poorly …
An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems
KC Toh, S Yun - Pacific Journal of Optimization, 2010 - researchgate.net
The affine rank minimization problem, which consists of finding a matrix of minimum rank
subject to linear equality constraints, has been proposed in many areas of engineering and …
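A hedged sketch of the key building block in proximal gradient methods for nuclear norm regularization, not the authors' full accelerated algorithm: the proximal operator of tau times the nuclear norm is soft-thresholding of the singular values (singular value thresholding). The test matrix and tau below are illustrative.

import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * nuclear norm, evaluated at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)        # shrink singular values toward zero
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 40))   # rank-8 signal
M_noisy = M + 0.1 * rng.standard_normal(M.shape)
print("rank after thresholding:", np.linalg.matrix_rank(svt(M_noisy, tau=2.0)))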
partykit: A modular toolkit for recursive partytioning in R
The R package partykit provides a flexible toolkit for learning, representing, summarizing,
and visualizing a wide range of tree-structured regression and classification models. The …
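partykit itself is an R package; as a rough Python stand-in (an assumption, not partykit's API), the sketch below shows what fitting and printing a small tree-structured regression model looks like with scikit-learn.

import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(3)
X = rng.uniform(0, 10, size=(200, 2))
y = np.where(X[:, 0] > 5, 3.0, -3.0) + 0.3 * rng.standard_normal(200)

tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["x1", "x2"]))   # text summary of the fitted splits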
Estimation of (near) low-rank matrices with noise and high-dimensional scaling
S Negahban, MJ Wainwright - 2011 - projecteuclid.org
The Annals of Statistics, 2011, Vol. 39, No. 2, 1069–1097, DOI: 10.1214/10-AOS850
Fast global convergence rates of gradient methods for high-dimensional statistical recovery
A Agarwal, S Negahban… - Advances in Neural …, 2010 - proceedings.neurips.cc
Many statistical M-estimators are based on convex optimization problems formed by the
weighted sum of a loss function with a norm-based regularizer. We analyze the convergence …
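A hedged sketch of the generic setup this abstract describes: minimizing a loss plus a norm-based regularizer by a composite (proximal) gradient method, here squared loss with an L1 penalty, where each iteration takes a gradient step on the loss and then soft-thresholds. Step size, penalty level, and data are illustrative assumptions.

import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_gradient_lasso(X, y, lam, step, iters=500):
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ beta - y) / len(y)            # gradient of the squared loss
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 100))
y = X[:, :5] @ np.ones(5) + 0.1 * rng.standard_normal(200)   # 5 true signal coordinates
beta_hat = prox_gradient_lasso(X, y, lam=0.1, step=0.2)
print("nonzeros in the estimate:", int(np.sum(np.abs(beta_hat) > 1e-8)))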
Sparse multivariate regression with covariance estimation
We propose a procedure for constructing a sparse estimator of a multivariate regression
coefficient matrix that accounts for correlation of the response variables. This method, which …
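Not the paper's estimator, which additionally models correlation among the responses: as a simpler, related sketch, scikit-learn's MultiTaskLasso fits one sparse coefficient matrix for several responses jointly, zeroing out whole rows of predictors. Data and alpha are illustrative assumptions.

import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(2)
n, p, q = 150, 30, 4                        # samples, predictors, responses
X = rng.standard_normal((n, p))
B = np.zeros((p, q))
B[:3, :] = 1.0                              # only the first 3 predictors matter, for all responses
Y = X @ B + 0.2 * rng.standard_normal((n, q))

fit = MultiTaskLasso(alpha=0.1).fit(X, Y)
print("predictor rows kept:", int(np.sum(np.any(fit.coef_ != 0, axis=0))))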