Sketching as a tool for numerical linear algebra
DP Woodruff - … and Trends® in Theoretical Computer Science, 2014 - nowpublishers.com
This survey highlights the recent advances in algorithms for numerical linear algebra that
have come from the technique of linear sketching, whereby given a matrix, one first …
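The sketch-and-solve idea the survey describes can be illustrated with a minimal example, assuming a Gaussian sketching matrix (one of many sketch families the survey covers): compress a tall least-squares problem with a random m × n matrix S and solve the much smaller sketched problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tall least-squares instance: minimize ||Ax - b||_2 over x.
n, d = 2000, 20
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

# Linear sketch: draw a random m x n matrix S with m << n and solve
# the much smaller problem min ||S(Ax - b)||_2 instead.
m = 200
S = rng.standard_normal((m, n)) / np.sqrt(m)

x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

# The sketched solution's residual is close to the optimal residual.
rel_err = (np.linalg.norm(A @ x_sketch - b) /
           np.linalg.norm(A @ x_exact - b))
print(rel_err)
```

The sketched problem has m rows instead of n, yet its solution's residual is a (1 + ε)-approximation with high probability.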
Low-rank approximation and regression in input sparsity time
KL Clarkson, DP Woodruff - Journal of the ACM (JACM), 2017 - dl.acm.org
We design a new distribution over m × n matrices S so that, for any fixed n × d matrix A of rank r, with probability at least 9/10, ∥SAx∥₂ = (1 ± ε)∥Ax∥₂ simultaneously for all x ∈ R^d …
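The input-sparsity-time property comes from sketching matrices of CountSketch type: each column of S has a single ±1 entry, so S @ A can be applied in O(nnz(A)) time. A small illustrative check of the subspace-embedding behavior (parameters chosen for the demo, not from the paper's bounds):

```python
import numpy as np

rng = np.random.default_rng(1)

n, d, m = 5000, 10, 400

# CountSketch-style matrix: each of the n columns has a single +/-1
# entry in a uniformly random row, so applying it costs O(nnz(A)).
rows = rng.integers(0, m, size=n)
signs = rng.choice([-1.0, 1.0], size=n)

def apply_sketch(A):
    """Compute S @ A without materializing S."""
    SA = np.zeros((m, A.shape[1]))
    np.add.at(SA, rows, signs[:, None] * A)
    return SA

A = rng.standard_normal((n, d))
SA = apply_sketch(A)

# Check ||SAx|| ~ (1 +/- eps) ||Ax|| for a few random x.
ratios = []
for _ in range(5):
    x = rng.standard_normal(d)
    ratios.append(np.linalg.norm(SA @ x) / np.linalg.norm(A @ x))
print(ratios)
```

Note that `apply_sketch` never builds the m × n matrix S; the row indices and signs fully determine it.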
OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings
An oblivious subspace embedding (OSE), given some parameters ε, d, is a distribution D over matrices Π ∈ R^{m×n} such that for any linear subspace W ⊆ R^n with dim(W) = d, P_{Π∼D}(∀ …
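An OSNAP-style matrix puts exactly s nonzero entries of value ±1/√s in each column. A minimal sketch of the definition (parameters are illustrative): draw Π, pick a d-dimensional subspace via an orthonormal basis U, and check that all singular values of ΠU are near 1, which is equivalent to Π embedding the subspace.

```python
import numpy as np

rng = np.random.default_rng(2)

n, d, m, s = 3000, 8, 300, 4

# OSNAP-style sparse embedding: each column of Pi has exactly s nonzero
# entries, each equal to +/- 1/sqrt(s), in s distinct random rows.
Pi = np.zeros((m, n))
for j in range(n):
    idx = rng.choice(m, size=s, replace=False)
    Pi[idx, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

# A d-dimensional subspace of R^n, represented by an orthonormal basis U.
U, _ = np.linalg.qr(rng.standard_normal((n, d)))

# All singular values of Pi @ U near 1  <=>  Pi embeds the subspace.
sv = np.linalg.svd(Pi @ U, compute_uv=False)
print(sv.min(), sv.max())
```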
A framework for Bayesian optimization in embedded subspaces
A Nayebi, A Munteanu… - … Conference on Machine …, 2019 - proceedings.mlr.press
We present a theoretically founded approach for high-dimensional Bayesian optimization
based on low-dimensional subspace embeddings. We prove that the error in the Gaussian …
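The core idea, stripped of the Gaussian-process machinery, is to search a low-dimensional space and lift each candidate through a random embedding before evaluating the high-dimensional objective. In this toy sketch, plain random search stands in for the actual Bayesian-optimization loop, and the objective and embedding are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy objective on R^D that effectively depends on only two coordinates.
D = 100
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

# Embedded-subspace idea: search over low-dimensional z and evaluate
# f at the lifted point x = B @ z, with B a random D x d_low matrix.
d_low = 4
B = rng.standard_normal((D, d_low))

best_val = np.inf
for _ in range(2000):
    z = rng.uniform(-2.0, 2.0, size=d_low)
    best_val = min(best_val, f(B @ z))
print(best_val)
```

Because the objective has low effective dimension, optimizing over the 4-dimensional embedded space already drives the value near the optimum without ever searching R^100 directly.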
Sparser Johnson-Lindenstrauss transforms
We give two different and simple constructions for dimensionality reduction in ℓ₂ via linear mappings that are sparse: only an O(ε)-fraction of entries in each column of our embedding …
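A simplified version of such a sparse embedding (the block-structured details of the actual constructions are omitted): each column gets s nonzeros of value ±1/√s, with s a small fraction of the m rows, and norms of fixed vectors are still preserved.

```python
import numpy as np

rng = np.random.default_rng(4)

# s/m = 1/16: only a small fraction of each column is nonzero.
n, m, s = 10000, 256, 16

# Sparse JL matrix: s entries of +/- 1/sqrt(s) per column.
Phi = np.zeros((m, n))
for j in range(n):
    idx = rng.choice(m, size=s, replace=False)
    Phi[idx, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

# Norms of a few fixed vectors are preserved up to small distortion.
X = rng.standard_normal((5, n))
ratios = np.linalg.norm(X @ Phi.T, axis=1) / np.linalg.norm(X, axis=1)
print(ratios)
```

Sparsity matters because applying Φ to a vector costs O(s · n) instead of O(m · n), and only O(s · nnz(x)) for sparse inputs.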
Training multi-layer over-parametrized neural network in subquadratic time
We consider the problem of training a multi-layer over-parametrized neural network to
minimize the empirical risk induced by a loss function. In the typical setting of over …
Oblivious dimension reduction for k-means: beyond subspaces and the Johnson-Lindenstrauss lemma
We show that for n points in d-dimensional Euclidean space, a data-oblivious random projection of the columns onto m ∈ O((log k + log log n) ε⁻⁶ log(1/ε)) dimensions is sufficient to …
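The guarantee is that the k-means cost of every clustering is preserved after projecting, so the target dimension can depend on k rather than d. A small numerical illustration with a plain Gaussian projection (any oblivious projection of the kind the result covers would do): the cost of a fixed partition barely changes after projecting from 200 to 20 dimensions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Clustered data: n points in R^d around k centers.
n, d, k = 600, 200, 3
centers = rng.standard_normal((k, d)) * 5
labels = rng.integers(0, k, size=n)
X = centers[labels] + rng.standard_normal((n, d))

def kmeans_cost(P, labels, k):
    """Sum of squared distances to the mean of each assigned cluster."""
    cost = 0.0
    for c in range(k):
        pts = P[labels == c]
        cost += ((pts - pts.mean(axis=0)) ** 2).sum()
    return cost

# Oblivious projection to m dimensions (m chosen for the demo).
m = 20
G = rng.standard_normal((d, m)) / np.sqrt(m)
Y = X @ G

ratio = kmeans_cost(Y, labels, k) / kmeans_cost(X, labels, k)
print(ratio)
```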
Coresets-methods and history: A theoreticians design pattern for approximation and streaming algorithms
A Munteanu, C Schwiegelshohn - KI-Künstliche Intelligenz, 2018 - Springer
We present a technical survey on the state of the art approaches in data reduction and the
coreset framework. These include geometric decompositions, gradient methods, random …
Toward a unified theory of sparse dimensionality reduction in Euclidean space
Let Φ ∈ R^{m×n} be a sparse Johnson-Lindenstrauss transform [52] with column sparsity s. For a subset T of the unit sphere and ε ∈ (0, 1/2), we study settings for m, s to ensure E_Φ …
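The quantity being controlled is the expected worst-case norm distortion over the set T. For a small finite T, it can be estimated empirically by Monte Carlo over draws of Φ (a simplified sparse construction; parameters are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(6)

n, m, s = 400, 100, 8

def sparse_jl(rng, m, n, s):
    """Draw a sparse JL matrix: s entries of +/- 1/sqrt(s) per column."""
    Phi = np.zeros((m, n))
    for j in range(n):
        idx = rng.choice(m, size=s, replace=False)
        Phi[idx, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return Phi

# A small finite subset T of the unit sphere in R^n.
T = rng.standard_normal((50, n))
T /= np.linalg.norm(T, axis=1, keepdims=True)

# Monte-Carlo estimate of E_Phi sup_{x in T} | ||Phi x||^2 - 1 |.
trials = []
for _ in range(20):
    Phi = sparse_jl(rng, m, n, s)
    Y = T @ Phi.T                      # images of all x in T
    trials.append(np.abs((Y ** 2).sum(axis=1) - 1.0).max())
mean_sup = float(np.mean(trials))
print(mean_sup)
```

For infinite T the paper characterizes this expectation via the geometry of T; the finite-T estimate above only illustrates what the quantity measures.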