Sketching as a tool for numerical linear algebra

DP Woodruff - … and Trends® in Theoretical Computer Science, 2014 - nowpublishers.com
This survey highlights the recent advances in algorithms for numerical linear algebra that
have come from the technique of linear sketching, whereby given a matrix, one first …

Low-rank approximation and regression in input sparsity time

KL Clarkson, DP Woodruff - Journal of the ACM (JACM), 2017 - dl.acm.org
We design a new distribution over m × n matrices S so that, for any fixed n × d matrix A of rank
r, with probability at least 9/10, ∥SAx∥₂ = (1 ± ε)∥Ax∥₂ simultaneously for all x ∈ R^d …
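The sketch described above can be illustrated with a CountSketch-style matrix: one nonzero ±1 per column, so S·A costs time proportional to nnz(A). A minimal NumPy demo (our toy parameters, not the paper's code) checks the subspace-embedding property via the singular values of S·U for an orthonormal basis U of range(A):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2000, 10, 800  # toy sizes; the analysis wants m = O(d^2 / eps^2)

A = rng.standard_normal((n, d))

# S is stored implicitly: a hashed row index and a random sign per column.
rows = rng.integers(0, m, size=n)
signs = rng.choice([-1.0, 1.0], size=n)

# SA = S @ A computed in O(nnz(A)) time via scattered adds.
SA = np.zeros((m, d))
np.add.at(SA, rows, signs[:, None] * A)

# Embedding check over the whole column space: singular values of S @ U
# (U an orthonormal basis of range(A)) should all be close to 1.
U, _, _ = np.linalg.svd(A, full_matrices=False)
SU = np.zeros((m, d))
np.add.at(SU, rows, signs[:, None] * U)
sv = np.linalg.svd(SU, compute_uv=False)
distortion = max(abs(sv.max() - 1.0), abs(1.0 - sv.min()))
print(f"max singular-value distortion: {distortion:.3f}")
```

Because every singular value of SU lies in [1 − ε, 1 + ε] exactly when ∥SAx∥₂ = (1 ± ε)∥Ax∥₂ for all x, a single SVD certifies the "simultaneously for all x" guarantee empirically.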

OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings

J Nelson, HL Nguyên - 2013 IEEE 54th Annual Symposium on …, 2013 - ieeexplore.ieee.org
An oblivious subspace embedding (OSE), given parameters ε, d, is a distribution D over
matrices Π ∈ R^{m×n} such that for any linear subspace W ⊆ R^n with dim(W) = d, P_{Π∼D}(∀ …
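An OSNAP-style matrix puts exactly s nonzeros of value ±1/√s in each column; taking s > 1 lets m shrink well below the s = 1 case. A small sketch of the idea (our parameters, not the authors' code), checking the OSE condition on one random subspace W:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m, s = 2000, 10, 400, 4  # toy sizes; s nonzeros per column

# Build Pi with exactly s nonzero entries (+-1/sqrt(s)) in each column.
Pi = np.zeros((m, n))
for j in range(n):
    idx = rng.choice(m, size=s, replace=False)
    Pi[idx, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

# OSE check: for an orthonormal basis U of a random d-dimensional
# subspace W of R^n, the singular values of Pi @ U should be near 1.
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
sv = np.linalg.svd(Pi @ U, compute_uv=False)
print("singular values lie in", (float(sv.min().round(3)), float(sv.max().round(3))))
```

In practice Π would be stored sparsely (s entries per column) so that Π·A again costs roughly O(s · nnz(A)); the dense array here is only for clarity.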

A framework for Bayesian optimization in embedded subspaces

A Nayebi, A Munteanu… - … Conference on Machine …, 2019 - proceedings.mlr.press
We present a theoretically founded approach for high-dimensional Bayesian optimization
based on low-dimensional subspace embeddings. We prove that the error in the Gaussian …
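The core idea — optimize over a low-dimensional variable y and evaluate the objective at an embedded point By — can be shown without any Bayesian-optimization machinery. In this illustrative sketch (the test function and all parameters are ours, not the paper's), plain random search in the embedded subspace stands in for the GP-guided search a real Bayesian optimizer would use:

```python
import numpy as np

rng = np.random.default_rng(2)
D, d = 1000, 4  # ambient and embedded dimensions (toy values)

# A function of 1000 variables that secretly varies along only 2 directions.
w1, w2 = rng.standard_normal((2, D))

def f(x):
    return (x @ w1 - 3.0) ** 2 + (x @ w2 + 1.0) ** 2

# Random linear embedding B: R^d -> R^D; candidates are searched in R^d.
B = rng.standard_normal((D, d)) / np.sqrt(D)

# Random search in the embedded subspace; a Bayesian optimizer would
# replace this loop with a surrogate-model-guided proposal strategy.
best = min(f(B @ rng.standard_normal(d)) for _ in range(500))
print("best value found in the subspace:", round(float(best), 3))
```

The point of the subspace trick is that the search loop touches only d = 4 decision variables, while every evaluation still happens on the full 1000-dimensional input.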

Sparser Johnson-Lindenstrauss transforms

DM Kane, J Nelson - Journal of the ACM (JACM), 2014 - dl.acm.org
We give two different and simple constructions for dimensionality reduction in ℓ₂ via linear
mappings that are sparse: only an O(ε)-fraction of entries in each column of our embedding …
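A sparse JL map with column sparsity s costs O(s · nnz(x)) to apply instead of O(m · nnz(x)). This hypothetical demo (our toy parameters, not the paper's optimal s = Θ(ε⁻¹ log(1/δ)) setting) checks that pairwise distances of a point set concentrate near their original values:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, m, s, npts = 500, 100, 8, 30  # toy sizes

# Sparse JL map: s nonzeros of value +-1/sqrt(s) per column.
Phi = np.zeros((m, n))
for j in range(n):
    idx = rng.choice(m, size=s, replace=False)
    Phi[idx, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

X = rng.standard_normal((npts, n))  # points in R^n
Y = X @ Phi.T                       # their images in R^m

# Ratios ||Phi(x_i - x_j)|| / ||x_i - x_j|| should concentrate near 1.
ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i, j in combinations(range(npts), 2)]
print("distance ratios lie in", (round(min(ratios), 3), round(max(ratios), 3)))
```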

Training multi-layer over-parametrized neural network in subquadratic time

Z Song, L Zhang, R Zhang - arXiv preprint arXiv:2112.07628, 2021 - arxiv.org
We consider the problem of training a multi-layer over-parametrized neural network to
minimize the empirical risk induced by a loss function. In the typical setting of over …

Oblivious dimension reduction for k-means: beyond subspaces and the Johnson-Lindenstrauss lemma

L Becchetti, M Bury, V Cohen-Addad… - Proceedings of the 51st …, 2019 - dl.acm.org
We show that for n points in d-dimensional Euclidean space, a data-oblivious random
projection of the columns onto m ∈ O((log k + log log n) ε⁻⁶ log(1/ε)) dimensions is sufficient to …
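The claim can be sanity-checked empirically: project the points with a scaled Gaussian map and compare the k-means cost of a fixed clustering before and after projection. This toy setup (our parameters and data, not the paper's construction, which targets much smaller m) shows the cost is approximately preserved:

```python
import numpy as np

rng = np.random.default_rng(4)
npts, d, k, m = 300, 200, 3, 40  # toy sizes

# k well-separated Gaussian blobs with a fixed label assignment.
centers = 10.0 * rng.standard_normal((k, d))
labels = rng.integers(0, k, size=npts)
X = centers[labels] + rng.standard_normal((npts, d))

def kmeans_cost(P, labels, k):
    # Sum of squared distances of points to their cluster's centroid.
    return sum(((P[labels == c] - P[labels == c].mean(axis=0)) ** 2).sum()
               for c in range(k))

# Scaled Gaussian projection R^d -> R^m.
G = rng.standard_normal((d, m)) / np.sqrt(m)
cost_hi = kmeans_cost(X, labels, k)
cost_lo = kmeans_cost(X @ G, labels, k)
print("cost ratio after projection:", round(cost_lo / cost_hi, 3))
```

Because the k-means cost is a sum of many squared norms, it concentrates even better than individual pairwise distances do, which is why dimensions well below the JL bound can suffice.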

[BOOK][B] Small summaries for big data

G Cormode, K Yi - 2020 - books.google.com
The massive volume of data generated in modern applications can overwhelm our ability to
conveniently transmit, store, and index it. For many scenarios, building a compact summary …
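One of the canonical small summaries the book covers is the Count-Min sketch: r rows of w counters, each row with its own hash, answering point queries with one-sided (overestimating) error. A minimal sketch of it, using the classic (a·x + b) mod p hash family as a stand-in for the pairwise-independent hashes the analysis assumes:

```python
import numpy as np

rng = np.random.default_rng(5)
r, w = 4, 50                 # rows x counters (toy sizes)
P = 2**31 - 1                # Mersenne prime for the (a*x + b) mod p hashes
a = rng.integers(1, P, size=r)
b = rng.integers(0, P, size=r)

table = np.zeros((r, w), dtype=np.int64)

def update(x, count=1):
    # Add `count` to one counter per row.
    for i in range(r):
        table[i, ((a[i] * x + b[i]) % P) % w] += count

def query(x):
    # Min over rows: every counter can only overestimate the true count.
    return min(table[i, ((a[i] * x + b[i]) % P) % w] for i in range(r))

stream = rng.integers(0, 1000, size=5000)
for x in stream:
    update(int(x))

true_count = int((stream == stream[0]).sum())
est = int(query(int(stream[0])))
print(f"true {true_count}, estimated {est}")
```

The summary uses r·w counters regardless of stream length, and the estimate never undershoots; the additive overestimate shrinks as w grows.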

Coresets - methods and history: A theoretician's design pattern for approximation and streaming algorithms

A Munteanu, C Schwiegelshohn - KI-Künstliche Intelligenz, 2018 - Springer
We present a technical survey on the state-of-the-art approaches in data reduction and the
coreset framework. These include geometric decompositions, gradient methods, random …

Toward a unified theory of sparse dimensionality reduction in Euclidean space

J Bourgain, S Dirksen, J Nelson - … of the forty-seventh annual ACM …, 2015 - dl.acm.org
Let Φ ∈ R^{m×n} be a sparse Johnson-Lindenstrauss transform [52] with column sparsity s.
For a subset T of the unit sphere and ε ∈ (0, 1/2), we study settings for m, s to ensure E_Φ …
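The quantity studied here, sup_{x∈T} |∥Φx∥₂² − 1|, is directly computable for a finite T. A toy empirical version (our parameters; a finite random T rather than the structured sets the theory addresses):

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, s, tsize = 400, 120, 6, 200  # toy sizes

# Sparse JL transform with column sparsity s, entries +-1/sqrt(s).
Phi = np.zeros((m, n))
for j in range(n):
    idx = rng.choice(m, size=s, replace=False)
    Phi[idx, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

# T: a finite set of points on the unit sphere in R^n.
T = rng.standard_normal((tsize, n))
T /= np.linalg.norm(T, axis=1, keepdims=True)

# sup over x in T of | ||Phi x||_2^2 - 1 |.
sup_dev = float(np.abs(np.sum((T @ Phi.T) ** 2, axis=1) - 1.0).max())
print("sup deviation over T:", round(sup_dev, 3))
```

The theory's contribution is bounding the expectation of this supremum in terms of geometric complexity measures of T, so that m and s can be tuned to T rather than to a worst-case point set.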