Sharper Bounds for Sensitivity Sampling
D Woodruff, T Yasuda - International Conference on …, 2023 - proceedings.mlr.press
In large scale machine learning, random sampling is a popular way to approximate datasets
by a small representative subset of examples. In particular, sensitivity sampling is an …
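To make the mechanism concrete: generic sensitivity sampling keeps rows with probability proportional to an overestimate of their sensitivity and reweights by the inverse probability. A minimal Python sketch, assuming a precomputed overestimate vector sigma (the function name and interface are illustrative, not from the paper):

import numpy as np

def sensitivity_sample(A, sigma, m, seed=None):
    """Sample m rows of A with probabilities proportional to the sensitivity
    overestimates sigma[i], reweighting by inverse probability so that
    importance-weighted sums remain unbiased. (Generic sketch; the paper's
    contribution is sharper bounds on how large m must be.)"""
    rng = np.random.default_rng(seed)
    p = sigma / sigma.sum()                    # sampling distribution
    idx = rng.choice(len(sigma), size=m, p=p)  # sample with replacement
    weights = 1.0 / (m * p[idx])               # inverse-probability weights
    return A[idx], weights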
Online Lewis weight sampling
DP Woodruff, T Yasuda - Proceedings of the 2023 Annual ACM-SIAM …, 2023 - SIAM
The seminal work of Cohen and Peng [CP15](STOC 2015) introduced Lewis weight
sampling to the theoretical computer science community, which yields fast row sampling …
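For $p < 4$, $\ell_p$ Lewis weights can be computed offline by the Cohen-Peng fixed-point iteration $w_i \leftarrow (a_i^\top (\mathbf{A}^\top \mathbf{W}^{1-2/p} \mathbf{A})^{-1} a_i)^{p/2}$. A sketch of that offline baseline (the one-pass online variant is the paper's contribution and is not shown here):

import numpy as np

def lewis_weights(A, p=1.0, iters=50):
    """Approximate l_p Lewis weights via the Cohen-Peng fixed-point iteration,
    which converges for p < 4. Each step computes leverage-type scores of A
    under the current reweighting."""
    n, d = A.shape
    w = np.ones(n)
    for _ in range(iters):
        W = w ** (1.0 - 2.0 / p)
        M = A.T @ (W[:, None] * A)                            # A^T W^{1-2/p} A
        tau = np.einsum('ij,ij->i', A @ np.linalg.inv(M), A)  # a_i^T M^{-1} a_i
        w = tau ** (p / 2.0)
    return w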
Global linear and local superlinear convergence of IRLS for non-smooth robust regression
We advance both the theory and practice of robust $\ell_p$-quasinorm regression for $p \in (0, 1]$ by using novel variants of iteratively reweighted least-squares (IRLS) to solve the …
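As background, the textbook IRLS scheme for $\min_x \|\mathbf{A}x - b\|_p^p$ with $p \in (0, 1]$ repeatedly solves a weighted least-squares problem whose weights come from smoothed residuals. A plain sketch (the paper's variants add safeguards that yield the stated global and local convergence guarantees, which this baseline lacks):

import numpy as np

def irls_lp(A, b, p=0.5, iters=100, eps=1e-8):
    """Basic IRLS for min_x ||Ax - b||_p^p, p in (0, 1]: each step solves a
    weighted least-squares problem with weights w_i = (r_i^2 + eps^2)^(p/2 - 1),
    where eps smooths the non-smooth quasinorm near zero residuals."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = A @ x - b
        w = (r**2 + eps**2) ** (p / 2.0 - 1.0)
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x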
High-dimensional geometric streaming in polynomial space
DP Woodruff, T Yasuda - 2022 IEEE 63rd Annual Symposium …, 2022 - ieeexplore.ieee.org
Many existing algorithms for streaming geometric data analysis have been plagued by
exponential dependencies in the space complexity, which are undesirable for processing …
Chaining, group leverage score overestimates, and fast spectral hypergraph sparsification
We present an algorithm that, given any n-vertex, m-edge, rank-r hypergraph, constructs a
spectral sparsifier with $O(n \varepsilon^{-2} \log n \log r)$ hyperedges in nearly-linear $O(mr)$ time. This …
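For reference, the quadratic form that a hypergraph spectral sparsifier must preserve is the standard one (this definition is background, not taken from the snippet):

\[
Q_H(x) = \sum_{e \in E} w_e \max_{u, v \in e} (x_u - x_v)^2,
\qquad
(1 - \varepsilon)\, Q_H(x) \le Q_{\tilde H}(x) \le (1 + \varepsilon)\, Q_H(x)
\quad \forall x \in \mathbb{R}^n,
\]

where $\tilde H$ is the reweighted sub-hypergraph returned by the sparsifier; for rank $r = 2$ this recovers ordinary graph spectral sparsification.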
Sparsifying generalized linear models
We consider the sparsification of sums $F : \mathbb{R}^n \to \mathbb{R}_+$ where $F(x) = f_1(\langle a_1, x \rangle) + \cdots + f_m(\langle a_m, x \rangle)$ for vectors $a_1, \dots, a_m \in \mathbb{R}^n$ and functions $f_1, \dots, f_m : \mathbb{R} \to \mathbb{R}_+$. We show that $(1 + \varepsilon)$ …
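The guarantee in question is, in the usual sense, a sparse reweighted sub-sum that uniformly approximates $F$; spelled out (our phrasing of the standard definition):

\[
\tilde F(x) = \sum_{i \in S} w_i\, f_i(\langle a_i, x \rangle),
\qquad
(1 - \varepsilon)\, F(x) \le \tilde F(x) \le (1 + \varepsilon)\, F(x)
\quad \forall x \in \mathbb{R}^n,
\]

with a support $S \subseteq [m]$ far smaller than $m$.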
Optimal Excess Risk Bounds for Empirical Risk Minimization on $p$-Norm Linear Regression
A El Hanchi, MA Erdogdu - Advances in Neural Information …, 2024 - proceedings.neurips.cc
We study the performance of empirical risk minimization on the $p$-norm linear regression problem for $p \in (1, \infty)$. We show that, in the realizable case, under no moment …
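The object of study is plain ERM on the $p$-norm loss, which is convex for $p \in (1, \infty)$ and hence directly solvable; a small sketch with a generic solver (illustrative only, since the paper analyzes statistical risk rather than optimization):

import numpy as np
from scipy.optimize import minimize

def erm_pnorm(A, b, p=1.5):
    """Empirical risk minimization for p-norm linear regression, p in (1, inf):
    minimize (1/n) * sum_i |<a_i, x> - b_i|^p, a convex objective."""
    def loss(x):
        r = A @ x - b
        return np.mean(np.abs(r) ** p)
    def grad(x):
        r = A @ x - b
        return (p / len(b)) * (np.sign(r) * np.abs(r) ** (p - 1)) @ A
    x0 = np.linalg.lstsq(A, b, rcond=None)[0]   # warm start at least squares
    return minimize(loss, x0, jac=grad, method='L-BFGS-B').x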
Computing approximate ℓp sensitivities
S Padmanabhan, D Woodruff… - Advances in Neural …, 2024 - proceedings.neurips.cc
Recent works in dimensionality reduction for regression tasks have introduced the notion of
sensitivity, an estimate of the importance of a specific datapoint in a dataset, offering …
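For the special case $p = 2$, the $\ell_p$ sensitivity $s_i = \sup_x |\langle a_i, x \rangle|^p / \|\mathbf{A}x\|_p^p$ reduces exactly to the i-th leverage score and is computable from a QR factorization; a sketch of that easy case (general $p$ requires the approximation algorithms the paper develops):

import numpy as np

def l2_sensitivities(A):
    """For p = 2, the l_p sensitivity of row i equals its leverage score,
    the squared row norm of an orthonormal basis for col(A)."""
    Q, _ = np.linalg.qr(A)               # thin QR: Q has orthonormal columns
    return np.einsum('ij,ij->i', Q, Q)   # squared row norms of Q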
The change-of-measure method, block Lewis weights, and approximating matrix block norms
NS Manoj, M Ovsiankin - arXiv preprint arXiv:2311.10013, 2023 - arxiv.org
Given a matrix $\mathbf{A} \in \mathbb{R}^{k \times n}$, a partitioning of $[k]$ into groups $S_1, \dots, S_m$, an outer norm $p$, and a collection of inner norms such that either $p \ge$ …
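The objective here is the block norm $x \mapsto (\sum_i \|\mathbf{A}_{S_i} x\|_{q_i}^p)^{1/p}$, an outer $\ell_p$ norm over per-group inner norms; a small evaluator makes the definition concrete (the variable names groups and q are ours, not the paper's):

import numpy as np

def block_norm(A, groups, p, q):
    """Return a function evaluating the (p, q)-block norm of Ax: the l_p norm,
    over groups S_i of rows, of the inner q_i-norms ||A[S_i] @ x||_{q_i}."""
    def f(x):
        vals = [np.linalg.norm(A[S] @ x, ord=qi) for S, qi in zip(groups, q)]
        return np.linalg.norm(np.array(vals), ord=p)
    return f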