Sharper Bounds for Sensitivity Sampling

D Woodruff, T Yasuda - International Conference on …, 2023 - proceedings.mlr.press
In large-scale machine learning, random sampling is a popular way to approximate datasets
by a small representative subset of examples. In particular, sensitivity sampling is an …
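
For intuition, here is a minimal numpy sketch of sensitivity sampling in the ℓ2 setting, where the sensitivities coincide with the leverage scores; the function names are illustrative and not from the paper:

```python
import numpy as np

def l2_sensitivities(A):
    """Leverage scores a_i^T (A^T A)^+ a_i, which equal the l2 sensitivities."""
    G = np.linalg.pinv(A.T @ A)
    return np.einsum('ij,jk,ik->i', A, G, A)

def sensitivity_sample(A, k, seed=0):
    """Sample k rows with probability proportional to sensitivity, rescaled so
    that E[||SAx||^2] = ||Ax||^2 for every x (a minimal sketch)."""
    s = l2_sensitivities(A)
    p = s / s.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(A), size=k, replace=True, p=p)
    return A[idx] / np.sqrt(k * p[idx])[:, None]
```

The sampled sketch has far fewer rows than `A` while preserving quadratic forms in expectation; the paper's contribution concerns how small `k` can be taken.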

Online Lewis weight sampling

DP Woodruff, T Yasuda - Proceedings of the 2023 Annual ACM-SIAM …, 2023 - SIAM
The seminal work of Cohen and Peng [CP15] (STOC 2015) introduced Lewis weight
sampling to the theoretical computer science community, which yields fast row sampling …
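
As a rough illustration (this is the standard offline fixed-point iteration of Cohen and Peng for $p < 4$, not the online algorithm of this paper):

```python
import numpy as np

def lewis_weights(A, p=1.0, iters=50):
    """Fixed-point iteration for lp Lewis weights, p < 4:
    w_i <- (a_i^T (A^T W^{1-2/p} A)^{-1} a_i)^{p/2}."""
    n, d = A.shape
    w = np.full(n, d / n)                       # uniform initialization
    for _ in range(iters):
        M = np.linalg.inv(A.T @ (w[:, None] ** (1 - 2 / p) * A))
        w = np.einsum('ij,jk,ik->i', A, M, A) ** (p / 2)
    return w
```

At the fixed point the weights sum to $d$, mirroring the fact that leverage scores sum to the rank; for $p = 2$ the iteration reduces to ordinary leverage scores in a single step.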

Global linear and local superlinear convergence of IRLS for non-smooth robust regression

L Peng, C Kümmerle, R Vidal - Advances in neural …, 2022 - proceedings.neurips.cc
We advance both the theory and practice of robust $\ell_p$-quasinorm regression for $p \in
(0, 1]$ by using novel variants of iteratively reweighted least-squares (IRLS) to solve the …
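
For reference, a textbook IRLS sketch for $\ell_p$ regression (the classic smoothed scheme, not the paper's accelerated variants): each step solves a weighted least-squares problem with weights $|r_i|^{p-2}$, floored at a small $\varepsilon$ to avoid division by zero.

```python
import numpy as np

def irls_lp(A, b, p=1.0, iters=100, eps=1e-8):
    """Classic IRLS sketch for min_x ||Ax - b||_p^p (illustrative only)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # least-squares warm start
    for _ in range(iters):
        r = A @ x - b
        w = np.maximum(np.abs(r), eps) ** (p - 2)   # reweighting step
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * b))
    return x
```

With $p = 1$ this downweights large residuals, so a handful of gross outliers barely affects the fit, unlike ordinary least squares.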

High-dimensional geometric streaming in polynomial space

DP Woodruff, T Yasuda - 2022 IEEE 63rd Annual Symposium …, 2022 - ieeexplore.ieee.org
Many existing algorithms for streaming geometric data analysis have been plagued by
exponential dependencies in the space complexity, which are undesirable for processing …

Chaining, group leverage score overestimates, and fast spectral hypergraph sparsification

A Jambulapati, YP Liu, A Sidford - Proceedings of the 55th Annual ACM …, 2023 - dl.acm.org
We present an algorithm that, given any $n$-vertex, $m$-edge, rank-$r$ hypergraph, constructs a
spectral sparsifier with $O(n\varepsilon^{-2}\log n\log r)$ hyperedges in nearly-linear $O(mr)$ time. This …

Sparsifying generalized linear models

A Jambulapati, JR Lee, YP Liu, A Sidford - Proceedings of the 56th …, 2024 - dl.acm.org
We consider the sparsification of sums $F:\mathbb{R}^n \to \mathbb{R}_+$ where $F(x) = f_1(\langle a_1, x\rangle) + \cdots + f_m(\langle a_m,
x\rangle)$ for vectors $a_1,\dots,a_m \in \mathbb{R}^n$ and functions $f_1,\dots,f_m:\mathbb{R} \to \mathbb{R}_+$. We show that $(1+\varepsilon)$ …

Optimal Excess Risk Bounds for Empirical Risk Minimization on p-Norm Linear Regression

A El Hanchi, MA Erdogdu - Advances in Neural Information …, 2024 - proceedings.neurips.cc
We study the performance of empirical risk minimization on the $p$-norm linear regression
problem for $p \in (1,\infty)$. We show that, in the realizable case, under no moment …

Computing approximate ℓp sensitivities

S Padmanabhan, DP Woodruff, Q Zhang - Proceedings of the 37th …, 2023 - dl.acm.org
Recent works in dimensionality reduction for regression tasks have introduced the notion of
sensitivity, an estimate of the importance of a specific datapoint in a dataset, offering …

Computing Approximate ℓp Sensitivities

S Padmanabhan, D Woodruff… - Advances in Neural …, 2024 - proceedings.neurips.cc
Recent works in dimensionality reduction for regression tasks have introduced the notion of
sensitivity, an estimate of the importance of a specific datapoint in a dataset, offering …

The change-of-measure method, block Lewis weights, and approximating matrix block norms

NS Manoj, M Ovsiankin - arXiv preprint arXiv:2311.10013, 2023 - arxiv.org
Given a matrix $\mathbf{A} \in \mathbb{R}^{k\times n}$, a partitioning of $[k]$ into groups
$S_1,\dots,S_m$, an outer norm $p$, and a collection of inner norms such that either $p \ge$ …