The why and how of nonnegative matrix factorization

N Gillis - … , optimization, kernels, and support vector machines, 2014 - books.google.com
Nonnegative matrix factorization (NMF) has become a widely used tool for the analysis of
high-dimensional data as it automatically extracts sparse and meaningful features from a set …
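
As a concrete illustration of the factorization model, here is a minimal sketch of NMF with the classical Lee-Seung multiplicative updates (a standard baseline, not the specific algorithms surveyed by Gillis; the rank and iteration count are illustrative assumptions):

```python
import numpy as np

def nmf(X, r, n_iter=200, eps=1e-10):
    """Factor a nonnegative X (m x n) as X ~ W @ H with W, H >= 0."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(n_iter):
        # Multiplicative updates preserve nonnegativity and decrease ||X - WH||_F
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

X = np.abs(np.random.default_rng(1).standard_normal((50, 40)))
W, H = nmf(X, r=5)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```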

[BOOK][B] Nonnegative matrix factorization

N Gillis - 2020 - SIAM
Identifying the underlying structure of a data set and extracting meaningful information is a
key problem in data analysis. Simple and powerful methods to achieve this goal are linear …
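
The "linear" models alluded to in the snippet factor a data matrix into two smaller factors; the unconstrained instance is the truncated SVD, sketched below for contrast with NMF (dimensions are illustrative):

```python
import numpy as np

X = np.random.default_rng(0).random((50, 40))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 5
W = U[:, :r] * s[:r]   # m x r factor (may have negative entries, unlike NMF)
H = Vt[:r, :]          # r x n factor
print(np.linalg.norm(X - W @ H))  # optimal rank-r error (Eckart-Young)
```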

Training (overparametrized) neural networks in near-linear time

J van den Brand, B Peng, Z Song, O Weinstein - arXiv preprint arXiv:2006.11648, 2020 - arxiv.org
The slow convergence rate and pathological curvature issues of first-order gradient methods
for training deep neural networks initiated an ongoing effort for developing faster …
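
A toy illustration of the curvature issue the abstract refers to: gradient descent crawls on an ill-conditioned quadratic because its step size is limited by the largest curvature, while a second-order (Newton) step solves it immediately. This is only a motivating sketch, not the paper's algorithm:

```python
import numpy as np

A = np.diag([1.0, 1000.0])           # condition number 1000
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x

x = np.zeros(2)
lr = 1.0 / 1000.0                    # step size limited by the largest curvature
for _ in range(1000):
    x -= lr * (A @ x - b)            # first-order update
print("gradient descent, 1000 steps:", f(x))

x_newton = np.linalg.solve(A, b)     # a single Newton step from the origin
print("one Newton step:            ", f(x_newton))
```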

Overview of accurate coresets

I Jubran, A Maalouf, D Feldman - … Reviews: Data Mining and …, 2021 - Wiley Online Library
A coreset of an input set is a small summary of it, such that solving a problem on the
coreset as input provably yields the same result as solving the same problem on the …
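
A minimal example of an accurate coreset, for 1-mean queries: by the identity $\sum_i \|x_i - q\|^2 = n\|\mu - q\|^2 + \sum_i \|x_i - \mu\|^2$, the mean plus one cached scalar answers every query with zero error. This textbook case is far simpler than the survey's general constructions:

```python
import numpy as np

X = np.random.default_rng(0).random((1000, 3))
n, mu = len(X), X.mean(axis=0)
c = ((X - mu) ** 2).sum()            # cached constant: cost at the mean

q = np.array([0.2, 0.9, 0.5])        # an arbitrary query point
full_cost = ((X - q) ** 2).sum()
coreset_cost = n * ((mu - q) ** 2).sum() + c
print(np.isclose(full_cost, coreset_cost))  # True: the coreset is exact
```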

Low-rank approximation with $1/\epsilon^{1/3}$ matrix-vector products

A Bakshi, KL Clarkson, DP Woodruff - … of the 54th Annual ACM SIGACT …, 2022 - dl.acm.org
We study iterative methods based on Krylov subspaces for low-rank approximation under
any Schatten-p norm. Here, given access to a matrix A through matrix-vector products, an …
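
The flavor of such methods can be sketched with a randomized block Krylov iteration that touches $A$ only through matrix(-transpose)-vector products; the block size and depth below are illustrative, and this is the generic scheme rather than the paper's tuned variant:

```python
import numpy as np

def krylov_low_rank(A, k, q=4):
    """Rank-k approximation from q Krylov blocks (2*k matvecs per block)."""
    m, n = A.shape
    Y = np.random.default_rng(0).standard_normal((n, k))
    blocks = []
    for _ in range(q):
        Y = A.T @ (A @ Y)                        # one Krylov step
        blocks.append(Y)
    Q, _ = np.linalg.qr(np.hstack(blocks))       # orthonormal Krylov basis
    U, s, Vt = np.linalg.svd(A @ Q, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k, :] @ Q.T     # A ~ L @ R

A = np.random.default_rng(1).standard_normal((200, 100))
L, R = krylov_low_rank(A, k=10)
print(np.linalg.norm(A - L @ R) / np.linalg.norm(A))
```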

Nonmonotone variable projection algorithms for matrix decomposition with missing data

X Su, M Gan, G Chen, L Yang, J Jin - Pattern Recognition, 2024 - Elsevier
This paper investigates algorithms for matrix factorization when some or many components
are missing, a problem that arises frequently in computer vision and pattern recognition. We …
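
The problem setup can be seen in a baseline alternating least squares sketch over the observed mask (the paper's nonmonotone variable projection algorithms are more sophisticated; the mask density, rank, and regularization here are assumptions for illustration):

```python
import numpy as np

def als_missing(X, M, r, n_iter=50, lam=1e-3):
    """Fit X ~ U @ V.T using only the entries where the mask M is True."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    U, V = rng.random((m, r)), rng.random((n, r))
    R = lam * np.eye(r)
    for _ in range(n_iter):
        for i in range(m):   # small regularized LS solve per row
            c = M[i]
            U[i] = np.linalg.solve(V[c].T @ V[c] + R, V[c].T @ X[i, c])
        for j in range(n):   # and per column
            rws = M[:, j]
            V[j] = np.linalg.solve(U[rws].T @ U[rws] + R, U[rws].T @ X[rws, j])
    return U, V

rng = np.random.default_rng(1)
X = rng.random((30, 5)) @ rng.random((5, 20))   # ground truth of rank 5
M = rng.random(X.shape) < 0.6                    # ~60% of entries observed
U, V = als_missing(X, M, r=5)
print(np.abs(X - U @ V.T)[~M].mean())            # error on the *missing* entries
```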

Efficient alternating minimization with applications to weighted low rank approximation

Z Song, M Ye, J Yin, L Zhang - arXiv preprint arXiv:2306.04169, 2023 - arxiv.org
Weighted low rank approximation is a fundamental problem in numerical linear algebra, and
it has many applications in machine learning. Given a matrix $M \in \mathbb{R}^{n \times n}$ …
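
The textbook alternating minimization scheme the paper accelerates reduces each half-step to a row-wise weighted least squares solve; a plain sketch follows (the weights and rank are illustrative, and none of the paper's speedups appear here):

```python
import numpy as np

def weighted_als(M, W, r, n_iter=50):
    """Locally minimize ||W * (M - U @ V.T)||_F^2 by alternating LS."""
    n = M.shape[0]
    rng = np.random.default_rng(0)
    U, V = rng.random((n, r)), rng.random((n, r))
    for _ in range(n_iter):
        for i in range(n):   # row i of U: weighted LS with weights W[i]
            U[i] = np.linalg.lstsq(W[i, :, None] * V, W[i] * M[i], rcond=None)[0]
        for j in range(n):   # row j of V: weighted LS with weights W[:, j]
            V[j] = np.linalg.lstsq(W[:, j, None] * U, W[:, j] * M[:, j], rcond=None)[0]
    return U, V

rng = np.random.default_rng(1)
M, W = rng.random((20, 20)), rng.random((20, 20))
U, V = weighted_als(M, W, r=3)
print(np.linalg.norm(W * (M - U @ V.T)))
```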

Numerically stable binary coded computations

N Charalambides, H Mahdavifar, AO Hero III - arXiv preprint arXiv …, 2021 - arxiv.org
This paper addresses the gradient coding and coded matrix multiplication problems in
distributed optimization and coded computing. We present a numerically stable binary …
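
The simplest binary gradient code is the fractional-repetition scheme: workers are grouped, each group redundantly computes the same partial-gradient sum, and decoding uses only 0/1 coefficients, which is why binary codes avoid the numerical instability of real-valued decoding. A toy instance with 4 workers tolerating 1 straggler (the paper's constructions are far more general):

```python
import numpy as np

d = 5
g = [np.random.default_rng(i).random(d) for i in range(4)]  # partial gradients
full = g[0] + g[1] + g[2] + g[3]

# Binary encoding: workers 0,1 both send g0+g1; workers 2,3 both send g2+g3.
sent = [g[0] + g[1], g[0] + g[1], g[2] + g[3], g[2] + g[3]]
group = [0, 0, 1, 1]

straggler = 1                                   # any single worker may fail
alive = [w for w in range(4) if w != straggler]

decoded, seen = np.zeros(d), set()
for w in alive:                                 # one survivor per group suffices
    if group[w] not in seen:
        seen.add(group[w])
        decoded += sent[w]                      # decoding coefficients are all 1
print(np.allclose(decoded, full))               # True
```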

Near-linear time and fixed-parameter tractable algorithms for tensor decompositions

AV Mahankali, DP Woodruff, Z Zhang - arXiv preprint arXiv:2207.07417, 2022 - arxiv.org
We study low rank approximation of tensors, focusing on the tensor train and Tucker
decompositions, as well as approximations with tree tensor networks and more general …
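
For orientation, a Tucker decomposition can be sketched via the truncated higher-order SVD, one of the classical baselines in this regime (the target ranks below are illustrative; tensor-train code would be analogous):

```python
import numpy as np

def hosvd(T, ranks):
    """Tucker: T ~ core contracted with factor matrices U1, U2, U3."""
    Us = []
    for mode, r in enumerate(ranks):
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)  # mode unfolding
        U, _, _ = np.linalg.svd(M, full_matrices=False)
        Us.append(U[:, :r])
    core = np.einsum('abc,ai,bj,ck->ijk', T, *Us)   # project onto the factors
    return core, Us

T = np.random.default_rng(0).random((10, 12, 14))
core, Us = hosvd(T, (3, 3, 3))
T_hat = np.einsum('ijk,ai,bj,ck->abc', core, *Us)
print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```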

Additive error guarantees for weighted low rank approximation

A Bhaskara, AK Ruwanpathirana… - International …, 2021 - proceedings.mlr.press
Low-rank approximation is a classic tool in data analysis, where the goal is to approximate a
matrix $A$ with a low-rank matrix $L$ so as to minimize the error $\|A - L\|_F^2$ …
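
For reference, the two objectives at play, written out (notation assumed rather than copied from the paper; the exact form of the additive term is the paper's subject, so the last line is only indicative):

```latex
% Classic low-rank approximation: solved exactly by the truncated SVD
% (Eckart--Young theorem).
\min_{\operatorname{rank}(L) \le k} \; \|A - L\|_F^2

% Weighted variant with weight matrix W (entrywise product \circ); NP-hard
% in general. An additive-error guarantee returns \hat{L} with, indicatively,
\|W \circ (A - \hat{L})\|_F^2
  \;\le\; \min_{\operatorname{rank}(L) \le k} \|W \circ (A - L)\|_F^2
  \;+\; \varepsilon \, \|A\|_F^2
```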