Convex optimization for big data: Scalable, randomized, and parallel algorithms for big data analytics
This article reviews recent advances in convex optimization algorithms for big data, which
aim to reduce the computational, storage, and communications bottlenecks. We provide an …
Golden ratio algorithms for variational inequalities
Y Malitsky - Mathematical Programming, 2020 - Springer
The paper presents a fully adaptive algorithm for monotone variational inequalities. In each
iteration the method uses two previous iterates for an approximation of the local Lipschitz …
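A minimal sketch of the non-adaptive (fixed-step) golden ratio algorithm from this line of work, applied to an illustrative monotone operator that is not from the paper; the paper's contribution is the fully adaptive step-size variant, which estimates the local Lipschitz constant from two previous iterates instead of using a fixed lam.

```python
import numpy as np

# Fixed-step golden ratio algorithm (GRAAL) sketch for a monotone variational
# inequality: find x* with <F(x*), x - x*> >= 0 for all x in C.
# Here C = R^2, so the projection step is the identity.

phi = (1 + np.sqrt(5)) / 2              # golden ratio

def graal(F, x0, lam, iters):
    """Fixed step lam is assumed to satisfy lam <= phi / (2L), L = Lipschitz constant of F."""
    x, x_bar = x0.copy(), x0.copy()
    for _ in range(iters):
        x_bar = ((phi - 1) * x + x_bar) / phi   # golden-ratio averaging of iterates
        x = x_bar - lam * F(x)                  # forward step (projection onto C = R^2 is identity)
    return x

# Monotone, 1-Lipschitz rotation field with unique solution x* = 0;
# plain gradient iteration x - lam*F(x) diverges on this operator.
F = lambda z: np.array([z[1], -z[0]])
x = graal(F, np.array([1.0, 1.0]), lam=0.7, iters=300)
```

The rotation field is the standard example where extrapolation-free gradient steps fail, which is why methods of this family average or reuse past iterates.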
Variable metric forward–backward algorithm for minimizing the sum of a differentiable function and a convex function
We consider the minimization of a function G defined on R^N, which is the sum of a (not
necessarily convex) differentiable function and a (not necessarily differentiable) convex …
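A minimal sketch of a forward–backward (proximal gradient) iteration for G = f + g of the kind this entry studies, using a fixed diagonal matrix D as a stand-in for the variable metrics the paper analyzes; the instance (quadratic f, l1-norm g) and the choice of D are hypothetical.

```python
import numpy as np

# Forward-backward splitting for G(x) = f(x) + g(x):
# f(x) = 0.5*||x - b||^2 (differentiable), g(x) = lam*||x||_1 (convex, nonsmooth).
# A diagonal metric D makes both the gradient step and the prox metric-weighted.

b = np.array([3.0, -0.5, 1.0])
lam, gamma = 1.0, 0.5
D = np.array([1.0, 2.0, 1.0])       # diagonal metric (illustrative choice)

def soft(v, t):
    """Proximal operator of t*|.| (soft-thresholding), applied coordinatewise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(3)
for _ in range(200):
    grad = x - b                                        # forward (gradient) step
    x = soft(x - gamma * grad / D, gamma * lam / D)     # backward (prox) step in metric D

# The minimizer of G has the closed form soft(b, lam) = [2, 0, 0].
```

With the identity metric this reduces to the classical proximal gradient method; the paper's point is that a well-chosen variable metric can accelerate convergence without losing the splitting structure.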
Dropping convexity for faster semi-definite optimization
S Bhojanapalli, A Kyrillidis… - Conference on Learning …, 2016 - proceedings.mlr.press
We study the minimization of a convex function f(X) over the set of n×n positive semi-definite matrices, but when the problem is recast as min_U g(U) := f(UU^⊤), with …
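A minimal sketch of the factored (Burer–Monteiro-style) approach this entry describes: instead of minimizing f(X) over PSD matrices with an eigenvalue projection each step, parametrize X = UU^⊤ and run plain gradient descent on g(U) = f(UU^⊤). The instance f(X) = 0.5*||X − M||_F^2, the step size, and the iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Factored gradient descent on g(U) = f(U U^T) with f(X) = 0.5*||X - M||_F^2.
# The PSD constraint is enforced for free by the parametrization X = U U^T.

rng = np.random.default_rng(0)
M = np.array([[2.0, 1.0], [1.0, 2.0]])      # PSD target matrix
U = rng.standard_normal((2, 2))             # random init avoids the saddle point U = 0

eta = 0.02
for _ in range(3000):
    grad_f = U @ U.T - M                    # gradient of f at X = U U^T (symmetric)
    U -= eta * 2.0 * grad_f @ U             # chain rule: grad g(U) = 2 * grad_f(U U^T) @ U

# U @ U.T approaches M without any PSD projection or eigendecomposition.
```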
Near-optimal no-regret learning dynamics for general convex games
A recent line of work has established uncoupled learning dynamics such that, when
employed by all players in a game, each player's regret after $ T $ repetitions grows …
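To illustrate the quantity being bounded, here is the classical Hedge (multiplicative weights) update in uncoupled self-play on matching pennies. Hedge only guarantees O(√T) regret against arbitrary opponents; the point of this line of work is that smarter uncoupled dynamics drive every player's regret down to near-optimal polylogarithmic growth in T. The game, initialization, and learning rate below are illustrative assumptions.

```python
import numpy as np

# Hedge self-play on matching pennies, losses rescaled to [0, 1].
# Row player's loss matrix; the column player's loss is 1 - L (zero-sum).
T = 10000
eta = np.sqrt(8 * np.log(2) / T)            # standard tuning for losses in [0, 1]
L = np.array([[0.0, 1.0], [1.0, 0.0]])

cum1 = np.zeros(2)                          # row player's cumulative loss per action
cum2 = np.array([0.5, 0.0])                 # column player, biased init to break symmetry
total1 = 0.0                                # row player's realized expected loss

for _ in range(T):
    p1 = np.exp(-eta * cum1); p1 /= p1.sum()
    p2 = np.exp(-eta * cum2); p2 /= p2.sum()
    l1 = L @ p2                             # row action losses against current p2
    l2 = 1.0 - L.T @ p1                     # column action losses (zero-sum)
    total1 += p1 @ l1
    cum1 += l1
    cum2 += l2

regret1 = total1 - cum1.min()               # regret vs. best fixed action in hindsight
bound = np.log(2) / eta + eta * T / 8       # classical Hedge guarantee, ~O(sqrt(T))
```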
Algorithms for nonnegative matrix factorization with the Kullback–Leibler divergence
Nonnegative matrix factorization (NMF) is a standard linear dimensionality reduction
technique for nonnegative data sets. In order to measure the discrepancy between the input …
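A minimal sketch of the classical Lee–Seung multiplicative updates for NMF under the Kullback–Leibler divergence, the baseline algorithm for the objective this entry studies; the synthetic data, factorization rank, and iteration count are illustrative assumptions.

```python
import numpy as np

# Multiplicative updates for min_{W,H >= 0} D_KL(V || WH),
# D_KL(V || X) = sum(V * log(V / X) - V + X).

rng = np.random.default_rng(1)
eps = 1e-10
W0, H0 = rng.random((8, 3)), rng.random((3, 5))
V = W0 @ H0                                 # exactly rank-3 nonnegative data
W, H = rng.random((8, 3)), rng.random((3, 5))

def kl(V, X):
    return np.sum(V * np.log((V + eps) / (X + eps)) - V + X)

losses = []
for _ in range(200):
    WH = W @ H + eps
    H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)   # update H, denominators = column sums of W
    WH = W @ H + eps
    W *= ((V / WH) @ H.T) / (np.ones_like(V) @ H.T + eps)   # update W, denominators = row sums of H
    losses.append(kl(V, W @ H))
```

These updates keep W and H nonnegative automatically and never increase the KL objective, which is the property faster algorithms for this problem must preserve or trade off.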
Sparsity-based Poisson denoising with dictionary learning
The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several …
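In the high-SNR regime the snippet mentions, a standard approach (not specific to this paper's dictionary-learning method) is a variance-stabilizing transform: the Anscombe transform maps Poisson counts to approximately unit-variance Gaussian data, so an off-the-shelf Gaussian denoiser can be applied and the transform inverted. A minimal sketch, with an illustrative intensity level:

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: Poisson(lam) data -> approximately N(., 1) for large lam."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased at low counts; unbiased inverses exist)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(0)
counts = rng.poisson(lam=30.0, size=100000)   # high-SNR Poisson observations
stabilized = anscombe(counts)                 # sample std is close to 1
```

At low counts (low SNR) the Gaussian approximation breaks down, which is exactly the regime motivating direct Poisson models like the one in this paper.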
A smooth primal-dual optimization framework for nonsmooth composite convex minimization
We propose a new and low per-iteration complexity first-order primal-dual optimization
framework for a convex optimization template with broad applications. Our analysis relies on …
Generalized self-concordant functions: a recipe for newton-type methods
T Sun, Q Tran-Dinh - Mathematical Programming, 2019 - Springer
We study the smooth structure of convex functions by generalizing a powerful concept, so-called self-concordance, introduced by Nesterov and Nemirovskii in the early 1990s, to a …
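The classical recipe this entry generalizes: for a standard self-concordant f, the damped Newton step x+ = x − H(x)^{-1}∇f(x)/(1 + λ(x)), with Newton decrement λ(x) = (∇f(x)^⊤ H(x)^{-1} ∇f(x))^{1/2}, converges globally without a line search. A 1-D sketch on the illustrative self-concordant function f(t) = t − log t (minimizer t = 1):

```python
import numpy as np

def damped_newton(grad, hess, t, iters):
    """Damped Newton method for a 1-D self-concordant function on its domain."""
    for _ in range(iters):
        g, h = grad(t), hess(t)
        decrement = abs(g) / np.sqrt(h)         # Newton decrement lambda(t) in 1-D
        t = t - g / (h * (1.0 + decrement))     # damping keeps t inside the domain
    return t

grad = lambda t: 1.0 - 1.0 / t                  # f'(t) for f(t) = t - log(t)
hess = lambda t: 1.0 / t ** 2                   # f''(t) > 0 on t > 0
t_star = damped_newton(grad, hess, t=5.0, iters=30)
```

The damping factor 1/(1 + λ) is what self-concordance buys: the iterate provably stays in the domain of f and the method needs no step-size tuning, the property the paper extends to broader function classes.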
Finite-sample analysis of M-estimators using self-concordance
DM Ostrovskii, F Bach - 2021 - projecteuclid.org
The classical asymptotic theory for parametric M-estimators guarantees that, in the limit of
infinite sample size, the excess risk has a chi-square type distribution, even in the …