Computational optimal transport: Complexity by accelerated gradient descent is better than by Sinkhorn's algorithm

P Dvurechensky, A Gasnikov… - … conference on machine …, 2018 - proceedings.mlr.press
We analyze two algorithms for approximating the general optimal transport (OT) distance
between two discrete distributions of size $n$, up to accuracy $\varepsilon$. For the first …
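For reference, a minimal Sinkhorn iteration for the entropically regularized OT problem can be sketched as follows; the 1-D grid, squared-distance cost, regularization value eps, and iteration count are illustrative assumptions, and the paper's accelerated gradient alternative (and its exact stopping rule for Sinkhorn) is not reproduced here.

```python
import numpy as np

def sinkhorn(p, q, C, eps=0.05, n_iter=200):
    """Approximate the entropically regularized OT cost between
    histograms p and q with ground cost matrix C."""
    K = np.exp(-C / eps)               # Gibbs kernel
    u = np.ones_like(p)
    v = np.ones_like(q)
    for _ in range(n_iter):
        v = q / (K.T @ u)              # match column marginals
        u = p / (K @ v)                # match row marginals
    P = u[:, None] * K * v[None, :]    # transport plan diag(u) K diag(v)
    return np.sum(P * C)

# toy usage: two random histograms on a 1-D grid
rng = np.random.default_rng(0)
n = 50
x = np.linspace(0.0, 1.0, n)
C = (x[:, None] - x[None, :]) ** 2
p = rng.random(n); p /= p.sum()
q = rng.random(n); q /= q.sum()
print(sinkhorn(p, q, C))
```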

Scalable semidefinite programming

A Yurtsever, JA Tropp, O Fercoq, M Udell… - SIAM Journal on …, 2021 - SIAM
Semidefinite programming (SDP) is a powerful framework from convex optimization that has
striking potential for data science applications. This paper develops a provably correct …
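For orientation only, the kind of problem the paper targets can be written down with an off-the-shelf modeling tool such as CVXPY; the MaxCut-style instance, the size n = 20, and the random cost matrix below are assumptions, and this generic formulation does not implement the paper's memory-efficient scalable solver.

```python
import numpy as np
import cvxpy as cp

# Small MaxCut-style SDP: minimize <C, X> subject to diag(X) = 1, X PSD.
rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n))
C = (A + A.T) / 2                          # symmetric cost matrix

X = cp.Variable((n, n), PSD=True)          # decision variable constrained to the PSD cone
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), [cp.diag(X) == 1])
prob.solve()
print(prob.value)
```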

A dual approach for optimal algorithms in distributed optimization over networks

CA Uribe, S Lee, A Gasnikov… - 2020 Information theory …, 2020 - ieeexplore.ieee.org
We study dual-based algorithms for distributed convex optimization problems over networks,
where the objective is to minimize a sum $\sum_{i=1}^m f_i(z)$ of functions over a network. We …
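A hedged sketch of this consensus setting, using a simple decentralized gradient step over a ring network rather than the dual-based algorithms the paper analyzes; the quadratic local objectives, mixing matrix, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 6, 3                                # agents and problem dimension
b = rng.standard_normal((m, d))            # local objectives f_i(z) = 0.5 * ||z - b_i||^2

# doubly stochastic mixing matrix for a ring graph
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] = 0.25
    W[i, (i + 1) % m] = 0.25

x = np.zeros((m, d))                       # one local copy of the decision variable per agent
alpha = 0.1
for _ in range(500):
    grad = x - b                           # gradients of the local quadratics
    x = W @ x - alpha * grad               # average with neighbors, then take a gradient step

print(x.mean(axis=0))                      # network average of the iterates
print(b.mean(axis=0))                      # minimizer of the sum of the local objectives
```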

Last-iterate convergent policy gradient primal-dual methods for constrained MDPs

D Ding, CY Wei, K Zhang… - Advances in Neural …, 2024 - proceedings.neurips.cc
We study the problem of computing an optimal policy of an infinite-horizon discounted
constrained Markov decision process (constrained MDP). Despite the popularity of …
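The primal-dual structure behind such methods can be illustrated, far from the MDP setting, on a one-dimensional constrained problem; the toy objective, constraint, step size, and iteration count below are assumptions, and this is not the paper's policy-gradient algorithm.

```python
# Lagrangian-based primal-dual iteration on min (x - 2)^2 s.t. x <= 1.
# The KKT point is x* = 1 with multiplier lam* = 2.
x, lam = 0.0, 0.0
eta = 0.05
for _ in range(2000):
    grad_x = 2.0 * (x - 2.0) + lam             # d/dx of f(x) + lam * (x - 1)
    x -= eta * grad_x                          # primal descent step
    lam = max(0.0, lam + eta * (x - 1.0))      # projected dual ascent step
print(x, lam)                                  # last iterate approaches (1, 2)
```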

Dropping convexity for faster semi-definite optimization

S Bhojanapalli, A Kyrillidis… - Conference on Learning …, 2016 - proceedings.mlr.press
We study the minimization of a convex function $f(X)$ over the set of $n \times n$ positive
semi-definite matrices, but when the problem is recast as $\min_U g(U) := f(UU^\top)$, with …
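A minimal sketch of the factored reformulation named in this abstract, applied to the toy objective $f(X) = \tfrac{1}{2}\|X - M\|_F^2$; the random rank-3 target M, the step-size rule, and the iteration count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 30, 3
V = rng.standard_normal((n, r))
M = V @ V.T                                   # PSD target of rank r

U = 0.1 * rng.standard_normal((n, r))         # small random initialization of the factor
eta = 0.1 / np.linalg.norm(M, 2)              # step size scaled by the spectral norm of M
for _ in range(3000):
    R = U @ U.T - M                           # gradient of f at X = U U^T
    U -= eta * (2.0 * R @ U)                  # chain rule: grad g(U) = 2 R U
print(np.linalg.norm(U @ U.T - M) / np.linalg.norm(M))   # relative recovery error
```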

On the complexity of approximating Wasserstein barycenters

A Kroshnin, N Tupitsa, D Dvinskikh… - International …, 2019 - proceedings.mlr.press
We study the complexity of approximating the Wasserstein barycenter of $m$ discrete
measures, or histograms of size $n$, by contrasting two alternative approaches that use …
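For context, a standard fixed-support approach to this problem is iterative Bregman projections with Sinkhorn-like updates; the sketch below uses that template with an illustrative 1-D grid, regularization value gamma, and uniform weights, and is not presented as either of the paper's contrasted algorithms.

```python
import numpy as np

def barycenter_ibp(P, C, weights, gamma=0.05, n_iter=200):
    """Entropically regularized barycenter of the histograms in the rows
    of P (shared support, cost matrix C) via iterative Bregman projections."""
    K = np.exp(-C / gamma)
    V = np.ones_like(P)
    for _ in range(n_iter):
        U = P / (K @ V.T).T                   # row k: u_k = p_k / (K v_k)
        KU = (K.T @ U.T).T                    # row k: K^T u_k
        b = np.exp(weights @ np.log(KU))      # weighted geometric mean of the marginals
        V = b[None, :] / KU
    return b

# toy usage: barycenter of three random histograms on a 1-D grid
rng = np.random.default_rng(0)
n = 60
x = np.linspace(0.0, 1.0, n)
C = (x[:, None] - x[None, :]) ** 2
P = rng.random((3, n))
P /= P.sum(axis=1, keepdims=True)
w = np.ones(3) / 3
print(barycenter_ibp(P, C, w).sum())          # close to 1: the output is itself a histogram
```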

Online adaptive methods, universality and acceleration

KY Levy, A Yurtsever, V Cevher - Advances in neural …, 2018 - proceedings.neurips.cc
We present a novel method for convex unconstrained optimization that, without any
modifications, ensures: (1) an accelerated convergence rate for smooth objectives, (2) standard …
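The adaptivity referred to here builds on the classical accumulated-gradient step-size rule; a minimal AdaGrad-norm style sketch on a random quadratic is shown below (the quadratic, the scale parameter D, and the iteration count are assumptions, and the paper's accelerated variant is not implemented).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
A = rng.standard_normal((d, d))
Q = A.T @ A / d + np.eye(d)                   # well-conditioned positive definite quadratic
b = rng.standard_normal(d)

x = np.zeros(d)
G2 = 0.0                                      # running sum of squared gradient norms
D = 1.0                                       # diameter-like scale parameter
for _ in range(2000):
    g = Q @ x - b                             # gradient of 0.5 x^T Q x - b^T x
    G2 += g @ g
    x -= (D / np.sqrt(G2)) * g                # step size adapts to the observed gradients
print(np.linalg.norm(Q @ x - b))              # gradient norm shrinks without tuning a step size
```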

An inexact augmented Lagrangian framework for nonconvex optimization with nonlinear constraints

MF Sahin, A Alacaoglu, F Latorre… - Advances in Neural …, 2019 - proceedings.neurips.cc
We propose a practical inexact augmented Lagrangian method (iALM) for nonconvex
problems with nonlinear constraints. We characterize the total computational complexity of …
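The augmented Lagrangian template with inexact inner solves can be sketched on a small nonlinearly constrained toy problem; the objective, constraint, penalty schedule, and step sizes below are assumptions, and the sketch carries none of the paper's complexity guarantees.

```python
import numpy as np

# min x1 + x2  subject to  x1^2 + x2^2 - 1 = 0 (minimizer: -(1, 1)/sqrt(2)).
def f_grad(x): return np.ones(2)              # gradient of the linear objective
def c(x):      return x @ x - 1.0             # equality constraint value
def c_grad(x): return 2.0 * x                 # constraint gradient

x = np.array([1.0, 0.0])
lam, rho = 0.0, 1.0
for _ in range(10):                           # outer (dual / penalty) loop
    step = 0.4 / (1.0 + 4.0 * rho)            # conservative inner step size
    for _ in range(200):                      # inexact inner minimization by gradient descent
        g = f_grad(x) + (lam + rho * c(x)) * c_grad(x)
        x -= step * g
    lam += rho * c(x)                         # multiplier update
    rho *= 2.0                                # tighten the penalty
print(x, c(x))                                # close to -(1, 1)/sqrt(2) with small violation
```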

A universal algorithm for variational inequalities adaptive to smoothness and noise

F Bach, KY Levy - Conference on learning theory, 2019 - proceedings.mlr.press
We consider variational inequalities coming from monotone operators, a setting that
includes convex minimization and convex-concave saddle-point problems. We assume an …
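A standard baseline in this setting is the extragradient method; the sketch below applies it to the monotone operator of the bilinear saddle-point problem min_x max_y xy (the operator, step size, and iteration count are illustrative, and the paper's universal adaptive scheme is not implemented).

```python
import numpy as np

def F(z):
    """Monotone operator of the saddle-point problem min_x max_y x*y."""
    x, y = z
    return np.array([y, -x])

z = np.array([1.0, 1.0])
eta = 0.5
for _ in range(200):
    z_half = z - eta * F(z)        # extrapolation (look-ahead) step
    z = z - eta * F(z_half)        # update using the operator at the look-ahead point
print(z)                           # approaches the solution (0, 0); plain descent-ascent would cycle
```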

Online primal-dual mirror descent under stochastic constraints

X Wei, H Yu, MJ Neely - Proceedings of the ACM on Measurement and …, 2020 - dl.acm.org
We consider online convex optimization with stochastic constraints where the objective
functions are arbitrarily time-varying and the constraint functions are independent and …
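The primal-dual structure with a virtual queue for the stochastic constraint can be sketched as follows; the linear objectives and constraints, box feasible set, step size, and horizon are toy assumptions, and the sketch uses Euclidean projections rather than the paper's mirror-descent updates and parameter choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, b = 5, 5000, 1.0
x = np.full(d, 0.5)                            # decision variable in the box [0, 1]^d
Q = 0.0                                        # virtual queue acting as a dual variable
eta = 0.01
avg_cost, avg_viol = 0.0, 0.0
for t in range(T):
    c_t = rng.standard_normal(d)               # time-varying linear objective c_t . x
    a_t = rng.random(d)                        # stochastic constraint a_t . x <= b
    grad = c_t + Q * a_t                       # gradient of the per-round Lagrangian
    x = np.clip(x - eta * grad, 0.0, 1.0)      # projected primal step
    Q = max(Q + a_t @ x - b, 0.0)              # queue grows with violation, drains with slack
    avg_cost += (c_t @ x) / T
    avg_viol += (a_t @ x - b) / T
print(avg_cost, avg_viol)                      # time-averaged cost and constraint violation
```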