Acceleration methods

A d'Aspremont, D Scieur, A Taylor - Foundations and Trends® …, 2021 - nowpublishers.com
This monograph covers some recent advances in a range of acceleration techniques
frequently used in convex optimization. We first use quadratic optimization problems to …
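A minimal sketch, not taken from the monograph, of one acceleration technique it covers: Nesterov's accelerated gradient method applied to a convex quadratic (the setting mentioned in the snippet). The quadratic instance, iteration count, and function names below are assumptions for illustration.

```python
import numpy as np

# Nesterov's accelerated gradient method on the convex quadratic
# f(x) = 0.5 * x^T A x - b^T x, with A symmetric positive semidefinite.
# The stepsize 1/L uses the smoothness constant L = lambda_max(A).
def nesterov_accelerated_gradient(A, b, x0, n_iters=200):
    L = np.linalg.eigvalsh(A).max()                       # smoothness constant of f
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iters):
        grad = A @ y - b                                   # gradient at the extrapolated point
        x_next = y - grad / L                              # gradient step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # momentum extrapolation
        x, t = x_next, t_next
    return x

# Usage on a random well-conditioned quadratic (hypothetical data):
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M.T @ M + np.eye(20)
b = rng.standard_normal(20)
x_hat = nesterov_accelerated_gradient(A, b, np.zeros(20))
```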

Extragradient method: O (1/k) last-iterate convergence for monotone variational inequalities and connections with cocoercivity

E Gorbunov, N Loizou, G Gidel - … Conference on Artificial …, 2022 - proceedings.mlr.press
The extragradient method (EG) (Korpelevich, 1976) is one of the most popular methods
for solving saddle point and variational inequality problems (VIPs). Despite its long history …
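A minimal sketch of the extragradient update discussed in this entry, applied to a monotone operator; the bilinear saddle-point operator, stepsize rule, and iteration count below are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Extragradient (EG) iteration for a monotone operator F:
#   z_half = z - gamma * F(z)        (extrapolation / look-ahead step)
#   z_new  = z - gamma * F(z_half)   (update using the look-ahead operator value)
def extragradient(F, z0, gamma=0.1, n_iters=500):
    z = z0.copy()
    for _ in range(n_iters):
        z_half = z - gamma * F(z)
        z = z - gamma * F(z_half)
    return z

# Example operator: bilinear saddle problem min_x max_y x^T B y,
# whose (monotone) vector field is F(x, y) = (B y, -B^T x).
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))

def F(z):
    x, y = z[:5], z[5:]
    return np.concatenate([B @ y, -B.T @ x])

gamma = 0.5 / np.linalg.norm(B, 2)   # stepsize below 1 / Lipschitz constant of F
z_approx = extragradient(F, rng.standard_normal(10), gamma=gamma)
```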

Finite-time last-iterate convergence for learning in multi-player games

Y Cai, A Oikonomou, W Zheng - Advances in Neural …, 2022 - proceedings.neurips.cc
We study the last-iterate convergence rate of the extragradient algorithm by
Korpelevich [1976] and the optimistic gradient algorithm by Popov [1980] in multi-player …

Sublinear convergence rates of extragradient-type methods: A survey on classical and recent developments

Q Tran-Dinh - arXiv preprint arXiv:2303.17192, 2023 - arxiv.org
The extragradient method (EG), introduced by G. M. Korpelevich in 1976, is a well-known approach for
approximating solutions of saddle-point problems and their extensions such as variational …

Last-iterate convergence of optimistic gradient method for monotone variational inequalities

E Gorbunov, A Taylor, G Gidel - Advances in neural …, 2022 - proceedings.neurips.cc
The Past Extragradient (PEG) method [Popov, 1980], also known as the Optimistic
Gradient method, has recently gained renewed interest in the optimization community with the …
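A minimal sketch of the Past Extragradient / optimistic gradient update named in this entry; in contrast with EG it reuses the previous operator evaluation, so each iteration needs only one call to F. The stepsize and initialization choices are assumptions for illustration.

```python
# Past Extragradient (PEG) / optimistic gradient iteration for an operator F:
#   z_half = z - gamma * F(previous z_half)   (look-ahead reusing the past value)
#   z_new  = z - gamma * F(z_half)            (one fresh operator evaluation per step)
def optimistic_gradient(F, z0, gamma=0.1, n_iters=500):
    z = z0.copy()
    F_past = F(z)                  # stands in for the operator value at the (nonexistent) previous look-ahead point
    for _ in range(n_iters):
        z_half = z - gamma * F_past
        F_past = F(z_half)         # the only new operator evaluation this iteration
        z = z - gamma * F_past
    return z
```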

Convergence of proximal point and extragradient-based methods beyond monotonicity: the case of negative comonotonicity

E Gorbunov, A Taylor, S Horváth… - … on Machine Learning, 2023 - proceedings.mlr.press
Algorithms for min-max optimization and variational inequalities are often studied under
monotonicity assumptions. Motivated by non-monotone machine learning applications, we …

Provably faster gradient descent via long steps

B Grimmer - SIAM Journal on Optimization, 2024 - SIAM
This work establishes new convergence guarantees for gradient descent in smooth convex
optimization via a computer-assisted analysis technique. Our theory allows nonconstant …
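A minimal sketch of gradient descent with a nonconstant, periodically repeated stepsize pattern, the general shape of schedule the snippet alludes to; the particular pattern below is a placeholder assumption, not the certified long-step schedule from the paper.

```python
# Gradient descent with a repeated nonconstant stepsize pattern h_i / L,
# where some steps exceed the classical constant-stepsize range (0, 2/L).
# The pattern (1, 1, 3) is purely illustrative.
def gradient_descent_long_steps(grad, x0, L, pattern=(1.0, 1.0, 3.0), n_cycles=100):
    x = x0.copy()
    for _ in range(n_cycles):
        for h in pattern:
            x = x - (h / L) * grad(x)
    return x
```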

Proximal splitting algorithms for convex optimization: A tour of recent advances, with new twists

L Condat, D Kitahara, A Contreras, A Hirabayashi - SIAM Review, 2023 - SIAM
Convex nonsmooth optimization problems, whose solutions live in very high dimensional
spaces, have become ubiquitous. To solve them, the class of first-order algorithms known as …
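A minimal sketch of one member of the proximal splitting family surveyed here, the forward-backward (proximal gradient) iteration, applied to l1-regularized least squares; the data, regularization weight, and iteration count are assumptions for illustration, and the survey treats many more general splitting schemes.

```python
import numpy as np

# Forward-backward (proximal gradient) splitting for
#   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1,
# alternating a gradient step on the smooth term with the proximal
# operator of the l1 norm (soft-thresholding).
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam=0.1, n_iters=300):
    L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                        # forward (gradient) step
        x = soft_threshold(x - grad / L, lam / L)       # backward (proximal) step
    return x
```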

Accelerated proximal point method for maximally monotone operators

D Kim - Mathematical Programming, 2021 - Springer
This paper proposes an accelerated proximal point method for maximally monotone
operators. The proof is computer-assisted via the performance estimation problem …
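A minimal sketch of the underlying (unaccelerated) proximal point iteration for a maximally monotone operator, written for a linear operator so the resolvent is an explicit linear solve; the paper's accelerated variant and its computer-assisted proof are not reproduced here, and the operator below is an assumption for illustration.

```python
import numpy as np

# Proximal point iteration x_{k+1} = (I + lam * A)^{-1} x_k for the linear
# operator A x = M x (monotone when M + M^T is positive semidefinite);
# the iterates approach a zero of A.
def proximal_point_linear(M, x0, lam=1.0, n_iters=100):
    n = M.shape[0]
    x = x0.copy()
    for _ in range(n_iters):
        x = np.linalg.solve(np.eye(n) + lam * M, x)     # resolvent (backward) step
    return x
```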

PEPit: computer-assisted worst-case analyses of first-order optimization methods in Python

B Goujaud, C Moucer, F Glineur, JM Hendrickx… - Mathematical …, 2024 - Springer
PEPit is a Python package that simplifies access to worst-case analyses of a large
family of first-order optimization methods possibly involving gradient, projection, proximal, or …
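A hedged sketch following PEPit's documented usage pattern: bounding the worst case of one gradient step on an L-smooth convex function. Class and method names follow the package documentation, but exact signatures and defaults may differ across versions, and a convex solver (installed via CVXPY) is needed for problem.solve() to run.

```python
from PEPit import PEP
from PEPit.functions import SmoothConvexFunction

# Worst-case analysis of one gradient step x1 = x0 - gamma * f'(x0)
# over all L-smooth convex f and all x0 with ||x0 - x*||^2 <= 1.
L, gamma = 1.0, 1.0 / 1.0          # smoothness constant and stepsize (assumed values)

problem = PEP()
func = problem.declare_function(SmoothConvexFunction, L=L)

xs = func.stationary_point()       # a minimizer x*
x0 = problem.set_initial_point()   # the starting point
problem.set_initial_condition((x0 - xs) ** 2 <= 1)

x1 = x0 - gamma * func.gradient(x0)
problem.set_performance_metric(func(x1) - func(xs))

worst_case = problem.solve(verbose=0)   # worst-case value of f(x1) - f(x*)
print(worst_case)
```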