Acceleration methods
This monograph covers some recent advances in a range of acceleration techniques
frequently used in convex optimization. We first use quadratic optimization problems to …
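As a rough illustration of the kind of acceleration the monograph introduces on quadratic problems (this sketch is not taken from the monograph; the matrix A, vector b, and the 1/L stepsize are arbitrary illustrative choices), the snippet below compares plain gradient descent with Nesterov's accelerated gradient on a strongly convex quadratic.

```python
# Illustrative sketch only: gradient descent vs. Nesterov's accelerated gradient
# on the quadratic f(x) = 0.5 * x^T A x - b^T x, with gradient A x - b.
import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M.T @ M + 1e-2 * np.eye(n)        # positive definite Hessian (arbitrary choice)
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)        # exact minimizer, used only to track the error

L = np.linalg.eigvalsh(A)[-1]         # smoothness constant = largest eigenvalue
grad = lambda x: A @ x - b

def gradient_descent(x0, steps):
    x = x0.copy()
    for _ in range(steps):
        x = x - grad(x) / L           # plain gradient step with stepsize 1/L
    return x

def nesterov(x0, steps):
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(steps):
        x_next = y - grad(y) / L                      # gradient step at the extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2
        y = x_next + (t - 1) / t_next * (x_next - x)  # momentum / extrapolation step
        x, t = x_next, t_next
    return x

x0 = np.zeros(n)
print("GD error:      ", np.linalg.norm(gradient_descent(x0, 200) - x_star))
print("Nesterov error:", np.linalg.norm(nesterov(x0, 200) - x_star))
```

The gap between the two methods is modest on well-conditioned problems and grows with the condition number of A.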
Extragradient method: O(1/k) last-iterate convergence for monotone variational inequalities and connections with cocoercivity
The Extragradient method (EG) (Korpelevich, 1976) is one of the most popular methods
for solving saddle-point and variational inequality problems (VIPs). Despite its long history …
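As a hedged sketch of the method this paper analyzes (the bilinear operator, dimension, and stepsize below are illustrative choices, not taken from the paper), here is the extragradient update applied to the monotone operator of a bilinear saddle-point problem, a setting where plain simultaneous gradient descent-ascent fails to converge while EG does.

```python
# Hedged sketch: the extragradient update on the monotone (skew-symmetric)
# operator F(x, y) = (B y, -B^T x) of the bilinear problem min_x max_y x^T B y.
import numpy as np

rng = np.random.default_rng(1)
d = 10
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
B = U @ np.diag(np.linspace(1.0, 2.0, d)) @ V.T   # well-conditioned coupling matrix

def F(z):
    x, y = z[:d], z[d:]
    return np.concatenate([B @ y, -B.T @ x])

gamma = 0.5 / np.linalg.norm(B, 2)                # stepsize below 1 / Lipschitz constant
z = np.concatenate([np.ones(d), np.ones(d)])
print("initial distance to the solution (0, 0):", np.linalg.norm(z))

for k in range(1000):
    z_half = z - gamma * F(z)                     # extrapolation step
    z = z - gamma * F(z_half)                     # update using the extrapolated operator value

print("final distance to the solution (0, 0):  ", np.linalg.norm(z))
```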
Finite-time last-iterate convergence for learning in multi-player games
We study the question of the last-iterate convergence rate of the extragradient algorithm by
Korpelevich [1976] and the optimistic gradient algorithm by Popov [1980] in multi-player …
Sublinear convergence rates of extragradient-type methods: A survey on classical and recent developments
Q Tran-Dinh - arXiv preprint arXiv:2303.17192, 2023 - arxiv.org
The extragradient method (EG), introduced by GM Korpelevich in 1976, is a well-known method to
approximate solutions of saddle-point problems and their extensions such as variational …
Last-iterate convergence of optimistic gradient method for monotone variational inequalities
The Past Extragradient (PEG) method [Popov, 1980], also known as the Optimistic
Gradient method, has seen a recent gain in interest in the optimization community with the …
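A hedged sketch of one common formulation of the optimistic gradient / past-extragradient update (the problem data and stepsize are illustrative, not from the paper): unlike EG, it reuses the operator value from the previous iteration, so only one fresh evaluation of F is needed per step.

```python
# Hedged sketch: the optimistic gradient update
#   z_{k+1} = z_k - 2*gamma*F(z_k) + gamma*F(z_{k-1})
# on the same kind of bilinear saddle-point operator as in the EG sketch above.
import numpy as np

rng = np.random.default_rng(2)
d = 10
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
B = U @ np.diag(np.linspace(1.0, 2.0, d)) @ V.T

def F(z):
    x, y = z[:d], z[d:]
    return np.concatenate([B @ y, -B.T @ x])

gamma = 0.2 / np.linalg.norm(B, 2)
z = np.concatenate([np.ones(d), np.ones(d)])
F_prev = F(z)                                      # value reused from the "previous" iterate

for k in range(5000):
    F_curr = F(z)                                  # the only fresh operator evaluation per step
    z = z - 2 * gamma * F_curr + gamma * F_prev    # optimistic (negative-momentum) correction
    F_prev = F_curr

print("distance to the solution (0, 0):", np.linalg.norm(z))
```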
Convergence of proximal point and extragradient-based methods beyond monotonicity: the case of negative comonotonicity
Algorithms for min-max optimization and variational inequalities are often studied under
monotonicity assumptions. Motivated by non-monotone machine learning applications, we …
Provably faster gradient descent via long steps
B Grimmer - SIAM Journal on Optimization, 2024 - SIAM
This work establishes new convergence guarantees for gradient descent in smooth convex
optimization via a computer-assisted analysis technique. Our theory allows nonconstant …
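The certified stepsize patterns in Grimmer's paper come from the computer-assisted analysis the abstract mentions and are not reproduced here. The sketch below only illustrates the general idea with a made-up periodic schedule in which every third step exceeds the classical 2/L limit, yet the full cycle still contracts on a smooth convex quadratic.

```python
# Schematic illustration only: a nonconstant, periodic stepsize schedule with
# occasional "long" steps (3/L > 2/L) for gradient descent on a convex quadratic.
# The schedule is an arbitrary example, not the certified pattern from the paper.
import numpy as np

rng = np.random.default_rng(3)
n = 40
M = rng.standard_normal((n, n))
A = M.T @ M + 1e-2 * np.eye(n)
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)
L = np.linalg.eigvalsh(A)[-1]

schedule = [1.0 / L, 1.0 / L, 3.0 / L]   # two short steps, then one long step

x = np.zeros(n)
for k in range(300):
    h = schedule[k % len(schedule)]
    x = x - h * (A @ x - b)              # gradient step with the nonconstant stepsize

print("error with periodic long-step schedule:", np.linalg.norm(x - x_star))
```

For this particular cycle, the per-cycle contraction factor (1 - hλ) products stay below 1 for every eigenvalue λ in (0, L], which is why the occasional long step does not destroy convergence on quadratics.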
Proximal splitting algorithms for convex optimization: A tour of recent advances, with new twists
Convex nonsmooth optimization problems, whose solutions live in very high-dimensional
spaces, have become ubiquitous. To solve them, the class of first-order algorithms known as …
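As a hedged example of the proximal splitting algorithms the tour surveys (the specific problem, data sizes, and regularization weight are illustrative choices, not from the paper), here is forward-backward splitting, i.e. the proximal gradient method, on an l1-regularized least-squares problem.

```python
# Hedged sketch: forward-backward (proximal gradient) splitting for
#   min_x  0.5 * ||A x - b||^2 + lam * ||x||_1
# a standard instance of the nonsmooth composite problems discussed in the tour.
import numpy as np

rng = np.random.default_rng(4)
m, n, lam = 60, 100, 0.1
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the smooth gradient
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)   # prox of t * ||.||_1

x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - b)                       # forward (gradient) step on the smooth part
    x = soft(x - g / L, lam / L)                # backward (proximal) step on the nonsmooth part

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
print("objective:", obj, " nonzeros:", int((x != 0).sum()))
```

The split handles the two terms separately: a gradient step for the smooth quadratic and a cheap closed-form proximal (soft-thresholding) step for the l1 term.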
Accelerated proximal point method for maximally monotone operators
D Kim - Mathematical Programming, 2021 - Springer
This paper proposes an accelerated proximal point method for maximally monotone
operators. The proof is computer-assisted via the performance estimation problem …
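The accelerated scheme in Kim's paper adds momentum terms derived via performance estimation that are not reproduced here. As background, the sketch below shows only the basic proximal point (resolvent) iteration on a maximally monotone affine operator, with the operator and stepsize chosen arbitrarily for illustration.

```python
# Hedged sketch: the (unaccelerated) proximal point iteration
#   x_{k+1} = (I + lam * A)^{-1} x_k
# for the maximally monotone affine operator A(x) = M x, whose unique zero is 0.
import numpy as np

rng = np.random.default_rng(5)
d = 20
S = rng.standard_normal((d, d))
M = S - S.T + 0.1 * np.eye(d)                   # skew part + small symmetric part => monotone

lam = 1.0
resolvent = np.linalg.inv(np.eye(d) + lam * M)  # resolvent of the affine operator

x = rng.standard_normal(d)
for k in range(200):
    x = resolvent @ x                           # proximal point / resolvent step

print("residual ||A x||:", np.linalg.norm(M @ x))
```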
PEPit: computer-assisted worst-case analyses of first-order optimization methods in Python
PEPit is a Python package that aims to simplify access to worst-case analyses of a large
family of first-order optimization methods, possibly involving gradient, projection, proximal, or …
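A hedged sketch of a typical PEPit-style worst-case computation, modeled on the package's documented examples (class and method names may differ across versions, and solving the underlying SDP requires a solver accessible through CVXPY): it asks for the worst-case value of f(x_N) - f(x*) after N steps of gradient descent with stepsize 1/L on an L-smooth convex function, given ||x_0 - x*|| <= 1.

```python
# Hedged sketch of a PEPit worst-case analysis; names follow the package's
# documented examples and may differ across versions.
from PEPit import PEP
from PEPit.functions import SmoothConvexFunction

L, N = 1.0, 5
problem = PEP()
f = problem.declare_function(SmoothConvexFunction, L=L)

xs = f.stationary_point()                 # an optimal point x*
x0 = problem.set_initial_point()
problem.set_initial_condition((x0 - xs) ** 2 <= 1)

x = x0
for _ in range(N):
    x = x - (1 / L) * f.gradient(x)       # the method being analyzed

problem.set_performance_metric(f(x) - f(xs))
worst_case = problem.solve()              # solves the performance-estimation SDP
print("worst-case f(x_N) - f(x*):", worst_case)
```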