Acceleration methods

A d'Aspremont, D Scieur, A Taylor - Foundations and Trends® …, 2021 - nowpublishers.com
This monograph covers some recent advances in a range of acceleration techniques
frequently used in convex optimization. We first use quadratic optimization problems to …
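As a concrete reference point for where the monograph begins, here is a minimal sketch (not taken from the monograph itself) comparing gradient descent with Polyak's heavy-ball method on a strongly convex quadratic, using the classical parameter tunings for quadratics; problem sizes and the random instance are illustrative only:

import numpy as np

rng = np.random.default_rng(0)
d, mu, L = 50, 1.0, 100.0
Q = np.linalg.qr(rng.standard_normal((d, d)))[0]
A = Q @ np.diag(np.linspace(mu, L, d)) @ Q.T       # spectrum in [mu, L]

x_gd = x_hb = x_prev = rng.standard_normal(d)
alpha_gd = 2 / (mu + L)                            # optimal fixed GD step
alpha_hb = 4 / (np.sqrt(L) + np.sqrt(mu)) ** 2     # Polyak's heavy-ball tuning
beta_hb = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2

for _ in range(200):
    x_gd = x_gd - alpha_gd * (A @ x_gd)
    x_hb, x_prev = x_hb - alpha_hb * (A @ x_hb) + beta_hb * (x_hb - x_prev), x_hb

print(np.linalg.norm(x_gd), np.linalg.norm(x_hb))  # heavy ball ends far closer to 0

The per-step contraction factors, (kappa - 1)/(kappa + 1) for gradient descent versus (sqrt(kappa) - 1)/(sqrt(kappa) + 1) for heavy ball with kappa = L/mu, are the usual quantitative statement of acceleration on quadratics.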

Acceleration by Stepsize Hedging: Multi-Step Descent and the Silver Stepsize Schedule

J Altschuler, P Parrilo - Journal of the ACM, 2023 - dl.acm.org
Can we accelerate the convergence of gradient descent without changing the algorithm—
just by judiciously choosing stepsizes? Surprisingly, we show that the answer is yes. Our …
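For orientation, the schedule is built recursively around the silver ratio rho = 1 + sqrt(2). The sketch below reflects the recursive construction for the smooth convex case as I recall it from this line of work; the exact recursion and constants are an assumption to verify against the paper:

import numpy as np

RHO = 1 + np.sqrt(2)                       # the "silver ratio"

def silver_schedule(k):
    """Stepsize schedule of length 2**k - 1, in units of 1/L (recursion per my recollection)."""
    sched = [np.sqrt(2)]
    for j in range(1, k):
        sched = sched + [1 + RHO ** (j - 1)] + sched
    return sched

def gd_with_schedule(grad_f, x0, L, schedule):
    x = x0
    for h in schedule:
        x = x - (h / L) * grad_f(x)        # plain gradient descent, nonuniform steps
    return x

print(silver_schedule(3))                  # approx [1.41, 2.0, 1.41, 3.41, 1.41, 2.0, 1.41]

The point of the construction is that occasional long steps, interleaved with conservative ones, beat any constant stepsize in the worst case, without modifying gradient descent itself.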

Quasi-hyperbolic momentum and Adam for deep learning

J Ma, D Yarats - arXiv preprint arXiv:1810.06801, 2018 - arxiv.org
Momentum-based acceleration of stochastic gradient descent (SGD) is widely used in deep
learning. We propose the quasi-hyperbolic momentum algorithm (QHM) as an extremely …
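The QHM rule itself is a two-line update: the step direction is a convex combination, with weight nu, of the momentum buffer and the current stochastic gradient. A minimal sketch (the defaults nu = 0.7 and beta = 0.999 are the paper's recommendation, quoted from memory):

def qhm_step(theta, g_buf, grad, alpha=0.1, beta=0.999, nu=0.7):
    # nu = 0 recovers plain SGD; nu = 1 recovers (damped) momentum SGD.
    g_buf = beta * g_buf + (1 - beta) * grad               # EMA of gradients
    theta = theta - alpha * ((1 - nu) * grad + nu * g_buf)
    return theta, g_buf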

Understanding the role of momentum in stochastic gradient methods

I Gitman, H Lang, P Zhang… - Advances in Neural …, 2019 - proceedings.neurips.cc
The use of momentum in stochastic gradient methods has become a widespread practice in
machine learning. Different variants of momentum, including heavy-ball momentum …
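For reference, the two variants most often analyzed take the following standard forms (generic textbook statements, not this paper's notation), with stochastic gradient \nabla f_{i_t}:

\begin{aligned}
\text{heavy ball:}\quad & v_{t+1} = \beta v_t + \nabla f_{i_t}(\theta_t), & \theta_{t+1} &= \theta_t - \alpha v_{t+1},\\
\text{Nesterov:}\quad & \theta_{t+1} = y_t - \alpha \nabla f_{i_t}(y_t), & y_{t+1} &= \theta_{t+1} + \beta\,(\theta_{t+1} - \theta_t).
\end{aligned}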

From Nesterov's estimate sequence to Riemannian acceleration

K Ahn, S Sra - Conference on Learning Theory, 2020 - proceedings.mlr.press
We propose the first global accelerated gradient method for Riemannian manifolds. Toward
establishing our results, we revisit Nesterov's estimate sequence technique and develop a …
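For context, the Euclidean accelerated gradient method that estimate sequences yield in the smooth convex case takes the standard form below; the paper's Riemannian generalization is not reproduced here:

def agd(grad_f, x0, L, n_iters):
    x, x_prev = x0, x0
    for t in range(1, n_iters + 1):
        y = x + (t - 1) / (t + 2) * (x - x_prev)  # extrapolation (momentum) step
        x, x_prev = y - grad_f(y) / L, x          # gradient step taken at y
    return x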

Analysis of optimization algorithms via integral quadratic constraints: Nonstrongly convex problems

M Fazlyab, A Ribeiro, M Morari, VM Preciado - SIAM Journal on Optimization, 2018 - SIAM
In this paper, we develop a unified framework capable of certifying both exponential and
subexponential convergence rates for a wide range of iterative first-order optimization …
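The common device in this line of work is to model a first-order method as a linear time-invariant system in feedback with the gradient; a rate is then certified by a small semidefinite (LMI) feasibility problem combining this system with an integral quadratic constraint satisfied by \nabla f. Schematically, in the Lessard-Recht-Packard setup that this framework builds on (heavy ball shown as the illustration):

\xi_{k+1} = A\,\xi_k + B\,u_k, \qquad y_k = C\,\xi_k, \qquad u_k = \nabla f(y_k),

and for x_{k+1} = x_k - \alpha \nabla f(x_k) + \beta (x_k - x_{k-1}), with state \xi_k = (x_k, x_{k-1}),

A = \begin{bmatrix} 1+\beta & -\beta \\ 1 & 0 \end{bmatrix}, \qquad
B = \begin{bmatrix} -\alpha \\ 0 \end{bmatrix}, \qquad
C = \begin{bmatrix} 1 & 0 \end{bmatrix}.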

Branch-and-bound performance estimation programming: A unified methodology for constructing optimal optimization methods

S Das Gupta, BPG Van Parys, EK Ryu - Mathematical Programming, 2024 - Springer
We present the Branch-and-Bound Performance Estimation Programming (BnB-PEP), a
unified methodology for constructing optimal first-order methods for convex and nonconvex …
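The performance estimation problem (PEP) underlying this methodology casts a method's worst case as an optimization over problem instances. Schematically, for a gradient method with fixed stepsize coefficients h_{k,i} on L-smooth convex functions (a generic PEP statement, not BnB-PEP's exact formulation):

\begin{array}{ll}
\displaystyle\max_{f,\; x_0,\dots,x_N} & f(x_N) - f(x_\star) \\[2pt]
\text{s.t.} & f \in \mathcal{F}_{0,L}, \quad \|x_0 - x_\star\| \le R, \\[2pt]
& x_{k+1} = x_k - \tfrac{1}{L} \sum_{i=0}^{k} h_{k,i}\, \nabla f(x_i).
\end{array}

Optimizing additionally over the coefficients h_{k,i} themselves is what makes the problem nonconvex and motivates the branch-and-bound treatment.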

An optimal gradient method for smooth strongly convex minimization

A Taylor, Y Drori - Mathematical Programming, 2023 - Springer
We present an optimal gradient method for smooth strongly convex optimization. The
method is optimal in the sense that its worst-case bound on the distance to an optimal point …
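The paper's optimal method uses iteration-dependent coefficients that are not reproduced here; shown instead, for context, is the classical constant-momentum accelerated baseline for L-smooth, mu-strongly convex f that such optimal methods tighten:

import math

def agd_strongly_convex(grad_f, x0, L, mu, n_iters):
    # Classical baseline, not the paper's method: momentum (sqrt(kappa)-1)/(sqrt(kappa)+1).
    beta = (math.sqrt(L / mu) - 1) / (math.sqrt(L / mu) + 1)
    x = y = x0
    for _ in range(n_iters):
        x_next = y - grad_f(y) / L
        y = x_next + beta * (x_next - x)
        x = x_next
    return x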

On the fast convergence of minibatch heavy ball momentum

R Bollapragada, T Chen, R Ward - IMA Journal of Numerical …, 2024 - academic.oup.com
Simple stochastic momentum methods are widely used in machine learning optimization,
but their good practical performance is at odds with an absence of theoretical guarantees of …
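A generic minibatch heavy-ball loop of the kind analyzed, sketched on a least-squares instance (problem setup and parameter values are illustrative assumptions, not the paper's experiments):

import numpy as np

rng = np.random.default_rng(1)
n, d, batch = 1000, 20, 64
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true

w, w_prev = np.zeros(d), np.zeros(d)
alpha, beta = 0.05, 0.9
for _ in range(500):
    idx = rng.choice(n, size=batch, replace=False)   # sample a minibatch
    g = X[idx].T @ (X[idx] @ w - y[idx]) / batch     # minibatch gradient
    w, w_prev = w - alpha * g + beta * (w - w_prev), w

print(np.linalg.norm(w - w_true))                    # approaches 0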

Robust hybrid zero-order optimization algorithms with acceleration via averaging in time

JI Poveda, N Li - Automatica, 2021 - Elsevier
This paper presents a new class of robust zero-order algorithms for the solution of real-time
optimization problems with acceleration. In particular, we propose a family of extremum …
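The building block of most zero-order schemes is a gradient estimate from function evaluations alone; below is the standard two-point estimator as a sketch. The paper's actual algorithms are hybrid dynamical systems with averaging in time, which this snippet does not model:

import numpy as np

def two_point_grad(f, x, delta=1e-3, rng=None):
    # Two-point finite-difference estimate along a random unit direction;
    # the x.size factor makes it an estimate of the full (smoothed) gradient.
    if rng is None:
        rng = np.random.default_rng()
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    return x.size * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u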