Lower bounds and optimal algorithms for personalized federated learning

F Hanzely, S Hanzely, S Horváth… - Advances in Neural …, 2020 - proceedings.neurips.cc
In this work, we consider the optimization formulation of personalized federated learning
recently introduced by Hanzely & Richtárik (2020) which was shown to give an alternative …
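
For context, the personalized objective referred to here mixes each device's local model with the average model; up to notation (which is ours, and only a sketch of the cited 2020 formulation), it reads

\min_{x_1,\dots,x_n \in \mathbb{R}^d} \; \frac{1}{n}\sum_{i=1}^n f_i(x_i) \;+\; \frac{\lambda}{2n}\sum_{i=1}^n \left\| x_i - \bar{x} \right\|^2, \qquad \bar{x} := \frac{1}{n}\sum_{i=1}^n x_i,

where f_i is the loss on device i, x_i its local model, and λ ≥ 0 interpolates between purely local training (λ = 0) and a single shared global model (λ → ∞).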

Acceleration methods

A d'Aspremont, D Scieur, A Taylor - Foundations and Trends® …, 2021 - nowpublishers.com
This monograph covers some recent advances in a range of acceleration techniques
frequently used in convex optimization. We first use quadratic optimization problems to …
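
As a rough illustration of the kind of acceleration scheme such a monograph covers, here is a minimal sketch of Nesterov's accelerated gradient method for a smooth convex function; the step sizes, iteration count, and the quadratic test problem are illustrative assumptions, not taken from the monograph.

    import numpy as np

    def nesterov_agd(grad, x0, L, n_iters=200):
        """Sketch of Nesterov's accelerated gradient for an L-smooth convex f.
        grad: callable returning the gradient of f; L: smoothness constant (assumed known)."""
        x, y = x0.copy(), x0.copy()
        t = 1.0
        for _ in range(n_iters):
            x_next = y - grad(y) / L                         # gradient step from the extrapolated point
            t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2
            y = x_next + ((t - 1) / t_next) * (x_next - x)   # momentum extrapolation
            x, t = x_next, t_next
        return x

    # Illustrative quadratic example: f(x) = 0.5 * x^T A x - b^T x
    A = np.diag([1.0, 10.0, 100.0])
    b = np.ones(3)
    x_star = nesterov_agd(lambda x: A @ x - b, np.zeros(3), L=100.0)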

Stochastic distributed learning with gradient quantization and double-variance reduction

S Horváth, D Kovalev, K Mishchenko… - Optimization Methods …, 2023 - Taylor & Francis
We consider distributed optimization over several devices, each sending incremental model
updates to a central server. This setting is considered, for instance, in federated learning …
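
To make the communication pattern concrete, the sketch below shows one round of distributed SGD in which each device compresses its gradient with an unbiased random-k sparsifier before sending it to the server; this illustrates only the generic quantized-update setting, not the paper's double variance reduction, and the names rand_k, local_grads, k and lr are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def rand_k(v, k):
        """Unbiased random-k sparsification: keep k coordinates, rescale by d/k."""
        d = v.size
        idx = rng.choice(d, size=k, replace=False)
        out = np.zeros_like(v)
        out[idx] = v[idx] * d / k
        return out

    def compressed_sgd_step(x, local_grads, k, lr):
        """One round: each device compresses its local gradient, the server averages and steps."""
        msgs = [rand_k(g(x), k) for g in local_grads]
        return x - lr * np.mean(msgs, axis=0)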

Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods

N Loizou, P Richtárik - Computational Optimization and Applications, 2020 - Springer
In this paper we study several classes of stochastic optimization algorithms enriched with
heavy ball momentum. Among the methods studied are: stochastic gradient descent …
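
For reference, a minimal sketch of the simplest method in this family, stochastic gradient descent with heavy ball (Polyak) momentum, is given below; the default step size and momentum parameter are illustrative assumptions.

    import numpy as np

    def sgd_heavy_ball(stoch_grad, x0, lr=0.01, beta=0.9, n_iters=1000):
        """SGD with heavy ball momentum:
        x_{k+1} = x_k - lr * g_k + beta * (x_k - x_{k-1})."""
        x_prev = x0.copy()
        x = x0.copy()
        for _ in range(n_iters):
            g = stoch_grad(x)                        # unbiased stochastic gradient at x
            x_next = x - lr * g + beta * (x - x_prev)
            x_prev, x = x, x_next
        return x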

Don't jump through hoops and remove those loops: SVRG and Katyusha are better without the outer loop

D Kovalev, S Horváth… - Algorithmic Learning …, 2020 - proceedings.mlr.press
The stochastic variance-reduced gradient method (SVRG) and its accelerated variant
(Katyusha) have attracted enormous attention in the machine learning community in the last …
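
The loopless idea in the title can be sketched as follows: the outer loop of SVRG is replaced by a coin flip that refreshes the reference point (and its full gradient) with small probability each iteration. This is only a sketch under our own naming and default choices (e.g. p ≈ 1/n), not a faithful reproduction of the paper's algorithm.

    import numpy as np

    def l_svrg(grads, x0, lr, p=None, n_iters=1000, seed=0):
        """Sketch of loopless SVRG for f = (1/n) * sum_i f_i.
        grads: list of per-example gradient functions."""
        rng = np.random.default_rng(seed)
        n = len(grads)
        p = p if p is not None else 1.0 / n                 # common choice p ~ 1/n
        x, w = x0.copy(), x0.copy()
        full_grad_w = np.mean([g(w) for g in grads], axis=0)
        for _ in range(n_iters):
            i = rng.integers(n)
            g = grads[i](x) - grads[i](w) + full_grad_w     # variance-reduced gradient estimator
            x = x - lr * g
            if rng.random() < p:                            # occasional reference-point refresh
                w = x.copy()
                full_grad_w = np.mean([gj(w) for gj in grads], axis=0)
        return x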

Convex optimization algorithms in medical image reconstruction—in the age of AI

J Xu, F Noo - Physics in Medicine & Biology, 2022 - iopscience.iop.org
The past decade has seen the rapid growth of model-based image reconstruction (MBIR)
algorithms, which are often applications or adaptations of convex optimization algorithms …

Sharper rates for separable minimax and finite sum optimization via primal-dual extragradient methods

Y Jin, A Sidford, K Tian - Conference on Learning Theory, 2022 - proceedings.mlr.press
We design accelerated algorithms with improved rates for several fundamental classes of
optimization problems. Our algorithms all build upon techniques related to the analysis of …

Optimal decentralized distributed algorithms for stochastic convex optimization

E Gorbunov, D Dvinskikh, A Gasnikov - arXiv preprint arXiv:1911.07363, 2019 - arxiv.org
We consider stochastic convex optimization problems with affine constraints and develop
several methods, using either a primal or a dual approach, to solve them. In the primal case, we use …

On the fast convergence of minibatch heavy ball momentum

R Bollapragada, T Chen, R Ward - IMA Journal of Numerical …, 2024 - academic.oup.com
Simple stochastic momentum methods are widely used in machine learning optimization,
but their good practical performance is at odds with an absence of theoretical guarantees of …

Stochastic first-order methods: non-asymptotic and computer-aided analyses via potential functions

A Taylor, F Bach - Conference on Learning Theory, 2019 - proceedings.mlr.press
We provide a novel computer-assisted technique for systematically analyzing first-order
methods for optimization. In contrast with previous works, the approach is particularly suited …