Federated learning of a mixture of global and local models

F Hanzely, P Richtárik - arXiv preprint arXiv:2002.05516, 2020 - arxiv.org
We propose a new optimization formulation for training federated learning models. The
standard formulation has the form of an empirical risk minimization problem constructed to …
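
A sketch of the mixed global/local formulation this paper introduces, as I recall it from the full text (the penalty parameter λ and the averaging structure are not in the snippet above, so treat this as a reconstruction, not a quote):

```latex
% Personalized FL objective mixing local models x_i with their average:
% lambda = 0 gives purely local training; lambda -> infinity recovers
% the usual single global model.
\begin{equation}
  \min_{x_1,\dots,x_m \in \mathbb{R}^d}\;
  \frac{1}{m}\sum_{i=1}^{m} f_i(x_i)
  \;+\; \frac{\lambda}{2m}\sum_{i=1}^{m} \lVert x_i - \bar{x} \rVert^2,
  \qquad
  \bar{x} := \frac{1}{m}\sum_{i=1}^{m} x_i .
\end{equation}
```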

Lower bounds and optimal algorithms for personalized federated learning

F Hanzely, S Hanzely, S Horváth… - Advances in Neural …, 2020 - proceedings.neurips.cc
In this work, we consider the optimization formulation of personalized federated learning
recently introduced by Hanzely & Richtárik (2020), which was shown to give an alternative …

Stochastic optimization with heavy-tailed noise via accelerated gradient clipping

E Gorbunov, M Danilova… - Advances in Neural …, 2020 - proceedings.neurips.cc
In this paper, we propose a new accelerated stochastic first-order method called clipped-SSTM for smooth convex stochastic optimization with heavy-tailed distributed noise in …
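
The central device in this line of work is norm clipping of the (mini-batched) stochastic gradient; a minimal Python sketch, with the name `clip` and the level `lam` as illustrative notation rather than the paper's:

```python
import numpy as np

def clip(g: np.ndarray, lam: float) -> np.ndarray:
    """Rescale g so its Euclidean norm is at most lam:
    clip(g, lam) = min(1, lam / ||g||) * g.
    Clipping bounds the effect of heavy-tailed noise on any single step."""
    norm = np.linalg.norm(g)
    return g if norm <= lam else (lam / norm) * g
```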

A dual approach for optimal algorithms in distributed optimization over networks

CA Uribe, S Lee, A Gasnikov… - 2020 Information theory …, 2020 - ieeexplore.ieee.org
We study dual-based algorithms for distributed convex optimization problems over networks,
where the objective is to minimize a sum $\sum_{i=1}^{m} f_i(z)$ of functions over a network. We …
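
Dual-based methods usually proceed from the consensus reformulation of this sum; a sketch under standard assumptions (the formulation below is my reconstruction of the setup, not a quote from the paper):

```latex
% Each node i keeps a local copy z_i; consensus constraints tie the
% copies together, and the smooth dual of this constrained problem is
% what accelerated gradient methods are applied to.
\begin{equation}
  \min_{z_1,\dots,z_m}\; \sum_{i=1}^{m} f_i(z_i)
  \quad \text{s.t.} \quad z_1 = z_2 = \dots = z_m .
\end{equation}
```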

The power of first-order smooth optimization for black-box non-smooth problems

A Gasnikov, A Novitskii, V Novitskii… - arXiv preprint arXiv …, 2022 - arxiv.org
Gradient-free/zeroth-order methods for black-box convex optimization have been
extensively studied in the last decade with the main focus on oracle call complexity. In this …
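
The usual bridge from first-order methods to black-box non-smooth problems is randomized smoothing; a sketch of the standard construction (notation mine):

```latex
% Smooth the non-smooth f by averaging over a random ball perturbation;
% for M-Lipschitz f this changes function values by at most tau * M,
% and f_tau admits cheap unbiased zeroth-order gradient estimators.
\begin{equation}
  f_\tau(x) := \mathbb{E}_{u \sim \mathrm{Unif}(B_2^d)}\bigl[f(x + \tau u)\bigr].
\end{equation}
```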

Optimal gradient sliding and its application to optimal distributed optimization under similarity

D Kovalev, A Beznosikov, E Borodich… - Advances in …, 2022 - proceedings.neurips.cc
We study structured convex optimization problems, with additive objective $r := p + q$,
where $r$ is ($\mu$-strongly) convex, $q$ is $L_q$-smooth and convex, and $p$ is …
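
The payoff of sliding in this setting is that the two components are queried at their own optimal rates; as far as I recall the guarantees (a sketch from memory, assuming $p$ is $L_p$-smooth, up to logarithmic factors):

```latex
% Separated oracle complexities for the mu-strongly convex composite
% problem r = p + q (recalled from the paper, not from the snippet):
\begin{align}
  \#\{\text{calls to } \nabla p\} &= O\bigl(\sqrt{L_p/\mu}\,\log(1/\varepsilon)\bigr),\\
  \#\{\text{calls to } \nabla q\} &= O\bigl(\sqrt{L_q/\mu}\,\log(1/\varepsilon)\bigr).
\end{align}
```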

Decentralized distributed optimization for saddle point problems

A Rogozin, A Beznosikov, D Dvinskikh… - arXiv preprint arXiv …, 2021 - arxiv.org
We consider distributed convex-concave saddle point problems over arbitrary connected
undirected networks and propose a decentralized distributed algorithm for their solution. The …
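
The problem class, in its simplest form (my notation, inferred from the abstract):

```latex
% Convex-concave saddle point problem distributed over m nodes of a
% connected undirected network; node i only accesses its own f_i.
\begin{equation}
  \min_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}}\;
  \frac{1}{m}\sum_{i=1}^{m} f_i(x, y).
\end{equation}
```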

Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems

D Dvinskikh, A Gasnikov - Journal of Inverse and Ill-posed Problems, 2021 - degruyter.com
We introduce primal and dual stochastic gradient oracle methods for decentralized convex
optimization problems. For both primal and dual oracles, the proposed methods are optimal …

Recent theoretical advances in decentralized distributed convex optimization

E Gorbunov, A Rogozin, A Beznosikov… - … and Probability: With a …, 2022 - Springer
In the last few years, the theory of decentralized distributed convex optimization has made
significant progress. The lower bounds on communication rounds and oracle calls have …

Randomized gradient-free methods in convex optimization

A Gasnikov, D Dvinskikh, P Dvurechensky… - Encyclopedia of …, 2023 - Springer
Consider a convex optimization problem $\min_{x \in Q \subseteq \mathbb{R}^d} f(x)$ (1) with convex feasible set $Q$
and convex objective $f$ possessing the zeroth-order (gradient/derivative-free) oracle [83]. The …
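
A minimal sketch of the classical two-point randomized estimator such gradient-free methods build on (function and parameter names are mine; the estimator itself is standard):

```python
import numpy as np

def two_point_grad_estimate(f, x: np.ndarray, tau: float, rng=None) -> np.ndarray:
    """Two zeroth-order oracle calls give an unbiased estimate of the
    gradient of the smoothed function f_tau(x) = E[f(x + tau*u)],
    u uniform in the unit ball:
        (d / (2*tau)) * (f(x + tau*e) - f(x - tau*e)) * e,
    with e drawn uniformly from the unit sphere."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)  # normalize a Gaussian draw to the unit sphere
    return (d / (2.0 * tau)) * (f(x + tau * e) - f(x - tau * e)) * e
```

For example, with f = lambda z: np.abs(z).sum() and a small tau, averaging repeated draws approximates a (smoothed) subgradient direction of the l1 norm.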