Federated learning of a mixture of global and local models
F Hanzely, P Richtárik - arXiv preprint arXiv:2002.05516, 2020 - arxiv.org
We propose a new optimization formulation for training federated learning models. The
standard formulation has the form of an empirical risk minimization problem constructed to …
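This mixture formulation is often described as averaging the clients' local risks while penalizing each local model for deviating from the mean model. A minimal numpy sketch under that reading (the quadratic local losses and all names here are illustrative, not the paper's code):

```python
import numpy as np

def mixture_objective(X, A_list, b_list, lam):
    """Mean of local risks f_i(x_i) = 0.5 * ||A_i x_i - b_i||^2 plus a
    penalty lam/(2m) * sum_i ||x_i - x_bar||^2 pulling each local model
    x_i (row i of X) toward the average model x_bar."""
    m = len(X)
    x_bar = X.mean(axis=0)
    risk = sum(0.5 * np.sum((A @ x - b) ** 2)
               for x, A, b in zip(X, A_list, b_list)) / m
    penalty = lam / (2 * m) * sum(np.sum((x - x_bar) ** 2) for x in X)
    return risk + penalty

def gradient_step(X, A_list, b_list, lam, lr):
    """One full gradient step; the penalty gradient w.r.t. x_i simplifies
    to (lam/m) * (x_i - x_bar) because the deviations sum to zero."""
    m = len(X)
    x_bar = X.mean(axis=0)
    G = np.stack([(A.T @ (A @ x - b) + lam * (x - x_bar)) / m
                  for x, A, b in zip(X, A_list, b_list)])
    return X - lr * G
```

Small lam keeps the models nearly local; large lam forces them toward a shared global model.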
Lower bounds and optimal algorithms for personalized federated learning
In this work, we consider the optimization formulation of personalized federated learning
recently introduced by Hanzely & Richtárik (2020), which was shown to give an alternative …
Stochastic optimization with heavy-tailed noise via accelerated gradient clipping
E Gorbunov, M Danilova… - Advances in Neural …, 2020 - proceedings.neurips.cc
In this paper, we propose a new accelerated stochastic first-order method called clipped-
SSTM for smooth convex stochastic optimization with heavy-tailed distributed noise in …
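Clipped-SSTM combines Nesterov-type acceleration with gradient clipping; the clipping operator itself, which is what tames heavy-tailed noise, can be sketched as follows (plain clipped SGD here, not the accelerated method from the paper):

```python
import numpy as np

def clip(g, lam):
    """Scale the stochastic gradient g so its norm is at most lam
    (the clipping level); leave it unchanged if already small enough."""
    norm = np.linalg.norm(g)
    return g if norm <= lam else g * (lam / norm)

def clipped_sgd(grad_oracle, x0, lr, lam, steps):
    """SGD where every stochastic gradient is clipped before the step,
    so a single heavy-tailed noise sample cannot derail the iterate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * clip(grad_oracle(x), lam)
    return x
```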
A dual approach for optimal algorithms in distributed optimization over networks
We study dual-based algorithms for distributed convex optimization problems over networks,
where the objective is to minimize a sum Σ_{i=1}^m f_i(z) of functions distributed over a network. We …
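The dual-based algorithms studied in the paper are more sophisticated, but the basic primal baseline for this problem, decentralized gradient descent over a doubly stochastic gossip matrix W, can be sketched as (an illustrative baseline, not the paper's method):

```python
import numpy as np

def decentralized_gd(grads, W, X0, lr, steps):
    """Decentralized gradient descent for min_z sum_i f_i(z): each node
    first averages its iterate with its neighbors' via the mixing matrix W
    (communication), then takes a local gradient step on its own f_i."""
    X = np.asarray(X0, dtype=float)
    for _ in range(steps):
        X = W @ X  # gossip: neighbor averaging drives consensus
        X = X - lr * np.stack([g(x) for g, x in zip(grads, X)])
    return X
```

With a constant step size the nodes reach consensus only up to an O(lr) neighborhood of the minimizer, which is one motivation for the dual-based schemes the paper studies.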
The power of first-order smooth optimization for black-box non-smooth problems
A Gasnikov, A Novitskii, V Novitskii… - arXiv preprint arXiv …, 2022 - arxiv.org
Gradient-free/zeroth-order methods for black-box convex optimization have been
extensively studied in the last decade with the main focus on oracle calls complexity. In this …
Optimal gradient sliding and its application to optimal distributed optimization under similarity
We study structured convex optimization problems, with additive objective $r := p + q$,
where $r$ is ($\mu$-strongly) convex, $q$ is $L_q$-smooth and convex, and $p$ is …
Decentralized distributed optimization for saddle point problems
A Rogozin, A Beznosikov, D Dvinskikh… - arXiv preprint arXiv …, 2021 - arxiv.org
We consider distributed convex-concave saddle point problems over arbitrary connected
undirected networks and propose a decentralized distributed algorithm for their solution. The …
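The decentralized algorithm in the paper is more involved, but a standard centralized building block for smooth convex-concave saddle point problems is the extragradient step, sketched here on a toy bilinear example (illustrative only, not the paper's method):

```python
def extragradient(grad_x, grad_y, x, y, lr, steps):
    """Extragradient for min_x max_y f(x, y): take a prediction half-step,
    then update using gradients evaluated at the predicted point. Unlike
    plain gradient descent-ascent, this converges on bilinear problems."""
    for _ in range(steps):
        xh = x - lr * grad_x(x, y)  # predict
        yh = y + lr * grad_y(x, y)
        x = x - lr * grad_x(xh, yh)  # correct at the predicted point
        y = y + lr * grad_y(xh, yh)
    return x, y
```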
Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
D Dvinskikh, A Gasnikov - Journal of Inverse and Ill-posed Problems, 2021 - degruyter.com
We introduce primal and dual stochastic gradient oracle methods for decentralized convex
optimization problems. Both for primal and dual oracles, the proposed methods are optimal …
Recent theoretical advances in decentralized distributed convex optimization
E Gorbunov, A Rogozin, A Beznosikov… - … and Probability: With a …, 2022 - Springer
In the last few years, the theory of decentralized distributed convex optimization has made
significant progress. The lower bounds on communications rounds and oracle calls have …
Randomized gradient-free methods in convex optimization
Consider a convex optimization problem min_{x ∈ Q ⊆ R^d} f(x) (1) with convex feasible set Q
and convex objective f possessing a zeroth-order (gradient/derivative-free) oracle [83]. The …
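A standard primitive behind such randomized gradient-free methods is the two-point estimator: approximate the gradient from two function evaluations along a random direction, then run gradient descent on the estimates (a generic sketch, not any specific method from the survey):

```python
import numpy as np

def two_point_grad(f, x, h, rng):
    """Two-point zeroth-order estimate of grad f(x): a finite difference of
    f along a random unit direction e, rescaled by the dimension d so the
    estimate is (approximately) unbiased."""
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)
    return x.size * (f(x + h * e) - f(x - h * e)) / (2 * h) * e

def zo_gradient_descent(f, x0, lr, h, steps, seed=0):
    """Gradient descent driven only by function values (zeroth-order oracle)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * two_point_grad(f, x, h, rng)
    return x
```

Each iteration costs two oracle calls; the price of not seeing gradients shows up as an extra dimension factor in the oracle-call complexity, which is the quantity such surveys track.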