The power of first-order smooth optimization for black-box non-smooth problems
A Gasnikov, A Novitskii, V Novitskii… - arXiv preprint arXiv …, 2022 - arxiv.org
Gradient-free/zeroth-order methods for black-box convex optimization have been
extensively studied in the last decade, with the main focus on oracle-call complexity. In this …
DADAO: Decoupled accelerated decentralized asynchronous optimization
This work introduces DADAO: the first decentralized, accelerated, asynchronous, primal, first-order algorithm to minimize a sum of $L$-smooth and $\mu$-strongly convex functions …
Decentralized distributed optimization for saddle point problems
A Rogozin, A Beznosikov, D Dvinskikh… - arXiv preprint arXiv …, 2021 - arxiv.org
We consider distributed convex-concave saddle point problems over arbitrary connected
undirected networks and propose a decentralized distributed algorithm for their solution. The …
ADOM: accelerated decentralized optimization method for time-varying networks
We propose ADOM, an accelerated method for smooth and strongly convex decentralized optimization over time-varying networks. ADOM uses a dual oracle, i.e., we assume access to …
optimization over time-varying networks. ADOM uses a dual oracle, ie, we assume access to …
Randomized gradient-free methods in convex optimization
Consider a convex optimization problem min_{x ∈ Q ⊆ ℝ^d} f(x) (1) with convex feasible set Q and convex objective f possessing a zeroth-order (gradient/derivative-free) oracle [83]. The …
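The zeroth-order oracle mentioned above returns only function values, never gradients. A minimal sketch of the classic two-point random-direction estimator used in this literature (the function names, step size, and toy objective are illustrative, not taken from the paper):

```python
import numpy as np

def zeroth_order_grad(f, x, h=1e-4, rng=None):
    """Two-point gradient estimate: only function values, no derivatives.

    Samples a random unit direction e and returns
    d * (f(x + h e) - f(x - h e)) / (2 h) * e,
    an unbiased estimate of the gradient in expectation
    (for smooth f, up to O(h^2) smoothing bias).
    """
    rng = np.random.default_rng() if rng is None else rng
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)          # random unit direction
    d = x.size
    return d * (f(x + h * e) - f(x - h * e)) / (2.0 * h) * e

def gradient_free_descent(f, x0, steps=2000, lr=0.05, seed=0):
    """Plain gradient descent driven by the zeroth-order estimator."""
    x = x0.copy()
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        x = x - lr * zeroth_order_grad(f, x, rng=rng)
    return x

# toy unconstrained instance of (1): f(x) = ||x - 1||^2, minimized at the all-ones vector
f = lambda x: float(np.sum((x - 1.0) ** 2))
x_star = gradient_free_descent(f, np.zeros(3))
```

The factor d in the estimator compensates for averaging over random directions, so the expected update matches a true gradient step; the price is higher variance, which drives the oracle-call complexity these works analyze.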
Acceleration in distributed optimization under similarity
We study distributed (strongly convex) optimization problems over a network of agents, with
no centralized nodes. The loss functions of the agents are assumed to be similar, due to …
Is consensus acceleration possible in decentralized optimization over slowly time-varying networks?
We consider decentralized optimization problems where one aims to minimize a sum of
convex smooth objective functions distributed between nodes in the network. The links in the …
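The setting described here, minimizing a sum of local objectives with only neighbor-to-neighbor communication, is often illustrated with plain decentralized gradient descent (DGD). A toy sketch, not the accelerated methods of the papers above; the Metropolis weights, step size, and local objectives are assumptions for illustration:

```python
import numpy as np

# Illustrative DGD on a 3-node path graph. The mixing matrix W uses
# Metropolis weights and is doubly stochastic; node i holds the local
# objective f_i(x) = (x - b_i)**2, so the global minimizer is b.mean() = 1.0.
b = np.array([0.0, 1.0, 2.0])
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])

def dgd_step(X, grads, W, lr):
    """One DGD iteration: gossip-average with W, then a local gradient step."""
    return W @ X - lr * grads

X = np.zeros(3)                      # each node's current estimate
for _ in range(500):
    grads = 2.0 * (X - b)            # local gradients of f_i
    X = dgd_step(X, grads, W, lr=0.05)
# X.mean() approaches the global minimizer; individual nodes keep an O(lr) bias
```

With a constant step size the nodes reach only a neighborhood of consensus; removing that bias, and speeding up the consensus itself, is exactly what the accelerated and time-varying-network methods in this list address.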
Decentralized saddle-point problems with different constants of strong convexity and strong concavity
D Metelev, A Rogozin, A Gasnikov… - Computational …, 2024 - Springer
Large-scale saddle-point problems arise in such machine learning tasks as GANs and linear
models with affine constraints. In this paper, we study distributed saddle-point problems with …
First-order methods for convex optimization
First-order methods for solving convex optimization problems have been at the forefront of
mathematical optimization in the last 20 years. The rapid development of this important class …
Newton method over networks is fast up to the statistical precision
A Daneshmand, G Scutari… - International …, 2021 - proceedings.mlr.press
We propose a distributed cubic regularization of the Newton method for solving
(constrained) empirical risk minimization problems over a network of agents, modeled as …
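Cubic regularization of Newton's method replaces the quadratic Newton model with a cubically regularized one whose minimizer is always well defined. A one-dimensional sketch in the spirit of Nesterov–Polyak cubic regularization; the distributed aspects of the paper are not modeled, and the test function and parameter M are illustrative:

```python
import numpy as np

def cubic_newton_step(g, H, M):
    """Exact minimizer of the 1-D cubic model m(s) = g s + H s^2/2 + M |s|^3/6.

    The minimizer points opposite to g; its magnitude t >= 0 solves
    M t^2 / 2 + H t - |g| = 0, giving the positive root below.
    """
    t = (-H + np.sqrt(H**2 + 2.0 * M * abs(g))) / M
    return -np.sign(g) * t

def cubic_newton(df, d2f, x0, M=1.0, iters=20):
    """Cubic-regularized Newton iteration in one dimension."""
    x = x0
    for _ in range(iters):
        x += cubic_newton_step(df(x), d2f(x), M)
    return x

# illustrative run on f(x) = x^4 (df = 4 x^3, d2f = 12 x^2), started at x0 = 1.0
x_min = cubic_newton(lambda x: 4.0 * x**3, lambda x: 12.0 * x**2, x0=1.0)
```

Unlike the plain Newton step, the cubic step remains bounded and well defined even when the Hessian is degenerate (here d2f vanishes at the minimizer), which is the property that makes the method attractive for the empirical-risk problems the paper targets.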