The power of first-order smooth optimization for black-box non-smooth problems

A Gasnikov, A Novitskii, V Novitskii… - arXiv preprint arXiv …, 2022 - arxiv.org
Gradient-free/zeroth-order methods for black-box convex optimization have been
extensively studied in the last decade, with the main focus on oracle call complexity. In this …

DADAO: Decoupled accelerated decentralized asynchronous optimization

A Nabli, E Oyallon - International Conference on Machine …, 2023 - proceedings.mlr.press
This work introduces DADAO: the first decentralized, accelerated, asynchronous, primal,
first-order algorithm to minimize a sum of $L$-smooth and $\mu$-strongly convex functions …
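For context, the problem class named in this abstract can be written in the standard decentralized form (generic notation, not necessarily the paper's):

$$\min_{x\in\mathbb{R}^d} \; \sum_{i=1}^{n} f_i(x),$$

where each $f_i$ is $L$-smooth and $\mu$-strongly convex and is held by a single agent that communicates only with its neighbours.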

Decentralized distributed optimization for saddle point problems

A Rogozin, A Beznosikov, D Dvinskikh… - arXiv preprint arXiv …, 2021 - arxiv.org
We consider distributed convex-concave saddle point problems over arbitrary connected
undirected networks and propose a decentralized distributed algorithm for their solution. The …
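A generic statement of the problem class studied here, in notation that is assumed rather than taken from the paper:

$$\min_{x\in\mathcal{X}} \; \max_{y\in\mathcal{Y}} \; \frac{1}{n}\sum_{i=1}^{n} f_i(x, y),$$

with each $f_i$ convex in $x$, concave in $y$, and stored at node $i$ of a connected undirected network.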

ADOM: accelerated decentralized optimization method for time-varying networks

D Kovalev, E Shulgin, P Richtárik… - International …, 2021 - proceedings.mlr.press
We propose ADOM–an accelerated method for smooth and strongly convex decentralized
optimization over time-varying networks. ADOM uses a dual oracle, i.e., we assume access to …

Randomized gradient-free methods in convex optimization

A Gasnikov, D Dvinskikh, P Dvurechensky… - Encyclopedia of …, 2023 - Springer
Consider a convex optimization problem $\min_{x\in Q\subseteq \mathbb{R}^d} f(x)$ (1) with convex feasible set Q
and convex objective f possessing a zeroth-order (gradient/derivative-free) oracle [83]. The …
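A minimal sketch of the standard two-point randomized gradient estimator that such zeroth-order methods build on; the objective, smoothing parameter, and step size below are illustrative assumptions, not taken from the entry.

```python
import numpy as np

def two_point_grad_estimate(f, x, tau=1e-4, rng=None):
    """Randomized two-point (zeroth-order) estimate of the gradient of f at x.

    Draws a random unit direction e and uses two function values; tau is the
    finite-difference (smoothing) parameter.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)
    # d * (f(x + tau*e) - f(x - tau*e)) / (2*tau) * e estimates grad f(x)
    return d * (f(x + tau * e) - f(x - tau * e)) / (2.0 * tau) * e

# Illustrative use on a simple quadratic: plain descent with the estimator.
f = lambda x: 0.5 * float(np.dot(x, x))
x = np.ones(5)
for _ in range(500):
    x = x - 0.05 * two_point_grad_estimate(f, x)
print(np.linalg.norm(x))  # should be close to 0, the minimizer of f
```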

Acceleration in distributed optimization under similarity

Y Tian, G Scutari, T Cao… - … Conference on Artificial …, 2022 - proceedings.mlr.press
We study distributed (strongly convex) optimization problems over a network of agents, with
no centralized nodes. The loss functions of the agents are assumed to be similar, due to …
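The similarity of the local losses is commonly formalized as bounded Hessian dissimilarity with respect to their average; the constant $\beta$ below is a generic label rather than the paper's notation:

$$\sup_{x}\,\big\|\nabla^2 f_i(x) - \nabla^2 f(x)\big\| \le \beta, \qquad f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x).$$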

Is consensus acceleration possible in decentralized optimization over slowly time-varying networks?

D Metelev, A Rogozin, D Kovalev… - … on Machine Learning, 2023 - proceedings.mlr.press
We consider decentralized optimization problems where one aims to minimize a sum of
convex smooth objective functions distributed between nodes in the network. The links in the …
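A minimal sketch of the gossip (consensus averaging) primitive whose acceleration is in question, with a small illustrative mixing matrix that is not taken from the paper:

```python
import numpy as np

# Symmetric, doubly stochastic mixing matrix for a 3-node path graph 1-2-3;
# W[i, j] is nonzero only if i == j or nodes i and j are neighbours.
W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])

x = np.array([3.0, 0.0, 6.0])   # one local value per node
for _ in range(20):
    x = W @ x                    # each node mixes with its neighbours
print(x)                         # all entries approach the network mean 3.0
```

In a time-varying network the mixing matrix W changes between iterations, which is the setting in which the paper asks whether this averaging can be accelerated.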

Decentralized saddle-point problems with different constants of strong convexity and strong concavity

D Metelev, A Rogozin, A Gasnikov… - Computational …, 2024 - Springer
Large-scale saddle-point problems arise in machine learning tasks such as GANs and linear
models with affine constraints. In this paper, we study distributed saddle-point problems with …
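The setting in the title can be written generically as finding a saddle point of

$$\min_{x}\,\max_{y}\; f(x, y),$$

where $f(\cdot, y)$ is $\mu_x$-strongly convex for every $y$, $f(x, \cdot)$ is $\mu_y$-strongly concave for every $x$, with $\mu_x \ne \mu_y$ in general, and $f$ is split additively across the nodes of the network (the symbols $\mu_x$, $\mu_y$ are generic labels, not necessarily the paper's).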

First-order methods for convex optimization

P Dvurechensky, S Shtern, M Staudigl - EURO Journal on Computational …, 2021 - Elsevier
First-order methods for solving convex optimization problems have been at the forefront of
mathematical optimization in the last 20 years. The rapid development of this important class …

Newton method over networks is fast up to the statistical precision

A Daneshmand, G Scutari… - International …, 2021 - proceedings.mlr.press
We propose a distributed cubic regularization of the Newton method for solving
(constrained) empirical risk minimization problems over a network of agents, modeled as …
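For reference, the classical centralized cubic regularization of Newton's method that is being distributed here takes steps of the form (with $M$ a regularization parameter; notation is generic):

$$x^{k+1} = \operatorname*{arg\,min}_{x}\;\Big\{ \big\langle \nabla f(x^k),\, x - x^k \big\rangle + \tfrac{1}{2}\big\langle \nabla^2 f(x^k)(x - x^k),\, x - x^k \big\rangle + \tfrac{M}{6}\,\|x - x^k\|^3 \Big\}.$$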