A survey of distributed optimization

T Yang, X Yi, J Wu, Y Yuan, D Wu, Z Meng… - Annual Reviews in …, 2019 - Elsevier
In distributed optimization of multi-agent systems, agents cooperate to minimize a global
function which is a sum of local objective functions. Motivated by applications including …
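The setup this survey describes, agents cooperating to minimize a sum of local objectives, can be illustrated with a minimal sketch. Assumptions are ours: scalar quadratic local objectives f_i(x) = 0.5*(x - b_i)^2 and complete-graph averaging, not any specific algorithm from the survey.

```python
import numpy as np

# n agents minimize f(x) = sum_i f_i(x) with f_i(x) = 0.5*(x - b_i)^2,
# so the global minimizer is mean(b). Each round: a local gradient step
# on the private objective, then consensus averaging with all neighbors.
rng = np.random.default_rng(0)
n = 5
b = rng.normal(size=n)      # parameters of the private local objectives
x = np.zeros(n)             # each agent's local estimate of the minimizer
step = 0.1

for _ in range(200):
    x = x - step * (x - b)      # local gradient step on f_i
    x = np.full(n, x.mean())    # consensus (complete-graph averaging)

print(np.allclose(x, b.mean()))  # all agents agree on the global minimizer
```

The error to the global minimizer contracts by a factor (1 - step) per round, so 200 rounds leave it far below floating-point display precision.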

Distributed optimization for control

A Nedić, J Liu - Annual Review of Control, Robotics, and …, 2018 - annualreviews.org
Advances in wired and wireless technology have necessitated the development of theory,
models, and tools to cope with the new challenges posed by large-scale control and …

A unified theory of decentralized SGD with changing topology and local updates


A Koloskova, N Loizou, S Boreiri… - International …, 2020 - proceedings.mlr.press
Decentralized stochastic optimization methods have gained a lot of attention recently, mainly
because of their cheap per iteration cost, data locality, and their communication-efficiency. In …
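A toy illustration of averaging under a changing topology (our own example, not the paper's analysis): alternating two pairwise matchings on four nodes, neither of which is connected on its own, still reaches exact consensus in two rounds.

```python
import numpy as np

# Gossip averaging with a time-varying topology: alternate two disjoint
# pairwise-averaging matchings. Each matching is a disconnected graph, yet
# their alternation averages all four nodes (a butterfly/all-reduce pattern).
def matching_matrix(pairs, n=4):
    """Doubly stochastic matrix that averages each listed pair of nodes."""
    W = np.eye(n)
    for i, j in pairs:
        W[i, i] = W[j, j] = 0.5
        W[i, j] = W[j, i] = 0.5
    return W

W1 = matching_matrix([(0, 1), (2, 3)])   # round 1: one matching
W2 = matching_matrix([(1, 2), (3, 0)])   # round 2: the complementary matching

x = np.array([1.0, 5.0, 2.0, 8.0])
x = W2 @ (W1 @ x)
print(np.allclose(x, 4.0))               # every node holds the mean (1+5+2+8)/4
```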

Decentralized federated averaging

T Sun, D Li, B Wang - IEEE Transactions on Pattern Analysis …, 2022 - ieeexplore.ieee.org
Federated averaging (FedAvg) is a communication-efficient algorithm for distributed training
with an enormous number of clients. In FedAvg, clients keep their data locally for privacy …
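A minimal FedAvg-style sketch of the scheme the snippet describes. All concrete choices here are ours for illustration: scalar least-squares clients of equal size, uniform server averaging, and fixed local step counts.

```python
import numpy as np

# Each client runs several local gradient steps on its own data, then the
# server averages the client models. Client objectives are least squares
# f_i(w) = 0.5 * mean((w - y_i)^2) over each client's local targets y_i.
rng = np.random.default_rng(1)
num_clients, local_steps, rounds, lr = 4, 5, 50, 0.1
data = [rng.normal(loc=c, size=20) for c in range(num_clients)]
w_global = 0.0

for _ in range(rounds):
    client_models = []
    for y in data:
        w = w_global                   # client starts from the global model
        for _ in range(local_steps):
            w -= lr * (w - y.mean())   # gradient of 0.5*mean((w - y)^2)
        client_models.append(w)
    w_global = float(np.mean(client_models))  # server-side averaging

# With equal-size clients, the minimizer of the sum is the grand mean.
grand_mean = np.mean([y.mean() for y in data])
print(abs(w_global - grand_mean) < 1e-6)
```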

Decentralized stochastic optimization and gossip algorithms with compressed communication

A Koloskova, S Stich, M Jaggi - International Conference on …, 2019 - proceedings.mlr.press
We consider decentralized stochastic optimization with the objective function (e.g., data
samples for machine learning tasks) being distributed over n machines that can only …
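A standard ingredient in compressed-communication schemes of this kind is error feedback: the compression error is remembered locally and re-added to the next message, so nothing is permanently lost. A toy sketch (top-1 sparsification is our illustrative choice of compressor, not the paper's):

```python
import numpy as np

def top1(v):
    """Keep only the largest-magnitude coordinate (a crude compressor)."""
    out = np.zeros_like(v)
    i = np.argmax(np.abs(v))
    out[i] = v[i]
    return out

rng = np.random.default_rng(4)
grads = rng.normal(size=(50, 4))   # a stream of gradient vectors to transmit
memory = np.zeros(4)               # accumulated compression error
sent = np.zeros(4)                 # what actually went over the wire

for g in grads:
    msg = top1(g + memory)         # compress the gradient plus leftover error
    memory = (g + memory) - msg    # remember what the compressor dropped
    sent += msg

# Everything not yet sent is still in memory: the sum is conserved.
print(np.allclose(sent + memory, grads.sum(axis=0)))
```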

Network topology and communication-computation tradeoffs in decentralized optimization

A Nedić, A Olshevsky, MG Rabbat - Proceedings of the IEEE, 2018 - ieeexplore.ieee.org
In decentralized optimization, nodes cooperate to minimize an overall objective function that
is the sum (or average) of per-node private objective functions. Algorithms interleave local …
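The topology dependence discussed here can be made concrete through the spectral gap of the mixing matrix: sparser graphs have a smaller gap and mix more slowly. A sketch comparing a ring with a complete graph (matrices and sizes are our own illustrative choices):

```python
import numpy as np

def ring_mixing(n):
    """Doubly stochastic mixing matrix for a ring: average with two neighbors."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1 / 3
        W[i, (i - 1) % n] = 1 / 3
        W[i, (i + 1) % n] = 1 / 3
    return W

def spectral_gap(W):
    """Gap between 1 and the second-largest eigenvalue magnitude of W."""
    magnitudes = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return 1.0 - magnitudes[1]

n = 16
W_ring = ring_mixing(n)
W_complete = np.full((n, n), 1.0 / n)   # complete graph: one-shot averaging
print(spectral_gap(W_ring) < spectral_gap(W_complete))  # ring mixes slower
```

For the 16-node ring the gap is about 0.05, versus exactly 1 for the complete graph, which is one way the communication-computation tradeoff shows up: cheap sparse rounds versus few dense ones.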

Stochastic gradient push for distributed deep learning

M Assran, N Loizou, N Ballas… - … on Machine Learning, 2019 - proceedings.mlr.press
Distributed data-parallel algorithms aim to accelerate the training of deep neural networks
by parallelizing the computation of large mini-batch gradient updates across multiple nodes …
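The averaging mechanism underlying gradient-push methods is push-sum (ratio) consensus, which needs only a column-stochastic mixing matrix and so works on directed graphs. A toy sketch on a three-node digraph of our own choosing, showing the ratio x/w recovering the exact average:

```python
import numpy as np

# A is column-stochastic but NOT doubly stochastic: each node splits its
# outgoing "mass" equally among its out-neighbors, so columns sum to 1
# and total mass is conserved even though plain averaging would be biased.
A = np.array([
    [1/2, 0.0, 1/3],   # node 0 sends to {0, 1}
    [1/2, 1/2, 1/3],   # node 1 sends to {1, 2}
    [0.0, 1/2, 1/3],   # node 2 sends to {0, 1, 2}
])

x = np.array([3.0, -1.0, 7.0])   # values to average
w = np.ones(3)                   # push-sum weights

for _ in range(100):
    x = A @ x                    # push values along outgoing edges
    w = A @ w                    # push weights the same way

print(np.allclose(x / w, 3.0))   # each ratio reaches mean([3, -1, 7]) = 3
```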

Achieving geometric convergence for distributed optimization over time-varying graphs

A Nedić, A Olshevsky, W Shi - SIAM Journal on Optimization, 2017 - SIAM
This paper considers the problem of distributed optimization over time-varying graphs. For
the case of undirected graphs, we introduce a distributed algorithm, referred to as DIGing …
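A gradient-tracking sketch in the spirit of DIGing: an auxiliary variable y tracks the network-average gradient, which is what enables geometric convergence with a constant step size. This is an illustrative toy (quadratic local objectives, fixed undirected ring, parameters ours), not the paper's exact scheme or its time-varying setting.

```python
import numpy as np

# Local objectives f_i(x) = 0.5*(x - b_i)^2, so grad_i(x) = x - b_i and the
# global minimizer is mean(b). W is a doubly stochastic ring mixing matrix.
n = 6
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

rng = np.random.default_rng(2)
b = rng.normal(size=n)
x = np.zeros(n)
g = x - b                    # current local gradients
y = g.copy()                 # gradient tracker, initialized to the gradients
alpha = 0.2                  # constant step size

for _ in range(500):
    x_new = W @ x - alpha * y
    g_new = x_new - b
    y = W @ y + g_new - g    # track the average gradient
    x, g = x_new, g_new

print(np.allclose(x, b.mean(), atol=1e-6))  # consensus on the global minimizer
```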

Harnessing smoothness to accelerate distributed optimization

G Qu, N Li - IEEE Transactions on Control of Network Systems, 2017 - ieeexplore.ieee.org
There has been a growing effort in studying the distributed optimization problem over a
network. The objective is to optimize a global function formed by a sum of local functions …

Random following ant colony optimization: Continuous and binary variants for global optimization and feature selection

X Zhou, W Gui, AA Heidari, Z Cai, G Liang… - Applied Soft Computing, 2023 - Elsevier
Continuous ant colony optimization is a population-based heuristic search algorithm,
inspired by the pathfinding behavior of ant colonies, with a simple structure and few control …
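A toy sketch of the continuous-ACO idea the snippet refers to: candidate solutions ("ants") are sampled from Gaussians centred on an archive of good solutions, and the archive keeps the best. This is our own minimal archive-based illustration, not the paper's Random Following variant; the objective, archive size, and sampling spread are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
f = lambda x: (x - 2.0) ** 2                 # toy objective, minimum at x = 2

archive = rng.uniform(-10, 10, size=10)      # initial solution archive
for _ in range(100):
    sigma = archive.std() + 0.1              # spread follows the archive
    centres = rng.choice(archive, size=20)   # each "ant" follows one solution
    ants = rng.normal(centres, sigma)        # and samples around it
    pool = np.concatenate([archive, ants])
    archive = pool[np.argsort(f(pool))][:10] # elitist archive update

best = archive[0]
print(abs(best - 2.0) < 0.05)
```

The elitist update makes the best-so-far monotone, and the sigma floor keeps exploration alive, so the archive contracts toward the minimizer.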