A survey of distributed optimization
In distributed optimization of multi-agent systems, agents cooperate to minimize a global
function which is a sum of local objective functions. Motivated by applications including …
Distributed optimization for control
Advances in wired and wireless technology have necessitated the development of theory,
models, and tools to cope with the new challenges posed by large-scale control and …
A unified theory of decentralized sgd with changing topology and local updates
Decentralized stochastic optimization methods have recently gained a lot of attention, mainly
because of their cheap per-iteration cost, data locality, and communication efficiency. In …
Decentralized federated averaging
Federated averaging (FedAvg) is a communication-efficient algorithm for distributed training
with an enormous number of clients. In FedAvg, clients keep their data locally for privacy …
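As a rough illustration of the FedAvg loop this entry summarizes, here is a minimal single-machine simulation on a least-squares problem. The function name, data, step size, and number of local steps are all my own illustrative choices, not the paper's:

```python
import numpy as np

def fedavg_round(global_w, client_data, lr=0.1, local_steps=5):
    """One round of FedAvg on a least-squares objective (toy sketch).

    Each client starts from the shared global weights, takes a few
    local gradient steps on its private (X, y) data, and the server
    averages the resulting weight vectors.
    """
    updated = []
    for X, y in client_data:
        w = global_w.copy()
        for _ in range(local_steps):
            grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
            w -= lr * grad
        updated.append(w)
    return np.mean(updated, axis=0)  # server-side averaging step

# Toy usage: two clients whose private data share the same true model.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true))
w = np.zeros(2)
for _ in range(100):
    w = fedavg_round(w, clients)
```

The key point is that raw data never leaves a client; only the locally updated weight vectors are averaged.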
Decentralized stochastic optimization and gossip algorithms with compressed communication
We consider decentralized stochastic optimization with the objective function (e.g., data
samples for machine learning tasks) being distributed over n machines that can only …
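The gossip-averaging primitive that this line of work builds on (shown here in its plain, uncompressed form; the compressed variant studied in the paper is not reproduced) can be sketched as follows, with all names and weights my own:

```python
import numpy as np

def gossip_average(x, W, rounds=200):
    """Synchronous gossip averaging: repeatedly set x <- W @ x.

    With W doubly stochastic and the graph connected, every node's
    value converges to the network-wide average.
    """
    for _ in range(rounds):
        x = W @ x
    return x

# 4-node ring with uniform neighbor weights (doubly stochastic).
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])
x = gossip_average(np.array([0.0, 4.0, 8.0, 12.0]), W)
```

Each node only ever combines its own value with its two ring neighbors', yet all four values contract geometrically toward the global mean.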
Network topology and communication-computation tradeoffs in decentralized optimization
In decentralized optimization, nodes cooperate to minimize an overall objective function that
is the sum (or average) of per-node private objective functions. Algorithms interleave local …
Stochastic gradient push for distributed deep learning
Distributed data-parallel algorithms aim to accelerate the training of deep neural networks
by parallelizing the computation of large mini-batch gradient updates across multiple nodes …
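The push-sum (ratio-consensus) building block behind gradient-push methods like this one can be sketched on a toy directed graph. This is an averaging-only illustration under my own choice of mixing matrix, not the paper's full training algorithm:

```python
import numpy as np

def push_sum(x0, P, rounds=100):
    """Push-sum (ratio consensus) averaging over a directed graph.

    P is column stochastic: each node splits its current mass among its
    out-neighbors. Iterating P alone biases values toward the chain's
    stationary distribution, but the ratio of a value vector to a
    parallel weight vector converges to the exact average at every node.
    """
    x = np.asarray(x0, dtype=float).copy()
    w = np.ones_like(x)
    for _ in range(rounds):
        x = P @ x   # push values downstream
        w = P @ w   # push weights the same way
    return x / w    # de-bias by the accumulated weights

# Directed 3-cycle with self-loops: keep half, push half downstream.
P = np.array([
    [0.5, 0.0, 0.5],
    [0.5, 0.5, 0.0],
    [0.0, 0.5, 0.5],
])
avg = push_sum([3.0, 6.0, 9.0], P)
```

The weight vector `w` is what lets push-sum work on directed graphs, where a doubly stochastic mixing matrix may not exist.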
Achieving geometric convergence for distributed optimization over time-varying graphs
This paper considers the problem of distributed optimization over time-varying graphs. For
the case of undirected graphs, we introduce a distributed algorithm, referred to as DIGing …
Harnessing smoothness to accelerate distributed optimization
There has been a growing effort in studying the distributed optimization problem over a
network. The objective is to optimize a global function formed by a sum of local functions …
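The gradient-tracking idea common to this entry and DIGing above can be sketched on scalar quadratic local costs. This is a minimal illustrative instance under my own parameter choices (step size, topology, cost functions), not either paper's exact method:

```python
import numpy as np

def gradient_tracking(b, W, alpha=0.1, iters=500):
    """Gradient tracking on local costs f_i(x) = 0.5 * (x - b_i)^2.

    Each node mixes its iterate with its neighbors' and descends along a
    tracker y_i that estimates the network-average gradient; all nodes
    converge to the global minimizer mean(b).
    """
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b)
    g = x - b            # local gradients at the current iterates
    y = g.copy()         # trackers, initialized to the local gradients
    for _ in range(iters):
        x_new = W @ x - alpha * y
        g_new = x_new - b
        y = W @ y + (g_new - g)   # tracking update for the average gradient
        x, g = x_new, g_new
    return x

# 4-node ring with uniform neighbor weights (doubly stochastic).
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])
x = gradient_tracking([0.0, 4.0, 12.0, 8.0], W)
```

Unlike plain distributed gradient descent, the tracker correction `g_new - g` lets every node converge to the exact minimizer of the sum, not just a neighborhood of it.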
Random following ant colony optimization: Continuous and binary variants for global optimization and feature selection
X Zhou, W Gui, AA Heidari, Z Cai, G Liang… - Applied Soft Computing, 2023 - Elsevier
Continuous ant colony optimization is a population-based heuristic search algorithm
inspired by the pathfinding behavior of ant colonies with a simple structure and few control …