Communication-efficient distributed learning: An overview

X Cao, T Başar, S Diggavi, YC Eldar… - IEEE Journal on …, 2023 - ieeexplore.ieee.org
Distributed learning is envisioned as the bedrock of next-generation intelligent networks,
where intelligent agents, such as mobile devices, robots, and sensors, exchange information …

Communication-efficient distributed deep learning: A comprehensive survey

Z Tang, S Shi, W Wang, B Li, X Chu - arXiv preprint arXiv:2003.06307, 2020 - arxiv.org
Distributed deep learning (DL) has become prevalent in recent years to reduce training time
by leveraging multiple computing devices (e.g., GPUs/TPUs) due to larger models and …

Quantization enabled privacy protection in decentralized stochastic optimization

Y Wang, T Başar - IEEE Transactions on Automatic Control, 2022 - ieeexplore.ieee.org
By enabling multiple agents to cooperatively solve a global optimization problem in the
absence of a central coordinator, decentralized stochastic optimization is gaining increasing …

SPARQ-SGD: Event-triggered and compressed communication in decentralized optimization

N Singh, D Data, J George… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
In this article, we propose and analyze SParsified Action Regulated Quantized–Stochastic
Gradient Descent (SPARQ-SGD), a communication-efficient algorithm for decentralized …
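
As a rough illustration of the kind of mechanism the title describes (event-triggered, sparsified, and quantized communication), the Python sketch below combines a top-k sparsifier, a uniform quantizer, and a norm-based trigger on the change since the last transmission. It is only a sketch of that general idea, not the paper's SPARQ-SGD algorithm; the function names, threshold, and compression parameters are hypothetical choices for the example.

```python
# Sketch: event-triggered, sparsified, quantized communication (illustrative only).
import numpy as np

def top_k_sparsify(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def uniform_quantize(v, num_levels=16):
    """Uniformly quantize the entries of v onto num_levels levels over [min(v), max(v)]."""
    lo, hi = v.min(), v.max()
    if hi == lo:
        return v.copy()
    step = (hi - lo) / (num_levels - 1)
    return lo + np.round((v - lo) / step) * step

def maybe_communicate(x, last_sent, threshold=0.1, k=5):
    """Return (message, new_last_sent); message is None when the trigger does not fire."""
    diff = x - last_sent
    if np.linalg.norm(diff) <= threshold:        # event not triggered: stay silent
        return None, last_sent
    sparse = top_k_sparsify(diff, k)
    msg = np.zeros_like(sparse)
    nz = np.nonzero(sparse)[0]
    msg[nz] = uniform_quantize(sparse[nz])       # quantize only the surviving entries
    return msg, last_sent + msg                  # receiver tracks the same estimate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(20)
    msg, last_sent = maybe_communicate(x, last_sent=np.zeros(20))
    if msg is None:
        print("trigger did not fire; nothing sent")
    else:
        print(f"sent {np.count_nonzero(msg)} quantized entries out of {x.size}")
```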

On maintaining linear convergence of distributed learning and optimization under limited communication

S Magnússon, H Shokri-Ghadikolaei… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
In distributed optimization and machine learning, multiple nodes coordinate to solve large
problems. To do this, the nodes need to compress important algorithm information to bits so …

Distributed constrained optimization and consensus in uncertain networks via proximal minimization

K Margellos, A Falsone, S Garatti… - IEEE Transactions on …, 2017 - ieeexplore.ieee.org
We provide a unifying framework for distributed convex optimization over time-varying
networks, in the presence of constraints and uncertainty, features that are typically treated …

SQuARM-SGD: Communication-efficient momentum SGD for decentralized optimization

N Singh, D Data, J George… - IEEE Journal on Selected …, 2021 - ieeexplore.ieee.org
In this paper, we propose and analyze SQuARM-SGD, a communication-efficient algorithm
for decentralized training of large-scale machine learning models over a network. In …

A second-order accelerated neurodynamic approach for distributed convex optimization

X Jiang, S Qin, X Xue, X Liu - Neural Networks, 2022 - Elsevier
Based on the theory of inertial systems, a second-order accelerated neurodynamic
approach is designed to solve a distributed convex optimization problem with inequality and set …

Fast convergence rates of distributed subgradient methods with adaptive quantization

TT Doan, ST Maguluri… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
We study distributed optimization problems over a network when the communication
between the nodes is constrained, and therefore, information that is exchanged between the …
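
The snippet below is a minimal sketch of one way "adaptive quantization" can be realized: a fixed bit budget spread over a range R_t that contracts across iterations, so that the same number of bits gives ever finer resolution as the iterates converge. The decay rule, bit width, and function names are assumptions for illustration, not the scheme analyzed in the paper.

```python
# Sketch: quantizing onto a b-bit grid whose range shrinks over iterations (illustrative only).
import numpy as np

def adaptive_quantize(v, t, bits=4, R0=10.0, decay=0.99):
    """Quantize v onto a 2**bits-level uniform grid over [-R_t, R_t], with R_t = R0 * decay**t."""
    R = R0 * decay**t
    levels = 2**bits
    clipped = np.clip(v, -R, R)
    step = 2 * R / (levels - 1)
    return np.round((clipped + R) / step) * step - R

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.uniform(-5, 5, size=5)                 # stand-in for a node's iterate
    for t in (0, 100, 500):
        x_t = x * 0.99**t                          # iterates contracting toward the optimum
        q = adaptive_quantize(x_t, t)
        print(t, float(np.max(np.abs(x_t - q))))   # error shrinks with the range R_t
```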

Distributed discrete-time optimization in multiagent networks using only sign of relative state

J Zhang, K You, T Başar - IEEE Transactions on Automatic …, 2018 - ieeexplore.ieee.org
This paper proposes distributed discrete-time algorithms to cooperatively solve an additive
cost optimization problem in multiagent networks. The striking feature lies in the use of only …
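
As a toy illustration of optimization driven only by the sign of relative states, the sketch below runs four agents on a ring: each agent mixes one-bit relative-state information from its neighbors with a local gradient step on its own quadratic cost. The graph, local costs, and step sizes are hypothetical, and the update is only a sketch of the general idea, not the algorithms proposed in the paper.

```python
# Sketch: distributed update using only sign(x_j - x_i) from neighbors (illustrative only).
import numpy as np

def sign_based_step(x, neighbors, local_grads, alpha=0.05, beta=0.01):
    """One synchronous update using signs of relative states plus a local gradient term."""
    x_new = x.copy()
    for i, nbrs in enumerate(neighbors):
        consensus = sum(np.sign(x[j] - x[i]) for j in nbrs)   # one bit per neighbor
        x_new[i] = x[i] + alpha * consensus - beta * local_grads[i](x[i])
    return x_new

if __name__ == "__main__":
    # Four agents on a ring; agent i minimizes (x - a_i)^2, so the minimizer of the
    # additive team cost is the average of the a_i.
    a = np.array([1.0, 2.0, 3.0, 4.0])
    grads = [lambda x, ai=ai: 2.0 * (x - ai) for ai in a]
    neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]
    x = np.zeros(4)
    for _ in range(2000):
        x = sign_based_step(x, neighbors, grads)
    print(x, "target average:", a.mean())
```

With fixed step sizes the states chatter in a small band around the team optimum rather than converging exactly; that behavior is a property of this simplified sketch, not a statement about the paper's algorithms.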