Communication-efficient distributed learning: An overview
Distributed learning is envisioned as the bedrock of next-generation intelligent networks,
where intelligent agents, such as mobile devices, robots, and sensors, exchange information …
Communication-efficient distributed deep learning: A comprehensive survey
Distributed deep learning (DL) has become prevalent in recent years to reduce training time
by leveraging multiple computing devices (e.g., GPUs/TPUs) due to larger models and …
Quantization enabled privacy protection in decentralized stochastic optimization
By enabling multiple agents to cooperatively solve a global optimization problem in the
absence of a central coordinator, decentralized stochastic optimization is gaining increasing …
SPARQ-SGD: Event-triggered and compressed communication in decentralized optimization
In this article, we propose and analyze SParsified Action Regulated Quantized–Stochastic
Gradient Descent (SPARQ-SGD), a communication-efficient algorithm for decentralized …
On maintaining linear convergence of distributed learning and optimization under limited communication
S Magnússon, H Shokri-Ghadikolaei… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
In distributed optimization and machine learning, multiple nodes coordinate to solve large
problems. To do this, the nodes need to compress important algorithm information to bits so …
Distributed constrained optimization and consensus in uncertain networks via proximal minimization
We provide a unifying framework for distributed convex optimization over time-varying
networks, in the presence of constraints and uncertainty, features that are typically treated …
SQuARM-SGD: Communication-efficient momentum SGD for decentralized optimization
In this paper, we propose and analyze SQuARM-SGD, a communication-efficient algorithm
for decentralized training of large-scale machine learning models over a network. In …
A second-order accelerated neurodynamic approach for distributed convex optimization
Based on the theories of inertial systems, a second-order accelerated neurodynamic
approach is designed to solve a distributed convex optimization with inequality and set …
Fast convergence rates of distributed subgradient methods with adaptive quantization
TT Doan, ST Maguluri… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
We study distributed optimization problems over a network when the communication
between the nodes is constrained, and therefore, information that is exchanged between the …
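The entry above concerns subgradient methods where exchanged values are quantized to a limited number of bits, with the quantization range adapted over iterations. As a minimal sketch only (not the paper's algorithm; the shrink factor and bit budget here are illustrative assumptions), a uniform quantizer with an adaptive range might look like:

```python
import numpy as np

def quantize(x, lo, hi, bits=4):
    """Uniformly quantize x (clipped to [lo, hi]) onto 2**bits levels."""
    levels = 2 ** bits - 1
    xc = np.clip(x, lo, hi)
    q = np.round((xc - lo) / (hi - lo) * levels)
    return lo + q * (hi - lo) / levels

# Adaptive range: shrink the quantization interval as iterates converge,
# so the same bit budget yields finer resolution over time.
x = np.array([0.73, -0.41])
r = 1.0  # hypothetical initial range [-r, r]
for k in range(5):
    xq = quantize(x, -r, r, bits=4)  # values a node would transmit
    r *= 0.5                         # hypothetical shrink factor
```

The point of adapting the range is that the worst-case quantization error is (hi - lo) / (2 * (2**bits - 1)), so shrinking the interval geometrically shrinks the error geometrically at a fixed bit budget.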
Distributed discrete-time optimization in multiagent networks using only sign of relative state
This paper proposes distributed discrete-time algorithms to cooperatively solve an additive
cost optimization problem in multiagent networks. The striking feature lies in the use of only …
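The striking feature of the entry above is that each agent uses only the sign of relative states, i.e., one-bit relative information per neighbor. As a hedged illustration of that communication model (a toy consensus-style update, not the paper's optimization algorithm; the graph, step size, and iteration count are assumptions), consider:

```python
import numpy as np

def sign_step(x, adj, alpha):
    """One discrete-time step using only the sign of relative states."""
    upd = np.zeros_like(x)
    for i in range(len(x)):
        for j in np.nonzero(adj[i])[0]:      # neighbors of agent i
            upd[i] -= np.sign(x[i] - x[j])   # one-bit relative feedback
    return x + alpha * upd

# Complete graph on 3 agents with scalar states.
adj = np.array([[0, 1, 1],
                [1, 0, 1],
                [1, 1, 0]])
x = np.array([0.0, 4.0, 8.0])
for _ in range(200):
    x = sign_step(x, adj, alpha=0.01)
# States cluster toward a common value despite only sign information.
```

Because the update direction carries no magnitude information, such schemes typically exhibit small chattering around the limit, controlled by the step size.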