Distributed learning in wireless networks: Recent progress and future challenges
The next generation of wireless networks will enable many machine learning (ML) tools and
applications to efficiently analyze various types of data collected by edge devices for …
Communication-efficient distributed deep learning: A comprehensive survey
Distributed deep learning (DL) has become prevalent in recent years to reduce training time
by leveraging multiple computing devices (e.g., GPUs/TPUs) due to larger models and …
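To make the setting concrete, here is a minimal sketch of synchronous data-parallel training, the baseline that communication-efficient methods surveyed there try to make cheaper: each worker computes a gradient on its own data shard and the gradients are averaged before a single shared update. The least-squares objective, worker count, and step size are illustrative assumptions, not details from the survey.

```python
import numpy as np

# Minimal sketch of synchronous data-parallel SGD: each worker holds a data
# shard, computes a local gradient, and the gradients are averaged (as an
# all-reduce would do) before one shared model update.
# The least-squares objective and all hyperparameters are illustrative.

rng = np.random.default_rng(0)
num_workers, dim, lr = 4, 10, 0.1
shards = [(rng.normal(size=(50, dim)), rng.normal(size=50)) for _ in range(num_workers)]
w = np.zeros(dim)

def local_gradient(w, shard):
    X, y = shard
    return X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n

for step in range(100):
    grads = [local_gradient(w, shard) for shard in shards]  # computed in parallel in practice
    avg_grad = np.mean(grads, axis=0)                       # the all-reduce / averaging step
    w -= lr * avg_grad
```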
Advances and open problems in federated learning
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile
devices or whole organizations) collaboratively train a model under the orchestration of a …
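As a concrete instance of that orchestration, a minimal FedAvg-style sketch is given below: clients take a few local gradient steps on their private data and the server averages the resulting models, weighted by data size. The data, model, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

# Minimal FedAvg-style sketch of the FL setting described above: clients run a
# few local SGD steps on private data and the server averages the resulting
# models, weighted by client data size. Data, model, and hyperparameters are
# illustrative assumptions.

rng = np.random.default_rng(1)
dim, num_clients, rounds, local_steps, lr = 5, 8, 20, 5, 0.05
clients = [(rng.normal(size=(n, dim)), rng.normal(size=n))
           for n in rng.integers(20, 60, size=num_clients)]

def local_update(w, X, y):
    w = w.copy()
    for _ in range(local_steps):
        w -= lr * X.T @ (X @ w - y) / len(y)   # local least-squares gradient step
    return w

global_w = np.zeros(dim)
sizes = np.array([len(y) for _, y in clients])
for r in range(rounds):
    local_models = [local_update(global_w, X, y) for X, y in clients]
    # server: data-size-weighted average of the client models
    global_w = np.average(local_models, axis=0, weights=sizes)
```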
Fjord: Fair and accurate federated learning under heterogeneous targets with ordered dropout
Federated Learning (FL) has been gaining significant traction across different ML tasks,
ranging from vision to keyboard predictions. In large-scale deployments, client heterogeneity …
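The mechanism behind FjORD's ordered dropout is that each client trains a nested sub-model obtained by keeping only a leading fraction of each layer's units, so sub-models of different widths share their leading parameters. A rough single-layer sketch of that slicing idea follows; the layer shapes and capacity values are assumptions, and FjORD's actual sampling of widths is omitted.

```python
import numpy as np

# Rough sketch of the nested sub-model idea behind ordered dropout: a client
# with capacity p keeps only the first ceil(p * width) hidden units of a dense
# layer, so sub-models of different widths share their leading parameters and
# can be aggregated unit-by-unit. Shapes and capacity values are assumptions.

rng = np.random.default_rng(2)
in_dim, hidden = 16, 32
W1, W2 = rng.normal(size=(hidden, in_dim)), rng.normal(size=(1, hidden))

def submodel_forward(x, p):
    k = max(1, int(np.ceil(p * hidden)))      # width kept by a capacity-p client
    h = np.maximum(W1[:k] @ x, 0.0)           # only the first k hidden units are used
    return W2[:, :k] @ h

x = rng.normal(size=in_dim)
for p in (0.25, 0.5, 1.0):                    # heterogeneous client capacities
    print(p, submodel_forward(x, p))
```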
Hermes: an efficient federated learning framework for heterogeneous mobile clients
Federated learning (FL) has been a popular method to achieve distributed machine learning
among numerous devices without sharing their data with a cloud server. FL aims to learn a …
Fedmask: Joint computation and communication-efficient personalized federated learning via heterogeneous masking
Recent advancements in deep neural networks (DNNs) have enabled various mobile deep
learning applications. However, it is technically challenging to locally train a DNN model due …
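FedMask's core idea is that each device personalizes a frozen, shared backbone by learning a sparse binary mask and uploading only that mask. The sketch below only illustrates applying such a mask and the resulting uplink saving; how the mask is learned (FedMask optimizes real-valued scores that are binarized) is omitted, and the layer size and sparsity level are assumptions.

```python
import numpy as np

# Illustration of communicating a binary mask over a frozen shared layer, the
# general idea behind masking-based personalized FL. How the mask is learned
# (FedMask optimizes real-valued scores that are binarized) is omitted here,
# and the layer size and sparsity level are assumptions.

rng = np.random.default_rng(3)
frozen_W = rng.normal(size=(64, 128))            # shared backbone weights, never updated
scores = rng.normal(size=frozen_W.shape)         # per-client learnable mask scores
mask = scores > np.quantile(scores, 0.5)         # keep the top 50% of weights for this client

personalized_W = frozen_W * mask                 # client's personalized sparse layer

# Only the binary mask travels to the server: 1 bit per weight instead of 32.
bits_mask = mask.size                            # 1 bit per entry
bits_dense = mask.size * 32                      # float32 weights
print(f"uplink reduction: {bits_dense / bits_mask:.0f}x")
```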
EF21: A new, simpler, theoretically better, and practically faster error feedback
P Richtárik, I Sokolov… - Advances in Neural …, 2021 - proceedings.neurips.cc
Error feedback (EF), also known as error compensation, is an immensely popular
convergence stabilization mechanism in the context of distributed training of supervised …
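In its single-node form, the EF21 rule keeps a gradient estimate g that is nudged toward the fresh gradient by a compressed correction, and the model is updated with g rather than with a directly compressed gradient. A rough sketch with a Top-K compressor follows; the quadratic objective, K, and the step size are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

# Single-node sketch of the EF21 update rule with a Top-K compressor:
#   x_{t+1} = x_t - lr * g_t
#   g_{t+1} = g_t + C(grad(x_{t+1}) - g_t)
# so g_t tracks the true gradient through compressed corrections.
# The quadratic objective, K, and the step size are illustrative assumptions.

rng = np.random.default_rng(4)
dim, K, lr = 50, 5, 0.05
A = rng.normal(size=(dim, dim))
A = A.T @ A / dim + np.eye(dim)        # positive-definite quadratic: grad = A x - b
b = rng.normal(size=dim)
grad = lambda x: A @ x - b

def topk(v, k):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]   # keep the k largest-magnitude entries
    out[idx] = v[idx]
    return out

x = np.zeros(dim)
g = topk(grad(x), K)                   # initial compressed gradient estimate
for t in range(500):
    x = x - lr * g
    g = g + topk(grad(x) - g, K)       # EF21: compress only the change in the gradient
```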
Optimal client sampling for federated learning
It is well understood that client-master communication can be a primary bottleneck in
Federated Learning. In this work, we address this issue with a novel client subsampling …
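The flavor of subsampling studied there can be sketched as importance sampling: clients with larger updates are more likely to transmit, and accepted updates are reweighted by the inverse inclusion probability so the aggregate stays unbiased in expectation. The update vectors, aggregation weights, and budget below are made-up inputs, not the paper's optimal scheme.

```python
import numpy as np

# Sketch of norm-based client subsampling with inverse-probability reweighting:
# clients with larger updates are more likely to communicate, and accepted
# updates are scaled by 1/p_i so the aggregate stays unbiased in expectation.
# Update vectors, weights, and the budget m are made-up inputs.

rng = np.random.default_rng(5)
n, dim, m = 20, 8, 5                               # n clients, budget of about m transmitters
updates = [rng.normal(size=dim) * rng.uniform(0.1, 3.0) for _ in range(n)]
weights = np.full(n, 1.0 / n)                      # uniform aggregation weights

norms = np.array([np.linalg.norm(u) for u in updates])
probs = np.minimum(1.0, m * norms / norms.sum())   # inclusion prob. proportional to norm, capped at 1

participate = rng.random(n) < probs
aggregate = sum(weights[i] / probs[i] * updates[i]  # reweight so E[aggregate] = sum_i w_i * u_i
                for i in range(n) if participate[i])
full = sum(weights[i] * updates[i] for i in range(n))
print("sampled clients:", participate.sum(), "approx. error:", np.linalg.norm(aggregate - full))
```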
UVeQFed: Universal vector quantization for federated learning
Traditional deep learning models are trained at a centralized server using data samples
collected from users. Such data samples often include private information, which the users …
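UVeQFed's compression is built on universal (subtractive dithered) quantization; a scalar sketch of that building block is below, with the step size and the transmitted update chosen purely for illustration. The paper itself applies dithered lattice quantization to vectors, which this sketch does not capture.

```python
import numpy as np

# Scalar sketch of subtractive dithered quantization, the building block that
# UVeQFed generalizes to lattices: encoder and decoder share a pseudo-random
# dither (via a common seed), the encoder quantizes x + dither to a uniform
# grid, and the decoder subtracts the dither, keeping the error small and
# independent of x. The step size and the model update are illustrative.

step = 0.25
update = np.random.default_rng(6).normal(size=1000)       # a model update to send

shared_seed = 1234                                        # known to both client and server
dither = np.random.default_rng(shared_seed).uniform(-step / 2, step / 2, size=update.shape)

# Client: quantize (update + dither) to the grid and send only integer indices.
indices = np.round((update + dither) / step).astype(int)

# Server: reconstruct from the indices and subtract the same dither.
recovered = indices * step - dither

print("max error:", np.abs(recovered - update).max())     # bounded by step / 2
```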
A guide through the zoo of biased SGD
Y Demidovich, G Malinovsky… - Advances in Neural …, 2023 - proceedings.neurips.cc
Stochastic Gradient Descent (SGD) is arguably the most important single algorithm
in modern machine learning. Although SGD with unbiased gradient estimators has been …
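As a toy example of what a biased estimator means here, the sketch below runs plain gradient descent next to signSGD, whose sign-compressed direction is a biased estimate of the gradient yet still makes progress. The quadratic objective and the step sizes are illustrative choices, not the paper's analysis.

```python
import numpy as np

# Toy illustration of SGD with a biased gradient estimator: signSGD uses
# sign(grad), whose expectation is not the gradient, yet it can still make
# progress. The quadratic objective and step sizes are illustrative choices.

rng = np.random.default_rng(7)
dim = 20
A = np.diag(rng.uniform(0.5, 2.0, size=dim))       # simple PD quadratic 0.5 x^T A x - b^T x
b = rng.normal(size=dim)
grad = lambda x: A @ x - b
loss = lambda x: 0.5 * x @ A @ x - b @ x

x_unbiased, x_biased = np.zeros(dim), np.zeros(dim)
for t in range(200):
    x_unbiased -= 0.1 * grad(x_unbiased)           # plain gradient step (unbiased estimator)
    x_biased -= 0.01 * np.sign(grad(x_biased))     # biased estimator: only the sign is kept

print("GD loss:     ", loss(x_unbiased))
print("signSGD loss:", loss(x_biased))
```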