Distributed graph neural network training: A survey

Y Shao, H Li, X Gu, H Yin, Y Li, X Miao… - ACM Computing …, 2024 - dl.acm.org
Graph neural networks (GNNs) are deep learning models that are trained on
graphs and have been successfully applied in various domains. Despite the effectiveness of …

Federated learning over images: vertical decompositions and pre-trained backbones are difficult to beat

E Hu, Y Tang, A Kyrillidis… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
We carefully evaluate a number of algorithms for learning in a federated environment, and
test their utility for a variety of image classification tasks. We consider many issues that have …

Efficient and light-weight federated learning via asynchronous distributed dropout

C Dun, M Hipolito, C Jermaine… - International …, 2023 - proceedings.mlr.press
Asynchronous learning protocols have regained attention lately, especially in the Federated
Learning (FL) setup, where slower clients can severely impede the learning process. Herein …

Submodel partitioning in hierarchical federated learning: Algorithm design and convergence analysis

W Fang, DJ Han, CG Brinton - arXiv preprint arXiv:2310.17890, 2023 - arxiv.org
Hierarchical federated learning (HFL) has demonstrated promising scalability advantages
over the traditional "star-topology" architecture-based federated learning (FL). However, HFL …

On the convergence of shallow neural network training with randomly masked neurons

F Liao, A Kyrillidis - arXiv preprint arXiv:2112.02668, 2021 - arxiv.org
With the motive of training all the parameters of a neural network, we study why and when
one can achieve this by iteratively creating, training, and combining randomly selected …

FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity

K Yi, N Gazagnadou, P Richtárik, L Lyu - arXiv preprint arXiv:2404.09816, 2024 - arxiv.org
The interest in federated learning has surged in recent research due to its unique ability to
train a global model using privacy-secured information held locally on each client. This …

Towards a better theoretical understanding of independent subnetwork training

E Shulgin, P Richtárik - arXiv preprint arXiv:2306.16484, 2023 - arxiv.org
Modern advancements in large-scale machine learning would be impossible without the
paradigm of data-parallel distributed computing. Since distributed computing with large …

Sugar: Efficient subgraph-level training via resource-aware graph partitioning

Z Xue, Y Yang, R Marculescu - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Graph Neural Networks (GNNs) have demonstrated great potential in a variety of graph-
based applications, such as recommender systems, drug discovery, and object recognition …

Leveraging Sparse Input and Sparse Models: Efficient Distributed Learning in Resource-Constrained Environments

E Kariotakis, G Tsagkatakis… - … on Parsimony and …, 2024 - proceedings.mlr.press
Optimizing for reduced computational and bandwidth resources enables model training in
less-than-ideal environments and paves the way for practical and accessible AI solutions …

MAST: Model-Agnostic Sparsified Training

Y Demidovich, G Malinovsky, E Shulgin… - arXiv preprint arXiv …, 2023 - arxiv.org
We introduce a novel optimization problem formulation that departs from the conventional
way of minimizing machine learning model loss as a black-box function. Unlike traditional …