Speeding up distributed gradient descent by utilizing non-persistent stragglers
When gradient descent (GD) is scaled to many parallel computing servers (workers) for
large scale machine learning problems, its per-iteration computation time is limited by the …
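To make the setting concrete, here is a minimal Python sketch of the general idea of exploiting non-persistent stragglers: every worker contributes the gradient over whatever fraction of its partition it finished by the deadline, and the server rescales the partial sum into a roughly unbiased full-gradient estimate. The simulated speed model, step size, and all names are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize 0.5 * ||X @ w - y||^2.
n_samples, dim, n_workers = 1200, 10, 4
X = rng.normal(size=(n_samples, dim))
w_true = rng.normal(size=dim)
y = X @ w_true + 0.01 * rng.normal(size=n_samples)
parts = np.array_split(np.arange(n_samples), n_workers)

def partial_gradient(w, idx, fraction):
    """Gradient over the first `fraction` of a worker's partition --
    a straggling worker returns whatever it finished by the deadline."""
    done = idx[: max(1, int(fraction * len(idx)))]
    return X[done].T @ (X[done] @ w - y[done]), len(done)

w = np.zeros(dim)
lr = 0.1 / n_samples
for it in range(200):
    total_g, total_n = np.zeros(dim), 0
    for idx in parts:
        # Non-persistent stragglers: which worker is slow changes
        # from iteration to iteration.
        fraction = rng.uniform(0.3, 1.0)
        g, n_done = partial_gradient(w, idx, fraction)
        total_g += g
        total_n += n_done
    # Rescale by the number of samples actually processed so the
    # partial sum is an approximately unbiased gradient estimate.
    w -= lr * (n_samples / total_n) * total_g

print("error:", np.linalg.norm(w - w_true))
```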
Etalumis: Bringing probabilistic programming to scientific simulators at scale
Probabilistic programming languages (PPLs) are receiving widespread attention for
performing Bayesian inference in complex generative models. However, applications to …
Privacy-preserved learning from non-iid data in fog-assisted IoT: A federated learning approach
M Abdel-Basset, H Hawash, N Moustafa… - Digital Communications …, 2022 - Elsevier
With the prevalence of the Internet of Things (IoT) systems, smart cities comprise complex
networks, including sensors, actuators, appliances, and cyber services. The complexity and …
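For reference, the federated learning approach named in the title follows the usual pattern of local training plus server-side averaging. The sketch below is a generic FedAvg-style illustration on deliberately non-IID client shards, not the paper's method; the shifted-mean data model and all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy federated linear regression with non-IID client shards:
# each client's features are drawn from a different region.
n_clients, dim = 5, 8
w_true = rng.normal(size=dim)
clients = []
for c in range(n_clients):
    Xc = rng.normal(loc=c - 2, size=(100, dim))   # shifted mean => non-IID
    yc = Xc @ w_true + 0.01 * rng.normal(size=100)
    clients.append((Xc, yc))

def local_sgd(w, Xc, yc, steps=20, lr=0.005):
    """A few local SGD steps on one client's private shard."""
    for _ in range(steps):
        i = rng.integers(len(yc), size=10)        # mini-batch
        g = Xc[i].T @ (Xc[i] @ w - yc[i]) / len(i)
        w = w - lr * g
    return w

w_global = np.zeros(dim)
for rnd in range(50):
    # Each client trains locally; raw data never leaves the device.
    client_models = [local_sgd(w_global.copy(), Xc, yc) for Xc, yc in clients]
    # FedAvg: average client models (equal sample counts here).
    w_global = np.mean(client_models, axis=0)

print("error:", np.linalg.norm(w_global - w_true))
```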
Backdoor attacks in peer-to-peer federated learning
Most machine learning applications rely on centralized learning processes, opening up the
risk of exposure of their training datasets. While federated learning (FL) mitigates to some …
Neighborhood-correction algorithm for classification of normal and malignant cells
Classification of normal and malignant cells observed under a microscope is an essential
and challenging step in the development of a cost-effective computer-aided diagnosis tool …
Totoro: A Scalable Federated Learning Engine for the Edge
Federated Learning (FL) is an emerging distributed machine learning (ML) technique that
enables in-situ model training and inference on decentralized edge devices. We propose …
Straggler-resilient distributed machine learning with dynamic backup workers
With the increasing demand for large-scale training of machine learning models, consensus-
based distributed optimization methods have recently been advocated as alternatives to the …
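The backup-worker mechanism is easy to illustrate: the server waits only for the fastest n − b gradients each round and adjusts b from the observed round time. In the sketch below, the adjustment rule and timing model are illustrative assumptions, not the paper's policy.

```python
import heapq
import numpy as np

rng = np.random.default_rng(2)
n_workers = 10
b = 2            # number of backup workers whose gradients may be dropped

def run_round(b):
    """Simulate one synchronous round: draw a compute time per worker,
    wait only for the fastest (n_workers - b) gradients, drop the rest."""
    times = rng.exponential(1.0, size=n_workers)  # occasional stragglers
    k = n_workers - b
    fastest = heapq.nsmallest(k, times)
    return max(fastest), k                        # round ends at k-th arrival

for rnd in range(5):
    t, k = run_round(b)
    # Dynamic adjustment (illustrative rule): if the round was slow,
    # drop more stragglers next time; if fast, keep more gradients
    # for a better descent direction.
    if t > 1.5:
        b = min(b + 1, n_workers - 1)
    elif t < 0.5:
        b = max(b - 1, 0)
    print(f"round {rnd}: waited {t:.2f}s for {k} gradients, next b={b}")
```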
Straggler-resilient decentralized learning via adaptive asynchronous updates
With the increasing demand for large-scale training of machine learning models, fully
decentralized optimization methods have recently been advocated as alternatives to the …
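One common way to make asynchronous updates adaptive is to scale the step size by the staleness of each incoming gradient. The toy simulation below illustrates that generic idea; the rate 0.05 / (1 + staleness) is an assumption for illustration, not the authors' rule.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy async SGD: workers push gradients computed on stale copies of w,
# and the server down-weights each gradient by its staleness.
dim, n_workers = 5, 4
w_true = rng.normal(size=dim)
X = rng.normal(size=(500, dim))
y = X @ w_true

w, version = np.zeros(dim), 0
snapshots = {k: (w.copy(), 0) for k in range(n_workers)}  # (w_copy, version)

for step in range(2000):
    k = rng.integers(n_workers)              # some worker finishes now
    w_stale, v = snapshots[k]                # model it started from
    i = rng.integers(len(y), size=8)
    g = X[i].T @ (X[i] @ w_stale - y[i]) / len(i)
    staleness = version - v
    lr = 0.05 / (1 + staleness)              # adaptive: staler => smaller step
    w -= lr * g
    version += 1
    snapshots[k] = (w.copy(), version)       # worker pulls the fresh model

print("error:", np.linalg.norm(w - w_true))
```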
Distributed dual coordinate ascent with imbalanced data on a general tree network
In this paper, we investigate the impact of imbalanced data on the convergence of
distributed dual coordinate ascent in a tree network for solving an empirical loss …
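As a concrete reference point, here is a CoCoA/SDCA-style sketch of dual coordinate ascent for ridge regression with imbalanced partitions, where primal updates are summed up a two-level tree and averaged at the root. The problem instance, tree shape, and the 1/K averaging are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

# Ridge regression via distributed dual coordinate ascent: each leaf
# runs SDCA-style updates on its own dual block, and primal shifts are
# summed up a small tree (leaves -> internal nodes -> root).
n, dim, lam = 800, 6, 0.1
X = rng.normal(size=(n, dim))
y = X @ rng.normal(size=dim)
leaves = np.split(np.arange(n), [400, 600, 700])  # imbalanced partitions
tree = [[0, 1], [2, 3]]                           # internal nodes under a root
K = len(leaves)

alpha = np.zeros(n)    # dual variables, one per sample
w = np.zeros(dim)      # primal model, w = X.T @ alpha / (lam * n)

def local_pass(w_local, idx):
    """One epoch of dual coordinate ascent over a leaf's partition.
    Returns the proposed dual increments and resulting primal shift."""
    dal, dw = np.zeros(n), np.zeros(dim)
    for i in rng.permutation(idx):
        xi = X[i]
        d = (y[i] - xi @ (w_local + dw) - (alpha[i] + dal[i])) \
            / (1 + xi @ xi / (lam * n))
        dal[i] += d
        dw += d * xi / (lam * n)
    return dal, dw

for epoch in range(40):
    deltas = []
    for idx in leaves:
        dal, dw = local_pass(w.copy(), idx)
        alpha[idx] += dal[idx] / K         # each leaf keeps its own dual block
        deltas.append(dw)
    # Primal shifts travel up the tree: internal nodes sum their children,
    # the root averages (the conservative, always-safe combination) and
    # broadcasts the new w back down.
    partial = [sum(deltas[c] for c in node) for node in tree]
    w += sum(partial) / K

print("objective:", 0.5 * np.mean((X @ w - y) ** 2) + 0.5 * lam * w @ w)
```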
Weighted aggregating stochastic gradient descent for parallel deep learning
This paper investigates the stochastic optimization problem focusing on developing scalable
parallel algorithms for deep learning tasks. Our solution involves a reformation of the …
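The aggregation step such a scheme relies on can be sketched independently of the paper's specific objective reformulation. Below, workers run local SGD and the server combines their parameters with weights derived from local losses; the softmax weighting is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy parallel SGD where the server combines worker models by a
# weighted average rather than a plain mean, giving more influence
# to workers whose local loss is currently lower.
dim, n_workers = 10, 4
w_true = rng.normal(size=dim)
data = []
for _ in range(n_workers):
    Xk = rng.normal(size=(200, dim))
    data.append((Xk, Xk @ w_true + 0.05 * rng.normal(size=200)))

def local_sgd(w, Xk, yk, steps=25, lr=0.05):
    """Independent local SGD on one worker's shard."""
    for _ in range(steps):
        i = rng.integers(len(yk), size=16)
        w = w - lr * Xk[i].T @ (Xk[i] @ w - yk[i]) / len(i)
    return w

def loss(w, Xk, yk):
    return 0.5 * np.mean((Xk @ w - yk) ** 2)

w_global = np.zeros(dim)
for rnd in range(30):
    models = [local_sgd(w_global.copy(), Xk, yk) for Xk, yk in data]
    losses = np.array([loss(m, Xk, yk) for m, (Xk, yk) in zip(models, data)])
    # Softmax over negative losses: lower local loss => larger weight.
    weights = np.exp(-losses) / np.exp(-losses).sum()
    w_global = sum(wt * m for wt, m in zip(weights, models))

print("error:", np.linalg.norm(w_global - w_true))
```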