Snap ML: A hierarchical framework for machine learning
We describe a new software framework for fast training of generalized linear models. The
framework, named Snap Machine Learning (Snap ML), combines recent advances in …
SKCompress: compressing sparse and nonuniform gradient in distributed machine learning
Distributed machine learning (ML) has been extensively studied to meet the explosive
growth of training data. A wide range of machine learning models are trained by a family of …
Differentially private stochastic coordinate descent
G Damaskinos, C Mendler-Dünner… - Proceedings of the …, 2021 - ojs.aaai.org
In this paper we tackle the challenge of making the stochastic coordinate descent algorithm
differentially private. Compared to the classical gradient descent algorithm where updates …
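The core differential-privacy ingredient in updates of this kind is bounding each gradient's influence and adding calibrated noise. The sketch below is a generic illustration of that pattern (clip, then Gaussian mechanism) applied to a single coordinate update; it is not the algorithm from the paper, and all names and parameters are illustrative.

```python
import numpy as np

def dp_coordinate_step(x, grad_j, j, step, clip, noise_mult, rng):
    """One illustrative differentially private coordinate update:
    clip the scalar coordinate gradient to bound its sensitivity,
    add Gaussian noise scaled to that bound, then take the usual
    descent step on coordinate j."""
    g = float(np.clip(grad_j, -clip, clip))   # bound per-example influence
    g += rng.normal(scale=noise_mult * clip)  # Gaussian mechanism
    x = x.copy()
    x[j] -= step * g
    return x

# Usage: with noise_mult = 0 this reduces to a plain clipped step.
rng = np.random.default_rng(0)
x = dp_coordinate_step(np.zeros(3), grad_j=2.0, j=1,
                       step=0.5, clip=1.0, noise_mult=0.0, rng=rng)
```

The privacy guarantee of the real algorithm depends on how `clip`, `noise_mult`, and the number of iterations are composed; this fragment only shows the mechanics of a single noisy step.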
Efficient use of limited-memory accelerators for linear learning on heterogeneous systems
We propose a generic algorithmic building block to accelerate training of machine learning
models on heterogeneous compute systems. Our scheme allows us to efficiently employ …
Tera-scale coordinate descent on GPUs
In this work we propose an asynchronous, GPU-based implementation of the widely-used
stochastic coordinate descent algorithm for convex optimization. We define the class of …
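Several of the entries above build on stochastic coordinate descent. As background, here is a minimal toy version for ridge regression, where each step exactly minimizes the objective along one randomly chosen coordinate; this is a generic textbook sketch, not the asynchronous GPU implementation the paper describes, and all names are illustrative.

```python
import numpy as np

def coordinate_descent_ridge(A, b, lam=0.1, epochs=100, seed=0):
    """Toy stochastic coordinate descent for ridge regression:
    minimize 0.5*||Ax - b||^2 + 0.5*lam*||x||^2.
    One randomly chosen coordinate is updated per step; the residual
    is maintained incrementally so each step costs O(n_samples)."""
    rng = np.random.default_rng(seed)
    n_features = A.shape[1]
    x = np.zeros(n_features)
    residual = A @ x - b            # kept up to date after every step
    col_sq = (A ** 2).sum(axis=0)   # per-coordinate curvature
    for _ in range(epochs * n_features):
        j = rng.integers(n_features)
        grad_j = A[:, j] @ residual + lam * x[j]
        delta = -grad_j / (col_sq[j] + lam)  # exact 1-D minimizer
        x[j] += delta
        residual += delta * A[:, j]
    return x
```

The parallel and asynchronous variants studied in this line of work differ mainly in how many coordinates are updated concurrently and how stale reads of `residual` are handled.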
Stochastic Gradient Descent on Highly-Parallel Architectures
Y Ma, F Rusu, M Torres - arXiv preprint arXiv:1802.08800, 2018 - arxiv.org
There is an increased interest in building data analytics frameworks with advanced
algebraic capabilities both in industry and academia. Many of these frameworks, e.g. …
Parallel and distributed machine learning algorithms for scalable big data analytics
This editorial is for the Special Issue of the journal Future Generation Computer Systems,
consisting of the selected papers of the 6th International Workshop on Parallel and …
SySCD: A system-aware parallel coordinate descent algorithm
N Ioannou, C Mendler-Dünner… - Advances in Neural …, 2019 - proceedings.neurips.cc
In this paper we propose a novel parallel stochastic coordinate descent (SCD) algorithm
with convergence guarantees that exhibits strong scalability. We start by studying a state-of …
Private and Secure Distributed Learning
G Damaskinos - 2020 - infoscience.epfl.ch
The ever-growing number of edge devices (e.g., smartphones) and the exploding volume of
sensitive data they produce call for distributed machine learning techniques that are privacy …