Snap ML: A hierarchical framework for machine learning

C Dünner, T Parnell, D Sarigiannis… - Advances in …, 2018 - proceedings.neurips.cc
We describe a new software framework for fast training of generalized linear models. The
framework, named Snap Machine Learning (Snap ML), combines recent advances in …

SKCompress: compressing sparse and nonuniform gradient in distributed machine learning

J Jiang, F Fu, T Yang, Y Shao, B Cui - The VLDB Journal, 2020 - Springer
Distributed machine learning (ML) has been extensively studied to meet the explosive
growth of training data. A wide range of machine learning models are trained by a family of …
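
To make the compression idea concrete: a minimal top-k gradient sparsifier, shown below, keeps only the largest-magnitude entries before communication. This is a generic NumPy illustration, not SKCompress itself, whose pipeline additionally quantizes the surviving keys and values.

    import numpy as np

    def topk_compress(grad, k):
        # Keep the k largest-magnitude entries; the receiver treats
        # every other coordinate as zero. Generic sparsifier, not the
        # paper's exact scheme.
        idx = np.argpartition(np.abs(grad), -k)[-k:]
        return idx, grad[idx]

    def topk_decompress(idx, vals, dim):
        # Rebuild a dense gradient from its sparse (index, value) form.
        out = np.zeros(dim)
        out[idx] = vals
        return out

    g = np.random.default_rng(0).standard_normal(1_000_000)
    idx, vals = topk_compress(g, k=1000)          # ~0.1% of entries survive
    g_hat = topk_decompress(idx, vals, g.size)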

Differentially private stochastic coordinate descent

G Damaskinos, C Mendler-Dünner… - Proceedings of the …, 2021 - ojs.aaai.org
In this paper we tackle the challenge of making the stochastic coordinate descent algorithm
differentially private. Compared to the classical gradient descent algorithm where updates …
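
A schematic of the core idea, assuming ridge regression as the objective: perturb each stochastic coordinate update with calibrated noise. This is a sketch only; the paper's substance lies in the sensitivity analysis and privacy accounting that determine the noise scale, none of which is reproduced here.

    import numpy as np

    def dp_scd_step(w, X, y, j, lam, noise_scale, rng):
        # One noisy coordinate update for ridge regression,
        # f(w) = (1/2n)||Xw - y||^2 + (lam/2)||w||^2.
        n = X.shape[0]
        grad_j = X[:, j] @ (X @ w - y) / n + lam * w[j]
        grad_j += rng.laplace(scale=noise_scale)     # privacy noise
        lipschitz_j = X[:, j] @ X[:, j] / n + lam    # coordinate step size
        w[j] -= grad_j / lipschitz_j
        return w

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10)); y = rng.standard_normal(200)
    w = np.zeros(10)
    for _ in range(1000):
        j = rng.integers(10)                         # stochastic coordinate pick
        w = dp_scd_step(w, X, y, j, lam=0.1, noise_scale=0.01, rng=rng)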

Efficient use of limited-memory accelerators for linear learning on heterogeneous systems

C Dünner, T Parnell, M Jaggi - Advances in Neural …, 2017 - proceedings.neurips.cc
We propose a generic algorithmic building block to accelerate training of machine learning
models on heterogeneous compute systems. Our scheme allows us to efficiently employ …
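
The gist of such a scheme: keep only the currently most useful training examples in the accelerator's limited memory and refresh that working set periodically. Below is a minimal single-node sketch using absolute residuals as the importance score; the paper's actual selection criterion is based on per-example duality-gap contributions, and the device copy here is only simulated by slicing.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((10_000, 50)); y = rng.standard_normal(10_000)
    w = np.zeros(50)
    budget = 1_000                        # examples that fit on the device

    for epoch in range(5):
        scores = np.abs(X @ w - y)        # importance, refreshed on the host
        hot = np.argsort(scores)[-budget:]
        Xh, yh = X[hot], y[hot]           # stage the "hot" subset on-device
        for _ in range(100):              # fast inner optimization on-device
            w -= 0.1 * Xh.T @ (Xh @ w - yh) / budget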

Tera-scale coordinate descent on GPUs

T Parnell, C Dünner, K Atasu, M Sifalakis… - Future Generation …, 2020 - Elsevier
In this work we propose an asynchronous, GPU-based implementation of the widely used
stochastic coordinate descent algorithm for convex optimization. We define the class of …
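
For reference, a sequential version of the underlying algorithm, here instantiated for the Lasso with exact coordinate minimization via soft-thresholding. The paper's contribution is executing many such updates concurrently across GPU threads; this sketch shows only the serial baseline that such implementations parallelize.

    import numpy as np

    def soft_threshold(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def scd_lasso(X, y, lam, epochs, rng):
        # min_w (1/2n)||Xw - y||^2 + lam*||w||_1, one random coordinate
        # at a time, keeping the residual up to date incrementally.
        n, d = X.shape
        w = np.zeros(d)
        residual = y - X @ w
        col_sq = (X ** 2).sum(axis=0) / n
        for _ in range(epochs * d):
            j = rng.integers(d)
            rho = X[:, j] @ residual / n + col_sq[j] * w[j]
            w_new = soft_threshold(rho, lam) / col_sq[j]
            residual += X[:, j] * (w[j] - w_new)
            w[j] = w_new
        return w

    rng = np.random.default_rng(2)
    X = rng.standard_normal((500, 50)); y = rng.standard_normal(500)
    w = scd_lasso(X, y, lam=0.1, epochs=10, rng=rng)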

Stochastic Gradient Descent on Highly-Parallel Architectures

Y Ma, F Rusu, M Torres - arXiv preprint arXiv:1802.08800, 2018 - arxiv.org
There is increased interest, both in industry and academia, in building data analytics
frameworks with advanced algebraic capabilities. Many of these frameworks, e.g., …
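
The workhorse such frameworks parallelize is mini-batch SGD, where the batch size controls how much data parallelism the hardware can exploit per update. A minimal logistic-regression example (labels in {-1, +1}), purely illustrative of the algorithm rather than of any framework discussed in the paper:

    import numpy as np

    def minibatch_sgd_logreg(X, y, lr, batch, epochs, rng):
        # Gradient of the mean logistic loss log(1 + exp(-y * x.w))
        # over each mini-batch; larger batches expose more parallelism
        # at the cost of fewer model updates per pass.
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(epochs):
            order = rng.permutation(n)
            for s in range(0, n, batch):
                idx = order[s:s + batch]
                coeff = -y[idx] / (1.0 + np.exp(X[idx] @ w * y[idx]))
                w -= lr * (X[idx].T @ coeff) / len(idx)
        return w

    rng = np.random.default_rng(3)
    X = rng.standard_normal((2000, 20))
    y = np.sign(X @ rng.standard_normal(20))
    w = minibatch_sgd_logreg(X, y, lr=0.5, batch=256, epochs=5, rng=rng)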

Parallel and distributed machine learning algorithms for scalable big data analytics

H Bal, A Pal - Future Generation Computer Systems, 2020 - Elsevier
This editorial is for the Special Issue of the journal Future Generation Computer Systems,
consisting of the selected papers of the 6th International Workshop on Parallel and …

SySCD: A system-aware parallel coordinate descent algorithm

N Ioannou, C Mendler-Dünner… - Advances in Neural …, 2019 - proceedings.neurips.cc
In this paper we propose a novel parallel stochastic coordinate descent (SCD) algorithm
with convergence guarantees that exhibits strong scalability. We start by studying a state-of …
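
One system-level idea in this line of work is grouping coordinates into buckets sized to the memory hierarchy, with each bucket processed by one worker. The single-threaded sketch below, on a ridge objective, only mimics that data layout; it is a simplified stand-in, not the SySCD algorithm, which also handles cross-thread model synchronization and NUMA placement.

    import numpy as np

    def bucketed_scd_ridge(X, y, lam, n_buckets, epochs, rng):
        # f(w) = (1/2n)||Xw - y||^2 + (lam/2)||w||^2, with coordinates
        # partitioned into buckets; each bucket would map to one worker
        # thread in a parallel implementation.
        n, d = X.shape
        w = np.zeros(d)
        residual = X @ w - y
        lipschitz = (X ** 2).sum(axis=0) / n + lam
        buckets = np.array_split(rng.permutation(d), n_buckets)
        for _ in range(epochs):
            for bucket in buckets:
                for j in rng.permutation(bucket):
                    step = (X[:, j] @ residual / n + lam * w[j]) / lipschitz[j]
                    residual -= X[:, j] * step
                    w[j] -= step
        return w

    rng = np.random.default_rng(4)
    X = rng.standard_normal((500, 64)); y = rng.standard_normal(500)
    w = bucketed_scd_ridge(X, y, lam=0.1, n_buckets=8, epochs=5, rng=rng)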

Private and Secure Distributed Learning

G Damaskinos - 2020 - infoscience.epfl.ch
The ever-growing number of edge devices (e.g., smartphones) and the exploding volume of
sensitive data they produce call for distributed machine learning techniques that are privacy …