Distributed optimization with arbitrary local solvers
C Ma, J Konečný, M Jaggi, V Smith, MI Jordan, P Richtárik, M Takáč - Optimization Methods and Software, 2017 - Taylor & Francis
With the growth of data and the necessity for distributed optimization methods, solvers that work
well on a single machine must be re-designed to leverage distributed computation. Recent …
Distributed coordinate descent method for learning with big data
P Richtárik, M Takáč - Journal of Machine Learning Research, 2016 - jmlr.org
In this paper we develop and analyze Hydra: HYbriD cooRdinAte descent method for
solving loss minimization problems with big data. We initially partition the coordinates …
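As a rough illustration of the partitioned scheme the abstract describes, here is a minimal single-process sketch of Hydra-style coordinate descent on a toy least-squares objective. The partitioning, the per-node sampling size tau, and the conservative step size 1/L are illustrative assumptions, not the paper's tuned parameters.

```python
# Minimal single-process sketch of Hydra-style partitioned coordinate descent
# on f(x) = 0.5 * ||A x - b||^2. Coordinates are split across `num_nodes`
# blocks; each "node" updates a random subset of its own coordinates using a
# shared gradient snapshot. Step size 1/L is deliberately conservative.
import numpy as np

rng = np.random.default_rng(0)
n, d, num_nodes, tau = 200, 50, 5, 3   # tau coordinates updated per node per round
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
L = np.linalg.norm(A, ord=2) ** 2      # crude global Lipschitz constant

partitions = np.array_split(rng.permutation(d), num_nodes)
x = np.zeros(d)
for _ in range(500):
    grad = A.T @ (A @ x - b)           # gradient snapshot shared by all nodes this round
    for part in partitions:            # in Hydra these loops run on separate machines
        S = rng.choice(part, size=min(tau, len(part)), replace=False)
        x[S] -= grad[S] / L            # each node touches only its sampled coordinates
print("final loss:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```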
A decentralized second-order method with exact linear convergence rate for consensus optimization
A Mokhtari, W Shi, Q Ling, A Ribeiro - IEEE Transactions on Signal and Information Processing over Networks, 2016 - ieeexplore.ieee.org
This paper considers decentralized consensus optimization problems where different
summands of a global objective function are available at nodes of a network that can …
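The paper's method is second-order with an exact linear rate; as a simpler, hedged illustration of the consensus setup it targets, below is first-order decentralized gradient descent (DGD) over a ring network. The node-local quadratics, mixing weights, and step size are chosen purely for illustration and are not the paper's algorithm.

```python
# Decentralized gradient descent (a first-order baseline for the consensus
# setting): node i holds a private f_i(x) = 0.5*||A_i x - b_i||^2 and mixes
# its iterate with its neighbors' via a doubly stochastic matrix W.
import numpy as np

rng = np.random.default_rng(1)
num_nodes, d = 6, 10
A = [rng.standard_normal((20, d)) for _ in range(num_nodes)]
b = [rng.standard_normal(20) for _ in range(num_nodes)]

# Doubly stochastic mixing matrix for a ring: self-weight 0.5, neighbors 0.25.
W = np.zeros((num_nodes, num_nodes))
for i in range(num_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % num_nodes] = 0.25
    W[i, (i + 1) % num_nodes] = 0.25

X = np.zeros((num_nodes, d))           # row i is node i's local copy of x
alpha = 1e-3
for _ in range(2000):
    grads = np.stack([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(num_nodes)])
    X = W @ X - alpha * grads          # mix with neighbors, then take a local step
print("consensus gap:", np.linalg.norm(X - X.mean(axis=0)))
```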
Coordinate descent with arbitrary sampling I: Algorithms and complexity
Z Qu, P Richtárik - Optimization Methods and Software, 2016 - Taylor & Francis
We study the problem of minimizing the sum of a smooth convex function and a convex
block-separable regularizer and propose a new randomized coordinate descent method …
Quartz: Randomized dual coordinate ascent with arbitrary sampling
Z Qu, P Richtárik, T Zhang - Advances in Neural Information Processing Systems, 2015 - proceedings.neurips.cc
We study the problem of minimizing the average of a large number of smooth convex
functions penalized with a strongly convex regularizer. We propose and analyze a novel …
SDNA: Stochastic dual Newton ascent for empirical risk minimization
Z Qu, P Richtárik, M Takáč, O Fercoq - International Conference on Machine Learning, 2016 - proceedings.mlr.press
We propose a new algorithm for minimizing regularized empirical loss: Stochastic Dual
Newton Ascent (SDNA). Our method is dual in nature: in each iteration we update a random …
On optimal probabilities in stochastic coordinate descent methods
P Richtárik, M Takáč - Optimization Letters, 2016 - Springer
We propose and analyze a new parallel coordinate descent method—NSync—in which at
each iteration a random subset of coordinates is updated, in parallel, allowing for the …
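A hedged sketch of the simplest serial instance of this setting: coordinate descent on a least-squares objective where coordinate i is sampled with probability proportional to its coordinate-wise Lipschitz constant L_i and updated with step 1/L_i. NSync itself updates a random subset per iteration; this single-coordinate case is only meant to show the role of non-uniform probabilities.

```python
# Stochastic coordinate descent on f(x) = 0.5*||A x - b||^2 with importance
# sampling p_i proportional to L_i (a classic non-uniform choice) and exact
# per-coordinate steps 1/L_i. The residual is maintained incrementally so
# each iteration costs O(n) instead of a full gradient.
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 50
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
Li = (A ** 2).sum(axis=0)              # coordinate-wise Lipschitz constants
p = Li / Li.sum()                      # importance-sampling probabilities

x = np.zeros(d)
residual = A @ x - b
for _ in range(20000):
    i = rng.choice(d, p=p)
    g_i = A[:, i] @ residual           # i-th partial derivative
    step = g_i / Li[i]
    x[i] -= step
    residual -= step * A[:, i]         # keep residual consistent with x
print("final loss:", 0.5 * residual @ residual)
```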
Stochastic dual coordinate ascent with adaptive probabilities
D Csiba, Z Qu, P Richtárik - International Conference on …, 2015 - proceedings.mlr.press
This paper introduces AdaSDCA: an adaptive variant of stochastic dual coordinate ascent
(SDCA) for solving the regularized empirical risk minimization problems. Our modification …
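To make the SDCA baseline concrete, here is a minimal sketch of vanilla SDCA with uniform sampling for ridge regression, using the closed-form dual coordinate update for the squared loss. AdaSDCA's contribution, per the abstract, is replacing the uniform choice of i with adaptively computed probabilities; the data and the regularizer lam below are illustrative.

```python
# Vanilla SDCA for min_w (1/n) * sum_i 0.5*(a_i^T w - y_i)^2 + (lam/2)*||w||^2.
# For the squared loss, the dual coordinate maximization has the closed form
# delta = (y_i - a_i^T w - alpha_i) / (1 + ||a_i||^2 / (lam * n)).
import numpy as np

rng = np.random.default_rng(3)
n, d, lam = 300, 20, 0.1
A = rng.standard_normal((n, d))
y = rng.standard_normal(n)

alpha = np.zeros(n)                    # dual variables, one per example
w = A.T @ alpha / (lam * n)            # primal iterate kept in sync with alpha
sq_norms = (A ** 2).sum(axis=1)
for _ in range(20 * n):
    i = rng.integers(n)                # uniform sampling; AdaSDCA adapts this
    delta = (y[i] - A[i] @ w - alpha[i]) / (1.0 + sq_norms[i] / (lam * n))
    alpha[i] += delta
    w += delta * A[i] / (lam * n)      # keep w(alpha) consistent
primal = 0.5 * np.mean((A @ w - y) ** 2) + 0.5 * lam * w @ w
print("primal objective:", primal)
```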
Applications of Lagrangian relaxation-based algorithms to industrial scheduling problems, especially in production workshop scenarios: A review
L Sun, R Yang, J Feng, G Guo - Journal of Process Control, 2024 - Elsevier
Industrial scheduling problems (ISPs), especially industrial production workshop scheduling
problems (IPWSPs) in various sectors like manufacturing and power, require allocating …
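As a self-contained illustration of the Lagrangian relaxation plus subgradient template such reviews cover, here is a toy sketch: relax the single coupling capacity constraint of a 0/1 knapsack with a multiplier, solve the now-separable per-item subproblems in closed form, and update the multiplier by a projected subgradient step. All data are synthetic, and the 1/k step rule is just one standard choice.

```python
# Lagrangian relaxation of a toy 0/1 knapsack: maximize profit @ x subject to
# weight @ x <= cap. Dualizing the capacity constraint with multiplier lam
# makes the inner maximization separable: take item j iff profit_j > lam*weight_j.
import numpy as np

rng = np.random.default_rng(4)
m = 30
profit = rng.uniform(1, 10, m)
weight = rng.uniform(1, 5, m)
cap = 0.3 * weight.sum()

lam, best_bound = 0.0, np.inf
for k in range(1, 201):
    x = (profit - lam * weight > 0).astype(float)   # separable subproblem per item
    dual_val = lam * cap + np.maximum(profit - lam * weight, 0).sum()
    best_bound = min(best_bound, dual_val)          # valid upper bound on the optimum
    subgrad = cap - weight @ x                      # d(dual)/d(lam)
    lam = max(0.0, lam - (1.0 / k) * subgrad)       # projected subgradient step
print("best dual (upper) bound:", best_bound)
```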
Coordinate descent with arbitrary sampling II: Expected separable overapproximation
Z Qu, P Richtárik - Optimization Methods and Software, 2016 - Taylor & Francis
The design and complexity analysis of randomized coordinate descent methods, and in
particular of variants which update a random subset (sampling) of coordinates in each …
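For reference, the central ESO inequality can be stated compactly; the formulation below follows the arbitrary-sampling literature, with notation that may differ from the paper's.

```latex
% Expected separable overapproximation (ESO): f admits an ESO with respect to
% a random sampling \hat{S} of coordinates, with parameters v_1,\dots,v_n > 0,
% if for all x, h \in \mathbb{R}^n,
\[
  \mathbf{E}\!\left[ f\!\left(x + h_{[\hat{S}]}\right) \right]
  \;\le\;
  f(x) + \sum_{i=1}^{n} p_i \left( \nabla_i f(x)\, h_i + \frac{v_i}{2}\, h_i^2 \right),
\]
% where p_i = \mathbf{P}(i \in \hat{S}) and h_{[\hat{S}]} zeroes out the
% coordinates of h outside \hat{S}. The parameters v_i then determine the
% admissible per-coordinate step sizes (roughly 1/v_i).
```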