Push–pull gradient methods for distributed optimization in networks
In this article, we focus on solving a distributed convex optimization problem in a network,
where each agent has its own convex cost function and the goal is to minimize the sum of …
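The push–pull idea named in this entry can be illustrated with a minimal NumPy sketch. This is a generic sketch under assumptions, not the paper's exact method: a directed ring of three agents with quadratic local costs, a row-stochastic matrix R that "pulls" decision variables and a column-stochastic matrix C that "pushes" gradient trackers; the weights, step size, and iteration count are all invented for illustration.

```python
import numpy as np

# Toy problem: 3 agents on a directed ring, each with f_i(x) = 0.5*(x - b_i)^2,
# so the minimizer of sum_i f_i is mean(b) = 2.0.
b = np.array([1.0, 2.0, 3.0])
alpha, T = 0.05, 2000

# Assumed weights: R is row-stochastic (rows sum to 1), C is column-stochastic
# (columns sum to 1); column-stochasticity of C preserves the gradient sum.
R = np.array([[0.7, 0.0, 0.3],
              [0.4, 0.6, 0.0],
              [0.0, 0.2, 0.8]])
C = np.array([[0.6, 0.0, 0.5],
              [0.4, 0.3, 0.0],
              [0.0, 0.7, 0.5]])

grad = lambda x: x - b          # stacked local gradients
x = np.zeros(3)                 # local decision variables
y = grad(x)                     # trackers, initialized to local gradients

for _ in range(T):
    x_new = R @ (x - alpha * y)            # pull: mix iterates, then descend
    y = C @ y + grad(x_new) - grad(x)      # push: track the total gradient
    x = x_new
```

With these assumptions all agents approach the global minimizer, the mean of b, even though the graph is directed and neither weight matrix is doubly stochastic.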
A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates
This paper proposes a novel proximal-gradient algorithm for a decentralized optimization
problem with a composite objective containing smooth and nonsmooth terms. Specifically …
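The smooth-plus-nonsmooth composite structure this entry refers to can be sketched with a plain (centralized) proximal-gradient iteration on a lasso-type objective — a minimal illustration of the proximal step, not the paper's decentralized algorithm; the problem data, regularization weight, and step size are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Composite objective: F(x) = 0.5*||A x - b||^2 + lam*||x||_1  (smooth + nonsmooth).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam, T = 0.5, 500
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the smooth gradient

x = np.zeros(5)
for _ in range(T):
    g = A.T @ (A @ x - b)                            # gradient of the smooth term
    x = soft_threshold(x - step * g, step * lam)     # proximal-gradient step
```

Each iteration takes a gradient step on the smooth term only, then applies the closed-form proximal operator of the nonsmooth term — the same split that decentralized proximal-gradient methods distribute across a network.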
A dual approach for optimal algorithms in distributed optimization over networks
We study dual-based algorithms for distributed convex optimization problems over networks,
where the objective is to minimize a sum $\sum_{i=1}^{m} f_i(z)$ of functions over a network. We …
Communication-efficient distributed optimization in networks with gradient tracking and variance reduction
There is growing interest in large-scale machine learning and optimization over
decentralized networks, e.g., in the context of multi-agent learning and federated learning …
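The gradient-tracking building block this entry mentions can be shown in a short NumPy sketch. This is the basic gradient-tracking iteration on an undirected ring with a doubly stochastic mixing matrix — not the paper's communication-efficient, variance-reduced variant; the topology, weights, and step size are assumptions.

```python
import numpy as np

# Toy problem: 4 agents, each with local cost f_i(x) = 0.5*(x - b_i)^2.
# The minimizer of sum_i f_i is mean(b) = 2.5.
n, T, alpha = 4, 200, 0.1
b = np.array([1.0, 2.0, 3.0, 4.0])

# Assumed doubly stochastic mixing matrix for a 4-agent ring.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

grad = lambda x: x - b          # stacked local gradients
x = np.zeros(n)                 # local iterates
y = grad(x)                     # trackers, initialized to local gradients

for _ in range(T):
    x_new = W @ x - alpha * y              # consensus step + descent along tracked gradient
    y = W @ y + grad(x_new) - grad(x)      # track the average gradient
    x = x_new
```

The tracker update keeps the average of y equal to the average of the local gradients, which is what lets every agent converge to the global minimizer rather than to its own local one.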
Hybrid online learning control in networked multiagent systems: A survey
This survey paper studies deterministic control systems that integrate three of the most active
research areas of recent years: (1) online learning control systems, (2) distributed …
Byzantine-resilient multiagent optimization
We consider the problem of multiagent optimization wherein an unknown subset of agents
suffer Byzantine faults and thus behave adversarially. We assume that each agent i has a …
Decentralize and randomize: Faster algorithm for Wasserstein barycenters
P Dvurechenskii, D Dvinskikh… - Advances in …, 2018 - proceedings.neurips.cc
We study the decentralized distributed computation of discrete approximations for the
regularized Wasserstein barycenter of a finite set of continuous probability measures …
Bi-level ensemble method for unsupervised feature selection
Unsupervised feature selection is an important machine learning task and thus attracts
increasing attention. However, due to the absence of labels, unsupervised feature …
Optimal decentralized distributed algorithms for stochastic convex optimization
We consider stochastic convex optimization problems with affine constraints and develop
several methods using either a primal or a dual approach to solve them. In the primal case, we use …
Distributed non-convex first-order optimization and information processing: Lower complexity bounds and rate optimal algorithms
We consider a class of popular distributed non-convex optimization problems, in which
agents connected by a network collectively optimize a sum of smooth (possibly non …