Push–pull gradient methods for distributed optimization in networks

S Pu, W Shi, J Xu, A Nedić - IEEE Transactions on Automatic …, 2020 - ieeexplore.ieee.org
In this article, we focus on solving a distributed convex optimization problem in a network,
where each agent has its own convex cost function and the goal is to minimize the sum of …
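
For intuition, here is a minimal runnable sketch of a push-pull-style update for minimizing a sum of local costs over a directed network. The quadratic local costs f_i(x) = 0.5*||A_i x - b_i||^2, the graph, the step size alpha, and the iteration count are assumptions for the demo, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n, dim, alpha = 5, 3, 0.002
    A = [rng.standard_normal((10, dim)) for _ in range(n)]
    b = [rng.standard_normal(10) for _ in range(n)]

    def grad(i, x):  # gradient of agent i's local quadratic cost
        return A[i].T @ (A[i] @ x - b[i])

    # Directed ring with self-loops plus one extra edge. R (row-stochastic)
    # "pulls" decision variables; C (column-stochastic) "pushes" gradients.
    W = np.eye(n)
    for i in range(n):
        W[i, (i - 1) % n] = 1.0
    W[0, 2] = 1.0
    R = W / W.sum(axis=1, keepdims=True)
    C = W / W.sum(axis=0, keepdims=True)

    x = np.zeros((n, dim))
    y = np.array([grad(i, x[i]) for i in range(n)])  # y^0 = grad F(x^0)

    for _ in range(10000):
        x_next = R @ (x - alpha * y)  # pull step on the decision variable
        y = (C @ y
             + np.array([grad(i, x_next[i]) for i in range(n)])
             - np.array([grad(i, x[i]) for i in range(n)]))  # gradient tracking
        x = x_next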

A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates

Z Li, W Shi, M Yan - IEEE Transactions on Signal Processing, 2019 - ieeexplore.ieee.org
This paper proposes a novel proximal-gradient algorithm for a decentralized optimization
problem with a composite objective containing smooth and nonsmooth terms. Specifically …
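
A generic decentralized proximal-gradient step, sketched below in the same spirit (this is not the paper's exact algorithm): each agent averages with its neighbors, takes a local gradient step, then applies the proximal operator of the nonsmooth term, here taken to be r(x) = lam*||x||_1 for illustration.

    import numpy as np

    def soft_threshold(v, t):  # prox of t*||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_grad_step(X, grads, W, alpha, lam):
        """X, grads: (n_agents, dim) stacked iterates and local gradients;
        W: doubly stochastic mixing matrix; alpha: step size."""
        return soft_threshold(W @ X - alpha * grads, alpha * lam)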

A dual approach for optimal algorithms in distributed optimization over networks

CA Uribe, S Lee, A Gasnikov… - 2020 Information theory …, 2020 - ieeexplore.ieee.org
We study dual-based algorithms for distributed convex optimization problems over networks,
where the objective is to minimize a sum $\sum_{i=1}^{m} f_i(z)$ of functions over a network. We …
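
A sketch of the standard consensus reformulation that dual-based methods build on (generic notation, not necessarily the paper's):

    \min_{z \in \mathbb{R}^d} \sum_{i=1}^{m} f_i(z)
    \quad\Longleftrightarrow\quad
    \min_{x_1,\dots,x_m} \sum_{i=1}^{m} f_i(x_i)
    \quad \text{s.t.} \quad x_1 = x_2 = \dots = x_m.

The equality constraints are typically encoded through a matrix tied to the network topology, and (accelerated) gradient methods are then run on the dual of the constrained problem.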

Communication-efficient distributed optimization in networks with gradient tracking and variance reduction

B Li, S Cen, Y Chen, Y Chi - Journal of Machine Learning Research, 2020 - jmlr.org
There is growing interest in large-scale machine learning and optimization over
decentralized networks, e.g., in the context of multi-agent learning and federated learning …
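
For context, a sketch of the classical gradient-tracking recursion that methods in this line build on; in the variance-reduced setting the exact local gradients would be replaced by cheaper stochastic estimators. The mixing matrix W and the helper grad_fn are placeholders.

    import numpy as np

    def gradient_tracking_step(x, y, grad_fn, W, alpha):
        """One round of the classical gradient-tracking recursion:
            x^{k+1} = W x^k - alpha * y^k
            y^{k+1} = W y^k + grad F(x^{k+1}) - grad F(x^k)
        x, y: (n_agents, dim); W: doubly stochastic mixing matrix;
        grad_fn(X): stacked local gradients evaluated row-wise at X."""
        x_next = W @ x - alpha * y
        y_next = W @ y + grad_fn(x_next) - grad_fn(x)
        return x_next, y_next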

Hybrid online learning control in networked multiagent systems: A survey

JI Poveda, M Benosman, AR Teel - International Journal of …, 2019 - Wiley Online Library
This survey paper studies deterministic control systems that integrate three of the most active
research areas of recent years: (1) online learning control systems, (2) distributed …

Byzantine-resilient multiagent optimization

L Su, NH Vaidya - IEEE Transactions on Automatic Control, 2020 - ieeexplore.ieee.org
We consider the problem of multiagent optimization wherein an unknown subset of agents
suffer Byzantine faults and thus behave adversarially. We assume that each agent i has a …
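
One common robust-aggregation primitive in this literature is the coordinate-wise trimmed mean, sketched below; this is illustrative and not necessarily the exact rule analyzed in the paper. The parameter f is the assumed upper bound on the number of faulty agents.

    import numpy as np

    def trimmed_mean(vectors, f):
        """Coordinate-wise trimmed mean: per coordinate, drop the f largest
        and f smallest received values, then average the rest.
        vectors: array-like of shape (n_received, dim); needs n_received > 2*f."""
        V = np.sort(np.asarray(vectors), axis=0)  # sort each coordinate
        return V[f:len(V) - f].mean(axis=0)       # drop extremes, average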

Decentralize and randomize: Faster algorithm for Wasserstein barycenters

P Dvurechenskii, D Dvinskikh… - Advances in …, 2018 - proceedings.neurips.cc
We study the decentralized distributed computation of discrete approximations for the
regularized Wasserstein barycenter of a finite set of continuous probability measures …
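
For reference, the objective being approximated can be sketched as the following regularized barycenter problem (generic notation; the uniform weights and the entropic regularization parameter γ are standard choices, stated here as assumptions):

    \min_{p \in \Delta_n} \frac{1}{m} \sum_{i=1}^{m} \mathcal{W}_{\gamma}(p, q_i),

where \mathcal{W}_{\gamma} is the entropy-regularized optimal transport distance, q_1, ..., q_m are the input measures held by different agents, and \Delta_n is the probability simplex over the discrete support.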

Bi-level ensemble method for unsupervised feature selection

P Zhou, X Wang, L Du - Information Fusion, 2023 - Elsevier
Unsupervised feature selection is an important machine learning task and thus attracts
increasing attention. However, due to the absence of labels, unsupervised feature …

Optimal decentralized distributed algorithms for stochastic convex optimization

E Gorbunov, D Dvinskikh, A Gasnikov - arXiv preprint arXiv:1911.07363, 2019 - arxiv.org
We consider stochastic convex optimization problems with affine constraints and develop
several methods, using either a primal or a dual approach, to solve them. In the primal case, we use …

Distributed non-convex first-order optimization and information processing: Lower complexity bounds and rate optimal algorithms

H Sun, M Hong - IEEE Transactions on Signal Processing, 2019 - ieeexplore.ieee.org
We consider a class of popular distributed non-convex optimization problems, in which
agents connected by a network collectively optimize a sum of smooth (possibly non …