Closing the gap: Tighter analysis of alternating stochastic gradient methods for bilevel problems

T Chen, Y Sun, W Yin - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
Stochastic nested optimization, including stochastic compositional, min-max, and bilevel
optimization, is gaining popularity in many machine learning applications. While the three …
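
To make the alternating single-loop template concrete, here is a minimal sketch on a quadratic bilevel toy problem. This illustrates the generic alternating stochastic-gradient idea the paper analyzes, not the paper's ALSET method itself: the matrices, step sizes, and the closed-form hypergradient below are assumptions chosen to keep the example self-contained.

    import numpy as np

    # Alternating-gradient sketch for a quadratic bilevel toy problem (not
    # the paper's ALSET algorithm; A, b, and the step sizes are made up):
    #   upper level: min_x  f(y*(x)) = 0.5 * ||y*(x) - b||^2
    #   lower level: y*(x) = argmin_y 0.5 * ||y - A @ x||^2  (so y*(x) = A @ x)
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))
    b = rng.standard_normal(5)

    x, y = np.zeros(3), np.zeros(5)
    alpha, beta = 0.05, 0.5  # upper-level / lower-level step sizes

    for _ in range(500):
        # one lower-level gradient step toward y*(x) (inexact inner solve)
        y -= beta * (y - A @ x)
        # implicit (hyper)gradient of f(y*(x)); for this quadratic lower
        # level it reduces to A.T @ (y - b), evaluated at the inexact y
        x -= alpha * A.T @ (y - b)

    # the upper-level problem is equivalent to min_x ||A @ x - b||
    x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.linalg.norm(x - x_star))  # small: x tracks the solution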

Learning from history for Byzantine robust optimization

SP Karimireddy, L He, M Jaggi - International Conference on Machine Learning, 2021 - proceedings.mlr.press
Byzantine robustness has received significant attention recently given its importance for
distributed and federated learning. In spite of this, we identify severe flaws in existing …

FedNest: Federated bilevel, minimax, and compositional optimization

DA Tarzanagh, M Li… - International Conference on Machine Learning, 2022 - proceedings.mlr.press
Standard federated optimization methods successfully apply to stochastic problems with
single-level structure. However, many contemporary ML problems, including adversarial …

Faster single-loop algorithms for minimax optimization without strong concavity

J Yang, A Orvieto, A Lucchi… - International Conference on Artificial Intelligence and Statistics, 2022 - proceedings.mlr.press
Gradient descent ascent (GDA), the simplest single-loop algorithm for nonconvex minimax
optimization, is widely used in practical applications such as generative adversarial …
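
For reference, here is a minimal deterministic GDA loop on a toy strongly-convex-strongly-concave saddle problem. The objective and step size are illustrative assumptions, not from the paper, which studies the harder regime without strong concavity.

    # GDA sketch on min_x max_y f(x, y) = 0.5*x**2 + x*y - 0.5*y**2
    # (objective and step size are made up for illustration)
    x, y = 1.0, -1.0
    eta = 0.1

    for _ in range(200):
        gx = x + y     # df/dx
        gy = x - y     # df/dy
        x -= eta * gx  # descent step for the min player
        y += eta * gy  # ascent step for the max player

    print(x, y)  # both approach the saddle point (0, 0)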

Federated minimax optimization: Improved convergence analyses and algorithms

P Sharma, R Panda, G Joshi… - International Conference on Machine Learning, 2022 - proceedings.mlr.press
In this paper, we consider nonconvex minimax optimization, which is gaining prominence in
many modern machine learning applications, such as GANs. Large-scale edge-based …

A faster decentralized algorithm for nonconvex minimax problems

W Xian, F Huang, Y Zhang… - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
In this paper, we study the nonconvex-strongly-concave minimax optimization problem in a
decentralized setting. Minimax problems are attracting increasing attention because of …

Stochastic gradient descent-ascent and consensus optimization for smooth games: Convergence analysis under expected co-coercivity

N Loizou, H Berard, G Gidel… - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
Two of the most prominent algorithms for solving unconstrained smooth games are the
classical stochastic gradient descent-ascent (SGDA) and the recently introduced stochastic …
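
A rough sketch contrasting plain SGDA with consensus optimization on the classic bilinear game f(x, y) = x*y. The game is a standard toy, not taken from the paper; the noise scale, step sizes, and regularization weight gamma are made-up values, and the consensus term is hard-coded for this particular game.

    import numpy as np

    # SGDA vs. consensus optimization on the bilinear game f(x, y) = x*y
    rng = np.random.default_rng(0)

    def run(gamma, steps=2000, eta=0.05, sigma=0.1):
        x, y = 1.0, 1.0
        for _ in range(steps):
            # stochastic game field v = (df/dx, -df/dy) + noise
            vx = y + sigma * rng.standard_normal()
            vy = -x + sigma * rng.standard_normal()
            # consensus term: grad of 0.5*||v||^2 = 0.5*(x**2 + y**2) here
            x -= eta * (vx + gamma * x)
            y -= eta * (vy + gamma * y)
        return np.hypot(x, y)  # distance to the equilibrium (0, 0)

    print("SGDA     :", run(gamma=0.0))  # spirals away from the equilibrium
    print("consensus:", run(gamma=0.5))  # contracts to a small noise ball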

Single-call stochastic extragradient methods for structured non-monotone variational inequalities: Improved analysis under weaker conditions

S Choudhury, E Gorbunov… - Advances in Neural Information Processing Systems, 2024 - proceedings.neurips.cc
Single-call stochastic extragradient methods, like stochastic past extragradient (SPEG) and
stochastic optimistic gradient (SOG), have gained a lot of interest in recent years and are …
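
A minimal single-call sketch in the stochastic optimistic gradient (SOG) style on the same kind of bilinear toy game: each iteration makes one fresh operator evaluation and reuses the previous one for the extrapolation. The game, noise level, and step size are illustrative assumptions.

    import numpy as np

    # Single-call stochastic optimistic gradient on a bilinear toy game
    rng = np.random.default_rng(0)

    def operator(x, y, sigma=0.1):
        # stochastic game operator g = (df/dx, -df/dy) for f(x, y) = x*y
        return np.array([y, -x]) + sigma * rng.standard_normal(2)

    z = np.array([1.0, 1.0])
    eta = 0.1
    g_prev = operator(*z)

    for _ in range(2000):
        g = operator(*z)                # the single operator call this step
        z = z - eta * (2 * g - g_prev)  # optimistic (past-gradient) update
        g_prev = g

    print(z)  # approaches the equilibrium (0, 0) up to noise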

Accelerated zeroth-order and first-order momentum methods from mini to minimax optimization

F Huang, S Gao, J Pei, H Huang - Journal of Machine Learning Research, 2022 - jmlr.org
In this paper, we propose a class of accelerated zeroth-order and first-order momentum
methods for both nonconvex mini-optimization and minimax-optimization. Specifically, we …
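
To illustrate the zeroth-order side, here is a sketch of a two-point gradient estimator combined with momentum on a toy quadratic. The test function, smoothing radius mu, and momentum constant beta are assumptions; this is not the paper's specific accelerated method.

    import numpy as np

    # Two-point zeroth-order gradient estimate with momentum on a toy
    # quadratic (f, mu, beta, and eta are made up for illustration)
    rng = np.random.default_rng(0)

    def f(x):
        return 0.5 * np.sum(x ** 2)  # smooth toy objective

    d = 10
    x = rng.standard_normal(d)
    m = np.zeros(d)  # momentum buffer
    eta, mu, beta = 0.1, 1e-4, 0.9

    for _ in range(500):
        u = rng.standard_normal(d)
        # two function values give a directional-derivative estimate of
        # the gradient along the random direction u
        g = (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
        m = beta * m + (1 - beta) * g  # momentum on the ZO estimate
        x -= eta * m

    print(f(x))  # decreases toward the minimum value 0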

Scalable primal-dual actor-critic method for safe multi-agent RL with general utilities

D Ying, Y Zhang, Y Ding, A Koppel… - Advances in Neural Information Processing Systems, 2024 - proceedings.neurips.cc
We investigate safe multi-agent reinforcement learning, where agents seek to collectively
maximize an aggregate sum of local objectives while satisfying their own safety constraints …
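
A drastically simplified single-agent primal-dual sketch of the constrained template behind such methods: ascend a Lagrangian in the policy parameters while updating a nonnegative multiplier on the safety constraint. The reward, cost, budget, and step sizes are made up for illustration and are not the paper's multi-agent actor-critic.

    import numpy as np

    # Primal-dual sketch:  max_theta r(theta)  s.t.  c(theta) <= b,
    # via the Lagrangian L(theta, lam) = r(theta) - lam * (c(theta) - b)
    def r(theta): return -np.sum((theta - 2.0) ** 2)  # reward, peak at 2
    def c(theta): return np.sum(theta ** 2)           # safety cost
    b = 1.0

    theta, lam = np.zeros(3), 0.0
    eta_p, eta_d = 0.05, 0.05

    for _ in range(2000):
        # primal ascent on the Lagrangian
        theta += eta_p * (-2 * (theta - 2.0) - lam * 2 * theta)
        # dual ascent on the violation, projected onto lam >= 0
        lam = max(0.0, lam + eta_d * (c(theta) - b))

    print(r(theta), c(theta), lam)  # cost settles near b, with lam > 0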