Closing the gap: Tighter analysis of alternating stochastic gradient methods for bilevel problems
Stochastic nested optimization, including stochastic compositional, min-max, and bilevel
optimization, is gaining popularity in many machine learning applications. While the three …
Learning from history for Byzantine robust optimization
Byzantine robustness has received significant attention recently given its importance for
distributed and federated learning. In spite of this, we identify severe flaws in existing …
FedNest: Federated bilevel, minimax, and compositional optimization
DA Tarzanagh, M Li… - … on Machine Learning, 2022 - proceedings.mlr.press
Standard federated optimization methods successfully apply to stochastic problems with
single-level structure. However, many contemporary ML problems, including adversarial …
Faster single-loop algorithms for minimax optimization without strong concavity
Gradient descent ascent (GDA), the simplest single-loop algorithm for nonconvex minimax
optimization, is widely used in practical applications such as generative adversarial …
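The snippet above names gradient descent ascent (GDA) as the simplest single-loop algorithm for minimax problems. A minimal sketch of the simultaneous descent-ascent update on a toy strongly-convex-strongly-concave saddle problem (an illustrative assumption; the cited paper targets the harder setting without strong concavity) might look like:

```python
# Toy saddle problem:  min_x max_y  f(x, y) = 0.5*x**2 + x*y - 0.5*y**2
# (unique saddle point at (0, 0); chosen for illustration only).
def gda(x0, y0, lr=0.1, steps=500):
    x, y = x0, y0
    for _ in range(steps):
        gx = x + y  # df/dx
        gy = x - y  # df/dy
        # single-loop simultaneous update: descend in x, ascend in y
        x, y = x - lr * gx, y + lr * gy
    return x, y

x, y = gda(1.0, 1.0)
# iterates spiral into the saddle point (0, 0)
```

On this problem plain GDA with a small step size contracts linearly; the difficulty the paper addresses is that such guarantees break down once the concavity in y is dropped.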
Federated minimax optimization: Improved convergence analyses and algorithms
In this paper, we consider nonconvex minimax optimization, which is gaining prominence in
many modern machine learning applications, such as GANs. Large-scale edge-based …
A faster decentralized algorithm for nonconvex minimax problems
In this paper, we study the nonconvex-strongly-concave minimax optimization problem in the
decentralized setting. Minimax problems are attracting increasing attention because of …
Stochastic gradient descent-ascent and consensus optimization for smooth games: Convergence analysis under expected co-coercivity
Two of the most prominent algorithms for solving unconstrained smooth games are the
classical stochastic gradient descent-ascent (SGDA) and the recently introduced stochastic …
Single-call stochastic extragradient methods for structured non-monotone variational inequalities: Improved analysis under weaker conditions
S Choudhury, E Gorbunov… - Advances in Neural …, 2024 - proceedings.neurips.cc
Single-call stochastic extragradient methods, like stochastic past extragradient (SPEG) and
stochastic optimistic gradient (SOG), have gained a lot of interest in recent years and are …
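The single-call methods named in the snippet above (SPEG, SOG) reuse the previous operator evaluation instead of making a second call per iteration. A minimal deterministic sketch of the optimistic update z_{k+1} = z_k - lr*(2*F(z_k) - F(z_{k-1})) on a toy bilinear game (an illustrative assumption; the stochastic versions replace F with a sampled estimate) might look like:

```python
# Toy bilinear game f(x, y) = x*y with operator F(x, y) = (y, -x);
# plain GDA cycles outward on this game, while the optimistic
# correction term pulls the iterates to the solution (0, 0).
def optimistic_gradient(z0, lr=0.1, steps=4000):
    F = lambda x, y: (y, -x)      # monotone operator of the game
    x, y = z0
    gx_prev, gy_prev = F(x, y)    # initialize the memorized evaluation
    for _ in range(steps):
        gx, gy = F(x, y)          # the single operator call per iteration
        x -= lr * (2 * gx - gx_prev)
        y -= lr * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy # reuse this evaluation next iteration
    return x, y

x, y = optimistic_gradient((1.0, 1.0))
```

The "single-call" property is visible in the loop body: each iteration evaluates F once and combines it with the stored evaluation from the previous step, halving the per-iteration cost relative to extragradient.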
Accelerated zeroth-order and first-order momentum methods from mini to minimax optimization
In this paper, we propose a class of accelerated zeroth-order and first-order momentum
methods for both nonconvex mini-optimization and minimax optimization. Specifically, we …
Scalable primal-dual actor-critic method for safe multi-agent RL with general utilities
We investigate safe multi-agent reinforcement learning, where agents seek to collectively
maximize an aggregate sum of local objectives while satisfying their own safety constraints …