Implicit learning dynamics in Stackelberg games: Equilibria characterization, convergence analysis, and empirical study

T Fiez, B Chasnov, L Ratliff - International Conference on …, 2020 - proceedings.mlr.press
Contemporary work on learning in continuous games has commonly overlooked the
hierarchical decision-making structure present in machine learning problems formulated as …

Do GANs always have Nash equilibria?

F Farnia, A Ozdaglar - International Conference on Machine …, 2020 - proceedings.mlr.press
Generative adversarial networks (GANs) represent a zero-sum game between two machine
players, a generator and a discriminator, designed to learn the distribution of data. While …

CoLa-Diff: Conditional latent diffusion model for multi-modal MRI synthesis

L Jiang, Y Mao, X Wang, X Chen, C Li - International Conference on …, 2023 - Springer
MRI synthesis promises to mitigate the challenge of missing MRI modalities in clinical practice.
Diffusion models have emerged as an effective technique for image synthesis by modelling …

Score-based diffusion models in function space

JH Lim, NB Kovachki, R Baptista, C Beckham… - arXiv preprint arXiv …, 2023 - arxiv.org
Diffusion models have recently emerged as a powerful framework for generative modeling.
They consist of a forward process that perturbs input data with Gaussian white noise and a …

On solving minimax optimization locally: A follow-the-ridge approach

Y Wang, G Zhang, J Ba - arXiv preprint arXiv:1910.07512, 2019 - arxiv.org
Many tasks in modern machine learning can be formulated as finding equilibria in *sequential*
games. In particular, two-player zero-sum sequential games, also known as …

Convergence of proximal point and extragradient-based methods beyond monotonicity: the case of negative comonotonicity

E Gorbunov, A Taylor, S Horváth… - … on Machine Learning, 2023 - proceedings.mlr.press
Algorithms for min-max optimization and variational inequalities are often studied under
monotonicity assumptions. Motivated by non-monotone machine learning applications, we …

Convergence of learning dynamics in Stackelberg games

T Fiez, B Chasnov, LJ Ratliff - arXiv preprint arXiv:1906.01217, 2019 - arxiv.org
This paper investigates the convergence of learning dynamics in Stackelberg games. In the
class of games we consider, there is a hierarchical game being played between a leader …

Combating mode collapse in GAN training: An empirical analysis using Hessian eigenvalues

R Durall, A Chatzimichailidis, P Labus… - arXiv preprint arXiv …, 2020 - arxiv.org
Generative adversarial networks (GANs) provide state-of-the-art results in image generation.
However, despite being so powerful, they still remain very challenging to train. This is in …

Adversarial example games

J Bose, G Gidel, H Berard… - Advances in neural …, 2020 - proceedings.neurips.cc
The existence of adversarial examples capable of fooling trained neural network classifiers
calls for a much better understanding of possible attacks to guide the development of …

Understanding GANs: fundamentals, variants, training challenges, applications, and open problems

Z Ahmad, ZA Jaffri, M Chen, S Bao - Multimedia Tools and Applications, 2024 - Springer
Generative adversarial networks (GANs), a novel framework for training generative models
in an adversarial setup, have attracted significant attention in recent years. The two …