Response theory and phase transitions for the thermodynamic limit of interacting identical systems

V Lucarini, GA Pavliotis, N Zagli - Proceedings of the …, 2020 - royalsocietypublishing.org
We study the response to perturbations in the thermodynamic limit of a network of coupled
identical agents undergoing a stochastic evolution which, in general, describes non …

Projected Wasserstein gradient descent for high-dimensional Bayesian inference

Y Wang, P Chen, W Li - SIAM/ASA Journal on Uncertainty Quantification, 2022 - SIAM
We propose a projected Wasserstein gradient descent method (pWGD) for high-dimensional
Bayesian inference problems. The underlying density function of a particle system of …
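For background, particle methods of this kind can be read as discretizations of the Wasserstein gradient flow of the KL divergence to the target posterior π; the flow and its associated particle velocity field are sketched below (notation ours, and this is the unprojected baseline rather than the pWGD projection step itself):

```latex
% Wasserstein gradient flow of F(\rho) = \mathrm{KL}(\rho \,\|\, \pi)
% and the particle velocity field that transports samples along it.
\begin{align}
  \partial_t \rho_t &= \nabla \cdot \Bigl( \rho_t \, \nabla \log \tfrac{\rho_t}{\pi} \Bigr), \\
  \dot X_t &= \nabla \log \pi(X_t) - \nabla \log \rho_t(X_t).
\end{align}
```

Implementing the second line requires estimating ∇ log ρ_t from the particle ensemble, which is the step that degrades in high dimension and which a projection onto a low-dimensional subspace is presumably meant to address.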

Modified Hamiltonian Monte Carlo for Bayesian inference

T Radivojević, E Akhmatskaya - Statistics and Computing, 2020 - Springer
The Hamiltonian Monte Carlo (HMC) method has been recognized as a powerful
sampling tool in computational statistics. We show that the performance of HMC can be …
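As background for this entry, a minimal sketch of a standard HMC transition (leapfrog integration plus a Metropolis correction) is given below; it is the baseline scheme, not the modified variant proposed in the cited paper, and the function names and parameters are illustrative.

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=20, rng=None):
    """One standard HMC transition for a 1-D parameter vector x."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(x.shape)            # resample momentum
    x_new, p_new = x.copy(), p.copy()

    # Leapfrog integration of H(x, p) = -log_prob(x) + |p|^2 / 2
    p_new += 0.5 * step_size * grad_log_prob(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new
        p_new += step_size * grad_log_prob(x_new)
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(x_new)

    # Metropolis accept/reject based on the change in total energy
    h_old = -log_prob(x) + 0.5 * np.dot(p, p)
    h_new = -log_prob(x_new) + 0.5 * np.dot(p_new, p_new)
    return x_new if np.log(rng.uniform()) < h_old - h_new else x
```

For instance, with `log_prob = lambda x: -0.5 * x @ x` and `grad_log_prob = lambda x: -x`, repeated calls sample a standard Gaussian.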

A unifying and canonical description of measure-preserving diffusions

A Barp, S Takao, M Betancourt, A Arnaudon… - arXiv preprint arXiv …, 2021 - arxiv.org
A complete recipe of measure-preserving diffusions in Euclidean space was recently
derived unifying several MCMC algorithms into a single framework. In this paper, we …
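The "complete recipe" referred to here is, to our understanding, the Ma-Chen-Fox construction of diffusions preserving a target density proportional to exp(-H(z)); as a background restatement (not the extension developed in the cited paper):

```latex
% Measure-preserving diffusion targeting p(z) \propto \exp(-H(z)):
% D(z) positive semidefinite (diffusion part), Q(z) antisymmetric (circulation part).
\begin{equation}
  \mathrm{d}z_t = \Bigl[ -\bigl(D(z_t)+Q(z_t)\bigr)\,\nabla H(z_t) + \Gamma(z_t) \Bigr]\,\mathrm{d}t
                + \sqrt{2\,D(z_t)}\;\mathrm{d}W_t,
  \qquad
  \Gamma_i(z) = \sum_{j} \partial_{z_j}\bigl(D_{ij}(z)+Q_{ij}(z)\bigr).
\end{equation}
```

Any admissible choice of D and Q leaves exp(-H) invariant, which is how the recipe unifies a range of MCMC samplers.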

Breaking reversibility accelerates Langevin dynamics for non-convex optimization

X Gao, M Gurbuzbalaban, L Zhu - Advances in Neural …, 2020 - proceedings.neurips.cc
Langevin dynamics (LD) has been proven to be a powerful technique for optimizing a non-
convex objective, as an efficient algorithm for finding local minima while eventually visiting a …
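A minimal sketch of the mechanism studied in this line of work: adding a constant antisymmetric (irreversible) drift to overdamped Langevin dynamics leaves the Gibbs measure proportional to exp(-βf) invariant while breaking detailed balance. The Euler-Maruyama discretization below is illustrative, not the authors' algorithm; the antisymmetric matrix `J` is a user-supplied design choice.

```python
import numpy as np

def nonreversible_langevin(grad_f, x0, J, beta=1.0, step=1e-3, n_steps=10_000, rng=None):
    """Euler--Maruyama discretization of
        dX_t = -(I + J) grad_f(X_t) dt + sqrt(2 / beta) dW_t,
    with J a constant antisymmetric matrix (J = -J^T), so that the
    Gibbs measure ~ exp(-beta * f) stays invariant but reversibility is broken.
    """
    rng = rng or np.random.default_rng()
    d = x0.size
    A = np.eye(d) + J                        # reversible drift + irreversible perturbation
    noise_scale = np.sqrt(2.0 * step / beta)
    x = x0.copy()
    for _ in range(n_steps):
        x = x - step * A @ grad_f(x) + noise_scale * rng.standard_normal(d)
    return x
```

The same antisymmetric-perturbation idea underlies the geometry-informed and Stratonovich variants cited below; the papers differ in how J is chosen or made state-dependent.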

Geometry-informed irreversible perturbations for accelerated convergence of Langevin dynamics

BJ Zhang, YM Marzouk, K Spiliopoulos - Statistics and Computing, 2022 - Springer
We introduce a novel geometry-informed irreversible perturbation that accelerates
convergence of the Langevin algorithm for Bayesian computation. It is well documented that …

Breaking reversibility accelerates Langevin dynamics for global non-convex optimization

X Gao, M Gurbuzbalaban, L Zhu - arXiv preprint arXiv:1812.07725, 2018 - arxiv.org
Langevin dynamics (LD) has been proven to be a powerful technique for optimizing a non-
convex objective, as an efficient algorithm for finding local minima while eventually visiting a …

Accelerated convergence to equilibrium and reduced asymptotic variance for Langevin dynamics using Stratonovich perturbations

A Abdulle, GA Pavliotis… - Comptes …, 2019 - comptes-rendus.academie-sciences …
In this article, we propose a new approach for sampling invariant measures in high-dimensional
spaces using a dynamics of …

Non-convex optimization via non-reversible stochastic gradient Langevin dynamics

Y Hu, X Wang, X Gao, M Gurbuzbalaban… - arXiv preprint arXiv …, 2020 - arxiv.org
Stochastic Gradient Langevin Dynamics (SGLD) is a powerful algorithm for optimizing a non-
convex objective, where controlled and properly scaled Gaussian noise is added to the …
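For reference, the baseline (reversible) SGLD iteration is sketched below; the cited work studies a non-reversible variant of this update. Function names and step-size choices are illustrative.

```python
import numpy as np

def sgld(stochastic_grad, x0, step=1e-4, beta=1.0, n_steps=10_000, rng=None):
    """SGLD iteration:
        x_{k+1} = x_k - step * g_k(x_k) + sqrt(2 * step / beta) * xi_k,
    where g_k is an unbiased stochastic (e.g. mini-batch) gradient estimate
    of the objective and xi_k is standard Gaussian noise.
    """
    rng = rng or np.random.default_rng()
    noise_scale = np.sqrt(2.0 * step / beta)
    x = x0.copy()
    for _ in range(n_steps):
        x = x - step * stochastic_grad(x) + noise_scale * rng.standard_normal(x.shape)
    return x
```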

Hamiltonian-assisted Metropolis sampling

Z Song, Z Tan - Journal of the American Statistical Association, 2023 - Taylor & Francis
Various Markov chain Monte Carlo (MCMC) methods are studied to improve upon
random walk Metropolis sampling, for simulation from complex distributions. Examples …
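Since the snippet frames these methods as improvements over random walk Metropolis, a minimal sketch of that baseline sampler follows (not the Hamiltonian-assisted scheme itself; names are illustrative).

```python
import numpy as np

def random_walk_metropolis(log_prob, x0, scale=0.5, n_steps=10_000, rng=None):
    """Baseline random walk Metropolis: Gaussian proposals accepted with
    probability min(1, pi(x') / pi(x))."""
    rng = rng or np.random.default_rng()
    x, lp = x0.copy(), log_prob(x0)
    samples = []
    for _ in range(n_steps):
        proposal = x + scale * rng.standard_normal(x.shape)
        lp_prop = log_prob(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = proposal, lp_prop
        samples.append(x.copy())
    return np.asarray(samples)
```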