Response theory and phase transitions for the thermodynamic limit of interacting identical systems
We study the response to perturbations in the thermodynamic limit of a network of coupled
identical agents undergoing a stochastic evolution which, in general, describes non …
Projected Wasserstein gradient descent for high-dimensional Bayesian inference
We propose a projected Wasserstein gradient descent method (pWGD) for high-dimensional
Bayesian inference problems. The underlying density function of a particle system of …
Modified Hamiltonian Monte Carlo for Bayesian inference
T Radivojević, E Akhmatskaya - Statistics and Computing, 2020 - Springer
The Hamiltonian Monte Carlo (HMC) method has been recognized as a powerful
sampling tool in computational statistics. We show that performance of HMC can be …
A unifying and canonical description of measure-preserving diffusions
A complete recipe of measure-preserving diffusions in Euclidean space was recently
derived unifying several MCMC algorithms into a single framework. In this paper, we …
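For orientation, the "complete recipe" this snippet refers to is usually written in the stochastic-gradient MCMC literature in the following standard form; the notation (H, D, Q, Γ) below is that standard convention, assumed here rather than taken from the paper itself:

    dz = f(z)\,dt + \sqrt{2 D(z)}\, dW_t,
    f(z) = -\bigl(D(z) + Q(z)\bigr)\nabla H(z) + \Gamma(z),
    \Gamma_i(z) = \sum_j \partial_{z_j}\bigl(D_{ij}(z) + Q_{ij}(z)\bigr),

where D(z) is positive semidefinite, Q(z) is skew-symmetric, and any diffusion of this form leaves the density proportional to exp(-H(z)) invariant.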
Breaking reversibility accelerates Langevin dynamics for non-convex optimization
Langevin dynamics (LD) has been proven to be a powerful technique for optimizing a non-
convex objective as an efficient algorithm to find local minima while eventually visiting a …
Geometry-informed irreversible perturbations for accelerated convergence of Langevin dynamics
We introduce a novel geometry-informed irreversible perturbation that accelerates
convergence of the Langevin algorithm for Bayesian computation. It is well documented that …
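To make the idea of an irreversible perturbation concrete, here is a minimal Python sketch, assuming a Gaussian target and a constant skew-symmetric matrix J; it shows only the generic construction, not the geometry-informed, position-dependent perturbation introduced in the paper above.

    import numpy as np

    def grad_log_pi(x, precision):
        # Gradient of the log-density of a zero-mean Gaussian with the given precision matrix.
        return -precision @ x

    def irreversible_langevin(n_steps, step, delta, precision, rng):
        # Euler-Maruyama discretization of dX = (I + delta*J) grad log pi(X) dt + sqrt(2) dW,
        # with J skew-symmetric; in continuous time the target pi stays invariant for any delta.
        d = precision.shape[0]
        J = np.zeros((d, d))
        J[0, 1], J[1, 0] = 1.0, -1.0        # constant skew-symmetric perturbation (assumed choice)
        drift_mat = np.eye(d) + delta * J
        x = np.zeros(d)
        samples = np.empty((n_steps, d))
        for k in range(n_steps):
            x = x + step * drift_mat @ grad_log_pi(x, precision) \
                + np.sqrt(2.0 * step) * rng.standard_normal(d)
            samples[k] = x
        return samples

    rng = np.random.default_rng(0)
    precision = np.array([[1.0, 0.3], [0.3, 2.0]])
    samples = irreversible_langevin(20_000, 1e-2, 1.0, precision, rng)
    print(samples.mean(axis=0))

As with the unadjusted Langevin algorithm, the discretized chain carries an O(step) bias; the irreversible term only affects the speed of convergence and the asymptotic variance, not the invariant measure of the continuous-time dynamics.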
Breaking reversibility accelerates Langevin dynamics for global non-convex optimization
Langevin dynamics (LD) has been proven to be a powerful technique for optimizing a non-
convex objective as an efficient algorithm to find local minima while eventually visiting a …
Accelerated convergence to equilibrium and reduced asymptotic variance for Langevin dynamics using Stratonovich perturbations
A Abdulle, GA Pavliotis… - Comptes …, 2019 - comptes-rendus.academie-sciences …
In this article, we propose a new approach to sampling invariant measures in high-dimensional spaces using a dynamics of …
Non-convex optimization via non-reversible stochastic gradient Langevin dynamics
Stochastic Gradient Langevin Dynamics (SGLD) is a powerful algorithm for optimizing a non-
convex objective, where a controlled and properly scaled Gaussian noise is added to the …
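The update the snippet describes is the plain SGLD step: a stochastic-gradient step on the potential U plus Gaussian noise scaled to the step size. A minimal sketch, assuming a user-supplied minibatch gradient estimate and leaving out the paper's non-reversible modification of the drift:

    import numpy as np

    def sgld_step(theta, grad_U_estimate, step, rng):
        # theta_{k+1} = theta_k - step * grad_U_hat(theta_k) + sqrt(2 * step) * N(0, I)
        noise = np.sqrt(2.0 * step) * rng.standard_normal(theta.shape)
        return theta - step * grad_U_estimate(theta) + noise

    # Toy usage with the exact gradient of U(theta) = ||theta||^2 / 2 (hypothetical example).
    rng = np.random.default_rng(1)
    theta = np.ones(3)
    for _ in range(5_000):
        theta = sgld_step(theta, lambda t: t, 1e-2, rng)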
Hamiltonian-assisted Metropolis sampling
Z Song, Z Tan - Journal of the American Statistical Association, 2023 - Taylor & Francis
Various Markov chain Monte Carlo (MCMC) methods are studied to improve upon
random walk Metropolis sampling, for simulation from complex distributions. Examples …