Rapid convergence of the unadjusted Langevin algorithm: Isoperimetry suffices
S Vempala, A Wibisono - Advances in neural information …, 2019 - proceedings.neurips.cc
We study the Unadjusted Langevin Algorithm (ULA) for sampling from a probability
distribution $\nu = e^{-f}$ on $\mathbb{R}^n$. We prove a convergence guarantee in Kullback …
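For concreteness, ULA iterates $x_{k+1} = x_k - h\,\nabla f(x_k) + \sqrt{2h}\,\xi_k$ with $\xi_k \sim \mathcal{N}(0, I)$, the Euler-Maruyama discretization of the Langevin diffusion $dX_t = -\nabla f(X_t)\,dt + \sqrt{2}\,dB_t$. A minimal NumPy sketch; the Gaussian target and step size are illustrative choices, not from the paper:

```python
import numpy as np

def ula(grad_f, x0, step, n_iters, seed=0):
    """Unadjusted Langevin Algorithm: x <- x - step * grad_f(x) + sqrt(2 * step) * N(0, I)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x - step * grad_f(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Example: nu = e^{-f} with f(x) = ||x||^2 / 2 (standard Gaussian), so grad_f(x) = x.
sample = ula(grad_f=lambda x: x, x0=np.zeros(5), step=0.01, n_iters=10_000)
```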
Towards a theory of non-log-concave sampling: first-order stationarity guarantees for Langevin Monte Carlo
K Balasubramanian, S Chewi… - … on Learning Theory, 2022 - proceedings.mlr.press
For the task of sampling from a density $\pi \propto \exp(-V)$ on $\mathbb{R}^d$, where $V$ is
possibly non-convex but $L$-gradient Lipschitz, we prove that averaged Langevin Monte …
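Here "averaged" LMC outputs an iterate drawn uniformly at random from the trajectory, so the guarantee applies to the averaged law of the iterates. A sketch of that output rule; the double-well potential is an illustrative stand-in for a non-convex $V$:

```python
import numpy as np

def averaged_lmc(grad_V, x0, step, n_iters, seed=0):
    """LMC whose output is an iterate drawn uniformly at random from the trajectory."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    iterates = []
    for _ in range(n_iters):
        x = x - step * grad_V(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        iterates.append(x)
    return iterates[rng.integers(n_iters)]

# Example with a non-convex double-well potential V(x) = (x^2 - 1)^2.
sample = averaged_lmc(grad_V=lambda x: 4 * x * (x**2 - 1), x0=np.zeros(1), step=1e-3, n_iters=50_000)
```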
Sampling can be faster than optimization
YA Ma, Y Chen, C Jin, N Flammarion, MI Jordan - Proceedings of the National Academy of Sciences, 2019
Optimization algorithms and Monte Carlo sampling algorithms have provided the
computational foundations for the rapid growth in applications of statistical machine learning …
On sampling from a log-concave density using kinetic Langevin diffusions
AS Dalalyan, L Riou-Durand - 2020 - projecteuclid.org
Langevin diffusion processes and their discretizations are often used for sampling from a
target density. The most convenient framework for assessing the quality of such a sampling …
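The kinetic (underdamped) Langevin diffusion augments the state with a velocity: $dX_t = V_t\,dt$, $dV_t = -(\gamma V_t + \nabla f(X_t))\,dt + \sqrt{2\gamma}\,dB_t$. The paper analyzes more careful discretizations; the plain Euler-Maruyama sketch below only conveys the dynamics, with illustrative parameters:

```python
import numpy as np

def kinetic_langevin(grad_f, x0, gamma, step, n_iters, seed=0):
    """Euler-Maruyama discretization of the kinetic Langevin diffusion
    dX = V dt,  dV = -(gamma * V + grad_f(X)) dt + sqrt(2 * gamma) dB."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_iters):
        x = x + step * v
        v = v - step * (gamma * v + grad_f(x)) + np.sqrt(2 * gamma * step) * rng.standard_normal(x.shape)
    return x

sample = kinetic_langevin(grad_f=lambda x: x, x0=np.zeros(5), gamma=2.0, step=0.01, n_iters=10_000)
```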
Is there an analog of Nesterov acceleration for gradient-based MCMC?
YA Ma, NS Chatterji, X Cheng, N Flammarion, PL Bartlett, MI Jordan - Bernoulli, 2021
We formulate gradient-based Markov chain Monte Carlo (MCMC) sampling as optimization
on the space of probability measures, with Kullback–Leibler (KL) divergence as the …
The randomized midpoint method for log-concave sampling
R Shen, YT Lee - Advances in Neural Information …, 2019 - proceedings.neurips.cc
Sampling from log-concave distributions is a well-researched problem that has many
applications in statistics and machine learning. We study the distributions of the form …
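Shen and Lee state the randomized midpoint method for the underdamped diffusion; the sketch below applies the same idea to the overdamped Langevin diffusion for brevity: each step evaluates the gradient at a uniformly random time inside the step, sharing the Brownian increments between the midpoint estimate and the full update. Parameters are illustrative:

```python
import numpy as np

def randomized_midpoint_langevin(grad_f, x0, step, n_iters, seed=0):
    """Randomized midpoint discretization of overdamped Langevin: the gradient is
    evaluated at a state estimated at a uniformly random fraction of the step."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        a = rng.uniform()                                                 # random fraction of the step
        w1 = np.sqrt(2 * a * step) * rng.standard_normal(x.shape)         # Brownian increment on [0, a*h]
        x_mid = x - a * step * grad_f(x) + w1                             # coarse midpoint estimate
        w2 = np.sqrt(2 * (1 - a) * step) * rng.standard_normal(x.shape)   # Brownian increment on [a*h, h]
        x = x - step * grad_f(x_mid) + w1 + w2                            # full step with midpoint gradient
    return x

sample = randomized_midpoint_langevin(grad_f=lambda x: x, x0=np.zeros(5), step=0.05, n_iters=2_000)
```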
Improved discretization analysis for underdamped Langevin Monte Carlo
S Zhang, S Chewi, M Li, K Balasubramanian, MA Erdogdu - Conference on Learning Theory, 2023 - proceedings.mlr.press
Underdamped Langevin Monte Carlo (ULMC) is an algorithm used to sample from
unnormalized densities by leveraging the momentum of a particle moving in a potential well …
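A common way to discretize ULMC is to integrate the velocity's Ornstein-Uhlenbeck (friction plus noise) part exactly; the OBABO splitting below does this over half steps. It is a standard scheme in this family, not necessarily the exact discretization the paper analyzes, and the parameters are illustrative:

```python
import numpy as np

def ulmc_obabo(grad_f, x0, gamma, step, n_iters, seed=0):
    """OBABO splitting for underdamped Langevin: the friction-noise (O) part is an
    Ornstein-Uhlenbeck process, integrated in closed form over half steps."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    c, s = np.exp(-gamma * step / 2), np.sqrt(1 - np.exp(-gamma * step))
    for _ in range(n_iters):
        v = c * v + s * rng.standard_normal(x.shape)   # O: exact OU half step
        v = v - (step / 2) * grad_f(x)                 # B: half gradient kick
        x = x + step * v                               # A: position drift
        v = v - (step / 2) * grad_f(x)                 # B: half gradient kick
        v = c * v + s * rng.standard_normal(x.shape)   # O: exact OU half step
    return x

sample = ulmc_obabo(grad_f=lambda x: x, x0=np.zeros(5), gamma=2.0, step=0.05, n_iters=5_000)
```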
Global non-convex optimization with discretized diffusions
MA Erdogdu, L Mackey… - Advances in Neural …, 2018 - proceedings.neurips.cc
An Euler discretization of the Langevin diffusion is known to converge to the global
minimizers of certain convex and non-convex optimization problems. We show that this …
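The optimization connection comes from running the discretized diffusion at inverse temperature $\beta$: the stationary density $\propto e^{-\beta f}$ concentrates near the global minimizers of $f$ as $\beta$ grows. A minimal sketch with an illustrative non-convex objective:

```python
import numpy as np

def langevin_optimizer(grad_f, x0, step, beta, n_iters, seed=0):
    """Euler discretization of dX = -grad_f(X) dt + sqrt(2 / beta) dB; for large beta
    the stationary density ~ exp(-beta * f) concentrates near global minimizers."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x - step * grad_f(x) + np.sqrt(2 * step / beta) * rng.standard_normal(x.shape)
    return x

# Example: non-convex f(x) = (x^2 - 1)^2 with global minimizers at x = +/- 1.
x_opt = langevin_optimizer(grad_f=lambda x: 4 * x * (x**2 - 1), x0=np.array([3.0]),
                           step=1e-3, beta=20.0, n_iters=100_000)
```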
Online stochastic gradient descent on non-convex losses from high-dimensional inference
G Ben Arous, R Gheissari, A Jagannath - Journal of Machine Learning Research, 2021
Stochastic gradient descent (SGD) is a popular algorithm for optimization problems arising
in high-dimensional inference tasks. Here one produces an estimator of an unknown …
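In the online setting each step consumes a fresh sample, so gradients are unbiased but noisy estimates of the population loss and data is never revisited. A minimal sketch; the linear-observation model and all parameter values are a simple stand-in for the paper's high-dimensional inference tasks:

```python
import numpy as np

def online_sgd(theta0, sample_stream, grad_loss, lr, n_steps):
    """Online SGD: one fresh sample per step, no reuse of past data."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_steps):
        a, y = next(sample_stream)
        theta = theta - lr * grad_loss(theta, a, y)
    return theta

# Illustration: noisy linear observations y = <theta_star, a> + noise, with
# squared loss whose per-sample gradient is (theta . a - y) * a.
rng = np.random.default_rng(0)
d = 100
theta_star = np.ones(d) / 10.0
def stream():
    while True:
        a = rng.standard_normal(d)
        yield a, a @ theta_star + 0.1 * rng.standard_normal()

theta_hat = online_sgd(np.zeros(d), stream(),
                       lambda t, a, y: (t @ a - y) * a, lr=5e-3, n_steps=20_000)
```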
On the convergence of Langevin Monte Carlo: The interplay between tail growth and smoothness
MA Erdogdu, R Hosseinzadeh - Conference on Learning …, 2021 - proceedings.mlr.press
We study sampling from a target distribution $\nu_* = e^{-f}$ using the unadjusted Langevin
Monte Carlo (LMC) algorithm. For any potential function $f$ whose tails behave like …
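The update is the same LMC/ULA iteration as above; what the tail-growth condition changes is which potentials are admissible and how the step size must scale. A sketch with a smooth potential $f(x) = (1 + \|x\|^2)^{\alpha/2}$ whose tails grow like $\|x\|^\alpha$; the value of $\alpha$ and the step size are illustrative:

```python
import numpy as np

def lmc(grad_f, x0, step, n_iters, seed=0):
    """Unadjusted Langevin Monte Carlo: the same update as ULA above."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x - step * grad_f(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Smooth potential with ||x||^alpha tail growth: f(x) = (1 + ||x||^2)^(alpha / 2),
# whose gradient is alpha * x * (1 + ||x||^2)^(alpha / 2 - 1).
alpha = 1.5
def grad_f(x):
    return alpha * x * (1.0 + x @ x) ** (alpha / 2 - 1)

sample = lmc(grad_f, x0=np.zeros(5), step=0.01, n_iters=10_000)
```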