Rapid convergence of the unadjusted langevin algorithm: Isoperimetry suffices
S Vempala, A Wibisono - Advances in neural information …, 2019 - proceedings.neurips.cc
Abstract We study the Unadjusted Langevin Algorithm (ULA) for sampling from a probability
distribution $\nu = e^{-f}$ on $\mathbb{R}^n$. We prove a convergence guarantee in Kullback …
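For context, the ULA update this abstract refers to is the Euler–Maruyama discretization of the Langevin diffusion, $x_{k+1} = x_k - h\nabla f(x_k) + \sqrt{2h}\,\xi_k$. A minimal sketch; the step size and the standard-Gaussian target are illustrative choices, not taken from the paper:

```python
import numpy as np

def ula_sample(grad_f, x0, step, n_steps, rng):
    # ULA update: x_{k+1} = x_k - step * grad_f(x_k) + sqrt(2*step) * xi_k
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x - step * grad_f(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Illustrative target: standard Gaussian nu = e^{-f}, f(x) = ||x||^2 / 2, grad_f(x) = x.
rng = np.random.default_rng(0)
samples = np.array([ula_sample(lambda x: x, np.zeros(2), 0.05, 500, rng)
                    for _ in range(200)])
```

Because the discretization is never Metropolis-corrected, ULA's stationary distribution is slightly biased away from $\nu$, which is why its guarantees are stated up to a step-size-dependent error.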
Minimax mixing time of the Metropolis-adjusted Langevin algorithm for log-concave sampling
K Wu, S Schmidler, Y Chen - Journal of Machine Learning Research, 2022 - jmlr.org
We study the mixing time of the Metropolis-adjusted Langevin algorithm (MALA) for
sampling from a log-smooth and strongly log-concave distribution. We establish its optimal …
Optimal dimension dependence of the Metropolis-adjusted Langevin algorithm
Conventional wisdom in the sampling literature, backed by a popular diffusion scaling limit,
suggests that the mixing time of the Metropolis-Adjusted Langevin Algorithm (MALA) scales …
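MALA, the algorithm both of the entries above analyze, augments the Langevin proposal with a Metropolis–Hastings accept/reject step so the chain targets $\nu = e^{-f}$ exactly. A minimal sketch of one step; the proposal form is the standard one, while the target and step size below are illustrative:

```python
import numpy as np

def mala_step(f, grad_f, x, step, rng):
    # Propose from the Langevin kernel, then Metropolis-Hastings accept/reject.
    y = x - step * grad_f(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)

    def log_q(dst, src):
        # Log density (up to a constant) of the proposal N(src - step*grad_f(src), 2*step*I).
        diff = dst - (src - step * grad_f(src))
        return -np.dot(diff, diff) / (4.0 * step)

    log_alpha = f(x) - f(y) + log_q(x, y) - log_q(y, x)
    return (y, True) if np.log(rng.uniform()) < log_alpha else (x, False)
```

The accept/reject correction is what the dimension-dependence results are about: the accepted step size (and hence the mixing time) is governed by how fast the acceptance probability degrades with dimension.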
Fast mixing of Metropolized Hamiltonian Monte Carlo: Benefits of multi-step gradients
Hamiltonian Monte Carlo (HMC) is a state-of-the-art Markov chain Monte Carlo sampling
algorithm for drawing samples from smooth probability densities over continuous spaces …
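The "multi-step gradients" in the title refer to taking several leapfrog gradient steps per Metropolized proposal. A minimal sketch of one Metropolized HMC step, assuming a standard Gaussian momentum; the step size and leapfrog count are illustrative:

```python
import numpy as np

def hmc_step(f, grad_f, x, step, n_leapfrog, rng):
    # Leapfrog-integrate Hamiltonian dynamics for n_leapfrog gradient steps,
    # then Metropolis accept/reject based on the change in the Hamiltonian.
    p0 = rng.standard_normal(x.shape)
    q, p = x.copy(), p0 - 0.5 * step * grad_f(x)
    for i in range(n_leapfrog):
        q = q + step * p
        if i < n_leapfrog - 1:
            p = p - step * grad_f(q)
    p = p - 0.5 * step * grad_f(q)
    h_old = f(x) + 0.5 * np.dot(p0, p0)
    h_new = f(q) + 0.5 * np.dot(p, p)
    return q if np.log(rng.uniform()) < h_old - h_new else x
```

With `n_leapfrog = 1` this reduces (up to parameterization) to MALA; the paper's point is that taking multiple gradient steps per proposal improves the mixing-time dependence on dimension.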
Learning halfspaces with massart noise under structured distributions
I Diakonikolas, V Kontonis… - … on learning theory, 2020 - proceedings.mlr.press
We study the problem of learning halfspaces with Massart noise in the distribution-specific
PAC model. We give the first computationally efficient algorithm for this problem with respect …
Faster convergence of stochastic gradient langevin dynamics for non-log-concave sampling
We provide a new convergence analysis of stochastic gradient Langevin dynamics (SGLD)
for sampling from a class of distributions that can be non-log-concave. At the core of our …
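SGLD, as analyzed above, replaces the exact gradient of $f(x) = \sum_i f_i(x)$ in the Langevin update with an unbiased minibatch estimate. A minimal sketch; the minibatch scaling is the standard unbiased one, and the helper `grad_f_i` is an illustrative assumption, not the paper's notation:

```python
import numpy as np

def sgld_step(x, data, batch_size, grad_f_i, step, rng):
    # SGLD: replace the full gradient of f(x) = sum_i f_i(x) with an unbiased
    # minibatch estimate, then take an (unadjusted) Langevin step.
    idx = rng.choice(len(data), size=batch_size, replace=False)
    g = (len(data) / batch_size) * sum(grad_f_i(x, data[i]) for i in idx)
    return x - step * g + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
```

The minibatch noise adds a second source of error on top of the discretization bias, which is what a convergence analysis of SGLD has to control alongside the non-log-concavity of the target.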
Proximal langevin algorithm: Rapid convergence under isoperimetry
A Wibisono - arXiv preprint arXiv:1911.01469, 2019 - arxiv.org
We study the Proximal Langevin Algorithm (PLA) for sampling from a probability distribution
$\nu = e^{-f}$ on $\mathbb{R}^n$ under isoperimetry. We prove a convergence guarantee …
Bounding the error of discretized Langevin algorithms for non-strongly log-concave targets
In this paper, we provide non-asymptotic upper bounds on the error of sampling from a
target density over ℝ^p using three schemes of discretized Langevin diffusions. The first …
Learning general halfspaces with general massart noise under the gaussian distribution
We study the problem of PAC learning halfspaces on ℝ^d with Massart noise under the
Gaussian distribution. In the Massart model, an adversary is allowed to flip the label of each …
Provably robust score-based diffusion posterior sampling for plug-and-play image reconstruction
In a great number of tasks in science and engineering, the goal is to infer an unknown image
from a small number of measurements collected from a known forward model describing …