Rapid convergence of the unadjusted Langevin algorithm: Isoperimetry suffices

S Vempala, A Wibisono - Advances in neural information …, 2019 - proceedings.neurips.cc
Abstract We study the Unadjusted Langevin Algorithm (ULA) for sampling from a probability
distribution $\nu = e^{-f}$ on $\mathbb{R}^n$. We prove a convergence guarantee in Kullback …
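
For concreteness, the ULA iteration is the Euler–Maruyama discretization of the Langevin diffusion, $x_{k+1} = x_k - h\nabla f(x_k) + \sqrt{2h}\, z_k$ with $z_k \sim N(0, I)$. A minimal sketch (the step size and the Gaussian target below are illustrative choices, not from the paper):

```python
import numpy as np

def ula(grad_f, x0, h=0.01, n_steps=10_000, rng=None):
    """Unadjusted Langevin Algorithm: x_{k+1} = x_k - h*grad_f(x_k) + sqrt(2h)*z_k."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        x = x - h * grad_f(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

# Example: sample from nu = e^{-f} with f(x) = ||x||^2 / 2 (standard Gaussian target).
samples = ula(grad_f=lambda x: x, x0=np.zeros(2))
```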

Minimax mixing time of the Metropolis-adjusted Langevin algorithm for log-concave sampling

K Wu, S Schmidler, Y Chen - Journal of Machine Learning Research, 2022 - jmlr.org
We study the mixing time of the Metropolis-adjusted Langevin algorithm (MALA) for
sampling from a log-smooth and strongly log-concave distribution. We establish its optimal …
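
MALA corrects ULA's discretization bias with a Metropolis–Hastings accept/reject step, which makes $\nu = e^{-f}$ exactly stationary. A minimal sketch (the proposal is the standard ULA step; the step size is an illustrative choice):

```python
import numpy as np

def mala(f, grad_f, x0, h=0.05, n_steps=10_000, rng=None):
    """Metropolis-adjusted Langevin: ULA proposal plus accept/reject."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    # log density of the Gaussian proposal q(b | a), up to a constant
    log_q = lambda b, a: -np.sum((b - a + h * grad_f(a)) ** 2) / (4 * h)
    samples = []
    for _ in range(n_steps):
        y = x - h * grad_f(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)
        log_alpha = (f(x) - f(y)) + (log_q(x, y) - log_q(y, x))
        if np.log(rng.uniform()) < log_alpha:
            x = y
        samples.append(x.copy())
    return np.array(samples)

samples = mala(f=lambda x: 0.5 * x @ x, grad_f=lambda x: x, x0=np.zeros(2))
```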

Optimal dimension dependence of the Metropolis-adjusted Langevin algorithm

S Chewi, C Lu, K Ahn, X Cheng… - … on Learning Theory, 2021 - proceedings.mlr.press
Conventional wisdom in the sampling literature, backed by a popular diffusion scaling limit,
suggests that the mixing time of the Metropolis-Adjusted Langevin Algorithm (MALA) scales …

Fast mixing of Metropolized Hamiltonian Monte Carlo: Benefits of multi-step gradients

Y Chen, R Dwivedi, MJ Wainwright, B Yu - Journal of Machine Learning …, 2020 - jmlr.org
Hamiltonian Monte Carlo (HMC) is a state-of-the-art Markov chain Monte Carlo sampling
algorithm for drawing samples from smooth probability densities over continuous spaces …
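
The "multi-step gradients" in the title refer to the leapfrog integrator: each HMC proposal takes several gradient steps before a single accept/reject. A generic Metropolized-HMC sketch (the step size and leapfrog count are illustrative, not tuned values from the paper):

```python
import numpy as np

def hmc(f, grad_f, x0, eps=0.1, n_leapfrog=10, n_iters=5_000, rng=None):
    """Metropolized HMC: n_leapfrog gradient steps per proposal, then accept/reject."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        p = rng.standard_normal(x.shape)           # resample momentum
        x_new, p_new = x.copy(), p.copy()
        p_new -= 0.5 * eps * grad_f(x_new)         # leapfrog half step
        for i in range(n_leapfrog):
            x_new += eps * p_new
            if i < n_leapfrog - 1:
                p_new -= eps * grad_f(x_new)
        p_new -= 0.5 * eps * grad_f(x_new)         # closing half step
        # accept with prob min(1, exp(-dH)), H(x, p) = f(x) + ||p||^2 / 2
        dH = (f(x_new) + 0.5 * p_new @ p_new) - (f(x) + 0.5 * p @ p)
        if np.log(rng.uniform()) < -dH:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)
```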

Learning halfspaces with Massart noise under structured distributions

I Diakonikolas, V Kontonis… - … on learning theory, 2020 - proceedings.mlr.press
We study the problem of learning halfspaces with Massart noise in the distribution-specific
PAC model. We give the first computationally efficient algorithm for this problem with respect …
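
In the Massart model, each label of the target halfspace is flipped independently with an instance-dependent probability at most $\eta < 1/2$. A hypothetical data-generation sketch (the Gaussian marginal and the particular flip-rate function are illustrative assumptions, not from the paper):

```python
import numpy as np

def massart_samples(w, n, eta=0.3, rng=None):
    """Draw x ~ N(0, I_d), label y = sign(<w, x>), then flip each label
    independently with probability eta(x) <= eta < 1/2 (Massart condition)."""
    rng = rng or np.random.default_rng(0)
    X = rng.standard_normal((n, len(w)))
    y = np.sign(X @ w)
    # the adversary may pick any per-point flip rate in [0, eta]; this one is arbitrary
    flip_prob = eta * np.abs(np.sin(X[:, 0]))
    flips = rng.uniform(size=n) < flip_prob
    y[flips] *= -1
    return X, y

X, y = massart_samples(w=np.array([1.0, -2.0, 0.5]), n=1000)
```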

Faster convergence of stochastic gradient Langevin dynamics for non-log-concave sampling

D Zou, P Xu, Q Gu - Uncertainty in Artificial Intelligence, 2021 - proceedings.mlr.press
We provide a new convergence analysis of stochastic gradient Langevin dynamics (SGLD)
for sampling from a class of distributions that can be non-log-concave. At the core of our …
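
SGLD replaces ULA's full gradient with an unbiased minibatch estimate, so each step touches only a subset of the data. A minimal sketch assuming $f(x) = \sum_i f_i(x)$ (the batch size and step size are illustrative):

```python
import numpy as np

def sgld(grad_f_i, n_data, x0, h=1e-3, batch_size=32, n_steps=10_000, rng=None):
    """Stochastic gradient Langevin dynamics: ULA with a minibatch gradient estimate."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        idx = rng.choice(n_data, size=batch_size, replace=False)
        # unbiased estimate of grad f(x) = sum_i grad f_i(x)
        g = (n_data / batch_size) * sum(grad_f_i(x, i) for i in idx)
        x = x - h * g + np.sqrt(2 * h) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)
```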

Proximal Langevin algorithm: Rapid convergence under isoperimetry

A Wibisono - arXiv preprint arXiv:1911.01469, 2019 - arxiv.org
We study the Proximal Langevin Algorithm (PLA) for sampling from a probability distribution
$\nu = e^{-f}$ on $\mathbb{R}^n$ under isoperimetry. We prove a convergence guarantee …
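
One standard proximal-style step is the implicit (backward) discretization $x_{k+1} = \mathrm{prox}_{hf}(x_k + \sqrt{2h}\, z_k)$; the sketch below illustrates that generic form and may differ from the exact PLA scheme analyzed in the paper:

```python
import numpy as np

def proximal_langevin(prox_hf, x0, h=0.05, n_steps=10_000, rng=None):
    """Implicit (proximal) Langevin step: x_{k+1} = prox_{h f}(x_k + sqrt(2h) z_k)."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        x = prox_hf(x + np.sqrt(2 * h) * rng.standard_normal(x.shape))
        samples.append(x.copy())
    return np.array(samples)

h = 0.05
# For f(x) = ||x||^2 / 2, prox_{h f}(v) = v / (1 + h) in closed form.
samples = proximal_langevin(prox_hf=lambda v: v / (1 + h), x0=np.zeros(2), h=h)
```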

Bounding the error of discretized Langevin algorithms for non-strongly log-concave targets

AS Dalalyan, A Karagulyan, L Riou-Durand - Journal of Machine Learning …, 2022 - jmlr.org
In this paper, we provide non-asymptotic upper bounds on the error of sampling from a
target density over $\mathbb{R}^p$ using three schemes of discretized Langevin diffusions. The first …
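
Beyond the overdamped scheme shown earlier (ULA), a common alternative discretizes the kinetic (underdamped) Langevin diffusion, which tracks a velocity variable alongside the position. The Euler-type sketch below is a generic illustration, not necessarily one of the three schemes analyzed in the paper:

```python
import numpy as np

def kinetic_langevin_euler(grad_f, x0, h=0.01, gamma=2.0, n_steps=10_000, rng=None):
    """Euler discretization of kinetic Langevin:
       dv = -(gamma * v + grad_f(x)) dt + sqrt(2 * gamma) dW,  dx = v dt."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    samples = []
    for _ in range(n_steps):
        v = v - h * (gamma * v + grad_f(x)) \
            + np.sqrt(2 * gamma * h) * rng.standard_normal(x.shape)
        x = x + h * v
        samples.append(x.copy())
    return np.array(samples)
```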

Learning general halfspaces with general Massart noise under the Gaussian distribution

I Diakonikolas, DM Kane, V Kontonis… - Proceedings of the 54th …, 2022 - dl.acm.org
We study the problem of PAC learning halfspaces on $\mathbb{R}^d$ with Massart noise under the
Gaussian distribution. In the Massart model, an adversary is allowed to flip the label of each …

Provably robust score-based diffusion posterior sampling for plug-and-play image reconstruction

X Xu, Y Chi - arXiv preprint arXiv:2403.17042, 2024 - arxiv.org
In a great number of tasks in science and engineering, the goal is to infer an unknown image
from a small number of measurements collected from a known forward model describing …
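
A generic plug-and-play pattern behind such methods runs Langevin-type updates on the posterior, combining the likelihood gradient of a known forward model with a learned prior score. The sketch below illustrates that generic pattern for a linear-Gaussian model $y = Ax + \text{noise}$; it is not the paper's specific algorithm:

```python
import numpy as np

def posterior_langevin(score, A, y, sigma, x0, h=1e-4, n_steps=5_000, rng=None):
    """Langevin steps targeting log p(x|y) = log p(y|x) + log p(x), with a
    learned score standing in for grad log p(x) (plug-and-play prior)."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        grad_loglik = A.T @ (y - A @ x) / sigma**2   # grad_x log p(y|x)
        x = x + h * (grad_loglik + score(x)) \
            + np.sqrt(2 * h) * rng.standard_normal(x.shape)
    return x
```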