An optimization-centric view on Bayes' rule: Reviewing and generalizing variational inference

J Knoblauch, J Jewson, T Damoulas - Journal of Machine Learning …, 2022 - jmlr.org
We advocate an optimization-centric view of Bayesian inference. Our inspiration is the
representation of Bayes' rule as infinite-dimensional optimization (Csiszár, 1975; Donsker …
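The variational representation the abstract refers to is standard (Csiszár, 1975; Donsker and Varadhan, 1975): the Bayesian posterior is the unique solution of an infinite-dimensional optimization over probability measures. A sketch of the identity, writing π for the prior, p(x|θ) for the likelihood, and 𝒫(Θ) for the probability measures on the parameter space:

```latex
q^{*}(\theta)
  \;=\; \frac{p(x \mid \theta)\,\pi(\theta)}{\int p(x \mid \theta')\,\pi(\theta')\,d\theta'}
  \;=\; \operatorname*{arg\,min}_{q \in \mathcal{P}(\Theta)}
        \Big\{ \mathbb{E}_{q}\big[-\log p(x \mid \theta)\big]
               \;+\; \mathrm{KL}\big(q \,\|\, \pi\big) \Big\}
```

Variational inference restricts the minimization to a tractable family of distributions, which is what makes this view a natural starting point for generalizations.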

Don't blame the ELBO! A linear VAE perspective on posterior collapse

J Lucas, G Tucker, RB Grosse… - Advances in Neural …, 2019 - proceedings.neurips.cc
Posterior collapse in Variational Autoencoders (VAEs) with uninformative priors
arises when the variational posterior distribution closely matches the prior for a subset of …

Differentiable samplers for deep latent variable models

A Doucet, E Moulines, A Thin - … Transactions of the …, 2023 - royalsocietypublishing.org
Latent variable models are a popular class of models in statistics. Combined with neural
networks to improve their expressivity, the resulting deep latent variable models have also …

Understanding posterior collapse in generative latent variable models

J Lucas, G Tucker, R Grosse, M Norouzi - 2019 - openreview.net
Posterior collapse in Variational Autoencoders (VAEs) arises when the variational
distribution closely matches the uninformative prior for a subset of latent variables. This …
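Collapse of this kind is straightforward to diagnose numerically: for a factorized Gaussian encoder, the per-dimension KL to a standard normal prior is available in closed form, and a collapsed dimension is one whose KL is near zero across essentially all inputs. A minimal sketch (the function name and threshold are illustrative, not from the paper):

```python
import numpy as np

def collapsed_dims(mu, log_sigma, eps=0.01):
    """Flag latent dimensions that have collapsed to the N(0, I) prior.

    mu, log_sigma: (n_data, n_latent) arrays of encoder outputs for a
    factorized Gaussian variational posterior q(z|x).
    """
    # Closed-form KL(N(mu, sigma^2) || N(0, 1)), per example and dimension.
    kl = 0.5 * (np.exp(2.0 * log_sigma) + mu**2 - 1.0 - 2.0 * log_sigma)
    # A dimension is collapsed if its KL is ~0 averaged over the data.
    return kl.mean(axis=0) < eps
```

Dimensions flagged this way carry no information about the input, since their variational posterior is indistinguishable from the prior.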

Score-based diffusion meets annealed importance sampling

A Doucet, W Grathwohl, AG Matthews… - Advances in Neural …, 2022 - proceedings.neurips.cc
More than twenty years after its introduction, Annealed Importance Sampling (AIS) remains
one of the most effective methods for marginal likelihood estimation. It relies on a sequence …
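AIS estimates a normalizing constant by moving particles through a sequence of intermediate distributions bridging a tractable initial distribution and the target, accumulating importance weights along the way. A minimal sketch on a 1-D Gaussian toy problem where the answer is known in closed form (the geometric bridge, step counts, and Metropolis kernel below are standard illustrative choices, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_q0(x):
    # Normalized initial distribution: N(0, 2^2).
    return -0.5 * x**2 / 4.0 - 0.5 * np.log(2.0 * np.pi * 4.0)

def log_f(x):
    # Unnormalized target: exp(-x^2/2), so Z = sqrt(2*pi), log Z ~ 0.919.
    return -0.5 * x**2

def log_gamma(x, beta):
    # Geometric bridge between the initial distribution and the target.
    return (1.0 - beta) * log_q0(x) + beta * log_f(x)

betas = np.linspace(0.0, 1.0, 51)
n = 4000
x = rng.normal(0.0, 2.0, size=n)   # exact samples from q0
log_w = np.zeros(n)

for t in range(1, len(betas)):
    # Weight update: ratio of consecutive annealed densities.
    log_w += log_gamma(x, betas[t]) - log_gamma(x, betas[t - 1])
    # One Metropolis step leaving gamma_{beta_t} invariant.
    prop = x + rng.normal(0.0, 0.5, size=n)
    accept = np.log(rng.random(n)) < log_gamma(prop, betas[t]) - log_gamma(x, betas[t])
    x = np.where(accept, prop, x)

log_Z = np.log(np.mean(np.exp(log_w)))   # estimate of log sqrt(2*pi)
```

Because the initial distribution is normalized, the mean importance weight is an unbiased estimator of the target's normalizing constant; the annealing schedule keeps consecutive distributions close, which controls the weight variance.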

Annealed flow transport Monte Carlo

M Arbel, A Matthews, A Doucet - … Conference on Machine …, 2021 - proceedings.mlr.press
Annealed Importance Sampling (AIS) and its Sequential Monte Carlo (SMC)
extensions are state-of-the-art methods for estimating normalizing constants of probability …

The usual suspects? Reassessing blame for VAE posterior collapse

B Dai, Z Wang, D Wipf - International conference on machine …, 2020 - proceedings.mlr.press
In narrow asymptotic settings Gaussian VAE models of continuous data have been shown to
possess global optima aligned with ground-truth distributions. Even so, it is well known that …

Generalized variational inference: Three arguments for deriving new posteriors

J Knoblauch, J Jewson, T Damoulas - arXiv preprint arXiv:1904.02063, 2019 - arxiv.org
We advocate an optimization-centric view of Bayesian inference and introduce a novel
generalization. Our inspiration is the representation of Bayes' rule as infinite …

Monte Carlo variational auto-encoders

A Thin, N Kotelevskii, A Doucet… - International …, 2021 - proceedings.mlr.press
Variational auto-encoders (VAE) are popular deep latent variable models which are trained
by maximizing an Evidence Lower Bound (ELBO). To obtain a tighter ELBO and hence better …
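The ELBO trades a reconstruction term against a KL regularizer toward the prior; for a factorized Gaussian encoder the KL is closed form and the reconstruction term is estimated by Monte Carlo via the reparameterization trick. A minimal numpy sketch for a single datapoint (the decoder and names are illustrative, and this is the vanilla single-level bound, not the paper's tighter estimator):

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_log_lik(x, x_mean):
    # log N(x; x_mean, I): Gaussian decoder with unit observation noise.
    return -0.5 * np.sum((x - x_mean) ** 2) - 0.5 * x.size * np.log(2.0 * np.pi)

def elbo(x, mu, log_sigma, decode, n_samples=500):
    """Single-datapoint ELBO for q(z|x) = N(mu, diag(sigma^2)), prior N(0, I)."""
    sigma = np.exp(log_sigma)
    # Closed-form KL(q(z|x) || N(0, I)) for diagonal Gaussians.
    kl = 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * log_sigma)
    # Reconstruction term via the reparameterization trick: z = mu + sigma * eps.
    eps = rng.normal(size=(n_samples, mu.size))
    z = mu + sigma * eps
    recon = np.mean([gaussian_log_lik(x, decode(zi)) for zi in z])
    return recon - kl
```

With an identity decoder the model is linear-Gaussian (z ~ N(0, I), x|z ~ N(z, I), so marginally x ~ N(0, 2I)), and the ELBO can be checked against the exact log evidence: any valid q yields a lower bound, with equality only at the exact posterior.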

Posterior collapse of a linear latent variable model

Z Wang, L Ziyin - Advances in Neural Information …, 2022 - proceedings.neurips.cc
This work identifies the existence and cause of a type of posterior collapse that frequently
occurs in Bayesian deep learning practice. For a general linear latent variable model …