Dynamical variational autoencoders: A comprehensive review

L Girin, S Leglaive, X Bie, J Diard, T Hueber… - arXiv preprint arXiv …, 2020 - arxiv.org
Variational autoencoders (VAEs) are powerful deep generative models widely used to
represent high-dimensional complex data through a low-dimensional latent space learned …

Sequential Monte Carlo: A unified review

AG Wills, TB Schön - Annual Review of Control, Robotics, and …, 2023 - annualreviews.org
Sequential Monte Carlo methods—also known as particle filters—offer approximate
solutions to filtering problems for nonlinear state-space systems. These filtering problems …

Variational mixture-of-experts autoencoders for multi-modal deep generative models

Y Shi, B Paige, P Torr - Advances in neural information …, 2019 - proceedings.neurips.cc
Learning generative models that span multiple data modalities, such as vision and
language, is often motivated by the desire to learn more useful, generalisable …

Adaptive Monte Carlo augmented with normalizing flows

M Gabrié, GM Rotskoff… - Proceedings of the …, 2022 - National Acad Sciences
Many problems in the physical sciences, machine learning, and statistical inference
necessitate sampling from a high-dimensional, multimodal probability distribution. Markov …

Deep variational reinforcement learning for POMDPs

M Igl, L Zintgraf, TA Le, F Wood… - … on machine learning, 2018 - proceedings.mlr.press
Many real-world sequential decision making problems are partially observable by nature,
and the environment model is typically unknown. Consequently, there is great need for …

An introduction to probabilistic programming

JW van de Meent, B Paige, H Yang, F Wood - arXiv preprint arXiv …, 2018 - arxiv.org
This book is a graduate-level introduction to probabilistic programming. It not only provides a
thorough background for anyone wishing to use a probabilistic programming system, but …

Tighter variational bounds are not necessarily better

T Rainforth, A Kosiorek, TA Le… - International …, 2018 - proceedings.mlr.press
We provide theoretical and empirical evidence that using tighter evidence lower bounds
(ELBOs) can be detrimental to the process of learning an inference network by reducing the …

Filtering variational objectives

CJ Maddison, J Lawson, G Tucker… - Advances in …, 2017 - proceedings.neurips.cc
When used as a surrogate objective for maximum likelihood estimation in latent variable
models, the evidence lower bound (ELBO) produces state-of-the-art results. Inspired by this …

Differentiable particle filtering via entropy-regularized optimal transport

A Corenflos, J Thornton… - International …, 2021 - proceedings.mlr.press
Particle Filtering (PF) methods are an established class of procedures for performing
inference in non-linear state-space models. Resampling is a key ingredient of PF necessary …

Autodifferentiable ensemble Kalman filters

Y Chen, D Sanz-Alonso, R Willett - SIAM Journal on Mathematics of Data …, 2022 - SIAM
Data assimilation is concerned with sequentially estimating a temporally evolving state. This
task, which arises in a wide range of scientific and engineering applications, is particularly …