Dynamical variational autoencoders: A comprehensive review
Variational autoencoders (VAEs) are powerful deep generative models widely used to
represent high-dimensional complex data through a low-dimensional latent space learned …
Sequential Monte Carlo: A unified review
Sequential Monte Carlo methods—also known as particle filters—offer approximate
solutions to filtering problems for nonlinear state-space systems. These filtering problems …
Variational mixture-of-experts autoencoders for multi-modal deep generative models
Learning generative models that span multiple data modalities, such as vision and
language, is often motivated by the desire to learn more useful, generalizable …
Adaptive Monte Carlo augmented with normalizing flows
M Gabrié, GM Rotskoff… - Proceedings of the …, 2022 - National Acad Sciences
Many problems in the physical sciences, machine learning, and statistical inference
necessitate sampling from a high-dimensional, multimodal probability distribution. Markov …
Deep variational reinforcement learning for POMDPs
Many real-world sequential decision making problems are partially observable by nature,
and the environment model is typically unknown. Consequently, there is great need for …
An introduction to probabilistic programming
This book is a graduate-level introduction to probabilistic programming. It not only provides a
thorough background for anyone wishing to use a probabilistic programming system, but …
Tighter variational bounds are not necessarily better
We provide theoretical and empirical evidence that using tighter evidence lower bounds
(ELBOs) can be detrimental to the process of learning an inference network by reducing the …
Filtering variational objectives
When used as a surrogate objective for maximum likelihood estimation in latent variable
models, the evidence lower bound (ELBO) produces state-of-the-art results. Inspired by this …
Differentiable particle filtering via entropy-regularized optimal transport
A Corenflos, J Thornton… - International …, 2021 - proceedings.mlr.press
Particle Filtering (PF) methods are an established class of procedures for performing
inference in non-linear state-space models. Resampling is a key ingredient of PF necessary …
Autodifferentiable ensemble Kalman filters
Data assimilation is concerned with sequentially estimating a temporally evolving state. This
task, which arises in a wide range of scientific and engineering applications, is particularly …