Deep neural network concepts for background subtraction: A systematic review and comparative evaluation

T Bouwmans, S Javed, M Sultana, SK Jung - Neural Networks, 2019 - Elsevier
Conventional neural networks have proven to be a powerful framework for
background subtraction in video acquired by static cameras. Indeed, the well-known Self …

Diagnosing and enhancing VAE models

B Dai, D Wipf - arXiv preprint arXiv:1903.05789, 2019 - arxiv.org
Although variational autoencoders (VAEs) represent a widely influential deep generative
model, many aspects of the underlying energy function remain poorly understood. In …
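
For context on what such diagnoses target, the quantity VAE training optimizes is the evidence lower bound (ELBO); the form below is the standard textbook objective, written here only as a reminder and not as the specific decomposition analysed in the paper:

% Standard per-datapoint VAE objective, with encoder q_phi(z|x), decoder p_theta(x|z), prior p(z)
\mathcal{L}(\theta, \phi; x)
  = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
    - \mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big)
  \;\le\; \log p_\theta(x)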

MIWAE: Deep generative modelling and imputation of incomplete data sets

PA Mattei, J Frellsen - International conference on machine …, 2019 - proceedings.mlr.press
We consider the problem of handling missing data with deep latent variable models
(DLVMs). First, we present a simple technique to train DLVMs when the training set contains …
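
The snippet is truncated here, so purely as an illustrative aside: one common generic way to train a VAE-style model on an incomplete data set is to evaluate the reconstruction term only over observed entries via a binary mask. The PyTorch sketch below shows that generic masked ELBO; it is not the importance-weighted estimator the paper proposes, and the encoder/decoder names are hypothetical placeholders:

# Illustrative sketch only (not the MIWAE estimator): masked ELBO for incomplete data.
import torch

def masked_gaussian_elbo(x, mask, encoder, decoder):
    """x: (batch, d) data, arbitrary values at missing positions.
    mask: (batch, d), 1 where x is observed, 0 where missing."""
    mu, logvar = encoder(x * mask)                             # condition on observed part only
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)    # reparameterised sample
    x_hat = decoder(z)                                         # mean of unit-variance Gaussian likelihood
    recon = -0.5 * ((x - x_hat) ** 2 * mask).sum(dim=1)        # log-likelihood over observed entries
                                                               # only (additive constant dropped)
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1)
    return (recon - kl).mean()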

Don't blame the ELBO! A linear VAE perspective on posterior collapse

J Lucas, G Tucker, RB Grosse… - Advances in Neural …, 2019 - proceedings.neurips.cc
Posterior collapse in Variational Autoencoders (VAEs) with uninformative priors
arises when the variational posterior distribution closely matches the prior for a subset of …
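
As a rough illustration of what "closely matches the prior for a subset of latent variables" looks like in practice, the sketch below computes the per-dimension KL between a diagonal-Gaussian posterior and the standard normal prior and flags dimensions whose average KL is near zero. This is a generic diagnostic under an assumed Gaussian encoder (and an arbitrary threshold), not the linear-VAE analysis carried out in the paper:

# Illustrative diagnostic only: flag latent dimensions with near-zero KL to the prior.
import torch

def collapsed_dims(mu, logvar, threshold=1e-2):
    """mu, logvar: (batch, latent_dim) Gaussian encoder outputs; threshold is an assumption."""
    kl_per_dim = -0.5 * (1 + logvar - mu ** 2 - logvar.exp())  # KL(N(mu, sigma^2) || N(0, 1)), elementwise
    mean_kl = kl_per_dim.mean(dim=0)                           # average over the batch
    collapsed = (mean_kl < threshold).nonzero(as_tuple=True)[0]
    return collapsed, mean_kl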

Variational autoencoders pursue PCA directions (by accident)

M Rolinek, D Zietlow, G Martius - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
The Variational Autoencoder (VAE) is a powerful architecture capable of
representation learning and generative modeling. When it comes to learning interpretable …
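
For background on the PCA connection (this is the classical probabilistic PCA result of Tipping and Bishop, quoted as context rather than the paper's own derivation): with a linear-Gaussian decoder x = Wz + \mu + \varepsilon, \varepsilon \sim \mathcal{N}(0, \sigma^2 I), the maximum-likelihood weights span the principal subspace of the data,

W_{\mathrm{ML}} = U_q \big(\Lambda_q - \sigma^2 I\big)^{1/2} R,

where U_q holds the top-q eigenvectors of the sample covariance, \Lambda_q the matching eigenvalues, and R is an arbitrary rotation.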

Understanding posterior collapse in generative latent variable models

J Lucas, G Tucker, R Grosse, M Norouzi - 2019 - openreview.net
Posterior collapse in Variational Autoencoders (VAEs) arises when the variational
distribution closely matches the uninformative prior for a subset of latent variables. This …

A variational autoencoder solution for road traffic forecasting systems: Missing data imputation, dimension reduction, model selection and anomaly detection

G Boquet, A Morell, J Serrano, JL Vicario - Transportation Research Part C …, 2020 - Elsevier
Efforts devoted to mitigating the effects of road traffic congestion have been conducted since
the 1970s. Nowadays, there is a need for prominent solutions capable of mining information …

Guided variational autoencoder for disentanglement learning

Z Ding, Y Xu, W Xu, G Parmar, Y Yang… - Proceedings of the …, 2020 - openaccess.thecvf.com
We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to
learn a controllable generative model by performing latent representation disentanglement …

Variational autoencoder: An unsupervised model for encoding and decoding fMRI activity in visual cortex

K Han, H Wen, J Shi, KH Lu, Y Zhang, D Fu, Z Liu - NeuroImage, 2019 - Elsevier
Goal-driven and feedforward-only convolutional neural networks (CNNs) have been shown to
predict and decode cortical responses to natural images or videos. Here, we …

State alignment-based imitation learning

F Liu, Z Ling, T Mu, H Su - arXiv preprint arXiv:1911.10947, 2019 - arxiv.org
Consider an imitation learning problem in which the imitator and the expert have different
dynamics models. Most current imitation learning methods fail because they focus on …