Dynamical variational autoencoders: A comprehensive review
Variational autoencoders (VAEs) are powerful deep generative models widely used to
represent high-dimensional complex data through a low-dimensional latent space learned …
Identifiability of deep generative models without auxiliary information
We prove identifiability of a broad class of deep latent variable models that (a) have
universal approximation capabilities and (b) are the decoders of variational autoencoders …
ReFRS: Resource-efficient federated recommender system for dynamic and diversified user preferences
Owing to its nature of scalability and privacy by design, federated learning (FL) has received
increasing interest in decentralized deep learning. FL has also facilitated recent research on …
Characterization of regional differences in resting-state fMRI with a data-driven network model of brain dynamics
Model-based data analysis of whole-brain dynamics links the observed data to model
parameters in a network of neural masses. Recently, studies focused on the role of regional …
Posterior collapse and latent variable non-identifiability
Variational autoencoders model high-dimensional data by positing low-dimensional latent
variables that are mapped through a flexible distribution parametrized by a neural network …
Variations in variational autoencoders-a comparative evaluation
Variational Auto-Encoders (VAEs) are deep latent space generative models which have
been immensely successful in many applications such as image generation, image …
Posterior collapse of a linear latent variable model
This work identifies the existence and cause of a type of posterior collapse that frequently
occurs in the Bayesian deep learning practice. For a general linear latent variable model …
Camera-conditioned stable feature generation for isolated camera supervised person re-identification
To learn camera-view invariant features for person Re-IDentification (Re-ID), the cross-
camera image pairs of each person play an important role. However, such cross-view …
Unsupervised flood detection on SAR time series using variational autoencoder
In this study, we propose a novel unsupervised Change Detection (CD) model to detect
flood extent using Synthetic Aperture Radar (SAR) time series data. The proposed model is …
Embrace the gap: VAEs perform independent mechanism analysis
Variational autoencoders (VAEs) are a popular framework for modeling complex data
distributions; they can be efficiently trained via variational inference by maximizing the …