Dynamical variational autoencoders: A comprehensive review

L Girin, S Leglaive, X Bie, J Diard, T Hueber… - arXiv preprint arXiv …, 2020 - arxiv.org
Variational autoencoders (VAEs) are powerful deep generative models widely used to
represent high-dimensional complex data through a low-dimensional latent space learned …

Identifiability of deep generative models without auxiliary information

B Kivva, G Rajendran, P Ravikumar… - Advances in Neural …, 2022 - proceedings.neurips.cc
We prove identifiability of a broad class of deep latent variable models that (a) have
universal approximation capabilities and (b) are the decoders of variational autoencoders …

ReFRS: Resource-efficient federated recommender system for dynamic and diversified user preferences

M Imran, H Yin, T Chen, QVH Nguyen, A Zhou… - ACM Transactions on …, 2023 - dl.acm.org
Owing to its nature of scalability and privacy by design, federated learning (FL) has received
increasing interest in decentralized deep learning. FL has also facilitated recent research on …

Characterization of regional differences in resting-state fMRI with a data-driven network model of brain dynamics

V Sip, M Hashemi, T Dickscheid, K Amunts… - Science …, 2023 - science.org
Model-based data analysis of whole-brain dynamics links the observed data to model
parameters in a network of neural masses. Recently, studies focused on the role of regional …

Posterior collapse and latent variable non-identifiability

Y Wang, D Blei… - Advances in neural …, 2021 - proceedings.neurips.cc
Variational autoencoders model high-dimensional data by positing low-dimensional latent
variables that are mapped through a flexible distribution parametrized by a neural network …

Variations in variational autoencoders-a comparative evaluation

R Wei, C Garcia, A El-Sayed, V Peterson… - IEEE …, 2020 - ieeexplore.ieee.org
Variational Auto-Encoders (VAEs) are deep latent space generative models which have
been immensely successful in many applications such as image generation, image …

Posterior collapse of a linear latent variable model

Z Wang, L Ziyin - Advances in Neural Information …, 2022 - proceedings.neurips.cc
This work identifies the existence and cause of a type of posterior collapse that frequently
occurs in Bayesian deep learning practice. For a general linear latent variable model …

Camera-conditioned stable feature generation for isolated camera supervised person re-identification

C Wu, W Ge, A Wu, X Chang - Proceedings of the IEEE/CVF …, 2022 - openaccess.thecvf.com
To learn camera-view invariant features for person Re-IDentification (Re-ID), the cross-
camera image pairs of each person play an important role. However, such cross-view …

[HTML] Unsupervised flood detection on SAR time series using variational autoencoder

R Yadav, A Nascetti, H Azizpour, Y Ban - International Journal of Applied …, 2024 - Elsevier
In this study, we propose a novel unsupervised Change Detection (CD) model to detect
flood extent using Synthetic Aperture Radar (SAR) time series data. The proposed model is …

Embrace the gap: VAEs perform independent mechanism analysis

P Reizinger, L Gresele, J Brady… - Advances in …, 2022 - proceedings.neurips.cc
Variational autoencoders (VAEs) are a popular framework for modeling complex data
distributions; they can be efficiently trained via variational inference by maximizing the …