Variational autoencoders and nonlinear ICA: A unifying framework
The framework of variational autoencoders allows us to efficiently learn deep latent-variable
models, such that the model's marginal distribution over observed variables fits the data …
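For context, the fit of the model's marginal to the data is obtained by maximizing the standard evidence lower bound (ELBO); the form below is the textbook VAE objective rather than anything specific to this paper:

    \log p_\theta(x) \ge \mathbb{E}_{q_\phi(z \mid x)}[\log p_\theta(x \mid z)] - \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)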
Learning deep representations by mutual information estimation and maximization
In this work, we perform unsupervised learning of representations by maximizing mutual
information between an input and the output of a deep neural network encoder. Importantly …
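Schematically, the training signal is a neural estimate of the mutual information between the input X and the encoder output E_\psi(X); the generic form below omits the paper's additional local-feature and prior-matching terms:

    \hat{\psi} = \arg\max_\psi \widehat{I}\big(X;\, E_\psi(X)\big)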
Disentangling by factorising
We define and address the problem of unsupervised learning of disentangled
representations on data generated from independent factors of variation. We propose …
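As usually stated, the FactorVAE objective augments the ELBO with a total-correlation penalty on the aggregate posterior q(z), estimated in practice with a density-ratio discriminator:

    \mathcal{L} = \mathrm{ELBO} - \gamma \, \mathrm{KL}\big(q(z) \,\|\, \textstyle\prod_j q(z_j)\big)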
MINE: Mutual information neural estimation
We argue that the estimation of mutual information between high dimensional continuous
random variables can be achieved by gradient descent over neural networks. We present a …
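The estimator rests on the Donsker-Varadhan representation of the KL divergence: with a critic network T_\theta trained by gradient ascent, mutual information is lower-bounded by

    I(X; Z) \ge \sup_\theta \; \mathbb{E}_{P_{XZ}}[T_\theta(x, z)] - \log \mathbb{E}_{P_X \otimes P_Z}[e^{T_\theta(x, z)}]

where the product of marginals is typically sampled by shuffling z within a minibatch.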
CausalVAE: Disentangled representation learning via neural structural causal models
Learning disentangled representations aims at finding a low-dimensional representation which consists
of multiple explanatory and generative factors of the observational data. The framework of …
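As a rough sketch of the idea (the paper's causal layer has additional structure), the latent factors are related by a linear structural causal model whose adjacency matrix A is constrained to be a DAG:

    z = A^\top z + \varepsilon \quad\Longleftrightarrow\quad z = (I - A^\top)^{-1} \varepsilon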
Nonlinear ICA using auxiliary variables and generalized contrastive learning
A Hyvärinen, H Sasaki… - The 22nd International …, 2019 - proceedings.mlr.press
Nonlinear ICA is a fundamental problem for unsupervised representation learning,
emphasizing the capacity to recover the underlying latent variables generating the data (i.e. …
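The generalized contrastive learning referred to trains a logistic regression to discriminate true pairs (x, u) of an observation and its auxiliary variable from pairs (x, ũ) with a shuffled auxiliary variable; as commonly presented, identifiability holds when the discriminator's regression function is constrained to the form

    r(x, u) = \sum_i \psi_i\big(h_i(x),\, u\big)

in which case the features h_i(x) recover the independent components up to component-wise invertible transformations.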
A brief introduction to machine learning for engineers
O Simeone - Foundations and Trends® in Signal Processing, 2018 - nowpublishers.com
This monograph aims at providing an introduction to key concepts, algorithms, and
theoretical results in machine learning. The treatment concentrates on probabilistic models …
The HSIC bottleneck: Deep learning without back-propagation
We introduce the HSIC (Hilbert-Schmidt independence criterion) bottleneck for training deep
neural networks. The HSIC bottleneck is an alternative to the conventional cross-entropy …
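For reference, the biased empirical HSIC that such objectives build on has a simple closed form. The sketch below is a generic NumPy implementation with Gaussian kernels, not the paper's code:

    import numpy as np

    def gaussian_kernel(X, sigma=1.0):
        # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
        sq = np.sum(X ** 2, axis=1, keepdims=True)
        d2 = sq + sq.T - 2.0 * X @ X.T
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def hsic(X, Y, sigma=1.0):
        # Biased estimator HSIC_b = (n-1)^{-2} tr(K H L H),
        # where H = I - (1/n) 11^T centers the kernel matrices.
        n = X.shape[0]
        K = gaussian_kernel(X, sigma)
        L = gaussian_kernel(Y, sigma)
        H = np.eye(n) - np.ones((n, n)) / n
        return np.trace(K @ H @ L @ H) / (n - 1) ** 2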
Maximum entropy generators for energy-based models
Maximum likelihood estimation of energy-based models is a challenging problem due to the
intractability of the log-likelihood gradient. In this work, we propose learning both the energy …
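The intractability in question is the model expectation in the log-likelihood gradient of an energy-based model p_\theta(x) \propto e^{-E_\theta(x)}:

    \nabla_\theta \log p_\theta(x) = -\nabla_\theta E_\theta(x) + \mathbb{E}_{x' \sim p_\theta}[\nabla_\theta E_\theta(x')]

The second term requires samples from the model itself, which motivates learning an amortized sampler alongside the energy function.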
Learning disentangled representations via mutual information estimation
EH Sanchez, M Serrurier, M Ortner - … , Glasgow, UK, August 23–28, 2020 …, 2020 - Springer
In this paper, we investigate the problem of learning disentangled representations. Given a
pair of images sharing some attributes, we aim to create a low-dimensional representation …
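Schematically, and as an assumption going beyond the snippet above, the representation of each image in the pair is split into a shared part S and an exclusive part E, and neural mutual information estimation is used to align the shared parts while keeping each exclusive part independent of them:

    \max \; I(S_x;\, S_y) \quad \text{while minimizing} \quad I(S_x;\, E_x), \; I(S_y;\, E_y)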