Nonlinear independent component analysis for principled disentanglement in unsupervised deep learning

A Hyvärinen, I Khemakhem, H Morioka - Patterns, 2023 - cell.com
A central problem in unsupervised deep learning is how to find useful representations of
high-dimensional data, sometimes called "disentanglement." Most approaches are heuristic …

Learnable latent embeddings for joint behavioural and neural analysis

S Schneider, JH Lee, MW Mathis - Nature, 2023 - nature.com
Mapping behavioural actions to neural activity is a fundamental goal of neuroscience. As our
ability to record large neural and behavioural data increases, there is growing interest in …

Self-supervised learning with data augmentations provably isolates content from style

J Von Kügelgen, Y Sharma, L Gresele… - Advances in neural …, 2021 - proceedings.neurips.cc
Self-supervised representation learning has shown remarkable success in a number of
domains. A common practice is to perform data augmentation via hand-crafted …

Interventional causal representation learning

K Ahuja, D Mahajan, Y Wang… - … conference on machine …, 2023 - proceedings.mlr.press
Causal representation learning seeks to extract high-level latent factors from low-level
sensory data. Most existing methods rely on observational data and structural assumptions …

Contrastive learning inverts the data generating process

RS Zimmermann, Y Sharma… - International …, 2021 - proceedings.mlr.press
Contrastive learning has recently seen tremendous success in self-supervised learning. So
far, however, it is largely unclear why the learned representations generalize so effectively to …

Identifiability of latent-variable and structural-equation models: from linear to nonlinear

A Hyvärinen, I Khemakhem, R Monti - Annals of the Institute of Statistical …, 2024 - Springer
An old problem in multivariate statistics is that linear Gaussian models are often
unidentifiable. In factor analysis, an orthogonal rotation of the factors is unidentifiable, while …

Nonparametric identifiability of causal representations from unknown interventions

J von Kügelgen, M Besserve… - Advances in …, 2024 - proceedings.neurips.cc
We study causal representation learning, the task of inferring latent causal variables and
their causal relations from high-dimensional functions (“mixtures”) of the variables. Prior …

CITRIS: Causal identifiability from temporal intervened sequences

P Lippe, S Magliacane, S Löwe… - International …, 2022 - proceedings.mlr.press
Understanding the latent causal factors of a dynamical system from visual observations is
considered a crucial step towards agents reasoning in complex environments. In this paper …

Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA

S Lachapelle, P Rodriguez, Y Sharma… - … on Causal Learning …, 2022 - proceedings.mlr.press
This work introduces a novel principle we call disentanglement via mechanism sparsity
regularization, which can be applied when the latent factors of interest depend sparsely on …

Partial disentanglement for domain adaptation

L Kong, S Xie, W Yao, Y Zheng… - International …, 2022 - proceedings.mlr.press
Unsupervised domain adaptation is critical to many real-world applications where label
information is unavailable in the target domain. In general, without further assumptions, the …