Nonlinear independent component analysis for principled disentanglement in unsupervised deep learning
A central problem in unsupervised deep learning is how to find useful representations of
high-dimensional data, sometimes called "disentanglement." Most approaches are heuristic …
Learnable latent embeddings for joint behavioural and neural analysis
Mapping behavioural actions to neural activity is a fundamental goal of neuroscience. As our
ability to record large neural and behavioural data increases, there is growing interest in …
Self-supervised learning with data augmentations provably isolates content from style
Self-supervised representation learning has shown remarkable success in a number of
domains. A common practice is to perform data augmentation via hand-crafted …
Interventional causal representation learning
Causal representation learning seeks to extract high-level latent factors from low-level
sensory data. Most existing methods rely on observational data and structural assumptions …
Contrastive learning inverts the data generating process
RS Zimmermann, Y Sharma… - International …, 2021 - proceedings.mlr.press
Contrastive learning has recently seen tremendous success in self-supervised learning. So
far, however, it is largely unclear why the learned representations generalize so effectively to …
Identifiability of latent-variable and structural-equation models: from linear to nonlinear
An old problem in multivariate statistics is that linear Gaussian models are often
unidentifiable. In factor analysis, an orthogonal rotation of the factors is unidentifiable, while …
Nonparametric identifiability of causal representations from unknown interventions
J von Kügelgen, M Besserve… - Advances in …, 2024 - proceedings.neurips.cc
We study causal representation learning, the task of inferring latent causal variables and
their causal relations from high-dimensional functions (“mixtures”) of the variables. Prior …
CITRIS: Causal identifiability from temporal intervened sequences
Understanding the latent causal factors of a dynamical system from visual observations is
considered a crucial step towards agents reasoning in complex environments. In this paper …
Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA
This work introduces a novel principle we call disentanglement via mechanism sparsity
regularization, which can be applied when the latent factors of interest depend sparsely on …
Partial disentanglement for domain adaptation
Unsupervised domain adaptation is critical to many real-world applications where label
information is unavailable in the target domain. In general, without further assumptions, the …