Learning meaningful representations of protein sequences
How we choose to represent our data has a fundamental impact on our ability to
subsequently extract information from them. Machine learning promises to automatically …
Latent space oddity: on the curvature of deep generative models
Deep generative models provide a systematic way to learn nonlinear data distributions,
through a set of latent variables and a nonlinear "generator" function that maps latent points …
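The curvature referenced in the title is usually analysed through the metric that the generator induces in the latent space, namely G(z) = J_g(z)^T J_g(z). Below is a minimal JAX sketch of that construction with a toy two-layer generator; the architecture, dimensions, and parameter values are illustrative assumptions, not the paper's code.

```python
import jax
import jax.numpy as jnp

def generator(z, params):
    """Toy nonlinear generator g: R^2 -> R^5 (architecture chosen only for illustration)."""
    W1, b1, W2, b2 = params
    h = jnp.tanh(W1 @ z + b1)
    return W2 @ h + b2

def pullback_metric(z, params):
    """Metric induced in the latent space by the generator: G(z) = J_g(z)^T J_g(z)."""
    J = jax.jacfwd(generator)(z, params)  # Jacobian of g at z, shape (5, 2)
    return J.T @ J                        # shape (2, 2), symmetric positive semi-definite

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = (0.5 * jax.random.normal(k1, (16, 2)), jnp.zeros(16),
          0.5 * jax.random.normal(k2, (5, 16)), jnp.zeros(5))
z = jax.random.normal(k3, (2,))
print(pullback_metric(z, params))
```

How this metric varies across the latent space is what makes the latent geometry curved rather than Euclidean.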
Metrics for deep generative models
Neural samplers such as variational autoencoders (VAEs) or generative adversarial
networks (GANs) approximate distributions by transforming samples from a simple random …
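The snippet cuts off, but the standard setup is that the sampler pushes draws from a simple base distribution through the trained network, and distances between latent points are then measured by the length of curves after mapping them into data space. A hedged sketch with a hand-written toy sampler (not taken from the paper):

```python
import jax
import jax.numpy as jnp

def sampler(z):
    """Toy 'generator' mapping 2-D latent samples to 3-D data (illustrative only)."""
    return jnp.stack([jnp.sin(z[0]), jnp.cos(z[1]), z[0] * z[1]])

# A neural sampler turns draws from a simple base distribution into data-like samples.
key = jax.random.PRNGKey(0)
z_batch = jax.random.normal(key, (1000, 2))   # z ~ N(0, I)
x_batch = jax.vmap(sampler)(z_batch)          # pushed-forward samples
print(x_batch.shape)

def latent_curve_length(z0, z1, n_steps=128):
    """Length of the straight latent line z0 -> z1, measured in data space
    by mapping a discretisation of the line through the sampler."""
    ts = jnp.linspace(0.0, 1.0, n_steps)[:, None]
    pts = jax.vmap(sampler)((1.0 - ts) * z0 + ts * z1)
    return jnp.sum(jnp.linalg.norm(pts[1:] - pts[:-1], axis=-1))

print(latent_curve_length(jnp.array([0.0, 0.0]), jnp.array([2.0, 1.0])))
```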
Geodesic exponential kernels: When curvature and linearity conflict
We consider kernel methods on general geodesic metric spaces and provide both negative
and positive results. First we show that the common Gaussian kernel can only be …
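The truncated claim concerns whether k(x, y) = exp(-λ d(x, y)^2) remains a positive-definite kernel when d is a geodesic rather than Euclidean distance. A small numerical probe on the unit sphere (my own toy check, not the paper's proof technique) is to build the kernel matrix and inspect its smallest eigenvalue; a negative value would witness indefiniteness for that λ.

```python
import jax
import jax.numpy as jnp

def sphere_geodesic_dist(x, y):
    """Great-circle (geodesic) distance between unit vectors on the sphere."""
    return jnp.arccos(jnp.clip(jnp.dot(x, y), -1.0, 1.0))

def geodesic_gaussian_kernel(X, lam=1.0):
    """k(x, y) = exp(-lam * d(x, y)^2) with d the geodesic distance."""
    d = jax.vmap(lambda x: jax.vmap(lambda y: sphere_geodesic_dist(x, y))(X))(X)
    return jnp.exp(-lam * d ** 2)

key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (50, 3))
X = X / jnp.linalg.norm(X, axis=1, keepdims=True)   # points on the unit sphere

K = geodesic_gaussian_kernel(X, lam=5.0)
print(jnp.linalg.eigvalsh(K).min())   # < 0 would show the kernel is not positive definite here
```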
Geometrically enriched latent spaces
A common assumption in generative models is that the generator immerses the latent space
into a Euclidean ambient space. Instead, we consider the ambient space to be a …
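The truncated sentence presumably continues with a Riemannian ambient space; the usual consequence is that the latent metric becomes G(z) = J_g(z)^T M(g(z)) J_g(z) instead of the plain J^T J above. A hedged sketch with a made-up generator and a made-up ambient metric:

```python
import jax
import jax.numpy as jnp

def generator(z):
    """Toy generator g: R^2 -> R^3 (illustrative)."""
    return jnp.stack([z[0], z[1], jnp.sin(z[0]) * jnp.cos(z[1])])

def ambient_metric(x):
    """A non-Euclidean ambient metric M(x); here a diagonal metric that
    up-weights regions far from the origin (purely an illustrative choice)."""
    return jnp.eye(3) * (1.0 + jnp.sum(x ** 2))

def enriched_pullback_metric(z):
    """G(z) = J_g(z)^T M(g(z)) J_g(z): the latent metric inherited from a
    Riemannian (rather than Euclidean) ambient space."""
    J = jax.jacfwd(generator)(z)        # shape (3, 2)
    M = ambient_metric(generator(z))    # shape (3, 3)
    return J.T @ M @ J                  # shape (2, 2)

print(enriched_pullback_metric(jnp.array([0.3, -1.2])))
```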
Fast adaptation with linearized neural networks
The inductive biases of trained neural networks are difficult to understand and,
consequently, to adapt to new settings. We study the inductive biases of linearizations of …
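Linearization here means the first-order Taylor expansion of the network in its parameters around the trained weights, f_lin(x; θ) = f(x; θ0) + J_θ f(x; θ0)(θ - θ0), which is linear in θ and therefore easier to analyse and adapt. A minimal JAX sketch of that expansion with a toy MLP and a hypothetical parameter perturbation:

```python
import jax
import jax.numpy as jnp

def net(params, x):
    """Tiny MLP f(x; theta) used only to illustrate linearization."""
    W1, b1, W2, b2 = params
    h = jnp.tanh(W1 @ x + b1)
    return W2 @ h + b2

def linearized_net(params0, params, x):
    """First-order Taylor expansion of the network in its parameters:
    f_lin(x; theta) = f(x; theta0) + J_theta f(x; theta0) (theta - theta0)."""
    delta = jax.tree_util.tree_map(lambda p, p0: p - p0, params, params0)
    y0, dy = jax.jvp(lambda p: net(p, x), (params0,), (delta,))
    return y0 + dy

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params0 = (jax.random.normal(k1, (8, 4)), jnp.zeros(8),
           jax.random.normal(k2, (3, 8)), jnp.zeros(3))
params = jax.tree_util.tree_map(lambda p: p + 0.01, params0)  # stand-in for adapted weights
x = jnp.ones(4)
print(net(params, x), linearized_net(params0, params, x))
```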
Transferring knowledge across learning processes
In complex transfer learning scenarios new tasks might not be tightly linked to previous
tasks. Approaches that transfer information contained only in the final parameters of a …
Riemannian Laplace approximations for Bayesian neural networks
F Bergamin, P Moreno-Muñoz… - Advances in …, 2024 - proceedings.neurips.cc
Bayesian neural networks often approximate the weight-posterior with a Gaussian
distribution. However, practical posteriors are often, even locally, highly non-Gaussian, and …
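The Gaussian weight-posterior mentioned here is typically the standard (Euclidean) Laplace approximation: a Gaussian centred at the MAP estimate with covariance given by the inverse Hessian of the negative log posterior, which is the baseline a Riemannian variant would refine. A toy sketch on a one-parameter logistic regression, with data and step sizes invented purely for illustration:

```python
import jax
import jax.numpy as jnp

# Toy data for a one-parameter logistic regression (illustrative only).
x = jnp.array([-2.0, -1.0, 0.5, 1.5, 2.0])
y = jnp.array([0.0, 0.0, 1.0, 1.0, 1.0])

def neg_log_posterior(w):
    """Negative log posterior with a standard normal prior on the weight."""
    logits = w * x
    nll = jnp.sum(jnp.logaddexp(0.0, logits) - y * logits)  # Bernoulli NLL
    return nll + 0.5 * w ** 2

# MAP estimate by a few steps of gradient descent (crude but sufficient here).
w = 0.0
for _ in range(200):
    w = w - 0.1 * jax.grad(neg_log_posterior)(w)

# Laplace approximation: Gaussian centred at the MAP with variance equal to
# the inverse Hessian of the negative log posterior.
hess = jax.hessian(neg_log_posterior)(w)
print("MAP:", w, "posterior variance:", 1.0 / hess)
```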
Variational autoencoders with Riemannian Brownian motion priors
Variational Autoencoders (VAEs) represent the given data in a low-dimensional latent
space, which is generally assumed to be Euclidean. This assumption naturally leads to the …
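The assumption the snippet refers to is the standard normal prior that a Euclidean latent space makes natural; in the ELBO it appears as a closed-form KL term between the encoder's diagonal Gaussian and N(0, I). A short sketch of that term (standard VAE bookkeeping, not this paper's Brownian-motion prior), with made-up encoder outputs:

```python
import jax.numpy as jnp

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ): the prior term in a standard
    VAE's ELBO, where the Euclidean latent-space assumption enters."""
    return 0.5 * jnp.sum(jnp.exp(log_var) + mu ** 2 - 1.0 - log_var)

# Encoder outputs for a single datapoint (illustrative values).
mu = jnp.array([0.3, -1.1])
log_var = jnp.array([-0.2, 0.1])
print(kl_to_standard_normal(mu, log_var))
```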
Learning flat latent manifolds with VAEs
Measuring the similarity between data points often requires domain knowledge, which can
in part be compensated by relying on unsupervised methods such as latent-variable …