Learning meaningful representations of protein sequences

NS Detlefsen, S Hauberg, W Boomsma - Nature Communications, 2022 - nature.com
How we choose to represent our data has a fundamental impact on our ability to
subsequently extract information from them. Machine learning promises to automatically …

Latent space oddity: on the curvature of deep generative models

G Arvanitidis, LK Hansen, S Hauberg - arXiv preprint arXiv:1710.11379, 2017 - arxiv.org
Deep generative models provide a systematic way to learn nonlinear data distributions,
through a set of latent variables and a nonlinear "generator" function that maps latent points …
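
The curvature analysis alluded to here rests on the metric that the generator pulls back into the latent space. As a rough sketch, assuming a deterministic, differentiable generator g with Jacobian J_g(z), the induced Riemannian metric is

    M(z) = J_g(z)^\top J_g(z),

so the length of a latent curve is measured as the length of its image in data space; stochastic generators contribute an additional variance-related term.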

Metrics for deep generative models

N Chen, A Klushyn, R Kurle, X Jiang… - International …, 2018 - proceedings.mlr.press
Neural samplers such as variational autoencoders (VAEs) or generative adversarial
networks (GANs) approximate distributions by transforming samples from a simple random …
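
Concretely, such neural samplers represent a distribution as the pushforward of a simple base density through a learned generator. A minimal sketch, writing g for the trained generator and d for the latent dimension:

    z \sim \mathcal{N}(0, I_d), \qquad x = g(z).

Distances between data points can then be pulled back to the latent space, which is what motivates defining metrics on these models.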

Geodesic exponential kernels: When curvature and linearity conflict

A Feragen, F Lauze, S Hauberg - Proceedings of the IEEE …, 2015 - cv-foundation.org
We consider kernel methods on general geodesic metric spaces and provide both negative
and positive results. First we show that the common Gaussian kernel can only be …
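
The kernel in question is the geodesic exponential (Gaussian) kernel, in which the Euclidean distance is replaced by the geodesic distance d of the metric space; with bandwidth \lambda > 0,

    k(x, x') = \exp\!\big(-\lambda\, d(x, x')^2\big).

Whether this kernel remains positive definite for all bandwidths is the kind of question the negative and positive results above concern.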

Geometrically enriched latent spaces

G Arvanitidis, S Hauberg, B Schölkopf - arXiv preprint arXiv:2008.00565, 2020 - arxiv.org
A common assumption in generative models is that the generator immerses the latent space
into a Euclidean ambient space. Instead, we consider the ambient space to be a …

Fast adaptation with linearized neural networks

W Maddox, S Tang, P Moreno… - International …, 2021 - proceedings.mlr.press
The inductive biases of trained neural networks are difficult to understand and,
consequently, to adapt to new settings. We study the inductive biases of linearizations of …
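
The linearization studied here is, in its usual form, a first-order Taylor expansion of the network output in the parameters around the trained weights \theta_0; assuming f is differentiable in \theta,

    f_{\mathrm{lin}}(x; \theta) = f(x; \theta_0) + \nabla_\theta f(x; \theta_0)^\top (\theta - \theta_0),

which is linear in \theta and therefore far easier to analyze and adapt than the full network.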

Transferring knowledge across learning processes

S Flennerhag, PG Moreno, ND Lawrence… - arXiv preprint arXiv …, 2018 - arxiv.org
In complex transfer learning scenarios, new tasks might not be tightly linked to previous
tasks. Approaches that transfer information contained only in the final parameters of a …

Riemannian Laplace approximations for Bayesian neural networks

F Bergamin, P Moreno-Muñoz… - Advances in …, 2024 - proceedings.neurips.cc
Bayesian neural networks often approximate the weight-posterior with a Gaussian
distribution. However, practical posteriors are often, even locally, highly non-Gaussian, and …
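
For context, the Gaussian weight-posterior mentioned above typically comes from a Laplace approximation: a Gaussian fitted at a MAP estimate \theta^*, with covariance given by the inverse Hessian H of the negative log-posterior,

    q(\theta) = \mathcal{N}\big(\theta^*,\, H^{-1}\big).

The Riemannian variant in the title generalizes this construction beyond the Euclidean setting.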

Variational autoencoders with Riemannian Brownian motion priors

D Kalatzis, D Eklund, G Arvanitidis… - arXiv preprint arXiv …, 2020 - arxiv.org
Variational Autoencoders (VAEs) represent the given data in a low-dimensional latent
space, which is generally assumed to be Euclidean. This assumption naturally leads to the …

Learning flat latent manifolds with VAEs

N Chen, A Klushyn, F Ferroni, J Bayer… - arXiv preprint arXiv …, 2020 - arxiv.org
Measuring the similarity between data points often requires domain knowledge, which can
in part be compensated for by relying on unsupervised methods such as latent-variable …