Advances in variational inference
Many modern unsupervised or semi-supervised machine learning algorithms rely on
Bayesian probabilistic models. These models are usually intractable and thus require …
Virtual adversarial training: a regularization method for supervised and semi-supervised learning
We propose a new regularization method based on virtual adversarial loss: a new measure
of local smoothness of the conditional label distribution given input. Virtual adversarial loss …
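A minimal sketch of the virtual adversarial loss described above, for a toy linear softmax classifier. The model, weights, and finite-difference power iteration are illustrative stand-ins (the paper computes the adversarial direction with backpropagation):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    # KL divergence between two discrete distributions
    return float(np.sum(p * np.log(p / q)))

def predict(x, W):
    # stand-in classifier: linear logits + softmax (hypothetical model)
    return softmax(W @ x)

def virtual_adversarial_loss(x, W, eps=0.1, xi=1e-2, n_power=3, fd=1e-4):
    """LDS(x) = KL( p(.|x) || p(.|x + r_adv) ), where r_adv is found by
    power iteration on the KL curvature around x (finite differences
    stand in for the backprop gradient used in the paper)."""
    rng = np.random.default_rng(0)
    p = predict(x, W)
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)
    for _ in range(n_power):
        g = np.zeros_like(d)
        for i in range(len(d)):
            e = np.zeros_like(d)
            e[i] = fd
            g[i] = (kl(p, predict(x + xi * d + e, W))
                    - kl(p, predict(x + xi * d - e, W))) / (2 * fd)
        d = g / (np.linalg.norm(g) + 1e-12)
    return kl(p, predict(x + eps * d, W))
```

The loss is label-free ("virtual"): it only compares the model's own predictions at x and at the worst-case nearby point, which is why it applies to unlabeled data in the semi-supervised setting.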
Online structured Laplace approximations for overcoming catastrophic forgetting
We introduce the Kronecker factored online Laplace approximation for overcoming
catastrophic forgetting in neural networks. The method is grounded in a Bayesian online …
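The general recipe behind such online Laplace methods can be sketched with a diagonal curvature approximation (the paper's contribution is a richer Kronecker-factored curvature; the function names here are hypothetical):

```python
import numpy as np

def laplace_penalty(theta, theta_prev, precision_prev, lam=1.0):
    """Quadratic regularizer from a Laplace approximation to the previous
    task's posterior: 0.5 * lam * sum_i F_i * (theta_i - theta*_i)^2.
    Diagonal simplification of the structured (Kronecker-factored) case."""
    diff = theta - theta_prev
    return 0.5 * lam * float(np.sum(precision_prev * diff ** 2))

def update_precision(precision_prev, grads):
    """Bayesian online step: accumulate new curvature (here a diagonal
    Fisher estimate from squared gradients) onto the running precision,
    so the old posterior acts as the prior for the next task."""
    fisher_new = np.mean(np.stack([g ** 2 for g in grads]), axis=0)
    return precision_prev + fisher_new
```

Training on a new task then minimizes `new_task_loss(theta) + laplace_penalty(theta, theta_prev, precision_prev)`, which anchors parameters that were important for earlier tasks.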
Streaming variational Bayes
We present SDA-Bayes, a framework for (S)treaming, (D)istributed, (A)synchronous
computation of a Bayesian posterior. The framework makes streaming updates to the …
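The core streaming idea — treat the current posterior as the prior for the next minibatch — is exact for conjugate models. A minimal Beta-Bernoulli illustration (toy example, not SDA-Bayes's variational machinery):

```python
from dataclasses import dataclass

@dataclass
class BetaPosterior:
    """Beta(alpha, beta) posterior over a Bernoulli success probability."""
    alpha: float = 1.0
    beta: float = 1.0

    def update(self, batch):
        # streaming update: today's posterior is tomorrow's prior
        k = sum(batch)
        self.alpha += k
        self.beta += len(batch) - k
        return self

    def mean(self):
        return self.alpha / (self.alpha + self.beta)
```

For conjugate exponential families, processing batches sequentially yields exactly the same posterior as processing all data at once — the property that makes streaming (and, with additivity of natural parameters, distributed and asynchronous) updates sound.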
Optimal continual learning has perfect memory and is NP-hard
J Knoblauch, H Husain… - … Conference on Machine …, 2020 - proceedings.mlr.press
Continual Learning (CL) algorithms incrementally learn a predictor or representation across
multiple sequentially observed tasks. Designing CL algorithms that perform reliably and …
Measuring and regularizing networks in function space
To optimize a neural network one often thinks of optimizing its parameters, but it is ultimately
a matter of optimizing the function that maps inputs to outputs. Since a change in the …
Continual learning with Bayesian neural networks for non-stationary data
This work addresses continual learning for non-stationary data, using Bayesian neural
networks and memory-based online variational Bayes. We represent the posterior …
Continual learning via sequential function-space variational inference
Sequential Bayesian inference over predictive functions is a natural framework for continual
learning from streams of data. However, applying it to neural networks has proved …
Incremental local Gaussian regression
Locally weighted regression (LWR) was created as a nonparametric method that can
approximate a wide range of functions, is computationally efficient, and can learn continually …
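The classic LWR prediction that this line of work refines can be sketched in a few lines: weight the data by a Gaussian kernel around the query and fit a local linear model (the paper's incremental, probabilistic variant avoids refitting from scratch per query):

```python
import numpy as np

def lwr_predict(x_query, X, y, bandwidth=0.5):
    """Locally weighted regression at a single query point:
    Gaussian kernel weights + weighted least squares on [1, x]."""
    w = np.exp(-0.5 * ((X - x_query) / bandwidth) ** 2)   # local weights
    A = np.column_stack([np.ones_like(X), X])             # design matrix
    # solve the weighted normal equations (A^T W A) c = A^T W y
    coef, *_ = np.linalg.lstsq(A.T @ (A * w[:, None]), A.T @ (w * y),
                               rcond=None)
    return coef[0] + coef[1] * x_query
```

Because each local model only sees kernel-weighted nearby data, new points can be absorbed without disturbing distant regions — the property that makes LWR-style methods natural for continual learning.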
Detecting and adapting to irregular distribution shifts in bayesian online learning
We consider the problem of online learning in the presence of distribution shifts that occur at
an unknown rate and of unknown intensity. We derive a new Bayesian online inference …
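A related (and much older) recursion for this setting is Bayesian online changepoint detection in the style of Adams & MacKay, which tracks a posterior over the "run length" since the last shift. This sketch is that classic method for Gaussian data with known variance, not the inference scheme derived in the paper above:

```python
import numpy as np

def bocpd_map_run_length(xs, hazard=0.02, mu0=0.0, kappa0=1.0, var=1.0):
    """Bayesian online changepoint detection (Adams & MacKay style).
    Maintains P(run length | data so far) with a constant hazard rate and
    a conjugate Normal prior on each run's mean. Returns the MAP run
    length after every observation; it drops sharply after a shift."""
    r = np.array([1.0])                      # run-length distribution
    mus = np.array([mu0])                    # posterior mean per run length
    kappas = np.array([kappa0])              # pseudo-counts per run length
    map_runs = []
    for x in xs:
        # posterior predictive density of x under each run-length hypothesis
        pred_var = var * (1.0 + 1.0 / kappas)
        pred = np.exp(-0.5 * (x - mus) ** 2 / pred_var) \
               / np.sqrt(2 * np.pi * pred_var)
        growth = r * pred * (1 - hazard)     # the run continues
        cp = float((r * pred * hazard).sum())  # a changepoint resets to 0
        r = np.concatenate([[cp], growth])
        r /= r.sum()
        # conjugate update of each run's mean; a fresh run starts at the prior
        mus = np.concatenate([[mu0], (kappas * mus + x) / (kappas + 1)])
        kappas = np.concatenate([[kappa0], kappas + 1])
        map_runs.append(int(np.argmax(r)))
    return map_runs
```

The fixed `hazard` is exactly the assumption the paper above relaxes: shifts there arrive at an unknown rate and with unknown intensity.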