Efficient continual learning with modular networks and task-driven priors
Existing literature in Continual Learning (CL) has focused on overcoming catastrophic
forgetting, the inability of the learner to recall how to perform tasks observed in the past …
On tiny episodic memories in continual learning
In continual learning (CL), an agent learns from a stream of tasks leveraging prior
experience to transfer knowledge to future tasks. It is an ideal framework to decrease the …
GCR: Gradient coreset based replay buffer selection for continual learning
Continual learning (CL) aims to develop techniques by which a single model adapts to an
increasing number of tasks encountered sequentially, thereby potentially leveraging …
BNS: Building network structures dynamically for continual learning
Continual learning (CL) of a sequence of tasks is often accompanied by the catastrophic
forgetting (CF) problem. Existing research has achieved remarkable results in overcoming …
Using hindsight to anchor past knowledge in continual learning
In continual learning, the learner faces a stream of data whose distribution changes over
time. Modern neural networks are known to suffer under this setting, as they quickly forget …
Architecture matters in continual learning
A large body of research in continual learning is devoted to overcoming the catastrophic
forgetting of neural networks by designing new algorithms that are robust to the distribution …
RanPAC: Random projections and pre-trained models for continual learning
MD McDonnell, D Gong, A Parvaneh… - Advances in …, 2024 - proceedings.neurips.cc
Continual learning (CL) aims to incrementally learn different tasks (such as classification) in
a non-stationary data stream without forgetting old ones. Most CL works focus on tackling …
Regularization shortcomings for continual learning
In most machine learning algorithms, training data is assumed to be independent and
identically distributed (iid). When this is not the case, the algorithm's performance is …
Learning bayesian sparse networks with full experience replay for continual learning
Continual Learning (CL) methods aim to enable machine learning models to learn new
tasks without catastrophic forgetting of those that have been previously mastered. Existing …
A simple baseline that questions the use of pretrained models in continual learning
With the success of pretraining techniques in representation learning, a number of continual
learning methods based on pretrained models have been proposed. Some of these …