Efficient continual learning with modular networks and task-driven priors

T Veniat, L Denoyer, MA Ranzato - arXiv preprint arXiv:2012.12631, 2020 - arxiv.org
Existing literature in Continual Learning (CL) has focused on overcoming catastrophic
forgetting, the inability of the learner to recall how to perform tasks observed in the past …

On tiny episodic memories in continual learning

A Chaudhry, M Rohrbach, M Elhoseiny… - arXiv preprint arXiv …, 2019 - arxiv.org
In continual learning (CL), an agent learns from a stream of tasks, leveraging prior
experience to transfer knowledge to future tasks. It is an ideal framework to decrease the …

GCR: Gradient coreset based replay buffer selection for continual learning

R Tiwari, K Killamsetty, R Iyer… - Proceedings of the …, 2022 - openaccess.thecvf.com
Continual learning (CL) aims to develop techniques by which a single model adapts to an
increasing number of tasks encountered sequentially, thereby potentially leveraging …

BNS: Building network structures dynamically for continual learning

Q Qin, W Hu, H Peng, D Zhao… - Advances in Neural …, 2021 - proceedings.neurips.cc
Continual learning (CL) of a sequence of tasks is often accompanied by the catastrophic
forgetting (CF) problem. Existing research has achieved remarkable results in overcoming …

Using hindsight to anchor past knowledge in continual learning

A Chaudhry, A Gordo, P Dokania, P Torr… - Proceedings of the …, 2021 - ojs.aaai.org
In continual learning, the learner faces a stream of data whose distribution changes over
time. Modern neural networks are known to suffer under this setting, as they quickly forget …

Architecture matters in continual learning

SI Mirzadeh, A Chaudhry, D Yin, T Nguyen… - arXiv preprint arXiv …, 2022 - arxiv.org
A large body of research in continual learning is devoted to overcoming the catastrophic
forgetting of neural networks by designing new algorithms that are robust to the distribution …

RanPAC: Random projections and pre-trained models for continual learning

MD McDonnell, D Gong, A Parvaneh… - Advances in …, 2024 - proceedings.neurips.cc
Continual learning (CL) aims to incrementally learn different tasks (such as classification) in
a non-stationary data stream without forgetting old ones. Most CL works focus on tackling …

Regularization shortcomings for continual learning

T Lesort, A Stoian, D Filliat - arXiv preprint arXiv:1912.03049, 2019 - arxiv.org
In most machine learning algorithms, training data is assumed to be independent and
identically distributed (iid). When this is not the case, the algorithm's performance is …

Learning bayesian sparse networks with full experience replay for continual learning

Q Yan, D Gong, Y Liu… - Proceedings of the …, 2022 - openaccess.thecvf.com
Continual Learning (CL) methods aim to enable machine learning models to learn new
tasks without catastrophic forgetting of those that have been previously mastered. Existing …

A simple baseline that questions the use of pretrained-models in continual learning

P Janson, W Zhang, R Aljundi, M Elhoseiny - arXiv preprint arXiv …, 2022 - arxiv.org
With the success of pretraining techniques in representation learning, a number of continual
learning methods based on pretrained models have been proposed. Some of these …