Dark experience for general continual learning: a strong, simple baseline

P Buzzega, M Boschini, A Porrello… - Advances in neural …, 2020 - proceedings.neurips.cc
Continual Learning has inspired a plethora of approaches and evaluation settings; however,
the majority of them overlook the properties of a practical scenario, where the data stream …
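The core mechanism of this paper is to store past inputs together with the network's logits and to replay them with a mean-squared-error term that anchors current predictions to the recorded ones. Below is a minimal sketch of that idea, assuming PyTorch; `Buffer`, `der_step`, and `alpha` are illustrative names, and the DER++ variant's additional label-replay term is omitted.

```python
# Minimal dark-experience-replay-style training step (sketch, not the
# authors' reference implementation).
import random
import torch
import torch.nn.functional as F

class Buffer:
    """Tiny replay buffer of (input, logits) pairs, written with
    reservoir sampling over the stream."""
    def __init__(self, capacity):
        self.capacity, self.seen = capacity, 0
        self.items = []

    def add(self, x, z):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append((x, z))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = (x, z)

    def sample(self, k):
        xs, zs = zip(*random.sample(self.items, min(k, len(self.items))))
        return torch.stack(xs), torch.stack(zs)

def der_step(model, optimizer, x, y, buffer, alpha=0.5):
    """One stream batch: cross-entropy on current data plus an MSE term
    pulling logits on buffered samples toward the logits recorded when
    those samples were first seen."""
    optimizer.zero_grad()
    logits = model(x)
    loss = F.cross_entropy(logits, y)
    if buffer.items:
        bx, bz = buffer.sample(x.size(0))
        loss = loss + alpha * F.mse_loss(model(bx), bz)  # logit matching
    loss.backward()
    optimizer.step()
    for xi, zi in zip(x, logits.detach()):
        buffer.add(xi, zi)
    return loss.item()
```

Matching logits rather than hard labels lets the replay term carry the full response of the past network, which is what the "dark" (as in dark knowledge/distillation) framing refers to.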

Representational continuity for unsupervised continual learning

D Madaan, J Yoon, Y Li, Y Liu, SJ Hwang - arXiv preprint arXiv …, 2021 - arxiv.org
Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously
acquired knowledge. However, recent CL advances are restricted to supervised continual …

GCR: Gradient coreset based replay buffer selection for continual learning

R Tiwari, K Killamsetty, R Iyer… - Proceedings of the …, 2022 - openaccess.thecvf.com
Continual learning (CL) aims to develop techniques by which a single model adapts to an
increasing number of tasks encountered sequentially, thereby potentially leveraging …
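The selection criterion here is gradient matching: pick a weighted subset of candidates whose summed gradient approximates the gradient of all the data seen so far. The sketch below illustrates that idea with a greedy matching-pursuit loop over precomputed per-sample gradients, assuming NumPy; it is an illustration of the criterion, not the paper's exact weighted objective or solver.

```python
# Greedy gradient-matching coreset selection (sketch under stated
# assumptions; `greedy_gradient_coreset` is an illustrative name).
import numpy as np

def greedy_gradient_coreset(per_sample_grads, k):
    """Pick k samples whose summed gradient best tracks the full-data
    gradient.

    per_sample_grads: (n, d) array, one flattened gradient per example.
    Returns the indices of the selected coreset.
    """
    n = per_sample_grads.shape[0]
    target = per_sample_grads.sum(axis=0)   # full-data gradient
    selected, residual = [], target.copy()
    scale = n / max(k, 1)                   # reweight coreset to full size
    for _ in range(k):
        # Score remaining samples by how much they reduce the residual.
        scores = per_sample_grads @ residual
        scores[selected] = -np.inf          # never pick twice
        i = int(np.argmax(scores))
        selected.append(i)
        residual = residual - scale * per_sample_grads[i]
    return selected
```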

DualPrompt: Complementary prompting for rehearsal-free continual learning

Z Wang, Z Zhang, S Ebrahimi, R Sun, H Zhang… - … on Computer Vision, 2022 - Springer
Continual learning aims to enable a single model to learn a sequence of tasks without
catastrophic forgetting. Top-performing methods usually require a rehearsal buffer to store …
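The paper's two prompt sets are a general (G-)prompt shared across tasks and expert (E-)prompts selected per input by key matching, both attached to a frozen pretrained transformer. A minimal sketch follows, assuming PyTorch; real DualPrompt injects the prompts at specific attention layers via prefix tuning, whereas here they are simply prepended to the token sequence, and all names are illustrative.

```python
# Complementary prompting sketch: shared G-prompt + key-selected E-prompt.
import torch
import torch.nn as nn

class DualPromptPool(nn.Module):
    def __init__(self, n_tasks, prompt_len, dim):
        super().__init__()
        self.g_prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.e_prompts = nn.Parameter(torch.randn(n_tasks, prompt_len, dim) * 0.02)
        self.keys = nn.Parameter(torch.randn(n_tasks, dim) * 0.02)  # task inference

    def forward(self, tokens, query):
        # tokens: (B, N, dim) patch embeddings; query: (B, dim) frozen feature.
        sim = (nn.functional.normalize(query, dim=-1)
               @ nn.functional.normalize(self.keys, dim=-1).T)
        idx = sim.argmax(dim=-1)                  # best-matching expert prompt
        e = self.e_prompts[idx]                   # (B, prompt_len, dim)
        g = self.g_prompt.expand(tokens.size(0), -1, -1)
        return torch.cat([g, e, tokens], dim=1)   # prepend both prompt sets

# Usage with ViT-like shapes:
pool = DualPromptPool(n_tasks=10, prompt_len=5, dim=768)
tokens = torch.randn(4, 196, 768)                 # patch tokens
query = torch.randn(4, 768)                       # e.g., frozen [CLS] feature
print(pool(tokens, query).shape)                  # (4, 196 + 2*5, 768)
```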

Online continual learning under extreme memory constraints

E Fini, S Lathuiliere, E Sangineto, M Nabi… - Computer Vision–ECCV …, 2020 - Springer
Continual Learning (CL) aims to develop agents emulating the human ability to sequentially
learn new tasks while being able to retain knowledge obtained from past experiences. In this …

On the effectiveness of Lipschitz-driven rehearsal in continual learning

L Bonicelli, M Boschini, A Porrello… - Advances in …, 2022 - proceedings.neurips.cc
Rehearsal approaches enjoy immense popularity with Continual Learning (CL)
practitioners. These methods collect samples from previously encountered data distributions …

Learning Bayesian sparse networks with full experience replay for continual learning

Q Yan, D Gong, Y Liu… - Proceedings of the …, 2022 - openaccess.thecvf.com
Continual Learning (CL) methods aim to enable machine learning models to learn new
tasks without catastrophic forgetting of those that have been previously mastered. Existing …

On tiny episodic memories in continual learning

A Chaudhry, M Rohrbach, M Elhoseiny… - arXiv preprint arXiv …, 2019 - arxiv.org
In continual learning (CL), an agent learns from a stream of tasks, leveraging prior
experience to transfer knowledge to future tasks. It is an ideal framework to decrease the …
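A tiny episodic memory is typically written with a policy such as reservoir sampling, one of the strategies studied in this line of work, which keeps every stream example in the buffer with equal probability. A self-contained sketch, assuming plain Python (`reservoir_update` is an illustrative name):

```python
# Reservoir sampling for a small episodic memory (sketch).
import random

def reservoir_update(memory, capacity, seen, example):
    """Insert `example` (the `seen`-th stream item, 0-indexed) in place."""
    if len(memory) < capacity:
        memory.append(example)
    else:
        j = random.randrange(seen + 1)   # uniform over all items seen so far
        if j < capacity:
            memory[j] = example

# After the whole stream, each item survives with probability capacity/n.
memory, capacity = [], 5
for t, x in enumerate(range(100)):       # a toy stream of 100 examples
    reservoir_update(memory, capacity, t, x)
print(memory)                            # 5 uniformly chosen stream elements
```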

Rethinking experience replay: a bag of tricks for continual learning

P Buzzega, M Boschini, A Porrello… - … Conference on Pattern …, 2021 - ieeexplore.ieee.org
In Continual Learning, a Neural Network is trained on a stream of data whose distribution
shifts over time. Under these assumptions, it is especially challenging to improve on classes …

Continual normalization: Rethinking batch normalization for online continual learning

Q Pham, C Liu, S Hoi - arXiv preprint arXiv:2203.16102, 2022 - arxiv.org
Existing continual learning methods use Batch Normalization (BN) to facilitate training and
improve generalization across tasks. However, the non-iid and non-stationary nature of …
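The remedy proposed in this paper is to normalize each sample spatially before computing batch statistics, so that the moments BN tracks are less contaminated by the non-stationary task mixture. A minimal sketch of such a layer, assuming PyTorch: GroupNorm without its own affine parameters followed by standard BatchNorm; `num_groups=32` is an illustrative choice, not the paper's prescription.

```python
# Continual-normalization-style layer: per-sample spatial normalization
# (GroupNorm, no affine) followed by BatchNorm (sketch).
import torch
import torch.nn as nn

class ContinualNorm(nn.Module):
    def __init__(self, num_channels, num_groups=32):
        super().__init__()
        self.gn = nn.GroupNorm(num_groups, num_channels, affine=False)
        self.bn = nn.BatchNorm2d(num_channels)

    def forward(self, x):
        return self.bn(self.gn(x))

# Drop-in usage in place of nn.BatchNorm2d:
layer = ContinualNorm(num_channels=64)
out = layer(torch.randn(8, 64, 16, 16))
```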