Dark experience for general continual learning: a strong, simple baseline
Continual Learning has inspired a plethora of approaches and evaluation settings; however,
the majority of them overlook the properties of a practical scenario, where the data stream …
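For orientation, the method this title names (dark experience replay) augments plain rehearsal by storing the network's logits alongside each buffered input and later matching those logits with an MSE term. A minimal sketch follows, assuming a reservoir-written buffer and an arbitrarily chosen alpha weight; `DarkBuffer` and `der_step` are illustrative names, not the authors' code.

```python
import random
import torch
import torch.nn.functional as F

class DarkBuffer:
    """Reservoir buffer holding (input, logits-at-insertion) pairs."""
    def __init__(self, capacity: int):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, x: torch.Tensor, z: torch.Tensor) -> None:
        if len(self.data) < self.capacity:
            self.data.append((x, z))
        else:
            j = random.randint(0, self.seen)  # replace w.p. capacity / (seen + 1)
            if j < self.capacity:
                self.data[j] = (x, z)
        self.seen += 1

    def sample(self, k: int):
        xs, zs = zip(*random.sample(self.data, min(k, len(self.data))))
        return torch.stack(xs), torch.stack(zs)

def der_step(model, opt, x, y, buffer: DarkBuffer, alpha: float = 0.5) -> None:
    """Cross-entropy on the stream batch plus logit matching on replay."""
    opt.zero_grad()
    logits = model(x)
    loss = F.cross_entropy(logits, y)
    if buffer.data:
        xb, zb = buffer.sample(len(x))
        loss = loss + alpha * F.mse_loss(model(xb), zb)
    loss.backward()
    opt.step()
    for xi, zi in zip(x, logits.detach()):  # store each input with the logits just computed for it
        buffer.add(xi, zi)
```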
Representational continuity for unsupervised continual learning
Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously
acquired knowledge. However, recent CL advances are restricted to supervised continual …
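The rehearsal variant proposed in this work (lifelong unsupervised mixup) interpolates current-task inputs with instances replayed from a buffer before the self-supervised objective is computed. A rough sketch under that reading; the Beta(0.4, 0.4) mixing coefficient and the function name are my assumptions.

```python
import torch

def lump_mix(x_cur: torch.Tensor, x_buf: torch.Tensor, alpha: float = 0.4) -> torch.Tensor:
    """Mixup-style interpolation of a current batch with replayed inputs;
    the unsupervised loss (e.g. a SimSiam objective) then runs on the
    mixed batch instead of the raw one."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * x_cur + (1.0 - lam) * x_buf
```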
GCR: Gradient coreset based replay buffer selection for continual learning
Continual learning (CL) aims to develop techniques by which a single model adapts to an
increasing number of tasks encountered sequentially, thereby potentially leveraging …
DualPrompt: Complementary prompting for rehearsal-free continual learning
Continual learning aims to enable a single model to learn a sequence of tasks without
catastrophic forgetting. Top-performing methods usually require a rehearsal buffer to store …
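Methods in this family keep a pretrained transformer frozen and learn only small sets of prompt tokens inserted into the token sequence; DualPrompt splits them into shared general prompts and task-specific expert prompts attached at different layers. Below is only the generic prepending mechanism, with sizes and initialization chosen arbitrarily, not the paper's full prompt-selection logic.

```python
import torch
import torch.nn as nn

class PromptTokens(nn.Module):
    """Learnable tokens prepended to a (frozen) encoder's input sequence."""
    def __init__(self, n_prompts: int, dim: int):
        super().__init__()
        self.prompts = nn.Parameter(0.02 * torch.randn(n_prompts, dim))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq, dim) -> (batch, n_prompts + seq, dim)
        batch = tokens.shape[0]
        p = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([p, tokens], dim=1)
```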
Online continual learning under extreme memory constraints
Continual Learning (CL) aims to develop agents emulating the human ability to sequentially
learn new tasks while being able to retain knowledge obtained from past experiences. In this …
On the effectiveness of Lipschitz-driven rehearsal in continual learning
Rehearsal approaches enjoy immense popularity with Continual Learning (CL)
practitioners. These methods collect samples from previously encountered data distributions …
Learning bayesian sparse networks with full experience replay for continual learning
Continual Learning (CL) methods aim to enable machine learning models to learn new
tasks without catastrophic forgetting of those that have been previously mastered. Existing …
On tiny episodic memories in continual learning
In continual learning (CL), an agent learns from a stream of tasks leveraging prior
experience to transfer knowledge to future tasks. It is an ideal framework to decrease the …
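The baseline studied here is deliberately simple: write the stream into a tiny memory with reservoir sampling, so it stays an approximately uniform sample of everything seen, and train each step on the incoming batch jointly with a batch drawn from memory. A minimal sketch; the names and the replay batch size are mine, not the paper's.

```python
import random
import torch
import torch.nn.functional as F

class EpisodicMemory:
    """Tiny labeled memory written with reservoir sampling."""
    def __init__(self, capacity: int):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, x: torch.Tensor, y: torch.Tensor) -> None:
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randint(0, self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)
        self.seen += 1

    def sample(self, k: int):
        xs, ys = zip(*random.sample(self.data, min(k, len(self.data))))
        return torch.stack(xs), torch.stack(ys)

def er_step(model, opt, x, y, memory: EpisodicMemory) -> None:
    """One step of experience replay on the joint (stream + memory) batch."""
    x_all, y_all = x, y
    if memory.data:
        xm, ym = memory.sample(len(x))
        x_all, y_all = torch.cat([x, xm]), torch.cat([y, ym])
    opt.zero_grad()
    F.cross_entropy(model(x_all), y_all).backward()
    opt.step()
    for xi, yi in zip(x, y):  # only stream samples enter the memory
        memory.add(xi, yi)
```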
Rethinking experience replay: a bag of tricks for continual learning
In Continual Learning, a Neural Network is trained on a stream of data whose distribution
shifts over time. Under these assumptions, it is especially challenging to improve on classes …
Continual normalization: Rethinking batch normalization for online continual learning
Existing continual learning methods use Batch Normalization (BN) to facilitate training and
improve generalization across tasks. However, the non-iid and non-stationary nature of …
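The layer proposed here (Continual Normalization) addresses that by composing a batch-independent spatial normalization with batch normalization, keeping BN's optimization benefits while reducing its sensitivity to the non-i.i.d. stream. A sketch of that composition, assuming GroupNorm as the spatial step with the affine transform left to BN; the group count is an arbitrary choice here.

```python
import torch
import torch.nn as nn

class ContinualNorm(nn.Module):
    """Group-normalize each sample spatially, then batch-normalize.

    The GN step carries no affine parameters in this sketch; scale and
    shift come from BN alone (an assumption, not necessarily the paper's
    exact parameterization). num_channels must divide by num_groups."""
    def __init__(self, num_channels: int, num_groups: int = 32):
        super().__init__()
        self.gn = nn.GroupNorm(num_groups, num_channels, affine=False)
        self.bn = nn.BatchNorm2d(num_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.bn(self.gn(x))
```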