DualPrompt: Complementary prompting for rehearsal-free continual learning
Continual learning aims to enable a single model to learn a sequence of tasks without
catastrophic forgetting. Top-performing methods usually require a rehearsal buffer to store …
Learning to prompt for continual learning
The mainstream paradigm behind continual learning has been to adapt the model
parameters to non-stationary data distributions, where catastrophic forgetting is the central …
Introducing language guidance in prompt-based continual learning
Continual Learning aims to learn a single model on a sequence of tasks without having
access to data from previous tasks. The biggest challenge in the domain still remains …
GCR: Gradient coreset based replay buffer selection for continual learning
Continual learning (CL) aims to develop techniques by which a single model adapts to an
increasing number of tasks encountered sequentially, thereby potentially leveraging …
Generating instance-level prompts for rehearsal-free continual learning
We introduce Domain-Adaptive Prompt (DAP), a novel method for continual
learning using Vision Transformers (ViT). Prompt-based continual learning has recently …
Learning bayesian sparse networks with full experience replay for continual learning
Continual Learning (CL) methods aim to enable machine learning models to learn new
tasks without catastrophic forgetting of those that have been previously mastered. Existing …
Representational continuity for unsupervised continual learning
Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously
acquired knowledge. However, recent CL advances are restricted to supervised continual …
Regularizing second-order influences for continual learning
Continual learning aims to learn on non-stationary data streams without catastrophically
forgetting previous knowledge. Prevalent replay-based methods address this challenge by …
CLIP model is an efficient continual learner
The continual learning setting aims to learn new tasks over time without forgetting the
previous ones. The literature reports several significant efforts to tackle this problem with …
On tiny episodic memories in continual learning
In continual learning (CL), an agent learns from a stream of tasks leveraging prior
experience to transfer knowledge to future tasks. It is an ideal framework to decrease the …