DualPrompt: Complementary prompting for rehearsal-free continual learning

Z Wang, Z Zhang, S Ebrahimi, R Sun, H Zhang… - … on Computer Vision, 2022 - Springer
Continual learning aims to enable a single model to learn a sequence of tasks without
catastrophic forgetting. Top-performing methods usually require a rehearsal buffer to store …

Learning to prompt for continual learning

Z Wang, Z Zhang, CY Lee, H Zhang… - Proceedings of the …, 2022 - openaccess.thecvf.com
The mainstream paradigm behind continual learning has been to adapt the model
parameters to non-stationary data distributions, where catastrophic forgetting is the central …

Introducing language guidance in prompt-based continual learning

MGZA Khan, MF Naeem, L Van Gool… - Proceedings of the …, 2023 - openaccess.thecvf.com
Continual Learning aims to learn a single model on a sequence of tasks without having
access to data from previous tasks. The biggest challenge in the domain still remains …

GCR: Gradient coreset based replay buffer selection for continual learning

R Tiwari, K Killamsetty, R Iyer… - Proceedings of the …, 2022 - openaccess.thecvf.com
Continual learning (CL) aims to develop techniques by which a single model adapts to an
increasing number of tasks encountered sequentially, thereby potentially leveraging …

Generating instance-level prompts for rehearsal-free continual learning

D Jung, D Han, J Bang, H Song - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
We introduce Domain-Adaptive Prompt (DAP), a novel method for continual
learning using Vision Transformers (ViT). Prompt-based continual learning has recently …

Learning Bayesian sparse networks with full experience replay for continual learning

Q Yan, D Gong, Y Liu… - Proceedings of the …, 2022 - openaccess.thecvf.com
Continual Learning (CL) methods aim to enable machine learning models to learn new
tasks without catastrophic forgetting of those that have been previously mastered. Existing …

Representational continuity for unsupervised continual learning

D Madaan, J Yoon, Y Li, Y Liu, SJ Hwang - arXiv preprint arXiv …, 2021 - arxiv.org
Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously
acquired knowledge. However, recent CL advances are restricted to supervised continual …

Regularizing second-order influences for continual learning

Z Sun, Y Mu, G Hua - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Continual learning aims to learn on non-stationary data streams without catastrophically
forgetting previous knowledge. Prevalent replay-based methods address this challenge by …

CLIP model is an efficient continual learner

V Thengane, S Khan, M Hayat, F Khan - arXiv preprint arXiv:2210.03114, 2022 - arxiv.org
The continual learning setting aims to learn new tasks over time without forgetting the
previous ones. The literature reports several significant efforts to tackle this problem with …

On tiny episodic memories in continual learning

A Chaudhry, M Rohrbach, M Elhoseiny… - arXiv preprint arXiv …, 2019 - arxiv.org
In continual learning (CL), an agent learns from a stream of tasks leveraging prior
experience to transfer knowledge to future tasks. It is an ideal framework to decrease the …