A comprehensive survey of continual learning: theory, method and application

L Wang, X Zhang, H Su, J Zhu - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024 - ieeexplore.ieee.org
To cope with real-world dynamics, an intelligent system needs to incrementally acquire,
update, accumulate, and exploit knowledge throughout its lifetime. This ability, known as …

CLAD: A realistic continual learning benchmark for autonomous driving

E Verwimp, K Yang, S Parisot, L Hong, S McDonagh… - Neural Networks, 2023 - Elsevier
In this paper we describe the design and the ideas motivating a new Continual Learning
benchmark for Autonomous Driving (CLAD), which focuses on the problems of object …

DualPrompt: Complementary prompting for rehearsal-free continual learning

Z Wang, Z Zhang, S Ebrahimi, R Sun, H Zhang… - European Conference on Computer Vision, 2022 - Springer
Continual learning aims to enable a single model to learn a sequence of tasks without
catastrophic forgetting. Top-performing methods usually require a rehearsal buffer to store …
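
The rehearsal buffer mentioned above is typically a small episodic memory of past examples replayed alongside new data. A minimal sketch of one common variant, reservoir sampling; class and parameter names here are illustrative, not taken from the paper.

import random

class ReservoirBuffer:
    """Fixed-size memory of past (x, y) pairs, filled by reservoir sampling."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0  # total number of examples observed so far

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Replace a stored item with probability capacity / seen, so every
            # example in the stream is retained with equal probability.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        # Mini-batch of stored examples to mix into the current task's batch.
        return random.sample(self.data, min(batch_size, len(self.data)))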

Learning to prompt for continual learning

Z Wang, Z Zhang, CY Lee, H Zhang… - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022 - openaccess.thecvf.com
The mainstream paradigm behind continual learning has been to adapt the model
parameters to non-stationary data distributions, where catastrophic forgetting is the central …
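
The core mechanism in L2P is a small pool of learnable prompt vectors with learnable keys; for each input, the prompts whose keys best match the frozen backbone's features are prepended to the token sequence. A rough PyTorch sketch of the selection step, with illustrative dimensions, not the authors' implementation.

import torch
import torch.nn.functional as F

class PromptPool(torch.nn.Module):
    """Pool of learnable prompts with learnable keys, selected per input."""
    def __init__(self, pool_size=10, prompt_len=5, dim=768, top_k=4):
        super().__init__()
        self.keys = torch.nn.Parameter(torch.randn(pool_size, dim))
        self.prompts = torch.nn.Parameter(torch.randn(pool_size, prompt_len, dim))
        self.top_k = top_k

    def forward(self, query):
        # query: (batch, dim) features of the input from the frozen backbone.
        sim = F.cosine_similarity(query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1)
        idx = sim.topk(self.top_k, dim=1).indices          # (batch, top_k)
        chosen = self.prompts[idx]                         # (batch, top_k, prompt_len, dim)
        # Flatten the selected prompts into one token sequence, to be prepended
        # to the embedded input before it enters the frozen transformer blocks.
        return chosen.flatten(1, 2)                        # (batch, top_k * prompt_len, dim)

Only the keys and prompts receive gradients; the backbone stays frozen, which is what makes the approach rehearsal-free.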

SLCA: Slow learner with classifier alignment for continual learning on a pre-trained model

G Zhang, L Wang, G Kang… - Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023 - openaccess.thecvf.com
The goal of continual learning is to improve the performance of recognition models in
learning sequentially arriving data. Although most existing works are established on the …
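
The "slow learner" idea in the title refers to fine-tuning the pre-trained backbone with a much smaller learning rate than the freshly initialized classifier, limiting drift across sequential tasks. A minimal PyTorch sketch using parameter groups; the backbone choice and learning rates are illustrative, not the paper's exact values.

import torch
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1")              # stand-in pre-trained backbone
model.fc = torch.nn.Linear(model.fc.in_features, 100)  # new classifier head

backbone = [p for n, p in model.named_parameters() if not n.startswith("fc")]
optimizer = torch.optim.SGD(
    [
        {"params": backbone, "lr": 1e-4},               # slow learner: tiny backbone lr
        {"params": model.fc.parameters(), "lr": 1e-2},  # faster lr for the new head
    ],
    momentum=0.9,
)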

Hierarchical decomposition of prompt-based continual learning: Rethinking obscured sub-optimality

L Wang, J Xie, X Zhang, M Huang… - Advances in Neural Information Processing Systems, 2024 - proceedings.neurips.cc
Prompt-based continual learning is an emerging direction in leveraging pre-trained
knowledge for downstream continual learning, and has almost reached the performance …

RanPAC: Random projections and pre-trained models for continual learning

MD McDonnell, D Gong, A Parvaneh… - Advances in Neural Information Processing Systems, 2024 - proceedings.neurips.cc
Continual learning (CL) aims to incrementally learn different tasks (such as classification) in
a non-stationary data stream without forgetting old ones. Most CL works focus on tackling …
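
The random projections in the title refer to a fixed, untrained nonlinear projection of frozen pre-trained features, on top of which simple per-class statistics can be updated incrementally without forgetting. A rough NumPy sketch of projected class-prototype classification; the dimensions and the ReLU nonlinearity are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
d_in, d_proj = 768, 2000                 # frozen feature dim -> expanded random dim
W = rng.standard_normal((d_in, d_proj))  # fixed random projection, never trained

def project(features):
    # Nonlinear random projection of frozen backbone features (ReLU as one choice).
    return np.maximum(features @ W, 0.0)

class PrototypeClassifier:
    """Class means accumulated incrementally in the projected space."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def update(self, features, labels):
        for z, y in zip(project(features), labels):
            self.sums[y] = self.sums.get(y, 0.0) + z
            self.counts[y] = self.counts.get(y, 0) + 1

    def predict(self, features):
        z = project(features)
        classes = sorted(self.sums)
        protos = np.stack([self.sums[c] / self.counts[c] for c in classes])
        # Cosine similarity of each projected feature to each class prototype.
        sim = (z @ protos.T) / (np.linalg.norm(z, axis=1, keepdims=True)
                                * np.linalg.norm(protos, axis=1) + 1e-8)
        return [classes[i] for i in sim.argmax(axis=1)]

Because the per-class sums and counts are purely additive, new classes can arrive at any point in the stream without touching statistics of old ones.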

SparCL: Sparse continual learning on the edge

Z Wang, Z Zhan, Y Gong, G Yuan… - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
Existing work in continual learning (CL) focuses on mitigating catastrophic forgetting, i.e.,
model performance deterioration on past tasks when learning a new task. However, the …
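
One ingredient of sparse continual training is keeping only the largest-magnitude weights active so that updates stay cheap on edge hardware. A minimal PyTorch sketch of magnitude-based masking; the sparsity level and where the mask is applied are illustrative assumptions, not the paper's full method.

import torch

def magnitude_mask(weight, sparsity=0.9):
    """Boolean mask keeping only the largest-magnitude (1 - sparsity) weights."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return torch.ones_like(weight, dtype=torch.bool)
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight.abs() > threshold

layer = torch.nn.Linear(512, 512)
with torch.no_grad():
    mask = magnitude_mask(layer.weight, sparsity=0.9)
    layer.weight.mul_(mask.to(layer.weight.dtype))  # zero out ~90% of the weights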

Fine-tuned language models are continual learners

T Scialom, T Chakrabarty, S Muresan - arXiv preprint arXiv:2205.12393, 2022 - arxiv.org
Recent work on large language models relies on the intuition that most natural language
processing tasks can be described via natural language instructions. Language models …

On the importance and applicability of pre-training for federated learning

HY Chen, CH Tu, Z Li, HW Shen, WL Chao - arXiv preprint arXiv …, 2022 - arxiv.org
Pre-training is prevalent in modern deep learning as a way to improve the learned model's
performance. However, in the literature on federated learning (FL), neural networks are …