A comprehensive survey of continual learning: theory, method and application
To cope with real-world dynamics, an intelligent system needs to incrementally acquire,
update, accumulate, and exploit knowledge throughout its lifetime. This ability, known as …
CLAD: A realistic continual learning benchmark for autonomous driving
In this paper we describe the design and the ideas motivating a new Continual Learning
benchmark for Autonomous Driving (CLAD), which focuses on the problems of object …
DualPrompt: Complementary prompting for rehearsal-free continual learning
Continual learning aims to enable a single model to learn a sequence of tasks without
catastrophic forgetting. Top-performing methods usually require a rehearsal buffer to store …
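For the rehearsal buffer mentioned in the snippet above, a minimal reservoir-sampling replay buffer looks roughly like the sketch below; this is a generic illustration of rehearsal (the kind of buffer DualPrompt avoids), not the paper's method, and the class and method names are hypothetical.

```python
import random

class ReplayBuffer:
    """Minimal reservoir-sampling rehearsal buffer (illustrative sketch only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []      # stored (example, label) pairs
        self.n_seen = 0     # total examples observed so far

    def store(self, example, label):
        # Reservoir sampling: once full, each seen example is retained
        # with equal probability, regardless of when it arrived.
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((example, label))
        else:
            idx = random.randrange(self.n_seen)
            if idx < self.capacity:
                self.data[idx] = (example, label)

    def sample(self, batch_size):
        # Draw a random replay mini-batch to mix into each new-task batch.
        k = min(batch_size, len(self.data))
        return random.sample(self.data, k)
```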
Learning to prompt for continual learning
The mainstream paradigm behind continual learning has been to adapt the model
parameters to non-stationary data distributions, where catastrophic forgetting is the central …
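In contrast to adapting the model parameters, prompt-based methods such as the one in this paper keep a pre-trained backbone frozen and train only a small set of prompt vectors prepended to the input tokens. The sketch below illustrates that general idea under assumed interfaces (a frozen encoder that returns per-token features); it is not the paper's prompt-pool and query-key mechanism, and all names are hypothetical.

```python
import torch
import torch.nn as nn

class PromptedBackbone(nn.Module):
    """Frozen encoder with learnable prompt tokens prepended to the input
    sequence (generic sketch; interfaces and names are assumptions)."""

    def __init__(self, backbone, embed_dim, n_prompts=10, n_classes=100):
        super().__init__()
        self.backbone = backbone                  # assumed frozen transformer encoder
        for p in self.backbone.parameters():
            p.requires_grad = False               # only prompts and head are trained
        self.prompts = nn.Parameter(torch.randn(n_prompts, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, token_embeddings):
        # token_embeddings: (batch, seq_len, embed_dim) patch/word embeddings
        batch = token_embeddings.size(0)
        prompts = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        x = torch.cat([prompts, token_embeddings], dim=1)
        features = self.backbone(x)               # assumed to return (batch, seq, dim)
        return self.head(features[:, 0])          # classify from the first token
```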
SLCA: Slow learner with classifier alignment for continual learning on a pre-trained model
The goal of continual learning is to improve the performance of recognition models in
learning from sequentially arriving data. Although most existing works are established on the …
Hierarchical decomposition of prompt-based continual learning: Rethinking obscured sub-optimality
Prompt-based continual learning is an emerging direction in leveraging pre-trained
knowledge for downstream continual learning, and has almost reached the performance …
RanPAC: Random projections and pre-trained models for continual learning
Continual learning (CL) aims to incrementally learn different tasks (such as classification) in
a non-stationary data stream without forgetting old ones. Most CL works focus on tackling …
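The title's combination of random projections with pre-trained models can be pictured as expanding frozen backbone features through a fixed random matrix and classifying with per-class prototypes. The sketch below is a generic illustration under those assumptions, not the paper's exact procedure; the function names and defaults are made up.

```python
import numpy as np

def random_projection_prototypes(features, labels, out_dim=2000, seed=0):
    """Project frozen pre-trained features with a fixed random matrix and a
    nonlinearity, then build one prototype per class (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    in_dim = features.shape[1]
    W = rng.standard_normal((in_dim, out_dim)) / np.sqrt(in_dim)  # frozen random projection
    z = np.maximum(features @ W, 0.0)                             # expanded nonlinear features
    classes = np.unique(labels)
    prototypes = np.stack([z[labels == c].mean(axis=0) for c in classes])
    return W, classes, prototypes

def predict(features, W, classes, prototypes):
    """Nearest-prototype prediction in the projected space (cosine similarity)."""
    z = np.maximum(features @ W, 0.0)
    z = z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    return classes[np.argmax(z @ p.T, axis=1)]
```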
SparCL: Sparse continual learning on the edge
Existing work in continual learning (CL) focuses on mitigating catastrophic forgetting, i.e.,
model performance deterioration on past tasks when learning a new task. However, the …
Fine-tuned language models are continual learners
Recent work on large language models relies on the intuition that most natural language
processing tasks can be described via natural language instructions. Language models …
On the importance and applicability of pre-training for federated learning
Pre-training is now prevalent in deep learning as a way to improve the learned model's
performance. However, in the literature on federated learning (FL), neural networks are …