Beyond not-forgetting: Continual learning with backward knowledge transfer

S Lin, L Yang, D Fan, J Zhang - Advances in Neural …, 2022 - proceedings.neurips.cc
By learning a sequence of tasks continually, an agent in continual learning (CL) can improve
the learning performance of both a new task and 'old' tasks by leveraging the forward …

Bns: Building network structures dynamically for continual learning

Q Qin, W Hu, H Peng, D Zhao… - Advances in Neural …, 2021 - proceedings.neurips.cc
Continual learning (CL) of a sequence of tasks is often accompanied by the catastrophic
forgetting (CF) problem. Existing research has achieved remarkable results in overcoming …

Sub-network Discovery and Soft-masking for Continual Learning of Mixed Tasks

Z Ke, B Liu, W Xiong, A Celikyilmaz, H Li - arXiv preprint arXiv:2310.09436, 2023 - arxiv.org
Continual learning (CL) has two main objectives: preventing catastrophic forgetting (CF) and
encouraging knowledge transfer (KT). The existing literature mainly focused on overcoming …

Learning invariant representation for continual learning

G Sokar, DC Mocanu, M Pechenizkiy - arXiv preprint arXiv:2101.06162, 2021 - arxiv.org
Continual learning aims to provide intelligent agents that are capable of continually learning
a sequence of tasks, building on previously learned knowledge. A key challenge in this …

Parameter-level soft-masking for continual learning

T Konishi, M Kurokawa, C Ono, Z Ke… - International …, 2023 - proceedings.mlr.press
Existing research on task incremental learning in continual learning has primarily focused
on preventing catastrophic forgetting (CF). Although several techniques have achieved …
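The parameter-level soft-masking idea named in this title can be pictured as scaling each parameter's gradient by an accumulated importance score, so that weights that mattered for earlier tasks move less without being hard-frozen. The sketch below is only an illustration of that general idea, not the paper's method; the importance estimator (a normalized running average of gradient magnitudes) and the update rule are assumptions.

```python
import torch

def update_importances(model, importances, decay=0.9):
    # Illustrative importance estimate (an assumption, not the paper's
    # estimator): running average of per-parameter |grad|, normalized
    # to [0, 1] within each parameter tensor.
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is None:
                continue
            g = p.grad.abs()
            g = g / (g.max() + 1e-12)
            prev = importances.get(name, torch.zeros_like(p))
            importances[name] = decay * prev + (1 - decay) * g

def soft_masked_step(model, loss, importances, lr=0.1):
    # One SGD step where each gradient is scaled by (1 - importance):
    # parameters deemed important for earlier tasks are dampened
    # ("soft-masked") rather than frozen outright.
    model.zero_grad()
    loss.backward()
    update_importances(model, importances)
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is None:
                continue
            mask = 1.0 - importances.get(name, torch.zeros_like(p))
            p -= lr * mask * p.grad
```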

On tiny episodic memories in continual learning

A Chaudhry, M Rohrbach, M Elhoseiny… - arXiv preprint arXiv …, 2019 - arxiv.org
In continual learning (CL), an agent learns from a stream of tasks leveraging prior
experience to transfer knowledge to future tasks. It is an ideal framework to decrease the …
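The setting this snippet describes, a task stream paired with a small ("tiny") episodic memory, is commonly instantiated as experience replay: keep a fixed-size buffer of past examples and mix a few of them into every new batch. The following is a minimal sketch under that reading; the buffer capacity, the reservoir-sampling policy, and the replay batch size are assumptions chosen for illustration.

```python
import random
import torch
import torch.nn.functional as F

class TinyEpisodicMemory:
    # Fixed-size buffer filled by reservoir sampling over the stream,
    # so every example seen so far has equal probability of being kept.
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.data = []      # list of (x, y) pairs
        self.num_seen = 0

    def add(self, x, y):
        self.num_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.num_seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def replay_step(model, optimizer, memory, x, y, replay_k=10):
    # Train on the incoming batch plus a few replayed examples,
    # then insert the new (non-replayed) examples into the memory.
    xb, yb = x, y
    if memory.data:
        mx, my = memory.sample(replay_k)
        xb, yb = torch.cat([x, mx]), torch.cat([y, my])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(xb), yb)
    loss.backward()
    optimizer.step()
    for xi, yi in zip(x, y):
        memory.add(xi, yi)
    return loss.item()
```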

Does continual learning equally forget all parameters?

H Zhao, T Zhou, G Long, J Jiang… - … on Machine Learning, 2023 - proceedings.mlr.press
Distribution shift (e.g., task or domain shift) in continual learning (CL) usually results in
catastrophic forgetting of previously learned knowledge. Although it can be alleviated by …

Self-attention meta-learner for continual learning

G Sokar, DC Mocanu, M Pechenizkiy - arXiv preprint arXiv:2101.12136, 2021 - arxiv.org
Continual learning aims to provide intelligent agents capable of learning multiple tasks
sequentially with neural networks. One of its main challenges, catastrophic forgetting, is …

Achieving a better stability-plasticity trade-off via auxiliary networks in continual learning

S Kim, L Noci, A Orvieto… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
In contrast to the natural capabilities of humans to learn new tasks in a sequential fashion,
neural networks are known to suffer from catastrophic forgetting, where the model's …

Generative feature replay with orthogonal weight modification for continual learning

G Shen, S Zhang, X Chen… - 2021 International Joint …, 2021 - ieeexplore.ieee.org
The ability of intelligent agents to learn and remember multiple tasks sequentially is crucial
to achieving artificial general intelligence. Many continual learning (CL) methods have been …