A comprehensive survey of continual learning: theory, method and application

L Wang, X Zhang, H Su, J Zhu - IEEE Transactions on Pattern …, 2024 - ieeexplore.ieee.org
To cope with real-world dynamics, an intelligent system needs to incrementally acquire,
update, accumulate, and exploit knowledge throughout its lifetime. This ability, known as …

A comprehensive survey of forgetting in deep learning beyond continual learning

Z Wang, E Yang, L Shen, H Huang - arXiv preprint arXiv:2307.09218, 2023 - arxiv.org
Forgetting refers to the loss or deterioration of previously acquired information or knowledge.
While the existing surveys on forgetting have primarily focused on continual learning …

An empirical study of catastrophic forgetting in large language models during continual fine-tuning

Y Luo, Z Yang, F Meng, Y Li, J Zhou… - arXiv preprint arXiv …, 2023 - arxiv.org
Catastrophic forgetting (CF) is a phenomenon that occurs in machine learning when a
model forgets previously learned information while acquiring new knowledge. As large …

MIPS-Fusion: Multi-implicit-submaps for scalable and robust online neural RGB-D reconstruction

Y Tang, J Zhang, Z Yu, H Wang, K Xu - ACM Transactions on Graphics …, 2023 - dl.acm.org
We introduce MIPS-Fusion, a robust and scalable online RGB-D reconstruction method
based on a novel neural implicit representation, the multi-implicit-submap. Different from existing …

Continual Pre-Training of Large Language Models: How to (re)warm your model?

K Gupta, B Thérien, A Ibrahim, ML Richter… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) are routinely pre-trained on billions of tokens, only to restart
the process once new data becomes available. A much cheaper and more …

A comprehensive empirical evaluation on online continual learning

A Soutif-Cormerais, A Carta, A Cossu… - Proceedings of the …, 2023 - openaccess.thecvf.com
Online continual learning aims to get closer to a live learning experience by learning directly
on a stream of data with a temporally shifting distribution and by storing a minimum amount of …

Continual pre-training mitigates forgetting in language and vision

A Cossu, A Carta, L Passaro, V Lomonaco… - Neural Networks, 2024 - Elsevier
Pre-trained models are commonly used in Continual Learning to initialize the model before
training on the stream of non-stationary data. However, pre-training is rarely applied during …

First session adaptation: A strong replay-free baseline for class-incremental learning

A Panos, Y Kobe, DO Reino… - Proceedings of the …, 2023 - openaccess.thecvf.com
In Class-Incremental Learning (CIL), an image classification system is exposed to
new classes in each learning session and must be updated incrementally. Methods …

Prototype-sample relation distillation: towards replay-free continual learning

N Asadi, MR Davari, S Mudur… - International …, 2023 - proceedings.mlr.press
In continual learning (CL), balancing effective adaptation against catastrophic
forgetting is a central challenge. Many of the recent best-performing methods utilize various …

Grow and merge: A unified framework for continuous categories discovery

X Zhang, J Jiang, Y Feng, ZF Wu… - Advances in …, 2022 - proceedings.neurips.cc
Although a number of studies are devoted to novel category discovery, most of them assume
a static setting where both labeled and unlabeled data are given at once for finding new …