Continual normalization: Rethinking batch normalization for online continual learning

Q Pham, C Liu, S Hoi - arXiv preprint arXiv:2203.16102, 2022 - arxiv.org
Existing continual learning methods use Batch Normalization (BN) to facilitate training and
improve generalization across tasks. However, the non-iid and non-stationary nature of …
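
For context, a minimal sketch of the mechanics at issue (standard BatchNorm behavior, not this paper's method): BN's running statistics are an exponential moving average over batches, so on a non-stationary stream they drift toward the most recently seen task.

import torch
import torch.nn as nn

bn = nn.BatchNorm1d(8, momentum=0.1)

# Two "tasks" with different input statistics, seen sequentially.
task_a = torch.randn(100, 32, 8) * 1.0        # mean ~0, std ~1
task_b = torch.randn(100, 32, 8) * 3.0 + 5.0  # mean ~5, std ~3

bn.train()
for batch in task_a:          # each batch has shape (32, 8)
    bn(batch)
mean_after_a = bn.running_mean.clone()

for batch in task_b:
    bn(batch)
mean_after_b = bn.running_mean.clone()

# running_mean now sits near task B's statistics (~5), so test-time
# normalization of task A inputs is biased toward the newer task.
print(mean_after_a.mean().item(), mean_after_b.mean().item())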

New insights on reducing abrupt representation change in online continual learning

L Caccia, R Aljundi, N Asadi, T Tuytelaars… - arXiv preprint arXiv …, 2021 - arxiv.org
In the online continual learning paradigm, agents must learn from a changing distribution
while respecting memory and compute constraints. Experience Replay (ER), where a small …
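
The ER setup this snippet introduces is standard in the literature; as a point of reference, here is a minimal sketch of vanilla ER with a reservoir-sampled buffer (names and hyperparameters are illustrative, not taken from the paper):

import random
import torch
import torch.nn.functional as F

class ReservoirBuffer:
    """Fixed-size memory holding an approximately uniform sample of the stream."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []       # list of (x, y) pairs
        self.num_seen = 0

    def add(self, x, y):
        self.num_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            idx = random.randrange(self.num_seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, n):
        batch = random.sample(self.data, min(n, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def er_step(model, optimizer, buffer, x, y, replay_size=32):
    """One online ER update: train on the incoming batch plus a replayed
    batch, then store the new examples in the buffer."""
    loss = F.cross_entropy(model(x), y)
    if buffer.data:
        rx, ry = buffer.sample(replay_size)
        loss = loss + F.cross_entropy(model(rx), ry)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    for xi, yi in zip(x, y):
        buffer.add(xi, yi)

Reservoir sampling keeps the buffer an approximately uniform sample of everything seen so far, without requiring task boundaries.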

Gradient based memory editing for task-free continual learning

X Jin, J Du, X Ren - 4th Lifelong Machine Learning Workshop at …, 2020 - openreview.net
Prior work on continual learning often operates in a "task-aware" manner, assuming that
the task boundaries and identities of the data instances are known at all times. While in …

Scalable and order-robust continual learning with additive parameter decomposition

J Yoon, S Kim, E Yang, SJ Hwang - arXiv preprint arXiv:1902.09432, 2019 - arxiv.org
While recent continual learning methods largely alleviate the catastrophic forgetting
problem on toy-sized datasets, some issues remain to be tackled to apply them to real-world problem …

Representational continuity for unsupervised continual learning

D Madaan, J Yoon, Y Li, Y Liu, SJ Hwang - arXiv preprint arXiv …, 2021 - arxiv.org
Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously
acquired knowledge. However, recent CL advances are restricted to supervised continual …

Online continual learning on class incremental blurry task configuration with anytime inference

H Koh, D Kim, JW Ha, J Choi - arXiv preprint arXiv:2110.10031, 2021 - arxiv.org
Despite rapid advances in continual learning, a large body of research is devoted to
improving performance in existing setups. While a handful of works do propose new …

Mitigating forgetting in online continual learning with neuron calibration

H Yin, P Li - Advances in Neural Information Processing …, 2021 - proceedings.neurips.cc
Inspired by human intelligence, research on online continual learning aims to push the
limits of machine learning models to constantly learn from sequentially encountered …

GCR: Gradient coreset based replay buffer selection for continual learning

R Tiwari, K Killamsetty, R Iyer… - Proceedings of the …, 2022 - openaccess.thecvf.com
Continual learning (CL) aims to develop techniques by which a single model adapts to an
increasing number of tasks encountered sequentially, thereby potentially leveraging …

Overcoming recency bias of normalization statistics in continual learning: Balance and adaptation

Y Lyu, L Wang, X Zhang, Z Sun, H Su… - Advances in Neural …, 2024 - proceedings.neurips.cc
Continual learning entails learning a sequence of tasks and balancing their knowledge
appropriately. With limited access to old training samples, much of the current work in deep …

Rethinking experience replay: a bag of tricks for continual learning

P Buzzega, M Boschini, A Porrello… - … Conference on Pattern …, 2021 - ieeexplore.ieee.org
In Continual Learning, a Neural Network is trained on a stream of data whose distribution
shifts over time. Under these assumptions, it is especially challenging to improve on classes …