Continual normalization: Rethinking batch normalization for online continual learning
Existing continual learning methods use Batch Normalization (BN) to facilitate training and
improve generalization across tasks. However, the non-iid and non-stationary nature of …
New insights on reducing abrupt representation change in online continual learning
In the online continual learning paradigm, agents must learn from a changing distribution
while respecting memory and compute constraints. Experience Replay (ER), where a small …
Gradient based memory editing for task-free continual learning
Prior work on continual learning often operates in a "task-aware" manner, assuming that
the task boundaries and identities of the data instances are known at all times. While in …
Scalable and order-robust continual learning with additive parameter decomposition
While recent continual learning methods largely alleviate the catastrophic forgetting problem on toy-
sized datasets, some issues remain to be tackled to apply them to real-world problem …
Representational continuity for unsupervised continual learning
Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously
acquired knowledge. However, recent CL advances are restricted to supervised continual …
Online continual learning on class incremental blurry task configuration with anytime inference
Despite rapid advances in continual learning, a large body of research is devoted to
improving performance in the existing setups. While a handful of works do propose new …
Mitigating forgetting in online continual learning with neuron calibration
Inspired by human intelligence, the research on online continual learning aims to push the
limits of the machine learning models to constantly learn from sequentially encountered …
GCR: Gradient coreset based replay buffer selection for continual learning
Continual learning (CL) aims to develop techniques by which a single model adapts to an
increasing number of tasks encountered sequentially, thereby potentially leveraging …
Overcoming recency bias of normalization statistics in continual learning: Balance and adaptation
Continual learning entails learning a sequence of tasks and balancing their knowledge
appropriately. With limited access to old training samples, much of the current work in deep …
Rethinking experience replay: a bag of tricks for continual learning
In Continual Learning, a Neural Network is trained on a stream of data whose distribution
shifts over time. In this setting, it is especially challenging to improve on classes …