Towards continual reinforcement learning: A review and perspectives
In this article, we aim to provide a literature review of different formulations and approaches
to continual reinforcement learning (RL), also known as lifelong or non-stationary RL. We …
Continual learning of natural language processing tasks: A survey
Continual learning (CL) is a learning paradigm that emulates the human capability of
learning and accumulating knowledge continually without forgetting the previously learned …
Class-incremental learning: survey and performance evaluation on image classification
For future learning systems, incremental learning is desirable because it allows for: efficient
resource usage by eliminating the need to retrain from scratch at the arrival of new data; …
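For reference, the class-incremental protocol surveyed here splits a dataset's classes into disjoint groups that arrive one task at a time, so the model never revisits the full training set. A small sketch of that split (the number of classes per task is an arbitrary choice of mine):

```python
import numpy as np

def class_incremental_tasks(labels, classes_per_task):
    """Group sample indices into tasks of disjoint classes (standard CIL protocol sketch)."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    tasks = []
    for start in range(0, len(classes), classes_per_task):
        task_classes = classes[start:start + classes_per_task]
        indices = np.where(np.isin(labels, task_classes))[0]
        tasks.append({"classes": task_classes.tolist(), "indices": indices})
    return tasks
```

After training on task t, the model is typically evaluated on all classes seen in tasks 1 through t.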
GDumb: A simple approach that questions our progress in continual learning
We discuss a general formulation for the Continual Learning (CL) problem for classification—
a learning task where a stream provides samples to a learner and the goal of the learner …
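The stream-plus-memory formulation above pairs with GDumb's deliberately simple recipe: greedily keep a class-balanced memory of the stream and, at test time, train a model from scratch on that memory alone. A minimal sketch of such a greedy balanced buffer (my own illustration, not the authors' code; the eviction rule and interface are assumptions):

```python
import random
from collections import defaultdict

class GreedyBalancedBuffer:
    """Fixed-size memory that greedily keeps classes balanced (GDumb-style sketch)."""

    def __init__(self, mem_size):
        self.mem_size = mem_size
        self.per_class = defaultdict(list)  # class label -> stored samples

    def __len__(self):
        return sum(len(v) for v in self.per_class.values())

    def add(self, x, y):
        if len(self) < self.mem_size:
            self.per_class[y].append(x)
            return
        # Memory full: accept the sample only if its class is under-represented,
        # evicting a random sample from the currently largest class.
        largest = max(self.per_class, key=lambda c: len(self.per_class[c]))
        if len(self.per_class[y]) < len(self.per_class[largest]):
            victim = random.randrange(len(self.per_class[largest]))
            self.per_class[largest].pop(victim)
            self.per_class[y].append(x)

    def dataset(self):
        """Flatten the memory; a model is then trained from scratch on this set."""
        return [(x, y) for y, xs in self.per_class.items() for x in xs]
```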
Dark experience for general continual learning: a strong, simple baseline
Continual Learning has inspired a plethora of approaches and evaluation settings; however,
the majority of them overlooks the properties of a practical scenario, where the data stream …
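The baseline proposed here replays past inputs together with the logits the network produced when it first saw them, adding a logit-matching (distillation-style) term to the loss. A hedged PyTorch sketch of one training step in that spirit, where the buffer's `sample`/`store` interface, its sampling strategy, and the `alpha` weight are my assumptions:

```python
import torch
import torch.nn.functional as F

def dark_replay_step(model, optimizer, x, y, buffer, alpha=0.5):
    """One training step of dark-experience-style replay (illustrative sketch)."""
    optimizer.zero_grad()
    logits = model(x)
    loss = F.cross_entropy(logits, y)

    if len(buffer) > 0:
        buf_x, buf_logits = buffer.sample(x.size(0))
        # Match current outputs to the logits recorded when these samples were
        # first seen, regularizing the network against forgetting.
        loss = loss + alpha * F.mse_loss(model(buf_x), buf_logits)

    loss.backward()
    optimizer.step()
    # Store the current batch with its (detached) logits for future replay.
    buffer.store(x.detach(), logits.detach())
    return loss.item()
```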
AbdomenCT-1K: Is abdominal organ segmentation a solved problem?
With the unprecedented developments in deep learning, automatic segmentation of main
abdominal organs seems to be a solved problem as state-of-the-art (SOTA) methods have …
Remember the past: Distilling datasets into addressable memories for neural networks
Z Deng, O Russakovsky - Advances in Neural Information …, 2022 - proceedings.neurips.cc
We propose an algorithm that compresses the critical information of a large dataset into
compact addressable memories. These memories can then be recalled to quickly re-train a …
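A heavily simplified sketch of the recall-and-retrain interface the abstract describes: a small bank of learnable per-class synthetic examples that a fresh model can be fit on quickly. The distillation step that actually optimizes the memories against the full dataset, and the addressing mechanism, are omitted here; the shapes and names are assumptions.

```python
import torch
import torch.nn as nn

class SyntheticMemory(nn.Module):
    """Tiny bank of learnable per-class 'memory' examples (much-simplified sketch)."""

    def __init__(self, num_classes, per_class, feature_dim):
        super().__init__()
        self.examples = nn.Parameter(torch.randn(num_classes, per_class, feature_dim))
        self.labels = torch.arange(num_classes).repeat_interleave(per_class)

    def recall(self):
        """Return the flattened synthetic set used to quickly re-train a model."""
        return self.examples.reshape(-1, self.examples.size(-1)).detach(), self.labels


def retrain_from_memory(memory, num_classes, epochs=20, lr=0.1):
    x, y = memory.recall()
    model = nn.Linear(x.size(1), num_classes)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    return model
```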
FeTrIL: Feature translation for exemplar-free class-incremental learning
Exemplar-free class-incremental learning is very challenging due to the negative effect of
catastrophic forgetting. A balance between stability and plasticity of the incremental process …
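As I understand the feature-translation idea, features of a current class are shifted by the difference between a stored old-class centroid and the current-class centroid, producing pseudo-features for the old class without keeping any exemplars; a classifier is then trained on real new features plus these pseudo-features over a frozen extractor. A toy NumPy sketch (the 2-D features and centroids are made up):

```python
import numpy as np

def translate_features(new_feats, new_centroid, old_centroid):
    """Shift current-class features so they stand in for an old class (sketch)."""
    return new_feats - new_centroid + old_centroid

# Toy usage with made-up 2-D features.
rng = np.random.default_rng(0)
new_feats = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(100, 2))
new_centroid = new_feats.mean(axis=0)
old_centroid = np.array([-1.0, 0.5])   # stored when the old class was learned
pseudo_old = translate_features(new_feats, new_centroid, old_centroid)
# `pseudo_old`, labelled with the old class, would join `new_feats` to train a
# linear classifier on top of the frozen feature extractor.
```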
Adaptive aggregation networks for class-incremental learning
Class-Incremental Learning (CIL) aims to learn a classification model with the
number of classes increasing phase-by-phase. An inherent problem in CIL is the stability …
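Here the stability-plasticity problem is addressed by running two parallel branches, one kept stable to preserve earlier phases and one left plastic for the current phase, with their outputs aggregated by learned weights. An illustrative PyTorch sketch of one such aggregated block (the block definitions and scalar mixing weights are my assumptions, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class AggregatedBlock(nn.Module):
    """Two parallel blocks, one stable and one plastic, mixed by learned weights (sketch)."""

    def __init__(self, stable_block, plastic_block):
        super().__init__()
        self.stable = stable_block
        self.plastic = plastic_block
        # Freeze the stable branch so it preserves what earlier phases learned.
        for p in self.stable.parameters():
            p.requires_grad_(False)
        # Scalar mixing weights, learned alongside the plastic branch each phase.
        self.alpha = nn.Parameter(torch.tensor([0.5, 0.5]))

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        return w[0] * self.stable(x) + w[1] * self.plastic(x)

# Toy usage: mix a frozen and a trainable linear layer.
block = AggregatedBlock(nn.Linear(8, 8), nn.Linear(8, 8))
out = block(torch.randn(4, 8))
```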
Supervised contrastive replay: Revisiting the nearest class mean classifier in online class-incremental continual learning
Online class-incremental continual learning (CL) studies the problem of learning new
classes continually from an online non-stationary data stream, intending to adapt to new …
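The nearest-class-mean classifier revisited in this paper assigns a query to the class whose prototype, the mean embedding of its buffered exemplars, is closest. A short sketch, assuming L2-normalized embeddings from a contrastively trained encoder:

```python
import torch

def ncm_predict(query_embeddings, memory_embeddings, memory_labels):
    """Nearest-class-mean prediction over L2-normalized embeddings (sketch)."""
    memory_embeddings = torch.nn.functional.normalize(memory_embeddings, dim=1)
    query_embeddings = torch.nn.functional.normalize(query_embeddings, dim=1)

    classes = memory_labels.unique()
    prototypes = torch.stack(
        [memory_embeddings[memory_labels == c].mean(dim=0) for c in classes]
    )
    # Distance of every query to every class prototype; pick the closest class.
    dists = torch.cdist(query_embeddings, prototypes)
    return classes[dists.argmin(dim=1)]
```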