ORDisCo: Effective and efficient usage of incremental unlabeled data for semi-supervised continual learning

L Wang, K Yang, C Li, L Hong… - Proceedings of the …, 2021 - openaccess.thecvf.com
Continual learning usually assumes the incoming data are fully labeled, which might not
hold in real applications. In this work, we consider semi-supervised continual learning …

Representational continuity for unsupervised continual learning

D Madaan, J Yoon, Y Li, Y Liu, SJ Hwang - arXiv preprint arXiv …, 2021 - arxiv.org
Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously
acquired knowledge. However, recent CL advances are restricted to supervised continual …

Memory-efficient semi-supervised continual learning: The world is its own replay buffer

J Smith, J Balloch, YC Hsu, Z Kira - 2021 International Joint …, 2021 - ieeexplore.ieee.org
Rehearsal is a critical component for class-incremental continual learning, yet it requires a
substantial memory budget. Our work investigates whether we can significantly reduce this …

Continual semi-supervised learning through contrastive interpolation consistency

M Boschini, P Buzzega, L Bonicelli, A Porrello… - Pattern Recognition …, 2022 - Elsevier
Continual Learning (CL) investigates how to train Deep Networks on a stream of tasks
without incurring forgetting. CL settings proposed in the literature assume that every incoming …

Plasticity-optimized complementary networks for unsupervised continual learning

A Gomez-Villa, B Twardowski… - Proceedings of the …, 2024 - openaccess.thecvf.com
Continuous unsupervised representation learning (CURL) research has greatly benefited
from improvements in self-supervised learning (SSL) techniques. As a result, existing CURL …

Continual learning based on OOD detection and task masking

G Kim, S Esmaeilpour, C Xiao… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Existing continual learning techniques focus on either the task incremental learning (TIL) or
the class incremental learning (CIL) problem, but not both. CIL and TIL differ mainly in that the …

Real-time evaluation in online continual learning: A new hope

Y Ghunaim, A Bibi, K Alhamoud… - Proceedings of the …, 2023 - openaccess.thecvf.com
Current evaluations of Continual Learning (CL) methods typically assume that there
is no constraint on training time and computation. This is an unrealistic assumption for any …

Online continual learning on a contaminated data stream with blurry task boundaries

J Bang, H Koh, S Park, H Song… - Proceedings of the …, 2022 - openaccess.thecvf.com
Learning under a continuously changing data distribution with incorrect labels is a practically
important yet challenging real-world problem. A large body of continual learning (CL) methods …

Learning to prompt for continual learning

Z Wang, Z Zhang, CY Lee, H Zhang… - Proceedings of the …, 2022 - openaccess.thecvf.com
The mainstream paradigm behind continual learning has been to adapt the model
parameters to non-stationary data distributions, where catastrophic forgetting is the central …

SLCA: Slow learner with classifier alignment for continual learning on a pre-trained model

G Zhang, L Wang, G Kang… - Proceedings of the …, 2023 - openaccess.thecvf.com
The goal of continual learning is to improve the performance of recognition models in
learning sequentially arriving data. Although most existing works are established on the …