CLAD: A realistic continual learning benchmark for autonomous driving

E Verwimp, K Yang, S Parisot, L Hong, S McDonagh… - Neural Networks, 2023 - Elsevier
In this paper we describe the design and the ideas motivating a new Continual Learning
benchmark for Autonomous Driving (CLAD), that focuses on the problems of object …

Knowledge distillation in vision transformers: A critical review

G Habib, TJ Saleem, B Lall - arXiv preprint arXiv:2302.02108, 2023 - arxiv.org
In Natural Language Processing (NLP), Transformers have already revolutionized the field
by utilizing an attention-based encoder-decoder model. Recently, some pioneering works …

Continually learning self-supervised representations with projected functional regularization

A Gomez-Villa, B Twardowski, L Yu… - Proceedings of the …, 2022 - openaccess.thecvf.com
Recent self-supervised learning methods are able to learn high-quality image
representations and are closing the gap with supervised approaches. However, these …

BiRT: Bio-inspired replay in vision transformers for continual learning

K Jeeveswaran, P Bhat, B Zonooz, E Arani - arXiv preprint arXiv …, 2023 - arxiv.org
The ability of deep neural networks to continually learn and adapt to a sequence of tasks
has remained challenging due to catastrophic forgetting of previously learned tasks …

Continual named entity recognition without catastrophic forgetting

D Zhang, W Cong, J Dong, Y Yu, X Chen… - arXiv preprint arXiv …, 2023 - arxiv.org
Continual Named Entity Recognition (CNER) is a burgeoning area, which involves updating
an existing model by incorporating new entity types sequentially. Nevertheless, continual …

Continual multimodal knowledge graph construction

X Chen, J Zhang, X Wang, N Zhang, T Wu… - arXiv preprint arXiv …, 2023 - arxiv.org
Current Multimodal Knowledge Graph Construction (MKGC) models struggle with the real-
world dynamism of continuously emerging entities and relations, often succumbing to …

Digital twin robotic system with continuous learning for grasp detection in variable scenes

L Ren, J Dong, D Huang, J Lü - IEEE Transactions on Industrial …, 2023 - ieeexplore.ieee.org
With the emergence of digitalization technology, digital twin bridges the gap between
physical and virtual worlds in industrial production with synchronization, reliability, and …

Simpler is better: off-the-shelf continual learning through pretrained backbones

F Pelosin - arXiv preprint arXiv:2205.01586, 2022 - arxiv.org
In this short paper, we propose a baseline (off-the-shelf) for Continual Learning of Computer
Vision problems, by leveraging the power of pretrained models. By doing so, we devise a …

Task-attentive transformer architecture for continual learning of vision-and-language tasks using knowledge distillation

Y Cai, J Thomason, M Rostami - arXiv preprint arXiv:2303.14423, 2023 - arxiv.org
The size and the computational load of fine-tuning large-scale pre-trained neural networks
are becoming two major obstacles in adopting machine learning in many applications …

Lifelong language learning with adaptive uncertainty regularization

L Zhang, S Wang, F Yuan, B Geng, M Yang - Information Sciences, 2023 - Elsevier
It has been a long-standing goal in natural language processing (NLP) to learn a general
linguistic intelligence model that can perform well on many different NLP tasks continually …