Revisiting class-incremental learning with pre-trained models: Generalizability and adaptivity are all you need

DW Zhou, ZW Cai, HJ Ye, DC Zhan, Z Liu - arXiv preprint arXiv …, 2023 - arxiv.org
Class-incremental learning (CIL) aims to adapt to emerging new classes without forgetting
old ones. Traditional CIL models are trained from scratch to continually acquire knowledge …
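
For orientation, the sketch below illustrates the class-incremental protocol this entry describes: classes arrive task by task, the model is updated on each new task, and accuracy is measured over all classes seen so far. The synthetic Gaussian data and the plain fine-tuning baseline are assumptions for illustration only, not the paper's pre-trained-model approach.

    # Minimal class-incremental learning (CIL) protocol sketch: sequential
    # tasks of new classes, naive fine-tuning, evaluation over all seen classes.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    num_tasks, classes_per_task, feat_dim = 3, 2, 16
    total_classes = num_tasks * classes_per_task

    # Synthetic Gaussian blobs, one cluster per class (stand-in for real data).
    def make_split(classes, n=100):
        x = torch.cat([torch.randn(n, feat_dim) + 3.0 * c for c in classes])
        y = torch.cat([torch.full((n,), c) for c in classes])
        return x, y

    model = nn.Linear(feat_dim, total_classes)   # single head over all classes
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    seen = []

    for t in range(num_tasks):
        task_classes = list(range(t * classes_per_task, (t + 1) * classes_per_task))
        seen += task_classes
        x_tr, y_tr = make_split(task_classes)

        # Naive fine-tuning on the current task only (prone to forgetting).
        for _ in range(200):
            opt.zero_grad()
            nn.functional.cross_entropy(model(x_tr), y_tr).backward()
            opt.step()

        # Evaluate on every class seen so far, the standard CIL metric.
        x_te, y_te = make_split(seen, n=50)
        acc = (model(x_te).argmax(1) == y_te).float().mean().item()
        print(f"after task {t}: accuracy over {len(seen)} seen classes = {acc:.2f}")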

Fantastic gains and where to find them: On the existence and prospect of general knowledge transfer between any pretrained model

K Roth, L Thede, AS Koepke, O Vinyals… - arXiv preprint arXiv …, 2023 - arxiv.org
Training deep networks requires various design decisions regarding, for instance, their
architecture, data augmentation, or optimization. In this work, we find these training …

Dual-curriculum teacher for domain-inconsistent object detection in autonomous driving

L Yu, Y Zhang, L Hong, F Chen, Z Li - arXiv preprint arXiv:2210.08748, 2022 - arxiv.org
Object detection for autonomous vehicles has received increasing attention in recent years,
where labeled data are often expensive while unlabeled data can be collected readily …

Dual branch network towards accurate printed mathematical expression recognition

Y Wang, Z Weng, Z Zhou, S Ji, Z Ye, Y Zhu - International Conference on …, 2022 - Springer
In recent years, Printed Mathematical Expression Recognition (PMER) has progressed
rapidly. However, due to the insufficient context information captured by Convolutional …

Class-incremental learning for baseband modulation classification: A comparison

C Montes, T Morehouse, R Zhou - 2024 International Wireless …, 2024 - ieeexplore.ieee.org
This paper presents a comprehensive study on the capabilities of class-incremental learning
in the context of baseband modulation classification. Despite the growing interest in …

ATMKD: adaptive temperature guided multi-teacher knowledge distillation

Y Lin, S Yin, Y Ding, X Liang - Multimedia Systems, 2024 - Springer
Knowledge distillation is a technique that aims to distill the knowledge from a large
well-trained teacher model to a lightweight student model. In recent years, multi-teacher …
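
As a reference point for this entry and the next, the sketch below shows a plain multi-teacher distillation loss with temperature-softened targets: the teachers' distributions are averaged and matched by the student via KL divergence, alongside the usual cross-entropy term. The fixed temperature, uniform teacher averaging, and the helper name multi_teacher_kd_loss are assumptions for illustration; ATMKD's adaptive temperature scheme is not reproduced here.

    # Vanilla multi-teacher knowledge distillation loss sketch.
    import torch
    import torch.nn.functional as F

    def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                              temperature=4.0, alpha=0.5):
        # Soften each teacher's logits and average their distributions.
        teacher_probs = torch.stack(
            [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
        ).mean(dim=0)
        # KL divergence between the softened student distribution and the
        # averaged teacher distribution (scaled by T^2, as is standard).
        kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                      teacher_probs, reduction="batchmean") * temperature ** 2
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1.0 - alpha) * ce

    # Toy usage: batch of 8 examples, 10 classes, two teachers.
    student = torch.randn(8, 10, requires_grad=True)
    teachers = [torch.randn(8, 10), torch.randn(8, 10)]
    labels = torch.randint(0, 10, (8,))
    loss = multi_teacher_kd_loss(student, teachers, labels)
    loss.backward()
    print(loss.item())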

Correlation Guided Multi-teacher Knowledge Distillation

L Shi, N Jiang, J Tang, X Huang - International Conference on Neural …, 2023 - Springer
Knowledge distillation is a model compression technique that transfers knowledge
from a redundant and strong network (teacher) to a lightweight network (student). Due to the …

Mutually Promoted Hierarchical Learning for Incremental Implicitly-Refined Classification

G Zhao, Y Hou, K Mu - 2023 International Joint Conference on …, 2023 - ieeexplore.ieee.org
Class-incremental learning aims to learn a classification model from incrementally
arriving training data. Existing methods tend to use a single-headed layout due to the lack of …