Representation compensation networks for continual semantic segmentation

CB Zhang, JW Xiao, X Liu, YC Chen… - Proceedings of the …, 2022 - openaccess.thecvf.com
In this work, we study the continual semantic segmentation problem, where deep neural
networks are required to incorporate new classes continually without catastrophic forgetting …
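The snippet states the goal but not the mechanism. A common anti-forgetting baseline in continual segmentation (shown here as a generic sketch, not necessarily this paper's representation-compensation design) distills the frozen previous model's predictions on old classes while supervising the new ones; `model`, `old_model`, and the ignore-index are illustrative assumptions:

import torch
import torch.nn.functional as F

def continual_step(model, old_model, images, labels, num_old_classes, alpha=1.0):
    # Supervise all classes (old + new) with the current-step labels.
    logits = model(images)                     # (B, C_old + C_new, H, W)
    ce = F.cross_entropy(logits, labels, ignore_index=255)
    # Keep old-class predictions close to the frozen previous-step model.
    with torch.no_grad():
        old_logits = old_model(images)         # (B, C_old, H, W)
    kd = F.kl_div(
        F.log_softmax(logits[:, :num_old_classes], dim=1),
        F.softmax(old_logits, dim=1),
        reduction="batchmean",
    )
    return ce + alpha * kd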

[Retracted] A Novel Approach to Classifying Breast Cancer Histopathology Biopsy Images Using Bilateral Knowledge Distillation and Label Smoothing Regularization

S Chaudhury, N Shelke, K Sau… - … Methods in Medicine, 2021 - Wiley Online Library
Breast cancer is the most common invasive cancer in women and the second leading cause of
cancer death in females; it can be classified as benign or malignant. Research and …
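Of the two components named in the title, label smoothing regularization is a standard technique; a minimal sketch follows (the bilateral knowledge distillation part is paper-specific and not reproduced here):

import torch
import torch.nn.functional as F

def label_smoothing_ce(logits, targets, num_classes, eps=0.1):
    # Smoothed targets: (1 - eps) on the true class, eps spread uniformly
    # over the remaining num_classes - 1 classes.
    log_probs = F.log_softmax(logits, dim=-1)
    smooth = torch.full_like(log_probs, eps / (num_classes - 1))
    smooth.scatter_(-1, targets.unsqueeze(-1), 1.0 - eps)
    return -(smooth * log_probs).sum(dim=-1).mean()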

Improving neural cross-lingual abstractive summarization via employing optimal transport distance for knowledge distillation

TT Nguyen, AT Luu - Proceedings of the AAAI Conference on Artificial …, 2022 - ojs.aaai.org
Current state-of-the-art cross-lingual summarization models employ a multi-task learning
paradigm, which works on a shared vocabulary module and relies on the self-attention …
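The snippet names the key ingredient: an optimal transport distance between teacher and student. Below is a minimal entropy-regularized Sinkhorn sketch; it is a generic OT distance, not the authors' exact objective, and the cost matrix (e.g., pairwise distances between student and teacher hidden states) is an assumption:

import torch

def sinkhorn_distance(cost, a, b, reg=0.05, n_iters=50):
    # cost: (n, m) ground-cost matrix; a: (n,) and b: (m,) probability vectors.
    K = torch.exp(-cost / reg)                   # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):                     # Sinkhorn-Knopp scaling
        v = b / (K.t() @ u)
        u = a / (K @ v)
    plan = u.unsqueeze(1) * K * v.unsqueeze(0)   # approximate transport plan
    return (plan * cost).sum()                   # regularized OT cost

For small reg, exp(-cost / reg) can underflow; a log-domain Sinkhorn is the usual remedy in practice.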

[PDF][PDF] A Survey of Research on Knowledge Distillation in Deep Learning

RR Shao, YA Liu, W Zhang, J Wang - Chinese Journal of Computers, 2022 - 159.226.43.17
Abstract: With the rapid development of artificial intelligence today, deep neural networks have been
widely applied across research fields and achieved great success, but they also face many challenges. First, in order to solve complex problems and improve model training performance …
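For reference, the canonical objective such surveys cover (Hinton et al., 2015) blends hard-label cross-entropy with a temperature-softened KL term toward the teacher; a minimal sketch:

import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    # Temperature-softened KL to the teacher; the T*T factor rescales
    # gradients so the soft term stays comparable to the hard term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard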

PURSUhInT: In search of informative hint points based on layer clustering for knowledge distillation

RK Keser, A Ayanzadeh, OA Aghdam… - Expert Systems with …, 2023 - Elsevier
One of the most efficient methods for model compression is hint distillation, where the
student model is injected with information (hints) from several different layers of the teacher …
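The underlying hint loss (FitNets-style) fits in a few lines; the paper's contribution is selecting which teacher layers serve as hints via layer clustering, which this sketch does not attempt:

import torch
import torch.nn as nn
import torch.nn.functional as F

class HintLoss(nn.Module):
    # A 1x1-conv regressor maps the student's intermediate feature map onto
    # the teacher's hint layer; the two are matched with an L2 loss.
    # Assumes the feature maps share spatial size.
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        self.regressor = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat, teacher_feat):
        return F.mse_loss(self.regressor(student_feat), teacher_feat.detach())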

[PDF][PDF] Continual Semantic Segmentation Based on Representation Compensation

CB Zhang, JW Xiao, X Liu, YC Chen, MM Cheng - mftp.mmcheng.net
Abstract: In this work, we study the continual semantic segmentation problem, in which deep neural
networks must continually incorporate new classes without suffering catastrophic forgetting. We propose a structured re-parameterization mechanism …
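The abstract's "structured re-parameterization mechanism" builds on a general identity: parallel linear branches can be folded into a single convolution at inference time. A minimal sketch of that folding step (the general trick, not the paper's specific representation-compensation design), assuming both convs share shape, stride, dilation, and groups:

import torch
import torch.nn as nn

def merge_parallel_convs(conv_a: nn.Conv2d, conv_b: nn.Conv2d) -> nn.Conv2d:
    # Two parallel convolutions whose outputs are summed are equivalent to
    # one convolution with summed weights and biases (conv is linear).
    merged = nn.Conv2d(conv_a.in_channels, conv_a.out_channels,
                       conv_a.kernel_size, padding=conv_a.padding, bias=True)
    with torch.no_grad():
        merged.weight.copy_(conv_a.weight + conv_b.weight)
        bias = torch.zeros(conv_a.out_channels)
        for conv in (conv_a, conv_b):
            if conv.bias is not None:
                bias += conv.bias
        merged.bias.copy_(bias)
    return merged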

[PDF][PDF] All at once network quantization via collaborative knowledge transfer

X Sun, R Panda, CF Chen, N Wang, B Pan… - arXiv preprint, 2021 - academia.edu
Network quantization has rapidly become one of the most widely used methods to compress
and accelerate deep neural networks on edge devices. While existing approaches offer …

Improved Techniques for Quantizing Deep Networks with Adaptive Bit-Widths

X Sun, R Panda, CFR Chen, N Wang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Quantizing deep networks with adaptive bit-widths is a promising technique for efficient
inference across many devices and resource constraints. In contrast to static methods that …
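The core operation behind adaptive bit-widths can be illustrated with uniform min-max fake quantization, where the same float weights are quantized and dequantized at whatever precision the deployment target allows; this is a generic sketch, not the paper's training scheme:

import torch

def fake_quantize(x, bits):
    # Map x onto 2**bits uniform levels between its min and max, then map back.
    qmax = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo).clamp(min=1e-8) / qmax
    q = torch.round((x - lo) / scale).clamp(0, qmax)
    return q * scale + lo

# The same layer can then be emulated at several precisions, e.g.:
w = torch.randn(64, 64)
for bits in (8, 6, 4):
    w_q = fake_quantize(w, bits)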