RAIL-KD: RAndom intermediate layer mapping for knowledge distillation
Intermediate layer knowledge distillation (KD) can improve the standard KD technique
(which only targets the output of teacher and student models) especially over large pre …
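As a rough illustration of the random layer-mapping idea named in the title (the snippet is truncated, so this is not the paper's exact recipe), the sketch below randomly pairs teacher intermediate layers with student layers and penalizes the mismatch. The layer counts, the MSE objective, and the linear projection are assumptions.

```python
# Hypothetical sketch of random intermediate-layer mapping for distillation;
# layer counts, loss, and projection are illustrative assumptions.
import torch
import torch.nn.functional as F

def rail_kd_loss(teacher_hidden, student_hidden, proj):
    """teacher_hidden: list of [batch, seq, d_t] tensors, one per teacher layer.
    student_hidden: list of [batch, seq, d_s] tensors, one per student layer.
    proj: nn.Linear mapping the student width d_s to the teacher width d_t.
    Each step samples as many teacher layers as the student has, keeps them
    in increasing depth order, and matches them to student layers with MSE."""
    n_student = len(student_hidden)
    idx = sorted(torch.randperm(len(teacher_hidden))[:n_student].tolist())
    loss = 0.0
    for s_h, t_i in zip(student_hidden, idx):
        loss = loss + F.mse_loss(proj(s_h), teacher_hidden[t_i])
    return loss / n_student

# Toy usage with random tensors standing in for real hidden states.
torch.manual_seed(0)
teacher_hidden = [torch.randn(2, 8, 768) for _ in range(12)]  # 12 teacher layers
student_hidden = [torch.randn(2, 8, 384) for _ in range(4)]   # 4 student layers
proj = torch.nn.Linear(384, 768)
print(rail_kd_loss(teacher_hidden, student_hidden, proj).item())
```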
Boosting text augmentation via hybrid instance filtering framework
Text augmentation is an effective technique for addressing the problem of insufficient data in
natural language processing. However, existing text augmentation methods tend to focus on …
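The snippet cuts off before the filtering details, so the skeleton below only illustrates instance filtering for augmented text in general: combine several quality scores and keep examples above a threshold. The scoring functions and threshold are placeholders, not the paper's hybrid criteria.

```python
# Illustrative instance-filtering skeleton; scorers and threshold are
# assumptions, not the paper's actual hybrid framework.
def filter_augmented(examples, score_fns, threshold=0.5):
    """Keep an augmented example only if the average of several quality
    scores (each in [0, 1], e.g. label confidence, similarity to the
    source text) clears the threshold."""
    kept = []
    for ex in examples:
        score = sum(fn(ex) for fn in score_fns) / len(score_fns)
        if score >= threshold:
            kept.append(ex)
    return kept

# Toy usage: two dummy scorers standing in for model-based filters.
examples = ["the movie was greet", "the movie was great"]
score_fns = [lambda ex: 0.0 if "greet" in ex else 1.0,  # mock confidence filter
             lambda ex: 0.9]                            # mock similarity filter
print(filter_augmented(examples, score_fns))  # ['the movie was great']
```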
On-the-fly denoising for data augmentation in natural language understanding
Data Augmentation (DA) is frequently used to automatically provide additional training data
without extra human annotation. However, data augmentation may introduce noisy data that …
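As a hedged sketch of how denoising augmented data can work in general (the paper's actual mechanism is cut off above), the example below down-weights the loss on augmented examples that a reference model finds improbable; the weighting rule and the reference model are illustrative assumptions.

```python
# Minimal loss re-weighting sketch for noisy augmented data; the soft
# down-weighting rule is an assumption, not the paper's exact scheme.
import torch
import torch.nn.functional as F

def denoised_loss(logits, labels, ref_logits):
    """Per-example cross-entropy, down-weighted when a reference model
    assigns low probability to the (possibly noisy) augmented label."""
    ce = F.cross_entropy(logits, labels, reduction="none")
    with torch.no_grad():
        weight = F.softmax(ref_logits, dim=-1).gather(1, labels.unsqueeze(1)).squeeze(1)
    return (weight * ce).mean()

torch.manual_seed(0)
logits = torch.randn(4, 3)       # student predictions on an augmented batch
ref_logits = torch.randn(4, 3)   # reference model predictions
labels = torch.tensor([0, 2, 1, 0])
print(denoised_loss(logits, labels, ref_logits).item())
```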
Image and Text: Fighting the Same Battle? Super-resolution Learning for Imbalanced Text Classification
R Meunier, F Benamara, V Moriceau… - 2023 Conference on …, 2023 - hal.science
In this paper, we propose SRL4NLP, a new approach for data augmentation by drawing an
analogy between image and text processing: Super-resolution learning. This method is …
Intended Target Identification for Anomia Patients with Gradient-based Selective Augmentation
In this study, we investigate the potential of language models (LMs) in aiding patients
experiencing anomia, a condition that makes it difficult to recall the names of items. Identifying the intended …
Taming Prompt-Based Data Augmentation for Long-Tailed Extreme Multi-Label Text Classification
In extreme multi-label text classification (XMC), labels usually follow a long-tailed
distribution, where most labels only contain a small number of documents and limit the …
CILDA: Contrastive data augmentation using intermediate layer knowledge distillation
Knowledge distillation (KD) is an efficient framework for compressing large-scale pre-trained
language models. Recent years have seen a surge of research aiming to improve KD by …
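A hedged sketch of one way to combine contrastive learning with intermediate-layer distillation, in the spirit of the title; the InfoNCE objective, pooling, and temperature below are assumptions, not necessarily CILDA's design.

```python
# Speculative sketch: contrastive matching of student and teacher
# intermediate representations; all design choices are assumptions.
import torch
import torch.nn.functional as F

def contrastive_layer_loss(student_repr, teacher_repr, temperature=0.1):
    """InfoNCE-style loss: the student's representation of an (augmented)
    input should match the teacher's representation of the same input
    (positive) against other examples in the batch (negatives).
    Both inputs: [batch, dim] pooled hidden states from a chosen layer."""
    s = F.normalize(student_repr, dim=-1)
    t = F.normalize(teacher_repr, dim=-1)
    logits = s @ t.T / temperature      # [batch, batch] similarity matrix
    targets = torch.arange(s.size(0))   # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

torch.manual_seed(0)
student_repr = torch.randn(8, 256)  # pooled student layer output (toy)
teacher_repr = torch.randn(8, 256)  # pooled teacher layer output (toy)
print(contrastive_layer_loss(student_repr, teacher_repr).item())
```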
Learning Semantic Textual Similarity via Multi-Teacher Knowledge Distillation: A Multiple Data Augmentation method
Z Lu, Y Zhao, J Li, Y Tian - 2024 9th International Conference …, 2024 - ieeexplore.ieee.org
Data augmentation technologies, which can mitigate the expense and time cost of
generating high-quality labeled data in semantic textual similarity (STS) tasks, use …
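A minimal sketch of the multi-teacher distillation idea named in the title: the student matches the average of several teachers' softened predictions. The uniform teacher averaging and the temperature value are assumptions, not the paper's exact weighting.

```python
# Minimal multi-teacher KD sketch; uniform averaging and temperature
# are illustrative assumptions.
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, T=2.0):
    """KL divergence between the student's softened distribution and the
    average of several teachers' softened distributions."""
    teacher_probs = torch.stack(
        [F.softmax(t / T, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_student, teacher_probs, reduction="batchmean") * T * T

torch.manual_seed(0)
student_logits = torch.randn(4, 5)
teachers = [torch.randn(4, 5) for _ in range(3)]  # three toy teachers
print(multi_teacher_kd_loss(student_logits, teachers).item())
```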
A Simple Structure for Building a Robust Model
X Tan, J Gao, R Li - International Conference on Intelligence Science, 2022 - Springer
As deep learning applications, especially computer vision programs, are increasingly
deployed in our lives, we must think more urgently about the security of these …
Dehusked Coconut Vision-Based Counting on a Manufacturing Plant Utilizing the YOLOv8, ByteTrack, and Roboflow Algorithms
Many manufacturing industries in third-world countries still need process-system
improvements to increase productivity. Some manufacturing plants are encountering …
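For the detection-plus-tracking pipeline named in the title, a counting loop along the lines below is plausible: YOLOv8 detects objects per frame, ByteTrack assigns persistent track IDs, and unique IDs are counted. The weights file, video source, and counting rule are assumptions about the deployed setup, not details from the paper.

```python
# Sketch of tracking-based counting with Ultralytics YOLOv8 and its built-in
# ByteTrack tracker; model weights, video path, and the "count unique track
# IDs" rule are assumptions.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # generic pretrained weights; a custom coconut model would be swapped in
seen_ids = set()

# stream=True yields one Results object per video frame.
for result in model.track("conveyor.mp4", tracker="bytetrack.yaml", stream=True):
    if result.boxes.id is not None:  # ByteTrack assigns a persistent ID per tracked object
        seen_ids.update(int(i) for i in result.boxes.id.tolist())

print(f"Objects counted: {len(seen_ids)}")
```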