A review of uncertainty quantification in deep learning: Techniques, applications and challenges

M Abdar, F Pourpanah, S Hussain, D Rezazadegan… - Information fusion, 2021 - Elsevier
Uncertainty quantification (UQ) methods play a pivotal role in reducing the impact of
uncertainties during both optimization and decision making processes. They have been …

In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning

MN Rizve, K Duarte, YS Rawat, M Shah - arXiv preprint arXiv:2101.06329, 2021 - arxiv.org
The recent research in semi-supervised learning (SSL) is mostly dominated by consistency
regularization based methods which achieve strong performance. However, they heavily …

Want to reduce labeling cost? GPT-3 can help

S Wang, Y Liu, Y Xu, C Zhu, M Zeng - arXiv preprint arXiv:2108.13487, 2021 - arxiv.org
Data annotation is a time-consuming and labor-intensive process for many NLP tasks.
Although there exist various methods to produce pseudo data labels, they are often task …

Self-training: A survey

MR Amini, V Feofanov, L Pauletto, L Hadjadj… - Neurocomputing, 2025 - Elsevier
Self-training methods have gained significant attention in recent years due to their
effectiveness in leveraging small labeled datasets and large unlabeled observations for …

Uncertainty in natural language processing: Sources, quantification, and applications

M Hu, Z Zhang, S Zhao, M Huang, B Wu - arXiv preprint arXiv:2306.04459, 2023 - arxiv.org
As a main field of artificial intelligence, natural language processing (NLP) has achieved
remarkable success via deep neural networks. Plenty of NLP tasks have been addressed in …

Cycle self-training for domain adaptation

H Liu, J Wang, M Long - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Mainstream approaches for unsupervised domain adaptation (UDA) learn domain-invariant
representations to narrow the domain shift, which are empirically effective but theoretically …

Exploiting completeness and uncertainty of pseudo labels for weakly supervised video anomaly detection

C Zhang, G Li, Y Qi, S Wang, L Qing… - Proceedings of the …, 2023 - openaccess.thecvf.com
Weakly supervised video anomaly detection aims to identify abnormal events in videos
using only video-level labels. Recently, two-stage self-training methods have achieved …

Meta-based self-training and re-weighting for aspect-based sentiment analysis

K He, R Mao, T Gong, C Li… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Aspect-based sentiment analysis (ABSA) aims to identify fine-grained aspects, opinions,
and sentiment polarities. Recent ABSA research focuses on utilizing multi-task learning …

Few-shot named entity recognition: An empirical baseline study

J Huang, C Li, K Subudhi, D Jose… - Proceedings of the …, 2021 - aclanthology.org
This paper presents an empirical study to efficiently build named entity recognition (NER)
systems when a small amount of in-domain labeled data is available. Based upon recent …

Fine-tuning pre-trained language model with weak supervision: A contrastive-regularized self-training approach

Y Yu, S Zuo, H Jiang, W Ren, T Zhao… - arXiv preprint arXiv …, 2020 - arxiv.org
Fine-tuned pre-trained language models (LMs) have achieved enormous success in many
natural language processing (NLP) tasks, but they still require excessive labeled data in the …