A survey on deep semi-supervised learning

X Yang, Z Song, I King, Z Xu - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Deep semi-supervised learning is a fast-growing field with a range of practical applications.
This paper provides a comprehensive survey on both fundamentals and recent advances in …

Meta learning for natural language processing: A survey

H Lee, SW Li, NT Vu - arXiv preprint arXiv:2205.01500, 2022 - arxiv.org
Deep learning has been the mainstream technique in the natural language processing (NLP)
area. However, these techniques require large amounts of labeled data and are less generalizable across …

AugGPT: Leveraging ChatGPT for text data augmentation

H Dai, Z Liu, W Liao, X Huang, Y Cao, Z Wu… - arXiv preprint arXiv …, 2023 - arxiv.org
Text data augmentation is an effective strategy for overcoming the challenge of limited
sample sizes in many natural language processing (NLP) tasks. This challenge is especially …

Want to reduce labeling cost? GPT-3 can help

S Wang, Y Liu, Y Xu, C Zhu, M Zeng - arXiv preprint arXiv:2108.13487, 2021 - arxiv.org
Data annotation is a time-consuming and labor-intensive process for many NLP tasks.
Although there exist various methods to produce pseudo data labels, they are often task …

Meta-learning as a promising approach for few-shot cross-domain fault diagnosis: Algorithms, applications, and prospects

Y Feng, J Chen, J Xie, T Zhang, H Lv, T Pan - Knowledge-Based Systems, 2022 - Elsevier
Advances in intelligent fault diagnosis in recent years show that deep learning has a
strong capability for automatic feature extraction and accurate identification of fault signals …

Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections

R Zhong, K Lee, Z Zhang, D Klein - arXiv preprint arXiv:2104.04670, 2021 - arxiv.org
Large pre-trained language models (LMs) such as GPT-3 have acquired a surprising ability
to perform zero-shot learning. For example, to classify sentiment without any training …

FLEX: Unifying evaluation for few-shot NLP

J Bragg, A Cohan, K Lo… - Advances in Neural …, 2021 - proceedings.neurips.cc
Few-shot NLP research is highly active, yet conducted in disjoint research threads with
evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful …

AdaptSum: Towards low-resource domain adaptation for abstractive summarization

T Yu, Z Liu, P Fung - arXiv preprint arXiv:2103.11332, 2021 - arxiv.org
State-of-the-art abstractive summarization models generally rely on extensive labeled data,
which lowers their generalization ability on domains where such data are not available. In …

Multimodality in meta-learning: A comprehensive survey

Y Ma, S Zhao, W Wang, Y Li, I King - Knowledge-Based Systems, 2022 - Elsevier
Meta-learning has gained wide popularity as a training framework that is more data-efficient
than traditional machine learning methods. However, its generalization ability in complex …

Grounded language learning fast and slow

F Hill, O Tieleman, T Von Glehn, N Wong… - arXiv preprint arXiv …, 2020 - arxiv.org
Recent work has shown that large text-based neural language models, trained with
conventional supervised learning objectives, acquire a surprising propensity for few- and …