A survey on deep semi-supervised learning
Deep semi-supervised learning is a fast-growing field with a range of practical applications.
This paper provides a comprehensive survey on both fundamentals and recent advances in …
Meta learning for natural language processing: A survey
Deep learning has been the mainstream technique in the natural language processing (NLP)
area. However, these techniques require large amounts of labeled data and are less generalizable across …
AugGPT: Leveraging ChatGPT for text data augmentation
Text data augmentation is an effective strategy for overcoming the challenge of limited
sample sizes in many natural language processing (NLP) tasks. This challenge is especially …
Want to reduce labeling cost? GPT-3 can help
Data annotation is a time-consuming and labor-intensive process for many NLP tasks.
Although there exist various methods to produce pseudo data labels, they are often task …
Meta-learning as a promising approach for few-shot cross-domain fault diagnosis: Algorithms, applications, and prospects
Y Feng, J Chen, J Xie, T Zhang, H Lv, T Pan - Knowledge-Based Systems, 2022 - Elsevier
Recent advances in intelligent fault diagnosis show that deep learning has
a strong capability for automatic feature extraction and accurate identification of fault signals …
Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections
Large pre-trained language models (LMs) such as GPT-3 have acquired a surprising ability
to perform zero-shot learning. For example, to classify sentiment without any training …
FLEX: Unifying evaluation for few-shot NLP
Few-shot NLP research is highly active, yet conducted in disjoint research threads with
evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful …
AdaptSum: Towards low-resource domain adaptation for abstractive summarization
State-of-the-art abstractive summarization models generally rely on extensive labeled data,
which lowers their generalization ability on domains where such data are not available. In …
Multimodality in meta-learning: A comprehensive survey
Meta-learning has gained wide popularity as a training framework that is more data-efficient
than traditional machine learning methods. However, its generalization ability in complex …
Grounded language learning fast and slow
Recent work has shown that large text-based neural language models, trained with
conventional supervised learning objectives, acquire a surprising propensity for few- and …