Pre-trained language models and their applications
Pre-trained language models have achieved striking success in natural language
processing (NLP), leading to a paradigm shift from supervised learning to pre-training …
A survey on data augmentation for text classification
Data augmentation, the artificial creation of training data for machine learning by
transformations, is a widely studied research field across machine learning disciplines …
SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models
Generative Large Language Models (LLMs) such as GPT-3 are capable of generating highly
fluent responses to a wide variety of user prompts. However, LLMs are known to hallucinate …
Evidence of a predictive coding hierarchy in the human brain listening to speech
Considerable progress has recently been made in natural language processing: deep
learning algorithms are increasingly able to generate, summarize, translate and classify …
Explainability for large language models: A survey
Large language models (LLMs) have demonstrated impressive capabilities in natural
language processing. However, their internal mechanisms are still unclear and this lack of …
An empirical study of training end-to-end vision-and-language transformers
Abstract Vision-and-language (VL) pre-training has proven to be highly effective on various
VL downstream tasks. While recent work has shown that fully transformer-based VL models …
A survey on text classification: From traditional to deep learning
Text classification is the most fundamental and essential task in natural language
processing. The last decade has seen a surge of research in this area due to the …
PTR: Prompt tuning with rules for text classification
Recently, prompt tuning has been widely applied to stimulate the rich knowledge in pre-
trained language models (PLMs) to serve NLP tasks. Although prompt tuning has achieved …
CREPE: Can vision-language foundation models reason compositionally?
A fundamental characteristic common to both human vision and natural language is their
compositional nature. Yet, despite the performance gains contributed by large vision and …
Pre-trained models: Past, present and future
Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved
great success and become a milestone in the field of artificial intelligence (AI). Owing to …