Data augmentation approaches in natural language processing: A survey
As an effective strategy, data augmentation (DA) alleviates data scarcity scenarios where
deep learning techniques may fail. It is widely applied in computer vision and was then introduced to …
Towards trustworthy LLMs: a review on debiasing and dehallucinating in large language models
Z Lin, S Guan, W Zhang, H Zhang, Y Li… - Artificial Intelligence …, 2024 - Springer
Recently, large language models (LLMs) have attracted considerable attention due to their
remarkable capabilities. However, LLMs' generation of biased or hallucinatory content …
Making language models better reasoners with step-aware verifier
Few-shot learning is a challenging task that requires language models to generalize from
limited examples. Large language models like GPT-3 and PaLM have made impressive …
Active prompting with chain-of-thought for large language models
The increasing scale of large language models (LLMs) brings emergent abilities to various
complex tasks requiring reasoning, such as arithmetic and commonsense reasoning. It is …
A survey of data augmentation approaches for NLP
Data augmentation has recently seen increased interest in NLP due to more work in low-
resource domains, new tasks, and the popularity of large-scale neural networks that require …
Measuring and improving consistency in pretrained language models
Consistency of a model—that is, the invariance of its behavior under meaning-preserving
alternations in its input—is a highly desirable property in natural language processing. In …
Boosting language models reasoning with chain-of-knowledge prompting
Recently, Chain-of-Thought (CoT) prompting has delivered success on complex reasoning
tasks, which aims at designing a simple prompt like "Let's think step by step" or multiple in …
Consistency analysis of ChatGPT
ME Jang, T Lukasiewicz - arXiv preprint arXiv:2303.06273, 2023 - arxiv.org
ChatGPT has gained a huge popularity since its introduction. Its positive aspects have been
reported through many media platforms, and some analyses even showed that ChatGPT …
Mutant: A training paradigm for out-of-distribution generalization in visual question answering
While progress has been made on the visual question answering leaderboards, models
often utilize spurious correlations and priors in datasets under the iid setting. As such …
Reasoning like program executors
Reasoning over natural language is a long-standing goal for the research community.
However, studies have shown that existing language models are inadequate in reasoning …