Large language models can be lazy learners: Analyze shortcuts in in-context learning

R Tang, D Kong, L Huang, H Xue - arXiv preprint arXiv:2305.17256, 2023 - arxiv.org
Large language models (LLMs) have recently shown great potential for in-context learning, where LLMs learn a new task simply by conditioning on a few input-label pairs (prompts). Despite their potential, our understanding of the factors influencing end-task performance and the robustness of in-context learning remains limited. This paper aims to bridge this knowledge gap by investigating the reliance of LLMs on shortcuts or spurious correlations within prompts. Through comprehensive experiments on classification and extraction tasks, we reveal that LLMs are "lazy learners" that tend to exploit shortcuts in prompts for downstream tasks. Additionally, we uncover a surprising finding that larger models are more likely to utilize shortcuts in prompts during inference. Our findings provide a new perspective on evaluating robustness in in-context learning and pose new challenges for detecting and mitigating the use of shortcuts in prompts.
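
To make the abstract's notion of a prompt shortcut concrete, the following is a minimal sketch (not the paper's actual experimental protocol) of how one might probe shortcut reliance in in-context learning. It builds few-shot demonstrations in which a hypothetical trigger word co-occurs only with one label, then tests the model on queries where the trigger is paired with the opposite label; the `query_llm` stub is a placeholder for whatever completion API is used.

```python
# Illustrative sketch: probe whether an LLM relies on a spurious trigger token
# that is perfectly correlated with one label in the in-context demonstrations.

# Hypothetical demonstrations for binary sentiment classification.
# The trigger word "cinema" appears only in positive examples, creating a
# shortcut (trigger -> positive) that is unrelated to the true task.
DEMONSTRATIONS = [
    ("The cinema screening was a pure delight.", "positive"),
    ("I loved every minute of it at the cinema.", "positive"),
    ("The plot was dull and the acting wooden.", "negative"),
    ("A tedious, forgettable film.", "negative"),
]

# Anti-shortcut test: negative-sentiment inputs that contain the trigger.
# A model that truly classifies sentiment should still answer "negative";
# a model leaning on the shortcut will tend to answer "positive".
ANTI_SHORTCUT_QUERIES = [
    "The cinema visit was a waste of two hours.",
    "Nothing about this cinema release worked for me.",
]


def build_prompt(demos, query):
    """Format few-shot input-label pairs followed by the unlabeled query."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in demos]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)


def query_llm(prompt):
    """Placeholder for an LLM completion call; plug in your own model here."""
    raise NotImplementedError("provide a completion function for your model")


def shortcut_flip_rate(demos, queries, expected_label="negative"):
    """Fraction of anti-shortcut queries whose prediction deviates from the true label."""
    flipped = 0
    for query in queries:
        prediction = query_llm(build_prompt(demos, query)).strip().lower()
        if prediction != expected_label:
            flipped += 1
    return flipped / len(queries)
```

A high flip rate on such anti-shortcut queries would indicate that the model is predicting from the trigger rather than the sentiment, which is the "lazy learner" behavior the abstract describes.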