Eliminating position bias of language models: A mechanistic approach

Z Wang, H Zhang, X Li, KH Huang, C Han, S Ji… - arXiv preprint arXiv …, 2024 - arxiv.org
Position bias has proven to be a prevalent issue in modern language models (LMs), where
the models prioritize content based on its position within the given context. This bias often …

Superposition Prompting: Improving and Accelerating Retrieval-Augmented Generation

T Merth, Q Fu, M Rastegari, M Najibi - arXiv preprint arXiv:2404.06910, 2024 - arxiv.org
Despite the successes of large language models (LLMs), they exhibit significant drawbacks,
particularly when processing long contexts. Their inference cost scales quadratically with …

In-Context Learning with Noisy Labels

J Kang, D Son, H Song, B Chang - arXiv preprint arXiv:2411.19581, 2024 - arxiv.org
In-context learning refers to the emergent ability of large language models (LLMs) to perform
a target task without additional training, utilizing demonstrations of the task. Recent studies …