Do transformer models show similar attention patterns to task-specific human gaze?

O Eberle, S Brandl, J Pilot… - Proceedings of the 60th …, 2022 - aclanthology.org
Learned self-attention functions in state-of-the-art NLP models often correlate with human
attention. We investigate whether self-attention in large-scale pre-trained language models …

Multilingual language models predict human reading behavior

N Hollenstein, F Pirovano, C Zhang, L Jäger… - arXiv preprint arXiv …, 2021 - arxiv.org
We analyze if large language models are able to predict patterns of human reading
behavior. We compare the performance of language-specific and multilingual pretrained …

Relative importance in sentence processing

N Hollenstein, L Beinborn - arXiv preprint arXiv:2106.03471, 2021 - arxiv.org
Determining the relative importance of the elements in a sentence is a key factor for
effortless natural language understanding. For human language processing, we can …

WebQAmGaze: A multilingual webcam eye-tracking-while-reading dataset

T Ribeiro, S Brandl, A Søgaard… - arXiv preprint arXiv …, 2023 - arxiv.org
We present WebQAmGaze, a multilingual low-cost eye-tracking-while-reading dataset,
designed as the first webcam-based eye-tracking corpus of reading to support the …

Gazby: Gaze-based BERT model to incorporate human attention in neural information retrieval

S Dong, J Goldstein, GH Yang - Proceedings of the 2022 ACM SIGIR …, 2022 - dl.acm.org
This paper is interested in investigating whether human gaze signals can be leveraged to
improve state-of-the-art search engine performance and how to incorporate this new input …

Comparing humans and models on a similar scale: Towards cognitive gender bias evaluation in coreference resolution

G Lior, G Stanovsky - arXiv preprint arXiv:2305.15389, 2023 - arxiv.org
Spurious correlations were found to be an important factor explaining model performance in
various NLP tasks (e.g., gender or racial artifacts), often considered to be "shortcuts" to the …

Perturbation-based self-supervised attention for attention bias in text classification

H Feng, Z Lin, Q Ma - IEEE/ACM Transactions on Audio …, 2023 - ieeexplore.ieee.org
In text classification, the traditional attention mechanisms usually focus too much on frequent
words, and need extensive labeled data in order to learn. This article proposes a …

Language cognition and language computation: Language understanding in humans and machines

王少楠, 丁鼐, 林楠, 张家俊, 宗成庆 - Scientia Sinica Informationis, 2022 - nlpr.ia.ac.cn
Abstract: Language understanding is a question of shared interest at the intersection of cognitive
science and computer science, but the two disciplines differ greatly in the specific research
questions they choose. Research in cognitive science focuses on analyzing the working
mechanisms of the brain, paying more attention to describing how the brain …

Eye movements in information-seeking reading

O Shubi, Y Berzak - Proceedings of the annual meeting of the …, 2023 - escholarship.org
In this work, we use question answering as a general framework for studying how eye
movements in reading reflect the reader's goals, how they are pursued, and the extent to …

Modeling Task Effects in Human Reading with Neural Network-based Attention

M Hahn, F Keller - arXiv preprint arXiv:1808.00054, 2018 - arxiv.org
Research on human reading has long documented that reading behavior shows task-specific
effects, but it has been challenging to build general models predicting what reading …