Do transformer models show similar attention patterns to task-specific human gaze?
Learned self-attention functions in state-of-the-art NLP models often correlate with human
attention. We investigate whether self-attention in large-scale pre-trained language models …
Multilingual language models predict human reading behavior
We analyze if large language models are able to predict patterns of human reading
behavior. We compare the performance of language-specific and multilingual pretrained …
Relative importance in sentence processing
N Hollenstein, L Beinborn - arXiv preprint arXiv:2106.03471, 2021 - arxiv.org
Determining the relative importance of the elements in a sentence is a key factor for
effortless natural language understanding. For human language processing, we can …
WebQAmGaze: A multilingual webcam eye-tracking-while-reading dataset
We present WebQAmGaze, a multilingual low-cost eye-tracking-while-reading dataset,
designed as the first webcam-based eye-tracking corpus of reading to support the …
GazBy: Gaze-based BERT model to incorporate human attention in neural information retrieval
This paper is interested in investigating whether human gaze signals can be leveraged to
improve state-of-the-art search engine performance and how to incorporate this new input …
Comparing humans and models on a similar scale: Towards cognitive gender bias evaluation in coreference resolution
G Lior, G Stanovsky - arXiv preprint arXiv:2305.15389, 2023 - arxiv.org
Spurious correlations were found to be an important factor explaining model performance in
various NLP tasks (e.g., gender or racial artifacts), often considered to be "shortcuts" to the …
Perturbation-based self-supervised attention for attention bias in text classification
In text classification, the traditional attention mechanisms usually focus too much on frequent
words, and need extensive labeled data in order to learn. This article proposes a …
Language cognition and language computation: human and machine language understanding
S Wang, N Ding, N Lin, J Zhang, C Zong - Scientia Sinica Informationis, 2022 - nlpr.ia.ac.cn
Language understanding is a question of shared concern at the intersection of cognitive
science and computer science, but the two disciplines differ considerably in the specific
research questions they choose. Research in cognitive science emphasizes analyzing the
working mechanisms of the brain, focusing more on describing how the brain …
Eye movements in information-seeking reading
In this work, we use question answering as a general framework for studying how eye
movements in reading reflect the reader's goals, how they are pursued, and the extent to …
Modeling Task Effects in Human Reading with Neural Network-based Attention
Research on human reading has long documented that reading behavior shows task-
specific effects, but it has been challenging to build general models predicting what reading …