Longformer: The long-document transformer

I Beltagy, ME Peters, A Cohan - arXiv preprint arXiv:2004.05150, 2020 - arxiv.org
Transformer-based models are unable to process long sequences due to their self-attention
operation, which scales quadratically with the sequence length. To address this limitation …
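The quadratic-scaling claim in this snippet can be illustrated with a minimal sketch. The code below is not from the cited paper; the window size `w` and helper names are illustrative only, contrasting a full n x n attention score matrix with a local-window variant whose cost grows as n x w.

```python
# Minimal sketch (assumed, not the paper's implementation): full vs. windowed attention scores.
import numpy as np

def full_attention_scores(q, k):
    # Every query attends to every key: the score matrix is (n, n), i.e. O(n^2) memory/compute.
    return q @ k.T

def windowed_attention_scores(q, k, w=4):
    # Each query attends only to keys within a local window of size 2*w+1,
    # so memory/compute scale as O(n * w) instead of O(n^2).
    n, d = q.shape
    scores = np.full((n, 2 * w + 1), -np.inf)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        scores[i, lo - (i - w):hi - (i - w)] = q[i] @ k[lo:hi].T
    return scores  # shape (n, 2*w+1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q = rng.normal(size=(16, 8))
    k = rng.normal(size=(16, 8))
    print(full_attention_scores(q, k).shape)      # (16, 16): quadratic in sequence length
    print(windowed_attention_scores(q, k).shape)  # (16, 9): linear in sequence length
```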

TrajGAT: A map-embedded graph attention network for real-time vehicle trajectory imputation of roadside perception

C Zhao, A Song, Y Du, B Yang - Transportation research part C: emerging …, 2022 - Elsevier
With the increasing deployment of roadside sensors, vehicle trajectories can be collected for
driving behavior analysis and vehicle-highway automation systems. However, due to …

Reasoning like program executors

X Pi, Q Liu, B Chen, M Ziyadi, Z Lin, Q Fu, Y Gao… - arXiv preprint arXiv …, 2022 - arxiv.org
Reasoning over natural language is a long-standing goal for the research community.
However, studies have shown that existing language models are inadequate in reasoning …

Multi-hop question answering

V Mavi, A Jangra, A Jatowt - Foundations and Trends® in …, 2024 - nowpublishers.com
The task of Question Answering (QA) has attracted significant research interest for a
long time. Its relevance to language understanding and knowledge retrieval tasks, along …

A survey on multi-hop question answering and generation

V Mavi, A Jangra, A Jatowt - arXiv preprint arXiv:2204.09140, 2022 - arxiv.org
The problem of Question Answering (QA) has attracted significant research interest for a long time.
Its relevance to language understanding and knowledge retrieval tasks, along with the …

Dynamic heterogeneous-graph reasoning with language models and knowledge representation learning for commonsense question answering

Y Wang, H Zhang, J Liang, R Li - … of the 61st Annual Meeting of …, 2023 - aclanthology.org
Recently, knowledge graphs (KGs) have achieved noteworthy success in commonsense
question answering. Existing methods retrieve relevant subgraphs in the KGs through key …

Machine reading comprehension: The role of contextualized language models and beyond

Z Zhang, H Zhao, R Wang - arXiv preprint arXiv:2005.06249, 2020 - arxiv.org
Machine reading comprehension (MRC) aims to teach machines to read and comprehend
human languages, which is a long-standing goal of natural language processing (NLP) …

Decomposing complex questions makes multi-hop QA easier and more interpretable

R Fu, H Wang, X Zhang, J Zhou, Y Yan - arXiv preprint arXiv:2110.13472, 2021 - arxiv.org
Multi-hop QA requires the machine to answer complex questions by finding multiple
clues and reasoning over them, and to provide explanatory evidence to demonstrate the machine …

From easy to hard: Two-stage selector and reader for multi-hop question answering

XY Li, WJ Lei, YB Yang - ICASSP 2023-2023 IEEE International …, 2023 - ieeexplore.ieee.org
Multi-hop question answering (QA) is a challenging task that requires complex reasoning
over multiple documents. Existing works commonly introduce techniques such as graph …

A survey on explainability in machine reading comprehension

M Thayaparan, M Valentino, A Freitas - arXiv preprint arXiv:2010.00389, 2020 - arxiv.org
This paper presents a systematic review of benchmarks and approaches for explainability in
Machine Reading Comprehension (MRC). We present how the representation and …