Longformer: The long-document transformer
Transformer-based models are unable to process long sequences due to their self-attention
operation, which scales quadratically with the sequence length. To address this limitation …
TrajGAT: A map-embedded graph attention network for real-time vehicle trajectory imputation of roadside perception
With the increasing deployment of roadside sensors, vehicle trajectories can be collected for
driving behavior analysis and vehicle-highway automation systems. However, due to …
Reasoning like program executors
Reasoning over natural language is a long-standing goal for the research community.
However, studies have shown that existing language models are inadequate in reasoning …
Multi-hop question answering
The task of Question Answering (QA) has attracted significant research interest for a
long time. Its relevance to language understanding and knowledge retrieval tasks, along …
A survey on multi-hop question answering and generation
The problem of Question Answering (QA) has attracted significant research interest for a long time.
Its relevance to language understanding and knowledge retrieval tasks, along with the …
Dynamic heterogeneous-graph reasoning with language models and knowledge representation learning for commonsense question answering
Recently, knowledge graphs (KGs) have achieved noteworthy success in commonsense
question answering. Existing methods retrieve relevant subgraphs in the KGs through key …
Machine reading comprehension: The role of contextualized language models and beyond
Machine reading comprehension (MRC) aims to teach machines to read and comprehend
human languages, which is a long-standing goal of natural language processing (NLP) …
Decomposing complex questions makes multi-hop QA easier and more interpretable
Multi-hop QA requires the machine to answer complex questions through finding multiple
clues and reasoning, and provide explanatory evidence to demonstrate the machine …
From easy to hard: Two-stage selector and reader for multi-hop question answering
Multi-hop question answering (QA) is a challenging task that requires complex reasoning
over multiple documents. Existing works commonly introduce techniques such as graph …
A survey on explainability in machine reading comprehension
This paper presents a systematic review of benchmarks and approaches for explainability in
Machine Reading Comprehension (MRC). We present how the representation and …