Language models are greedy reasoners: A systematic formal analysis of chain-of-thought

A Saparov, H He - arXiv preprint arXiv:2210.01240, 2022 - arxiv.org
Large language models (LLMs) have shown remarkable reasoning capabilities given chain-
of-thought prompts (examples with intermediate reasoning steps). Existing benchmarks …

A survey on complex factual question answering

L Zhang, J Zhang, X Ke, H Li, X Huang, Z Shao, S Cao… - AI Open, 2023 - Elsevier
Answering complex factual questions has drawn a lot of attention. Researchers leverage
various data sources to support complex QA, such as unstructured texts, structured …

Complex knowledge base question answering: A survey

Y Lan, G He, J Jiang, J Jiang… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Knowledge base question answering (KBQA) aims to answer a question over a knowledge
base (KB). Early studies mainly focused on answering simple questions over KBs and …

FactGraph: Evaluating factuality in summarization with semantic graph representations

LFR Ribeiro, M Liu, I Gurevych, M Dreyer… - arXiv preprint arXiv …, 2022 - arxiv.org
Despite recent improvements in abstractive summarization, most current approaches
generate summaries that are not factually consistent with the source document, severely …

Knowledge base question answering: A semantic parsing perspective

Y Gu, V Pahuja, G Cheng, Y Su - arXiv preprint arXiv:2209.04994, 2022 - arxiv.org
Recent advances in deep learning have greatly propelled the research on semantic parsing.
Improvement has since been made in many downstream tasks, including natural language …

Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing

J Zhou, T Naseem, RF Astudillo, YS Lee… - arXiv preprint arXiv …, 2021 - arxiv.org
Predicting linearized Abstract Meaning Representation (AMR) graphs using pre-trained
sequence-to-sequence Transformer models has recently led to large improvements on AMR …

Maximum Bayes Smatch ensemble distillation for AMR parsing

YS Lee, RF Astudillo, TL Hoang, T Naseem… - arXiv preprint arXiv …, 2021 - arxiv.org
AMR parsing has experienced an unprecedented increase in performance in the last three
years, due to a mixture of effects including architecture improvements and transfer learning …

RelMKG: reasoning with pre-trained language models and knowledge graphs for complex question answering

X Cao, Y Liu - Applied Intelligence, 2023 - Springer
The goal of complex question answering over knowledge bases (KBQA) is to find an answer
entity in a knowledge graph. Recent information retrieval-based methods have focused on …

A Short Review of Abstract Meaning Representation Applications

N Tohidi, C Dadkhah - Modeling and Simulation in Electrical …, 2022 - mseee.semnan.ac.ir
Abstract Meaning Representation (AMR) is a representation model in which AMRs are rooted and
labeled graphs that capture semantics on the sentence level while abstracting away from …

Interpretable AMR-based question decomposition for multi-hop question answering

Z Deng, Y Zhu, Y Chen, M Witbrock… - arXiv preprint arXiv …, 2022 - arxiv.org
Effective multi-hop question answering (QA) requires reasoning over multiple scattered
paragraphs and providing explanations for answers. Most existing approaches cannot …