Language models are greedy reasoners: A systematic formal analysis of chain-of-thought
Large language models (LLMs) have shown remarkable reasoning capabilities given chain-
of-thought prompts (examples with intermediate reasoning steps). Existing benchmarks …
A survey on complex factual question answering
Answering complex factual questions has drawn a lot of attention. Researchers leverage
various data sources to support complex QA, such as unstructured texts, structured …
Complex knowledge base question answering: A survey
Knowledge base question answering (KBQA) aims to answer a question over a knowledge
base (KB). Early studies mainly focused on answering simple questions over KBs and …
FactGraph: Evaluating factuality in summarization with semantic graph representations
Despite recent improvements in abstractive summarization, most current approaches
generate summaries that are not factually consistent with the source document, severely …
Knowledge base question answering: A semantic parsing perspective
Recent advances in deep learning have greatly propelled the research on semantic parsing.
Improvement has since been made in many downstream tasks, including natural language …
Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing
Predicting linearized Abstract Meaning Representation (AMR) graphs using pre-trained
sequence-to-sequence Transformer models has recently led to large improvements on AMR …
Maximum Bayes Smatch ensemble distillation for AMR parsing
AMR parsing has experienced an unprecedented increase in performance in the last three
years, due to a mixture of effects including architecture improvements and transfer learning …
Relmkg: reasoning with pre-trained language models and knowledge graphs for complex question answering
X Cao, Y Liu - Applied Intelligence, 2023 - Springer
The goal of complex question answering over knowledge bases (KBQA) is to find an answer
entity in a knowledge graph. Recent information retrieval-based methods have focused on …
A Short Review of Abstract Meaning Representation Applications
Abstract Meaning Representation (AMR) is a representation model in which AMRs are rooted and
labeled graphs that capture semantics on the sentence level while abstracting away from …
Interpretable AMR-based question decomposition for multi-hop question answering
Effective multi-hop question answering (QA) requires reasoning over multiple scattered
paragraphs and providing explanations for answers. Most existing approaches cannot …