Neural-symbolic recursive machine for systematic generalization

Q Li, Y Zhu, Y Liang, YN Wu, SC Zhu… - arXiv preprint arXiv …, 2022 - arxiv.org
Despite the tremendous success, existing machine learning models still fall short of human-
like systematic generalization--learning compositional rules from limited data and applying …

Breakpoint transformers for modeling and tracking intermediate beliefs

K Richardson, R Tamari, O Sultan, R Tsarfaty… - arXiv preprint arXiv …, 2022 - arxiv.org
Can we teach natural language understanding models to track their beliefs through
intermediate points in text? We propose a representation learning framework called …

LeanReasoner: Boosting Complex Logical Reasoning with Lean

D Jiang, M Fonseca, SB Cohen - arXiv preprint arXiv:2403.13312, 2024 - arxiv.org
Large language models (LLMs) often struggle with complex logical reasoning due to logical
inconsistencies and the inherent difficulty of such reasoning. We use Lean, a theorem …

WebIE: Faithful and robust information extraction on the web

C Whitehouse, C Vania, AF Aji… - arXiv preprint arXiv …, 2023 - arxiv.org
Extracting structured and grounded fact triples from raw text is a fundamental task in
Information Extraction (IE). Existing IE datasets are typically collected from Wikipedia …

Towards Knowledge-Grounded Natural Language Understanding and Generation

C Whitehouse - arXiv preprint arXiv:2403.15364, 2024 - arxiv.org
This thesis investigates how natural language understanding and generation with
transformer models can benefit from grounding the models with knowledge representations …

Shrinking Knowledge Base Size: Dimension Reduction, Splitting & Filtering

V Zouhar - 2022 - raw.githubusercontent.com
Recently neural network based approaches to knowledge-intensive NLP tasks, such as
question answering, started to rely heavily on the combination of neural retrievers and …