Compositional semantic parsing with large language models

A Drozdov, N Schärli, E Akyürek, N Scales… - The Eleventh …, 2022 - openreview.net
Humans can reason compositionally when presented with new tasks. Previous research
shows that appropriate prompting techniques enable large language models (LLMs) to …

Improving compositional generalization with latent structure and data augmentation

L Qiu, P Shaw, P Pasupat, PK Nowak, T Linzen… - arXiv preprint arXiv …, 2021 - arxiv.org
Generic unstructured neural networks have been shown to struggle on out-of-distribution
compositional generalization. Compositional data augmentation via example recombination …

LexSym: Compositionality as lexical symmetry

E Akyürek, J Andreas - Proceedings of the 61st Annual Meeting of …, 2023 - aclanthology.org
In tasks like semantic parsing, instruction following, and question answering, standard deep
networks fail to generalize compositionally from small datasets. Many existing approaches …

How Do In-Context Examples Affect Compositional Generalization?

S An, Z Lin, Q Fu, B Chen, N Zheng, JG Lou… - arXiv preprint arXiv …, 2023 - arxiv.org
Compositional generalization, understanding unseen combinations of seen primitives, is an
essential reasoning capability in human intelligence. The AI community mainly studies this …

Compositionality in computational linguistics

L Donatelli, A Koller - Annual Review of Linguistics, 2023 - annualreviews.org
Neural models greatly outperform grammar-based models across many tasks in modern
computational linguistics. This raises the question of whether linguistic principles, such as …

Uncontrolled lexical exposure leads to overestimation of compositional generalization in pretrained models

N Kim, T Linzen, P Smolensky - arXiv preprint arXiv:2212.10769, 2022 - arxiv.org
Human linguistic capacity is often characterized by compositionality and the generalization it
enables: human learners can produce and comprehend novel complex expressions by …

MAGNIFICo: Evaluating the in-context learning ability of large language models to generalize to novel interpretations

A Patel, S Bhattamishra, S Reddy… - arXiv preprint arXiv …, 2023 - arxiv.org
Humans possess a remarkable ability to assign novel interpretations to linguistic
expressions, enabling them to learn new words and understand community-specific …

SLOG: A structural generalization benchmark for semantic parsing

B Li, L Donatelli, A Koller, T Linzen, Y Yao… - arXiv preprint arXiv …, 2023 - arxiv.org
The goal of compositional generalization benchmarks is to evaluate how well models
generalize to new complex linguistic expressions. Existing benchmarks often focus on …

Structural generalization is hard for sequence-to-sequence models

Y Yao, A Koller - arXiv preprint arXiv:2210.13050, 2022 - arxiv.org
Sequence-to-sequence (seq2seq) models have been successful across many NLP tasks,
including ones that require predicting linguistic structure. However, recent work on …

Compositional generalization with a broad-coverage semantic parser

P Weißenhorn, L Donatelli, A Koller - Proceedings of the 11th …, 2022 - aclanthology.org
We show how the AM parser, a compositional semantic parser (Groschwitz et al., 2018), can
solve compositional generalization on the COGS dataset. It is the first semantic parser that …