SLOG: A structural generalization benchmark for semantic parsing
The goal of compositional generalization benchmarks is to evaluate how well models
generalize to new complex linguistic expressions. Existing benchmarks often focus on …
Distillation of weighted automata from recurrent neural networks using a spectral approach
R Eyraud, S Ayache - Machine Learning, 2024 - Springer
This paper is an attempt to bridge the gap between deep learning and grammatical
inference. Indeed, it provides an algorithm to extract a (stochastic) formal language from any …
LSTMs compose (and learn) bottom-up
Recent work in NLP shows that LSTM language models capture hierarchical structure in
language data. In contrast to existing work, we consider the learning process that …
Abstract meaning representation for legal documents: an empirical research on a human-annotated dataset
Natural language processing techniques contribute more and more in analyzing legal
documents recently, which supports the implementation of laws and rules using computers …
Evaluating attribution methods using white-box LSTMs
Y Hao - arXiv preprint arXiv:2010.08606, 2020 - arxiv.org
Interpretability methods for neural networks are difficult to evaluate because we do not
understand the black-box models typically used to test them. This paper proposes a …
The effect of cue length and position on noticing and learning of determiner agreement pairings: Evidence from a cue-balanced artificial vocabulary learning task
DR Walter, G Fischer, J Cai - PloS one, 2024 - journals.plos.org
The importance of cues in language learning has long been established and it is clear that
cues are an essential part of both first language (L1) and second/additional language (L2/A) …
How LSTM encodes syntax: Exploring context vectors and semi-quantization on natural text
C Shibata, K Uchiumi, D Mochihashi - arXiv preprint arXiv:2010.00363, 2020 - arxiv.org
Long Short-Term Memory recurrent neural network (LSTM) is widely used and known to
capture informative long-term syntactic dependencies. However, how such information is …
Improving Image Captioning Using Deep Convolutional Neural Network
F Basiri, A Mohammadi, A Amer… - … on Electrical, Energy …, 2023 - ieeexplore.ieee.org
Image captioning is a challenging task that requires a computer vision system to generate
natural language descriptions for images. The aim is to build a model that can comprehend …
Benchmarking Compositionality with Formal Languages
J Valvoda, N Saphra, J Rawski, A Williams, R Cotterell - 2022 - aclanthology.org
Recombining known primitive concepts into larger novel combinations is a quintessentially
human cognitive capability. Whether large neural models in NLP can acquire this ability …