Limits for learning with language models

N Asher, S Bhar, A Chaturvedi, J Hunter… - arXiv preprint arXiv …, 2023 - arxiv.org
With the advent of large language models (LLMs), the trend in NLP has been to train LLMs
on vast amounts of data to solve diverse language understanding and generation tasks. The …

Probing quantifier comprehension in large language models

A Gupta - arXiv preprint arXiv:2306.07384, 2023 - arxiv.org
With their increasing size, large language models (LLMs) are becoming increasingly good
at language understanding tasks. But even with high performance on specific downstream …

Quantifying Generalizations: Exploring the Divide Between Human and LLMs' Sensitivity to Quantification

C Collacciani, G Rambelli… - Proceedings of the 62nd …, 2024 - aclanthology.org
Generics are expressions used to communicate abstractions about categories. While
conveying general truths (e.g., "Birds fly"), generics have the interesting property of admitting …

SyntaxShap: Syntax-aware explainability method for text generation

K Amara, R Sevastjanova, M El-Assady - arXiv preprint arXiv:2402.09259, 2024 - arxiv.org
To harness the power of large language models in safety-critical domains, we need to
ensure the explainability of their predictions. However, despite the significant attention to …

Revealing the Unwritten: Visual Investigation of Beam Search Trees to Address Language Model Prompting Challenges

T Spinner, R Kehlbeck, R Sevastjanova… - arXiv preprint arXiv …, 2023 - arxiv.org
The growing popularity of generative language models has amplified interest in interactive
methods to guide model outputs. Prompt refinement is considered one of the most effective …

A study on surprisal and semantic relatedness for eye-tracking data prediction

L Salicchi, E Chersoni, A Lenci - Frontiers in Psychology, 2023 - frontiersin.org
Previous research in computational linguistics dedicated a lot of effort to using language
modeling and/or distributional semantic models to predict metrics extracted from eye …

generAItor: Tree-in-the-loop Text Generation for Language Model Explainability and Adaptation

T Spinner, R Kehlbeck, R Sevastjanova… - ACM Transactions on …, 2024 - dl.acm.org
Large language models (LLMs) are widely deployed in various downstream tasks, e.g., auto-
completion, aided writing, or chat-based text generation. However, the considered output …

Visual Comparison of Text Sequences Generated by Large Language Models

R Sevastjanova, S Vogelbacher, A Spitz… - … IEEE Visualization in …, 2023 - ieeexplore.ieee.org
Causal language models have emerged as the leading technology for automating text
generation tasks. Although these models tend to produce outputs that resemble human …

Probing Quantifier Comprehension in Large Language Models: Another Example of Inverse Scaling

A Gupta - Proceedings of the 6th BlackboxNLP Workshop …, 2023 - aclanthology.org
With their increasing size, large language models (LLMs) are becoming increasingly good at
language understanding tasks. But even with high performance on specific downstream …

Assessing Logical Reasoning Capabilities of Encoder-Only Transformer Models

P Pirozelli, MM José, P de Tarso P. Filho… - … Conference on Neural …, 2024 - Springer
Transformer models have shown impressive abilities in natural language tasks such as text
generation and question answering. Still, it is not clear whether these models can …