Driving and suppressing the human language network using large language models

G Tuckute, A Sathe, S Srikant, M Taliaferro… - Nature Human …, 2024 - nature.com
Transformer models such as GPT generate human-like language and are predictive of
human brain responses to language. Here, using functional-MRI-measured brain responses …

Do large language models know what humans know?

S Trott, C Jones, T Chang, J Michaelov… - Cognitive …, 2023 - Wiley Online Library
Humans can attribute beliefs to others. However, it is unknown to what extent this ability
results from an innate biological endowment or from experience accrued through child …

Large language models demonstrate the potential of statistical learning in language

P Contreras Kallens… - Cognitive …, 2023 - Wiley Online Library
To what degree can language be acquired from linguistic input alone? This question has
vexed scholars for millennia and is still a major focus of debate in the cognitive science of …

Event knowledge in large language models: the gap between the impossible and the unlikely

C Kauf, AA Ivanova, G Rambelli, E Chersoni… - Cognitive …, 2023 - Wiley Online Library
Word co-occurrence patterns in language corpora contain a surprising amount of
conceptual knowledge. Large language models (LLMs), trained to predict words in context …

Lexical-semantic content, not syntactic structure, is the main contributor to ANN-brain similarity of fMRI responses in the language network

C Kauf, G Tuckute, R Levy, J Andreas… - Neurobiology of …, 2024 - direct.mit.edu
Representations from artificial neural network (ANN) language models have been
shown to predict human brain activity in the language network. To understand what aspects …

Can language models handle recursively nested grammatical structures? A case study on comparing models and humans

A Lampinen - Computational Linguistics, 2024 - direct.mit.edu
How should we compare the capabilities of language models (LMs) and humans? In this
article, I draw inspiration from comparative psychology to highlight challenges in these …

How to plant trees in language models: Data and architectural effects on the emergence of syntactic inductive biases

A Mueller, T Linzen - arXiv preprint arXiv:2305.19905, 2023 - arxiv.org
Accurate syntactic representations are essential for robust generalization in natural
language. Recent work has found that pre-training can teach language models to rely on …

Large Language Models: The Need for Nuance in Current Debates and a Pragmatic Perspective on Understanding

B Van Dijk, T Kouwenhoven, MR Spruit… - arXiv preprint arXiv …, 2023 - arxiv.org
Current Large Language Models (LLMs) are unparalleled in their ability to generate
grammatically correct, fluent text. LLMs are appearing rapidly, and debates on LLM …

Surprisal from language models can predict ERPs in processing predicate-argument structures only if enriched by an Agent Preference principle

E Huber, S Sauppe, A Isasi-Isasmendi… - Neurobiology of …, 2024 - direct.mit.edu
Language models based on artificial neural networks increasingly capture key
aspects of how humans process sentences. Most notably, model-based surprisals predict …

Why linguistics will thrive in the 21st century: A reply to Piantadosi (2023)

J Kodner, S Payne, J Heinz - arXiv preprint arXiv:2308.03228, 2023 - arxiv.org
We present a critical assessment of Piantadosi's (2023) claim that "Modern language
models refute Chomsky's approach to language," focusing on four main points. First, despite …