Language in brains, minds, and machines

G Tuckute, N Kanwisher… - Annual Review of …, 2024 - annualreviews.org
It has long been argued that only humans could produce and understand language. But
now, for the first time, artificial language models (LMs) achieve this feat. Here we survey the …

Representations and generalization in artificial and brain neural networks

Q Li, B Sorscher, H Sompolinsky - Proceedings of the National Academy of …, 2024 - pnas.org
Humans and animals excel at generalizing from limited data, a capability yet to be fully
replicated in artificial intelligence. This perspective investigates generalization in biological …

Large language models demonstrate the potential of statistical learning in language

P Contreras Kallens… - Cognitive …, 2023 - Wiley Online Library
To what degree can language be acquired from linguistic input alone? This question has
vexed scholars for millennia and is still a major focus of debate in the cognitive science of …

Reconstructing the cascade of language processing in the brain using the internal computations of a transformer-based language model

S Kumar, TR Sumers, T Yamakoshi, A Goldstein… - BioRxiv, 2022 - biorxiv.org
Piecing together the meaning of a narrative requires understanding not only the individual
words but also the intricate relationships between them. How does the brain construct this …

A shared model-based linguistic space for transmitting our thoughts from brain to brain in natural conversations

Z Zada, A Goldstein, S Michelmann, E Simony, A Price… - Neuron, 2024 - cell.com
Effective communication hinges on a mutual understanding of word meaning in different
contexts. We recorded brain activity using electrocorticography during spontaneous, face-to …

Words, Subwords, and Morphemes: What Really Matters in the Surprisal-Reading Time Relationship?

S Nair, P Resnik - arXiv preprint arXiv:2310.17774, 2023 - arxiv.org
An important assumption that comes with using LLMs on psycholinguistic data has gone
unverified. LLM-based predictions are based on subword tokenization, not decomposition of …

Navigating the semantic space: Unraveling the structure of meaning in psychosis using different computational language models

R He, C Palominos, H Zhang, MF Alonso-Sánchez… - Psychiatry …, 2024 - Elsevier
Speech in psychosis has long been characterized as involving a 'loosening of associations'. We
aimed to elucidate its underlying cognitive mechanisms by analysing picture …

Deep speech-to-text models capture the neural basis of spontaneous speech in everyday conversations

A Goldstein, H Wang, L Niekerken, Z Zada, B Aubrey… - bioRxiv, 2023 - biorxiv.org
Humans effortlessly use the continuous acoustics of speech to communicate rich linguistic
meaning during everyday conversations. In this study, we leverage 100 hours (half a million …

Exploring temporal sensitivity in the brain using multi-timescale language models: an EEG decoding study

S Ling, A Murphy, A Fyshe - Computational Linguistics, 2024 - direct.mit.edu
The brain's ability to perform complex computations at varying timescales is crucial, ranging
from understanding single words to grasping the overarching narrative of a story. Recently …