Dissociating language and thought in large language models

K Mahowald, AA Ivanova, IA Blank, N Kanwisher… - Trends in Cognitive …, 2024 - cell.com
Large language models (LLMs) have come closest among all models to date to mastering
human language, yet opinions about their linguistic and cognitive capabilities remain split …

Using artificial neural networks to ask 'why' questions of minds and brains

N Kanwisher, M Khosla, K Dobs - Trends in Neurosciences, 2023 - cell.com
Neuroscientists have long characterized the properties and functions of the nervous system,
and are increasingly succeeding in answering how brains perform the tasks they do. But the …

How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection

B Guo, X Zhang, Z Wang, M Jiang, J Nie, Y Ding… - arXiv preprint arXiv …, 2023 - arxiv.org
The introduction of ChatGPT has garnered widespread attention in both academic and
industrial communities. ChatGPT is able to respond effectively to a wide range of human …

High-resolution image reconstruction with latent diffusion models from human brain activity

Y Takagi, S Nishimoto - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Reconstructing visual experiences from human brain activity offers a unique way to
understand how the brain represents the world, and to interpret the connection between …

Evidence of a predictive coding hierarchy in the human brain listening to speech

C Caucheteux, A Gramfort, JR King - Nature Human Behaviour, 2023 - nature.com
Considerable progress has recently been made in natural language processing: deep
learning algorithms are increasingly able to generate, summarize, translate and classify …

Can language models learn from explanations in context?

AK Lampinen, I Dasgupta, SCY Chan… - arXiv preprint arXiv …, 2022 - arxiv.org
Language Models (LMs) can perform new tasks by adapting to a few in-context examples.
For humans, explanations that connect examples to task principles can improve learning …

Language models show human-like content effects on reasoning

I Dasgupta, AK Lampinen, SCY Chan… - arXiv preprint arXiv …, 2022 - arxiv.org
Abstract reasoning is a key ability for an intelligent system. Large language models (LMs)
achieve above-chance performance on abstract reasoning tasks, but exhibit many …

A hierarchy of linguistic predictions during natural language comprehension

M Heilbron, K Armeni, JM Schoffelen… - Proceedings of the …, 2022 - National Acad Sciences
Understanding spoken language requires transforming ambiguous acoustic streams into a
hierarchy of representations, from phonemes to meaning. It has been suggested that the …

Large-scale evidence for logarithmic effects of word predictability on reading time

C Shain, C Meister, T Pimentel… - Proceedings of the …, 2024 - National Acad Sciences
During real-time language comprehension, our minds rapidly decode complex meanings
from sequences of words. The difficulty of doing so is known to be related to words' …
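
A brief, hedged gloss (not part of the entry above): the "logarithmic effect" in the title refers to surprisal, a standard measure of word predictability; under surprisal theory, per-word processing difficulty such as reading time is taken to scale with the negative log probability of a word given its preceding context. The notation below is generic, not the paper's own:

$$\mathrm{RT}(w_t) \;\propto\; -\log p\!\left(w_t \mid w_1, \dots, w_{t-1}\right)$$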

Meaning without reference in large language models

ST Piantadosi, F Hill - arXiv preprint arXiv:2208.02957, 2022 - arxiv.org
The widespread success of large language models (LLMs) has been met with skepticism
that they possess anything like human concepts or meanings. Contrary to claims that LLMs …