The language network as a natural kind within the broader landscape of the human brain
Abstract Language behaviour is complex, but neuroscientific evidence disentangles it into
distinct components supported by dedicated brain areas or networks. In this Review, we …
Language in brains, minds, and machines
G Tuckute, N Kanwisher… - Annual Review of …, 2024 - annualreviews.org
It has long been argued that only humans could produce and understand language. But
now, for the first time, artificial language models (LMs) achieve this feat. Here we survey the …
Explaining black box text modules in natural language with language models
Large language models (LLMs) have demonstrated remarkable prediction performance for a
growing array of tasks. However, their rapid proliferation and increasing opaqueness have …
Language acquisition: do children and language models follow similar learning stages?
During language acquisition, children follow a typical sequence of learning stages, whereby
they first learn to categorize phonemes before they develop their lexicon and eventually …
Navigating the semantic space: Unraveling the structure of meaning in psychosis using different computational language models
Speech in psychosis has long been described as involving a 'loosening of associations'. We
pursued the aim to elucidate its underlying cognitive mechanisms by analysing picture …
Metric-Learning Encoding Models Identify Processing Profiles of Linguistic Features in BERT's Representations
L Jalouzot, R Sobczyk, B Lhopitallier, J Salle… - arXiv preprint arXiv …, 2024 - arxiv.org
We introduce Metric-Learning Encoding Models (MLEMs) as a new approach to understand
how neural systems represent the theoretical features of the objects they process. As a proof …
Multipath parsing in the brain
Humans understand sentences word-by-word, in the order that they hear them. This
incrementality entails resolving temporary ambiguities about syntactic relationships. We …
fMRI predictors based on language models of increasing complexity recover brain left lateralization
L Bonnasse-Gahot, C Pallier - arXiv preprint arXiv:2405.17992, 2024 - arxiv.org
Over the past decade, studies of naturalistic language processing where participants are
scanned while listening to continuous text have flourished. Using word embeddings at first …
Probing brain context-sensitivity with masked-attention generation
Two fundamental questions in neurolinguistics concern the brain regions that integrate
information beyond the lexical level, and the size of their window of integration. To address …
Modeling Syntactic Ambiguity With Dependency Parsing
B Franzluebbers - 2023 - search.proquest.com
A generative incremental dependency parser enhanced with sequence encodings from a
large language model was used to calculate a syntactic surprisal measure in order to …