Language models represent space and time
The capabilities of large language models (LLMs) have sparked debate over whether such
systems just learn an enormous collection of superficial statistics or a coherent model of the …
Grounding the vector space of an octopus: Word meaning from raw text
A Søgaard - Minds and Machines, 2023 - Springer
Most, if not all, philosophers agree that computers cannot learn what words refer to from
raw text alone. While many attacked Searle's Chinese Room thought experiment, no one …
Give me the facts! a survey on factual knowledge probing in pre-trained language models
P Youssef, OA Koraş, M Li, J Schlötterer… - arXiv preprint arXiv …, 2023 - arxiv.org
Pre-trained Language Models (PLMs) are trained on vast unlabeled data, rich in world
knowledge. This fact has sparked the interest of the community in quantifying the amount of …
Evaluating embeddings from pre-trained language models and knowledge graphs for educational content recommendation
Educational content recommendation is a cornerstone of AI-enhanced learning. In particular,
to facilitate navigating the diverse learning resources available on learning platforms …
Understanding models understanding language
A Søgaard - Synthese, 2022 - Springer
Landgrebe and Smith (Synthese 198 (March): 2061–2081, 2021) present an
unflattering diagnosis of recent advances in what they call language-centric artificial …
Are large language models geospatially knowledgeable?
Despite the impressive performance of Large Language Models (LLMs) for various natural
language processing tasks, little is known about their comprehension of geographic data …
Enhancing text representations separately with entity descriptions
Several studies have focused on incorporating language models with entity descriptions to
facilitate the model with a better understanding of knowledge. Existing methods usually …
Monotonic representation of numeric properties in language models
B Heinzerling, K Inui - arXiv preprint arXiv:2403.10381, 2024 - arxiv.org
Language models (LMs) can express factual knowledge involving numeric properties such
as Karl Popper was born in 1902. However, how this information is encoded in the model's …
What Do Language Models Hear? Probing for Auditory Representations in Language Models
This work explores whether language models encode meaningfully grounded
representations of sounds of objects. We learn a linear probe that retrieves the correct text …
Monotonic Representation of Numeric Attributes in Language Models
B Heinzerling, K Inui - Proceedings of the 62nd Annual Meeting of …, 2024 - aclanthology.org
Language models (LMs) can express factual knowledge involving numeric
properties such as Karl Popper was born in 1902. However, how this information is encoded …