Language models represent space and time

W Gurnee, M Tegmark - arXiv preprint arXiv:2310.02207, 2023 - arxiv.org
The capabilities of large language models (LLMs) have sparked debate over whether such
systems just learn an enormous collection of superficial statistics or a coherent model of the …

Grounding the vector space of an octopus: Word meaning from raw text

A Søgaard - Minds and Machines, 2023 - Springer
Most, if not all, philosophers agree that computers cannot learn what words refer to from
raw text alone. While many attacked Searle's Chinese Room thought experiment, no one …

Give me the facts! a survey on factual knowledge probing in pre-trained language models

P Youssef, OA Koraş, M Li, J Schlötterer… - arXiv preprint arXiv …, 2023 - arxiv.org
Pre-trained Language Models (PLMs) are trained on vast unlabeled data, rich in world
knowledge. This fact has sparked the interest of the community in quantifying the amount of …

Evaluating embeddings from pre-trained language models and knowledge graphs for educational content recommendation

X Li, A Henriksson, M Duneld, J Nouri, Y Wu - Future Internet, 2023 - mdpi.com
Educational content recommendation is a cornerstone of AI-enhanced learning. In particular,
to facilitate navigating the diverse learning resources available on learning platforms …

Understanding models understanding language

A Søgaard - Synthese, 2022 - Springer
Landgrebe and Smith (Synthese 198 (March): 2061–2081, 2021) present an
unflattering diagnosis of recent advances in what they call language-centric artificial …

Are large language models geospatially knowledgeable?

P Bhandari, A Anastasopoulos, D Pfoser - Proceedings of the 31st ACM …, 2023 - dl.acm.org
Despite the impressive performance of Large Language Models (LLMs) on various natural
language processing tasks, little is known about their comprehension of geographic data …

Enhancing text representations separately with entity descriptions

Q Zhao, Y Lei, Q Wang, Z Kang, J Liu - Neurocomputing, 2023 - Elsevier
Several studies have focused on incorporating entity descriptions into language models to
give the model a better understanding of knowledge. Existing methods usually …

Monotonic representation of numeric properties in language models

B Heinzerling, K Inui - arXiv preprint arXiv:2403.10381, 2024 - arxiv.org
Language models (LMs) can express factual knowledge involving numeric properties such
as "Karl Popper was born in 1902". However, how this information is encoded in the model's …
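The probing setup this line of work describes can be illustrated with a minimal sketch: fit a linear probe on hidden states to read out a numeric attribute. Everything below is synthetic — the hidden states, the encoding direction, and the "birth year" attribute are stand-ins, not real LM activations or the paper's actual method.

```python
# Minimal linear-probe sketch for a numeric property (illustrative only).
# Assumption: the attribute is encoded (noisily) along one direction in
# hidden-state space; a least-squares probe should then recover it.
import numpy as np

rng = np.random.default_rng(0)

d = 64   # hidden-state dimensionality (assumed)
n = 200  # number of entity mentions (assumed)

direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)

years = rng.integers(1800, 2000, size=n).astype(float)
# Synthetic "activations": year encoded monotonically along one direction + noise.
states = np.outer((years - 1900.0) / 100.0, direction) + 0.01 * rng.normal(size=(n, d))

# Fit the probe by ordinary least squares, with a bias column appended.
X = np.hstack([states, np.ones((n, 1))])
w, *_ = np.linalg.lstsq(X, years, rcond=None)

pred = X @ w
mae = np.abs(pred - years).mean()
print(f"mean absolute error: {mae:.2f} years")
```

If the probe recovers the attribute with low error, and predictions vary monotonically along the encoding direction, that is the kind of evidence these papers use to argue the property is linearly represented.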

What Do Language Models Hear? Probing for Auditory Representations in Language Models

J Ngo, Y Kim - arXiv preprint arXiv:2402.16998, 2024 - arxiv.org
This work explores whether language models encode meaningfully grounded
representations of sounds of objects. We learn a linear probe that retrieves the correct text …
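A linear retrieval probe of the kind this snippet mentions can be sketched as follows. The sketch assumes paired embeddings of the same objects from two spaces (say, a text model and an audio model), learns a linear map between them, and retrieves each object's counterpart by nearest neighbour. All embeddings and dimensions here are synthetic placeholders, not the paper's data or architecture.

```python
# Sketch of a linear retrieval probe across two embedding spaces
# (illustrative; all quantities are synthetic assumptions).
import numpy as np

rng = np.random.default_rng(1)
n_objects, d_audio, d_text = 50, 32, 48

# Assume a ground-truth linear relation between the spaces, plus noise.
true_map = rng.normal(size=(d_audio, d_text))
audio_emb = rng.normal(size=(n_objects, d_audio))
text_emb = audio_emb @ true_map + 0.05 * rng.normal(size=(n_objects, d_text))

# Learn the probe W on the paired embeddings via least squares,
# then retrieve by nearest neighbour in the text space.
W, *_ = np.linalg.lstsq(audio_emb, text_emb, rcond=None)
projected = audio_emb @ W

dists = np.linalg.norm(projected[:, None, :] - text_emb[None, :, :], axis=-1)
retrieved = dists.argmin(axis=1)
accuracy = (retrieved == np.arange(n_objects)).mean()
print(f"retrieval accuracy: {accuracy:.2f}")
```

High retrieval accuracy under such a probe is the usual operationalisation of "the two spaces are linearly aligned", which is the claim probing studies of this sort test.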

Monotonic Representation of Numeric Attributes in Language Models

B Heinzerling, K Inui - Proceedings of the 62nd Annual Meeting of …, 2024 - aclanthology.org
Language models (LMs) can express factual knowledge involving numeric
properties such as "Karl Popper was born in 1902". However, how this information is encoded …