Towards a holistic landscape of situated theory of mind in large language models

Z Ma, J Sansom, R Peng, J Chai - arXiv preprint arXiv:2310.19619, 2023 - arxiv.org
Large Language Models (LLMs) have generated considerable interest and debate
regarding their potential emergence of Theory of Mind (ToM). Several recent inquiries reveal …

Few-shot character understanding in movies as an assessment to meta-learning of theory-of-mind

M Yu, Q Wang, S Zhang, Y Sang, K Pu, Z Wei… - arXiv preprint arXiv …, 2022 - arxiv.org
When reading a story, humans can quickly understand new fictional characters with a few
observations, mainly by drawing analogies to fictional and real people they already know …

ToMChallenges: A principle-guided dataset and diverse evaluation tasks for exploring theory of mind

X Ma, L Gao, Q Xu - arXiv preprint arXiv:2305.15068, 2023 - arxiv.org
Theory of Mind (ToM), the capacity to comprehend the mental states of distinct individuals, is
essential for numerous practical applications. With the development of large language …

MindDial: Enhancing conversational agents with theory-of-mind for common ground alignment and negotiation

S Qiu, M Liu, H Li, S Zhu, Z Zheng - … of the 25th Annual Meeting of …, 2024 - aclanthology.org
Humans talk in daily conversations while aligning and negotiating the expressed meanings
or common ground. Despite the impressive conversational abilities of the large generative …

Finding common ground: Annotating and predicting common ground in spoken conversations

M Markowska, M Taghizadeh, A Soubki… - arXiv preprint arXiv …, 2023 - arxiv.org
When we communicate with other humans, we do not simply generate a sequence of words.
Rather, we use our cognitive state (beliefs, desires, intentions) and our model of the …

Probing neural language models for understanding of words of estimative probability

D Sileo, MF Moens - arXiv preprint arXiv:2211.03358, 2022 - arxiv.org
Words of estimative probability (WEP) are expressions of a statement's plausibility (probably,
maybe, likely, doubt, unlikely, impossible...). Multiple surveys demonstrate the …

Knowledge representation and acquisition in the era of large language models: Reflections on learning to reason via PAC-Semantics

IG Mocanu, V Belle - Natural Language Processing Journal, 2023 - Elsevier
Human beings are known for their remarkable ability to comprehend, analyse, and interpret
common sense knowledge. This ability is critical for exhibiting intelligent behaviour, often …

ToM-LM: Delegating theory of mind reasoning to external symbolic executors in large language models

W Tang, V Belle - International Conference on Neural-Symbolic Learning …, 2024 - Springer
Theory of Mind (ToM) refers to the ability of individuals to attribute mental states to
others. While Large Language Models (LLMs) have shown some promise with ToM ability …

tasksource: A large collection of NLP tasks with a structured dataset preprocessing framework

D Sileo - Proceedings of the 2024 Joint International Conference …, 2024 - aclanthology.org
The HuggingFace Datasets Hub hosts thousands of datasets, offering exciting
opportunities for language model training and evaluation. However, datasets for a specific …

Assessing the Reasoning Abilities of ChatGPT in the Context of Claim Verification

J Dougrez-Lewis, ME Akhter, Y He… - arXiv preprint arXiv …, 2024 - arxiv.org
The reasoning capabilities of LLMs are currently hotly debated. We examine the issue from
the perspective of claim/rumour verification. We propose the first logical reasoning …