Dissociating language and thought in large language models

K Mahowald, AA Ivanova, IA Blank, N Kanwisher… - Trends in Cognitive Sciences, 2024 - cell.com
Large language models (LLMs) have come closest among all models to date to mastering
human language, yet opinions about their linguistic and cognitive capabilities remain split …

A review on language models as knowledge bases

B AlKhamissi, M Li, A Celikyilmaz, M Diab… - arXiv preprint arXiv …, 2022 - arxiv.org
Recently, there has been a surge of interest in the NLP community in using pretrained
Language Models (LMs) as Knowledge Bases (KBs). Researchers have shown that LMs …

Faith and fate: Limits of transformers on compositionality

N Dziri, X Lu, M Sclar, XL Li, L Jiang… - Advances in Neural Information Processing Systems, 2024 - proceedings.neurips.cc
Transformer large language models (LLMs) have sparked admiration for their exceptional
performance on tasks that demand intricate multi-step reasoning. Yet, these models …

Towards reasoning in large language models: A survey

J Huang, KCC Chang - arXiv preprint arXiv:2212.10403, 2022 - arxiv.org
Reasoning is a fundamental aspect of human intelligence that plays a crucial role in
activities such as problem solving, decision making, and critical thinking. In recent years …

Reasoning like program executors

X Pi, Q Liu, B Chen, M Ziyadi, Z Lin, Q Fu, Y Gao… - arXiv preprint arXiv …, 2022 - arxiv.org
Reasoning over natural language is a long-standing goal for the research community.
However, studies have shown that existing language models are inadequate at reasoning …

Weakly-supervised 3D spatial reasoning for text-based visual question answering

H Li, J Huang, P Jin, G Song, Q Wu… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Text-based Visual Question Answering (TextVQA) aims to produce correct answers to
questions about images that contain multiple scene texts. In most cases, the texts naturally …

LEGO-Prover: Neural theorem proving with growing libraries

H Wang, H Xin, C Zheng, L Li, Z Liu, Q Cao… - arXiv preprint arXiv …, 2023 - arxiv.org
Despite the success of large language models (LLMs), theorem proving remains one of
the hardest reasoning tasks and is still far from fully solved. Prior methods …

Understanding natural language understanding systems

A Lenci - Sistemi intelligenti, 2023 - rivisteweb.it
The development of machines that “talk like us”, also known as Natural Language
Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI), since language …

Do PLMs know and understand ontological knowledge?

W Wu, C Jiang, Y Jiang, P Xie, K Tu - arXiv preprint arXiv:2309.05936, 2023 - arxiv.org
Ontological knowledge, which comprises classes, properties, and their relationships, is
integral to world knowledge. It is important to explore whether Pretrained Language Models …

Improved logical reasoning of language models via differentiable symbolic programming

H Zhang, J Huang, Z Li, M Naik, E Xing - arXiv preprint arXiv:2305.03742, 2023 - arxiv.org
Pre-trained large language models (LMs) struggle to perform logical reasoning reliably
despite advances in scale and compositionality. In this work, we tackle this challenge …