Dissociating language and thought in large language models
Large language models (LLMs) have come closest among all models to date to mastering
human language, yet opinions about their linguistic and cognitive capabilities remain split …
A review on language models as knowledge bases
Recently, there has been a surge of interest in the NLP community on the use of pretrained
Language Models (LMs) as Knowledge Bases (KBs). Researchers have shown that LMs …
Faith and fate: Limits of transformers on compositionality
Transformer large language models (LLMs) have sparked admiration for their exceptional
performance on tasks that demand intricate multi-step reasoning. Yet, these models …
Towards reasoning in large language models: A survey
Reasoning is a fundamental aspect of human intelligence that plays a crucial role in
activities such as problem solving, decision making, and critical thinking. In recent years …
Reasoning like program executors
Reasoning over natural language is a long-standing goal for the research community.
However, studies have shown that existing language models are inadequate in reasoning …
Weakly-supervised 3D spatial reasoning for text-based visual question answering
Text-based Visual Question Answering (TextVQA) aims to produce correct answers for given
questions about the images with multiple scene texts. In most cases, the texts naturally …
LEGO-Prover: Neural theorem proving with growing libraries
Despite the success of large language models (LLMs), the task of theorem proving still
remains one of the hardest reasoning tasks that is far from being fully solved. Prior methods …
Understanding natural language understanding systems
A Lenci - Sistemi intelligenti, 2023 - rivisteweb.it
The development of machines that “talk like us”, also known as Natural Language
Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI), since language …
Do PLMs know and understand ontological knowledge?
Ontological knowledge, which comprises classes and properties and their relationships, is
integral to world knowledge. It is significant to explore whether Pretrained Language Models …
Improved logical reasoning of language models via differentiable symbolic programming
Pre-trained large language models (LMs) struggle to perform logical reasoning reliably
despite advances in scale and compositionality. In this work, we tackle this challenge …