Reasoning with language model prompting: A survey
Reasoning, as an essential ability for complex problem-solving, can provide back-end
support for various real-world applications, such as medical diagnosis, negotiation, etc. This …
Thought propagation: An analogical approach to complex reasoning with large language models
Large Language Models (LLMs) have achieved remarkable success in reasoning tasks with
the development of prompting methods. However, existing prompting approaches cannot …
Say what you mean! Large language models speak too positively about negative commonsense knowledge
Large language models (LLMs) have been widely studied for their ability to store and utilize
positive knowledge. However, negative knowledge, such as "lions don't live in the ocean", is …
Open-Ethical AI: Advancements in Open-Source Human-Centric Neural Language Models
This survey summarises the most recent methods for building and assessing helpful, honest,
and harmless neural language models, considering small, medium, and large-size models …
In-context analogical reasoning with pre-trained language models
Analogical reasoning is a fundamental capacity of human cognition that allows us to reason
abstractly about novel situations by relating them to past experiences. While it is thought to …
BertNet: Harvesting knowledge graphs with arbitrary relations from pretrained language models
It is crucial to automatically construct knowledge graphs (KGs) of diverse new relations to
support knowledge discovery and broad applications. Previous KG construction methods …
AnalogyKB: Unlocking analogical reasoning of language models with a million-scale knowledge base
Analogical reasoning is a fundamental cognitive ability of humans. However, current
language models (LMs) still struggle to achieve human-like performance in analogical …
This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish
L Augustyniak, K Tagowski… - Advances in …, 2022 - proceedings.neurips.cc
The availability of compute and data to train larger and larger language models increases
the demand for robust methods of benchmarking the true progress of LM training. Recent …
Harnessing Knowledge and Reasoning for Human-Like Natural Language Generation: A Brief Review
The rapid development and application of natural language generation (NLG) techniques
has revolutionized the field of automatic text production. However, these techniques are still …
Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
We introduce Buffer of Thoughts (BoT), a novel and versatile thought-augmented reasoning
approach for enhancing accuracy, efficiency and robustness of large language models …