A closer look at the self-verification abilities of large language models in logical reasoning
Logical reasoning has been an ongoing pursuit in the field of AI. Despite significant
advancements made by large language models (LLMs), they still struggle with complex …
CLongEval: A Chinese benchmark for evaluating long-context large language models
Developing Large Language Models (LLMs) with robust long-context capabilities has been
a recent research focus, resulting in the emergence of long-context LLMs proficient in …
IDOL: indicator-oriented logic pre-training for logical reasoning
In the field of machine reading comprehension (MRC), existing systems have surpassed the
average performance of human beings in many tasks like SQuAD. However, there is still a …
Harnessing Knowledge and Reasoning for Human-Like Natural Language Generation: A Brief Review
The rapid development and application of natural language generation (NLG) techniques
have revolutionized the field of automatic text production. However, these techniques are still …
DetermLR: Augmenting LLM-based logical reasoning from indeterminacy to determinacy
Recent advances in large language models (LLMs) have revolutionized the landscape of
reasoning tasks. To enhance the capabilities of LLMs to emulate human reasoning, prior …
Unifying structure reasoning and language model pre-training for complex reasoning
Recent pre-trained language models (PLMs) equipped with foundation reasoning skills
have shown remarkable performance on downstream complex tasks. However, the …
LogiTorch: A PyTorch-based library for logical reasoning on natural language
Logical reasoning on natural language is one of the most challenging tasks for deep
learning models. There has been an increasing interest in developing new benchmarks to …
DaGATN: A Type of Machine Reading Comprehension Based on Discourse-Apperceptive Graph Attention Networks
M Wu, T Sun, Z Wang, J Duan - Applied Sciences, 2023 - mdpi.com
In recent years, with the advancement of natural language processing techniques and the
release of models like ChatGPT, how language models understand questions has become a …
Disentangling reasoning capabilities from language models with compositional reasoning transformers
This paper presents ReasonFormer, a unified reasoning framework for mirroring the
modular and compositional reasoning process of humans in complex decision-making …
Unifying Structure Reasoning and Language Pre-Training for Complex Reasoning Tasks
Recent pre-trained language models (PLMs) equipped with foundation reasoning skills
have shown remarkable performance on downstream complex tasks. However, the …