Natural language reasoning, a survey
This survey paper proposes a clearer view of natural language reasoning in the field of
Natural Language Processing (NLP), both conceptually and practically. Conceptually, we …
Augmented language models: a survey
This survey reviews works in which language models (LMs) are augmented with reasoning
skills and the ability to use tools. The former is defined as decomposing a potentially …
Measuring and narrowing the compositionality gap in language models
We investigate the ability of language models to perform compositional reasoning tasks
where the overall solution depends on correctly composing the answers to sub-problems …
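The composing of sub-answers that this paper studies is often operationalized as a self-ask-style loop: the model states a follow-up sub-question, answers it, and repeats until no further sub-questions are needed. A minimal sketch, assuming a hypothetical `llm` callable (prompt string in, completion string out) and illustrative prompt wording that is not the paper's exact template:

```python
def self_ask(question, llm, max_steps=5):
    """Compose sub-answers into a final answer, self-ask style.

    `llm` is an assumed callable mapping a prompt to a completion;
    the prompt phrasing here is illustrative only.
    """
    context = f"Question: {question}\n"
    for _ in range(max_steps):
        # Ask whether another sub-question is needed; "No" ends decomposition.
        followup = llm(context + "Are follow up questions needed? If so, state one.")
        if followup.strip().lower().startswith("no"):
            break
        # Answer the sub-question and fold it into the running context.
        answer = llm(f"Answer concisely: {followup}")
        context += f"Follow up: {followup}\nIntermediate answer: {answer}\n"
    # Final answer is conditioned on all intermediate answers.
    return llm(context + "So the final answer is:")
```

In real use `llm` would wrap a model API call; the loop structure is what matters here, since the compositionality gap is precisely the failure to carry sub-answers through to the final composition.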
ToolQA: A dataset for LLM question answering with external tools
Abstract Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination and weak …
“What it wants me to say”: Bridging the abstraction gap between end-user programmers and code-generating large language models
Code-generating large language models map natural language to code. However, only a
small portion of the infinite space of naturalistic utterances is effective at guiding code …
Take a step back: Evoking reasoning via abstraction in large language models
We present Step-Back Prompting, a simple prompting technique that enables LLMs to do
abstractions to derive high-level concepts and first principles from instances containing …
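The two-stage flow the abstract describes (abstract first, then reason from the derived principle) can be sketched as below; the `llm` callable and both prompt templates are assumptions for illustration, not the paper's exact wording:

```python
def step_back_answer(question, llm):
    """Step-back-style prompting sketch: abstract, then answer.

    `llm` is an assumed callable mapping a prompt string to a
    completion string.
    """
    # Stage 1: step back from the specific instance to a general principle.
    principle = llm(
        "What general concept or principle is relevant to answering "
        f"the following question?\n{question}"
    )
    # Stage 2: answer the original question grounded in that principle.
    return llm(
        f"Principle: {principle}\n"
        f"Using this principle, answer: {question}"
    )
```

The design point is that the second prompt sees only the distilled principle plus the question, steering the model to reason from first principles rather than surface details.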
Self-discover: Large language models self-compose reasoning structures
We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the task-
intrinsic reasoning structures to tackle complex reasoning problems that are challenging for …
Exploring question decomposition for zero-shot VQA
Visual question answering (VQA) has traditionally been treated as a single-step task where
each question receives the same amount of effort, unlike natural human question-answering …
An LLM compiler for parallel function calling
Large Language Models (LLMs) have shown remarkable results on various complex
reasoning benchmarks. The reasoning capabilities of LLMs enable them to execute function …
reasoning benchmarks. The reasoning capabilities of LLMs enable them to execute function …
Instruction tuned models are quick learners
Instruction tuning of language models has demonstrated the ability to enhance model
generalization to unseen tasks via in-context learning using a few examples. However …