A survey of deep learning for mathematical reasoning
Mathematical reasoning is a fundamental aspect of human intelligence and is applicable in
various fields, including science, engineering, finance, and everyday life. The development …
PaLM: Scaling language modeling with Pathways
Large language models have been shown to achieve remarkable performance across a
variety of natural language tasks using few-shot learning, which drastically reduces the …
Chain-of-thought prompting elicits reasoning in large language models
We explore how generating a chain of thought---a series of intermediate reasoning steps---
significantly improves the ability of large language models to perform complex reasoning. In …
WizardMath: Empowering mathematical reasoning for large language models via Reinforced Evol-Instruct
Large language models (LLMs), such as GPT-4, have shown remarkable performance in
natural language processing (NLP) tasks, including challenging mathematical reasoning …
Active prompting with chain-of-thought for large language models
The increasing scale of large language models (LLMs) brings emergent abilities to various
complex tasks requiring reasoning, such as arithmetic and commonsense reasoning. It is …
Datasets for large language models: A comprehensive survey
This paper explores Large Language Model (LLM) datasets,
which play a crucial role in the remarkable advancements of LLMs. The datasets serve as …
Automatic prompt augmentation and selection with chain-of-thought from labeled data
Chain-of-thought prompting (CoT) advances the reasoning abilities of large language
models (LLMs) and achieves superior performance in arithmetic, commonsense, and …
Learning to reason deductively: Math word problem solving as complex relation extraction
Solving math word problems requires deductive reasoning over the quantities in the text.
Various recent research efforts mostly relied on sequence-to-sequence or sequence-to-tree …
Solving math word problems via cooperative reasoning induced language models
Large-scale pre-trained language models (PLMs) bring new opportunities to challenging
problems, especially those that need high-level intelligence, such as the math word problem …
Let GPT be a math tutor: Teaching math word problem solvers with customized exercise generation
In this paper, we present a novel approach for distilling math word problem solving
capabilities from large language models (LLMs) into smaller, more efficient student models …