A survey of deep learning for mathematical reasoning
Mathematical reasoning is a fundamental aspect of human intelligence and is applicable in
various fields, including science, engineering, finance, and everyday life. The development …
A survey on text-to-SQL parsing: Concepts, methods, and future directions
Text-to-SQL parsing is an essential and challenging task. The goal of text-to-SQL parsing is
to convert a natural language (NL) question to its corresponding structured query language …
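To make the task concrete, here is a minimal sketch of an LLM-based text-to-SQL step; the `llm` stub, the schema, and the prompt format are illustrative assumptions, not a method from the survey.

```python
# Minimal text-to-SQL sketch. `llm` is a canned stand-in for any
# completion API; schema, question, and prompt format are illustrative.
SCHEMA = "CREATE TABLE employees(id INT, name TEXT, salary INT, dept TEXT);"

def llm(prompt: str) -> str:
    # Stub: a real system would call a code/text LLM here.
    return (" dept FROM employees GROUP BY dept "
            "ORDER BY AVG(salary) DESC LIMIT 1;")

def text_to_sql(question: str) -> str:
    prompt = (
        f"-- Schema:\n{SCHEMA}\n"
        f"-- Question: {question}\n"
        "-- A single SQL query that answers the question:\n"
        "SELECT"
    )
    return "SELECT" + llm(prompt)

print(text_to_sql("Which department has the highest average salary?"))
# SELECT dept FROM employees GROUP BY dept ORDER BY AVG(salary) DESC LIMIT 1;
```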
Chameleon: Plug-and-play compositional reasoning with large language models
Large language models (LLMs) have achieved remarkable progress in solving various
natural language processing tasks due to emergent reasoning abilities. However, LLMs …
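A hedged sketch of the plug-and-play idea, assuming a toy tool registry and a fixed plan in place of the LLM planner the abstract describes:

```python
# Compositional reasoning in the spirit of Chameleon: a planner picks a
# sequence of tools, which then run in order over a shared state.
# The module names and the trivial planner below are illustrative only.

def search(state):      # stand-in for a retrieval tool
    state["facts"] = f"facts about: {state['question']}"
    return state

def answerer(state):    # stand-in for a final LLM call
    state["answer"] = f"answer derived from {state.get('facts', '')}"
    return state

TOOLS = {"search": search, "answerer": answerer}

def plan(question: str) -> list[str]:
    # A real planner would be an LLM prompted with tool descriptions;
    # here we return a fixed plan for illustration.
    return ["search", "answerer"]

def run(question: str) -> str:
    state = {"question": question}
    for name in plan(question):
        state = TOOLS[name](state)
    return state["answer"]

print(run("Which dataset does the paper evaluate on?"))
```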
LEVER: Learning to verify language-to-code generation with execution
The advent of large language models trained on code (code LLMs) has led to significant
progress in language-to-code generation. State-of-the-art approaches in this area combine …
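A minimal sketch of execution-guided reranking in this spirit: sample candidate programs, execute them, and combine generation probability with a verifier score over the program and its result. The generator and verifier below are stubs, not LEVER's trained models.

```python
# Execution-guided reranking sketch. sample_programs and verifier_score
# stand in for a code LLM and a learned verifier, respectively.
import math

def sample_programs(question, k=3):
    # stand-in for k samples from a code LLM, with log-probabilities
    return [("1+1", -0.2), ("1*1", -1.0), ("1/0", -2.5)]

def execute(program):
    try:
        return repr(eval(program))   # real systems sandbox this step
    except Exception as e:
        return f"ERROR: {type(e).__name__}"

def verifier_score(question, program, result):
    # stand-in for a trained verifier; here: penalize runtime errors
    return 0.05 if result.startswith("ERROR") else 0.9

def rerank(question):
    scored = []
    for prog, logp in sample_programs(question):
        result = execute(prog)
        score = math.exp(logp) * verifier_score(question, prog, result)
        scored.append((score, prog, result))
    return max(scored)   # highest combined score wins

print(rerank("What is one plus one?"))
```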
Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning
Mathematical reasoning, a core ability of human intelligence, presents unique challenges for
machines in abstract thinking and logical reasoning. Recent large pre-trained language …
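The technique named in the title can be sketched as REINFORCE over which in-context examples to include in the prompt; everything below (the features, the reward stub, the hyperparameters) is a hypothetical toy, not the paper's setup.

```python
# Policy-gradient selection of in-context examples, a minimal REINFORCE
# sketch. The reward (did the LLM answer correctly with the chosen
# example?) is stubbed out; real systems would query an LLM here.
import numpy as np

rng = np.random.default_rng(0)
n_candidates, dim = 8, 4
feats = rng.normal(size=(n_candidates, dim))   # candidate-example features
w = np.zeros(dim)                              # policy parameters

def answer_is_correct(chosen: int) -> bool:
    return chosen in (2, 5)    # stub: pretend examples 2 and 5 help

for step in range(200):
    logits = feats @ w
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    a = rng.choice(n_candidates, p=probs)      # sample an example
    reward = 1.0 if answer_is_correct(a) else 0.0
    # REINFORCE: grad of log pi(a) is feats[a] - E_pi[feats]
    w += 0.1 * reward * (feats[a] - probs @ feats)

print(np.argsort(feats @ w)[-2:])              # top-2 examples the policy prefers
```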
LIFT: Language-interfaced fine-tuning for non-language machine learning tasks
Fine-tuning pretrained language models (LMs) without making any architectural changes
has become a norm for learning various language downstream tasks. However, for non …
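A sketch of the language-interfacing idea: serialize a non-language example into prompt/completion text so a pretrained LM can be fine-tuned unchanged. The feature names and template are assumptions for illustration.

```python
# Serialize a tabular classification example into text, so a standard
# LM fine-tuning pipeline can consume it with no architectural changes.
def serialize(row: dict, label: str) -> dict:
    prompt = (
        "Given sepal length {sepal_len}, sepal width {sepal_wid}, "
        "petal length {petal_len} and petal width {petal_wid}, "
        "the species is".format(**row)
    )
    return {"prompt": prompt, "completion": f" {label}"}

example = serialize(
    {"sepal_len": 5.1, "sepal_wid": 3.5, "petal_len": 1.4, "petal_wid": 0.2},
    label="setosa",
)
print(example["prompt"])      # "... the species is"
print(example["completion"])  # " setosa"
```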
Large language models are few(1)-shot table reasoners
W Chen - arXiv preprint arXiv:2210.06710, 2022 - arxiv.org
Recent literature has shown that large language models (LLMs) are generally excellent few-
shot reasoners to solve text reasoning tasks. However, the capability of LLMs on table …
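To illustrate table reasoning with a generic LLM, one common recipe is to serialize the table into the prompt; the pipe-delimited format and the stub below are illustrative, not the paper's exact protocol.

```python
# Few-shot table reasoning sketch: flatten the table to text and place
# it in the prompt with the question, so a text-only LLM can reason
# over it. The serialization format is one of several common choices.
def serialize_table(header, rows):
    lines = [" | ".join(header)]
    lines += [" | ".join(str(c) for c in row) for row in rows]
    return "\n".join(lines)

def table_qa_prompt(header, rows, question):
    return (
        "Read the table and answer the question.\n\n"
        f"{serialize_table(header, rows)}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(table_qa_prompt(
    ["Player", "Goals"],
    [["Ada", 12], ["Grace", 9]],
    "Who scored more goals?",
))
```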
To repeat or not to repeat: Insights from scaling LLM under token-crisis
Recent research has highlighted the importance of dataset size in scaling language models.
However, large language models (LLMs) are notoriously token-hungry during pre-training …
A survey on stance detection for mis- and disinformation identification
Understanding attitudes expressed in texts, also known as stance detection, plays an
important role in systems for detecting false information online, be it misinformation …
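For readers new to the task, a toy sketch of stance detection (claim plus text in, favor/against/neutral out); the keyword heuristic stands in for the trained models this survey covers.

```python
# Stance detection sketch: classify a text's stance toward a claim.
# The keyword heuristic is a toy stand-in for a learned classifier.
def stance(claim: str, text: str) -> str:
    t = text.lower()
    # check "against" cues first ("disagree" contains "agree")
    if any(w in t for w in ("disagree", "false", "hoax", "debunk")):
        return "against"
    if any(w in t for w in ("agree", "true", "support")):
        return "favor"
    return "neutral"

print(stance("Vaccines are safe",
             "Experts agree the evidence supports this."))  # -> favor
```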
Tool documentation enables zero-shot tool-usage with large language models
Today, large language models (LLMs) are taught to use new tools by providing a few
demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and …
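A minimal sketch of documentation-driven zero-shot tool use: the prompt carries each tool's documentation instead of usage demonstrations. The tool names, doc strings, and prompt wording are illustrative assumptions.

```python
# Zero-shot tool use from documentation alone. `TOOL_DOCS` and the
# prompt template are hypothetical; a real system would send the prompt
# to an LLM and parse the returned tool call.
TOOL_DOCS = {
    "weather(city)": "Returns the current weather for a city.",
    "calculator(expr)": "Evaluates an arithmetic expression.",
    "search(query)": "Searches the web and returns top snippets.",
}

def zero_shot_tool_prompt(task: str) -> str:
    docs = "\n".join(f"- {sig}: {doc}" for sig, doc in TOOL_DOCS.items())
    return (
        "You can call exactly one of these tools:\n"
        f"{docs}\n\n"
        f"Task: {task}\n"
        "Respond with a single tool call, e.g. tool_name(arguments).\n"
        "Call:"
    )

print(zero_shot_tool_prompt("What is 12 * 34?"))
# A capable LLM would be expected to reply: calculator("12 * 34")
```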