Do NLP models know numbers? Probing numeracy in embeddings

E Wallace, Y Wang, S Li, S Singh… - arXiv preprint arXiv …, 2019 - arxiv.org
The ability to understand and work with numbers (numeracy) is critical for many complex
reasoning tasks. Currently, most NLP models treat numbers in text in the same way as other …

Methods for numeracy-preserving word embeddings

D Sundararaman, S Si, V Subramanian… - Proceedings of the …, 2020 - aclanthology.org
Word embedding models are typically able to capture the semantics of words via the
distributional hypothesis, but fail to capture the numerical properties of numbers that appear …

Reasoning in large language models through symbolic math word problems

V Gaur, N Saunshi - arXiv preprint arXiv:2308.01906, 2023 - arxiv.org
Large language models (LLMs) have revolutionized NLP by solving downstream tasks with
little to no labeled data. Despite their versatile abilities, the larger question of their ability to …

Giving BERT a calculator: Finding operations and arguments with reading comprehension

D Andor, L He, K Lee, E Pitler - arXiv preprint arXiv:1909.00109, 2019 - arxiv.org
Reading comprehension models have been successfully applied to extractive text answers,
but it is unclear how best to generalize these models to abstractive numerical answers. We …

NumGLUE: A suite of fundamental yet challenging mathematical reasoning tasks

S Mishra, A Mitra, N Varshney, B Sachdeva… - arXiv preprint arXiv …, 2022 - arxiv.org
Given the ubiquitous nature of numbers in text, reasoning with numbers to perform simple
calculations is an important skill of AI systems. While many datasets and models have been …

Injecting numerical reasoning skills into language models

M Geva, A Gupta, J Berant - arXiv preprint arXiv:2004.04487, 2020 - arxiv.org
Large pre-trained language models (LMs) are known to encode substantial amounts of
linguistic information. However, high-level reasoning skills, such as numerical reasoning …

MathPrompter: Mathematical reasoning using large language models

S Imani, L Du, H Shrivastava - arXiv preprint arXiv:2303.05398, 2023 - arxiv.org
Large Language Models (LLMs) have limited performance when solving arithmetic
reasoning tasks and often provide incorrect answers. Unlike natural language …

HiTab: A hierarchical table dataset for question answering and natural language generation

Z Cheng, H Dong, Z Wang, R Jia, J Guo, Y Gao… - arXiv preprint arXiv …, 2021 - arxiv.org
Tables are often created with hierarchies, but existing works on table reasoning mainly focus
on flat tables and neglect hierarchical tables. Hierarchical tables challenge existing methods …

MWP-BERT: Numeracy-augmented pre-training for math word problem solving

Z Liang, J Zhang, L Wang, W Qin, Y Lan, J Shao… - arXiv preprint arXiv …, 2021 - arxiv.org
Math word problem (MWP) solving faces a dilemma in number representation learning. In
order to avoid the number representation issue and reduce the search space of feasible …

Can LLMs master math? Investigating large language models on Math Stack Exchange

A Satpute, N Gießing, A Greiner-Petter… - Proceedings of the 47th …, 2024 - dl.acm.org
Large Language Models (LLMs) have demonstrated exceptional capabilities in various
natural language tasks, often achieving performances that surpass those of humans …