PIQA: Reasoning about physical commonsense in natural language

Y Bisk, R Zellers, J Gao, Y Choi - Proceedings of the AAAI Conference on Artificial Intelligence, 2020 - ojs.aaai.org
To apply eyeshadow without a brush, should I use a cotton swab or a toothpick? Questions
requiring this kind of physical commonsense pose a challenge to today's natural language …

Experience grounds language

Y Bisk, A Holtzman, J Thomason, J Andreas, et al. - arXiv preprint, 2020 - arxiv.org
Language understanding research is held back by a failure to relate language to the
physical world it describes and to the social interactions it facilitates. Despite the incredible …

oLMpics: On what language model pre-training captures

A Talmor, Y Elazar, Y Goldberg, et al. - Transactions of the Association for Computational Linguistics, 2020 - direct.mit.edu
Recent success of pre-trained language models (LMs) has spurred widespread interest in
the language capabilities that they possess. However, efforts to understand whether LM …

Representing numbers in NLP: a survey and a vision

A Thawani, J Pujara, PA Szekely, F Ilievski - arXiv preprint, 2021 - arxiv.org
NLP systems rarely give special consideration to numbers found in text. This starkly
contrasts with the consensus in neuroscience that, in the brain, numbers are represented …

Birds have four legs?! NumerSense: Probing numerical commonsense knowledge of pre-trained language models

BY Lin, S Lee, R Khanna, X Ren - arXiv preprint arXiv:2005.00683, 2020 - arxiv.org
Recent works show that pre-trained language models (PTLMs), such as BERT, possess
certain commonsense and factual knowledge. They suggest that it is promising to use …

Things not written in text: Exploring spatial commonsense from visual signals

X Liu, D Yin, Y Feng, D Zhao - arXiv preprint arXiv:2203.08075, 2022 - arxiv.org
Spatial commonsense, the knowledge about spatial position and relationship between
objects (like the relative size of a lion and a girl, and the position of a boy relative to a bicycle …

Do language embeddings capture scales?

X Zhang, D Ramachandran, I Tenney, Y Elazar, et al. - arXiv preprint, 2020 - arxiv.org
Pretrained Language Models (LMs) have been shown to possess significant linguistic,
common sense, and factual knowledge. One form of knowledge that has not been studied …

Temporal common sense acquisition with minimal supervision

B Zhou, Q Ning, D Khashabi, D Roth - arXiv preprint arXiv:2005.04304, 2020 - arxiv.org
Temporal common sense (e.g., duration and frequency of events) is crucial for understanding
natural language. However, its acquisition is challenging, partly because such information is …

Commonsense reasoning for natural language processing

M Sap, V Shwartz, A Bosselut, Y Choi, et al. - Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, 2020 - aclanthology.org
Commonsense knowledge, such as knowing that “bumping into people annoys them” or
“rain makes the road slippery”, helps humans navigate everyday situations seamlessly. Yet …

TransOMCS: From linguistic graphs to commonsense knowledge

H Zhang, D Khashabi, Y Song, D Roth - arXiv preprint arXiv:2005.00206, 2020 - arxiv.org
Commonsense knowledge acquisition is a key problem for artificial intelligence.
Conventional methods of acquiring commonsense knowledge generally require laborious …