Symbols and mental programs: a hypothesis about human singularity

S Dehaene, F Al Roumi, Y Lakretz, S Planton… - Trends in Cognitive …, 2022 - cell.com
Natural language is often seen as the single factor that explains the cognitive singularity of
the human species. Instead, we propose that humans possess multiple internal languages …

Symbols and grounding in large language models

E Pavlick - … Transactions of the Royal Society A, 2023 - royalsocietypublishing.org
Large language models (LLMs) are one of the most impressive achievements of artificial
intelligence in recent years. However, their relevance to the study of language more broadly …

Code as policies: Language model programs for embodied control

J Liang, W Huang, F Xia, P Xu… - … on Robotics and …, 2023 - ieeexplore.ieee.org
Large language models (LLMs) trained on code-completion have been shown to be capable
of synthesizing simple Python programs from docstrings [1]. We find that these code-writing …
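
To illustrate the docstring-to-program capability the snippet describes, here is a minimal, hypothetical sketch (the prompt docstring and the completed function are illustrative assumptions, not an example taken from the paper): given only the docstring as a prompt, a code-completion LLM can often produce a body like the one below.

# Hypothetical example of docstring-to-program synthesis.
# Prompt given to the model: the signature and docstring only.
def running_average(values):
    """Return a list where the i-th element is the mean of values[0..i]."""
    # A plausible model completion follows.
    averages = []
    total = 0.0
    for i, v in enumerate(values):
        total += v
        averages.append(total / (i + 1))
    return averages

# Usage: running_average([1, 2, 3]) -> [1.0, 1.5, 2.0]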

Beyond the imitation game: Quantifying and extrapolating the capabilities of language models

A Srivastava, A Rastogi, A Rao, AAM Shoeb… - arXiv preprint arXiv …, 2022 - arxiv.org
Language models demonstrate both quantitative improvement and new qualitative
capabilities with increasing scale. Despite their potentially transformative impact, these new …

Language to rewards for robotic skill synthesis

W Yu, N Gileadi, C Fu, S Kirmani, KH Lee… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) have demonstrated exciting progress in acquiring diverse
new capabilities through in-context learning, ranging from logical reasoning to code-writing …

Program synthesis with large language models

J Austin, A Odena, M Nye, M Bosma… - arXiv preprint arXiv …, 2021 - arxiv.org
This paper explores the limits of the current generation of large language models for
program synthesis in general purpose programming languages. We evaluate a collection of …

Simulation intelligence: Towards a new generation of scientific methods

A Lavin, D Krakauer, H Zenil, J Gottschlich… - arXiv preprint arXiv …, 2021 - arxiv.org
The original" Seven Motifs" set forth a roadmap of essential methods for the field of scientific
computing, where a motif is an algorithmic method that captures a pattern of computation …

Abstraction and analogy‐making in artificial intelligence

M Mitchell - Annals of the New York Academy of Sciences, 2021 - Wiley Online Library
Conceptual abstraction and analogy‐making are key abilities underlying humans' capacity to
learn, reason, and robustly adapt their knowledge to new domains. Despite a long history of …

From word models to world models: Translating from natural language to the probabilistic language of thought

L Wong, G Grand, AK Lew, ND Goodman… - arXiv preprint arXiv …, 2023 - arxiv.org
How does language inform our downstream thinking? In particular, how do humans make
meaning from language, and how can we leverage a theory of linguistic meaning to build …

Neurosymbolic programming

S Chaudhuri, K Ellis, O Polozov, R Singh… - … and Trends® in …, 2021 - nowpublishers.com
We survey recent work on neurosymbolic programming, an emerging area that bridges
areas of deep learning and program synthesis. Like in classic machine learning, the goal …