Neurosymbolic programming
We survey recent work on neurosymbolic programming, an emerging area that bridges the
areas of deep learning and program synthesis. Like in classic machine learning, the goal …
Augmented language models: a survey
This survey reviews works in which language models (LMs) are augmented with reasoning
skills and the ability to use tools. The former is defined as decomposing a potentially …
Towards reasoning in large language models: A survey
Reasoning is a fundamental aspect of human intelligence that plays a crucial role in
activities such as problem solving, decision making, and critical thinking. In recent years …
Human-like systematic generalization through a meta-learning neural network
The power of human language and thought arises from systematic compositionality—the
algebraic ability to understand and produce novel combinations from known components …
Least-to-most prompting enables complex reasoning in large language models
Chain-of-thought prompting has demonstrated remarkable performance on various natural
language reasoning tasks. However, it tends to perform poorly on tasks that require …
Socratic models: Composing zero-shot multimodal reasoning with language
Large pretrained (e.g., "foundation") models exhibit distinct capabilities depending on the
domain of data they are trained on. While these domains are generic, they may only barely …
Composer: Creative and controllable image synthesis with composable conditions
Recent large-scale generative models learned on big data are capable of synthesizing
incredible images yet suffer from limited controllability. This work offers a new generation …
Measuring and narrowing the compositionality gap in language models
We investigate the ability of language models to perform compositional reasoning tasks
where the overall solution depends on correctly composing the answers to sub-problems …
A survey of zero-shot generalisation in deep reinforcement learning
The study of zero-shot generalisation (ZSG) in deep Reinforcement Learning (RL) aims to
produce RL algorithms whose policies generalise well to novel unseen situations at …