Learning neuro-symbolic skills for bilevel planning

T Silver, A Athalye, JB Tenenbaum… - arXiv preprint arXiv …, 2022 - arxiv.org
Decision-making is challenging in robotics environments with continuous object-centric
states, continuous actions, long horizons, and sparse feedback. Hierarchical approaches …

Learning neuro-symbolic relational transition models for bilevel planning

R Chitnis, T Silver, JB Tenenbaum… - 2022 IEEE/RSJ …, 2022 - ieeexplore.ieee.org
In robotic domains, learning and planning are complicated by continuous state spaces,
continuous action spaces, and long task horizons. In this work, we address these challenges …

Structure in reinforcement learning: A survey and open problems

A Mohan, A Zhang, M Lindauer - arXiv preprint arXiv:2306.16021, 2023 - academia.edu
Reinforcement Learning (RL), bolstered by the expressive capabilities of Deep Neural
Networks (DNNs) for function approximation, has demonstrated considerable success in …

Leveraging approximate symbolic models for reinforcement learning via skill diversity

L Guan, S Sreedharan… - … Conference on Machine …, 2022 - proceedings.mlr.press
Creating reinforcement learning (RL) agents that are capable of accepting and leveraging
task-specific knowledge from humans has been long identified as a possible strategy for …

Causality-driven hierarchical structure discovery for reinforcement learning

X Hu, R Zhang, K Tang, J Guo, Q Yi… - Advances in …, 2022 - proceedings.neurips.cc
Hierarchical reinforcement learning (HRL) has been proven to be effective for tasks with
sparse rewards, for it can improve the agent's exploration efficiency by discovering high …

Relational abstractions for generalized reinforcement learning on symbolic problems

R Karia, S Srivastava - arXiv preprint arXiv:2204.12665, 2022 - arxiv.org
Reinforcement learning in problems with symbolic state spaces is challenging due to the
need for reasoning over long horizons. This paper presents a new approach that utilizes …

Rapid-learn: A framework for learning to recover for handling novelties in open-world environments

S Goel, Y Shukla, V Sarathy, M Scheutz… - … on Development and …, 2022 - ieeexplore.ieee.org
We propose RAPid-Learn (Learning to Recover and Plan Again), a hybrid planning and
learning method, to tackle the problem of adapting to sudden and unexpected changes in an …

Hierarchical planning and learning for robots in stochastic settings using zero-shot option invention

N Shah, S Srivastava - Proceedings of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
This paper addresses the problem of inventing and using hierarchical representations for
stochastic robot-planning problems. Rather than using hand-coded state or action …

Learning temporally extended skills in continuous domains as symbolic actions for planning

J Achterhold, M Krimmel… - Conference on Robot …, 2023 - proceedings.mlr.press
Problems which require both long-horizon planning and continuous control capabilities
pose significant challenges to existing reinforcement learning agents. In this paper we …

A neurosymbolic cognitive architecture framework for handling novelties in open worlds

S Goel, P Lymperopoulos, R Thielstrom, E Krause… - Artificial Intelligence, 2024 - Elsevier
“Open world” environments are those in which novel objects, agents, events, and
more can appear and contradict previous understandings of the environment. This runs …