Learning neuro-symbolic skills for bilevel planning
T Silver, A Athalye, JB Tenenbaum… - arXiv preprint arXiv …, 2022 - arxiv.org
Decision-making is challenging in robotics environments with continuous object-centric
states, continuous actions, long horizons, and sparse feedback. Hierarchical approaches …
Learning neuro-symbolic relational transition models for bilevel planning
In robotic domains, learning and planning are complicated by continuous state spaces,
continuous action spaces, and long task horizons. In this work, we address these challenges …
Structure in reinforcement learning: A survey and open problems
Reinforcement Learning (RL), bolstered by the expressive capabilities of Deep Neural
Networks (DNNs) for function approximation, has demonstrated considerable success in …
Leveraging approximate symbolic models for reinforcement learning via skill diversity
L Guan, S Sreedharan… - … Conference on Machine …, 2022 - proceedings.mlr.press
Creating reinforcement learning (RL) agents that are capable of accepting and leveraging
task-specific knowledge from humans has been long identified as a possible strategy for …
Causality-driven hierarchical structure discovery for reinforcement learning
Hierarchical reinforcement learning (HRL) has been proven to be effective for tasks with
sparse rewards, for it can improve the agent's exploration efficiency by discovering high …
Relational abstractions for generalized reinforcement learning on symbolic problems
R Karia, S Srivastava - arXiv preprint arXiv:2204.12665, 2022 - arxiv.org
Reinforcement learning in problems with symbolic state spaces is challenging due to the
need for reasoning over long horizons. This paper presents a new approach that utilizes …
Rapid-learn: A framework for learning to recover for handling novelties in open-world environments
We propose RAPid-Learn (Learning to Recover and Plan Again), a hybrid planning and
learning method, to tackle the problem of adapting to sudden and unexpected changes in an …
Hierarchical planning and learning for robots in stochastic settings using zero-shot option invention
N Shah, S Srivastava - Proceedings of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
This paper addresses the problem of inventing and using hierarchical representations for
stochastic robot-planning problems. Rather than using hand-coded state or action …
Learning temporally extended skills in continuous domains as symbolic actions for planning
J Achterhold, M Krimmel… - Conference on Robot …, 2023 - proceedings.mlr.press
Problems which require both long-horizon planning and continuous control capabilities
pose significant challenges to existing reinforcement learning agents. In this paper we …
A neurosymbolic cognitive architecture framework for handling novelties in open worlds
“Open world” environments are those in which novel objects, agents, events, and
more can appear and contradict previous understandings of the environment. This runs …