Exploration in deep reinforcement learning: A survey

P Ladosz, L Weng, M Kim, H Oh - Information Fusion, 2022 - Elsevier
This paper reviews exploration techniques in deep reinforcement learning. Exploration
techniques are of primary importance when solving sparse reward problems. In sparse …

Towards continual reinforcement learning: A review and perspectives

K Khetarpal, M Riemer, I Rish, D Precup - Journal of Artificial Intelligence …, 2022 - jair.org
In this article, we aim to provide a literature review of different formulations and approaches
to continual reinforcement learning (RL), also known as lifelong or non-stationary RL. We …

Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges

T Lesort, V Lomonaco, A Stoian, D Maltoni, D Filliat… - Information Fusion, 2020 - Elsevier
Continual learning (CL) is a particular machine learning paradigm where the data
distribution and learning objective change through time, or where all the training data and …

Intelligent problem-solving as integrated hierarchical reinforcement learning

M Eppe, C Gumbsch, M Kerzel, PDH Nguyen… - Nature Machine …, 2022 - nature.com
According to cognitive psychology and related disciplines, the development of complex
problem-solving behaviour in biological agents depends on hierarchical cognitive …

A goal-centric outlook on learning

G Molinaro, AGE Collins - Trends in Cognitive Sciences, 2023 - cell.com
Goals play a central role in human cognition. However, computational theories of learning
and decision-making often take goals as given. Here, we review key empirical findings …

BYOL-Explore: Exploration by bootstrapped prediction

Z Guo, S Thakoor, M Pîslar… - Advances in neural …, 2022 - proceedings.neurips.cc
We present BYOL-Explore, a conceptually simple yet general approach for curiosity-driven
exploration in visually complex environments. BYOL-Explore learns the world …

Planning with goal-conditioned policies

S Nasiriany, V Pong, S Lin… - Advances in neural …, 2019 - proceedings.neurips.cc
Planning methods can solve temporally extended sequential decision making problems by
composing simple behaviors. However, planning requires suitable abstractions for the states …

State entropy maximization with random encoders for efficient exploration

Y Seo, L Chen, J Shin, H Lee… - … on Machine Learning, 2021 - proceedings.mlr.press
Recent exploration methods have proven to be a recipe for improving sample-efficiency in
deep reinforcement learning (RL). However, efficient exploration in high-dimensional …

Semantic exploration from language abstractions and pretrained representations

A Tam, N Rabinowitz, A Lampinen… - Advances in neural …, 2022 - proceedings.neurips.cc
Effective exploration is a challenge in reinforcement learning (RL). Novelty-based
exploration methods can suffer in high-dimensional state spaces, such as continuous …

Automatic curriculum learning for deep RL: A short survey

R Portelas, C Colas, L Weng, K Hofmann… - arXiv preprint arXiv …, 2020 - arxiv.org
Automatic Curriculum Learning (ACL) has become a cornerstone of recent successes in
Deep Reinforcement Learning (DRL). These methods shape the learning trajectories of …