A definition of continual reinforcement learning

D Abel, A Barreto, B Van Roy… - Advances in …, 2024 - proceedings.neurips.cc
In a standard view of the reinforcement learning problem, an agent's goal is to efficiently
identify a policy that maximizes long-term reward. However, this perspective is based on a …
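For context, the "standard view" referenced in this snippet is usually formalized as finding a policy that maximizes expected discounted return in a Markov decision process; the display below is the common textbook form of that objective, not notation taken from the paper itself:

J(\pi) = \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t r_t\right], \qquad \pi^{*} \in \arg\max_\pi J(\pi), \qquad \gamma \in [0, 1).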

Disentangling the causes of plasticity loss in neural networks

C Lyle, Z Zheng, K Khetarpal, H van Hasselt… - arXiv preprint arXiv …, 2024 - arxiv.org
Underpinning the past decades of work on the design, initialization, and optimization of
neural networks is a seemingly innocuous assumption: that the network is trained on a …

Continual learning: Applications and the road forward

E Verwimp, S Ben-David, M Bethge, A Cossu… - arXiv preprint arXiv …, 2023 - arxiv.org
Continual learning is a sub-field of machine learning, which aims to allow machine learning
models to continuously learn on new data, by accumulating knowledge without forgetting …

Dynamically masked discriminator for GANs

W Zhang, H Liu, B Li, J Xie, Y Huang… - Advances in …, 2024 - proceedings.neurips.cc
Training Generative Adversarial Networks (GANs) remains a challenging problem.
The discriminator trains the generator by learning the distribution of real/generated data …
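As background for this entry, the generator/discriminator interplay described in the snippet refers to the standard GAN minimax objective; the form below is the usual textbook formulation, not the paper's dynamically masked variant:

\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))].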

Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks

H Lee, H Cho, H Kim, D Kim, D Min, J Choo… - arXiv preprint arXiv …, 2024 - arxiv.org
This study investigates the loss of generalization ability in neural networks, revisiting warm-
starting experiments from Ash & Adams. Our empirical analysis reveals that common …
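The warm-starting setup revisited in this work (from Ash & Adams) contrasts continuing to train from previously learned weights with re-initializing whenever new data arrives. A minimal Python sketch of the two protocols is below; make_model and train are hypothetical helpers, not functions from the paper:

    def warm_vs_cold(make_model, train, data_chunks):
        # Train on a growing dataset two ways: warm-started vs. re-initialized.
        # make_model() returns a fresh model; train(model, data) trains it and
        # returns it (both callables are hypothetical placeholders).
        warm, seen = make_model(), []
        for chunk in data_chunks:
            seen = seen + list(chunk)
            warm = train(warm, seen)          # warm start: keep previous weights
            cold = train(make_model(), seen)  # cold start: fresh initialization each round
        return warm, cold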

Weight Clipping for Deep Continual and Reinforcement Learning

M Elsayed, Q Lan, C Lyle, AR Mahmood - arXiv preprint arXiv:2407.01704, 2024 - arxiv.org
Many failures in deep continual and reinforcement learning are associated with increasing
magnitudes of the weights, making them hard to change and potentially causing overfitting …
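Because the snippet ties these failures to growing weight magnitudes, a generic illustration of bounding the weights after each optimizer step is sketched below in PyTorch; the clip threshold kappa is an assumed hyperparameter and the rule is a simplification of the general idea, not necessarily the paper's exact clipping scheme:

    import torch

    @torch.no_grad()
    def clip_weights(model, kappa=1.0):
        # Clamp every parameter to [-kappa, kappa]; kappa is an assumed
        # hyperparameter, not a value taken from the paper.
        for p in model.parameters():
            p.clamp_(-kappa, kappa)

    # Hypothetical usage inside a training loop:
    #   loss.backward(); optimizer.step(); clip_weights(model, kappa=2.0)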

Data-dependent and Oracle Bounds on Forgetting in Continual Learning

L Friedman, R Meir - arXiv preprint arXiv:2406.09370, 2024 - arxiv.org
In continual learning, knowledge must be preserved and re-used between tasks,
maintaining good transfer to future tasks and minimizing forgetting of previously learned …
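For this entry, a commonly used quantitative notion of forgetting (a standard definition from the continual-learning literature, stated here for orientation rather than as the exact quantity bounded in the paper) is, with a_{t,i} denoting accuracy on task i after training through task t and T the final task:

F_i^{(T)} = \max_{t \in \{i, \dots, T-1\}} a_{t,i} - a_{T,i}.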

Harnessing Discrete Representations for Continual Reinforcement Learning

EJ Meyer, A White, MC Machado - 2023 - openreview.net
Reinforcement learning (RL) agents make decisions using nothing but observations from the
environment, and consequently, heavily rely on the representations of those observations …

Three Dogmas of Reinforcement Learning

D Abel, MK Ho, A Harutyunyan - david-abel.github.io
Modern reinforcement learning has been conditioned by at least three dogmas. The first is
the environment spotlight, which refers to our tendency to focus on modeling environments …

Successive Refinement in Continual Learning: A Study on Spatial Representations

NC Volpi, H Charvin, D Polani - … Learning (IMOL 2023 …, 2023 - researchprofiles.herts.ac.uk
Humans' capacity for perpetual learning and adjustment in response to novel circumstances
throughout their lifespan is exceptional. This cognitive aptitude, known as Continual …