Understanding plasticity in neural networks

C Lyle, Z Zheng, E Nikishin, BA Pires… - International …, 2023 - proceedings.mlr.press
Plasticity, the ability of a neural network to quickly change its predictions in response to new
information, is essential for the adaptability and robustness of deep reinforcement learning …

Deep reinforcement learning with plasticity injection

E Nikishin, J Oh, G Ostrovski, C Lyle… - Advances in …, 2024 - proceedings.neurips.cc
A growing body of evidence suggests that neural networks employed in deep reinforcement
learning (RL) gradually lose their plasticity, the ability to learn from new data; however, the …
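
The injection mechanism itself is simple enough to sketch. Below is a minimal, hedged PyTorch illustration of the idea as described in the paper: freeze the current head, add a freshly initialised trainable head, and subtract a frozen copy of that new head so the agent's predictions are unchanged at the moment of injection. The class name, the choice to inject only at the head, and the assumption that the head is an MLP of Linear layers are illustrative assumptions, not the authors' code.

```python
import copy
import torch
import torch.nn as nn

class PlasticityInjectedHead(nn.Module):
    """Hedged sketch of plasticity injection: output is
    old_head(x) + new_head(x) - frozen_copy_of_new_head(x),
    so predictions are unchanged at injection time and only
    the new head receives gradients afterwards."""

    def __init__(self, old_head: nn.Module):
        super().__init__()
        self.old_head = old_head
        for p in self.old_head.parameters():           # theta: frozen from now on
            p.requires_grad_(False)
        self.new_head = copy.deepcopy(old_head)        # theta': trainable, re-initialised
        for m in self.new_head.modules():              # assumes an MLP of Linear layers
            if isinstance(m, nn.Linear):
                m.reset_parameters()
        self.new_head_frozen = copy.deepcopy(self.new_head)  # theta'': frozen copy at injection
        for p in self.new_head_frozen.parameters():
            p.requires_grad_(False)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # The last two terms cancel exactly at injection time.
        return (self.old_head(features)
                + self.new_head(features)
                - self.new_head_frozen(features))
```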

Disentangling the causes of plasticity loss in neural networks

C Lyle, Z Zheng, K Khetarpal, H van Hasselt… - arXiv preprint arXiv …, 2024 - arxiv.org
Underpinning the past decades of work on the design, initialization, and optimization of
neural networks is a seemingly innocuous assumption: that the network is trained on a …

Training larger networks for deep reinforcement learning

K Ota, DK Jha, A Kanezaki - arXiv preprint arXiv:2102.07920, 2021 - arxiv.org
The success of deep learning in the computer vision and natural language processing
communities can be attributed to the training of very deep neural networks with millions or …

Understanding and preventing capacity loss in reinforcement learning

C Lyle, M Rowland, W Dabney - arXiv preprint arXiv:2204.09560, 2022 - arxiv.org
The reinforcement learning (RL) problem is rife with sources of non-stationarity, making it a
notoriously difficult problem domain for the application of neural networks. We identify a …
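
Capacity loss is typically tracked with rank-style measures of the learned feature matrix. The sketch below computes one such proxy, an "effective rank" in the spirit of the srank-type measures used in this literature; the energy threshold, function name, and toy data are assumptions for illustration, not the paper's exact metric.

```python
import numpy as np

def feature_srank(features: np.ndarray, delta: float = 0.01) -> int:
    """Hedged sketch: smallest k such that the top-k singular values of a
    (batch x dim) feature matrix capture a (1 - delta) fraction of the total
    singular-value mass. A collapsing representation reports a small value."""
    s = np.linalg.svd(features, compute_uv=False)
    cumulative = np.cumsum(s) / np.sum(s)
    return int(np.searchsorted(cumulative, 1.0 - delta) + 1)

# Usage: features that collapse onto a few directions yield a low srank
# even though the nominal feature dimension is 64.
rng = np.random.default_rng(0)
feats = rng.normal(size=(256, 64)) @ np.diag(np.r_[np.ones(8), 1e-4 * np.ones(56)])
print(feature_srank(feats))
```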

A study on overfitting in deep reinforcement learning

C Zhang, O Vinyals, R Munos, S Bengio - arXiv preprint arXiv:1804.06893, 2018 - arxiv.org
Recent years have witnessed significant progress in deep Reinforcement Learning (RL).
Empowered with large scale neural networks, carefully designed architectures, novel …

The dormant neuron phenomenon in deep reinforcement learning

G Sokar, R Agarwal, PS Castro… - … Conference on Machine …, 2023 - proceedings.mlr.press
In this work we identify the dormant neuron phenomenon in deep reinforcement learning,
where an agent's network suffers from an increasing number of inactive neurons, thereby …
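
The paper quantifies this with a per-neuron activation score. Below is a minimal sketch of that score, assuming a batch of post-activation outputs for a single layer; the default threshold of zero counts only exactly-inactive ReLU units, and the fact that the paper also studies small positive thresholds is why the threshold is exposed as a parameter here.

```python
import numpy as np

def dormant_fraction(activations: np.ndarray, tau: float = 0.0) -> float:
    """Hedged sketch of the dormant-neuron score: a neuron is tau-dormant when
    its mean absolute activation over a batch, normalised by the layer-wide
    average, falls at or below tau.
    activations: (batch, num_neurons) post-activation outputs of one layer."""
    per_neuron = np.abs(activations).mean(axis=0)      # E_x |h_i(x)|
    scores = per_neuron / (per_neuron.mean() + 1e-8)   # normalise by layer average
    return float((scores <= tau).mean())               # fraction of dormant neurons
```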

D2rl: Deep dense architectures in reinforcement learning

S Sinha, H Bharadhwaj, A Srinivas, A Garg - arXiv preprint arXiv …, 2020 - arxiv.org
While improvements in deep learning architectures have played a crucial role in improving
the state of supervised and unsupervised learning in computer vision and natural language …
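
The architectural change D2RL proposes is easy to illustrate: the raw input is concatenated onto the input of every hidden layer of the MLP, so depth can be increased without the optimisation difficulties reported for plain deep MLPs in RL. A minimal PyTorch sketch, with layer widths and depth as placeholder assumptions (the paper applies the same pattern to state-action inputs for critics):

```python
import torch
import torch.nn as nn

class D2RLMLP(nn.Module):
    """Hedged sketch of a D2RL-style dense MLP: every hidden layer after the
    first sees [previous activations, raw input]."""

    def __init__(self, in_dim: int, hidden: int = 256, depth: int = 4, out_dim: int = 1):
        super().__init__()
        layers = [nn.Linear(in_dim, hidden)]
        layers += [nn.Linear(hidden + in_dim, hidden) for _ in range(depth - 1)]
        self.hidden_layers = nn.ModuleList(layers)
        self.out = nn.Linear(hidden, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.hidden_layers[0](x))
        for layer in self.hidden_layers[1:]:
            h = torch.relu(layer(torch.cat([h, x], dim=-1)))  # dense input connection
        return self.out(h)
```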

Deep reinforcement learning at the edge of the statistical precipice

R Agarwal, M Schwarzer, PS Castro… - Advances in neural …, 2021 - proceedings.neurips.cc
Deep reinforcement learning (RL) algorithms are predominantly evaluated by comparing
their relative performance on a large suite of tasks. Most published results on deep RL …
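
Among the paper's recommendations are aggregate metrics such as the interquartile mean (IQM) with bootstrap confidence intervals, implemented in the authors' rliable library. The sketch below computes IQM and a simplified (non-stratified) percentile bootstrap using only NumPy/SciPy; it approximates the recommended procedure rather than reproducing the library, and the toy score array is an assumption.

```python
import numpy as np
from scipy.stats import trim_mean

def iqm(scores: np.ndarray) -> float:
    """Interquartile mean: mean of the middle 50% of per-run, per-task scores."""
    return float(trim_mean(scores.reshape(-1), proportiontocut=0.25))

def bootstrap_ci(scores: np.ndarray, reps: int = 2000, seed: int = 0):
    """Hedged sketch of a percentile bootstrap over runs; the paper's
    stratified bootstrap resamples runs per task, which this simplifies."""
    rng = np.random.default_rng(seed)
    n_runs = scores.shape[0]
    stats = [iqm(scores[rng.integers(0, n_runs, n_runs)]) for _ in range(reps)]
    return np.percentile(stats, [2.5, 97.5])

# scores: (num_runs, num_tasks) normalised scores for one algorithm
scores = np.random.default_rng(1).uniform(0.0, 1.5, size=(10, 26))
print(iqm(scores), bootstrap_ci(scores))
```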

Plastic: Improving input and label plasticity for sample efficient reinforcement learning

H Lee, H Cho, H Kim, D Gwak, J Kim… - Advances in …, 2024 - proceedings.neurips.cc
In Reinforcement Learning (RL), enhancing sample efficiency is crucial, particularly
in scenarios where data acquisition is costly and risky. In principle, off-policy RL algorithms …