Biological underpinnings for lifelong learning machines

D Kudithipudi, M Aguilar-Simon, J Babb… - Nature Machine …, 2022 - nature.com
Biological organisms learn from interactions with their environment throughout their lifetime.
For artificial systems to successfully act and adapt in the real world, it is desirable to similarly …

Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges

T Lesort, V Lomonaco, A Stoian, D Maltoni, D Filliat… - Information fusion, 2020 - Elsevier
Continual learning (CL) is a particular machine learning paradigm where the data
distribution and learning objective change through time, or where all the training data and …

Brain-inspired replay for continual learning with artificial neural networks

GM Van de Ven, HT Siegelmann, AS Tolias - Nature communications, 2020 - nature.com
Artificial neural networks suffer from catastrophic forgetting. Unlike humans, when these
networks are trained on something new, they rapidly forget what was learned before. In the …

DualNet: Continual learning, fast and slow

Q Pham, C Liu, S Hoi - Advances in Neural Information …, 2021 - proceedings.neurips.cc
According to Complementary Learning Systems (CLS) theory (McClelland et al., 1995) in neuroscience, humans do effective continual learning …

Continual lifelong learning with neural networks: A review

GI Parisi, R Kemker, JL Part, C Kanan, S Wermter - Neural networks, 2019 - Elsevier
Humans and animals have the ability to continually acquire, fine-tune, and transfer
knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is …

REMIND your neural network to prevent catastrophic forgetting

TL Hayes, K Kafle, R Shrestha, M Acharya… - European conference on …, 2020 - Springer
People learn throughout life. However, incrementally updating conventional neural networks
leads to catastrophic forgetting. A common remedy is replay, which is inspired by how the …

Continuous learning in single-incremental-task scenarios

D Maltoni, V Lomonaco - Neural Networks, 2019 - Elsevier
It was recently shown that architectural, regularization and rehearsal strategies can be used
to train deep models sequentially on a number of disjoint tasks without forgetting previously …

Replay in deep learning: Current approaches and missing biological elements

TL Hayes, GP Krishnan, M Bazhenov… - Neural …, 2021 - ieeexplore.ieee.org
Replay is the reactivation of one or more neural patterns that are similar to the activation
patterns experienced during past waking experiences. Replay was first observed in …

Continual learning for recurrent neural networks: an empirical evaluation

A Cossu, A Carta, V Lomonaco, D Bacciu - Neural Networks, 2021 - Elsevier
Learning continuously during all model lifetime is fundamental to deploy machine learning
solutions robust to drifts in the data distribution. Advances in Continual Learning (CL) with …

Imbalanced continual learning with partitioning reservoir sampling

CD Kim, J Jeong, G Kim - Computer Vision–ECCV 2020: 16th European …, 2020 - Springer
Continual learning from a sequential stream of data is a crucial challenge for machine
learning research. Most studies have been conducted on this topic under the single-label …