Authors
Rajat Saxena, Justin L Shobe, Bruce L McNaughton
Publication date
2022/7/5
Journal
Proceedings of the National Academy of Sciences
Volume
119
Issue
27
Pages
e2115229119
Publisher
National Academy of Sciences
Description
Understanding how the brain learns throughout a lifetime remains a long-standing challenge. In artificial neural networks (ANNs), incorporating novel information too rapidly results in catastrophic interference, i.e., abrupt loss of previously acquired knowledge. Complementary Learning Systems Theory (CLST) suggests that new memories can be gradually integrated into the neocortex by interleaving new memories with existing knowledge. This approach, however, has been assumed to require interleaving all existing knowledge every time something new is learned, which is implausible because it is time-consuming and requires a large amount of data. We show that deep, nonlinear ANNs can learn new information by interleaving only a subset of old items that share substantial representational similarity with the new information. By using such similarity-weighted interleaved learning (SWIL), ANNs can learn new …
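The abstract describes similarity-weighted interleaved learning (SWIL): rather than replaying all old items when learning something new, old items are sampled in proportion to how similar their internal representations are to the new information. Below is a minimal illustrative sketch of that idea in Python/NumPy. It is not the authors' code; the toy feature extractor and the function names (hidden_representation, similarity_weights, swil_batch) are hypothetical, and the paper measures similarity on a trained deep network's representations rather than a random projection used here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_representation(x, W):
    """Stand-in feature extractor (one ReLU layer). In the paper,
    similarity is computed on a trained network's internal representations;
    the random weights W here are only a placeholder."""
    return np.maximum(0.0, x @ W)

def similarity_weights(new_items, old_items, W):
    """Weight each old item by its mean cosine similarity to the new items,
    then normalize into a replay probability distribution."""
    h_new = hidden_representation(new_items, W)
    h_old = hidden_representation(old_items, W)
    h_new /= np.linalg.norm(h_new, axis=1, keepdims=True) + 1e-12
    h_old /= np.linalg.norm(h_old, axis=1, keepdims=True) + 1e-12
    sims = h_old @ h_new.T                 # (n_old, n_new) cosine similarities
    weights = sims.mean(axis=1).clip(min=0.0)
    return weights / weights.sum()

def swil_batch(new_items, old_items, weights, n_old_per_batch, rng):
    """Build one interleaved training batch: all new items plus a
    similarity-weighted sample of old items (a subset, not all of them)."""
    idx = rng.choice(len(old_items), size=n_old_per_batch, replace=True, p=weights)
    return np.vstack([new_items, old_items[idx]])

# Toy demonstration with random data.
d_in, d_hidden = 16, 8
W = rng.standard_normal((d_in, d_hidden))
old_items = rng.standard_normal((100, d_in))
new_items = rng.standard_normal((5, d_in))

w = similarity_weights(new_items, old_items, W)
batch = swil_batch(new_items, old_items, w, n_old_per_batch=20, rng=rng)
print(batch.shape)  # (25, 16): 5 new items + 20 similarity-weighted old items
```

The key design point, as claimed in the abstract, is that replay cost scales with the size of the similar subset rather than with the whole of prior knowledge, which is what makes interleaving plausible at scale.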