Explainable neural networks that simulate reasoning

PJ Blazek, MM Lin - Nature Computational Science, 2021 - nature.com
Abstract
The success of deep neural networks suggests that cognition may emerge from indecipherable patterns of distributed neural activity. Yet these networks are pattern-matching black boxes that cannot simulate higher cognitive functions and lack numerous neurobiological features. Accordingly, they are currently insufficient computational models for understanding neural information processing. Here, we show how neural circuits can directly encode cognitive processes via simple neurobiological principles. To illustrate, we implemented this model in a non-gradient-based machine learning algorithm to train deep neural networks called essence neural networks (ENNs). Neural information processing in ENNs is intrinsically explainable, even on benchmark computer vision tasks. ENNs can also simulate higher cognitive functions such as deliberation, symbolic reasoning and out-of-distribution generalization. ENNs display network properties associated with the brain, such as modularity, distributed and localist firing, and adversarial robustness. ENNs establish a broad computational framework to decipher the neural basis of cognition and pursue artificial general intelligence.
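The abstract does not describe how ENNs are trained beyond noting that the algorithm is non-gradient-based. As a generic illustration of what training without backpropagation can look like, here is a minimal random-search (hill-climbing) sketch on a toy NumPy network. This is an assumption-laden stand-in, not the authors' ENN algorithm; the task, architecture, and perturbation scheme are all hypothetical.

```python
# Illustrative only: a generic non-gradient (random-search) training loop
# on a tiny NumPy network. This is NOT the ENN algorithm from the paper,
# which the abstract does not specify; it merely shows one way to fit
# weights without computing gradients or backpropagating.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def init_params():
    # One hidden layer of 4 tanh units, sigmoid output (arbitrary choices).
    return [rng.normal(size=(2, 4)), rng.normal(size=4),
            rng.normal(size=(4, 1)), rng.normal(size=1)]

def forward(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return out.ravel()

def loss(params):
    return np.mean((forward(params, X) - y) ** 2)  # mean squared error

params = init_params()
best = loss(params)
for step in range(20000):
    # Propose a small random perturbation of every parameter tensor,
    # and keep it only if it lowers the loss (greedy hill climbing).
    trial = [p + 0.1 * rng.normal(size=p.shape) for p in params]
    trial_loss = loss(trial)
    if trial_loss < best:
        params, best = trial, trial_loss

print("final loss:", best)
print("predictions:", np.round(forward(params, X), 2))
```

Greedy perturbation search like this scales poorly compared with gradient descent, which is partly why non-gradient schemes that still train deep networks effectively, as the paper claims for ENNs, are notable.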