Evolving artificial neural networks with feedback

S Herzog, C Tetzlaff, F Wörgötter - Neural Networks, 2020 - Elsevier
Neural networks in the brain are dominated by feedback connections, which sometimes make up more than 60% of all connections and most often have small synaptic weights. Different from this, little is known …

Designing neural networks through neuroevolution

KO Stanley, J Clune, J Lehman… - Nature Machine …, 2019 - nature.com
Much of recent machine learning has focused on deep learning, in which neural network
weights are trained through variants of stochastic gradient descent. An alternative approach …

Feedback alignment in deep convolutional networks

TH Moskovitz, A Litwin-Kumar, LF Abbott - arXiv preprint arXiv:1812.06488, 2018 - arxiv.org
Ongoing studies have identified similarities between neural representations in biological
networks and in deep artificial neural networks. This has led to renewed interest in …

Scaling equilibrium propagation to deep convnets by drastically reducing its gradient estimator bias

A Laborieux, M Ernoult, B Scellier, Y Bengio… - Frontiers in …, 2021 - frontiersin.org
Equilibrium Propagation is a biologically inspired algorithm that trains convergent recurrent
neural networks with a local learning rule. This approach constitutes a major lead to allow …

Attention-Gated Brain Propagation: How the brain can implement reward-based error backpropagation

I Pozzi, S Bohte, P Roelfsema - Advances in neural …, 2020 - proceedings.neurips.cc
Much recent work has focused on biologically plausible variants of supervised learning
algorithms. However, there is no teacher in the motor cortex that instructs the motor neurons …

Direct feedback alignment provides learning in deep neural networks

A Nøkland - Advances in neural information processing …, 2016 - proceedings.neurips.cc
Artificial neural networks are most commonly trained with the back-propagation algorithm,
where the gradient for learning is provided by back-propagating the error, layer by layer …
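The contrast with layer-by-layer back-propagation can be sketched in a few lines of NumPy: instead of sending the output error back through the transposed forward weights, direct feedback alignment projects it to the hidden layer through a fixed random matrix. The network sizes, toy regression task, and learning rate below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Minimal sketch of direct feedback alignment (DFA) on a toy
# regression task. All hyperparameters here are illustrative.
rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 16, 2
W1 = rng.normal(0, 0.5, (n_hid, n_in))   # input -> hidden weights
W2 = rng.normal(0, 0.5, (n_out, n_hid))  # hidden -> output weights
B = rng.normal(0, 0.5, (n_hid, n_out))   # fixed random feedback matrix

X = rng.normal(size=(64, n_in))
T = X @ rng.normal(size=(n_in, n_out))   # random linear target mapping

lr = 0.05
losses = []
for _ in range(200):
    H = np.tanh(X @ W1.T)                # hidden activations
    Y = H @ W2.T                         # linear output layer
    E = Y - T                            # output error
    losses.append(float(np.mean(E ** 2)))
    # DFA step: project the output error straight to the hidden layer
    # through the fixed matrix B, instead of through W2.T as in backprop.
    dH = (E @ B.T) * (1.0 - H ** 2)      # tanh' = 1 - tanh^2
    W2 -= lr * E.T @ H / len(X)
    W1 -= lr * dH.T @ X / len(X)
```

Even with a random, untrained feedback path, the forward weights tend to align with the feedback matrix over training, so the mean-squared error decreases rather than requiring exact gradient transport.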

Backpropagation and the brain

TP Lillicrap, A Santoro, L Marris, CJ Akerman… - Nature Reviews …, 2020 - nature.com
During learning, the brain modifies synapses to improve behaviour. In the cortex, synapses
are embedded within multilayered networks, making it difficult to determine the effect of an …

Deep learning with dynamic spiking neurons and fixed feedback weights

A Samadi, TP Lillicrap, DB Tweed - Neural computation, 2017 - ieeexplore.ieee.org
Recent work in computer science has shown the power of deep learning driven by the
backpropagation algorithm in networks of artificial neurons. But real neurons in the brain are …

A biologically plausible learning rule for deep learning in the brain

I Pozzi, S Bohté, P Roelfsema - arXiv preprint arXiv:1811.01768, 2018 - arxiv.org
Researchers have proposed that deep learning, which is providing important progress in a
wide range of high complexity tasks, might inspire new insights into learning in the brain …

Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping

R Caruana, S Lawrence… - Advances in neural …, 2000 - proceedings.neurips.cc
The conventional wisdom is that backprop nets with excess hidden units generalize poorly.
We show that nets with excess capacity generalize well when trained with backprop and …