Evolving artificial neural networks with feedback
S Herzog, C Tetzlaff, F Wörgötter - Neural Networks, 2020 - Elsevier
Neural networks in the brain are dominated by sometimes more than 60% feedback
connections, which most often have small synaptic weights. Different from this, little is known …
Designing neural networks through neuroevolution
Much of recent machine learning has focused on deep learning, in which neural network
weights are trained through variants of stochastic gradient descent. An alternative approach …
Feedback alignment in deep convolutional networks
Ongoing studies have identified similarities between neural representations in biological
networks and in deep artificial neural networks. This has led to renewed interest in …
Scaling equilibrium propagation to deep convnets by drastically reducing its gradient estimator bias
Equilibrium Propagation is a biologically-inspired algorithm that trains convergent recurrent
neural networks with a local learning rule. This approach constitutes a major lead to allow …
Attention-Gated Brain Propagation: How the brain can implement reward-based error backpropagation
Much recent work has focused on biologically plausible variants of supervised learning
algorithms. However, there is no teacher in the motor cortex that instructs the motor neurons …
Direct feedback alignment provides learning in deep neural networks
A Nøkland - Advances in neural information processing …, 2016 - proceedings.neurips.cc
Artificial neural networks are most commonly trained with the back-propagation algorithm,
where the gradient for learning is provided by back-propagating the error, layer by layer …
Backpropagation and the brain
During learning, the brain modifies synapses to improve behaviour. In the cortex, synapses
are embedded within multilayered networks, making it difficult to determine the effect of an …
Deep learning with dynamic spiking neurons and fixed feedback weights
A Samadi, TP Lillicrap, DB Tweed - Neural computation, 2017 - ieeexplore.ieee.org
Recent work in computer science has shown the power of deep learning driven by the
backpropagation algorithm in networks of artificial neurons. But real neurons in the brain are …
A biologically plausible learning rule for deep learning in the brain
Researchers have proposed that deep learning, which is providing important progress in a
wide range of high complexity tasks, might inspire new insights into learning in the brain …
Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping
R Caruana, S Lawrence… - Advances in neural information processing …, 2000 - proceedings.neurips.cc
The conventional wisdom is that backprop nets with excess hidden units generalize poorly.
We show that nets with excess capacity generalize well when trained with backprop and …