Meta-learning biologically plausible plasticity rules with random feedback pathways
N Shervani-Tabar, R Rosenbaum - Nature Communications, 2023 - nature.com
Backpropagation is widely used to train artificial neural networks, but its relationship to
synaptic plasticity in the brain is unknown. Some biological models of backpropagation rely …
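The "random feedback pathways" in this title usually refer to feedback alignment, where the backward pass routes errors through a fixed random matrix instead of the transpose of the forward weights; the paper meta-learns plasticity rules on top of such pathways. Below is a minimal sketch of only the feedback-alignment update, with dimensions, the tanh nonlinearity, and the squared-error loss as illustrative assumptions rather than the paper's setup.

```python
import numpy as np

# Hedged sketch: one hidden layer trained with feedback alignment.
# The backward pass uses a fixed random matrix B instead of W2.T.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))   # fixed random feedback pathway

def step(x, y, lr=0.01):
    h = np.tanh(W1 @ x)                  # forward pass
    y_hat = W2 @ h
    e = y_hat - y                        # output error (squared-error loss)
    delta_h = (B @ e) * (1.0 - h**2)     # error routed through B, not W2.T
    return -lr * np.outer(e, h), -lr * np.outer(delta_h, x)

dW2, dW1 = step(rng.normal(size=n_in), rng.normal(size=n_out))
W2 += dW2
W1 += dW1
```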
Goal-conditioned generators of deep policies
Goal-conditioned Reinforcement Learning (RL) aims at learning optimal policies,
given goals encoded in special command inputs. Here we study goal-conditioned neural …
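One reading of "generators of deep policies" is a hypernetwork-style setup: a small generator network maps a goal command (for example, a desired return) to the parameters of a policy network. The sketch below assumes that reading; the linear policy form, layer sizes, and the scalar return command are placeholders.

```python
import numpy as np

# Hedged sketch: a generator maps a goal command to the weights of a linear policy.
rng = np.random.default_rng(0)
obs_dim, act_dim, cmd_dim, hid = 4, 2, 1, 32

G1 = rng.normal(0, 0.1, (hid, cmd_dim))
G2 = rng.normal(0, 0.1, (act_dim * obs_dim, hid))

def generate_policy(command):
    """Map a goal command to flat policy parameters, then reshape."""
    h = np.tanh(G1 @ command)
    return (G2 @ h).reshape(act_dim, obs_dim)

def act(obs, command):
    W_pi = generate_policy(command)
    return np.tanh(W_pi @ obs)           # deterministic action

# Usage: condition the generated policy on a desired return of 200.
action = act(np.ones(obs_dim), np.array([200.0]))
```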
Adaptive convolutions with per-pixel dynamic filter atom
Applying feature-dependent network weights has proved effective in many
fields. However, in practice, restricted by the enormous size of model parameters and …
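The "filter atom" idea suggested by the title is to keep a small shared bank of basis filters and predict only per-pixel mixing coefficients, so the dynamic filters stay cheap. The following is a sketch under that assumption, with a single channel, a 3x3 kernel, and random coefficients standing in for a coefficient-predicting branch.

```python
import numpy as np

# Hedged sketch: each pixel's 3x3 filter is a linear combination of shared atoms.
rng = np.random.default_rng(0)
H, W, K, n_atoms = 8, 8, 3, 4

atoms = rng.normal(size=(n_atoms, K, K))       # shared filter atoms
coeffs = rng.normal(size=(H, W, n_atoms))      # per-pixel mixing coefficients
image = rng.normal(size=(H, W))
padded = np.pad(image, K // 2)

out = np.zeros((H, W))
for i in range(H):
    for j in range(W):
        filt = np.tensordot(coeffs[i, j], atoms, axes=1)   # assemble (K, K) filter
        patch = padded[i:i + K, j:j + K]
        out[i, j] = np.sum(filt * patch)
```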
Meta-learning deep energy-based memory models
S Bartunov, JW Rae, S Osindero… - arXiv preprint arXiv …, 2019 - arxiv.org
We study the problem of learning associative memory--a system which is able to retrieve a
remembered pattern based on its distorted or incomplete version. Attractor networks provide …
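The attractor-network baseline the snippet mentions can be illustrated with a classic Hopfield memory: patterns are stored as Hebbian outer products and retrieved from a corrupted cue by iterating toward a low-energy fixed point. This sketch shows only that conventional baseline, not the paper's meta-learned energy model; pattern count, size, and noise level are arbitrary.

```python
import numpy as np

# Hedged sketch: Hopfield-style associative memory with Hebbian storage.
rng = np.random.default_rng(0)
n = 64
patterns = rng.choice([-1.0, 1.0], size=(3, n))

W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

def energy(s):
    return -0.5 * s @ W @ s

def retrieve(cue, steps=20):
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)               # update toward an attractor
        s[s == 0] = 1.0
    return s

# Usage: corrupt a stored pattern, then recover it from the noisy cue.
noisy = patterns[0] * np.where(rng.random(n) < 0.1, -1.0, 1.0)
recalled = retrieve(noisy)
print(energy(noisy), energy(recalled))   # retrieval typically lowers the energy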
A meta-learning approach to (re) discover plasticity rules that carve a desired function into a neural network
The search for biologically faithful synaptic plasticity rules has resulted in a large body of
models. They are usually inspired by--and fitted to--experimental data, but they rarely …
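A common formulation behind titles like this one is to parameterize a local plasticity rule with a few coefficients and meta-optimize those coefficients so that repeatedly applying the rule drives the network toward a target function. The sketch below shows only the inner (plasticity) loop under that assumption; the rule form and coefficient values are placeholders, not the paper's rule.

```python
import numpy as np

# Hedged sketch: parameterized local plasticity rule
#   dW_ij = eta * (a*post_i*pre_j + b*pre_j + c*post_i + d)
# whose coefficients would be meta-learned in an outer loop.
rng = np.random.default_rng(0)
n_in, n_out = 5, 3
W = rng.normal(0, 0.1, (n_out, n_in))
theta = dict(a=1.0, b=0.0, c=0.0, d=0.0, eta=0.05)   # meta-parameters

def plasticity_step(W, x, theta):
    post = np.tanh(W @ x)
    dW = theta["eta"] * (
        theta["a"] * np.outer(post, x)
        + theta["b"] * x[None, :]
        + theta["c"] * post[:, None]
        + theta["d"]
    )
    return W + dW

# Inner loop: apply the rule; the outer (meta) loop would adjust theta so the
# resulting W implements the desired function.
for _ in range(10):
    W = plasticity_step(W, rng.normal(size=n_in), theta)
```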
Adaptive regularized warped gradient descent enhances model generalization and meta-learning for few-shot learning
S Rao, J Huang, Z Tang - Neurocomputing, 2023 - Elsevier
Warped Gradient Descent (WarpGrad) is a remarkable meta-learning method for
gradient transformation by inserting warp-layers. However, the task-shared initialization …
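WarpGrad's warp-layers effectively precondition the task gradient with meta-learned parameters shared across tasks. The sketch below collapses that idea into a single meta-learned linear preconditioner applied to a toy quadratic task, purely for illustration; it is not the actual warp-layer construction.

```python
import numpy as np

# Hedged sketch: warped (preconditioned) gradient descent on a toy task.
rng = np.random.default_rng(0)
dim = 6
theta = rng.normal(size=dim)                            # task parameters
P = np.eye(dim) + 0.1 * rng.normal(size=(dim, dim))     # meta-learned preconditioner

def task_grad(theta):
    # Placeholder quadratic task loss 0.5 * ||theta - target||^2.
    target = np.ones(dim)
    return theta - target

for _ in range(50):
    g = task_grad(theta)
    theta = theta - 0.1 * (P @ g)        # warped update in place of plain SGD
```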
Eliminating meta optimization through self-referential meta learning
L Kirsch, J Schmidhuber - arXiv preprint arXiv:2212.14392, 2022 - arxiv.org
Meta Learning automates the search for learning algorithms. At the same time, it creates a
dependency on human engineering on the meta-level, where meta learning algorithms …
Short-term plasticity neurons learning to learn and forget
HG Rodriguez, Q Guo… - … Conference on Machine …, 2022 - proceedings.mlr.press
Short-term plasticity (STP) is a mechanism that stores decaying memories in synapses of the
cerebral cortex. In computing practice, STP has been used, but mostly in the niche of spiking …
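One simple way to read "decaying memories in synapses" is a fast, activity-dependent trace added to a fixed slow weight: each synapse accumulates a Hebbian term and forgets it exponentially. The sketch below assumes that reading; the decay rate and learning rate are placeholders, and the paper's actual STP neuron model may differ.

```python
import numpy as np

# Hedged sketch: a neuron whose effective weight is a fixed slow weight plus a
# decaying short-term trace, updated with a Hebbian term each step.
rng = np.random.default_rng(0)
n_in = 6
w_slow = rng.normal(0, 0.5, n_in)
f = np.zeros(n_in)                        # short-term (fast) trace
lam, gamma = 0.9, 0.1                     # decay and update rates (placeholders)

def step(x):
    global f
    y = np.tanh((w_slow + f) @ x)         # effective weight = slow + fast
    f = lam * f + gamma * y * x           # Hebbian update with exponential decay
    return y

outputs = [step(rng.normal(size=n_in)) for _ in range(5)]
```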
Plastic gating network: Adapting to personal development and individual differences in knowledge tracing
Knowledge tracing (KT) refers to the task of predicting learners' knowledge states
based on their learning history and is the core technology for computer-assisted adaptive …
Evolvability ES: scalable and direct optimization of evolvability
A Gajewski, J Clune, KO Stanley… - Proceedings of the genetic …, 2019 - dl.acm.org
Designing evolutionary algorithms capable of uncovering highly evolvable representations
is an open challenge in evolutionary computation; such evolvability is important in practice …
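Evolvability ES treats evolvability itself as the objective: rather than maximizing expected fitness, it pushes the parent toward regions where perturbed offspring produce diverse behaviors. The sketch below scores each offspring by its squared deviation from the mean behavior and follows an ES-style gradient estimate of that spread; the 1-D behavior function and all hyperparameters are toy assumptions, not the paper's estimator.

```python
import numpy as np

# Hedged sketch: ES step that rewards the spread of offspring behaviors.
rng = np.random.default_rng(0)
dim, pop, sigma, lr = 10, 64, 0.1, 0.05
theta = np.zeros(dim)

def behavior(params):
    return np.sin(params).sum()           # placeholder behavior characterization

for _ in range(100):
    eps = rng.normal(size=(pop, dim))
    behs = np.array([behavior(theta + sigma * e) for e in eps])
    # Score each perturbation by its distance from the mean behavior; ascending
    # this estimate favors parents whose offspring behaviors spread out.
    scores = (behs - behs.mean()) ** 2
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)
    grad = (eps.T @ scores) / (pop * sigma)
    theta = theta + lr * grad
```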