Model metamers reveal divergent invariances between biological and artificial neural networks

J Feather, G Leclerc, A Mądry, JH McDermott - Nature Neuroscience, 2023 - nature.com
Deep neural network models of sensory systems are often proposed to learn
representational transformations with invariances like those in the brain. To reveal these …
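
The metamer procedure in this line of work is gradient-based: an input is optimized until its activations at a chosen model stage match those of a reference stimulus, while the input itself is free to drift far from the reference. A minimal numpy sketch under that reading, using a toy one-layer ReLU "model" with an analytic gradient (the actual experiments use deep vision and audio networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model stage": a fixed random linear layer followed by a ReLU.
W = rng.standard_normal((64, 128)) / np.sqrt(128)
stage = lambda x: np.maximum(W @ x, 0.0)

# Reference stimulus and its activations at the chosen stage.
x_ref = rng.standard_normal(128)
a_ref = stage(x_ref)

# Start from noise and descend the activation-matching loss
# L(x) = 0.5 * ||stage(x) - a_ref||^2.
x = rng.standard_normal(128)
lr = 0.05
for step in range(2000):
    pre = W @ x
    err = np.maximum(pre, 0.0) - a_ref
    grad = W.T @ (err * (pre > 0))       # chain rule through the ReLU
    x -= lr * grad

# x is now a "metamer" of x_ref for this stage: nearly identical activations,
# but generally a very different input.
print(np.linalg.norm(stage(x) - a_ref), np.linalg.norm(x - x_ref))
```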

Silences, spikes and bursts: Three‐part knot of the neural code

Z Friedenberger, E Harkin, K Tóth… - The Journal of …, 2023 - Wiley Online Library
When a neuron breaks silence, it can emit action potentials in a number of patterns. Some
responses are so sudden and intense that electrophysiologists felt the need to single them …

Aligning model and macaque inferior temporal cortex representations improves model-to-human behavioral alignment and adversarial robustness

J Dapello, K Kar, M Schrimpf, R Geary, M Ferguson… - bioRxiv, 2022 - biorxiv.org
While some state-of-the-art artificial neural network systems in computer vision are strikingly
accurate models of the corresponding primate visual processing, there are still many …

Lcanets: Lateral competition improves robustness against corruption and attack

M Teti, G Kenyon, B Migliori… - … Conference on Machine …, 2022 - proceedings.mlr.press
Although Convolutional Neural Networks (CNNs) achieve high accuracy on image
recognition tasks, they lack robustness against realistic corruptions and fail catastrophically …
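
LCANets build their front end on the locally competitive algorithm (LCA), in which units inhibit one another in proportion to the overlap of their dictionary features, yielding sparse codes. The sketch below is generic LCA sparse coding in numpy with illustrative parameters, not the paper's CNN architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dictionary with unit-norm columns (the units' feature vectors).
D = rng.standard_normal((100, 256))
D /= np.linalg.norm(D, axis=0, keepdims=True)

x = rng.standard_normal(100)            # input signal
lam, tau, dt, steps = 0.1, 10.0, 1.0, 300

G = D.T @ D - np.eye(256)               # lateral-competition (inhibition) weights
b = D.T @ x                             # feedforward drive
u = np.zeros(256)                       # membrane potentials

def threshold(u, lam):
    # Soft threshold: only sufficiently driven units become active.
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

for _ in range(steps):
    a = threshold(u, lam)
    # Active units inhibit their neighbours via G: lateral competition.
    u += (dt / tau) * (b - u - G @ a)

a = threshold(u, lam)
print("active units:", np.count_nonzero(a),
      "reconstruction error:", np.linalg.norm(x - D @ a))
```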

Supervised perceptron learning vs unsupervised Hebbian unlearning: Approaching optimal memory retrieval in Hopfield-like networks

M Benedetti, E Ventura, E Marinari, G Ruocco… - The Journal of …, 2022 - pubs.aip.org
The Hebbian unlearning algorithm, i.e., an unsupervised local procedure used to improve the
retrieval properties in Hopfield-like neural networks, is numerically compared to a …
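
The procedure being compared is the classic one: store patterns with the Hebb rule, then repeatedly let the network relax from random states and subtract a small Hebbian term for each attractor reached, which weakens spurious minima. A minimal numpy sketch with standard Hopfield dynamics and illustrative parameters (not the paper's exact protocol):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 20                               # neurons, stored patterns
xi = rng.choice([-1, 1], size=(P, N))        # random binary patterns

# Hebbian storage.
W = (xi.T @ xi) / N
np.fill_diagonal(W, 0.0)

def relax(W, s, sweeps=50):
    """Asynchronous Hopfield dynamics until a fixed point (or sweep limit)."""
    s = s.copy()
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(len(s)):
            new = 1 if W[i] @ s >= 0 else -1
            if new != s[i]:
                s[i], changed = new, True
        if not changed:
            break
    return s

# Hebbian unlearning: relax from random states and weaken the attractors found.
eps = 0.01
for _ in range(500):
    s = relax(W, rng.choice([-1, 1], size=N))
    W -= (eps / N) * np.outer(s, s)
    np.fill_diagonal(W, 0.0)

# Retrieval check: overlap of each stored pattern with its own relaxed state.
overlaps = [abs(relax(W, p) @ p) / N for p in xi]
print("mean overlap after unlearning:", np.mean(overlaps))
```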

Model metamers illuminate divergences between biological and artificial neural networks

J Feather, G Leclerc, A Mądry, JH McDermott - bioRxiv, 2022 - biorxiv.org
Deep neural network models of sensory systems are often proposed to learn
representational transformations with invariances like those in the brain. To reveal these …

Evolutionary algorithms as an alternative to backpropagation for supervised training of Biophysical Neural Networks and Neural ODEs

J Hazelden, YH Liu, E Shlizerman… - arXiv preprint arXiv …, 2023 - arxiv.org
Training networks consisting of biophysically accurate neuron models could allow for new
insights into how brain circuits can organize and solve tasks. We begin by analyzing the …
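
The appeal of evolutionary, gradient-free training in this setting is that it only needs forward simulations of the (possibly non-differentiable) biophysical model. Below is a sketch of a simple evolution strategy in numpy, with a stand-in quadratic "simulate and score" function where the neuron-model simulation would go; this is generic ES, not the paper's specific algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(theta):
    """Stand-in for 'simulate the biophysical network with parameters theta
    and score its task performance'. Here: a quadratic with a known optimum,
    higher is better."""
    theta_star = np.linspace(-1.0, 1.0, theta.size)
    return -np.sum((theta - theta_star) ** 2)

dim, pop, sigma, lr = 20, 50, 0.1, 0.05
theta = np.zeros(dim)

for gen in range(300):
    # Sample a population of Gaussian parameter perturbations.
    eps = rng.standard_normal((pop, dim))
    scores = np.array([fitness(theta + sigma * e) for e in eps])
    # ES update: move along perturbations weighted by normalized scores.
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)
    theta += lr / (pop * sigma) * eps.T @ scores

print("final fitness:", fitness(theta))
```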

Exploring the perceptual straightness of adversarially robust and biologically-inspired visual representations

A Harrington, V DuTell, A Tewari… - SVRHM 2022 …, 2022 - openreview.net
Humans have been shown to use a "straightened" encoding to represent the natural visual
world as it evolves in time (Hénaff et al. 2019). In the context of discrete video sequences, …
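
Straightness in this literature is quantified by the curvature of the representation trajectory across frames: the angle between successive displacement vectors (Hénaff et al. 2019). A minimal numpy sketch of that metric, applied to an arbitrary sequence of representation vectors:

```python
import numpy as np

def mean_curvature(reps):
    """reps: (T, d) array of representations for T consecutive frames.
    Returns the mean angle (degrees) between successive displacement
    vectors; 0 degrees means a perfectly straight trajectory."""
    v = np.diff(reps, axis=0)                        # displacements v_t
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    cos = np.sum(v[:-1] * v[1:], axis=1).clip(-1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()

rng = np.random.default_rng(0)
# A random trajectory is highly curved; a linear interpolation is straight.
print(mean_curvature(rng.standard_normal((11, 128))))                              # ~90
print(mean_curvature(np.linspace(0.0, 1.0, 11)[:, None] * rng.standard_normal(128)))  # ~0
```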

Complex network effects on the robustness of graph convolutional networks

BA Miller, K Chan, T Eliassi-Rad - Applied Network Science, 2024 - Springer
Vertex classification using graph convolutional networks is susceptible to targeted poisoning
attacks, in which both graph structure and node attributes can be changed in an attempt to …
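
For context, the model under attack is the standard graph convolutional network, in which each layer mixes node attributes over the normalized adjacency structure, so perturbing either the edges or the attributes changes every layer's output. A minimal numpy sketch of the usual propagation rule H' = act(D^{-1/2}(A+I)D^{-1/2} H W) (Kipf & Welling), not the specific models or attacks studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n, f_in, f_hid, n_classes = 6, 8, 16, 3
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1); A = A + A.T                  # symmetric adjacency, no self-loops
X = rng.standard_normal((n, f_in))              # node attributes

def gcn_layer(A, H, W, activation=lambda z: np.maximum(z, 0.0)):
    """One graph-convolution layer: act(D^{-1/2}(A+I)D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = (A_hat * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    return activation(A_norm @ H @ W)

W1 = rng.standard_normal((f_in, f_hid)) * 0.1
W2 = rng.standard_normal((f_hid, n_classes)) * 0.1

H = gcn_layer(A, X, W1)
logits = gcn_layer(A, H, W2, activation=lambda z: z)
print("predicted classes:", logits.argmax(axis=1))

# A poisoning attack perturbs A (edges) and/or X (attributes) before or during
# training, so the same forward pass ends up assigning different labels.
```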

Extreme image transformations affect humans and machines differently

G Malik, D Crowder, E Mingolla - Biological Cybernetics, 2023 - Springer
Some recent artificial neural networks (ANNs) claim to model aspects of primate neural and
human performance data. Their success in object recognition is, however, dependent on …