A review of binarized neural networks
T Simons, DJ Lee - Electronics, 2019 - mdpi.com
In this work, we review Binarized Neural Networks (BNNs). BNNs are deep neural networks
that use binary values for activations and weights, instead of full precision values. With …
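A minimal sketch of the core idea the review surveys: binarizing weights and activations with the sign function, so that dot products reduce to XNOR-and-popcount in hardware. The shapes and names below are illustrative, not from the paper:
```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} with the sign function (0 maps to +1)."""
    return np.where(x >= 0, 1.0, -1.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # hypothetical full-precision weights
a = rng.normal(size=8)        # hypothetical full-precision activations

# Binarized forward pass: both operands live in {-1, +1}, so the dot
# product can be implemented with XNOR and popcount instead of multiplies.
z = binarize(W) @ binarize(a)
print(z)
```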
Memristors for energy‐efficient new computing paradigms
In this Review, memristors are examined from the frameworks of both von Neumann and
neuromorphic computing architectures. For the former, a new logic computational process …
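For context on the neuromorphic side, the sketch below simulates the analog matrix-vector product a memristor crossbar performs: conductances store the matrix, applied voltages are the input, and output currents follow Ohm's and Kirchhoff's laws. The conductance range and sizes are made-up illustrations, not from the review:
```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(3, 5))  # cell conductances in siemens (the "weights")
v = rng.uniform(0.0, 0.2, size=5)         # input voltages applied to the columns

# Ohm's law per cell plus Kirchhoff's current law per row yields the
# output currents I = G @ v in a single analog step.
i_out = G @ v
print(i_out)
```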
Optimal errors and phase transitions in high-dimensional generalized linear models
Generalized linear models (GLMs) are used in high-dimensional machine learning,
statistics, communications, and signal processing. In this paper we analyze GLMs when the …
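A minimal example of the generative GLM setup this line of work analyzes, where labels depend on the data only through a linear projection passed through a pointwise channel; the sign channel and all sizes are illustrative assumptions:
```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 50
X = rng.normal(size=(n, p)) / np.sqrt(p)   # i.i.d. Gaussian design
w_star = rng.normal(size=p)                # ground-truth "teacher" weights

# GLM observation channel: y depends on X @ w_star only through a
# pointwise nonlinearity; sign() gives the classic perceptron channel.
y = np.sign(X @ w_star)
print(y[:10])
```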
Entropy and mutual information in models of deep neural networks
We examine a class of stochastic deep learning models with a tractable method to compute
information-theoretic quantities. Our contributions are three-fold: (i) We show how entropies …
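The paper's tractable method is specific to its model class; as a simple point of reference, a single linear layer with additive Gaussian noise, with standard Gaussian input X and isotropic noise of variance σ², has the closed form I(X; Y) = 0.5 · log det(I + W Wᵀ / σ²), computed below with illustrative shapes:
```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(4, 10))   # layer weights
sigma2 = 0.5                   # noise variance

# For Y = W X + noise, X ~ N(0, I), noise ~ N(0, sigma2 * I):
#   I(X; Y) = 0.5 * log det(I + W W^T / sigma2)   (in nats)
k = W.shape[0]
_, logdet = np.linalg.slogdet(np.eye(k) + (W @ W.T) / sigma2)
print(0.5 * logdet)
```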
Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights
Multilayer Neural Networks (MNNs) are commonly trained using gradient descent-based
methods, such as BackPropagation (BP). Inference in probabilistic graphical models …
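Expectation Backpropagation itself propagates approximate posteriors over the weights; for contrast only, the sketch below shows the simpler straight-through-estimator trick commonly used to train binary-weight networks with gradient descent. This is a different technique, and every name in it is illustrative:
```python
import numpy as np

rng = np.random.default_rng(4)
w_real = rng.normal(size=5)          # latent real-valued weights
x, y = rng.normal(size=5), 1.0       # one toy sample and target
lr = 0.1

for _ in range(10):
    w_bin = np.where(w_real >= 0, 1.0, -1.0)    # binarize for the forward pass
    pred = np.tanh(w_bin @ x)
    grad = (pred - y) * (1 - pred**2) * x        # gradient w.r.t. the binary weights
    # Straight-through estimator: apply that gradient to the real weights.
    w_real -= lr * grad
print(np.where(w_real >= 0, 1, -1))
```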
Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic algorithmic schemes
In artificial neural networks, learning from data is a computationally demanding task in which
a large number of connection weights are iteratively tuned through stochastic-gradient …
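A loose illustration of the "robust ensemble" flavor of this work: several weight replicas descend a loss while being elastically pulled toward their common center, biasing the search toward wide, accessible regions. The quadratic loss and all constants below are a hypothetical toy, not the paper's algorithm:
```python
import numpy as np

rng = np.random.default_rng(5)
R, d = 3, 4                       # replicas, dimension
target = rng.normal(size=d)       # minimizer of a toy quadratic loss
w = rng.normal(size=(R, d))       # one weight vector per replica
lr, gamma = 0.1, 0.5              # step size and coupling strength

for _ in range(100):
    center = w.mean(axis=0)
    # Gradient of the toy loss plus an elastic pull toward the center.
    w -= lr * ((w - target) + gamma * (w - center))
print(np.allclose(w, target, atol=1e-2))
```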
BS4NN: Binarized spiking neural networks with temporal coding and learning
SR Kheradpisheh, M Mirsadeghi… - Neural Processing …, 2022 - Springer
We recently proposed the S4NN algorithm, essentially an adaptation of backpropagation to
multilayer spiking neural networks that use simple non-leaky integrate-and-fire neurons and …
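A toy non-leaky integrate-and-fire neuron with binary weights and time-to-first-spike inputs, in the spirit of (but far simpler than) BS4NN; the threshold and spike times are invented for illustration:
```python
import numpy as np

weights = np.array([1.0, -1.0, 1.0, 1.0])   # binary weights in {-1, +1}
spike_times = np.array([2, 0, 5, 1])        # time-to-first-spike input code
threshold = 2.0

# Non-leaky integrate-and-fire: the potential accumulates each input's
# weight at its spike time and never decays; the neuron fires once,
# the first time the potential crosses the threshold.
potential, fire_time = 0.0, None
for t in range(10):
    potential += weights[spike_times == t].sum()
    if potential >= threshold and fire_time is None:
        fire_time = t
print(fire_time)
```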
Subdominant dense clusters allow for simple learning and high computational performance in neural networks with discrete synapses
We show that discrete synaptic weights can be efficiently used for learning in large scale
neural systems, and lead to unanticipated computational performance. We focus on the …
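The paper's contribution is an analysis of the solution-space geometry; as a concrete baseline for learning with discrete synapses, the sketch below uses the classic clipped-perceptron heuristic, where hidden integer-valued states are updated and the binary weight is their sign. The heuristic and all sizes are assumptions, not the paper's method:
```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 41, 15                                  # synapses, patterns (odd n avoids ties)
X = rng.choice([-1.0, 1.0], size=(p, n))
teacher = rng.choice([-1, 1], size=n)
y = np.sign(X @ teacher)                       # labels from a binary teacher

h = np.zeros(n)                                # hidden states behind each synapse
for _ in range(200):
    for xi, yi in zip(X, y):
        w = np.where(h >= 0, 1.0, -1.0)        # binary weights = sign of hidden state
        if yi * (w @ xi) <= 0:                 # mistake-driven update of hidden states
            h += yi * xi
w = np.where(h >= 0, 1.0, -1.0)
print(np.mean(np.sign(X @ w) == y))            # training accuracy
```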
Neuromorphic computing based on emerging memory technologies
B Rajendran, F Alibart - … on Emerging and Selected Topics in …, 2016 - ieeexplore.ieee.org
In this paper, we review some of the novel emerging memory technologies and how they
can enable energy-efficient implementation of large neuromorphic computing systems. We …
The committee machine: Computational to statistical gaps in learning a two-layers neural network
Heuristic tools from statistical physics have been used in the past to compute the optimal
learning and generalization errors in the teacher-student scenario in multi-layer neural …
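A minimal data generator for the teacher-student committee machine studied here: the teacher is a two-layer network with sign activations and a fixed second layer that takes a majority vote, and a student would then be fit on (X, y). All sizes are illustrative:
```python
import numpy as np

rng = np.random.default_rng(7)
n, d, K = 1000, 30, 3                       # samples, input dim, hidden units (odd K avoids ties)
W_teacher = rng.normal(size=(K, d))         # teacher's first-layer weights

X = rng.normal(size=(n, d))
# Committee machine: majority vote over K sign units; the second layer is fixed.
y = np.sign(np.sign(X @ W_teacher.T).sum(axis=1))
print(y[:10])
```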