Recent advances and future prospects for memristive materials, devices, and systems
Memristive technology has been rapidly emerging as a potential alternative to traditional
CMOS technology, which is facing fundamental limitations in its development. Since oxide …
Compute in-memory with non-volatile elements for neural networks: A review from a co-design perspective
W Haensch, A Raghunathan, K Roy… - Advanced …, 2023 - Wiley Online Library
Deep learning has become ubiquitous, touching daily lives across the globe. Today,
traditional computer architectures are stressed to their limits in efficiently executing the …
Electrochemical ionic synapses: progress and perspectives
Artificial neural networks based on crossbar arrays of analog programmable resistors can
address the high energy challenge of conventional hardware in artificial intelligence …
A review on device requirements of resistive random access memory (RRAM)-based neuromorphic computing
With the arrival of the era of big data, the conventional von Neumann architecture is now
insufficient owing to its high latency and energy consumption that originate from its …
Device-Algorithm Co-Optimization for an On-Chip Trainable Capacitor-Based Synaptic Device with IGZO TFT and Retention-Centric Tiki-Taka Algorithm
Analog in-memory computing synaptic devices are widely studied for efficient
implementation of deep learning. However, synaptic devices based on resistive memory …
A comprehensive review of advanced trends: from artificial synapses to neuromorphic systems with consideration of non-ideal effects
A neuromorphic system is composed of hardware-based artificial neurons and synaptic
devices, designed to improve the efficiency of neural computations inspired by energy …
Enabling training of neural networks on noisy hardware
T Gokmen - Frontiers in Artificial Intelligence, 2021 - frontiersin.org
Deep neural networks (DNNs) are typically trained using the conventional stochastic
gradient descent (SGD) algorithm. However, SGD performs poorly when applied to train …
Retention-aware zero-shifting technique for Tiki-Taka algorithm-based analog deep learning accelerator
We present the fabrication of 4 K-scale electrochemical random-access memory (ECRAM)
cross-point arrays for an analog neural network training accelerator and an electrical …
Fast and robust analog in-memory deep neural network training
Analog in-memory computing is a promising future technology for efficiently accelerating
deep learning networks. While using in-memory computing to accelerate the inference …
Neural network learning using non-ideal resistive memory devices
We demonstrate a modified stochastic gradient (Tiki-Taka v2 or TTv2) algorithm for deep
learning network training in a cross-bar array architecture based on ReRAM cells. There …