Compute-in-memory chips for deep learning: Recent trends and prospects

S Yu, H Jiang, S Huang, X Peng… - IEEE circuits and systems …, 2021 - ieeexplore.ieee.org
Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall
problem in hardware accelerator design for deep learning. The input vector and weight …
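The snippet above describes the core CIM idea: the weight matrix stays resident in the memory array and the input vector is applied to it, so the matrix-vector multiply happens where the data lives. A minimal numerical sketch of that analog crossbar operation (illustrative only, not from the paper; `G`, `V`, and the sizes are assumptions) is:

```python
import numpy as np

# Illustrative sketch of crossbar compute-in-memory (assumed setup, not the
# paper's design): weights are stored as conductances G, the input vector is
# applied as row voltages V, and by Ohm's and Kirchhoff's laws the column
# currents realize a matrix-vector product I = G^T @ V in place.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductance matrix: 4 rows x 3 columns
V = rng.uniform(0.0, 1.0, size=4)        # input voltages driven onto the rows

I = G.T @ V                               # summed column currents = analog MVM
# Each column current is the dot product of V with that column's conductances.
```

The point of the sketch is only that the multiply-accumulate is a physical summation of currents, which is why CIM avoids shuttling weights across the memory wall.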

Toward memristive in-memory computing: principles and applications

H Bao, H Zhou, J Li, H Pei, J Tian, L Yang… - Frontiers of …, 2022 - Springer
With the rapid growth of computer science and big data, the traditional von Neumann
architecture suffers from aggravated data-communication costs due to the separated structure …

DNN+NeuroSim V2.0: An end-to-end benchmarking framework for compute-in-memory accelerators for on-chip training

X Peng, S Huang, H Jiang, A Lu… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
DNN+NeuroSim is an integrated framework to benchmark compute-in-memory (CIM)
accelerators for deep neural networks, with hierarchical design options from device-level, to …

Hybrid analog-digital in-memory computing

MRH Rashed, SK Jha, R Ewetz - 2021 IEEE/ACM International …, 2021 - ieeexplore.ieee.org
Today's high performance computing (HPC) systems are limited by the expensive data
movement between processing and memory units. An emerging solution strategy is to …

A Comprehensive Review of Processing-in-Memory Architectures for Deep Neural Networks

R Kaur, A Asad, F Mohammadi - Computers, 2024 - mdpi.com
This comprehensive review explores the advancements in processing-in-memory (PIM)
techniques and chiplet-based architectures for deep neural networks (DNNs). It addresses …

Ferroelectric field-effect transistor-based 3-D NAND architecture for energy-efficient on-chip training accelerator

W Shim, S Yu - IEEE Journal on Exploratory Solid-State …, 2021 - ieeexplore.ieee.org
Unlike the deep neural network (DNN) inference process, the training process
produces a huge amount of intermediate data to compute the new weights of the network …

CMQ: Crossbar-aware neural network mixed-precision quantization via differentiable architecture search

J Peng, H Liu, Z Zhao, Z Li, S Liu… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
RRAM-based accelerators have become popular candidates for neural network
acceleration because they perform matrix-vector multiplication in-memory with high storage …

InfoX: An energy-efficient ReRAM accelerator design with information-lossless low-bit ADCs

Y He, S Qu, Y Wang, B Li, H Li, X Li - Proceedings of the 59th ACM/IEEE …, 2022 - dl.acm.org
ReRAM-based accelerators have shown great potential in neural network acceleration via in-
memory analog computing. However, high-precision analog-to-digital converters (ADCs) …
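The snippet motivates the ADC bottleneck: analog column currents must be digitized, and ADC precision trades accuracy against energy. A small hedged sketch of that trade-off (uniform quantization on mock currents; the function and parameters are assumptions for illustration, not the paper's InfoX scheme) is:

```python
import numpy as np

# Illustrative sketch (assumed model, not the paper's method): uniform b-bit
# ADC quantization of analog bitline currents, showing how worst-case error
# shrinks as ADC resolution grows -- the cost that low-bit ADC designs target.
def adc_quantize(x, bits, full_scale=1.0):
    """Quantize values in [0, full_scale] onto 2**bits uniform levels."""
    levels = 2 ** bits - 1
    codes = np.clip(np.round(x / full_scale * levels), 0, levels)
    return codes / levels * full_scale

rng = np.random.default_rng(1)
currents = rng.uniform(0.0, 1.0, size=256)   # mock analog column currents

# Worst-case error roughly halves with each additional ADC bit.
errs = {b: np.abs(adc_quantize(currents, b) - currents).max() for b in (2, 4, 8)}
```

Under this simple model the maximum error at b bits is about 1 / (2 * (2**b - 1)), which is why high-precision ADCs dominate energy and area, and why "information-lossless" low-bit conversion is attractive.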

Variation-Tolerant RRAM-Based Synaptic Architecture for On-Chip Training

A Dongre, G Trivedi - IEEE Transactions on Nanotechnology, 2023 - ieeexplore.ieee.org
Neuromorphic computing has emerged as a promising alternative for developing next-
generation artificial intelligence systems. Resistive random-access memory (RRAM) has …

Advances of embedded resistive random access memory in industrial manufacturing and its potential applications

Z Wang, Y Song, G Zhang, Q Luo, K Xu… - … Journal of Extreme …, 2024 - iopscience.iop.org
Embedded memory, which heavily relies on the manufacturing process, has been widely
adopted in various industrial applications. As the field of embedded memory continues to …