A 28nm 29.2 TFLOPS/W BF16 and 36.5 TOPS/W INT8 reconfigurable digital CIM processor with unified FP/INT pipeline and bitwise in-memory Booth multiplication for …
Many computing-in-memory (CIM) processors have been proposed for edge deep learning
(DL) acceleration. They usually rely on analog CIM techniques to achieve high-efficiency NN …
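The "bitwise in-memory Booth multiplication" in the title builds on standard radix-4 Booth recoding, which rewrites a two's-complement multiplier as signed digits in {-2, -1, 0, 1, 2} so that every partial product is just a shifted, possibly negated copy of the multiplicand. A minimal Python sketch of that recoding, as a point of reference only — the function names and 8-bit width are assumptions, and the processor performs the equivalent digit selection bitwise inside the CIM array:

```python
def booth_radix4_digits(b, nbits=8):
    """Radix-4 Booth recoding of an nbits two's-complement multiplier b.
    Yields signed digits d_i in {-2, -1, 0, 1, 2} with b == sum(d_i * 4**i)."""
    u = b & ((1 << nbits) - 1)                          # unsigned bit pattern of b
    bits = [0] + [(u >> k) & 1 for k in range(nbits)]   # prepend b_{-1} = 0
    for i in range(nbits // 2):
        b_m1, b0, b1 = bits[2 * i], bits[2 * i + 1], bits[2 * i + 2]
        yield -2 * b1 + b0 + b_m1                       # overlapping bit triplet

def booth_multiply(a, b, nbits=8):
    """a * b as a sum of shifted/negated partial products, one per Booth digit."""
    return sum(d * (a << (2 * i))
               for i, d in enumerate(booth_radix4_digits(b, nbits)))

assert booth_multiply(5, 7) == 35 and booth_multiply(5, -3) == -15
```

Because each digit selects only 0, ±A, or ±2A, a memory-side implementation needs bitwise selection and shifts rather than full multipliers, which is what makes the scheme attractive for digital CIM.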
A 28nm 16.9-300TOPS/W computing-in-memory processor supporting floating-point NN inference/training with intensive-CIM sparse-digital architecture
Computing-in-memory (CIM) has shown high energy efficiency on low-precision integer
multiply-accumulate (MAC)[1–3]. However, implementing floating-point (FP) operations …
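A recurring trick behind FP-capable digital CIM, including the processors above, is to reduce an FP dot product to integer work: split each operand into an integer mantissa and an exponent, align the integer partial products to the group's maximum exponent, and accumulate on a plain INT adder tree, rescaling once at the end. A hedged sketch of that idea, assuming an 8-bit mantissa and toy decomposition helpers whose names are mine, not the papers':

```python
import math

def decompose(v, mant_bits=8):
    """Split v into an integer mantissa and exponent: v ~= m * 2**(e - mant_bits).
    Toy stand-in for a BF16-style decode."""
    if v == 0.0:
        return 0, 0
    frac, exp = math.frexp(v)                # v = frac * 2**exp, 0.5 <= |frac| < 1
    return round(frac * (1 << mant_bits)), exp

def fp_dot_as_int_mac(xs, ws, mant_bits=8):
    """FP dot product on integer hardware: integer mantissa products are
    aligned to the group's maximum exponent, then summed as plain integers."""
    terms = []
    for x, w in zip(xs, ws):
        mx, ex = decompose(x, mant_bits)
        mw, ew = decompose(w, mant_bits)
        terms.append((mx * mw, ex + ew))     # integer partial product + exponent
    e_max = max(e for _, e in terms)
    acc = sum(m >> (e_max - e) for m, e in terms)  # align (right-shift), accumulate
    return acc * 2.0 ** (e_max - 2 * mant_bits)    # single FP rescale at the end

print(fp_dot_as_int_mac([1.5, 2.0], [2.0, 0.5]))   # ~= 4.0
```

The alignment shifts discard low-order mantissa bits exactly as a hardware aligner would; the papers differ mainly in where this alignment happens relative to the CIM array.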
A 95.6-TOPS/W deep learning inference accelerator with per-vector scaled 4-bit quantization in 5 nm
B Keller, R Venkatesan, S Dai, SG Tell… - IEEE Journal of Solid …, 2023 - ieeexplore.ieee.org
The energy efficiency of deep neural network (DNN) inference can be improved with custom
accelerators. DNN inference accelerators often employ specialized hardware techniques to …
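The per-vector scaling named in the title gives each small vector of a tensor (e.g., 16 elements) its own scale factor, so 4-bit codes only have to cover local dynamic range rather than the whole tensor's. A minimal numpy sketch under those assumptions — the vector length, bit width, and function names are illustrative, not the paper's API:

```python
import numpy as np

def per_vector_quantize(w, vec_len=16, bits=4):
    """Per-vector scaled quantization sketch: each vec_len-element vector
    gets its own scale, so low-bit codes track local dynamic range.
    Assumes w.size is divisible by vec_len."""
    qmax = 2 ** (bits - 1) - 1                       # 7 for signed INT4
    v = w.reshape(-1, vec_len)
    scales = np.abs(v).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                        # guard all-zero vectors
    q = np.clip(np.round(v / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def per_vector_dequantize(q, scales, shape):
    return (q.astype(np.float32) * scales).reshape(shape)
```

The accelerator additionally applies a second, coarser scale level (per channel or per layer) on top of the fine per-vector scales; a single level keeps the sketch short.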
ReDCIM: Reconfigurable digital computing-in-memory processor with unified FP/INT pipeline for cloud AI acceleration
Cloud AI acceleration has drawn great attention in recent years, as big models are
becoming a popular trend in deep learning. Cloud AI runs high-efficiency inference, high …
Comprehending in-memory computing trends via proper benchmarking
NR Shanbhag, SK Roy - 2022 IEEE Custom Integrated Circuits …, 2022 - ieeexplore.ieee.org
Since its inception in 2014 [1], the modern version of in-memory computing (IMC) has
become an active area of research in integrated circuit design globally for realizing artificial …
Benchmarking in-memory computing architectures
NR Shanbhag, SK Roy - IEEE Open Journal of the Solid-State …, 2022 - ieeexplore.ieee.org
In-memory computing (IMC) architectures have emerged as a compelling platform to
implement energy-efficient machine learning (ML) systems. However, today, the energy …
A 28 nm 16 kb bit-scalable charge-domain transpose 6T SRAM in-memory computing macro
This article presents a compact, robust, and transposable SRAM in-memory computing
(IMC) macro to support feed forward (FF) and back propagation (BP) computation within a …
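The reason a transposable macro matters for training: the feed-forward pass computes y = W·x, while back-propagation needs Wᵀ·δ, so the two passes read the same weight array along opposite dimensions. A small numpy illustration of the two access patterns (shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))   # weights, stored once in the IMC array
x = rng.standard_normal(32)         # forward activation
delta = rng.standard_normal(64)     # error signal arriving in back-propagation

y = W @ x          # feed-forward (FF): row-wise read of the array
g = W.T @ delta    # back-propagation (BP): the same array, read column-wise
```

A transposable bitcell lets both products run in place, avoiding either a duplicate transposed copy of W or a costly read-shuffle-write between the FF and BP passes.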
A 28nm 1.644 TFLOPS/W floating-point computation SRAM macro with variable precision for deep neural network inference and training
This paper presents a digital compute-in-memory (CIM) macro for accelerating deep neural
networks. The macro provides high-precision computation required for training deep neural …
Bring memristive in-memory computing into general-purpose machine learning: A perspective
H Zhou, J Chen, J Li, L Yang, Y Li, X Miao - APL Machine Learning, 2023 - pubs.aip.org
In-memory computing (IMC) using emerging nonvolatile devices has received considerable
attention due to its great potential for accelerating artificial neural networks and machine …
AFPR-CIM: An analog-domain floating-point RRAM-based compute-in-memory architecture with dynamic range adaptive FP-ADC
Power consumption has become the major concern in neural network accelerators for edge
devices. The novel non-volatile-memory (NVM) based computing-in-memory (CIM) …