A 65-nm 8T SRAM compute-in-memory macro with column ADCs for processing neural networks

C Yu, T Yoo, KTC Chai, TTH Kim… - IEEE Journal of Solid …, 2022 - ieeexplore.ieee.org
In this work, we present a novel 8T static random access memory (SRAM)-based compute-in-
memory (CIM) macro for processing neural networks with high energy efficiency. The …

Scalable and programmable neural network inference accelerator based on in-memory computing

H Jia, M Ozatay, Y Tang, H Valavi… - IEEE Journal of Solid …, 2021 - ieeexplore.ieee.org
This work demonstrates a programmable in-memory-computing (IMC) inference accelerator
for scalable execution of neural network (NN) models, leveraging a high-signal-to-noise …

An overview of sparsity exploitation in CNNs for on-device intelligence with software-hardware cross-layer optimizations

S Kang, G Park, S Kim, S Kim, D Han… - IEEE Journal on …, 2021 - ieeexplore.ieee.org
This paper presents a detailed overview of sparsity exploitation in deep neural network
(DNN) accelerators. Despite the algorithmic advancements which drove DNNs to become …

T-PIM: An energy-efficient processing-in-memory accelerator for end-to-end on-device training

J Heo, J Kim, S Lim, W Han… - IEEE Journal of Solid-State …, 2022 - ieeexplore.ieee.org
Recently, on-device training has become crucial for the success of edge intelligence.
However, frequent data movement between computing units and memory during training …

A 1-16b reconfigurable 80Kb 7T SRAM-based digital near-memory computing macro for processing neural networks

H Kim, J Mu, C Yu, TTH Kim… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
This work introduces a digital SRAM-based near-memory compute macro for DNN
inference, improving on-chip weight memory capacity and area efficiency compared to state …

PIM-trie: A Skew-resistant Trie for Processing-in-Memory

H Kang, Y Zhao, GE Blelloch, L Dhulipala… - Proceedings of the 35th …, 2023 - dl.acm.org
Memory latency and bandwidth are significant bottlenecks in designing in-memory indexes.
Processing-in-memory (PIM), an emerging hardware design approach, alleviates this …

Hardware for Deep Learning Acceleration

C Song, CM Ye, Y Sim, DS Jeong - Advanced Intelligent …, 2024 - Wiley Online Library
Deep learning (DL) has proven to be one of the most pivotal components of machine
learning given its notable performance in a variety of application domains. Neural networks …

A 0.05-2.91-nJ/Decision Keyword-Spotting (KWS) Chip Featuring an Always-Retention 5T-SRAM in 28-nm CMOS

F Tan, WH Yu, KF Un, RP Martins… - IEEE Journal of Solid …, 2023 - ieeexplore.ieee.org
This article reports a keyword-spotting (KWS) chip for voice-controlled devices. It features a
number of techniques to enhance the performance, area, and power efficiencies: 1) a fast …

Hardsea: Hybrid analog-ReRAM clustering and digital-SRAM in-memory computing accelerator for dynamic sparse self-attention in transformer

S Liu, C Mu, H Jiang, Y Wang, J Zhang… - … Transactions on Very …, 2023 - ieeexplore.ieee.org
Self-attention-based transformers have outperformed recurrent and convolutional neural
networks (RNN/CNNs) in many applications. Despite the effectiveness, calculating self …

A 108-nW 0.8-mm² Analog Voice Activity Detector Featuring a Time-Domain CNN With Sparsity-Aware Computation and Sparsified Quantization in 28-nm CMOS

F Chen, KF Un, WH Yu, PI Mak… - IEEE Journal of Solid …, 2022 - ieeexplore.ieee.org
This article reports a passive analog feature extractor for realizing an area-and-power-
efficient voice activity detector (VAD) for voice-control edge devices. It features a switched …