Two-dimensional materials for next-generation computing technologies

C Liu, H Chen, S Wang, Q Liu, YG Jiang… - Nature …, 2020 - nature.com
Rapid digital technology advancement has resulted in a tremendous increase in computing
tasks, imposing stringent energy-efficiency and area-efficiency requirements on next …

Neuro-inspired computing chips

W Zhang, B Gao, J Tang, P Yao, S Yu, MF Chang… - Nature …, 2020 - nature.com
The rapid development of artificial intelligence (AI) demands correspondingly rapid
development of domain-specific hardware designed for AI applications. Neuro-inspired …

In-memory computing: Advances and prospects

N Verma, H Jia, H Valavi, Y Tang… - IEEE Solid-State …, 2019 - ieeexplore.ieee.org
IMC has the potential to address a critical and foundational challenge affecting computing
platforms today: the high energy and delay costs of moving data and accessing data …

[Book] Efficient processing of deep neural networks

V Sze, YH Chen, TJ Yang, JS Emer - 2020 - Springer
This book provides a structured treatment of the key principles and techniques for enabling
efficient processing of deep neural networks (DNNs). DNNs are currently widely used for …

CONV-SRAM: An energy-efficient SRAM with in-memory dot-product computation for low-power convolutional neural networks

A Biswas, AP Chandrakasan - IEEE Journal of Solid-State …, 2018 - ieeexplore.ieee.org
This paper presents an energy-efficient static random access memory (SRAM) with
embedded dot-product computation capability, for binary-weight convolutional neural …
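
A minimal sketch in plain Python (function name and values are illustrative, not from the paper) of the binary-weight dot product such a macro evaluates inside the SRAM array: with weights constrained to +1/-1, every multiply collapses to a sign-conditional add or subtract, which is what makes in-bitcell evaluation practical.

def binary_weight_dot(inputs, weights):
    # Dot product with binary (+1/-1) weights: each multiply is just an add or a subtract.
    assert all(w in (+1, -1) for w in weights)
    return sum(x if w > 0 else -x for x, w in zip(inputs, weights))

# Example: a 4-element activation vector against one binary weight row.
acts = [3, 1, 4, 2]
w_row = [+1, -1, +1, -1]
print(binary_weight_dot(acts, w_row))  # 3 - 1 + 4 - 2 = 4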

A 64-tile 2.4-Mb in-memory-computing CNN accelerator employing charge-domain compute

H Valavi, PJ Ramadge, E Nestler… - IEEE Journal of Solid …, 2019 - ieeexplore.ieee.org
Large-scale matrix-vector multiplications, which dominate in deep neural networks (DNNs),
are limited by data movement in modern VLSI technologies. This paper addresses data …
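
A short sketch in plain Python (names and values illustrative) of the matrix-vector multiplication that dominates such DNN workloads; the inner multiply-accumulate loop is what charge-domain in-memory compute folds into the memory array instead of shuttling weights to a separate processor.

def matvec(W, x):
    # y = W @ x: the multiply-accumulate pattern that dominates DNN inference.
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

W = [[1, 2, 3],
     [4, 5, 6]]      # 2x3 weight matrix (one layer's weights)
x = [1, 0, -1]       # input activation vector
print(matvec(W, x))  # [-2, -2]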

A survey on hardware accelerators: Taxonomy, trends, challenges, and perspectives

B Peccerillo, M Mannino, A Mondelli… - Journal of Systems …, 2022 - Elsevier
In recent years, the limits of the multicore approach have emerged in the so-called “dark silicon”
issue and in the diminishing returns of an ever-increasing core count. Hardware manufacturers …

A CMOS-integrated compute-in-memory macro based on resistive random-access memory for AI edge devices

CX Xue, YC Chiu, TW Liu, TY Huang, JS Liu… - Nature …, 2021 - nature.com
The development of small, energy-efficient artificial intelligence edge devices is limited in
conventional computing architectures by the need to transfer data between the processor …

HERMES-Core—A 1.59-TOPS/mm² PCM on 14-nm CMOS In-Memory Compute Core Using 300-ps/LSB Linearized CCO-Based ADCs

R Khaddam-Aljameh, M Stanisavljevic… - IEEE Journal of Solid …, 2022 - ieeexplore.ieee.org
We present a 256 × 256 in-memory compute (IMC) core designed and fabricated in 14-nm
CMOS technology with backend-integrated multi-level phase change memory (PCM). It …
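
A rough sketch in plain Python (conductance values, ADC step, and names are assumptions for illustration) of how a PCM-based IMC core carries out matrix-vector multiplication in the analog domain: weights are stored as device conductances, inputs are applied as voltages, each bit-line current performs the multiply-accumulate per Ohm's and Kirchhoff's laws, and an ADC digitizes the column result.

def pcm_matvec(conductances, voltages, adc_step=0.05):
    # Analog MVM: I_i = sum_j G_ij * V_j on each bit line, then quantize with an ADC.
    currents = [sum(g * v for g, v in zip(row, voltages)) for row in conductances]
    return [round(i / adc_step) for i in currents]  # integer ADC output codes

# Toy 2x3 crossbar: conductances and voltages in normalized units.
G = [[0.8, 0.1, 0.5],
     [0.2, 0.9, 0.4]]
V = [1.0, 0.5, 0.0]
print(pcm_matvec(G, V))  # codes for currents [0.85, 0.65] -> [17, 13]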

Colonnade: A reconfigurable SRAM-based digital bit-serial compute-in-memory macro for processing neural networks

H Kim, T Yoo, TTH Kim, B Kim - IEEE Journal of Solid-State …, 2021 - ieeexplore.ieee.org
This article (Colonnade) presents a fully digital bit-serial compute-in-memory (CIM) macro.
The digital CIM macro is designed for processing neural networks with reconfigurable 1-16 …
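
A small sketch in plain Python (bit width and values are assumptions) of the bit-serial multiply-accumulate such a digital CIM macro performs: input activations are streamed one bit per cycle, each cycle contributes a 1-bit-by-weight partial sum, and the partial sums are shifted and accumulated across cycles, which is how precision can be reconfigured (e.g., 1-16 bits) simply by changing the number of cycles.

def bit_serial_mac(inputs, weights, input_bits=4):
    # Bit-serial dot product: stream one input bit per cycle, shift-and-accumulate.
    acc = 0
    for b in range(input_bits):                    # one cycle per input bit, LSB first
        bit_plane = [(x >> b) & 1 for x in inputs]
        partial = sum(w * xb for w, xb in zip(weights, bit_plane))
        acc += partial << b                        # weight this cycle by its bit position
    return acc

acts = [3, 5, 2]   # 4-bit unsigned activations
ws   = [1, 2, 1]   # weights held in the CIM array
print(bit_serial_mac(acts, ws))              # 15
print(sum(a * w for a, w in zip(acts, ws)))  # reference result: 15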