Two-dimensional materials for next-generation computing technologies
Rapid digital technology advancement has resulted in a tremendous increase in computing
tasks, imposing stringent energy-efficiency and area-efficiency requirements on next …
Neuro-inspired computing chips
The rapid development of artificial intelligence (AI) demands domain-specific
hardware designed specifically for AI applications. Neuro-inspired …
In-memory computing: Advances and prospects
IMC has the potential to address a critical and foundational challenge affecting computing
platforms today, that is, the high energy and delay costs of moving data and accessing data …
[BOOK][B] Efficient processing of deep neural networks
This book provides a structured treatment of the key principles and techniques for enabling
efficient processing of deep neural networks (DNNs). DNNs are currently widely used for …
CONV-SRAM: An energy-efficient SRAM with in-memory dot-product computation for low-power convolutional neural networks
A Biswas, AP Chandrakasan - IEEE Journal of Solid-State …, 2018 - ieeexplore.ieee.org
This paper presents an energy-efficient static random access memory (SRAM) with
embedded dot-product computation capability, for binary-weight convolutional neural …
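The kernel such binary-weight macros embed is a dot product between multi-bit activations and weights restricted to ±1, which reduces to additions and subtractions. Below is a minimal NumPy sketch of that arithmetic only, not of the CONV-SRAM circuit; the array size, bit width, and variable names are illustrative assumptions.

```python
import numpy as np

# Binary-weight dot product: activations stay multi-bit, weights are +1/-1.
# This mimics the arithmetic an in-memory SRAM column performs, not the
# analog charge-sharing mechanism itself.
rng = np.random.default_rng(0)

activations = rng.integers(0, 64, size=256)   # e.g. 6-bit unsigned inputs
weights = rng.choice([-1, +1], size=256)      # binary weights

# Conventional formulation: one multiply-accumulate per element.
dot_mac = int(np.dot(activations, weights))

# Equivalent add/subtract formulation used by binary-weight hardware:
# sum of activations where w = +1 minus sum where w = -1 (no multipliers).
dot_addsub = int(activations[weights == +1].sum()
                 - activations[weights == -1].sum())

assert dot_mac == dot_addsub
print(dot_mac)
```

The add/subtract form is what lets binary-weight hardware drop multipliers entirely.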
A 64-tile 2.4-Mb in-memory-computing CNN accelerator employing charge-domain compute
Large-scale matrix-vector multiplications, which dominate in deep neural networks (DNNs),
are limited by data movement in modern VLSI technologies. This paper addresses data …
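As the snippet notes, DNN workloads are dominated by large matrix-vector multiplications (MVMs), the kernel that IMC accelerators map onto their memory arrays. The short sketch below shows that kernel only; the layer shapes and names are arbitrary assumptions, and nothing here models the charge-domain circuitry.

```python
import numpy as np

# A fully connected DNN layer is a matrix-vector multiplication plus bias:
# y = W @ x + b. In an IMC macro, W stays resident in the memory array and
# the multiply-accumulates happen in place, avoiding weight movement.
rng = np.random.default_rng(1)

W = rng.standard_normal((512, 1024))   # weight matrix held in the array
x = rng.standard_normal(1024)          # input activation vector
b = rng.standard_normal(512)

y = W @ x + b                          # the dominant DNN kernel
print(y.shape)                         # (512,)
```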
[HTML][HTML] A survey on hardware accelerators: Taxonomy, trends, challenges, and perspectives
In recent years, the limits of the multicore approach emerged in the so-called “dark silicon”
issue and diminishing returns of an ever-increasing core count. Hardware manufacturers …
A CMOS-integrated compute-in-memory macro based on resistive random-access memory for AI edge devices
CX Xue, YC Chiu, TW Liu, TY Huang, JS Liu… - Nature …, 2021 - nature.com
The development of small, energy-efficient artificial intelligence edge devices is limited in
conventional computing architectures by the need to transfer data between the processor …
HERMES-Core—A 1.59-TOPS/mm² PCM on 14-nm CMOS In-Memory Compute Core Using 300-ps/LSB Linearized CCO-Based ADCs
R Khaddam-Aljameh, M Stanisavljevic… - IEEE Journal of Solid …, 2022 - ieeexplore.ieee.org
We present a 256 × 256 in-memory compute (IMC) core designed and fabricated in 14-nm
CMOS technology with backend-integrated multi-level phase change memory (PCM). It …
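In an analog IMC core of this kind, weights are stored as cell conductances, inputs are applied as voltages, column currents implement the multiply-accumulates, and per-column ADCs digitize the result. The sketch below is an idealized model of that read-out path only; the array size follows the 256 × 256 figure quoted above, but the conductance range and ADC resolution are illustrative assumptions, not HERMES-Core's specifications.

```python
import numpy as np

# Idealized analog IMC read-out: weights as conductances G, inputs as
# voltages V, per-column currents I = V @ G (Ohm's law plus current
# summation on each bit line), followed by a simple uniform ADC model.
rng = np.random.default_rng(3)

rows, cols = 256, 256
G = rng.uniform(1e-6, 1e-4, size=(rows, cols))   # cell conductances (S)
V = rng.uniform(0.0, 0.2, size=rows)             # input voltages (V)

I = V @ G                                        # per-column currents (A)

adc_bits = 8                                     # illustrative resolution
codes = np.round(I / I.max() * (2**adc_bits - 1)).astype(int)
print(codes[:8])
```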
Colonnade: A reconfigurable SRAM-based digital bit-serial compute-in-memory macro for processing neural networks
This article (Colonnade) presents a fully digital bit-serial compute-in-memory (CIM) macro.
The digital CIM macro is designed for processing neural networks with reconfigurable 1-16 …
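Digital bit-serial CIM evaluates a multi-bit dot product one input bit-plane per cycle, shifting and accumulating the partial sums, which is what makes the input precision reconfigurable. The Python sketch below shows that arithmetic under stated assumptions (unsigned activations, illustrative sizes); it is not a model of the Colonnade macro itself.

```python
import numpy as np

def bit_serial_dot(x, w, n_bits=8):
    """Compute dot(x, w) for unsigned n_bits-bit inputs x by streaming one
    input bit-plane per cycle and shift-accumulating the partial sums, as a
    digital bit-serial CIM macro would build up a multi-bit result."""
    acc = 0
    for b in range(n_bits):
        bit_plane = (x >> b) & 1              # one bit of every input element
        partial = int(np.dot(bit_plane, w))   # 1-bit x multi-bit column sum
        acc += partial << b                   # weight the partial sum by 2^b
    return acc

rng = np.random.default_rng(2)
x = rng.integers(0, 256, size=128)            # 8-bit unsigned activations
w = rng.integers(-8, 8, size=128)             # small signed weights

assert bit_serial_dot(x, w, n_bits=8) == int(np.dot(x, w))
```

Running more or fewer bit-plane cycles is how such a macro trades precision for throughput.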