Memristive dynamics enabled neuromorphic computing systems

B Yan, Y Yang, R Huang - Science China Information Sciences, 2023 - Springer
The slowing down of transistor scaling and the explosive growth of intelligent computing
power emerge as the two driving factors for the study of novel devices and materials to …

PICO-RAM: A PVT-Insensitive Analog Compute-In-Memory SRAM Macro With In Situ Multi-Bit Charge Computing and 6T Thin-Cell-Compatible Layout

Z Chen, Z Wen, W Wan, AR Pakala… - IEEE Journal of Solid …, 2024 - ieeexplore.ieee.org
Analog compute-in-memory (CIM) in static random access memory (SRAM) is promising for
accelerating deep learning inference by circumventing the memory wall and exploiting ultra …

A 161.6 TOPS/W mixed-mode computing-in-memory processor for energy-efficient mixed-precision deep neural networks

W Jo, S Kim, J Lee, S Um, Z Li… - 2022 IEEE International …, 2022 - ieeexplore.ieee.org
A mixed-mode computing-in-memory (CIM) processor for mixed-precision deep neural
network (DNN) processing is proposed. Due to the bit-serial processing used for the multi-bit …
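
The bit-serial scheme mentioned in this abstract can be illustrated with a short Python sketch; the function name, operand widths, and values below are illustrative assumptions, not taken from the paper. A multi-bit activation is streamed one bit per cycle, each cycle performs a one-bit-by-weight dot product, and the partial sums are shifted and accumulated, so cycle count trades off against activation precision.

    # Minimal sketch of bit-serial multiply-accumulate (illustrative, not the paper's design).
    def bit_serial_mac(activations, weights, act_bits=4):
        """Compute dot(activations, weights) by streaming activation bits serially."""
        acc = 0
        for b in range(act_bits):                                     # one "cycle" per activation bit
            bit_plane = [(a >> b) & 1 for a in activations]           # current bit of each activation
            partial = sum(p * w for p, w in zip(bit_plane, weights))  # 1-bit x weight dot product
            acc += partial << b                                       # shift-and-add restores bit weight
        return acc

    acts = [5, 3, 7, 1]    # 4-bit unsigned activations (example values)
    wts = [2, -1, 4, 3]    # multi-bit weights held stationary in the array
    assert bit_serial_mac(acts, wts) == sum(a * w for a, w in zip(acts, wts))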

A Quantization Model Based on a Floating-point Computing-in-Memory Architecture

X Chen, A Guo, X Xu, X Si… - 2022 IEEE Asia Pacific …, 2022 - ieeexplore.ieee.org
Computing-in-memory (CIM) has been proven to deliver high energy efficiency and
significant acceleration for neural networks with high computational parallelism. Floating …
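
As a rough illustration of what a quantization model for a CIM array has to capture, the sketch below maps floating-point values to signed integers with a per-tensor scale and back again; the bit width, rounding, and clamping choices are generic assumptions, not the scheme proposed in the paper.

    # Symmetric per-tensor quantization sketch (generic assumption, not the paper's model).
    def quantize(values, n_bits=8):
        qmax = 2 ** (n_bits - 1) - 1
        scale = max(abs(v) for v in values) / qmax or 1.0             # fall back to 1.0 for all-zero input
        q = [max(-qmax, min(qmax, round(v / scale))) for v in values]
        return q, scale

    def dequantize(q, scale):
        return [v * scale for v in q]

    q, s = quantize([0.12, -0.5, 0.33, 0.9])
    print(q, dequantize(q, s))    # integer codes for the CIM array and their reconstruction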

A 9T-SRAM in-memory computing macro for Boolean logic and multiply-and-accumulate operations

C Dai, Z Ren, L Guan, H Liu, M Gao, W Lu, Z Pang… - Microelectronics …, 2024 - Elsevier
Artificial intelligence algorithms play important roles in tasks ranging from image classification to speech
recognition, which involve enormous numbers of Boolean logic and multiplication operations …
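
One reason a Boolean-logic-capable SRAM macro can also serve MAC workloads is that, at one-bit precision, a multiply-and-accumulate collapses into a bitwise AND followed by a population count. The sketch below shows that reduction in plain Python; it is a generic illustration, not a description of the 9T cell.

    # 1-bit MAC as AND + popcount (generic illustration, not the macro's circuit).
    def binary_mac(act_word, weight_word):
        """Dot product of two 1-bit vectors packed into integers."""
        return bin(act_word & weight_word).count("1")   # per-bit AND, then count the ones

    print(binary_mac(0b1011, 0b1101))   # per-bit products 1,0,0,1 -> 2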

Toggle Rate Aware Quantization Model Based on Digital Floating-Point Computing-in-Memory Architecture

X Chen, Y Zhao, A Guo, J Chen, F Dong… - … on Circuits and …, 2024 - ieeexplore.ieee.org
Computing-in-memory (CIM) has been proven to achieve high energy efficiency and
significant acceleration effects on neural networks with high computational parallelism …
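
The toggle rate that the title refers to can be made concrete with a small sketch: for a serialized data stream it is the fraction of bit positions that flip between consecutive words, which is what drives dynamic switching power in a digital datapath. The function below captures the general notion only; it is an assumption, not the paper's cost model.

    # Toggle-rate sketch (generic notion, not the paper's model).
    def toggle_rate(words, width=8):
        flips = sum(bin(a ^ b).count("1") for a, b in zip(words, words[1:]))
        return flips / (width * (len(words) - 1))

    print(toggle_rate([0b00001111, 0b00111100, 0b11110000]))   # 8 flips over 16 positions -> 0.5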

Design of processing-in-memory with triple computational path and sparsity handling for energy-efficient DNN training

W Han, J Heo, J Kim, S Lim… - IEEE Journal on Emerging …, 2022 - ieeexplore.ieee.org
As machine learning (ML) and artificial intelligence (AI) have become mainstream
technologies, many accelerators have been proposed to cope with their computation …
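
Sparsity handling in a training accelerator usually comes down to not issuing work for zero operands, since activations and gradients are frequently zero during training. The minimal Python sketch below only illustrates that principle; the function and values are hypothetical and do not describe the proposed triple-path design.

    # Zero-skipping MAC sketch (hypothetical, not the accelerator's datapath).
    def sparse_mac(activations, weights):
        acc, skipped = 0, 0
        for a, w in zip(activations, weights):
            if a == 0 or w == 0:      # sparsity handling: skip the multiply entirely
                skipped += 1
                continue
            acc += a * w
        return acc, skipped

    print(sparse_mac([0, 3, 0, 2], [5, 0, 7, 1]))   # -> (2, 3): one useful multiply, three skipped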

Hadamard product-based in-memory computing design for floating point neural network training

A Fan, Y Fu, Y Tao, Z Jin, H Han, H Liu… - Neuromorphic …, 2023 - iopscience.iop.org
Deep neural networks (DNNs) are one of the key fields of machine learning and require
considerable computational resources for cognitive tasks. As a novel technology to perform …
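
The Hadamard (element-wise) product arises naturally in training: when back-propagating through an activation function, the upstream gradient is multiplied element-wise by the activation derivative. The textbook ReLU example below only shows where such products appear; it is not the paper's in-memory implementation.

    # Hadamard product in backprop (textbook example, not the paper's design):
    # dL/dz = dL/da (element-wise) relu'(z), with relu'(z) = 1 if z > 0 else 0.
    def relu_backward(upstream_grad, pre_activation):
        return [g * (1.0 if z > 0 else 0.0)          # element-wise (Hadamard) product
                for g, z in zip(upstream_grad, pre_activation)]

    print(relu_backward([0.2, 0.5, 0.7], [1.3, -0.4, 0.0]))   # -> [0.2, 0.0, 0.0]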

A Multi-Level Deep Neural Network-Based Tourism Supply Chain Risk Management Study

L Xu - Scalable Computing: Practice and Experience, 2024 - scpe.org
With the rapid advancement of tourism, the capital demand of tourism enterprises has
gradually risen, but disorder in market management has increased the difficulty of risk …

Mixed-Signal Non-Von Neumann Accelerators for Edge Computing

Z Chen - 2024 - search.proquest.com
Deep learning models deployed on edge devices for local inference offer superior latency,
efficiency, availability, scalability, and privacy over cloud-based inference. Due to the …