Prospects and challenges of electrochemical random-access memory for deep-learning accelerators

J Cui, H Liu, Q Cao - Current Opinion in Solid State and Materials Science, 2024 - Elsevier
The ever-expanding capabilities of machine learning are powered by the exponentially growing
complexity of deep neural network (DNN) models, requiring more energy and chip-area …

Integrated non-reciprocal magneto-optics with ultra-high endurance for photonic in-memory computing

P Pintus, M Dumont, V Shah, T Murai, Y Shoji… - Nature …, 2024 - nature.com
Processing information in the optical domain promises advantages in both speed and
energy efficiency over existing digital hardware for a variety of emerging applications in …

A Heterogeneous Chiplet Architecture for Accelerating End-to-End Transformer Models

H Sharma, P Dhingra, JR Doppa, U Ogras… - arXiv preprint arXiv …, 2023 - arxiv.org
Transformers have revolutionized deep learning and generative modeling, enabling
unprecedented advancements in natural language processing tasks. However, the size of …

Data Pruning-enabled High Performance and Reliable Graph Neural Network Training on ReRAM-based Processing-in-Memory Accelerators

C Ogbogu, B Joardar, K Chakrabarty, J Doppa… - ACM Transactions on …, 2024 - dl.acm.org
Graph Neural Networks (GNNs) have achieved remarkable accuracy in cognitive tasks such
as predictive analytics on graph-structured data. Hence, they have become very popular in …

Experimental demonstration of non-stateful in-memory logic with 1T1R OxRAM valence change mechanism memristors

H Padberg, A Regev, G Piccolboni… - … on Circuits and …, 2023 - ieeexplore.ieee.org
Processing-in-memory (PIM) is attractive to overcome the limitations of modern computing
systems. Numerous PIM systems exist, varying by the technologies and logic techniques …

DRCTL: A Disorder-Resistant Computation Translation Layer Enhancing the Lifetime and Performance of Memristive CIM Architecture

H Zhou, B Wu, H Cheng, J Liu, T Lei… - 2024 57th IEEE/ACM …, 2024 - ieeexplore.ieee.org
The memristive Computing-in-Memory (CIM) system can efficiently accelerate matrix-vector
multiplication (MVM) operations through in-situ computing. The data layout has a significant …
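
For intuition only (this sketch is not taken from the paper), the in-situ MVM idea can be modeled in a few lines of Python: weights are mapped onto cell conductances and each column current sums V_i * G_ij per Kirchhoff's current law. The function name, conductance range, and array sizes below are illustrative assumptions.

    import numpy as np

    def crossbar_mvm(weights, voltages, g_min=1e-6, g_max=1e-4):
        # Map weights linearly onto conductances in [g_min, g_max] siemens, then
        # compute column currents I_j = sum_i V_i * G_ij (Kirchhoff's current law).
        # Real designs typically use differential cell pairs for signed weights.
        w_min, w_max = weights.min(), weights.max()
        g = g_min + (weights - w_min) / (w_max - w_min + 1e-12) * (g_max - g_min)
        return voltages @ g  # one output current per crossbar column

    rng = np.random.default_rng(0)
    W = rng.standard_normal((128, 64))   # hypothetical layer weights (inputs x outputs)
    V = rng.standard_normal(128) * 0.1   # input activations encoded as read voltages
    print(crossbar_mvm(W, V).shape)      # -> (64,)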

HpT: Hybrid Acceleration of Spatio-Temporal Attention Model Training on Heterogeneous Manycore Architectures

S Dahal, P Dhingra, KK Thapa… - … on Parallel and …, 2025 - ieeexplore.ieee.org
Transformer models have become widely popular in numerous applications, especially for
building foundation large language models (LLMs). Recently, there has been a surge in …

ARAS: An Adaptive Low-Cost ReRAM-Based Accelerator for DNNs

M Sabri, M Riera, A González - arXiv preprint arXiv:2410.17931, 2024 - arxiv.org
Processing Using Memory (PUM) accelerators have the potential to perform Deep Neural
Network (DNN) inference by using arrays of memory cells as computation engines. Among …

Efficient Reprogramming of Memristive Crossbars for DNNs: Weight Sorting and Bit Stucking

M Farias, HT Kung - arXiv preprint arXiv:2410.21730, 2024 - arxiv.org
We introduce a novel approach to reduce the number of times required for reprogramming
memristors on bit-sliced compute-in-memory crossbars for deep neural networks (DNNs) …
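
As a rough, hypothetical illustration of the bit-sliced crossbar mapping this work builds on (not the authors' weight-sorting or bit-stucking method), the sketch below splits quantized weights into per-bit planes and counts how many cells would need a write pulse when the weights change; all names and sizes are made up for illustration.

    import numpy as np

    def bit_slice(weights_int, n_bits=4):
        # Split non-negative integer weights into n_bits binary planes (LSB first);
        # each plane would be programmed into its own crossbar slice.
        return [(weights_int >> b) & 1 for b in range(n_bits)]

    def reassemble(planes):
        return sum(p << b for b, p in enumerate(planes))

    rng = np.random.default_rng(1)
    W_old = rng.integers(0, 16, size=(4, 4))   # hypothetical 4-bit quantized weights
    W_new = rng.integers(0, 16, size=(4, 4))
    old_planes, new_planes = bit_slice(W_old), bit_slice(W_new)
    # A cell whose bit is unchanged between the old and new weights needs no write pulse.
    updates_per_plane = [int((o != n).sum()) for o, n in zip(old_planes, new_planes)]
    print(updates_per_plane, bool((reassemble(new_planes) == W_new).all()))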

Crafting Non-Volatile Memory (NVM) Hierarchies: Optimizing Performance, Reliability, and Energy Efficiency

C Escuín Blasco - 2024