In-memory computing to break the memory wall

X Huang, C Liu, YG Jiang, P Zhou - Chinese Physics B, 2020 - iopscience.iop.org
Facing the computing demands of Internet of things (IoT) and artificial intelligence (AI), the
cost induced by moving the data between the central processing unit (CPU) and memory is …

GAN‐LSTM‐3D: An efficient method for lung tumour 3D reconstruction enhanced by attention‐based LSTM

L Hong, MH Modirrousta… - CAAI Transactions …, 2023 - Wiley Online Library
Three‐dimensional (3D) image reconstruction of tumours can visualise their
structures with precision and high resolution. In this article, the GAN‐LSTM‐3D method is …

MNSIM 2.0: A behavior-level modeling tool for memristor-based neuromorphic computing systems

Z Zhu, H Sun, K Qiu, L Xia, G Krishnan, G Dai… - Proceedings of the …, 2020 - dl.acm.org
Memristor-based neuromorphic computing systems offer alternative solutions to boost the
computing energy efficiency of Neural Network (NN) algorithms. Because of the large-scale …

Evaluating machine learning workloads on memory-centric computing systems

J Gómez-Luna, Y Guo, S Brocard… - … Analysis of Systems …, 2023 - ieeexplore.ieee.org
Training machine learning (ML) algorithms is a computationally intensive process, which is
frequently memory-bound due to repeatedly accessing large training datasets. As a result …

MNSIM 2.0: A behavior-level modeling tool for processing-in-memory architectures

Z Zhu, H Sun, T Xie, Y Zhu, G Dai, L Xia… - … on Computer-Aided …, 2023 - ieeexplore.ieee.org
In the age of artificial intelligence (AI), the huge data movements between memory and
computing units become the bottleneck of von Neumann architectures, i.e., the “memory wall” …

SimplePIM: A software framework for productive and efficient processing-in-memory

J Chen, J Gómez-Luna, I El Hajj… - 2023 32nd …, 2023 - ieeexplore.ieee.org
Data movement between memory and processors is a major bottleneck in modern
computing systems. The processing-in-memory (PIM) paradigm aims to alleviate this …

An Experimental Evaluation of Machine Learning Training on a Real Processing-in-Memory System

J Gómez-Luna, Y Guo, S Brocard, J Legriel… - arXiv preprint arXiv …, 2022 - arxiv.org
Training machine learning (ML) algorithms is a computationally intensive process, which is
frequently memory-bound due to repeatedly accessing large training datasets. As a result …

Towards efficient allocation of graph convolutional networks on hybrid computation-in-memory architecture

J Chen, G Lin, J Chen, Y Wang - Science China Information Sciences, 2021 - Springer
Graph convolutional networks (GCNs) have been applied successfully in social networks
and recommendation systems to analyze graph data. Unlike conventional neural networks …

Extreme partial-sum quantization for analog computing-in-memory neural network accelerators

Y Kim, H Kim, JJ Kim - ACM Journal on Emerging Technologies in …, 2022 - dl.acm.org
In Analog Computing-in-Memory (CIM) neural network accelerators, analog-to-digital
converters (ADCs) are required to convert the analog partial sums generated from a CIM …

SwiftRL: Towards Efficient Reinforcement Learning on Real Processing-In-Memory Systems

K Gogineni, SS Dayapule, J Gómez-Luna… - arXiv preprint arXiv …, 2024 - arxiv.org
Reinforcement Learning (RL) trains agents to learn optimal behavior by maximizing reward
signals from experience datasets. However, RL training often faces memory limitations …