SmartSAGE: Training large-scale graph neural networks using in-storage processing architectures

Y Lee, J Chung, M Rhu - Proceedings of the 49th Annual International …, 2022 - dl.acm.org
Graph neural networks (GNNs) can extract features by learning both the representation of
each object (i.e., graph nodes) and the relationship across different objects (i.e., the edges …
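
The snippet's definition — node representations plus relationships along edges — is the neighbor-aggregation pattern popularized by GraphSAGE, which the system's name suggests it builds on. A minimal NumPy sketch of one mean-aggregation layer, with made-up shapes and no relation to the paper's in-storage design:

```python
import numpy as np

def sage_layer(H, neighbors, W_self, W_neigh):
    """One GraphSAGE-style layer: combine each node's own representation
    with the mean of its neighbors' representations (hypothetical shapes)."""
    out = np.empty((H.shape[0], W_self.shape[1]))
    for v, nbrs in enumerate(neighbors):
        agg = H[nbrs].mean(axis=0) if nbrs else np.zeros(H.shape[1])
        out[v] = np.maximum(H[v] @ W_self + agg @ W_neigh, 0.0)  # ReLU
    return out

# Toy graph: 3 nodes, adjacency as neighbor lists, 4-dim input features.
rng = np.random.default_rng(0)
H = rng.standard_normal((3, 4))
neighbors = [[1, 2], [0], [0]]
W_self = rng.standard_normal((4, 8))
W_neigh = rng.standard_normal((4, 8))
print(sage_layer(H, neighbors, W_self, W_neigh).shape)  # (3, 8)
```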

Flash-Cosmos: In-flash bulk bitwise operations using inherent computation capability of NAND flash memory

J Park, R Azizi, GF Oliveira… - 2022 55th IEEE/ACM …, 2022 - ieeexplore.ieee.org
Bulk bitwise operations, i.e., bitwise operations on large bit vectors, are prevalent in a wide
range of important application domains, including databases, graph processing, genome …
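
For a concrete sense of the workload: intersecting two bitmap indexes in a database is a bulk bitwise AND over large operands. A small NumPy sketch of the conventional host-side version of such an operation (the paper's point is to compute it inside the NAND flash instead; all sizes here are made up):

```python
import numpy as np

# Two hypothetical bitmap indexes over 80M rows, packed 8 rows per byte.
N_BITS = 80_000_000
rng = np.random.default_rng(1)
bitmap_a = rng.integers(0, 256, N_BITS // 8, dtype=np.uint8)
bitmap_b = rng.integers(0, 256, N_BITS // 8, dtype=np.uint8)

# The bulk bitwise AND: conventionally the ~10 MB operands are read out of
# storage and ANDed by the CPU; in-flash computation would produce the
# result without moving them.
matches = np.bitwise_and(bitmap_a, bitmap_b)
print(int(np.unpackbits(matches).sum()), "rows satisfy both predicates")
```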

Hyperscale FPGA-as-a-service architecture for large-scale distributed graph neural network

S Li, D Niu, Y Wang, W Han, Z Zhang, T Guan… - Proceedings of the 49th …, 2022 - dl.acm.org
The graph neural network (GNN) is a promising emerging application for link prediction,
recommendation, etc. Existing hardware innovation is limited to single-machine GNN (SM …

Ginex: SSD-enabled billion-scale graph neural network training on a single machine via provably optimal in-memory caching

Y Park, S Min, JW Lee - arXiv preprint arXiv:2208.09151, 2022 - arxiv.org
Recently, Graph Neural Networks (GNNs) have been in the spotlight as a powerful tool
that can effectively serve various inference tasks on graph-structured data. As the size of real …
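
The "provably optimal in-memory caching" in the title is possible because GNN mini-batches can be sampled ahead of time, so the future sequence of node-feature accesses is known and Belady's MIN eviction policy becomes applicable. A minimal sketch of MIN over a known trace (a generic illustration, not Ginex's actual code):

```python
def belady_misses(trace, capacity):
    """Belady's MIN policy: with the whole future access sequence known,
    evict the cached item whose next use is farthest away. Returns misses."""
    # Precompute, for each position, when the same item is accessed next.
    next_use = [float("inf")] * len(trace)
    last_seen = {}
    for i in range(len(trace) - 1, -1, -1):
        next_use[i] = last_seen.get(trace[i], float("inf"))
        last_seen[trace[i]] = i
    cache = {}  # item -> position of its next use
    misses = 0
    for i, item in enumerate(trace):
        if item not in cache:
            misses += 1
            if len(cache) >= capacity:
                victim = max(cache, key=cache.get)  # farthest next use
                del cache[victim]
        cache[item] = next_use[i]
    return misses

# Hypothetical trace of node-feature accesses from pre-sampled mini-batches.
print(belady_misses([1, 2, 3, 1, 2, 4, 1, 5, 2], capacity=2))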

OptimStore: In-storage optimization of large-scale DNNs with on-die processing

J Kim, M Kang, Y Han, YG Kim… - 2023 IEEE International …, 2023 - ieeexplore.ieee.org
Training deep neural network (DNN) models is a resource-intensive, iterative process. For
this reason, complex optimizers like Adam are now widely adopted, as they increase the …
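
A one-step sketch of why optimizers like Adam strain memory and storage: Adam carries two state tensors (first and second moments) the same size as the weights, roughly tripling the footprint a system like OptimStore has to hold. The standard Adam update, at toy sizes:

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update. Note the optimizer state m and v: two extra
    tensors the same size as the weights, kept for the whole run."""
    m = b1 * m + (1 - b1) * g            # first-moment estimate
    v = b2 * v + (1 - b2) * g * g        # second-moment estimate
    m_hat = m / (1 - b1 ** t)            # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w, m, v = np.zeros(4), np.zeros(4), np.zeros(4)
g = np.array([0.1, -0.2, 0.3, 0.0])
w, m, v = adam_step(w, g, m, v, t=1)
print(w)
```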

HGL: accelerating heterogeneous GNN training with holistic representation and optimization

Y Gui, Y Wu, H Yang, T Jin, B Li, Q Zhou… - … Conference for High …, 2022 - ieeexplore.ieee.org
Graph neural networks (GNNs) have been shown to significantly improve graph analytics. Existing
systems for GNN training are primarily designed for homogeneous graphs. In industry …

ASSASIN: Architecture support for stream computing to accelerate computational storage

C Zou, AA Chien - … 55th IEEE/ACM International Symposium on …, 2022 - ieeexplore.ieee.org
Computational storage adds computing to storage devices, providing potential benefits in
offloading, data reduction, and lower energy use. Successful computational SSD architectures …

A survey on AI for storage

Y Liu, H Wang, K Zhou, CH Li, R Wu - CCF Transactions on High …, 2022 - Springer
Storage, as a core function and fundamental component of computers, provides services for
saving and reading digital data. The increasing complexity of data operations and storage …

Horae: A Hybrid I/O Request Scheduling Technique for Near-Data Processing-Based SSD

J Li, X Chen, D Liu, L Li, J Wang, Z Zeng… - … on Computer-Aided …, 2022 - ieeexplore.ieee.org
Near-data processing (NDP) architectures promise to break the bottleneck of data
movement in many scenarios (e.g., databases and recommendation systems), which limits …

TT-GNN: Efficient On-Chip Graph Neural Network Training via Embedding Reformation and Hardware Optimization

Z Qu, D Niu, S Li, H Zheng, Y Xie - Proceedings of the 56th Annual IEEE …, 2023 - dl.acm.org
Training Graph Neural Networks on large graphs is challenging due to the need to store
graph data and move them along the memory hierarchy. In this work, we tackle this by …
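
The "embedding reformation" in the title points at compressing the node-embedding table, and tensor-train (TT) decomposition is a standard way to do that on chip; since the snippet is cut off, take this as a generic TT-embedding illustration with hypothetical shapes rather than the paper's exact scheme. It reconstructs one node's embedding from three small cores instead of reading a full table row:

```python
import numpy as np

# Hypothetical TT-compressed embedding table: N = 8*8*8 = 512 nodes,
# D = 4*4*4 = 64 dims, stored as three small cores instead of a 512x64 matrix.
n, d, r = (8, 8, 8), (4, 4, 4), 16
rng = np.random.default_rng(2)
cores = [
    rng.standard_normal((1, n[0] * d[0], r)),
    rng.standard_normal((r, n[1] * d[1], r)),
    rng.standard_normal((r, n[2] * d[2], 1)),
]

def tt_row(node_id):
    """Materialize one node's embedding from the TT cores on demand."""
    i = (node_id // (n[1] * n[2]), (node_id // n[2]) % n[1], node_id % n[2])
    emb = np.zeros(d[0] * d[1] * d[2])
    for j in range(emb.size):
        js = (j // (d[1] * d[2]), (j // d[2]) % d[1], j % d[2])
        acc = np.ones((1, 1))
        for k in range(3):  # chain of small matrix products, one per core
            acc = acc @ cores[k][:, i[k] * d[k] + js[k], :]
        emb[j] = acc[0, 0]
    return emb

print(tt_row(137).shape)  # (64,)
```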