Smartsage: training large-scale graph neural networks using in-storage processing architectures
Graph neural networks (GNNs) can extract features by learning both the representation of
each object (i.e., graph nodes) and the relationships across different objects (i.e., the edges …
Flash-Cosmos: In-flash bulk bitwise operations using inherent computation capability of NAND flash memory
Bulk bitwise operations, i.e., bitwise operations on large bit vectors, are prevalent in a wide
range of important application domains, including databases, graph processing, genome …
Hyperscale FPGA-as-a-service architecture for large-scale distributed graph neural network
Graph neural network (GNN) is a promising emerging application for link prediction,
recommendation, etc. Existing hardware innovation is limited to single-machine GNN (SM …
Ginex: SSD-enabled billion-scale graph neural network training on a single machine via provably optimal in-memory caching
Recently, Graph Neural Networks (GNNs) have been receiving a spotlight as a powerful tool
that can effectively serve various inference tasks on graph structured data. As the size of real …
Optimstore: In-storage optimization of large-scale DNNs with on-die processing
Training deep neural network (DNN) models is a resource-intensive, iterative process. For
this reason, nowadays, complex optimizers like Adam are widely adopted, as they increase the …
HGL: accelerating heterogeneous GNN training with holistic representation and optimization
Graph neural networks (GNNs) have been shown to significantly improve graph analytics. Existing
systems for GNN training are primarily designed for homogeneous graphs. In industry …
Assasin: Architecture support for stream computing to accelerate computational storage
Computational storage adds computing to storage devices, providing potential benefits in
offload, data reduction, and lower energy use. Successful computational SSD architectures …
A survey on AI for storage
Y Liu, H Wang, K Zhou, CH Li, R Wu - CCF Transactions on High …, 2022 - Springer
Storage, as a core function and fundamental component of computers, provides services for
saving and reading digital data. The increasing complexity of data operations and storage …
Horae: A Hybrid I/O Request Scheduling Technique for Near-Data Processing-Based SSD
Near-data processing (NDP) architecture promises to break the bottleneck of data
movement in many scenarios (e.g., databases and recommendation systems), which limits …
TT-GNN: Efficient On-Chip Graph Neural Network Training via Embedding Reformation and Hardware Optimization
Training Graph Neural Networks on large graphs is challenging due to the need to store
graph data and move them along the memory hierarchy. In this work, we tackle this by …