The evolution of distributed systems for graph neural networks and their origin in graph processing and deep learning: A survey

J Vatter, R Mayer, HA Jacobsen - ACM Computing Surveys, 2023 - dl.acm.org
Graph neural networks (GNNs) are an emerging research field. This specialized deep
neural network architecture is capable of processing graph structured data and bridges the …

Thinking like a vertex: A survey of vertex-centric frameworks for large-scale distributed graph processing

RR McCune, T Weninger, G Madey - ACM Computing Surveys (CSUR), 2015 - dl.acm.org
The vertex-centric programming model is an established computational paradigm recently
incorporated into distributed processing frameworks to address challenges in large-scale …

Dorylus: Affordable, scalable, and accurate GNN training with distributed CPU servers and serverless threads

J Thorpe, Y Qiao, J Eyolfson, S Teng, G Hu… - … USENIX Symposium on …, 2021 - usenix.org
A graph neural network (GNN) enables deep learning on structured graph data. There are
two major GNN training obstacles: 1) it relies on high-end servers with many GPUs which …

P3: Distributed deep graph learning at scale

S Gandhi, AP Iyer - 15th USENIX Symposium on Operating Systems …, 2021 - usenix.org
Graph Neural Networks (GNNs) have gained significant attention in the recent past, and
become one of the fastest growing subareas in deep learning. While several new GNN …

Gemini: A Computation-Centric distributed graph processing system

X Zhu, W Chen, W Zheng, X Ma - 12th USENIX Symposium on Operating …, 2016 - usenix.org
Traditionally, distributed graph processing systems have largely focused on scalability
through the optimization of inter-node communication and load balance. However, they …

GNNLab: a factored system for sample-based GNN training over GPUs

J Yang, D Tang, X Song, L Wang, Q Yin… - Proceedings of the …, 2022 - dl.acm.org
We propose GNNLab, a sample-based GNN training system in a single machine multi-GPU
setup. GNNLab adopts a factored design for multiple GPUs, where each GPU is dedicated to …

NeuGraph: Parallel deep neural network computation on large graphs

L Ma, Z Yang, Y Miao, J Xue, M Wu, L Zhou… - 2019 USENIX Annual …, 2019 - usenix.org
Recent deep learning models have moved beyond low dimensional regular grids such as
image, video, and speech, to high-dimensional graph-structured data, such as social …

Sancus: staleness-aware communication-avoiding full-graph decentralized training in large-scale graph neural networks

J Peng, Z Chen, Y Shao, Y Shen, L Chen… - Proceedings of the VLDB …, 2022 - dl.acm.org
Graph neural networks (GNNs) have emerged due to their success at modeling graph data.
Yet, it is challenging for GNNs to efficiently scale to large graphs. Thus, distributed GNNs …

BNS-GCN: Efficient full-graph training of graph convolutional networks with partition-parallelism and random boundary node sampling

C Wan, Y Li, A Li, NS Kim, Y Lin - Proceedings of Machine …, 2022 - proceedings.mlsys.org
Graph Convolutional Networks (GCNs) have emerged as the state-of-the-art
method for graph-based learning tasks. However, training GCNs at scale is still challenging …

GraphR: Accelerating graph processing using ReRAM

L Song, Y Zhuo, X Qian, H Li… - 2018 IEEE International …, 2018 - ieeexplore.ieee.org
Graph processing has recently received intensive interest in light of a wide range of needs to
understand relationships. It is well known for its poor locality and high memory bandwidth …