The evolution of distributed systems for graph neural networks and their origin in graph processing and deep learning: A survey
Graph neural networks (GNNs) are an emerging research field. This specialized deep
neural network architecture is capable of processing graph-structured data and bridges the …
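To make "processing graph-structured data" concrete, the sketch below shows a single message-passing layer in the GCN style: aggregate normalized neighbor features, then apply a linear transform and a nonlinearity. The toy graph, feature sizes, and variable names are illustrative only and are not taken from the surveyed systems.

```python
import numpy as np

# Toy graph: 4 nodes, undirected edges given as (src, dst) pairs.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
num_nodes = 4

# Dense adjacency with self-loops, as in a basic GCN-style layer.
A = np.eye(num_nodes)
for s, d in edges:
    A[s, d] = A[d, s] = 1.0

# Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}.
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt

# Node features (4 nodes, 3 features) and a weight matrix (illustrative sizes).
X = np.random.randn(num_nodes, 3)
W = np.random.randn(3, 2)

# One message-passing layer: aggregate neighbor features, then transform.
H = np.maximum(A_hat @ X @ W, 0.0)  # ReLU(A_hat X W)
print(H.shape)  # (4, 2)
```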
Thinking like a vertex: A survey of vertex-centric frameworks for large-scale distributed graph processing
The vertex-centric programming model is an established computational paradigm recently
incorporated into distributed processing frameworks to address challenges in large-scale …
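For readers unfamiliar with the vertex-centric ("think like a vertex") model, here is a minimal single-machine sketch in the Pregel style: in every superstep each vertex consumes its incoming messages, updates its value, and sends messages along its out-edges. The PageRank-style vertex program and all names are illustrative, not the API of any framework surveyed here.

```python
# Hypothetical sketch of a vertex-centric superstep loop (PageRank-like).
out_edges = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
num_vertices = len(out_edges)
rank = {v: 1.0 / num_vertices for v in out_edges}
damping, supersteps = 0.85, 10

for _ in range(supersteps):
    # Message passing: each vertex sends rank / out_degree along its edges.
    inbox = {v: [] for v in out_edges}
    for v, neighbors in out_edges.items():
        share = rank[v] / len(neighbors)
        for u in neighbors:
            inbox[u].append(share)
    # Vertex program: combine received messages into the new vertex value.
    for v in out_edges:
        rank[v] = (1 - damping) / num_vertices + damping * sum(inbox[v])

print(rank)
```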
Dorylus: Affordable, scalable, and accurate GNN training with distributed CPU servers and serverless threads
A graph neural network (GNN) enables deep learning on structured graph data. There are
two major GNN training obstacles: 1) it relies on high-end servers with many GPUs which …
P3: Distributed deep graph learning at scale
Graph Neural Networks (GNNs) have gained significant attention in the recent past, and
have become one of the fastest-growing subareas in deep learning. While several new GNN …
Gemini: A computation-centric distributed graph processing system
Traditionally, distributed graph processing systems have largely focused on scalability
through optimizations of inter-node communication and load balancing. However, they …
GNNLab: a factored system for sample-based GNN training over GPUs
We propose GNNLab, a sample-based GNN training system in a single-machine multi-GPU
setup. GNNLab adopts a factored design for multiple GPUs, where each GPU is dedicated to …
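The snippet below sketches the idea of a factored design under loose assumptions: one worker is dedicated to neighbor sampling and another to model computation, decoupled by a queue of mini-batches. Threads stand in for GPUs, and the roles, names, and batch contents are invented for illustration rather than taken from GNNLab.

```python
# Schematic sketch of a factored sampler/trainer split (threads stand in for GPUs).
import queue
import random
import threading

batch_queue = queue.Queue(maxsize=4)
NUM_BATCHES = 8

def sampler():
    for step in range(NUM_BATCHES):
        # Pretend to sample a mini-batch of seed nodes and their neighbors.
        batch = {"seeds": random.sample(range(100), 4), "step": step}
        batch_queue.put(batch)
    batch_queue.put(None)  # Sentinel: no more batches.

def trainer():
    while True:
        batch = batch_queue.get()
        if batch is None:
            break
        # Pretend to run forward/backward on the sampled batch.
        print(f"trained on step {batch['step']} with seeds {batch['seeds']}")

threads = [threading.Thread(target=sampler), threading.Thread(target=trainer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```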
NeuGraph: Parallel deep neural network computation on large graphs
Recent deep learning models have moved beyond low-dimensional regular grids such as
image, video, and speech, to high-dimensional graph-structured data, such as social …
Sancus: staleness-aware communication-avoiding full-graph decentralized training in large-scale graph neural networks
Graph neural networks (GNNs) have emerged due to their success at modeling graph data.
Yet, it is challenging for GNNs to efficiently scale to large graphs. Thus, distributed GNNs …
BNS-GCN: Efficient full-graph training of graph convolutional networks with partition-parallelism and random boundary node sampling
Graph Convolutional Networks (GCNs) have emerged as the state-of-the-art
method for graph-based learning tasks. However, training GCNs at scale is still challenging …
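As a rough illustration of random boundary-node sampling in partition-parallel training, the sketch below keeps only a random fraction of a partition's boundary nodes (nodes owned by other partitions) each epoch, which shrinks the features that would otherwise have to be exchanged across partitions. The node counts, sampling rate, and names are assumptions made for the example.

```python
# Illustrative per-epoch random boundary-node sampling for one partition.
import random

local_nodes = list(range(0, 8))        # nodes owned by this partition
boundary_nodes = list(range(8, 20))    # remote neighbors referenced locally
keep_fraction = 0.25                   # assumed sampling rate per epoch

for epoch in range(3):
    k = max(1, int(keep_fraction * len(boundary_nodes)))
    sampled_boundary = random.sample(boundary_nodes, k)
    # The subgraph used this epoch contains all local nodes but only the
    # sampled boundary nodes, so cross-partition feature exchange is reduced.
    active_nodes = local_nodes + sampled_boundary
    print(f"epoch {epoch}: communicate {k}/{len(boundary_nodes)} boundary nodes,"
          f" subgraph size {len(active_nodes)}")
```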
GraphR: Accelerating graph processing using ReRAM
Graph processing has recently received intense interest in light of a wide range of needs to
understand relationships. It is well known for poor locality and high memory bandwidth …