Provable training for graph contrastive learning

Y Yu, X Wang, M Zhang, N Liu, C Shi
Advances in Neural Information Processing Systems, 2024 - proceedings.neurips.cc
Abstract
Graph Contrastive Learning (GCL) has emerged as a popular training approach for learning node embeddings from augmented graphs without labels. Although the key principle of maximizing the similarity between positive node pairs while minimizing it between negative node pairs is well established, some fundamental problems remain unclear. Considering the complex graph structure, are some nodes consistently well trained, following this principle even under different graph augmentations? Or are some nodes more likely to be under-trained across graph augmentations and to violate the principle? How can we distinguish these nodes and further guide the training of GCL? To answer these questions, we first present experimental evidence showing that the training of GCL is indeed imbalanced across nodes. To address this problem, we propose the metric "node compactness", a lower bound on how well a node follows the GCL principle over the range of augmentations. We further derive the form of node compactness theoretically through bound propagation, and it can be integrated into binary cross-entropy as a regularizer. To this end, we propose PrOvable Training (POT) for GCL, which regularizes the training of GCL to encode node embeddings that follow the GCL principle better. Through extensive experiments on various benchmarks, POT consistently improves existing GCL approaches, serving as a friendly plugin.
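The abstract describes POT as a per-node regularizer added on top of a standard GCL objective via binary cross-entropy. The sketch below illustrates that plug-in pattern only: `infonce_loss`, `compactness_proxy`, `pot_style_loss`, and the choice of proxy score are all assumptions for illustration, not the paper's actual bound-propagation derivation of node compactness.

```python
# Hypothetical sketch: combining a generic GCL (InfoNCE) loss with a
# compactness-style BCE regularizer, as the abstract describes at a high level.
# The real POT derives per-node compactness via bound propagation over the
# augmentation set; the proxy score below is only an illustrative stand-in.
import torch
import torch.nn.functional as F

def infonce_loss(z1, z2, temperature=0.5):
    """Standard cross-view InfoNCE: the positive for each node is the same node
    in the other augmented view; all other nodes act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                 # [N, N] similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def compactness_proxy(z1, z2, temperature=0.5):
    """Illustrative per-node score: positive-pair similarity minus the hardest
    negative similarity (placeholder for the paper's bound-propagated lower bound).
    Larger values mean the node follows the GCL principle more clearly."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature
    pos = sim.diag()
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(mask, float('-inf')).max(dim=1).values
    return pos - neg

def pot_style_loss(z1, z2, lam=0.1):
    """GCL loss plus a BCE regularizer that pushes every node's score toward the
    'principle-following' side, mirroring the plug-in role the abstract describes."""
    base = infonce_loss(z1, z2)
    scores = compactness_proxy(z1, z2)
    targets = torch.ones_like(scores)                  # every node should satisfy the principle
    reg = F.binary_cross_entropy_with_logits(scores, targets)
    return base + lam * reg

if __name__ == "__main__":
    z_view1 = torch.randn(64, 128, requires_grad=True)  # embeddings from augmentation 1
    z_view2 = torch.randn(64, 128, requires_grad=True)  # embeddings from augmentation 2
    loss = pot_style_loss(z_view1, z_view2)
    loss.backward()
    print(float(loss))
```

Because the regularizer only adds a weighted term to an existing loss, it can be attached to different GCL backbones without changing their encoders or augmentation pipelines, which is consistent with the abstract's framing of POT as a plugin.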