InfoGCL: Information-Aware Graph Contrastive Learning

Publication Date: 12/14/2021

Event: Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021), Virtual-only Conference

Reference: pp. 1-12, 2021

Authors: Dongkuan Xu, NEC Laboratories America, Inc.; Wei Cheng, NEC Laboratories America, Inc.; Dongsheng Luo, Pennsylvania State University; Haifeng Chen, NEC Laboratories America, Inc.; Xiang Zhang, Pennsylvania State University

Abstract: In recent years, various graph contrastive learning models have been proposed to improve the performance of learning tasks on graph datasets. While effective and prevalent, these models are usually carefully customized. In particular, although all recent models create two contrastive views, they differ widely in their view augmentations, architectures, and objectives. It remains an open question how to build a graph contrastive learning model from scratch for particular graph tasks and datasets. In this work, we aim to fill this gap by studying how graph information is transformed and transferred during the contrastive learning process, and by proposing an information-aware graph contrastive learning framework called InfoGCL. The key to the success of the proposed framework is to follow the Information Bottleneck principle: reduce the mutual information between contrastive parts while keeping task-relevant information intact, both at the level of the individual module and at the level of the entire framework, so that the information loss during graph representation learning is minimized. We show for the first time that all recent graph contrastive learning methods can be unified by our framework. Based on theoretical and empirical analysis on benchmark graph datasets, we show that InfoGCL achieves state-of-the-art performance on both graph classification and node classification tasks.
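As a rough illustration of the principle described in the abstract (a sketch in our own notation, not necessarily the exact formulation used in the paper): let v_1 and v_2 denote the two contrastive views derived from an input graph G, and let y denote the downstream task label. An Information Bottleneck style view criterion can then be written as

    \min_{v_1, v_2} I(v_1; v_2) \quad \text{subject to} \quad I(v_1; y) = I(v_2; y) = I(G; y),

where I(·;·) denotes mutual information. Intuitively, the two views should share as little information as possible while each still retains all of the task-relevant information in G.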

Publication Link: https://papers.nips.cc/paper_files/paper/2021/hash/ff1e68e74c6b16a1a7b5d958b95e120c-Abstract.html