Towards Robust Graph Neural Networks via Adversarial Contrastive Learning

Publication Date: 12/20/2022

Event: 2022 IEEE International Conference on Big Data (IEEE BigData 2022), Osaka, Japan

Reference: pp. 1-10, 2022

Authors: Shen Wang, University of Illinois at Chicago; Zhengzhang Chen, NEC Laboratories America, Inc.; Jingchao Ni, AWS AI Labs; Haifeng Chen, NEC Laboratories America, Inc.; Philip S. Yu, University of Illinois at Chicago

Abstract: Graph Neural Network (GNN), as a powerful representation learning model on graph data, attracts much attention across various disciplines. However, recent studies show that GNN is vulnerable to adversarial attacks. How can we make GNN more robust? What are the key vulnerabilities in GNN? How can we address these vulnerabilities and defend GNN against adversarial attacks? Adversarial training has been shown to be effective in improving the robustness of traditional Deep Neural Networks (DNNs). However, existing adversarial training works mainly focus on image data, which consists of continuous features, while the features and structures of graph data are often discrete. Moreover, rather than assuming each sample is independent and identically distributed as in DNNs, GNN leverages the contextual information across the graph (e.g., the neighborhood of a node). Thus, existing adversarial training techniques cannot be directly applied to defend GNN. In this paper, we propose ContrastNet, an effective adversarial defense framework for GNN. In particular, we propose an adversarial contrastive learning method to train the GNN over the adversarial space. To further improve the robustness of GNN, we investigate the latent vulnerabilities in every component of a GNN encoder and propose corresponding refining strategies. Extensive experiments on three public datasets demonstrate the effectiveness of ContrastNet in improving the robustness of popular GNN variants, such as Graph Convolutional Network and GraphSage, under various types of adversarial attacks.
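
To give a flavor of what "adversarial contrastive learning over the adversarial space" can look like in practice, the sketch below pairs a clean graph view with an adversarially perturbed view and trains a GNN encoder so that the two embeddings of each node agree. This is only an illustrative approximation, not the authors' ContrastNet implementation: all names (SimpleGCN, adversarial_view, nce_loss) are hypothetical, the perturbation here is a one-step continuous attack on node features rather than the discrete structure/feature attacks the paper targets, and none of the paper's component-level refining strategies are reproduced.

```python
# Hypothetical sketch of adversarial contrastive training for a GNN encoder.
# NOT the paper's ContrastNet; names, hyperparameters, and the FGSM-style
# feature perturbation are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCN(nn.Module):
    """Two-layer GCN over a dense, symmetrically normalized adjacency."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)


def normalize_adj(adj):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} used by GCN."""
    adj = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)


def nce_loss(z1, z2, tau=0.5):
    """InfoNCE loss: each node's clean embedding should match its own adversarial view."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                    # (N, N) similarity logits
    labels = torch.arange(z1.size(0))          # positives lie on the diagonal
    return F.cross_entropy(sim, labels)


def adversarial_view(model, x, adj_norm, eps=0.01):
    """One-step perturbation of node features that increases the contrastive loss."""
    x_adv = x.clone().requires_grad_(True)
    loss = nce_loss(model(x, adj_norm), model(x_adv, adj_norm))
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).detach()


# Toy data: 100 nodes, 16-dimensional features, random symmetric graph.
x = torch.randn(100, 16)
adj = (torch.rand(100, 100) < 0.05).float()
adj_norm = normalize_adj(((adj + adj.t()) > 0).float())

model = SimpleGCN(16, 32, 32)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    x_adv = adversarial_view(model, x, adj_norm)                  # adversarial space
    loss = nce_loss(model(x, adj_norm), model(x_adv, adj_norm))   # align clean vs. adversarial
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key design choice illustrated here is that the adversarial view is generated by maximizing the same contrastive objective the encoder is trained to minimize, so the encoder is explicitly pushed to produce representations that remain stable under worst-case perturbations rather than only under random augmentations.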

Publication Link: https://ieeexplore.ieee.org/document/10021051