Robust Graph Representation Learning via Neural Sparsification

Publication Date: 7/18/2020

Event: The 37th International Conference on Machine Learning (ICML 2020)

Reference: pp. 1-12, 2020

Authors: Cheng Zheng (NEC Laboratories America, Inc.; University of California, Los Angeles); Bo Zong (NEC Laboratories America, Inc.); Wei Cheng (NEC Laboratories America, Inc.); Dongjin Song (NEC Laboratories America, Inc.); Jingchao Ni (NEC Laboratories America, Inc.); Wenchao Yu (NEC Laboratories America, Inc.); Haifeng Chen (NEC Laboratories America, Inc.); Wei Wang (University of California, Los Angeles)

Abstract: Graph representation learning serves as the core of important prediction tasks, ranging from product recommendation to fraud detection. Real-life graphs usually have complex information in the local neighborhood, where each node is described by a rich set of features and connects to dozens or even hundreds of neighbors. Despite the success of neighborhood aggregation in graph neural networks, task-irrelevant information is mixed into nodes’ neighborhoods, causing learned models to suffer from sub-optimal generalization performance. In this paper, we present NeuralSparse, a supervised graph sparsification technique that improves generalization power by learning to remove potentially task-irrelevant edges from input graphs. Our method takes both structural and non-structural information as input, utilizes deep neural networks to parameterize sparsification processes, and optimizes the parameters by feedback signals from downstream tasks. Under the NeuralSparse framework, supervised graph sparsification can seamlessly connect with existing graph neural networks for more robust performance. Experimental results on both benchmark and private datasets show that NeuralSparse can yield up to 7.2% improvement in testing accuracy when working with existing graph neural networks on node classification tasks.

Publication Link: https://dl.acm.org/doi/abs/10.5555/3524938.3526000
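Illustrative Example: The abstract describes a pipeline in which a neural network scores edges, a differentiable sampler keeps roughly k neighbors per node (the paper uses a Gumbel-Softmax relaxation for this step), and the sparsified graph feeds a downstream GNN trained end to end on the task loss. The following is a minimal PyTorch sketch of that idea, not the authors' implementation; the names EdgeScorer, NeuralSparseGNN, sample_sparse_edges, the k-neighbor budget, and the one-layer mean-aggregation GNN are illustrative simplifications.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeScorer(nn.Module):
    """Scores each directed edge (u, v) from its endpoint features."""
    def __init__(self, in_dim, hidden_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x, edge_index):
        src, dst = edge_index  # edge_index: (2, num_edges)
        return self.mlp(torch.cat([x[src], x[dst]], dim=-1)).squeeze(-1)


def sample_sparse_edges(logits, edge_index, num_nodes, k, tau=0.5):
    """Draw k relaxed one-hot samples per node over its outgoing edges
    via Gumbel-Softmax; summing the draws yields soft edge weights."""
    src = edge_index[0]
    weights = torch.zeros_like(logits)
    for _ in range(k):
        u = torch.rand_like(logits).clamp(1e-10, 1 - 1e-10)
        gumbel = -torch.log(-torch.log(u))
        z = (logits + gumbel) / tau
        z = z - z.max()  # shared shift keeps each group's softmax stable
        num = z.exp()
        # normalize within each node's neighborhood (grouped by source)
        denom = torch.zeros(num_nodes, device=num.device).scatter_add_(0, src, num)
        weights = weights + num / (denom[src] + 1e-10)
    # clamping loosely mimics sampling without replacement
    return weights.clamp(max=1.0)


class NeuralSparseGNN(nn.Module):
    """Edge scorer + differentiable sampler + one-layer GNN, trained jointly."""
    def __init__(self, in_dim, num_classes, k=5):
        super().__init__()
        self.k = k
        self.scorer = EdgeScorer(in_dim)
        self.lin = nn.Linear(in_dim, num_classes)

    def forward(self, x, edge_index):
        n = x.size(0)
        w = sample_sparse_edges(self.scorer(x, edge_index), edge_index, n, self.k)
        src, dst = edge_index
        # weighted mean aggregation over the sampled neighborhood
        agg = torch.zeros_like(x).index_add_(0, dst, w.unsqueeze(-1) * x[src])
        deg = torch.zeros(n, device=x.device).index_add_(0, dst, w).clamp(min=1e-10)
        return self.lin(agg / deg.unsqueeze(-1))


# Toy end-to-end training loop on random data: the task loss back-propagates
# through the relaxed sampler, so the scorer learns which edges help the task.
x = torch.randn(100, 16)
edge_index = torch.randint(0, 100, (2, 400))
y = torch.randint(0, 3, (100,))
model = NeuralSparseGNN(in_dim=16, num_classes=3, k=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(50):
    opt.zero_grad()
    F.cross_entropy(model(x, edge_index), y).backward()
    opt.step()
```

The Gumbel-Softmax relaxation is the key design choice here: edge selection is a discrete decision, and the reparameterized samples make it differentiable so that feedback from the downstream classification loss can train the sparsification network; the temperature tau trades off between sharp (near-discrete) and smooth samples.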