Learning to Drop: Robust Graph Neural Network via Topological Denoising
Publication Date: 3/12/2021
Event: WSDM 2021 – The 14th ACM International Conference on Web Search and Data Mining
Reference: pp. 779-787, 2021
Authors: Dongsheng Luo, Pennsylvania State University; Wei Cheng, NEC Laboratories America, Inc.; Wenchao Yu, NEC Laboratories America, Inc.; Bo Zong, NEC Laboratories America, Inc.; Jingchao Ni, NEC Laboratories America, Inc.; Haifeng Chen, NEC Laboratories America, Inc.; Xiang Zhang, Pennsylvania State University
Abstract: Graph Neural Networks (GNNs) have been shown to be powerful tools for graph analytics. The key idea is to recursively propagate and aggregate information along the edges of the given graph. Despite their success, however, existing GNNs are usually sensitive to the quality of the input graph. Real-world graphs are often noisy and contain task-irrelevant edges, which may lead to suboptimal generalization performance in the learned GNN models. In this paper, we propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of GNNs by learning to drop task-irrelevant edges. PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks. To take the topology of the entire graph into consideration, nuclear norm regularization is applied to impose a low-rank constraint on the resulting sparsified graph for better generalization. PTDNet can be used as a key component in GNN models to improve their performance on various tasks, such as node classification and link prediction. Experimental studies on both synthetic and benchmark datasets show that PTDNet can improve the performance of GNNs significantly, and that the performance gain becomes larger on noisier datasets.
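To illustrate the two regularizers mentioned in the abstract, the following is a minimal NumPy sketch, not the paper's implementation: a soft edge mask (standing in for the output of the parameterized denoising network) is applied to a toy adjacency matrix, the expected number of kept edges is penalized, and the nuclear norm (sum of singular values) of the sparsified adjacency acts as the low-rank constraint. The random logits and the weights `alpha`/`beta` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy adjacency matrix of a 5-node undirected graph (no self-loops).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

# Hypothetical per-edge logits; in PTDNet these would come from a
# parameterized network, here they are random stand-ins.
logits = rng.normal(size=A.shape)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Soft edge mask in (0, 1); multiplying by A ensures only existing
# edges can be kept. A_sparse is the (soft) sparsified graph.
mask = sigmoid(logits) * A
A_sparse = A * mask

# Regularizer 1: penalize the expected number of edges kept.
edge_penalty = mask.sum()

# Regularizer 2: nuclear norm of the sparsified adjacency,
# i.e. the sum of its singular values (a low-rank surrogate).
nuclear_norm = np.linalg.svd(A_sparse, compute_uv=False).sum()

# Illustrative weights; these would be tuned hyperparameters.
alpha, beta = 0.01, 0.001
reg_loss = alpha * edge_penalty + beta * nuclear_norm
```

In training, `reg_loss` would be added to the downstream task loss (e.g. node-classification cross-entropy), so that gradients push the mask toward dropping edges that do not help the task.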
Publication Link: https://dl.acm.org/doi/10.1145/3437963.3441734