Parameterized Explainer for Graph Neural Network

Publication Date: 12/12/2020

Event: Thirty-Fourth Annual Conference on Neural Information Processing Systems (NeurIPS 2020)

Reference: 1-17, 2020

Authors: Dongsheng Luo, Pennsylvania State University; Wei Cheng, NEC Laboratories America, Inc.; Dongkuan Xu, Pennsylvania State University; Wenchao Yu, NEC Laboratories America, Inc.; Bo Zong, NEC Laboratories America, Inc.; Haifeng Chen, NEC Laboratories America, Inc.; Xiang Zhang, Pennsylvania State University

Abstract: Despite recent progress in Graph Neural Networks (GNNs), explaining predictions made by GNNs remains a challenging open problem. The leading method addresses local explanations (i.e., important subgraph structures and node features) independently to interpret why a GNN model makes a prediction for a single instance, e.g., a node or a graph. As a result, the generated explanation is painstakingly customized for each instance. Such per-instance explanations are not sufficient to provide a global understanding of the learned GNN model, leading to a lack of generalizability and hindering its use in the inductive setting. Moreover, because the method is designed to explain a single instance, it is difficult to explain a set of instances naturally (e.g., graphs of a given class). In this study, we address these key challenges and propose PGExplainer, a parameterized explainer for GNNs. PGExplainer adopts a deep neural network to parameterize the generation process of explanations, which gives PGExplainer a natural way to explain multiple instances collectively. Compared to existing work, PGExplainer has better generalization ability and can easily be utilized in an inductive setting. Experiments on both synthetic and real-life datasets show highly competitive performance, with up to 24.7% relative improvement in AUC on explaining graph classification over the leading baseline.
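The abstract describes parameterizing the explanation-generation process with a deep neural network that is shared across instances. The sketch below illustrates, in plain PyTorch, one way such a parameterized explainer could be structured: a single MLP maps the embeddings of an edge's endpoints to an edge-importance score, with a reparameterized relaxation so the mask stays differentiable during training. The class name, layer sizes, and relaxation details are illustrative assumptions, not the authors' reference implementation.

# Minimal sketch of a parameterized edge-mask explainer in the spirit of the
# paper, using plain PyTorch. Class names, layer sizes, and the relaxation
# below are illustrative assumptions, not the released code.
import torch
import torch.nn as nn

class EdgeMaskExplainer(nn.Module):
    """One shared MLP scores every edge, so a single set of parameters
    explains many instances collectively and transfers to unseen graphs."""

    def __init__(self, emb_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, node_emb, edge_index, temperature: float = 1.0):
        src, dst = edge_index                                  # edge_index: (2, num_edges)
        edge_feat = torch.cat([node_emb[src], node_emb[dst]], dim=-1)
        logits = self.mlp(edge_feat).squeeze(-1)
        if self.training:
            # Reparameterized logistic noise keeps the (approximately) discrete
            # mask differentiable, so the explainer can be trained end to end.
            u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
            logits = logits + torch.log(u) - torch.log(1 - u)
        return torch.sigmoid(logits / temperature)             # soft mask in (0, 1)

# Example: node embeddings from a trained GNN for 5 nodes and 4 edges.
node_emb = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 4]])
explainer = EdgeMaskExplainer(emb_dim=16)
edge_mask = explainer(node_emb, edge_index)    # one importance score per edge

In a full pipeline, the explainer would be trained to keep the GNN's prediction on the masked graph close to its original prediction, with additional regularization pushing the mask toward sparsity; those loss terms are omitted from this sketch.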

Publication Link: https://dl.acm.org/doi/10.5555/3495724.3497370