Inductive and Unsupervised Representation Learning on Graph Structured Objects

Publication Date: April 30, 2020

Event: 8th International Conference on Learning Representations (ICLR 2020)

Reference: pp. 1-20, 2020

Authors: Lichen Wang (NEC Laboratories America, Inc.; Northeastern University); Bo Zong (NEC Laboratories America, Inc.); Qianqian Ma (Boston University); Wei Cheng (NEC Laboratories America, Inc.); Jingchao Ni (NEC Laboratories America, Inc.); Wenchao Yu (NEC Laboratories America, Inc.); Yanchi Liu (NEC Laboratories America, Inc.); Dongjin Song (NEC Laboratories America, Inc.); Haifeng Chen (NEC Laboratories America, Inc.); Yun Fu (Northeastern University)

Abstract: Inductive and unsupervised graph learning is a critical technique for predictive and information retrieval tasks where label information is difficult to obtain. It is also challenging to make graph learning inductive and unsupervised at the same time, as learning processes guided by reconstruction-error-based loss functions inevitably demand graph similarity evaluation, which is usually computationally intractable. In this paper, we propose a general framework, SEED (Sampling, Encoding, and Embedding Distributions), for inductive and unsupervised representation learning on graph structured objects. Instead of directly confronting the computational challenges raised by graph similarity evaluation, given an input graph, the SEED framework samples a number of subgraphs whose reconstruction errors can be efficiently evaluated, encodes the subgraph samples into a collection of subgraph vectors, and employs the embedding of the subgraph vector distribution as the output vector representation for the input graph. Through theoretical analysis, we demonstrate the close connection between SEED and graph isomorphism. Using public benchmark datasets, our empirical study suggests the proposed SEED framework achieves up to a 10% improvement compared with competitive baseline methods.
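The three-stage pipeline described in the abstract (sample subgraphs, encode each sample, embed the resulting distribution) can be sketched in a minimal, self-contained form. This is an illustrative toy only: the real SEED framework uses random-walk samples with earliest visit time and a learned autoencoder-based encoder, whereas the sketch below swaps in a simple random-walk sampler and a degree-histogram "encoder", and embeds the distribution by its empirical mean. All function names (`sample_subgraph`, `encode_subgraph`, `seed_embedding`) are hypothetical.

```python
import random
from collections import Counter

def sample_subgraph(adj, walk_length=5, rng=random):
    """Sample a subgraph via a short random walk over adjacency dict `adj`.

    Stand-in for SEED's sampling stage (the paper uses random walks
    with earliest visit time; this is a plain random walk).
    """
    node = rng.choice(list(adj))
    visited = [node]
    for _ in range(walk_length - 1):
        nbrs = adj[visited[-1]]
        if not nbrs:
            break
        visited.append(rng.choice(nbrs))
    nodes = set(visited)
    # Induced edges among the visited nodes (each undirected edge once)
    edges = {(u, v) for u in nodes for v in adj[u] if v in nodes and u < v}
    return nodes, edges

def encode_subgraph(nodes, edges, max_degree=4):
    """Encode a subgraph as a fixed-length degree histogram.

    Stand-in for SEED's learned encoder, used here only so the
    sketch is runnable without a trained model.
    """
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    hist = [0] * (max_degree + 1)
    for n in nodes:
        hist[min(deg[n], max_degree)] += 1
    return hist

def seed_embedding(adj, num_samples=100, seed=0):
    """Embed the distribution of subgraph vectors by its empirical mean.

    Stand-in for SEED's distribution-embedding stage.
    """
    rng = random.Random(seed)
    vecs = [encode_subgraph(*sample_subgraph(adj, rng=rng))
            for _ in range(num_samples)]
    k = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(k)]

# Toy input graph: a 4-cycle, as an adjacency dict
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
emb = seed_embedding(adj)  # fixed-length vector representing the whole graph
```

Because reconstruction error is only ever evaluated on small sampled subgraphs, each stage stays cheap even when comparing whole graphs would be intractable; two graphs are then compared through their distribution embeddings rather than directly.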

Publication Link: