Publication Date: 2/7/2020
Event: AAAI 2020, New York, New York USA
Reference: pp 12434-12441, 2020
Authors: Taihong Xiao, University of California at Merced; Yi-Hsuan Tsai, NEC Laboratories America, Inc.; Kihyuk Sohn, NEC Laboratories America, Inc., Google; Manmohan Chandraker, NEC Laboratories America, Inc., UC San Diego; Ming-Hsuan Yang, University of California at Merced
Abstract: Data privacy has emerged as an important issue as data-driven deep learning has become an essential component of modern machine learning systems. For instance, machine learning systems face a potential privacy risk from the model inversion attack, whose goal is to reconstruct the input data from the latent representation of deep networks. Our work aims at learning a privacy-preserving and task-oriented representation to defend against such model inversion attacks. Specifically, we propose an adversarial reconstruction learning framework that prevents the latent representations from being decoded into the original input data. By simulating the expected behavior of the adversary, our framework is realized by minimizing the negative pixel reconstruction loss or the negative feature reconstruction (i.e., perceptual distance) loss. We validate the proposed method on face attribute prediction, showing that our method protects visual privacy with a small decrease in utility performance. In addition, we show the utility-privacy trade-off for different choices of the hyperparameter on the negative perceptual distance loss at training, allowing service providers to determine the right level of privacy protection for a given utility performance. Moreover, we provide an extensive study with different selections of features, tasks, and data to further analyze their influence on privacy protection.
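The adversarial objective sketched in the abstract can be illustrated with a toy example: an adversary decoder minimizes the pixel reconstruction loss, while the encoder is trained on the task loss plus the negative of that reconstruction loss. This is a minimal NumPy sketch under assumed shapes and a placeholder task loss; all names (`encoder`, `decoder`, `lam`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_enc):
    # Toy linear encoder producing a latent representation z.
    return x @ W_enc

def decoder(z, W_dec):
    # Simulated adversary: attempts to reconstruct x from z (model inversion).
    return z @ W_dec

def pixel_reconstruction_loss(x, x_hat):
    # Mean squared pixel reconstruction error.
    return float(np.mean((x - x_hat) ** 2))

# Toy data: a batch of 4 "images" flattened to 16 pixels (assumed shapes).
x = rng.standard_normal((4, 16))
W_enc = rng.standard_normal((16, 8))
W_dec = rng.standard_normal((8, 16))

z = encoder(x, W_enc)
x_hat = decoder(z, W_dec)

recon = pixel_reconstruction_loss(x, x_hat)
task_loss = 0.5  # placeholder utility (task) loss, purely illustrative
lam = 1.0        # utility-privacy trade-off hyperparameter (assumed)

# The adversary minimizes `recon`; the encoder minimizes the task loss
# plus the *negative* reconstruction loss, i.e. it drives `recon` up so
# the latent representation cannot be decoded back into the input.
encoder_objective = task_loss - lam * recon
```

In training, the two objectives would alternate: a gradient step on the decoder to reduce `recon`, then a step on the encoder to reduce `encoder_objective`, yielding the utility-privacy trade-off governed by `lam`.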