Learning Gibbs-Regularized Pushforward Density Estimators with a Symmetric KL Objective

Publication Date: 10/11/2018

Event: BayLearn Symposium 2018, Menlo Park, CA USA

Reference: pp. 1-3, 2018

Authors: Nicholas Rhinehart, The Robotics Institute, Carnegie Mellon University, NEC Laboratories America, Inc.; Anqi Liu, Department of Computer Science, University of Illinois at Chicago, NEC Laboratories America, Inc.; Kihyuk Sohn, NEC Laboratories America, Inc.; Paul Vernaza, NEC Laboratories America, Inc.

Abstract: We claim that there is currently no satisfactory way to regularize a generative adversarial network (GAN): neither the generator nor the discriminator is particularly amenable to the imposition of inductive biases derived from domain knowledge. A generator is effectively a causal model of generation—one that usually bears no resemblance to the true generation process, which is most often unobserved or exceedingly difficult to model. Consider image generation: although it is plausible—e.g., from biological arguments—that convolutional neural networks constitute a good class of image classifiers, claiming CNNs are inherently well-suited to image generation is harder to justify. Likewise, it is clear that regularizing the discriminator is necessary to prevent trivial solutions; although recent methods have seen some success in applying generic smoothness regularizers to the discriminator [1, 5, 12], it is not obvious how to impose domain-specific structure on the discriminator in an optimal way.
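To make concrete what the abstract means by a "generic smoothness regularizer" on the discriminator, the following is a minimal sketch of a gradient-norm penalty in the spirit of the methods cited as [1, 5, 12]. The toy discriminator `D`, its parameters `w` and `b`, and the penalty target are illustrative assumptions for this sketch, not the method proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discriminator: a single tanh unit. (Illustrative assumption,
# not the paper's architecture.)
w = rng.normal(size=3)
b = 0.1

def D(x):
    """Toy discriminator output for input x."""
    return np.tanh(w @ x + b)

def grad_D(x):
    """Analytic gradient of D with respect to its input x:
    d/dx tanh(w.x + b) = (1 - tanh^2(w.x + b)) * w."""
    return (1.0 - np.tanh(w @ x + b) ** 2) * w

def gradient_penalty(xs, target=1.0):
    """Generic smoothness regularizer: penalize deviation of the
    input-gradient norm ||grad_x D(x)|| from a target value,
    averaged over a batch of sample points xs."""
    norms = np.array([np.linalg.norm(grad_D(x)) for x in xs])
    return np.mean((norms - target) ** 2)

# A batch of points at which to measure smoothness; in GAN training
# these are typically drawn near the data and generator distributions.
xs = rng.normal(size=(8, 3))
penalty = gradient_penalty(xs)
```

In a full training loop, `penalty` would be added (with a weighting coefficient) to the discriminator's adversarial loss; the abstract's point is that such penalties are generic, encoding no domain-specific structure.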
