Learning Gibbs-Regularized Pushforward Density Estimators with a Symmetric KL Objective
We claim that there is currently no satisfactory way to regularize a generative adversarial network (GAN): neither the generator nor discriminator is particularly amenable to the imposition of inductive biases derived from domain knowledge. A generator is effectively a causal model of generation, one that usually bears no resemblance to the true generation process, which is most often unobserved or exceedingly difficult to model. Consider image generation: although it is plausible (e.g., from biological arguments) that convolutional neural networks constitute a good class of image classifiers, claiming CNNs are inherently well-suited to image generation is harder to justify. Likewise, it is clear that regularizing the discriminator is necessary to prevent trivial solutions; although recent methods have seen some success in applying generic smoothness regularizers to the discriminator [1, 5, 12], it is not obvious how to impose domain-specific structure on the discriminator in an optimal way.
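To make the notion of a "generic smoothness regularizer" concrete, the following is a minimal illustrative sketch (not the method proposed here) of a gradient-penalty-style term of the kind used in the cited work; the penalty weight lam and the interpolation scheme are assumptions made for the example.

import torch

def gradient_penalty(discriminator, real, fake, lam=10.0):
    # Interpolate between real and generated samples (assumed scheme for illustration).
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    x = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = discriminator(x)
    # Gradient of the discriminator output with respect to its input.
    grads, = torch.autograd.grad(scores.sum(), x, create_graph=True)
    # Penalize deviation of the input-gradient norm from 1, encouraging smoothness.
    return lam * ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

Such a penalty constrains how sharply the discriminator can vary, but it is agnostic to the data domain, which is precisely the limitation discussed above.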