Dual Projection Generative Adversarial Networks for Conditional Image Generation

Publication Date: 10/11/2021

Event: ICCV 2021

Reference: pp. 14438-14447, 2021

Authors: Ligong Han, Rutgers University; Martin Renqiang Min, NEC Laboratories America, Inc.; Anastasis Stathopoulos, Rutgers University; Yu Tian, Rutgers University; Ruijiang Gao, University of Texas at Austin; Asim Kadav, NEC Laboratories America, Inc.; Dimitris Metaxas, Rutgers University

Abstract: Conditional Generative Adversarial Networks (cGANs) extend the standard unconditional GAN framework to learning joint data-label distributions from samples, and have been established as powerful generative models capable of generating high-fidelity imagery. A challenge of training such a model lies in properly infusing class information into its generator and discriminator. For the discriminator, class conditioning can be achieved by either (1) directly incorporating labels as input or (2) involving labels in an auxiliary classification loss. In this paper, we show that the former directly aligns the class-conditioned fake-and-real data distributions P(image|class) (data matching), while the latter aligns data-conditioned class distributions P(class|image) (label matching). Although class separability does not directly translate to sample quality, and becomes a burden if classification itself is intrinsically difficult, the discriminator cannot provide useful guidance for the generator if features of distinct classes are mapped to the same point and thus become inseparable. Motivated by this intuition, we propose a Dual Projection GAN (P2GAN) model that learns to balance between data matching and label matching. We then propose an improved cGAN model with Auxiliary Classification that directly aligns the fake and real conditionals P(class|image) by minimizing their f-divergence. Experiments on a synthetic Mixture of Gaussians (MoG) dataset and a variety of real-world datasets including CIFAR100, ImageNet, and VGGFace2 demonstrate the efficacy of our proposed models.
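To make the two conditioning routes contrasted in the abstract concrete, here is a minimal numpy sketch (not the authors' implementation; all names and shapes are illustrative assumptions). The projection path injects the label via an inner product between a class embedding and the discriminator's features, while the auxiliary-classification path predicts a class distribution from the same features:

```python
import numpy as np

# Illustrative sketch only: toy linear heads standing in for a cGAN
# discriminator's two ways of using labels. Dimensions are arbitrary.
rng = np.random.default_rng(0)
num_classes, feat_dim = 10, 16

W_embed = rng.normal(size=(num_classes, feat_dim))  # class embeddings (projection path)
w_psi = rng.normal(size=feat_dim)                   # unconditional real/fake head
W_cls = rng.normal(size=(feat_dim, num_classes))    # auxiliary classifier head

def projection_logit(phi_x, y):
    """Route (1), data matching: the label enters the discriminator as an
    inner product between its embedding and the image features phi(x)."""
    return phi_x @ w_psi + W_embed[y] @ phi_x

def aux_class_log_probs(phi_x):
    """Route (2), label matching: an auxiliary classifier predicts
    log P(class|image) from the same features via a log-softmax."""
    z = phi_x @ W_cls
    return z - z.max() - np.log(np.exp(z - z.max()).sum())

phi_x = rng.normal(size=feat_dim)        # stand-in for learned features phi(x)
d = projection_logit(phi_x, y=3)         # scalar real/fake score for class 3
log_p = aux_class_log_probs(phi_x)       # length-10 class log-probabilities
```

P2GAN's contribution, per the abstract, is learning to balance these two signals rather than committing to either one; the sketch above only shows the two heads themselves.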

Publication Link: https://ieeexplore.ieee.org/document/9711101