Publication Date: July 23, 2019
Authors: Feng-Ju Chang, University of Southern California, NEC Laboratories America, Inc.; Xiang Yu, NEC Laboratories America, Inc.; Ram Nevatia, University of Southern California; Manmohan Chandraker, NEC Laboratories America, Inc., University of California, San Diego
Abstract: We address the challenging problem of generating facial attributes from a single image in an unconstrained pose. In contrast to prior works that largely consider generation on 2D near-frontal images, we propose a GAN-based framework to generate attributes directly on a dense 3D representation given by UV texture and position maps, resulting in photorealistic, geometrically consistent, and identity-preserving outputs. Starting from a self-occluded UV texture map obtained by applying an off-the-shelf 3D reconstruction method, we propose two novel components. First, a texture completion generative adversarial network (TC-GAN) completes the partial UV texture map. Second, a 3D attribute generation GAN (3DA-GAN) synthesizes the target attribute while producing an appearance consistent with the 3D face geometry and preserving identity. Extensive experiments on CelebA, LFW, and IJB-A show that our method achieves consistently higher attribute generation accuracy than prior methods, greater qualitative photorealism, and better preservation of face identity.
Publication Link: https://arxiv.org/pdf/1907.10202.pdf
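The abstract describes a two-stage pipeline: an off-the-shelf 3D reconstruction yields a partially visible (self-occluded) UV texture map, TC-GAN completes it, and 3DA-GAN then synthesizes the target attribute conditioned on 3D geometry. The sketch below illustrates only this data flow; the function bodies, UV resolution, and attribute encoding are hypothetical placeholders, not the authors' trained networks.

```python
import numpy as np

H, W = 256, 256  # assumed UV map resolution (not specified in the abstract)

def tc_gan_complete(partial_tex, vis_mask):
    """Stand-in for TC-GAN: fill self-occluded UV texels.
    This placeholder fills invisible texels with the mean visible color;
    the actual model is a trained texture-completion GAN."""
    filled = partial_tex.copy()
    mean_color = partial_tex[vis_mask].mean(axis=0)
    filled[~vis_mask] = mean_color
    return filled

def attr_gan_generate(complete_tex, position_map, attribute):
    """Stand-in for 3DA-GAN: synthesize the target attribute on the
    completed texture, conditioned on the UV position map (3D geometry).
    Placeholder returns the texture unchanged; the real network edits it."""
    return complete_tex

# Simulated inputs standing in for a 3D reconstruction's outputs.
rng = np.random.default_rng(0)
partial_tex = rng.random((H, W, 3)).astype(np.float32)
vis_mask = rng.random((H, W)) > 0.4          # ~60% of texels visible
position_map = rng.random((H, W, 3)).astype(np.float32)

completed = tc_gan_complete(partial_tex, vis_mask)
output = attr_gan_generate(completed, position_map, attribute="eyeglasses")
```

The key design point carried by this flow is that attribute synthesis operates on a complete, geometry-aligned UV representation rather than on a raw 2D image, which is what makes the output pose-consistent.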