AE-StyleGAN: Improved Training of Style-Based Auto-Encoders

Publication Date: 1/4/2022

Event: WACV 2022

Reference: pp. 3134-3143, 2022

Authors: Ligong Han, Rutgers University; Sri Harsha Musunuri, Rutgers University; Martin Renqiang Min, NEC Laboratories America, Inc.; Ruijiang Gao, The University of Texas at Austin; Yu Tian, Rutgers University; Dimitris Metaxas, Rutgers University

Abstract: StyleGANs have shown impressive results on data generation and manipulation in recent years, thanks to their disentangled style latent space. Much effort has been devoted to inverting a pretrained generator, where an encoder is trained ad hoc after the generator, in a two-stage fashion. In this paper, we focus on style-based generators and ask a scientific question: does forcing such a generator to reconstruct real data lead to a more disentangled latent space and make inversion from image to latent space easier? We describe a new methodology for training a style-based autoencoder in which the encoder and generator are optimized end-to-end. We show that our proposed model consistently outperforms baselines in both image inversion and generation quality. Supplementary material, code, and pretrained models are available on the project website.
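
To make the end-to-end idea concrete, below is a minimal PyTorch sketch of the joint optimization the abstract describes: the generator must reconstruct real images through the encoder's latent code, so both modules receive gradients in the same update. The Encoder and Generator architectures, latent dimension, image size, and loss are illustrative stand-ins, not the paper's actual StyleGAN-based setup or full training objective.

```python
# Sketch of end-to-end style-based autoencoder training: encoder and
# generator optimized jointly, rather than training the encoder post hoc
# on a frozen generator. All modules here are simplified placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 512   # assumed style-code dimension
IMG_SIZE = 64      # assumed image resolution

class Encoder(nn.Module):
    """Maps an image to a style code w (stand-in for a StyleGAN encoder)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * (IMG_SIZE // 4) ** 2, LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Maps a style code w back to an image (stand-in for StyleGAN synthesis)."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT_DIM, 128 * (IMG_SIZE // 4) ** 2)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, w):
        h = self.fc(w).view(-1, 128, IMG_SIZE // 4, IMG_SIZE // 4)
        return self.net(h)

encoder, generator = Encoder(), Generator()
# One optimizer over both modules: this is what makes the training
# end-to-end instead of two-stage.
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(generator.parameters()), lr=2e-4
)

def train_step(real_images):
    """One joint update: image -> style code -> reconstruction."""
    w = encoder(real_images)              # invert image into latent space
    recon = generator(w)                  # reconstruct real data from w
    loss = F.l1_loss(recon, real_images)  # pixel reconstruction term only;
                                          # the paper also uses adversarial
                                          # training, omitted here.
    opt.zero_grad()
    loss.backward()                       # gradients flow into both modules
    opt.step()
    return loss.item()

# Smoke test with random tensors standing in for a batch of real images.
print(train_step(torch.randn(4, 3, IMG_SIZE, IMG_SIZE)))
```

In a two-stage pipeline, the generator would be frozen before the encoder is fit to it; here the single optimizer over both parameter sets lets the reconstruction objective shape the generator's latent space directly, which is the question the paper investigates.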

Publication Link: https://ieeexplore.ieee.org/document/9707025