Channel-Recurrent Autoencoding for Image Modeling

Publication Date: 3/14/2018

Event: WACV 2018, Lake Tahoe, Nevada, USA

Reference: pp. 1195-1204, 2018

Authors: Wenling Shang, University of Amsterdam; Kihyuk Sohn, NEC Laboratories America, Inc.; Yuandong Tian, Facebook AI Research

Abstract: Despite recent successes in synthesizing faces and bedrooms, existing generative models struggle to capture more complex image types (Figure 1), potentially due to the oversimplification of their latent space constructions. To tackle this issue, building on Variational Autoencoders (VAEs), we integrate recurrent connections across channels into both the inference and generation steps, allowing high-level features to be captured in a global-to-local, coarse-to-fine manner. Combined with an adversarial loss, our channel-recurrent VAE-GAN (crVAE-GAN) outperforms VAE-GAN in generating a diverse spectrum of high-resolution images while maintaining the same level of computational efficiency. Our model produces interpretable and expressive latent representations that benefit downstream tasks such as image completion. Moreover, we propose two novel regularizations to enhance training: a KL objective weighting scheme over time steps and mutual information maximization between transformed latent variables and the outputs.

Publication Link: https://ieeexplore.ieee.org/document/8354240
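
The sketch below is a minimal PyTorch illustration, not the authors' implementation, of the channel-recurrent idea described in the abstract: the encoder's convolutional feature map is split into groups of channels, and an LSTM consumes the groups as a sequence, emitting the mean and log-variance of one latent block per time step so that earlier blocks can capture coarse, global structure and later ones finer detail. All names and sizes here (ChannelRecurrentLatent, n_blocks, block_dim, hidden, etc.) are illustrative assumptions.

    # Minimal sketch (not the paper's code) of a channel-recurrent latent encoder.
    import torch
    import torch.nn as nn

    class ChannelRecurrentLatent(nn.Module):
        def __init__(self, feat_channels=256, spatial=4, n_blocks=8, block_dim=64, hidden=512):
            super().__init__()
            assert feat_channels % n_blocks == 0
            self.n_blocks = n_blocks
            group_feat = (feat_channels // n_blocks) * spatial * spatial
            # LSTM runs across channel groups, treated as a sequence of length n_blocks
            self.lstm = nn.LSTM(group_feat, hidden, batch_first=True)
            self.to_mu = nn.Linear(hidden, block_dim)
            self.to_logvar = nn.Linear(hidden, block_dim)

        def forward(self, feat):
            # feat: (B, C, H, W) convolutional feature map from the encoder
            b, c, h, w = feat.shape
            # split channels into n_blocks groups -> sequence of flattened group features
            seq = feat.view(b, self.n_blocks, c // self.n_blocks, h, w).flatten(2)
            out, _ = self.lstm(seq)                               # (B, n_blocks, hidden)
            mu, logvar = self.to_mu(out), self.to_logvar(out)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
            return z, mu, logvar                                  # z: (B, n_blocks, block_dim)

    if __name__ == "__main__":
        enc = ChannelRecurrentLatent()
        z, mu, logvar = enc(torch.randn(2, 256, 4, 4))
        print(z.shape)  # torch.Size([2, 8, 64])

Because each time step yields its own mean and log-variance, the per-step KL terms are naturally separable, which is where a weighting scheme over time steps such as the one proposed in the paper would attach.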