Disentangled Recurrent Wasserstein Auto-Encoder
Publication Date: May 4, 2021
Event: ICLR 2021
Reference: pp. 1-21, 2021
Authors: Jun Han, PCG, Tencent; Martin Renqiang Min, NEC Laboratories America, Inc.; Ligong Han, Rutgers University; Li Erran Li, Alexa AI, Amazon; Xuan Zhang, Texas A&M University
Abstract: Learning disentangled representations leads to interpretable models and facilitates data generation with style transfer, and it has been extensively studied on static data such as images in an unsupervised learning framework. However, only a few works have explored unsupervised disentangled sequential representation learning, due to the challenges of generating sequential data. In this paper, we propose the recurrent Wasserstein Autoencoder (R-WAE), a new framework for generative modeling of sequential data. R-WAE disentangles the representation of an input sequence into static and dynamic factors (i.e., time-invariant and time-varying parts). Our theoretical analysis shows that R-WAE minimizes an upper bound of a penalized form of the Wasserstein distance between the model distribution and the sequential data distribution, and simultaneously maximizes the mutual information between the input data and each of the disentangled latent factors. This is superior to (recurrent) VAEs, which do not explicitly enforce mutual information maximization between input data and disentangled latent representations. When the number of actions in the sequential data is available as weak supervision, R-WAE is extended to learn a categorical latent representation of actions to improve its disentanglement. Experiments on a variety of datasets show that our models outperform other baselines under the same settings in terms of disentanglement and unconditional video generation, both quantitatively and qualitatively.
Publication Link: https://iclr.cc/virtual/2021/poster/3257
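For reference, a minimal sketch of the penalized Wasserstein auto-encoder objective (Tolstikhin et al., 2018) that the abstract's "penalized form of the Wasserstein distance" builds on; the sequential upper bound and the mutual-information terms specific to R-WAE are derived in the paper itself and are not reproduced here:

$$
D_{\mathrm{WAE}}(P_X, P_G) \;=\; \inf_{Q(Z \mid X) \in \mathcal{Q}} \; \mathbb{E}_{P_X}\, \mathbb{E}_{Q(Z \mid X)}\big[\, c\big(X, G(Z)\big) \,\big] \;+\; \lambda\, \mathcal{D}_Z\big(Q_Z, P_Z\big),
$$

where $c$ is a reconstruction cost (e.g., squared Euclidean distance), $G$ is the decoder, $Q_Z$ is the aggregated posterior, $P_Z$ is the latent prior, $\mathcal{D}_Z$ is a divergence between them (e.g., MMD or an adversarial estimate), and $\lambda > 0$ trades off the two terms. In the sequential setting described in the abstract, $Z$ would factor into a time-invariant (static) code and time-varying (dynamic) codes, one per time step.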