Conditional GAN with Discriminative Filter Generation for Text-to-Video Synthesis

Publication Date: 8/10/2019

Event: IJCAI 2019

Reference: pp. 1997-2001, 2019

Authors: Yogesh Balaji, NEC Laboratories America, Inc., University of Maryland; Martin Renqiang Min, NEC Laboratories America, Inc.; Bing Bai, NEC Laboratories America, Inc.; Rama Chellappa, University of Maryland; Hans Peter Graf, NEC Laboratories America, Inc.

Abstract: Developing conditional generative models for text-to-video synthesis is an extremely challenging yet important research topic in machine learning. In this work, we address this problem by introducing the Text-Filter conditioning Generative Adversarial Network (TFGAN), a conditional GAN model with a novel multi-scale text-conditioning scheme that improves text-video associations. By combining the proposed conditioning scheme with a deep GAN architecture, TFGAN generates high-quality videos from text on challenging real-world video datasets. In addition, we construct a synthetic dataset of text-conditioned moving shapes to systematically evaluate our conditioning scheme. Extensive experiments demonstrate that TFGAN significantly outperforms existing approaches and can also generate videos of novel categories not seen during training.
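The core idea named in the title, generating discriminative convolutional filters from the text embedding and applying them to video features, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the embedding dimension, filter count, and the random weight matrix (standing in for a learned linear layer) are all illustrative assumptions, and the multi-scale aspect is reduced to a single scale for brevity.

```python
import numpy as np

def generate_filters(text_emb, n_filters, k, channels, rng):
    # Hypothetical filter-generation step: a linear map (random weights
    # here stand in for a learned layer) turns the text embedding into
    # a bank of n_filters convolutional filters of size k x k.
    W = rng.standard_normal((n_filters * channels * k * k, text_emb.size)) * 0.01
    return (W @ text_emb).reshape(n_filters, channels, k, k)

def conv2d_valid(feat, filters):
    # Correlate each text-generated filter with a frame's feature map
    # ("valid" mode, no padding), yielding text-conditioned responses.
    C, H, W = feat.shape
    F, _, k, _ = filters.shape
    out = np.empty((F, H - k + 1, W - k + 1))
    for f in range(F):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[f, i, j] = np.sum(feat[:, i:i + k, j:j + k] * filters[f])
    return out

rng = np.random.default_rng(0)
text_emb = rng.standard_normal(16)        # sentence embedding (assumed dim 16)
feat = rng.standard_normal((8, 10, 10))   # one frame's feature map (C=8, 10x10)
filters = generate_filters(text_emb, n_filters=4, k=3, channels=8, rng=rng)
response = conv2d_valid(feat, filters)    # responses used to score text-video match
print(response.shape)  # (4, 8, 8)
```

In the discriminator, such responses can be pooled into a scalar text-video matching score, which is the role the generated filters play in improving text-video associations.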

Publication Link: