A Conditional GAN (cGAN) is a generative adversarial network in which the generation process is conditioned on additional input information. Both the generator and the discriminator receive this conditioning input (for example, a class label or a text description), so the model learns to produce samples that satisfy specific criteria, such as images with attributes specified by the condition.
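As a minimal illustration of this setup, the sketch below (in PyTorch, with illustrative layer sizes and a class-label condition) conditions both networks by concatenating a learned label embedding with their usual inputs. The class count, dimensions, and layer choices are assumptions made for the example, not a specific published model.

```python
# Minimal conditional GAN sketch: both networks take (input, label) pairs.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=100, num_classes=10, img_dim=28 * 28):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + num_classes, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, img_dim),
            nn.Tanh(),
        )

    def forward(self, z, labels):
        # Concatenate noise with the label embedding so generation depends on the condition.
        x = torch.cat([z, self.label_emb(labels)], dim=1)
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self, num_classes=10, img_dim=28 * 28):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(img_dim + num_classes, 256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(256, 1),
            nn.Sigmoid(),
        )

    def forward(self, img, labels):
        # The discriminator also sees the condition, so it judges (sample, condition) pairs.
        x = torch.cat([img, self.label_emb(labels)], dim=1)
        return self.net(x)

# Usage: sample images for a specific class.
G = Generator()
z = torch.randn(16, 100)
labels = torch.full((16,), 3, dtype=torch.long)  # condition: class 3
fake_images = G(z, labels)
```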

Posts

Conditional GAN with Discriminative Filter Generation for Text-to-Video Synthesis

Developing conditional generative models for text-to-video synthesis is an extremely challenging yet important topic of research in machine learning. In this work, we address this problem by introducing Text-Filter conditioning Generative Adversarial Network (TFGAN), a conditional GAN model with a novel multi-scale text-conditioning scheme that improves text-video associations. By combining the proposed conditioning scheme with a deep GAN architecture, TFGAN generates high-quality videos from text on challenging real-world video datasets. In addition, we construct a synthetic dataset of text-conditioned moving shapes to systematically evaluate our conditioning scheme. Extensive experiments demonstrate that TFGAN significantly outperforms existing approaches, and can also generate videos of novel categories not seen during training.
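The abstract describes conditioning the discriminator by generating filters from the text. A minimal sketch of that general idea follows: a text embedding is mapped to per-sample convolutional filters, which are then convolved with visual feature maps so that the response reflects text-video agreement. The module name, dimensions, and single-scale setup here are illustrative assumptions and do not reproduce TFGAN's exact multi-scale architecture.

```python
# Sketch of text-conditioned filter generation applied to a visual feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextFilterConditioning(nn.Module):
    def __init__(self, text_dim=256, feat_channels=64, kernel_size=3, num_filters=32):
        super().__init__()
        self.feat_channels = feat_channels
        self.kernel_size = kernel_size
        self.num_filters = num_filters
        # Map the text embedding to a flat vector that is reshaped into conv filters.
        self.filter_gen = nn.Linear(
            text_dim, num_filters * feat_channels * kernel_size * kernel_size
        )

    def forward(self, text_emb, feat_map):
        # text_emb: (B, text_dim); feat_map: (B, C, H, W)
        B = text_emb.size(0)
        filters = self.filter_gen(text_emb).view(
            B * self.num_filters, self.feat_channels,
            self.kernel_size, self.kernel_size,
        )
        # Grouped convolution applies each sample's own generated filters to its feature map.
        feat = feat_map.view(1, B * self.feat_channels, *feat_map.shape[2:])
        out = F.conv2d(feat, filters, padding=self.kernel_size // 2, groups=B)
        return out.view(B, self.num_filters, *feat_map.shape[2:])

# Usage: pool the filtered responses into a rough text-video agreement score.
cond = TextFilterConditioning()
text_emb = torch.randn(4, 256)             # encoded caption (illustrative)
frame_feats = torch.randn(4, 64, 16, 16)   # discriminator feature map for one frame
responses = cond(text_emb, frame_feats)    # (4, 32, 16, 16)
score = responses.mean(dim=(1, 2, 3))      # higher response ~ better text-video match
```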