Coordinated Joint Multimodal Embeddings for Generalized Audio-Visual Zero-shot Classification and Retrieval of Videos

Publication Date: 3/2/2020

Event: WACV 2020, Snowmass Village, CO, USA

Reference: pp. 3240-3249, 2020

Authors: Kranti Kumar Parida, IIT Kanpur; Neeraj Matiyali, IIT Kanpur; Tanaya Guha, University of Warwick; Gaurav Sharma, NEC Laboratories America, Inc.

Abstract: We present an audio-visual multimodal approach to zero-shot learning (ZSL) for classification and retrieval of videos. ZSL has been studied extensively in the recent past, but primarily for the visual modality and for images. We demonstrate that both the audio and visual modalities are important for ZSL for videos. Since no dataset is currently available for studying this task, we also construct an appropriate multimodal dataset with 33 classes containing 156,416 videos, derived from an existing large-scale audio event dataset. We empirically show that adding the audio modality improves performance on both zero-shot classification and retrieval when using multimodal extensions of embedding learning methods. We also propose a novel method to predict the 'dominant' modality using a jointly learned modality attention network. We learn the attention in a semi-supervised setting and thus do not require any additional explicit labelling of the modalities. We provide qualitative validation of the modality-specific attention, which also successfully generalizes to unseen test classes.
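To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a coordinated audio-visual embedding with a jointly learned modality attention network, assuming precomputed audio and video features and word-vector class embeddings. All module names, feature dimensions, and the specific attention architecture are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumption, not the authors' code): audio and video
# features are projected into a shared embedding space, an attention
# network predicts per-sample modality weights, and unseen classes are
# scored by cosine similarity against projected class word vectors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioVisualZSLEmbedder(nn.Module):
    def __init__(self, audio_dim=128, video_dim=1024, class_dim=300, emb_dim=256):
        super().__init__()
        # Modality-specific projections into the shared embedding space.
        self.audio_proj = nn.Sequential(nn.Linear(audio_dim, emb_dim), nn.ReLU(),
                                        nn.Linear(emb_dim, emb_dim))
        self.video_proj = nn.Sequential(nn.Linear(video_dim, emb_dim), nn.ReLU(),
                                        nn.Linear(emb_dim, emb_dim))
        # Class-name (word-vector) projection into the same space.
        self.class_proj = nn.Linear(class_dim, emb_dim)
        # Attention net predicts weights for the two modalities per sample;
        # the larger weight indicates the 'dominant' modality.
        self.attn = nn.Sequential(nn.Linear(audio_dim + video_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 2))

    def forward(self, audio_feat, video_feat):
        a = F.normalize(self.audio_proj(audio_feat), dim=-1)
        v = F.normalize(self.video_proj(video_feat), dim=-1)
        w = torch.softmax(self.attn(torch.cat([audio_feat, video_feat], dim=-1)),
                          dim=-1)
        # Attention-weighted fusion of the two modality embeddings.
        fused = w[:, :1] * a + w[:, 1:] * v
        return F.normalize(fused, dim=-1), w

    def class_embeddings(self, class_vecs):
        return F.normalize(self.class_proj(class_vecs), dim=-1)

# Zero-shot classification: score videos against *unseen* class embeddings
# by cosine similarity; retrieval ranks videos by the same scores.
model = AudioVisualZSLEmbedder()
audio = torch.randn(4, 128)      # e.g. pooled audio features (assumed dims)
video = torch.randn(4, 1024)     # e.g. pooled CNN video features (assumed dims)
classes = torch.randn(33, 300)   # word vectors for the 33 classes (assumed)
emb, attn_weights = model(audio, video)
scores = emb @ model.class_embeddings(classes).t()   # (4, 33) similarities
pred = scores.argmax(dim=-1)
```

Scoring by cosine similarity in a shared space is what enables the zero-shot setting: unseen classes are handled purely through their word vectors, with no retraining, while the attention weights expose which modality dominated each prediction.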

Publication Link: https://ieeexplore.ieee.org/document/9093438

Secondary Publication Link: https://arxiv.org/pdf/1910.08732.pdf