Learning Cross-Modal Contrastive Features for Video Domain Adaptation

Publication Date: October 11, 2021

Event: ICCV 2021, Virtual

Reference: pp. 1-10, 2021

Authors: Donghyun Kim, Boston University, NEC Laboratories America, Inc.; Yi-Hsuan Tsai, NEC Laboratories America, Inc.; Bingbing Zhuang, NEC Laboratories America, Inc.; Xiang Yu, NEC Laboratories America, Inc.; Stan Sclaroff, Boston University; Kate Saenko, Boston University, MIT-IBM Watson AI Lab; Manmohan Chandraker, NEC Laboratories America, Inc.

Abstract: Learning transferable and domain adaptive feature representations from videos is important for video-related tasks such as action recognition. Existing video domain adaptation methods mainly rely on adversarial feature alignment, which was originally derived from the RGB image space. However, video data is usually associated with multi-modal information, e.g., RGB and optical flow, so it remains a challenge to design a better method that considers the cross-modal inputs under the cross-domain adaptation setting. To this end, we propose a unified framework for video domain adaptation that simultaneously regularizes cross-modal and cross-domain feature representations. Specifically, we treat each modality in a domain as a view and leverage contrastive learning with properly designed sampling strategies. As a result, our objectives regularize feature spaces that originally lack connections across modalities or alignment across domains. We conduct experiments on domain adaptive action recognition benchmark datasets, i.e., UCF, HMDB, and EPIC-Kitchens, and demonstrate the effectiveness of our components against state-of-the-art algorithms.
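
To make the cross-modal contrastive idea concrete, below is a minimal PyTorch sketch of an InfoNCE-style loss that treats the RGB and optical-flow embeddings of the same clip as a positive pair and all other clips in the batch as negatives. This is an illustrative example, not the authors' implementation: the function name, the encoder handles in the usage lines, and the temperature value are assumptions.

    import torch
    import torch.nn.functional as F

    def cross_modal_infonce(rgb_feats, flow_feats, temperature=0.07):
        # rgb_feats, flow_feats: (batch, dim) clip-level embeddings from
        # the RGB and optical-flow streams of the same batch of clips.
        # (Hypothetical helper; hyperparameters are assumptions.)

        # L2-normalize so dot products become cosine similarities.
        rgb = F.normalize(rgb_feats, dim=1)
        flow = F.normalize(flow_feats, dim=1)

        # Pairwise similarity between every RGB and every flow embedding.
        logits = rgb @ flow.t() / temperature  # (batch, batch)

        # Diagonal entries are the matching cross-modal positive pairs.
        targets = torch.arange(rgb.size(0), device=rgb.device)

        # Symmetrize over the RGB-to-flow and flow-to-RGB directions.
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

    # Hypothetical usage with two modality-specific encoders:
    # rgb_feats = rgb_encoder(rgb_clips)
    # flow_feats = flow_encoder(flow_clips)
    # loss = cross_modal_infonce(rgb_feats, flow_feats)

Pulling the paired modalities of one clip together while pushing apart mismatched clips is one way to realize the paper's "each modality as a view" framing; the paper additionally designs cross-domain sampling strategies, which this sketch does not cover.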

Publication Link: https://ieeexplore.ieee.org/document/9710306

Additional Publication Link: https://arxiv.org/pdf/2108.11974.pdf