Shuffle and Attend: Video Domain Adaptation
Publication Date: 8/23/2020
Event: ECCV 2020 – The 16th European Conference on Computer Vision, Glasgow, UK
Reference: https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123570664.pdf
Authors: Jinwoo Choi, Virginia Tech, NEC Laboratories America, Inc.; Gaurav Sharma, NEC Laboratories America, Inc.; Samuel Schulter, NEC Laboratories America, Inc.; Jia-Bin Huang, Virginia Tech
Abstract: We address the problem of domain adaptation in videos for the task of human action recognition. Inspired by image-based domain adaptation, we can perform video adaptation by aligning the features of frames or clips of source and target videos. However, equally aligning all clips is sub-optimal, as not all clips are informative for the task. As the first novelty, we propose an attention mechanism which focuses on more discriminative clips and directly optimizes for video-level (cf. clip-level) alignment. Since the backgrounds often differ widely between source and target, a model corrupted by source background cues adapts poorly to target-domain videos. To alleviate this, as a second novelty, we propose clip order prediction as an auxiliary task. The clip order prediction loss, when combined with the domain adversarial loss, encourages learning of representations which focus on the humans and objects involved in the actions rather than on the uninformative backgrounds, which differ widely between source and target. We empirically show that both components contribute positively to adaptation performance. We report state-of-the-art performance on two out of three challenging public benchmarks: two based on the UCF and HMDB datasets, and one on adaptation from Kinetics to the NEC-Drone dataset. We also support the intuitions and the quantitative results with qualitative analysis.
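The two ideas in the abstract — attending to discriminative clips for video-level alignment, and shuffling clips for an order-prediction auxiliary task — can be sketched as follows. This is a minimal NumPy illustration under assumed names and shapes, not the authors' implementation; in the paper these operate on learned clip features inside a network trained jointly with a domain adversarial loss.

```python
import numpy as np

def attention_weighted_video_feature(clip_features, clip_scores):
    """Aggregate per-clip features into one video-level feature.

    A softmax over per-clip (hypothetical) discriminativeness scores
    up-weights informative clips, so domain alignment is optimized at
    the video level rather than equally over all clips.
    """
    scores = np.asarray(clip_scores, dtype=float)
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    # Weighted sum over clips -> video-level feature used for alignment
    video_feature = weights @ np.asarray(clip_features, dtype=float)
    return video_feature, weights

def clip_order_sample(num_clips, rng):
    """Build one self-supervised clip-order-prediction sample.

    The clips of a video are shuffled; the permutation is the label a
    classifier must recover, which encourages features that track the
    actors and objects rather than the static background.
    """
    perm = rng.permutation(num_clips)
    return perm  # shuffled clip order = prediction target
```

With equal scores the attention reduces to a plain average, so the clip-level alignment of prior work is a special case of this video-level formulation:

```python
feat, w = attention_weighted_video_feature(
    [[1.0, 0.0], [3.0, 0.0], [5.0, 0.0]], [0.0, 0.0, 0.0])
# w is uniform, feat equals the mean clip feature [3.0, 0.0]
```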
Publication Link: https://www.ecva.net/papers/eccv_2020/papers_ECCV/html/1712_ECCV_2020_paper.php