Self-supervised Video Representation Learning with Cascade Positive Retrieval
Publication Date: 6/19/2022
Event: CVPR: Workshop on Learning with Limited Labelled Data for Image and Video Understanding
Reference: pp. 4070-4079, 2022
Authors: Cheng-En Wu, University of Wisconsin-Madison; Farley Lai, NEC Laboratories America, Inc.; Yu Hen Hu, University of Wisconsin-Madison; Asim Kadav, NEC Laboratories America, Inc.
Abstract: Self-supervised video representation learning has been shown to effectively improve downstream tasks such as video retrieval and action recognition. In this paper, we present Cascade Positive Retrieval (CPR), which successively mines positive examples with respect to the query for contrastive learning in a cascade of stages. Specifically, CPR exploits multiple views of a query example in different modalities, where an alternative view may help find another positive example that appears dissimilar in the query view. We explore the effects of possible CPR configurations in ablations, including the number of mining stages, the ratio of top similar examples selected in each stage, and progressive training with an incremental number of final Top-k selections. Overall mining quality is measured as the recall across training-set classes. CPR reaches a median class mining recall of 83.3%, outperforming previous work by 5.5%. Implementation-wise, CPR is complementary to pretext tasks and can be easily applied to previous work. When pretraining on UCF101, CPR consistently improves existing methods and achieves state-of-the-art R@1 of 56.7% and 24.4% in video retrieval as well as 83.8% and 54.8% in action recognition on UCF101 and HMDB51, respectively. The code is available at https://github.com/necla-ml/CPR.
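To make the cascade idea concrete, the following is a minimal sketch of multi-stage positive mining under stated assumptions; it is not the paper's implementation (see the GitHub repository for that). It assumes per-modality L2-normalized memory banks where row i of every bank describes the same example i, and the names cascade_positive_retrieval, banks, and select_ratios are illustrative only. Each stage ranks the surviving candidates by cosine similarity in one view and keeps a fraction of them, and the final Top-k survivors would serve as extra positives for a contrastive (e.g., InfoNCE) loss; the number of stages and the per-stage selection ratio correspond to the ablated hyperparameters mentioned in the abstract.

    import torch

    def cascade_positive_retrieval(query_feats, banks, select_ratios, topk):
        """Illustrative cascade positive mining sketch (not the official CPR code).

        query_feats   : list of L2-normalized query embeddings, one per view, each (D,)
        banks         : list of L2-normalized memory banks, one per view, each (N, D)
        select_ratios : fraction of remaining candidates kept after each stage
        topk          : number of final mined positives to return
        """
        n = banks[0].shape[0]
        candidates = torch.arange(n)                       # start from all bank entries
        for q, bank, ratio in zip(query_feats, banks, select_ratios):
            sims = bank[candidates] @ q                    # cosine similarity in this view
            keep = max(topk, int(ratio * candidates.numel()))
            order = sims.argsort(descending=True)[:keep]   # keep the most similar candidates
            candidates = candidates[order]                 # survivors seed the next stage
        return candidates[:topk]                           # final Top-k mined positives

    # Toy usage with two hypothetical views (e.g., an RGB view and a residual/flow view)
    rgb_bank = torch.nn.functional.normalize(torch.randn(1000, 128), dim=1)
    res_bank = torch.nn.functional.normalize(torch.randn(1000, 128), dim=1)
    positives = cascade_positive_retrieval([rgb_bank[0], res_bank[0]],
                                           [rgb_bank, res_bank],
                                           select_ratios=[0.1, 0.1], topk=5)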
Publication Link: https://ieeexplore.ieee.org/document/9857165
Additional Publication Link: https://arxiv.org/pdf/2201.07989.pdf