Tripping through time: Efficient Localization of Activities in Videos

Publication Date: 9/11/2020

Event: BMVC 2020

Reference: pp. 1-14, 2020

Authors: Meera Hahn, Georgia Tech, NEC Laboratories America, Inc.; Asim Kadav, NEC Laboratories America, Inc.; James M. Rehg, Georgia Tech; Hans Peter Graf, NEC Laboratories America, Inc.

Abstract: Localizing moments in untrimmed videos via language queries is a new and interesting task that requires the ability to accurately ground language into video. Previous works have approached this task by processing the entire video, often more than once, to localize relevant activities. In real-world applications of this approach, such as video surveillance, efficiency is a key system requirement. In this paper, we present TripNet, an end-to-end system that uses a gated attention architecture to model fine-grained textual and visual representations in order to align text and video content. Furthermore, TripNet uses reinforcement learning to efficiently localize relevant activity clips in long videos by learning how to intelligently skip around the video. It extracts visual features from only a few frames to perform activity classification. In our evaluation over Charades-STA [14], ActivityNet Captions [26] and the TACoS dataset [36], we find that TripNet achieves high accuracy and saves processing time by only looking at 32-41% of the entire video.
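The gated-attention fusion mentioned in the abstract can be illustrated with a minimal sketch. This is a hypothetical NumPy example, not the paper's implementation: the sizes, variable names (`V`, `q`), and the sigmoid channel-gating followed by a dot-product clip score are all assumptions made for illustration.

```python
import numpy as np

# Hedged sketch of gated attention over video clips, assuming:
#   V: per-clip visual features, shape (T, D)
#   q: sentence (query) embedding, shape (D,)
# The query gates each visual channel, suppressing channels
# irrelevant to the text before scoring clips for alignment.

rng = np.random.default_rng(0)
T, D = 8, 16                      # 8 clips, 16-dim features (illustrative sizes)
V = rng.standard_normal((T, D))   # visual features per clip
q = rng.standard_normal(D)        # text query embedding

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

gate = sigmoid(q)                 # channel-wise gate derived from the query
V_gated = V * gate                # broadcast gate over all clips

# Score each gated clip against the query; in a full system an RL agent
# would use such scores to decide where in the video to skip next.
scores = V_gated @ q
best_clip = int(np.argmax(scores))
print("best clip index:", best_clip)
```

In the full TripNet system, a reinforcement-learning policy consumes such fused features to choose skip actions, so only a fraction of the video is ever processed.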

Publication Link: