Channel Recurrent Attention Networks for Video Pedestrian Retrieval

Publication Date: 11/30/2020

Event: ACCV 2020, Kyoto, Japan

Reference: pp. 1-20, 2020

Authors: Pengfei Fang, Australian National University, Data61; Pan Ji, NEC Laboratories America, Inc.; Jieming Zhou, Australian National University, Data61; Lars Petersson, Data61; Mehrtash T. Harandi, Monash University

Abstract: Full attention, which generates an attention value for each element of the input feature maps, has proven beneficial in visual tasks. In this work, we propose a fully attentional network, termed the channel recurrent attention network, for the task of video pedestrian retrieval. The main attention unit, channel recurrent attention, produces attention maps at the frame level by jointly leveraging spatial and channel patterns via a recurrent neural network. This channel recurrent attention is designed to build a global receptive field by recurrently receiving and processing the spatial vectors. A set aggregation cell is then employed to generate a compact video representation. Extensive experiments demonstrate the superior performance of the proposed deep network, which outperforms current state-of-the-art results on standard video person retrieval benchmarks, and a thorough ablation study confirms the effectiveness of the proposed units.
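To make the idea of "full attention" concrete, the sketch below illustrates one possible reading of the channel recurrent attention unit in PyTorch: the channel dimension is split into groups, each group's per-channel spatial vectors are fed sequentially to an LSTM so that every location's attention value is informed by recurrent (global) spatial context, and a sigmoid produces one attention value per element of the feature map. The class name, the group count, the LSTM width, and the mean-pooling used as a stand-in for the paper's set aggregation cell are all assumptions for illustration, not the authors' exact design.

```python
# Minimal, illustrative sketch only; hyperparameters and the aggregation step
# are assumptions, not the configuration reported in the paper.
import torch
import torch.nn as nn


class ChannelRecurrentAttentionSketch(nn.Module):
    """Produce one attention value per element of a frame-level feature map.

    The channels are split into groups; each group's spatial vectors are
    consumed sequentially by an LSTM, giving every spatial location a
    recurrent, globally informed context before attention is computed.
    """

    def __init__(self, channels: int, height: int, width: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        self.group_c = channels // groups
        spatial = height * width
        # Recurrently process the spatial vectors of one channel group.
        self.rnn = nn.LSTM(input_size=spatial, hidden_size=spatial, batch_first=True)
        self.proj = nn.Linear(spatial, spatial)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) frame-level feature map
        b, c, h, w = x.shape
        g = x.reshape(b * self.groups, self.group_c, h * w)  # sequence of spatial vectors
        out, _ = self.rnn(g)                                  # recurrent spatial context
        attn = torch.sigmoid(self.proj(out))                  # one value per element
        attn = attn.reshape(b, c, h, w)
        return x * attn                                       # full (element-wise) attention


# Toy usage: attend to each of T frames, then aggregate the clip with mean
# pooling as a simple stand-in for the set aggregation cell.
frames = torch.randn(2, 8, 256, 16, 8)                        # (B, T, C, H, W)
attention = ChannelRecurrentAttentionSketch(channels=256, height=16, width=8)
attended = torch.stack(
    [attention(frames[:, t]) for t in range(frames.size(1))], dim=1
)
clip_feature = attended.mean(dim=(1, 3, 4))                   # (B, C) compact video representation
```

The essential point the sketch tries to capture is that, unlike spatial-only or channel-only attention, the recurrent pass over spatial vectors lets each attention value depend on the whole frame rather than a local neighborhood, while remaining "full" in the sense of assigning one value per feature-map element.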

Publication Link: https://openaccess.thecvf.com/content/ACCV2020/html/Fang_Channel_Recurrent_Attention_Networks_for_Video_Pedestrian_Retrieval_ACCV_2020_paper.html