Understanding Road Layout from Videos as a Whole
Publication Date: 6/16/2020
Event: CVPR 2020
Reference: pp. 4414-4423, 2020
Authors: Buyu Liu, NEC Laboratories America, Inc.; Bingbing Zhuang, NEC Laboratories America, Inc.; Samuel Schulter, NEC Laboratories America, Inc.; Pan Ji, NEC Laboratories America, Inc.; Manmohan Chandraker, NEC Laboratories America, Inc., UC San Diego
Abstract: In this paper, we address the problem of inferring the layout of complex road scenes from video sequences. To this end, we formulate it as a top-view road attribute prediction problem, and our goal is to predict these attributes for each frame both accurately and consistently. In contrast to prior work, we exploit three novel aspects: leveraging camera motion in videos, incorporating context cues, and exploiting long-term video information. Specifically, we introduce a model that aims to enforce prediction consistency in videos. Our model consists of one LSTM and one Feature Transform Module (FTM). The former implicitly incorporates the consistency constraint through its hidden states, and the latter explicitly takes camera motion into consideration when aggregating information along videos. Moreover, we propose to incorporate context information by introducing road participants, e.g., objects, into our model. When the entire video sequence is available, our model is also able to encode both local and global cues, e.g., information from both past and future frames. Experiments on two datasets show that: (1) incorporating either global or contextual cues improves prediction accuracy, and leveraging both gives the best performance; (2) introducing the LSTM and FTM modules improves prediction consistency in videos; (3) the proposed method outperforms the state of the art by a large margin.
Publication Link: https://ieeexplore.ieee.org/document/9157339
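
To make the abstract's architecture concrete, the following is a minimal, hypothetical PyTorch sketch of the general idea: per-frame top-view features are warped into the current frame's coordinates using the 2D ego-motion (a stand-in for the paper's Feature Transform Module) and then aggregated over time by a recurrent update before predicting per-frame road attributes. This is not the authors' implementation; all class names, the ego-motion format (rotation angle plus normalized translation), and the use of a GRU-style gated update in place of the paper's LSTM are assumptions made for illustration.

```python
# Illustrative sketch only; names, shapes, and the ego-motion convention are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureTransformModule(nn.Module):
    """Warps the previous top-view feature map into the current frame's coordinates
    using a 2D ego-motion [theta (rad), tx, ty], so features are spatially aligned
    before temporal aggregation (analogous in spirit to the paper's FTM)."""

    def forward(self, prev_feat, ego_motion):
        theta, tx, ty = ego_motion.unbind(dim=1)          # each (B,)
        cos, sin = torch.cos(theta), torch.sin(theta)
        # Batch of 2x3 affine matrices for grid_sample-based warping.
        affine = torch.stack(
            [torch.stack([cos, -sin, tx], dim=1),
             torch.stack([sin,  cos, ty], dim=1)], dim=1)  # (B, 2, 3)
        grid = F.affine_grid(affine, prev_feat.shape, align_corners=False)
        return F.grid_sample(prev_feat, grid, align_corners=False)


class VideoTopViewModel(nn.Module):
    """Per-frame encoder + motion-aligned recurrent aggregation + a head that
    predicts K top-view road attributes for every frame."""

    def __init__(self, in_ch=3, feat_ch=32, num_attributes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU())
        self.ftm = FeatureTransformModule()
        # GRU-style gated update standing in for the LSTM described in the abstract.
        self.update_gate = nn.Conv2d(2 * feat_ch, feat_ch, 3, padding=1)
        self.candidate = nn.Conv2d(2 * feat_ch, feat_ch, 3, padding=1)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(feat_ch, num_attributes))

    def forward(self, frames, ego_motions):
        # frames: (B, T, C, H, W); ego_motions: (B, T, 3), identity motion at t = 0.
        B, T = frames.shape[:2]
        hidden, outputs = None, []
        for t in range(T):
            feat = self.encoder(frames[:, t])
            if hidden is None:
                hidden = torch.zeros_like(feat)
            else:
                # Align the temporal memory to the current frame via camera motion.
                hidden = self.ftm(hidden, ego_motions[:, t])
            x = torch.cat([feat, hidden], dim=1)
            z = torch.sigmoid(self.update_gate(x))
            h_new = torch.tanh(self.candidate(x))
            hidden = (1 - z) * hidden + z * h_new
            outputs.append(self.head(hidden))
        return torch.stack(outputs, dim=1)  # (B, T, num_attributes)


if __name__ == "__main__":
    model = VideoTopViewModel()
    frames = torch.randn(2, 5, 3, 32, 32)
    ego = torch.zeros(2, 5, 3)            # identity motion for a quick smoke test
    print(model(frames, ego).shape)       # torch.Size([2, 5, 10])
```

The key design point the sketch tries to convey is the ordering of operations: the recurrent memory is first warped by the known camera motion and only then fused with the current frame's features, so temporal consistency is enforced in a common top-view coordinate frame rather than in raw image coordinates.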