Learning to Look around Objects for Top-View Representations of Outdoor Scenes

Publication Date: March 28, 2018

Event: arXiv

Reference: https://arxiv.org/abs/1803.10870v1

Authors: Samuel Schulter, NEC Laboratories America, Inc.; Menghua Zhai, University of Kentucky, NEC Laboratories America, Inc.; Nathan Jacobs, Computer Science, University of Kentucky; Manmohan Chandraker, NEC Laboratories America, Inc.

Abstract: Given a single RGB image of a complex outdoor road scene in the perspective view, we address the novel problem of estimating an occlusion-reasoned semantic scene layout in the top-view. This challenging problem requires an accurate understanding not only of the 3D geometry and semantics of the visible scene, but also of its occluded areas. We propose a convolutional neural network that learns to predict occluded portions of the scene layout by looking around foreground objects like cars or pedestrians. Instead of hallucinating RGB values, however, we show that directly predicting the semantics and depths in the occluded areas enables a better transformation into the top-view. We further show that this initial top-view representation can be significantly enhanced by learning priors and rules about typical road layouts from simulated or, if available, map data. Crucially, training our model does not require costly or subjective human annotations for occluded areas or the top-view, but rather uses readily available annotations for standard semantic segmentation. We extensively evaluate and analyze our approach on the KITTI and Cityscapes datasets.

Publication Link: https://arxiv.org/pdf/1803.10870v1.pdf
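The paper itself does not include code, but the geometric step the abstract describes, using predicted per-pixel semantics and depths to build a top-view representation, can be sketched compactly. The snippet below is a minimal illustration only, not the authors' implementation: the intrinsics (fx, fy, cx, cy), the grid extent and resolution, and the random `depth` and `semantics` arrays standing in for network predictions are all assumptions made for the example.

```python
import numpy as np

# Hypothetical stand-ins for the network's outputs (not from the paper):
# per-pixel depth and semantic labels for a perspective image of size H x W.
H, W = 256, 512
depth = np.random.uniform(2.0, 50.0, size=(H, W))  # meters along the optical axis
semantics = np.random.randint(0, 4, size=(H, W))   # e.g. 0=road, 1=sidewalk, 2=car, 3=other

# Assumed pinhole intrinsics: focal lengths and principal point in pixels.
fx, fy = 500.0, 500.0
cx, cy = W / 2.0, H / 2.0

# Back-project each pixel (u, v) with depth Z into camera coordinates:
# X = (u - cx) * Z / fx (lateral), Y = (v - cy) * Z / fy (height, growing
# downward), Z = depth (forward).
u, v = np.meshgrid(np.arange(W), np.arange(H))
X = (u - cx) * depth / fx
Y = (v - cy) * depth / fy
Z = depth

# Rasterize the 3D points onto a metric top-view grid (X lateral, Z forward).
res = 0.25                                        # meters per top-view cell
x_min, x_max, z_min, z_max = -20.0, 20.0, 0.0, 50.0
gw = int((x_max - x_min) / res)
gh = int((z_max - z_min) / res)
top_view = np.full((gh, gw), -1, dtype=np.int64)  # -1 marks unobserved cells

cols = ((X - x_min) / res).astype(int)
rows = ((Z - z_min) / res).astype(int)
valid = (cols >= 0) & (cols < gw) & (rows >= 0) & (rows < gh)
valid &= Y > -3.0  # drop points far above the camera (arbitrary cutoff)

# Scatter semantics into the grid; where several pixels land in one cell,
# the last write wins in this simple sketch.
top_view[rows[valid], cols[valid]] = semantics[valid]
```

Cells behind foreground objects remain unobserved (-1) after this projection, which is precisely the gap the paper's occlusion reasoning and learned road-layout priors are meant to fill.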
