Publication Date: June 18, 2023
Event: CVPR 2023
Reference: pp. 21404-21414, 2023
Authors: Zhixiang Min, Stevens Institute of Technology, NEC Laboratories America, Inc.; Bingbing Zhuang, NEC Laboratories America, Inc.; Samuel Schulter, NEC Laboratories America, Inc.; Buyu Liu, NEC Laboratories America, Inc.; Enrique Dunn, Stevens Institute of Technology; Manmohan Chandraker, NEC Laboratories America, Inc.
Abstract: Monocular 3D object localization in driving scenes is a crucial task, but challenging due to its ill-posed nature. Estimating 3D coordinates for each pixel on the object surface holds great potential, as it provides dense 2D-3D geometric constraints for the underlying perspective-n-point (PnP) problem. However, high-quality ground-truth supervision is not available in driving scenes due to the sparsity and various artifacts of LiDAR data, as well as the practical infeasibility of collecting per-instance CAD models. In this work, we present NeurOCS, a framework that uses instance masks and 3D boxes as input to learn 3D object shapes by means of differentiable rendering, which in turn serves as supervision for learning dense object coordinates. Our approach rests on the insight that a category-level shape prior can be learned directly from real driving scenes, while properly handling single-view ambiguities. Furthermore, we study and make critical design choices to learn object coordinates more effectively from an object-centric view. Altogether, our framework achieves a new state of the art in monocular 3D localization, ranking 1st on the KITTI-Object benchmark among published monocular methods.
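The abstract's premise is that dense per-pixel 2D-3D correspondences turn object localization into a well-constrained PnP problem. As an illustration only (not the paper's implementation, which uses learned object coordinates and its own solver), the sketch below recovers an object pose from exact synthetic correspondences with a Direct Linear Transform followed by a projection onto the rotation manifold:

```python
import numpy as np

def dlt_pnp(obj_pts, img_pts, K):
    """Estimate an object pose [R|t] from 2D-3D correspondences via the
    Direct Linear Transform. Illustrative stand-in for the PnP step in a
    monocular 3D localization pipeline; assumes noise-free points."""
    # Normalize pixel coordinates with the camera intrinsics K.
    pts_h = np.hstack([img_pts, np.ones((len(img_pts), 1))])
    x = (np.linalg.inv(K) @ pts_h.T).T  # (N, 3), third component is 1

    # Each correspondence contributes two rows to A @ vec(P) = 0,
    # where P = [R|t] is the 3x4 pose matrix (12 unknowns).
    A = []
    for (X, Y, Z), (u, v, _) in zip(obj_pts, x):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)  # null-space vector, defined up to scale/sign

    # Fix the scale so the rotation part has unit singular values,
    # flip the sign if needed, then project onto SO(3) with an SVD.
    P /= np.mean(np.linalg.svd(P[:, :3], compute_uv=False))
    if np.linalg.det(P[:, :3]) < 0:
        P = -P
    U, _, Vt2 = np.linalg.svd(P[:, :3])
    return U @ Vt2, P[:, 3]  # rotation R, translation t
```

With many correspondences per object instance (one per surface pixel), the linear system is heavily over-determined, which is why dense object coordinates make the pose estimate robust.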