Pseudo RGB-D for Self-Improving Monocular SLAM and Depth Prediction
Publication Date: 8/28/2020
Event: ECCV 2020 – The 16th European Conference on Computer Vision, Glasgow, UK
Reference: https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123560426.pdf
Authors: Lokender Tiwari, NEC Laboratories America, Inc.; Indraprastha Institute of Information Technology; Pan Ji, NEC Laboratories America, Inc.; Quoc-Huy Tran, NEC Laboratories America, Inc.; Bingbing Zhuang, NEC Laboratories America, Inc.; Saket Anand, Indraprastha Institute of Information Technology; Manmohan Chandraker, NEC Laboratories America, Inc.
Abstract: Classical monocular Simultaneous Localization And Mapping (SLAM) and the recently emerging convolutional neural networks (CNNs) for monocular depth prediction represent two largely disjoint approaches towards building a 3D map of the surrounding environment. In this paper, we demonstrate that coupling the two, by leveraging the strengths of each, mitigates the other's shortcomings. Specifically, we propose a joint narrow and wide baseline based self-improving framework, where on the one hand the CNN-predicted depth is leveraged to perform pseudo RGB-D feature-based SLAM, leading to better accuracy and robustness than the monocular RGB SLAM baseline. On the other hand, the bundle-adjusted 3D scene structures and camera poses from the more principled geometric SLAM are injected back into the depth network through novel wide baseline losses proposed for improving the depth prediction network, which then continues to contribute towards better pose and 3D structure estimation in the next iteration. We emphasize that our framework only requires unlabeled monocular videos in both training and inference stages, and yet is able to outperform state-of-the-art self-supervised monocular and stereo depth prediction networks (e.g., Monodepth2) and the feature-based monocular SLAM system (i.e., ORB-SLAM). Extensive experiments on the KITTI and TUM RGB-D datasets verify the superiority of our self-improving geometry-CNN framework.
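The self-improving loop described in the abstract alternates between geometric SLAM and depth-network refinement. A minimal sketch of that control flow is shown below; all function names and the dictionary standing in for the depth network are hypothetical placeholders for illustration, not the authors' implementation.

```python
# Hedged sketch of the self-improving framework's outer loop:
# CNN depth -> pseudo RGB-D SLAM -> wide-baseline losses -> improved CNN.
# Every function body here is a stand-in placeholder, not the paper's code.

def predict_depth(depth_net, frames):
    # Monocular depth prediction by the CNN (placeholder: flat depth maps).
    return [[1.0] * 4 for _ in frames]

def pseudo_rgbd_slam(frames, depths):
    # Feature-based SLAM run on RGB frames paired with CNN depth
    # ("pseudo RGB-D"); returns bundle-adjusted camera poses and
    # sparse 3D structure (placeholders here).
    poses = [i * 0.1 for i, _ in enumerate(frames)]
    points3d = [(0.0, 0.0, d[0]) for d in depths]
    return poses, points3d

def finetune_depth_net(depth_net, frames, poses, points3d):
    # Inject the bundle-adjusted poses and structure back into the depth
    # network via the proposed wide baseline losses; here a counter
    # stands in for the actual training updates.
    depth_net["updates"] += 1
    return depth_net

def self_improving_loop(frames, num_iters=3):
    depth_net = {"updates": 0}  # placeholder for the CNN's parameters
    for _ in range(num_iters):
        depths = predict_depth(depth_net, frames)
        poses, points3d = pseudo_rgbd_slam(frames, depths)
        depth_net = finetune_depth_net(depth_net, frames, poses, points3d)
    return depth_net

net = self_improving_loop(frames=["f0", "f1", "f2"])
```

The key design point the sketch captures is that only unlabeled monocular frames enter the loop: the SLAM output supervises the depth network, and the improved depth in turn improves SLAM in the next iteration.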
Publication Link: https://www.ecva.net/papers/eccv_2020/papers_ECCV/html/1363_ECCV_2020_paper.php
Supplemental Publication Link: https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123560426-supp.zip
Additional Publication Link: https://arxiv.org/pdf/2004.10681v1.pdf