Simultaneous Localization and Mapping (SLAM) is the computational problem, in robotics and computer vision, of building a map of an unknown environment while simultaneously estimating the sensor's pose within it. Uncalibrated SLAM refers to SLAM methods that do not rely on accurate prior calibration of the sensors (e.g., cameras or LiDAR). Instead, uncalibrated SLAM algorithms estimate the sensor calibration parameters jointly with the map and the device's trajectory.
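
One common way to formalize this joint estimation is as a self-calibrating bundle adjustment, in which the calibration parameters are optimized alongside poses and structure. The following objective is an illustrative sketch in our own notation, not a formula taken from the paper below:

    \min_{K,\, d,\, \{T_i\},\, \{X_j\}} \sum_{i,j} \| x_{ij} - \pi(K, d, T_i X_j) \|^2

Here K denotes the camera intrinsics, d the radial distortion parameters, T_i the pose of camera i, X_j the j-th 3D landmark, x_{ij} the observed pixel of landmark j in image i, and \pi the projection function that applies the distortion model after perspective projection.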


Degeneracy in Self-Calibration Revisited and a Deep Learning Solution for Uncalibrated SLAM

Self-calibration of camera intrinsics and radial distortion has a long history of research in the computer vision community. However, it remains rare to see real applications of such techniques to modern Simultaneous Localization And Mapping (SLAM) systems, especially in driving scenarios. In this paper, we revisit the geometric approach to this problem and provide a theoretical proof that explicitly shows the ambiguity between radial distortion and scene depth when two-view geometry is used to self-calibrate the radial distortion. In view of such geometric degeneracy, we propose a learning approach that trains a convolutional neural network (CNN) on a large amount of synthetic data. We demonstrate the utility of our proposed method by applying it as a checkerboard-free calibration tool for SLAM, achieving performance comparable or superior to previous learning-based and hand-crafted methods.
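
To illustrate the stated degeneracy informally, consider the one-parameter division model, a common parameterization of radial distortion (chosen here for concreteness; the abstract does not name the model used):

    x_u = \frac{x_d}{1 + \lambda \| x_d \|^2}

where x_d is the observed (distorted) image point, x_u its undistorted counterpart, and \lambda the distortion coefficient. In two-view geometry, a change in \lambda perturbs the undistorted rays in a way that can be traded off against a compensating change in the triangulated scene depths, so reprojection error alone cannot pin down \lambda; this is the ambiguity the paper makes precise.

Below is a minimal PyTorch-style sketch of the kind of learning approach the abstract describes: a small CNN regressing a distortion coefficient from a single image, supervised by synthetically distorted data. The architecture, names, and training details are illustrative assumptions, not the paper's actual network:

    import torch
    import torch.nn as nn

    class DistortionNet(nn.Module):
        # Illustrative CNN: regresses a single division-model coefficient
        # lambda from an RGB image. Not the paper's architecture.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),   # global pooling -> (N, 128, 1, 1)
            )
            self.head = nn.Linear(128, 1)  # scalar lambda estimate

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    # One synthetic-supervision training step: images rendered (or warped)
    # with a known coefficient lam_gt serve as labeled data.
    model = DistortionNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    images = torch.randn(8, 3, 128, 128)   # stand-in for synthetic images
    lam_gt = -0.4 * torch.rand(8, 1)       # known distortion coefficients
    loss = nn.functional.mse_loss(model(images), lam_gt)
    opt.zero_grad()
    loss.backward()
    opt.step()

Because the supervision comes from synthetic distortion with known ground truth, such a network sidesteps the two-view geometric ambiguity rather than resolving it geometrically, which is the motivation the abstract gives for the learning approach.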