Abstract
This paper proposes a novel approach to robust plane matching and real-time RGB-D fusion based on a plane parameter space representation. In contrast to previous plane-based SLAM algorithms, which estimate correspondences for each plane pair independently, our method exploits the holistic topology of all relevant planes. We observe that, under this low-dimensional parameter space representation, plane matching can be naturally reformulated and solved as a point cloud registration problem. Beyond estimating plane correspondences, we contribute an efficient optimization framework that employs both frame-to-frame and frame-to-model planar consistency constraints. We further propose a global plane map that dynamically represents the reconstructed scene and alleviates the accumulation errors inherent in camera pose tracking. We validate the proposed algorithm on standard benchmark datasets and on additional challenging real-world environments. The experimental results demonstrate that it outperforms current state-of-the-art methods in tracking robustness and reconstruction fidelity.
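The reformulation described in the abstract can be illustrated with a minimal sketch: a plane in Hessian normal form n·x = d maps to a point (n, d) in a 4-D parameter space, so correspondences between the planes of two frames can be found by nearest-neighbour association, much as in point cloud registration. The function names and the greedy matching strategy below are illustrative assumptions for exposition only, not the paper's actual formulation.

```python
import numpy as np

def plane_params(normal, d):
    """Map a plane n.x = d (n need not be unit length) to a point in R^4."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return np.append(n, d)

def match_planes(src, dst):
    """Greedy nearest-neighbour plane matching in the 4-D parameter space.

    src, dst: lists of 4-D parameter points (one per detected plane).
    Returns a list of (src_index, dst_index) correspondence pairs,
    assigning each source plane to its closest unused destination plane.
    """
    src, dst = np.asarray(src), np.asarray(dst)
    pairs, used = [], set()
    for i, p in enumerate(src):
        dists = np.linalg.norm(dst - p, axis=1)  # Euclidean distance in R^4
        for j in np.argsort(dists):              # closest candidates first
            if int(j) not in used:
                used.add(int(j))
                pairs.append((i, int(j)))
                break
    return pairs
```

For example, a floor and two walls detected in a second frame in shuffled order (and with slightly perturbed parameters) are still matched to their counterparts, because each plane remains the nearest neighbour of its counterpart in parameter space.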
Acknowledgements
Funding was provided by NSFC (Grant Nos. 61972298, 61672390), the National Key Research and Development Program of China (Grant No. 2017YFB1002600), and the Key Technological Innovation Projects of Hubei Province (Grant No. 2018AAA062).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic supplementary material
Cite this article
Fu, Y., Yan, Q., Liao, J. et al. Real-time dense 3D reconstruction and camera tracking via embedded planes representation. Vis Comput 36, 2215–2226 (2020). https://doi.org/10.1007/s00371-020-01899-1