Abstract
Autonomously exploring and densely reconstructing an unknown indoor scene is a nontrivial task in 3D scene reconstruction, and it is especially challenging for scenes composed of compact, intricately interconnected rooms with no prior knowledge. To address this issue, we use autonomous scanning to reconstruct multi-room scenes, aiming to produce a complete reconstruction in as few scans as possible. Built on a progressive discrete motion planning module, our submodular-based planning for automated scanning efficiently guides active scanning through Next-Best-View selection until the marginal gains diminish. The submodular formulation yields an approximately optimal solution to the Next-Best-View problem, which is NP-hard when no prior knowledge of the scene is available. Experiments show that our method significantly improves scanning efficiency for multi-room scenes while maintaining comparable reconstruction error.
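The greedy, diminishing-returns view selection described above can be illustrated with a short sketch. The following Python snippet is a minimal, hypothetical example that uses a set-coverage surrogate for information gain; the function greedy_nbv, the candidate views, the per-view coverage sets, and the min_gain stopping threshold are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical greedy Next-Best-View selection via submodular (coverage) maximization.
# Illustrative sketch only: candidate views, coverage sets, and the stopping threshold
# are assumptions for demonstration, not the authors' implementation.

def greedy_nbv(candidate_views, coverage, min_gain=10):
    """Greedily pick views until the best marginal coverage gain drops below min_gain.

    candidate_views: iterable of view identifiers.
    coverage: dict mapping a view id to the set of scene voxels it would observe.
    min_gain: stop when the best remaining view adds fewer than this many new voxels.
    """
    selected = []       # ordered scan plan
    observed = set()    # voxels covered so far
    remaining = set(candidate_views)

    while remaining:
        # Marginal gain of each remaining view given what is already observed.
        best_view = max(remaining, key=lambda v: len(coverage[v] - observed))
        best_gain = len(coverage[best_view] - observed)
        if best_gain < min_gain:   # diminishing returns: stop scanning
            break
        selected.append(best_view)
        observed |= coverage[best_view]
        remaining.remove(best_view)
    return selected, observed


# Toy usage: three candidate viewpoints with overlapping voxel coverage.
views = ["v1", "v2", "v3"]
cov = {"v1": set(range(0, 100)), "v2": set(range(80, 160)), "v3": set(range(150, 165))}
plan, covered = greedy_nbv(views, cov, min_gain=20)
print(plan, len(covered))  # ['v1', 'v2'] 160
```

For a monotone submodular objective such as coverage, this kind of greedy rule is known to achieve a (1 − 1/e) approximation of the optimum, which is the sense in which submodular planning offers an approximately optimal answer to the NP-hard Next-Best-View problem.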
Acknowledgments
This work was partially supported by the National Natural Science Foundation of China under Grant No. 61972458, and the Zhejiang Provincial Natural Science Foundation of China under Grant No. LZ23F020002. The authors would like to thank the anonymous reviewers for their helpful and valuable comments and suggestions.
Cite this paper
Miao, Y., Wang, H., Fan, R., Liu, F. (2024). A Submodular-Based Autonomous Exploration for Multi-Room Indoor Scenes Reconstruction. In: Sheng, B., Bi, L., Kim, J., Magnenat-Thalmann, N., Thalmann, D. (eds) Advances in Computer Graphics. CGI 2023. Lecture Notes in Computer Science, vol 14496. Springer, Cham. https://doi.org/10.1007/978-3-031-50072-5_9