Abstract
This paper addresses the problem of reconstructing the surface shape of transparent objects. The difficulty of this problem stems from the viewpoint-dependent appearance of a transparent object, which quickly causes reconstruction methods tailored for diffuse surfaces to fail. In this paper, we introduce a fixed-viewpoint approach to dense surface reconstruction of transparent objects based on the refraction of light. We present a simple setup that allows us to alter the incident light paths before light rays enter the object, by immersing the object partially in a liquid, and we develop a method for recovering the object surface by reconstructing and triangulating such incident light paths. Our approach does not need to model the complex interactions of light as it travels through the object, nor does it assume any parametric form for the object shape or the exact number of refractions and reflections taking place along the light paths. It can therefore handle transparent objects with relatively complex shape and structure, and with an unknown and inhomogeneous refractive index. We also show that, for thin transparent objects, the proposed acquisition setup can be further simplified by adopting a single-refraction approximation. Experimental results on both synthetic and real data demonstrate the feasibility and accuracy of the proposed approach.
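To illustrate the triangulation step the abstract refers to, the following is a minimal Python/NumPy sketch (not the paper's implementation): given two reconstructed incident light paths that should meet at the same surface point, the point can be estimated as the midpoint of the shortest segment connecting the two rays. The function name, ray representation (origin plus unit direction), and the returned ray-to-ray gap are illustrative assumptions.

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Estimate the 3D point where two rays (origin o, direction d)
    approximately intersect, as the midpoint of their common perpendicular.
    Illustrative sketch; not the paper's implementation."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    c = d1 @ d2
    # Normal equations for minimising |(o1 + t1*d1) - (o2 + t2*d2)|^2.
    # The 2x2 system becomes singular if the rays are (near-)parallel.
    A = np.array([[1.0, -c],
                  [c, -1.0]])
    rhs = np.array([b @ d1, b @ d2])
    t1, t2 = np.linalg.solve(A, rhs)
    p1 = o1 + t1 * d1
    p2 = o2 + t2 * d2
    return 0.5 * (p1 + p2), np.linalg.norm(p1 - p2)  # point and residual gap
```

The residual gap between the two rays can serve as a simple sanity check on the reconstructed light paths: a large gap indicates that the two paths do not actually correspond to the same surface point.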
Notes
If the camera is calibrated w.r.t. the reference plane, it is straightforward to recover the visual ray of an image point, and two images are sufficient to construct the blue PBC. By using four images, as described in the main text, the PBC and the visual ray can be constructed even without calibrating the camera; we only need to calibrate the pattern poses, which is also required by the two-image method (see the sketch after these notes).
The transparent object can be inhomogeneous, i.e., its refractive index may vary across the interior of the object.
The depth map is defined as the z component for each 3D point.
Except facet 6 with a mean of 1.0442.
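As a concrete illustration of the note on pattern-pose calibration, the sketch below shows one way an incident light path could be recovered once a camera pixel has been matched to a pattern point under several calibrated pattern poses: lift each matched pattern point to 3D using its pose and fit a line through the resulting points. The function name, the (R, t) pose representation, and the SVD-based line fit are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def fit_light_path(pattern_points_2d, pattern_poses):
    """Fit a 3D line through the pattern points matched to one camera pixel
    under several calibrated pattern poses. Illustrative sketch.

    pattern_points_2d : list of (u, v) pattern-plane coordinates, one per pose
    pattern_poses     : list of (R, t) mapping pattern coordinates to world
    Returns a point on the fitted line and its unit direction.
    """
    pts = []
    for (u, v), (R, t) in zip(pattern_points_2d, pattern_poses):
        pts.append(R @ np.array([u, v, 0.0]) + t)   # lift pattern point to 3D
    pts = np.asarray(pts)
    centroid = pts.mean(axis=0)
    # Principal direction of the centred points = least-squares line direction.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]
```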
Acknowledgements
This project is supported by a grant from the Research Grants Council of Hong Kong (SAR), China, under Project HKU 718113E.
Additional information
Communicated by Yasutaka Furukawa.
Cite this article
Han, K., Wong, KY.K. & Liu, M. Dense Reconstruction of Transparent Objects by Altering Incident Light Paths Through Refraction. Int J Comput Vis 126, 460–475 (2018). https://doi.org/10.1007/s11263-017-1045-3