Abstract

We propose a method for depth estimation using a tilted retroreflective structure, which consists of a beam splitter and a micro corner-cube array. Existing depth estimation methods commonly use an active light source to detect the depth of an object; however, sunlight interferes with such measurements and degrades their accuracy outdoors. The proposed method requires no active light source because depth information is obtained in the image domain rather than in the object domain: depth-of-field imaging by the retroreflector directly indicates the depth position of the object image. We believe the proposed method can be applied to depth measurement systems such as LIDAR and time-of-flight cameras. An experiment is performed and its results are compared with theoretical calculations, confirming the feasibility of the proposed method.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Corrections

1 March 2018: A typographical correction was made to the title.


Supplementary Material (2)

Visualization 1: Depth-of-field imaging for an object position at 200 mm
Visualization 2: Depth-of-field imaging for an object position at 250 mm


Figures (9)

Fig. 1 Scheme of the system for the proposed method. It consists of a lens group, a retroreflective structure, and a camera. The object is imaged onto the front face of the retroreflective film by the lens group, and the image is then re-formed by the retroreflective film. The optical axis is folded by a beam splitter, and the image is recorded by the camera, which captures the entire area of the retroreflective film together with the object image. The depth of the object is measured from the position where the MTF of the captured image is highest.
Fig. 2 Principle of the proposed method. The object image is reconstructed by the tilted retroreflector. The captured image contains the focused object image and the focused area of the retroreflector, which lie in the same focal plane; the focused area therefore indicates the position of the object image.
Fig. 3 DoF of the system, divided into DoF1, DoF2, and DoF3. (a) DoF1 is the DoF of the camera within the tilted retroreflector range. (b) DoF2 is the DoF reconstructed by the retroreflector; the error introduced by the retroreflector is ignored. DoF3 is the DoF of the system and indicates the accuracy of the depth measurement.
Fig. 4 Relation between the distance from the object to the lens group and the DoF in the object domain. The DoF is narrowest when α is 1, which corresponds to the highest accuracy of the depth measurement.
Fig. 5 (a) Ray distribution of the retroreflector in two dimensions. (b) Relation between the distance from the image point to the retroreflector and the minimum waist width; the effective aperture size of the retroreflector is 120 μm. (c) MTF contour map for estimating the image quality produced by the retroreflector.
Fig. 6 Configuration of the experimental setup.
Fig. 7 Experimental results and their edge images. (a)-(e) Object images at 200, 225, 250, 275, and 300 mm. (f)-(j) Edge images of (a)-(e), respectively. The red and blue boxes indicate the areas used to calculate the MTF; they are chosen to minimize measurement error. Two videos show the depth measurements for (a) and (c) (see Visualization 1 and Visualization 2).
Fig. 8 MTF data at each object distance. The peak position gives the depth distance from the lens group (a minimal sketch of this peak-based readout follows the figure list).
Fig. 9 Comparison between the theoretical and experimental results.
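
The depth readout described in Figs. 1, 7, and 8 amounts to computing a contrast-based MTF over a chosen region of the captured image at each candidate position and then locating the peak. The Python sketch below illustrates that readout under assumed inputs; `roi_mtf`, `depth_from_mtf_peak`, and the ROI/position arguments are hypothetical names for illustration, not code published with the article.

```python
import numpy as np

def roi_mtf(image, roi):
    """Michelson contrast of a region of interest in a grayscale image,
    MTF = (I_max - I_min) / (I_max + I_min), as in Eq. (5)."""
    y0, y1, x0, x1 = roi                                   # hypothetical ROI bounds (rows, cols)
    patch = np.asarray(image, dtype=float)[y0:y1, x0:x1]
    i_max, i_min = patch.max(), patch.min()
    return (i_max - i_min) / (i_max + i_min + 1e-12)       # small term avoids division by zero

def depth_from_mtf_peak(images, roi, positions_mm):
    """Return the candidate depth (mm) whose image gives the highest MTF
    in the ROI, together with the full MTF curve (cf. Fig. 8)."""
    mtf_values = [roi_mtf(img, roi) for img in images]
    return positions_mm[int(np.argmax(mtf_values))], mtf_values
```

For example, `depth_from_mtf_peak(captured_stack, (100, 200, 150, 250), [200, 225, 250, 275, 300])` would return the distance in that list whose image is sharpest inside the box, mirroring how the red and blue boxes in Fig. 7 are evaluated.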

Tables (1)

Table 1 Specifications of the experimental setup.

Equations (8)

$$R = \frac{D\,\lvert d_{Focus} - d_{Defocus}\rvert}{2\, d_{Defocus}},\tag{1}$$

$$h(x, y) = \frac{1}{2\pi\sigma^{2}}\, e^{-\frac{x^{2} + y^{2}}{2\sigma^{2}}},\tag{2}$$

$$\sigma = kR,\tag{3}$$

$$I = R_{obj} \otimes h(x, y),\tag{4}$$

$$MTF = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}.\tag{5}$$

$$DoF_{1} = DoF_{2} = 2W\alpha = \lvert d_{Defocus\_far} - d_{Defocus\_near} \rvert.\tag{6}$$

$$O_{Near} = \frac{f_{Group2}\, G\, (d_{Far} - f_{Group1}) - f_{Group2}\, d_{Far}\, f_{Group1}}{(G - f_{Group2})(d_{Far} - f_{Group1}) - d_{Far}\, f_{Group1}}, \qquad
O_{Far} = \frac{f_{Group2}\, G\, (d_{Near} - f_{Group1}) - f_{Group2}\, d_{Near}\, f_{Group1}}{(G - f_{Group2})(d_{Near} - f_{Group1}) - d_{Near}\, f_{Group1}},\tag{7}$$

$$DoF_{3} = \lvert O_{Far} - O_{Near} \rvert.\tag{8}$$
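
For readers who wish to evaluate these expressions numerically, the sketch below chains Eqs. (1), (7), and (8): the blur-circle radius, the compound-lens conjugate relation, and the resulting system DoF. All parameter values are illustrative assumptions, not the specifications of the experimental setup in Table 1.

```python
def blur_radius(D, d_focus, d_defocus):
    """Blur-circle radius R of Eq. (1) for aperture D and the focused/defocused distances."""
    return D * abs(d_focus - d_defocus) / (2.0 * d_defocus)

def lens_group_conjugate(d, f1, f2, G):
    """Compound-lens relation of Eq. (7): maps a distance d on one side of a two-lens
    group (focal lengths f1 and f2, separation G) to its conjugate distance on the other side."""
    num = f2 * G * (d - f1) - f2 * d * f1
    den = (G - f2) * (d - f1) - d * f1
    return num / den

def dof3(d_near, d_far, f1, f2, G):
    """System depth of field DoF3 of Eq. (8), i.e. the depth-measurement accuracy."""
    o_near = lens_group_conjugate(d_far, f1, f2, G)   # O_Near is computed from d_Far
    o_far = lens_group_conjugate(d_near, f1, f2, G)   # O_Far is computed from d_Near
    return abs(o_far - o_near)

# Illustrative (assumed) parameters, all in millimetres:
print(blur_radius(D=10.0, d_focus=60.0, d_defocus=62.0))            # blur-circle radius
print(dof3(d_near=248.0, d_far=252.0, f1=50.0, f2=75.0, G=150.0))   # system DoF
```

The helper mirrors the algebraic form of Eq. (7) directly, so the branches for O_Near and O_Far differ only in which distance is passed in.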
