Abstract

Passive light field imaging generally relies on depth cues derived from the image structure, which compromises the robustness and accuracy of depth estimation in complex scenes. In this study, the two commonly used depth cues, defocus and correspondence, were analyzed using phase encoding instead of the image structure. The defocus cue, which is computed from spatial variation, is insensitive because the phase-encoded field is globally monotonic in the spatial dimension. In contrast, the correspondence cue is sensitive to the angular variance of the phase-encoded field, and the correspondence response has a single-peak distribution across the depth range. Based on this analysis, a novel active light field depth estimation method is proposed that directly uses the correspondence cue in the structured light field to search for unambiguous depths, so that no global optimization is required. Furthermore, the angular variance can be weighted according to the phase-encoding information to reduce the depth estimation uncertainty. Depth estimation of an experimental scene with rich colors demonstrated that the proposed method distinguishes different depth regions within each color segment more clearly and substantially improves phase consistency compared with the passive method, verifying its robustness and accuracy.
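The search strategy summarized above can be illustrated with a minimal sketch (a hypothetical implementation, not the authors' code): for each spatial sample, pick the refocus parameter α that minimizes the angular variance of the resampled phase-encoded field. Because this phase-based correspondence response is single-valleyed across depth, a direct per-pixel search over candidate values of α suffices and no global optimization is needed. The layout of `phi` (one row per angular view, one spatial axis for brevity), the linear-interpolation resampling, and the candidate set `alphas` are all assumptions made for illustration.

```python
import numpy as np

def depth_from_phase_variance(phi, u_coords, alphas):
    """Pick, per spatial sample, the shear alpha minimizing the angular
    variance of the resampled phase-encoded field (PEF).

    phi      : (n_u, n_s) phase maps, one row per angular view (assumed layout).
    u_coords : angular coordinate of each row of phi.
    alphas   : candidate refocus parameters.
    Returns the per-pixel index into `alphas` of the minimum-variance depth.
    """
    n_u, n_s = phi.shape
    s = np.arange(n_s)
    cost = np.empty((len(alphas), n_s))
    for k, alpha in enumerate(alphas):
        sheared = np.empty_like(phi, dtype=float)
        for i, u in enumerate(u_coords):
            # resample view i at the sheared positions s + u(1 - 1/alpha)
            sheared[i] = np.interp(s + u * (1.0 - 1.0 / alpha), s, phi[i])
        cost[k] = sheared.var(axis=0)  # angular variance of the resampled PEF
    # single-valleyed response: a plain argmin recovers the depth directly
    return cost.argmin(axis=0)
```

For a synthetic monotonic phase field whose views are shifted copies consistent with one candidate α, the argmin recovers that candidate at interior pixels.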

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References


  1. M. Levoy, “Light fields and computational imaging,” Computer 39(8), 46–55 (2006).
    [Crossref]
  2. I. Ihrke, J. Restrepo, and L. Mignard-Debise, “Principles of light field imaging briefly revisiting 25 years of research,” IEEE Signal Process. Mag. 33(5), 59–69 (2016).
  3. G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field image processing: an overview,” IEEE J. Sel. Top. Signal Process. 11(7), 926–954 (2017).
    [Crossref]
  4. B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24(3), 765–776 (2005).
    [Crossref]
  5. A. Levin, R. Fergus, F. Durand, and W. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26(3), 70 (2007).
    [Crossref]
  6. A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26(3), 69 (2007).
    [Crossref]
  7. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Technical Report CSTR (2005), pp. 1–11.
  8. S. Wanner and B. Goldluecke, “Globally consistent depth labeling of 4D light fields,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 41–48.
    [Crossref]
  9. C. Kim, H. Zimmer, Y. Pritch, A. Sorkine-Hornung, and M. Gross, “Scene reconstruction from high spatio-angular resolution light fields,” ACM Trans. Graph. 32(4), 73 (2013).
    [Crossref]
  10. M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, “Depth from combining defocus and correspondence using light-field cameras,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2013), pp. 673–680.
    [Crossref]
  11. H. Lin, C. Chen, S. B. Kang, and J. Yu, “Depth recovery from light field using focal stack symmetry,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2015), pp. 3451–3459.
    [Crossref]
  12. C. Hahne, A. Aggoun, V. Velisavljevic, S. Fiebig, and M. Pesch, “Refocusing distance of a standard plenoptic camera,” Opt. Express 24(19), 21521–21540 (2016).
    [Crossref] [PubMed]
  13. Y. Chen, X. Jin, and Q. Dai, “Distance measurement based on light field geometry and ray tracing,” Opt. Express 25(1), 59–76 (2017).
    [Crossref] [PubMed]
  14. Y. Zhang, H. Lv, Y. Liu, H. Wang, X. Wang, Q. Huang, X. Xiang, and Q. Dai, “Light-field depth estimation via epipolar plane image analysis and locally linear embedding,” IEEE Trans. Circ. Syst. Video Tech. 27(4), 739–747 (2017).
    [Crossref]
  15. Williem, I. K. Park, and K. M. Lee, “Robust light field depth estimation using occlusion-noise aware data costs,” IEEE Trans. Pattern Anal. Mach. Intell. 40(10), 2484–2497 (2018).
    [Crossref] [PubMed]
  16. T. Tao, Q. Chen, S. Feng, Y. Hu, and C. Zuo, “Active depth estimation from defocus using a camera array,” Appl. Opt. 57(18), 4960–4967 (2018).
    [Crossref] [PubMed]
  17. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recognit. 43(8), 2666–2680 (2010).
    [Crossref]
  18. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128–160 (2011).
    [Crossref]
  19. A. Gershun, “The light field,” J. Math. Phys. 18(1–4), 51–151 (1936) (translated by P. Moon and G. Timoshenko).
  20. E. H. Adelson and J. R. Bergen, “The plenoptic function and the elements of early vision,” in Computational models of visual processing (MIT, 1991), pp. 3–20.
  21. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of ACM SIGGRAPH (ACM, 1996), pp. 31–42.
  22. S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, “The lumigraph,” in Proceedings of ACM SIGGRAPH (ACM, 1996), pp. 43–54.
  23. X. Peng, Z. Yang, and H. Niu, “Multi-resolution reconstruction of 3-D image with modified temporal unwrapping algorithm,” Opt. Commun. 224(1–3), 35–44 (2003).
    [Crossref]
  24. Z. Cai, X. Liu, H. Jiang, D. He, X. Peng, S. Huang, and Z. Zhang, “Flexible phase error compensation based on Hilbert transform in phase shifting profilometry,” Opt. Express 23(19), 25171–25181 (2015).
    [Crossref] [PubMed]




Figures (14)

Fig. 1. Experimental scene: light field images under (a) uniform illumination and (b) structured illumination; (c) phase-encoded field; enlarged parts at upper right.
Fig. 2. Refocused phase maps related to (a) α1, (b) α2, and (c) α3; (d) absolute difference between (a) and (b); histogram of (d).
Fig. 3. Sheared PEF at different depths.
Fig. 4. Distribution curves across the depth range: (a) defocus response and (b) correspondence response during active light field depth estimation; (c) defocus response and (d) correspondence response during passive light field depth estimation.
Fig. 5. Sub-aperture phase maps related to (a) u1 and (b) u2; (c) cross sections of the two phase maps with an enlarged diagram.
Fig. 6. Flow chart of active light field depth estimation.
Fig. 7. Comparison of correspondence responses with weighted and unweighted angular variance.
Fig. 8. Passive light field depth estimation: depth maps by (a) defocus cue, (b) correspondence cue, and (c) combination of the two cues; (d) cross sections of the depth maps in (a), (b), and (c).
Fig. 9. Light field depth estimation with the SLF as input: depth maps by (a) defocus cue, (b) correspondence cue, and (c) combination of the two cues; (d) cross sections of the depth maps in (a), (b), and (c).
Fig. 10. Distribution curves of the correspondence response with weighted angular variance across the depth range.
Fig. 11. Active light field depth estimation: (a) depth map; (b) cross sections of the depth maps in (a), Fig. 9(c), and Fig. 8(c).
Fig. 12. Light field depth estimation for different color segments: (a) measured scene divided by colors, corresponding to blue, green, orange, vermilion, and yellow, from top to bottom; histograms of the depth map corresponding to each color segment using the (b) active and (c) passive methods.
Fig. 13. Depth estimation evaluation referring to phase encoding: sub-aperture absolute phase difference between u1 and u2 corresponding to the (a) active and (b) passive methods; angular variance of the resampled PEF corresponding to the (c) active and (d) passive methods.
Fig. 14. Experimental results of passive (first column) and active (second column) light field depth estimation for other measured scenes, as well as cross sections of the depth maps (third column).

Tables (2)

Table 1. Relevant data of defocus response computation.

Table 2. Relevant data of depth estimation evaluation (rad).

Equations (6)


(1)  L_\alpha(s, u) = L\!\left(s + u\left(1 - \frac{1}{\alpha}\right),\, u\right),
(2)  \bar{L}_\alpha(s) = \frac{1}{|N_u|} \sum_{u \in N_u} L_\alpha(s, u),
(3)  D_\alpha(s) = \frac{1}{|W_s|} \sum_{s' \in W_s} \left| \Delta_s \bar{L}_\alpha(s') \right|,
(4)  \sigma_\alpha^2(s) = \frac{1}{|N_u|} \sum_{u \in N_u} \left[ L_\alpha(s, u) - \bar{L}_\alpha(s) \right]^2,
(5)  C_\alpha(s) = \frac{1}{|W_s|} \sum_{s' \in W_s} \sigma_\alpha(s'),
(6)  \sigma_\alpha^2(s) = \frac{1}{|N_u|} \sum_{u \in N_u} \left\{ (|u| + 1) \left[ \phi_\alpha(s, u) - \bar{\phi}_\alpha(s) \right] \right\}^2.
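The six equations can be sketched numerically as follows. This is a hedged NumPy illustration reduced to one spatial and one angular dimension; the array layout, the linear-interpolation resampling, the window size, and the function names are assumptions, not the paper's implementation. Eq. (1) is taken as the standard light field shear L_α(s, u) = L(s + u(1 − 1/α), u), and passing weights = |u| + 1 to the phase deviations turns the plain angular variance of Eq. (4) into the weighted variance of Eq. (6).

```python
import numpy as np

def shear(L, alpha, u_coords):
    """Eq. (1): L_alpha(s, u) = L(s + u(1 - 1/alpha), u), via linear interpolation.
    L has shape (n_u, n_s): one row per angular view (assumed layout)."""
    n_u, n_s = L.shape
    s = np.arange(n_s)
    out = np.empty_like(L, dtype=float)
    for i, u in enumerate(u_coords):
        out[i] = np.interp(s + u * (1.0 - 1.0 / alpha), s, L[i])
    return out

def defocus_response(sheared, window=3):
    """Eqs. (2)-(3): window-averaged |spatial Laplacian| of the refocused image."""
    refocused = sheared.mean(axis=0)                    # Eq. (2): angular mean
    lap = np.abs(np.gradient(np.gradient(refocused)))   # |Delta_s L-bar_alpha(s)|
    return np.convolve(lap, np.ones(window) / window, mode="same")

def correspondence_response(sheared, window=3, weights=None):
    """Eqs. (4)-(5); with weights = |u| + 1 applied to phase deviations,
    the variance becomes the weighted form of Eq. (6)."""
    dev = sheared - sheared.mean(axis=0)                # L_alpha(s,u) - L-bar_alpha(s)
    if weights is not None:
        dev = weights[:, None] * dev                    # Eq. (6) weighting
    sigma = np.sqrt((dev ** 2).mean(axis=0))            # sigma_alpha(s)
    return np.convolve(sigma, np.ones(window) / window, mode="same")
```

As a sanity check, when all angular views are identical the angular variance vanishes, so the correspondence response is zero everywhere, weighted or not.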
