Abstract

Recently, we showed that light-field photography images can be interpreted as limited-angle cone-beam tomography acquisitions. Here, we use this property to develop a direct-space tomographic refocusing formulation that can refocus both unfocused and focused light-field images. We express the reconstruction as a convex optimization problem, which enables the use of a wide class of existing advanced tomographic algorithms and of various regularization terms that help suppress artifacts. The formulation also supports super-resolved reconstructions and the correction of the optical system’s limited frequency response (point spread function). We validate the method on numerical and real-world examples.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

References


  1. R. Ng, “Digital light field photography,” Ph.D. thesis, Stanford University (2006).
  2. E. Y. Lam, “Computational photography with plenoptic camera and light field capture: tutorial,” Journal of the Optical Society of America A 32, 2021–2032 (2015).
  3. M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, “Depth from combining defocus and correspondence using light-field cameras,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 2013), pp. 673–680.
  4. T. E. Bishop, S. Zanetti, and P. Favaro, “Light field super resolution,” in 2009 IEEE International Conference on Computational Photography (ICCP) (IEEE, 2009), pp. 1–9.
  5. S. Wanner and B. Goldluecke, “Spatial and angular variational super-resolution of 4D light fields,” Lecture Notes in Computer Science 7576, 608–621 (2012).
  6. K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, “Compressive light field photography using overcomplete dictionaries and optimized projections,” ACM Transactions on Graphics 32, 1 (2013).
  7. Y. Wang, G. Hou, Z. Sun, Z. Wang, and T. Tan, “A simple and robust super resolution method for light field images,” in 2016 IEEE International Conference on Image Processing (ICIP) (IEEE, 2016), pp. 1459–1463.
  8. Y. Yoon, H.-G. Jeon, D. Yoo, J.-Y. Lee, and I. S. Kweon, “Light-Field Image Super-Resolution Using Convolutional Neural Network,” IEEE Signal Processing Letters 24, 848–852 (2017).
  9. M. Rossi and P. Frossard, “Geometry-Consistent Light Field Super-Resolution via Graph-Based Regularization,” IEEE Transactions on Image Processing 27, 4207–4218 (2018).
  10. W. S. Chan, E. Y. Lam, M. K. Ng, and G. Y. Mak, “Super-resolution reconstruction in a computational compound-eye imaging system,” Multidimensional Systems and Signal Processing 18, 83–101 (2007).
  11. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Optics Express 21, 25418–25439 (2013).
  12. S. A. Shroff and K. Berkner, “Image formation analysis and high resolution image reconstruction for plenoptic imaging systems,” Applied Optics 52, D22 (2013).
  13. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2009), pp. 1–8.
  14. T. G. Georgiev and A. Lumsdaine, “Focused plenoptic camera and rendering,” Journal of Electronic Imaging 19, 021106 (2010).
  15. R. Ng, M. Levoy, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford University Technical Report CSTR 2005– (2005).
  16. R. Ng, “Fourier slice photography,” ACM Transactions on Graphics 24, 735 (2005).
  17. D. G. Dansereau, O. Pizarro, and S. B. Williams, “Linear Volumetric Focus for Light Field Cameras,” ACM Transactions on Graphics 34, 1–20 (2015).
  18. T. Georgiev, G. Chunev, and A. Lumsdaine, “Superresolution with the focused plenoptic camera,” in Computational Imaging IX, C. A. Bouman, I. Pollak, and P. J. Wolfe, eds. (SPIE, 2011).
  19. C. Herzog, O. de La Rochefoucauld, G. Dovillaire, X. Granier, F. Harms, X. Levecq, E. Longo, L. Mignard-Debise, and P. Zeitoun, “Comparison of reconstruction approaches for plenoptic imaging systems,” in Unconventional Optical Imaging, C. Fournier, M. P. Georges, and G. Popescu, eds. (SPIE, 2018), p. 104.
  20. N. Viganò, H. Der Sarkissian, C. Herzog, O. de la Rochefoucauld, R. van Liere, and K. J. Batenburg, “Tomographic approach for the quantitative scene reconstruction from light field images,” Optics Express 26, 22574 (2018).
  21. M. Lang, H. Guo, J. E. Odegard, C. S. Burrus, and R. O. Wells, “Noise reduction using an undecimated discrete wavelet transform,” IEEE Signal Processing Letters 3, 10–12 (1996).
  22. A. K. Louis, P. Maass, and A. Rieder, Wavelets: Theory and Applications, Pure and Applied Mathematics (Wiley, 1997).
  23. S. Baker and T. Kanade, “Limits on super-resolution and how to break them,” IEEE Transactions on Pattern Analysis and Machine Intelligence 24, 1167–1183 (2002).
  24. A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (IEEE, 1988).
  25. A. van der Sluis and H. van der Vorst, “SIRT- and CG-type methods for the iterative solution of sparse linear least-squares problems,” Linear Algebra and its Applications 130, 257–303 (1990).
  26. A. Chambolle and T. Pock, “A first-order primal-dual algorithm for convex problems with applications to imaging,” Journal of Mathematical Imaging and Vision 40, 120–145 (2010).
  27. E. Y. Sidky, J. H. Jørgensen, and X. Pan, “Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm,” Physics in Medicine and Biology 57, 3065–3091 (2012).
  28. S. Zhu, A. Lai, K. Eaton, P. Jin, and L. Gao, “On the fundamental comparison between unfocused and focused light field cameras,” Applied Optics 57, A1 (2018).
  29. T. M. Buzug, Computed Tomography (Springer-Verlag, 2008).
  30. M. Martínez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Advances in Optics and Photonics 10, 512 (2018).
  31. G. H. Golub and C. F. Van Loan, Matrix Computations, vol. 28 (Johns Hopkins University, 1996).
  32. J. W. Goodman, Introduction to Fourier Optics, Electrical and Computer Engineering: Communications and Signal Processing (McGraw-Hill, 1996), 2nd ed.
  33. W. J. Palenstijn, K. J. Batenburg, and J. Sijbers, “Performance improvements for iterative electron tomography reconstruction using graphics processing units (GPUs),” Journal of Structural Biology 176, 250–253 (2011).
  34. G. Kutyniok, Compressed Sensing (Cambridge University, 2012).
  35. D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras,” in 2013 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 1027–1034.
  36. A. Lumsdaine, T. G. Georgiev, and G. Chunev, “Spatial analysis of discrete plenoptic sampling,” Proceedings of SPIE 8299, 829909 (2012).
  37. S. Boyd and L. Vandenberghe, Convex Optimization (Cambridge University, 2004).


Figures (15)

Fig. 1. Comparison of the two light-field setups and their naming conventions: (a) unfocused light-field (ULF); (b) focused light-field (FLF).
Fig. 2. Projection geometries of the two light-field acquisition setups: (a) unfocused light-field (ULF); (b) focused light-field (FLF). At the distance z0, all the sub-aperture images coincide on the (s, t) axes in the ULF case, while they experience shifts Δs = (a/b)Δσ and Δt = (a/b)Δτ in the FLF case.
Fig. 3. Description of the “logos” synthetic case: (a) phantom used to simulate the light-field data, with a VOXEL logo at the acquisition focal distance z = 100 mm and two CWI logos at distances of 110 mm and 90 mm; (b) simulated data, with two sub-aperture images shown for clarity.
Fig. 4. Raw data from the Tarot Cards dataset. The data have been downscaled by a factor of 4 in the (s, t) coordinates and blurred with a known PSF.
Fig. 5. Experimental setups used for the acquisition of the ULF and FLF datasets “Tea”, “Letters”, and “Flower”: (a) ULF setup, using the Lytro Illum plenoptic camera; (b) FLF setup, built at Imagine Optic in Bordeaux (France).
Fig. 6. Raw data from the Tea dataset, acquired using a Lytro Illum plenoptic camera with the setup from Fig. 5(a).
Fig. 7. Raw data for the Letters case: (a) micro-image representation; (b) sub-aperture representation. In both cases, two zoomed insets are shown for clarity. The inversion observed between (a) and (b) arises because FLF data mix spatial and angular information in the micro-images [36], and because the lenslets invert the images that form on their object-space focal plane.
Fig. 8. Raw data for the Flower case: (a) original (high-resolution) image used for this test case; (b) micro-image representation; (c) sub-aperture representation. In (b) and (c), two zoomed insets are shown for clarity.
Fig. 9. Performance comparison of different refocusing approaches for the “logos” synthetic test case: (a) phantom; (b) integration; (c) back-projection; (d) Fourier slice theorem; (e) SIRT without PSF; (f) Chambolle-Pock without PSF, with l2 data-divergence term and TV regularization with λ = 1; (g) SIRT with PSF; (h) Chambolle-Pock with PSF, l2 data-divergence term, and TV regularization with λ = 1.
Fig. 10. Root-mean-square errors (RMSEs) and computation times for each reconstruction approach from Fig. 9, for down-sampling factors 1, 2, and 4 of the sub-aperture images (the (s, t) coordinates of the light-field): (a) RMSE; (b) computation time; (c) scaling of the sub-aperture image resolution and of the reconstruction up-sampling factor against the down-sampling factor.
Fig. 11. Performance comparison of different refocusing approaches, at different levels of up-scaling (super-resolution), for the Tarot Cards case. The first row shows reconstructions at up-sampling = 1 and the second row at up-sampling = 4. The columns show the different approaches: (a) back-projection; (b) Fourier slice theorem; (c) SIRT with PSF; (d) Chambolle-Pock with PSF, l2 data-divergence term, and SWT Haar regularization with λ = 3.
Fig. 12. Performance comparison of different refocusing approaches for the full-resolution Tarot Cards case. One of the sub-aperture images is shown alongside the different reconstructions: (a) central sub-aperture image; (b) back-projection; (c) SIRT; (d) Chambolle-Pock with l2 data-divergence term and SWT Haar regularization with λ = 3.
Fig. 13. Performance comparison of different refocusing approaches, at different levels of up-scaling (super-resolution), for the Tea case. The first row shows reconstructions at up-sampling = 1 and the second row at up-sampling = 4. The columns show the different approaches: (a) back-projection; (b) Fourier slice theorem; (c) SIRT with PSF; (d) Chambolle-Pock with PSF, l2 data-divergence term, and SWT Haar regularization with λ = 1.
Fig. 14. Performance comparison of different refocusing approaches, at different levels of up-scaling (super-resolution), for the Letters case. The first row shows reconstructions at up-sampling = 1 and the second row at up-sampling = 2. The columns show the different approaches: (a) back-projection; (b) SIRT without PSF; (c) Chambolle-Pock with PSF, l2 data-divergence term, and TV regularization with λ = 100.
Fig. 15. Performance comparison of different refocusing approaches, at different levels of up-scaling (super-resolution), for the Flower case. The first row shows reconstructions at up-sampling = 1 and the second row at up-sampling = 2. The columns show the different approaches: (a) expected reconstruction for an ideal camera with an infinite-bandwidth optical response; (b) back-projection; (c) SIRT without PSF; (d) Chambolle-Pock with PSF, l2 data-divergence term, and SWT regularization with λ = 5.

Equations (31)

Equations on this page are rendered with MathJax.

\[ u_\mathrm{ULF} = \frac{z_1}{f_2}\,\sigma, \qquad v_\mathrm{ULF} = \frac{z_1}{f_2}\,\tau. \]
\[ s_\mathrm{FLF} = s_\mathrm{MLA} + \frac{a}{b}\,\sigma, \qquad t_\mathrm{FLF} = t_\mathrm{MLA} + \frac{a}{b}\,\tau, \]
\[ u_\mathrm{FLF} = \frac{z_1 + a}{b}\,\sigma, \qquad v_\mathrm{FLF} = \frac{z_1 + a}{b}\,\tau. \]
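
These coordinate relations are simple linear maps. The following is a minimal sketch with hypothetical parameter values (none taken from the paper), assuming the reconstructed forms of the equations above, in particular the (z_1 + a)/b factor for the FLF angular coordinates:

```python
import numpy as np

# Hypothetical optical parameters (mm); chosen only for illustration.
z1, f2 = 100.0, 0.5        # main-lens image distance and lenslet focal length (ULF)
a, b = 0.6, 0.5            # lenslet-to-sensor and image-to-lenslet distances (FLF)
s_mla, t_mla = 1.2, -0.8   # centre of the considered lenslet on the MLA plane

# Detector-plane offsets (sigma, tau) behind that lenslet.
sigma, tau = np.array([0.01, 0.03]), np.array([-0.02, 0.00])

# Unfocused light field (ULF): the offsets map only to main-lens (angular) coordinates.
u_ulf, v_ulf = (z1 / f2) * sigma, (z1 / f2) * tau

# Focused light field (FLF): the offsets shift both the spatial (s, t) and angular (u, v) coordinates.
s_flf, t_flf = s_mla + (a / b) * sigma, t_mla + (a / b) * tau
u_flf, v_flf = ((z1 + a) / b) * sigma, ((z1 + a) / b) * tau

print(u_ulf, s_flf, u_flf)
```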
\[ L(s_i, t_i, u, v) = \int_{\Omega_o} \delta\!\left( \frac{z}{z_0} s_o + \Big(1 - \frac{z}{z_0}\Big) u - M s_i,\; \frac{z}{z_0} t_o + \Big(1 - \frac{z}{z_0}\Big) v - M t_i \right) E(s_o, t_o, z)\, \mathrm{d}s_o\, \mathrm{d}t_o\, \mathrm{d}z, \]
\[ E(s_o, t_o, z) = \int_{\Omega_i} \delta\!\left( \frac{z}{z_0} s_o + \Big(1 - \frac{z}{z_0}\Big) u - M s_i,\; \frac{z}{z_0} t_o + \Big(1 - \frac{z}{z_0}\Big) v - M t_i \right) L(s_i, t_i, u, v)\, \mathrm{d}s_i\, \mathrm{d}t_i\, \mathrm{d}u\, \mathrm{d}v, \]
\[ \Delta s_{i,\Delta u} = \frac{a}{z_1 + a}\, \Delta u, \qquad \Delta t_{i,\Delta v} = \frac{a}{z_1 + a}\, \Delta v, \]
\[ \Delta s_{o,\Delta u} = M \Delta s_{i,\Delta u} = \frac{M a}{z_1 + a}\, \Delta u, \qquad \Delta t_{o,\Delta v} = M \Delta t_{i,\Delta v} = \frac{M a}{z_1 + a}\, \Delta v. \]
\[ L(s_i, t_i, u, v) = \int_{\Omega_o} \delta\!\left( \frac{z}{z_0} s_o + \Big(1 - \frac{z}{z_0} + \frac{M a}{z_1 + a}\Big) u - M s_i,\; \frac{z}{z_0} t_o + \Big(1 - \frac{z}{z_0} + \frac{M a}{z_1 + a}\Big) v - M t_i \right) E(s_o, t_o, z)\, \mathrm{d}s_o\, \mathrm{d}t_o\, \mathrm{d}z, \]
\[ E(s_o, t_o, z) = \int_{\Omega_i} \delta\!\left( \frac{z}{z_0} s_o + \Big(1 - \frac{z}{z_0} + \frac{M a}{z_1 + a}\Big) u - M s_i,\; \frac{z}{z_0} t_o + \Big(1 - \frac{z}{z_0} + \frac{M a}{z_1 + a}\Big) v - M t_i \right) L(s_i, t_i, u, v)\, \mathrm{d}s_i\, \mathrm{d}t_i\, \mathrm{d}u\, \mathrm{d}v. \]
\[ L(s_i, t_i, u, v) = A\big[ E(s_o, t_o, z) \big](s_i, t_i, u, v), \]
\[ E(s_o, t_o, z) = A^{*}\big[ L(s_i, t_i, u, v) \big](s_o, t_o, z), \]
\[ L(s_i, t_i, u, v) = \int_{\Omega_{o,z}} \delta\big( \alpha s_o + (1 - \alpha) u - M s_i,\; \alpha t_o + (1 - \alpha) v - M t_i \big)\, E(s_o, t_o, z)\, \mathrm{d}s_o\, \mathrm{d}t_o, \]
\[ E(s_o, t_o) = \int_{\Omega_i} \delta\big( \alpha s_o + (1 - \alpha) u - M s_i,\; \alpha t_o + (1 - \alpha) v - M t_i \big)\, L(s_i, t_i, u, v)\, \mathrm{d}s_i\, \mathrm{d}t_i\, \mathrm{d}u\, \mathrm{d}v, \]
\[ L(s_i, t_i, u, v) = \int_{\Omega_{o,z}} \delta\!\left( \alpha s_o + \Big(1 - \alpha + \frac{M a}{z_1 + a}\Big) u - M s_i,\; \alpha t_o + \Big(1 - \alpha + \frac{M a}{z_1 + a}\Big) v - M t_i \right) E(s_o, t_o, z)\, \mathrm{d}s_o\, \mathrm{d}t_o, \]
\[ E(s_o, t_o) = \int_{\Omega_i} \delta\!\left( \alpha s_o + \Big(1 - \alpha + \frac{M a}{z_1 + a}\Big) u - M s_i,\; \alpha t_o + \Big(1 - \alpha + \frac{M a}{z_1 + a}\Big) v - M t_i \right) L(s_i, t_i, u, v)\, \mathrm{d}s_i\, \mathrm{d}t_i\, \mathrm{d}u\, \mathrm{d}v. \]
\[ A_z \bar{x}_z = b, \]
\[ U_n A_z \bar{x}_z = b, \]
\[ P_z U_n A_z \bar{x}_z = b, \]
\[ \hat{x}_z = A_z^{T} b, \]
\[ \hat{x}_z = A_z^{T} U_n^{T} b, \]
\[ \hat{x}_z = \tilde{A}_z^{T} U_n^{T} b. \]
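
To make the discrete model concrete, the sketch below composes a toy dense projector A_z with stand-ins for the super-sampling operator U_n (modelled here as pixel binning) and for the PSF operator P_z (a Gaussian blur), then forms a plain back-projection refocus with the adjoint of the full model. All sizes, the binning/blur choices, and the random projector are assumptions for illustration; the paper's actual operators are GPU tomographic projectors (e.g., via the ASTRA toolbox [33]).

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)

n_det, n_views, n = 32, 5, 2   # detector pixels per view, sub-aperture views, super-sampling factor
n_vox = n_det * n              # refocused slice sampled n times finer than the detector

# Toy dense projector A_z: rows sample the refocused slice along (randomized) lines of sight.
A_z = rng.random((n_views * n_det * n, n_vox)) / n_vox

def apply_U(y_fine):
    """Stand-in for U_n: bin the finely sampled projections down to the detector resolution."""
    return y_fine.reshape(n_views * n_det, n).mean(axis=1)

def apply_Ut(y_coarse):
    """Adjoint of the binning operator (with the 1/n scaling of the mean)."""
    return np.repeat(y_coarse, n) / n

def apply_P(y):
    """Stand-in for P_z: a separable Gaussian blur acting on each sub-aperture image."""
    return gaussian_filter1d(y.reshape(n_views, n_det), sigma=1.0, axis=1).ravel()

x_true = rng.random(n_vox)
b = apply_P(apply_U(A_z @ x_true))        # simulated data  b = P_z U_n A_z x

# Back-projection refocus: adjoint of the full model (the Gaussian P_z is symmetric here).
x_bp = A_z.T @ apply_Ut(apply_P(b))
print(x_bp.shape)
```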
\[ \hat{x}_z = \operatorname*{arg\,min}_{x} \left\{ \| P_z U_n A_z x - b \|_2^2 \right\} \quad \text{subject to: } x \geq 0, \]
\[ \hat{x}_z = \operatorname*{arg\,min}_{x} \left\{ \| P_z U_n A_z x - b \|_2^2 + \lambda \| O x \|_1 \right\} \quad \text{subject to: } x \geq 0, \]
\[ \mathrm{RMSE}(x_z) = \sqrt{ \frac{ \sum_{n=1}^{N} \big( (x_z)_n - (\hat{x}_z)_n \big)^2 }{N} } \]
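
The RMSE figure of merit translates directly into a short helper (a trivial sketch, assuming both images are given as arrays on the same grid):

```python
import numpy as np

def rmse(x_ref, x_rec):
    """Root-mean-square error between a reference slice and its reconstruction."""
    x_ref, x_rec = np.ravel(np.asarray(x_ref, float)), np.ravel(np.asarray(x_rec, float))
    return np.sqrt(np.mean((x_ref - x_rec) ** 2))
```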
\[
\begin{aligned}
x_z^{(k+1)} &= \mathrm{pos}\!\left( x_z^{(k)} + D_2 \tilde{A}_z^{T} D_1 \big( b - \tilde{A}_z x_z^{(k)} \big) \right), \\
\tilde{A}_z &= P_z U_n A_z, \qquad \tilde{A}_z^{T} = A_z^{T} U_n^{T} P_z^{T}, \\
D_1 &= \operatorname{diag}\!\left( \frac{1}{|\tilde{A}_z|\, \mathbf{1}} \right), \qquad D_2 = \operatorname{diag}\!\left( \frac{1}{|\tilde{A}_z^{T}|\, \mathbf{1}} \right).
\end{aligned}
\]
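
A matrix-free sketch of this preconditioned SIRT loop, assuming callables A and At for the operator and its transpose and a non-negative projection operator, so that applying the operators to all-ones vectors yields the row/column sums that define D_1 and D_2 (a common matrix-free practice, assumed rather than taken from the paper):

```python
import numpy as np

def sirt(A, At, b, n_vox, n_iter=100, eps=1e-8):
    """Non-negative SIRT: x <- pos(x + D2 At(D1 (b - A x))), with D1, D2 the inverse row/column sums."""
    d1 = 1.0 / np.maximum(A(np.ones(n_vox)), eps)     # D1 = diag(1 / (|A| 1))
    d2 = 1.0 / np.maximum(At(np.ones(b.size)), eps)   # D2 = diag(1 / (|A^T| 1))
    x = np.zeros(n_vox)
    for _ in range(n_iter):
        x = np.maximum(x + d2 * At(d1 * (b - A(x))), 0.0)   # pos(.) enforces the non-negativity constraint
    return x
```

With the toy operators of the earlier sketch, A could be `lambda x: apply_P(apply_U(A_z @ x))` and At the corresponding adjoint.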
\[
\begin{aligned}
& x_z^{(0)} = 0, \quad \bar{x}^{(0)} = 0, \quad p_{\mathrm{d}}^{(0)} = 0, \quad p_{\mathrm{tv}}^{(0)} = 0, \quad D_2 = \operatorname{diag}\!\left( \frac{1}{|\tilde{A}_z^{T}|\, \mathbf{1} + 4\lambda} \right) \\
& \textbf{for } l := [0, L) \\
& \quad\; p_{\mathrm{d}}^{(l+1)} = \frac{ p_{\mathrm{d}}^{(l)} + D_1 \big( \tilde{A}_z \bar{x}^{(l)} - b \big) }{ \operatorname{diag}(\mathbf{1}) + D_1 } \\
& \quad\; p_{\mathrm{tv}}^{(l+1)} = \frac{ p_{\mathrm{tv}}^{(l)} + \tfrac{1}{2} \nabla \bar{x}^{(l)} }{ \max\!\big( 1,\, \big| p_{\mathrm{tv}}^{(l)} + \tfrac{1}{2} \nabla \bar{x}^{(l)} \big| \big) } \\
& \quad\; x_z^{(l+1)} = \mathrm{pos}\!\left( x_z^{(l)} - D_2 \tilde{A}_z^{T} p_{\mathrm{d}}^{(l+1)} + \lambda D_2 \operatorname{div}\!\big( p_{\mathrm{tv}}^{(l+1)} \big) \right) \\
& \quad\; \bar{x}^{(l+1)} = x_z^{(l+1)} + \big( x_z^{(l+1)} - x_z^{(l)} \big)
\end{aligned}
\]
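
A compact 2-D sketch of this primal-dual loop for the TV-regularized problem above, using forward differences for ∇ and their negative adjoint for div. The operator callables, the component-wise magnitude in the dual projection, and the fixed 1/2 step of the TV dual update are assumptions of this sketch, not the paper's exact implementation:

```python
import numpy as np

def grad(x):
    """Forward-difference gradient of a 2-D image, zero at the far boundary."""
    gx, gy = np.zeros_like(x), np.zeros_like(x)
    gx[:-1, :] = x[1:, :] - x[:-1, :]
    gy[:, :-1] = x[:, 1:] - x[:, :-1]
    return np.stack([gx, gy])

def div(p):
    """Divergence, the negative adjoint of grad."""
    px, py = p
    dx, dy = np.zeros_like(px), np.zeros_like(py)
    dx[0, :], dx[1:-1, :], dx[-1, :] = px[0, :], px[1:-1, :] - px[:-2, :], -px[-2, :]
    dy[:, 0], dy[:, 1:-1], dy[:, -1] = py[:, 0], py[:, 1:-1] - py[:, :-2], -py[:, -2]
    return dx + dy

def chambolle_pock_tv(A, At, b, shape, lam, n_iter=200, eps=1e-8):
    """Preconditioned Chambolle-Pock for  min_{x>=0}  ||A x - b||_2^2 + lam * TV(x)."""
    d1 = 1.0 / np.maximum(A(np.ones(shape)), eps)               # dual step for the data term (row sums)
    d2 = 1.0 / np.maximum(At(np.ones(b.shape)) + 4 * lam, eps)  # primal step, as in the listing above
    x = np.zeros(shape)
    x_bar = x.copy()
    p_d, p_tv = np.zeros(b.shape), np.zeros((2,) + shape)
    for _ in range(n_iter):
        p_d = (p_d + d1 * (A(x_bar) - b)) / (1.0 + d1)          # resolvent of the l2 data term
        q = p_tv + 0.5 * grad(x_bar)
        p_tv = q / np.maximum(1.0, np.abs(q))                   # projection onto the TV dual ball
        x_new = np.maximum(x - d2 * At(p_d) + lam * d2 * div(p_tv), 0.0)
        x_bar = 2 * x_new - x                                   # over-relaxation: x_bar = x_new + (x_new - x)
        x = x_new
    return x
```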
