Abstract

Synthetic aperture integral imaging (SAII) from monocular video captured along an arbitrary camera trajectory enables casual acquisition of three-dimensional information for scenes of any scale. This paper presents a novel algorithm for computational reconstruction and imaging in such an SAII system. Handling this unstructured input requires both dense geometry recovery and virtual view rendering; to reduce the computational cost and artifacts in both stages, we assume flat surfaces in homogeneous areas and take full advantage of per-frame edges that are accurately reconstructed beforehand. A dense depth map of each real view is first estimated by successively generating two complete depth maps, termed the smoothest-surface and densest-surface depth maps, both respecting local cues, and then merging them via Markov random field (MRF) global optimization. High-quality perspective images for any virtual camera array can then be synthesized simply by back-projecting the obtained closest surfaces into the new views. The pixel-level operations throughout most of the pipeline allow a high degree of parallelism. Simulation results show that the proposed approach is robust to view-dependent occlusions and to a lack of texture in the original frames, and that it can produce recognizable slice images at different depths.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement




Figures (11)

Fig. 1. Principle of the traditional SAII technique. A group of perspective images is recorded by a camera array and then projected inversely through a corresponding pinhole array. The projections on each depth plane are overlapped and accumulated, generating a slice image.
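The accumulation described in Fig. 1 can be written as a simple shift-and-average over the elemental images. Below is a minimal sketch of that classical computational reconstruction; the function name, the pinhole parameterization (camera offsets in scene units, focal length in pixels), and the use of scipy for subpixel shifting are our assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from scipy.ndimage import shift

def reconstruct_slice(images, positions, focal_px, depth):
    """Shift-and-average one slice image at the given reconstruction depth.

    images    : list of HxW or HxWx3 elemental images
    positions : list of (x, y) camera offsets in scene units
    focal_px  : focal length expressed in pixels
    depth     : reconstruction distance, same units as positions
    """
    acc = np.zeros(images[0].shape, dtype=np.float64)
    cnt = np.zeros(images[0].shape, dtype=np.float64)
    for img, (cx, cy) in zip(images, positions):
        # Disparity of a point on the depth plane as seen from a camera
        # displaced by (cx, cy): proportional to focal length / depth.
        dx = focal_px * cx / depth
        dy = focal_px * cy / depth
        s = (dy, dx) if img.ndim == 2 else (dy, dx, 0)
        acc += shift(img.astype(np.float64), s, order=1, cval=0.0)
        cnt += shift(np.ones(img.shape), s, order=1, cval=0.0)
    # Normalize by per-pixel overlap so partially covered borders stay valid.
    return acc / np.maximum(cnt, 1e-6)
```

Sweeping `depth` over a set of planes yields slice images such as those shown later in Fig. 11: regions whose true depth matches the plane add up coherently and appear in focus, while the rest blurs out.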
Fig. 2. Pipeline of the proposed SAII approach.
Fig. 3. Depth diffusion from two pixels $p_1$ and $p_2$ to the mid-pixel $p$. Point $P_w$ represents the projected isotropic interpolant, defining the diffused depth as $d_w = \frac{1}{2}(d_1 + d_2)$. $P_r$ is the assumed point on the principal surface (green), and $d_r$ is the corresponding real depth. $O$ is the camera center.
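A small worked example makes the gap between $P_w$ and $P_r$ concrete. Under the paper's flat-surface assumption, inverse depth varies linearly across the image for a planar surface, so the perspective-consistent depth at the mid-pixel is the harmonic mean of $d_1$ and $d_2$; treating that as $d_r$ is our reading of the figure, not a formula quoted from the text.

```python
def midpoint_depths(d1, d2):
    """Isotropic vs. perspective-consistent depth at the pixel midway
    between two edge pixels with depths d1 and d2 (planar surface)."""
    d_w = 0.5 * (d1 + d2)            # isotropic interpolant (point P_w)
    d_r = 2.0 * d1 * d2 / (d1 + d2)  # harmonic mean: 1/d linear in image
    return d_w, d_r

# A wall receding from 2 m to 6 m: isotropic interpolation overshoots.
print(midpoint_depths(2.0, 6.0))     # (4.0, 3.0)
```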
Fig. 4. Edge depth map and our obtained closest-surface depth map, together with the 3D points reconstructed by the isotropic and the proposed perspective diffusion methods. The floor and ceiling are enlarged for clearer comparison. Isotropic interpolation creates distorted surfaces, while ours produces smooth, flat results.
Fig. 5. Comparisons of the smoothest-surface, densest-surface, and optimal depth maps of the same view, as well as their reconstructed 3D points. The point clouds in (d), (f), and (i) are rotated for better visualization.
Fig. 6. Trajectory of the video camera (gray) and visualization of the 15 × 15 virtual camera array (green) used for each tested scene in our experiments. All virtual cameras have the same intrinsic parameters as the real camera.
Fig. 7. Comparisons of depth maps against the ground truth. We present (a) the depth maps, (b) relative error maps, and (c) the distribution measuring how much of the estimated depth has a relative error smaller than a given threshold. In (b), blue pixels have no depth, red pixels have an error larger than 0.01, green pixels have no ground-truth data, and pixels with an error between 0 and 0.01 are mapped to gray levels 0 to 255.
Fig. 8. Edge depth maps and our estimated dense depth maps for the real-world scenes.
Fig. 9. Synthesized images for a real view of the real-world scenes using (a) the smoothest-surface, (b) densest-surface, and (c) optimal depth maps. In terms of PSNR, (c) improves on (a) and (b) by 1.67 dB and 1.09 dB for Building, and by 4.08 dB and 1.81 dB for Boxes. The parts in red circles are enlarged for easier comparison. The original images are given in (d).
Fig. 10. Synthesized images for all virtual views (left) using our estimated depth maps, among which four images (highlighted in yellow) of each scene are enlarged (right).
Fig. 11. Slice images at three distances produced by our SAII approach. The focused regions marked by red circles are enlarged for better visualization.

Equations (8)


$$ d_x = \delta_1 \delta_2 \, d_r + \left( 1 - \delta_1 \delta_2 \right) \left( \delta_1 D_t^i(p_1) + \delta_2 D_t^i(p_2) \right) \tag{1} $$
$$ D_{t+1}^i(p) = \frac{\delta(d_x)\, d_x + \delta(d_y)\, d_y}{\delta} \tag{2} $$
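Read as one sweep of an iterative scheme, Eqs. (1)-(2) blend horizontal and vertical interpolants at every non-edge pixel. The sketch below keeps only that structure: edge depths stay fixed and all other pixels relax toward the average of their axis-wise interpolants. The validity weights $\delta$ and the perspective correction toward $d_r$ are simplified away, so this conveys the update pattern rather than the exact rule.

```python
import numpy as np

def diffuse_depth(D, is_edge, iters=200):
    """Jacobi-style diffusion of edge depths into homogeneous regions.

    D       : initial depth map (meaningful only where is_edge is True)
    is_edge : boolean mask of reliably reconstructed edge pixels
    """
    D = D.astype(np.float64).copy()
    for _ in range(iters):
        # Axis-wise interpolants d_x and d_y from the two neighbors
        # (np.roll wraps at the border; a real implementation would pad).
        d_x = 0.5 * (np.roll(D, 1, axis=1) + np.roll(D, -1, axis=1))
        d_y = 0.5 * (np.roll(D, 1, axis=0) + np.roll(D, -1, axis=0))
        D = np.where(is_edge, D, 0.5 * (d_x + d_y))
    return D
```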
$$ C_{\mathrm{smoothest}}^i(p) = \begin{cases} 0, & \text{if } s(p) < s_{\mathrm{left}}(p) \text{ or } s(p) < s_{\mathrm{right}}(p) \\ 1 - \exp\!\left( -\dfrac{\min\!\left( \left| s(p) - s_{\mathrm{left}}(p) \right|^2, \left| s(p) - s_{\mathrm{right}}(p) \right|^2 \right)}{2 \sigma_s^2} \right), & \text{otherwise} \end{cases} \tag{3} $$
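Implemented directly from the reconstructed form of Eq. (3), the confidence is zero wherever the pixel's surface score is beaten by a neighboring surface's score and grows toward one with the squared score gap otherwise; the precise meaning of $s$, $s_{\mathrm{left}}$, and $s_{\mathrm{right}}$ follows our reading of the equation.

```python
import numpy as np

def smoothest_confidence(s, s_left, s_right, sigma_s):
    """Per-pixel confidence of the smoothest-surface depth map, Eq. (3)."""
    gap = np.minimum((s - s_left) ** 2, (s - s_right) ** 2)
    conf = 1.0 - np.exp(-gap / (2.0 * sigma_s ** 2))
    return np.where((s < s_left) | (s < s_right), 0.0, conf)
```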
$$ (j^*, q^*) = \operatorname*{arg\,min}_{(j,q)} S^j(q), \ \text{s.t. } P_j^i\!\left( D_{\mathrm{smoothest}}^j(q) \right) = p; \qquad D_{\mathrm{densest}}^i(p) = T_{j^*}^i\!\left( D_{\mathrm{smoothest}}^{j^*}(q^*) \right) \tag{4} $$
$$ C_{\mathrm{densest}}^i(p) = C_{\mathrm{smoothest}}^{j^*}(q^*) \, c_{\mathrm{color}}(p, q^*), \qquad c_{\mathrm{color}}(p, q^*) = \exp\!\left( -\frac{\left\| I^i(p) - I^{j^*}(q^*) \right\|^2}{2 \sigma_c^2} \right) \tag{5} $$
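The color term of Eq. (5) is a plain Gaussian on the color difference between the target pixel and its source; a direct transcription, with assumed array conventions:

```python
import numpy as np

def color_confidence(color_i, color_j, sigma_c):
    """c_color of Eq. (5): near 1 when the colors of pixel p in view i
    and pixel q* in source view j* agree, decaying with their distance."""
    diff2 = np.sum((np.asarray(color_i, float) - np.asarray(color_j, float)) ** 2)
    return float(np.exp(-diff2 / (2.0 * sigma_c ** 2)))
```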
$$ D_{\mathrm{optimal}}^i = \operatorname*{arg\,min}_{Z} \; \lambda_{\mathrm{source}} \sum_j W_{\mathrm{source}}^j \left| Z - Z_{\mathrm{source}}^j \right| + \lambda_{\mathrm{flat}} \sum_{(x,y)} \left( \left| \frac{\partial Z}{\partial x} \right|_{(x,y)} + \left| \frac{\partial Z}{\partial y} \right|_{(x,y)} \right) + \lambda_{\mathrm{smooth}} \sum_{(x,y)} \left| \Delta Z \right|_{(x,y)} \tag{6} $$
$$ \left\{ Z_{\mathrm{source}}^1, Z_{\mathrm{source}}^2 \right\} = \left\{ D_{\mathrm{smoothest}}^i, D_{\mathrm{densest}}^i \right\}, \qquad \left\{ W_{\mathrm{source}}^1, W_{\mathrm{source}}^2 \right\} = \left\{ C_{\mathrm{smoothest}}^i, C_{\mathrm{densest}}^i \right\} \tag{7} $$
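To show how Eqs. (6)-(7) combine the two source maps, here is a simplified gradient-descent sketch: the L1 data terms are smoothed with a Charbonnier surrogate so plain gradient descent applies, and the two regularizers are collapsed into a single quadratic smoothness term. It illustrates the shape of the energy, not the MRF solver the paper uses.

```python
import numpy as np

def laplacian(Z):
    # 5-point Laplacian with wrap-around borders (fine for a sketch).
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
            + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)

def merge_depth_maps(Z_src, W_src, lam_src=1.0, lam_smooth=0.2,
                     iters=500, step=0.2, eps=1e-3):
    """Merge (smoothest, densest) depth maps weighted by their confidences.

    Z_src : pair of depth maps  (D_smoothest, D_densest)
    W_src : pair of confidences (C_smoothest, C_densest)
    """
    # Confidence-weighted average as initialization.
    Z = sum(W * Zs for Zs, W in zip(Z_src, W_src)) / (sum(W_src) + 1e-6)
    for _ in range(iters):
        g = np.zeros_like(Z)
        for Zs, W in zip(Z_src, W_src):
            # Charbonnier-smoothed |Z - Z_source| data term.
            g += lam_src * W * (Z - Zs) / np.sqrt((Z - Zs) ** 2 + eps ** 2)
        g -= lam_smooth * laplacian(Z)   # quadratic smoothness term
        Z = Z - step * g
    return Z
```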
$$ (j^*, q^*) = \operatorname*{arg\,min}_{(j,q)} T_j^{\nu}\!\left( D_{\mathrm{optimal}}^j(q) \right), \ \text{s.t. } P_j^{\nu}\!\left( D_{\mathrm{optimal}}^j(q) \right) = p; \qquad I^{\nu}(p) = I_{\mathrm{frame}}^{j^*}(q^*) \tag{8} $$
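Eq. (8) amounts to z-buffered forward warping: every real-view pixel is lifted to 3D with its optimal depth, reprojected into the virtual camera $\nu$, and the closest surface wins each target pixel. A compact sketch follows; the intrinsics and pose conventions (3x3 K matrices, 4x4 camera-to-world transforms) and all names are assumptions for illustration.

```python
import numpy as np

def synthesize_view(frames, depths, Ks, poses, K_v, pose_v, h, w):
    """Render a virtual view by back-projecting the closest surfaces.

    frames : list of HxWx3 real-view images     depths : their depth maps
    Ks     : per-frame 3x3 intrinsics           poses  : 4x4 cam-to-world
    K_v, pose_v : virtual camera; (h, w) output size
    """
    out = np.zeros((h, w, 3))
    zbuf = np.full((h, w), np.inf)
    T_wv = np.linalg.inv(pose_v)                     # world -> virtual cam
    for img, D, K, T_cw in zip(frames, depths, Ks, poses):
        hh, ww = D.shape
        u, v = np.meshgrid(np.arange(ww), np.arange(hh))
        rays = np.linalg.inv(K) @ np.stack([u.ravel(), v.ravel(),
                                            np.ones(u.size)])
        P_cam = rays * D.ravel()                     # lift with depth
        P_w = T_cw @ np.vstack([P_cam, np.ones(u.size)])
        P_v = (T_wv @ P_w)[:3]
        z = P_v[2]
        uv = K_v @ (P_v / z)                         # reproject
        x = np.round(uv[0]).astype(int)
        y = np.round(uv[1]).astype(int)
        ok = (z > 0) & (x >= 0) & (x < w) & (y >= 0) & (y < h)
        colors = img.reshape(-1, 3)
        for xi, yi, zi, c in zip(x[ok], y[ok], z[ok], colors[ok]):
            if zi < zbuf[yi, xi]:                    # closest surface wins
                zbuf[yi, xi] = zi
                out[yi, xi] = c
    return out
```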
