F. Liu, M. Gleicher, H. Jin, and A. Agarwala, “Content preserving warps for 3d video stabilization,” ACM Trans. Graph. 28, 1–9 (2009).

N. Sabater, G. Boisson, B. Vandame, P. Kerbiriou, F. Babon, M. Hog, R. Gendrot, T. Langlois, O. Bureller, A. Schubert, and V. Allié, “Dataset and pipeline for multi-view light-field video,” in IEEE Conf. Comput. Vis. Pattern Recognit. Workshops, (2017).

R. Rzeszutek and D. Androutsos, “A framework for estimating relative depth in video,” Comput. Vis. Image Und. 133, 15–29 (2015).

C. Bailer, M. Finckh, and H. Lensch, “Scale robust multi view stereo,” in European Conf. Comput. Vis., (2012).

S. Vedula, S. Baker, and T. Kanade, “Image-based spatio-temporal modeling and view interpolation of dynamic events,” ACM Trans. Graph. 24, 240–261 (2005).

G. Zhang, J. Jia, T. T. Wong, and H. Bao, “Consistent depth maps recovery from a video sequence,” IEEE Trans. Pattern Anal. Mach. Intell. 31, 974–988 (2009).

S. Hong, A. Dorado, G. Saavedra, J. C. Barreiro, and M. Martinez-Corral, “Three-dimensional integral-imaging display from calibrated and depth-hole filtered kinect information,” J. Disp. Technol. 12, 1301–1308 (2016).

D. C. Schedl, C. Birklbauer, and O. Bimber, “Optimized sampling for view interpolation in light fields using local dictionaries,” Comput. Vis. Image Und. 168, 93–103 (2018).

C. Hoppe, M. Klopschitz, M. Donoser, and H. Bischof, “Incremental surface extraction from sparse structure-from-motion point clouds,” in British Machine Vis. Conf., (2013).

A. P. Gee, D. Chekhlov, W. Mayol-Cuevas, and A. Calway, “Discovering planes and collapsing the state space in visual slam,” in British Machine Vis. Conf., (2007).

G. Wu, M. Zhao, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field reconstruction using deep convolutional network on epi,” in IEEE Conf. Comput. Vis. Pattern Recognit., (2017).

H. Y. Shum, S. C. Chan, and S. B. Kang, Image-Based Rendering (Springer-Verlag, 2006).

G. Chaurasia, S. Duchene, O. Sorkine-Hornung, and G. Drettakis, “Depth synthesis and local warps for plausible image-based navigation,” ACM Trans. Graph. 32, 1–12 (2013).

G. Chaurasia, O. Sorkine, and G. Drettakis, “Silhouette aware warping for image based rendering,” Comput. Graph. Forum 30, 1223–1232 (2011).

J. Zhang, X. Wang, Q. Zhang, Y. Chen, J. Du, and Y. Liu, “Integral imaging display for natural scene based on kinectfusion,” Optik-Int. J. Light. Electron Opt. 127, 791–794 (2016).

Y. Piao, H. Qu, M. Zhang, and M. Cho, “Three-dimensional integral imaging display system via off-axially distributed image sensing,” Opt. Lasers Eng. 85, 18–23 (2016).

D. Shin and M. Cho, “3d integral imaging display using axially recorded multiple images,” J. Opt. Soc. Korea 17, 410–414 (2013).

X. Xiao, M. DaneshPanah, M. Cho, and B. Javidi, “3d integral imaging using sparse sensors with unknown positions,” J. Disp. Technol. 6, 614–619 (2010).

K. S. Park, S. W. Min, and Y. Cho, “Viewpoint vector rendering for efficient elemental image generation,” IEICE Trans. Inf. Syst. E90-D, 233–241 (2007).

S. Jeschke, D. Cline, and P. Wonka, “A gpu laplacian solver for diffusion curves and poisson image editing,” ACM Trans. Graph. 28, 116 (2009).

J. Engel, T. Schops, and D. Cremers, “Lsd-slam: Large-scale direct monocular slam,” in European Conf. Comput. Vis., (2014).

R. Schulein, M. DaneshPanah, and B. Javidi, “3d imaging with axially distributed sensing,” Opt. Lett. 34, 2012–2014 (2009).

M. DaneshPanah, B. Javidi, and E. A. Watson, “Three dimensional imaging with randomly distributed sensors,” Opt. Express 16, 6368–6377 (2008).

R. A. Newcombe and A. J. Davison, “Live dense reconstruction with a single moving camera,” in IEEE Conf. Comput. Vis. Pattern Recognit., (2010).

A. Kundu, Y. Li, F. Dellaert, F. Li, and J. M. Rehg, “Joint semantic segmentation and 3d reconstruction from monocular video,” in European Conf. Comput. Vis., (2014).

S. Pujades, F. Devernay, and B. Goldluecke, “Bayesian view synthesis and image-based rendering principles,” in IEEE Conf. Comput. Vis. Pattern Recognit., (2014).

S. Xing, X. Sang, X. Yu, C. Duo, B. Pang, X. Gao, S. Yang, Y. Guan, B. Yan, J. Yuan, and K. Wang, “High-efficient computer-generated integral imaging based on the backward ray-tracing technique and optical reconstruction,” Opt. Express 25, 330–338 (2017).

S. M. Seitz and C. R. Dyer, “View morphing,” in Proc. Comput. Graph. Interactive Technol., (1996).

C. Kim, H. Zimmer, Y. Pritch, A. Sorkine-Hornung, and M. Gross, “Scene reconstruction from high spatio-angular resolution light fields,” ACM Trans. Graph. 32, 73 (2013).

M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, “Depth from combining defocus and correspondence using light-field cameras,” in IEEE Intern. Conf. Comput. Vis., (2013).

M. Halle, “Multiple viewpoint rendering,” in Proc. Comput. Graph. Interactive Technol., (1998).

J. Wang, X. Xiao, and B. Javidi, “Three-dimensional integral imaging with flexible sensing,” Opt. Lett. 39, 6855–6858 (2014).

J. S. Jang and B. Javidi, “Three-dimensional synthetic aperture integral imaging,” Opt. Lett. 27, 1144–1146 (2002).

Z. Kang and G. Medioni, “Progressive 3d model acquisition with a commodity hand-held camera,” in IEEE Winter Conf. Appl. Comput. Vis., (2015).

S. W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. Lett. 44, L71–L74 (2005).

X. Li, M. Zhao, Y. Xing, H. L. Zhang, L. Li, S. T. Kim, X. Zhou, and Q. H. Wang, “Designing optical 3d images encryption and reconstruction using monospectral synthetic aperture integral imaging,” Opt. Express 26, 11084–11099 (2018).

J. Wei, B. Resch, and H. P. A. Lensch, “Dense and occlusion-robust multi-view stereo for unstructured videos,” in Conf. Comput. and Robot Vis., (2016).

J. Wei, B. Resch, and H. P. A. Lensch, “Dense and scalable reconstruction from unstructured videos with occlusions,” in Intern. Symp. on Vis., Modeling and Visual., (2017).

J. Wei, B. Resch, and H. P. A. Lensch, “Multi-view depth map estimation with cross-view consistency,” in British Machine Vis. Conf., (2014).

B. Resch, H. P. A. Lensch, O. Wang, M. Pollefeys, and A. Sorkine-Hornung, “Scalable structure from motion for densely sampled videos,” in IEEE Conf. Comput. Vis. Pattern Recognit., (2015).

X. Li, Y. Wang, Q. H. Wang, Y. Liu, and X. Zhou, “Modified integral imaging reconstruction and encryption using an improved sr reconstruction algorithm,” Opt. Lasers Eng. 112, 162–169 (2019).

G. Lippmann, “La photographie integrale,” C. R. Acad. Sci. 146, 446–451 (1908).

E. Penner and L. Zhang, “Soft 3d reconstruction for view synthesis,” ACM Trans. Graph. 36, 235 (2017).

K. Yanaka, “Integral photography using hexagonal fly’s eye lens and fractional view,” Proc. SPIE 6803, 68031K (2008).

S. Xing, X. Sang, X. Yu, C. Duo, B. Pang, X. Gao, S. Yang, Y. Guan, B. Yan, J. Yuan, and K. Wang, “High-efficient computer-generated integral imaging based on the backward ray-tracing technique and optical reconstruction,” Opt. Express 25, 330–338 (2017).

[Crossref]
[PubMed]

S. Xing, X. Sang, X. Yu, C. Duo, B. Pang, X. Gao, S. Yang, Y. Guan, B. Yan, J. Yuan, and K. Wang, “High-efficient computer-generated integral imaging based on the backward ray-tracing technique and optical reconstruction,” Opt. Express 25, 330–338 (2017).

[Crossref]
[PubMed]

S. Xing, X. Sang, X. Yu, C. Duo, B. Pang, X. Gao, S. Yang, Y. Guan, B. Yan, J. Yuan, and K. Wang, “High-efficient computer-generated integral imaging based on the backward ray-tracing technique and optical reconstruction,” Opt. Express 25, 330–338 (2017).

[Crossref]
[PubMed]

G. Zhang, J. Jia, T. T. Wong, and H. Bao, “Consistent depth maps recovery from a video sequence,” IEEE Trans. Pattern Anal. Mach. Intell. 31, 974–988 (2009).

[Crossref]
[PubMed]

X. Li, M. Zhao, Y. Xing, H. L. Zhang, L. Li, S. T. Kim, X. Zhou, and Q. H. Wang, “Designing optical 3d images encryption and reconstruction using monospectral synthetic aperture integral imaging,” Opt. Express 26, 11084–11099 (2018).

[Crossref]
[PubMed]

J. Zhang, X. Wang, Q. Zhang, Y. Chen, J. Du, and Y. Liu, “Integral imaging display for natural scene based on kinectfusion,” Optik-Int. J. Light. Electron Opt. 127, 791–794 (2016).

[Crossref]

E. Penner and L. Zhang, “Soft 3d reconstruction for view synthesis,” ACM Trans. Graph. 36, 235 (2017).

[Crossref]

Y. Piao, H. Qu, M. Zhang, and M. Cho, “Three-dimensional integral imaging display system via off-axially distributed image sensing,” Opt. Lasers Eng. 85, 18–23 (2016).

[Crossref]

J. Zhang, X. Wang, Q. Zhang, Y. Chen, J. Du, and Y. Liu, “Integral imaging display for natural scene based on kinectfusion,” Optik-Int. J. Light. Electron Opt. 127, 791–794 (2016).

[Crossref]

X. Li, M. Zhao, Y. Xing, H. L. Zhang, L. Li, S. T. Kim, X. Zhou, and Q. H. Wang, “Designing optical 3d images encryption and reconstruction using monospectral synthetic aperture integral imaging,” Opt. Express 26, 11084–11099 (2018).

[Crossref]
[PubMed]

G. Wu, M. Zhao, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field reconstruction using deep convolutional network on epi,” in IEEE Conf. Comput. Vis. Pattern Recognit., (2017).

X. Li, Y. Wang, Q. H. Wang, Y. Liu, and X. Zhou, “Modified integral imaging reconstruction and encryption using an improved sr reconstruction algorithm,” Opt. Lasers Eng. 112, 162–169 (2019).

[Crossref]

X. Li, M. Zhao, Y. Xing, H. L. Zhang, L. Li, S. T. Kim, X. Zhou, and Q. H. Wang, “Designing optical 3d images encryption and reconstruction using monospectral synthetic aperture integral imaging,” Opt. Express 26, 11084–11099 (2018).

[Crossref]
[PubMed]

C. Kim, H. Zimmer, Y. Pritch, A. Sorkine-Hornung, and M. Gross, “Scene reconstruction from high spatio-angular resolution light fields,” ACM Trans. Graph. 32, 73 (2013).

[Crossref]

C. Kim, H. Zimmer, Y. Pritch, A. Sorkine-Hornung, and M. Gross, “Scene reconstruction from high spatio-angular resolution light fields,” ACM Trans. Graph. 32, 73 (2013).

[Crossref]

S. Vedula, S. Baker, and T. Kanade, “Image-based spatio-temporal modeling and view interpolation of dynamic events,” ACM Trans. Graph. 24, 240–261 (2005).

[Crossref]

E. Penner and L. Zhang, “Soft 3d reconstruction for view synthesis,” ACM Trans. Graph. 36, 235 (2017).

[Crossref]

F. Liu, M. Gleicher, H. Jin, and A. Agarwala, “Content preserving warps for 3d video stabilization,” ACM Trans. Graph. 28, 1–9 (2009).

G. Chaurasia, S. Duchene, O. Sorkine-Hornung, and G. Drettakis, “Depth synthesis and local warps for plausible image-based navigation,” ACM Trans. Graph. 32, 1–12 (2013).

[Crossref]

S. Jeschke, D. Cline, and P. Wonka, “A gpu laplacian solver for diffusion curves and poisson image editing,” ACM Trans. Graph. 28, 116 (2009).

[Crossref]

G. Lippmann, “La photographie integrale,” C.R Acad. Sci. 146, 446–451 (1908).

G. Chaurasia, O. Sorkine, and G. Drettakis, “Silhouette aware warping for image based rendering,” Comput. Graph. Forum 30, 1223–1232 (2011).

[Crossref]

R. Rzeszutek and D. Androutsos, “A framework for estimating relative depth in video,” Comput. Vis. Image Und. 133, 15–29 (2015).

[Crossref]

D. C. Schedl, C. Birklbauer, and O. Bimber, “Optimized sampling for view interpolation in light fields using local dictionaries,” Comput. Vis. Image Und. 168, 93–103 (2018).

[Crossref]

G. Zhang, J. Jia, T. T. Wong, and H. Bao, “Consistent depth maps recovery from a video sequence,” IEEE Trans. Pattern Anal. Mach. Intell. 31, 974–988 (2009).

[Crossref]
[PubMed]

K. S. Park, S. W. Min, and Y. Cho, “Viewpoint vector rendering for efficient elemental image generation,” IEICE Trans. Inf. Syst. E90-D, 233–241 (2007).

[Crossref]

X. Xiao, M. DaneshPanah, M. Cho, and B. Javidi, “3d integral imaging using sparse sensors with unknown positions,” J. Disp. Technol. 6, 614–619 (2010).

[Crossref]

S. Hong, A. Dorado, G. Saavedra, J. C. Barreiro, and M. Martinez-Corral, “Three-dimensional integral-imaging display from calibrated and depth-hole filtered kinect information,” J. Disp. Technol. 12, 1301–1308 (2016).

[Crossref]

S. W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. Lett. 44, L71–L74 (2005).

[Crossref]

S. Xing, X. Sang, X. Yu, C. Duo, B. Pang, X. Gao, S. Yang, Y. Guan, B. Yan, J. Yuan, and K. Wang, “High-efficient computer-generated integral imaging based on the backward ray-tracing technique and optical reconstruction,” Opt. Express 25, 330–338 (2017).

[Crossref]
[PubMed]

M. DaneshPanah, B. Javidi, and E. A. Watson, “Three dimensional imaging with randomly distributed sensors,” Opt. Express 16, 6368–6377 (2008).

[Crossref]
[PubMed]

X. Li, M. Zhao, Y. Xing, H. L. Zhang, L. Li, S. T. Kim, X. Zhou, and Q. H. Wang, “Designing optical 3d images encryption and reconstruction using monospectral synthetic aperture integral imaging,” Opt. Express 26, 11084–11099 (2018).

[Crossref]
[PubMed]

X. Li, Y. Wang, Q. H. Wang, Y. Liu, and X. Zhou, “Modified integral imaging reconstruction and encryption using an improved sr reconstruction algorithm,” Opt. Lasers Eng. 112, 162–169 (2019).

[Crossref]

Y. Piao, H. Qu, M. Zhang, and M. Cho, “Three-dimensional integral imaging display system via off-axially distributed image sensing,” Opt. Lasers Eng. 85, 18–23 (2016).

[Crossref]

J. Wang, X. Xiao, and B. Javidi, “Three-dimensional integral imaging with flexible sensing,” Opt. Lett. 39, 6855–6858 (2014).

[Crossref]
[PubMed]

R. Schulein, M. DaneshPanah, and B. Javidi, “3d imaging with axially distributed sensing,” Opt. Lett. 34, 2012–2014 (2009).

[Crossref]
[PubMed]

J. S. Jang and B. Javidi, “Three-dimensional synthetic aperture integral imaging,” Opt. Lett. 27, 1144–1146 (2002).

[Crossref]

J. Zhang, X. Wang, Q. Zhang, Y. Chen, J. Du, and Y. Liu, “Integral imaging display for natural scene based on kinectfusion,” Optik - Int. J. Light Electron Opt. 127, 791–794 (2016).

[Crossref]

K. Yanaka, “Integral photography using hexagonal fly’s eye lens and fractional view,” Proc. SPIE 6803, 68031K (2008).

M. Halle, “Multiple viewpoint rendering,” in Proc. Comput. Graph. Interactive Technol., (1998).

S. Pujades, F. Devernay, and B. Goldluecke, “Bayesian view synthesis and image-based rendering principles,” in IEEE Conf. Comput. Vis. Pattern Recognit., (2014).

B. Resch, H. P. A. Lensch, O. Wang, M. Pollefeys, and A. Sorkine-Hornung, “Scalable structure from motion for densely sampled videos,” in IEEE Conf. Comput. Vis. Pattern Recognit., (2015).

J. Engel, T. Schöps, and D. Cremers, “Lsd-slam: Large-scale direct monocular slam,” in European Conf. Comput. Vis., (2014).

A. P. Gee, D. Chekhlov, W. Mayol-Cuevas, and A. Calway, “Discovering planes and collapsing the state space in visual slam,” in British Machine Vis. Conf., (2007).

A. Kundu, Y. Li, F. Dellaert, F. Li, and J. M. Rehg, “Joint semantic segmentation and 3d reconstruction from monocular video,” in European Conf. Comput. Vis., (2014).

R. A. Newcombe and A. J. Davison, “Live dense reconstruction with a single moving camera,” in IEEE Conf. Comput. Vis. Pattern Recognit., (2010).

Z. Kang and G. Medioni, “Progressive 3d model acquisition with a commodity hand-held camera,” in IEEE Winter Conf. Appl. Comput. Vis., (2015).

J. Wei, B. Resch, and H. P. A. Lensch, “Dense and occlusion-robust multi-view stereo for unstructured videos,” in Conf. Comput. and Robot Vis., (2016).

M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, “Depth from combining defocus and correspondence using light-field cameras,” in IEEE Intern. Conf. Comput. Vis., (2013).

J. Wei, B. Resch, and H. P. A. Lensch, “Multi-view depth map estimation with cross-view consistency,” in British Machine Vis. Conf., (2014).

H. Y. Shum, S. C. Chan, and S. B. Kang, Image-Based Rendering (Springer-Verlag, 2006).

G. Wu, M. Zhao, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field reconstruction using deep convolutional network on epi,” in IEEE Conf. Comput. Vis. Pattern Recognit., (2017).

S. M. Seitz and C. R. Dyer, “View morphing,” in Proc. Comput. Graph. Interactive Technol., (1996).

J. Wei, B. Resch, and H. P. A. Lensch, “Dense and scalable reconstruction from unstructured videos with occlusions,” in Intern. Symp. on Vision, Modeling and Visualization, (2017).