Abstract

Low signal-to-noise ratio (SNR) measurements, primarily due to the quartic attenuation of intensity with distance, are arguably the fundamental barrier to real-time, high-resolution, non-line-of-sight (NLoS) imaging at long standoffs. To better model, characterize, and exploit these low-SNR measurements, we use spectral estimation theory to derive a noise model for NLoS correlography. We use this model to develop a speckle correlation-based technique for recovering occluded objects from indirect reflections. Then, using only synthetic data sampled from the proposed noise model, and without knowledge of the experimental scenes or their geometry, we train a deep convolutional neural network to solve the noisy phase retrieval problem associated with correlography. We validate that the resulting deep-inverse correlography approach is exceptionally robust to noise, far exceeding the capabilities of existing NLoS systems both in spatial resolution achieved and in total capture time. We use the proposed technique to demonstrate NLoS imaging with 300 µm resolution at a 1 m standoff, using just two 1/8 s exposures from a standard complementary metal-oxide-semiconductor (CMOS) detector.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
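As background for the abstract, the classical correlography pipeline it builds on proceeds in two steps: the averaged power spectrum of mean-subtracted speckle intensity frames estimates the squared Fourier magnitude of the hidden object, and a phase retrieval solver then recovers the object from that magnitude. The sketch below is illustrative only: it uses Fienup's hybrid input-output (HIO) algorithm in place of the paper's learned inverse, and all function names and parameters are our own, not the authors' implementation.

```python
import numpy as np

def estimate_fourier_magnitude(frames):
    """Average the power spectra of mean-subtracted speckle frames.
    Under the correlography relation, this estimates the Fourier
    magnitude of the hidden object's albedo (up to a DC term)."""
    acc = np.zeros(frames[0].shape, dtype=float)
    for f in frames:
        g = f - f.mean()                          # remove the DC pedestal
        acc += np.abs(np.fft.fft2(g)) ** 2
    return np.sqrt(acc / len(frames))

def hio(mag, support, n_iter=200, beta=0.9, seed=0):
    """Hybrid input-output phase retrieval from a Fourier magnitude
    and a loose support constraint (both assumptions of this sketch)."""
    x = np.random.default_rng(seed).random(mag.shape)
    for _ in range(n_iter):
        F = np.fft.fft2(x)
        F = mag * np.exp(1j * np.angle(F))        # enforce measured magnitude
        y = np.real(np.fft.ifft2(F))
        bad = (~support) | (y < 0)                # pixels violating constraints
        x = np.where(bad, x - beta * y, y)        # HIO feedback update
    return np.clip(x, 0.0, None)
```

In the paper, the second step is replaced by a convolutional network trained purely on synthetic data drawn from the derived noise model, which is what makes the approach robust at the very low SNRs the abstract describes.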



2019 (5)

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572, 620–623 (2019).
[Crossref]

D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graphics 38, 116 (2019).
[Crossref]

F. Wang, H. Wang, H. Wang, G. Li, and G. Situ, “Learning from simulation: an end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27, 25560–25572 (2019).
[Crossref]

M. Lyu, H. Wang, G. Li, S. Zheng, and G. Situ, “Learning-based lensless imaging through optically thick scattering media,” Adv. Photon. 1, 036002 (2019).
[Crossref]

C. Saunders, J. Murray-Bruce, and V. K. Goyal, “Computational periscopy with an ordinary digital camera,” Nature 565, 472–475 (2019).
[Crossref]

2018 (14)

M. Batarseh, S. Sukhov, Z. Shen, H. Gemar, R. Rezvani, and A. Dogariu, “Passive sensing around the corner using spatial coherence,” Nat. Commun. 9, 3629 (2018).
[Crossref]

P. Caramazza, A. Boccolini, D. Buschek, M. Hullin, C. F. Higham, R. Henderson, R. Murray-Smith, and D. Faccio, “Neural network identification of people hidden from view with a single-pixel, single-photon detector,” Sci. Rep. 8, 11945 (2018).
[Crossref]

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).
[Crossref]

Z. Kemp, “Propagation based phase retrieval of simulated intensity measurements using artificial neural networks,” J. Opt. 20, 045606 (2018).
[Crossref]

A. Goy, K. Arthur, S. Li, and G. Barbastathis, “Low photon count phase retrieval using deep learning,” Phys. Rev. Lett. 121, 243902 (2018).
[Crossref]

M. J. Cherukara, Y. S. Nashed, and R. J. Harder, “Real-time coherent diffraction inversion using deep generative networks,” Sci. Rep. 8, 16520 (2018).
[Crossref]

F. Nolet, S. Parent, N. Roy, M.-O. Mercier, S. Charlebois, R. Fontaine, and J.-F. Pratte, “Quenching circuit and SPAD integrated in CMOS 65  nm with 7.8  ps FWHM single photon timing resolution,” Instruments 2, 19 (2018).
[Crossref]

Y. Li, Y. Xue, and L. Tian, “Deep speckle correlation: a deep learning approach towards scalable imaging through scattering media,” Optica 5, 1181–1190 (2018).
[Crossref]

S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5, 803–813 (2018).
[Crossref]

Y. Sun, Z. Xia, and U. S. Kamilov, “Efficient and accurate inversion of multiple scattering with deep learning,” Opt. Express 26, 14678–14688 (2018).
[Crossref]

E. Nehme, L. E. Weiss, T. Michaeli, and Y. Shechtman, “Deep-storm: super-resolution single-molecule microscopy by deep learning,” Optica 5, 458–464 (2018).
[Crossref]

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light cone transform,” Nature 555, 338–341 (2018).
[Crossref]

F. Xu, G. Shulkind, C. Thrampoulidis, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26, 9945–9962 (2018).
[Crossref]

G. Wang, G. B. Giannakis, and Y. C. Eldar, “Solving systems of random quadratic equations via truncated amplitude flow,” IEEE Trans. Inf. Theory 64, 773–794 (2018).
[Crossref]

2017 (4)

2016 (4)

S. Marchesini, Y.-C. Tu, and H.-T. Wu, “Alternating projection, ptychographic imaging and phase synchronization,” Appl. Comput. Harmon. Anal. 41, 815–851 (2016).
[Crossref]

A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Trans. Graphics 35, 15 (2016).
[Crossref]

J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6, 32491 (2016).
[Crossref]

F. Heide, S. Diamond, M. Nießner, J. Ragan-Kelley, W. Heidrich, and G. Wetzstein, “Proximal: efficient image optimization using proximal algorithms,” ACM Trans. Graphics 35, 84 (2016).
[Crossref]

2015 (1)

2014 (1)

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).
[Crossref]

2013 (4)

A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: capturing and visualizing the propagation of light,” ACM Trans. Graphics 32, 44 (2013).
[Crossref]

F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, “Low-budget transient imaging using photonic mixer devices,” ACM Trans. Graphics 32, 45 (2013).
[Crossref]

A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graphics 32, 167 (2013).
[Crossref]

E. J. Candes, T. Strohmer, and V. Voroninski, “Phaselift: exact and stable signal recovery from magnitude measurements via convex programming,” Commun. Pure Appl. Math. 66, 1241–1274 (2013).
[Crossref]

2012 (5)

Z. Wen, C. Yang, X. Liu, and S. Marchesini, “Alternating direction methods for classical and ptychographic phase retrieval,” Inverse Probl. 28, 115010 (2012).
[Crossref]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref]

O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).
[Crossref]

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).
[Crossref]

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

2004 (1)

D. R. Luke, “Relaxed averaged alternating reflections for diffraction imaging,” Inverse Probl. 21, 37 (2004).
[Crossref]

2003 (1)

1990 (1)

I. Freund, “Looking through walls and around corners,” Phys. A 168, 49–65 (1990).
[Crossref]

1989 (1)

1988 (2)

J. R. Fienup and P. S. Idell, “Imaging correlography with sparse arrays of detectors,” Opt. Eng. 27, 279778 (1988).
[Crossref]

I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988).
[Crossref]

1987 (1)

1982 (2)

J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).
[Crossref]

R. Bates, “Fourier phase problems are uniquely solvable in mute than one dimension. I: underlying theory,” Optik (Stuttgart) 61, 247–262 (1982).

1978 (1)

1972 (1)

R. W. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

1967 (1)

P. Welch, “The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms,” IEEE Trans. Audio Electroacoust. 15, 70–73 (1967).
[Crossref]

Abramson, N.

Arthur, K.

A. Goy, K. Arthur, S. Li, and G. Barbastathis, “Low photon count phase retrieval using deep learning,” Phys. Rev. Lett. 121, 243902 (2018).
[Crossref]

Ba, J.

D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv:1412.6980 (2014).

Baburajan, R.

L. Boominathan, M. Maniparambil, H. Gupta, R. Baburajan, and K. Mitra, “Phase retrieval for Fourier ptychography under varying amount of measurements,” arXiv:1805.03593 (2018).

Baraniuk, R. G.

C. A. Metzler, P. Schniter, A. Veeraraghavan, and R. G. Baraniuk, “prDeep: robust phase retrieval with a flexible deep network,” in Proceedings International Conference on Machine Learning (2018), pp. 3498–3507.

Barbastathis, G.

Bardagjy, A.

R. Pandharkar, A. Velten, A. Bardagjy, E. Lawson, M. Bawendi, and R. Raskar, “Estimating motion and size of moving non-line-of-sight objects in cluttered environments,” in Proc. of IEEE International Conference on Computer Vision and Pattern Recognition (2011), pp. 265–272.

Barsi, C.

A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: capturing and visualizing the propagation of light,” ACM Trans. Graphics 32, 44 (2013).
[Crossref]

A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graphics 32, 167 (2013).
[Crossref]

Batarseh, M.

M. Batarseh, S. Sukhov, Z. Shen, H. Gemar, R. Rezvani, and A. Dogariu, “Passive sensing around the corner using spatial coherence,” Nat. Commun. 9, 3629 (2018).
[Crossref]

Bates, R.

R. Bates, “Fourier phase problems are uniquely solvable in mute than one dimension. I: underlying theory,” Optik (Stuttgart) 61, 247–262 (1982).

Bauschke, H. H.

Bawendi, M.

A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: capturing and visualizing the propagation of light,” ACM Trans. Graphics 32, 44 (2013).
[Crossref]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref]

R. Pandharkar, A. Velten, A. Bardagjy, E. Lawson, M. Bawendi, and R. Raskar, “Estimating motion and size of moving non-line-of-sight objects in cluttered environments,” in Proc. of IEEE International Conference on Computer Vision and Pattern Recognition (2011), pp. 265–272.

Bertolotti, J.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Bhandari, A.

A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graphics 32, 167 (2013).
[Crossref]

Blum, C.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Boccolini, A.

P. Caramazza, A. Boccolini, D. Buschek, M. Hullin, C. F. Higham, R. Henderson, R. Murray-Smith, and D. Faccio, “Neural network identification of people hidden from view with a single-pixel, single-photon detector,” Sci. Rep. 8, 11945 (2018).
[Crossref]

Boominathan, L.

L. Boominathan, M. Maniparambil, H. Gupta, R. Baburajan, and K. Mitra, “Phase retrieval for Fourier ptychography under varying amount of measurements,” arXiv:1805.03593 (2018).

Bostan, E.

M. R. Kellman, E. Bostan, N. Repina, M. Lustig, and L. Waller, “Physics-based learned design: optimized coded-illumination for quantitative phase imaging,” arXiv:1808.03571 (2018).

Bouman, K. L.

K. L. Bouman, V. Ye, A. B. Yedidia, F. Durand, G. W. Wornell, A. Torralba, and W. T. Freeman, “Turning corners into cameras: principles and methods,” in Proceedings of IEEE International Conference on Computer Vision (2017), Vol. 1, pp. 8.

Brox, T.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention (Springer, 2015), pp. 234–241.

Buschek, D.

P. Caramazza, A. Boccolini, D. Buschek, M. Hullin, C. F. Higham, R. Henderson, R. Murray-Smith, and D. Faccio, “Neural network identification of people hidden from view with a single-pixel, single-photon detector,” Sci. Rep. 8, 11945 (2018).
[Crossref]

Buttafava, M.

M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23, 20997–21011 (2015).
[Crossref]

A. K. Pediredla, M. Buttafava, A. Tosi, O. Cossairt, and A. Veeraraghavan, “Reconstructing rooms using photon echoes: a plane based model and reconstruction algorithm for looking around the corner,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2017).

Calder, N. J.

L. Parmesan, N. A. Dutton, N. J. Calder, A. J. Holmes, L. A. Grant, and R. K. Henderson, “A 9.8  µm sample and hold time to amplitude converter CMOS SPAD pixel,” in 44th European Solid State Device Research Conference (ESSDERC) (IEEE, 2014), pp. 290–293.

Candes, E.

Y. Chen and E. Candes, “Solving random quadratic systems of equations is nearly as easy as solving linear systems,” in Advances in Neural Information Processing Systems (2015), pp. 739–747.

Candes, E. J.

E. J. Candes, T. Strohmer, and V. Voroninski, “Phaselift: exact and stable signal recovery from magnitude measurements via convex programming,” Commun. Pure Appl. Math. 66, 1241–1274 (2013).
[Crossref]

Caramazza, P.

P. Caramazza, A. Boccolini, D. Buschek, M. Hullin, C. F. Higham, R. Henderson, R. Murray-Smith, and D. Faccio, “Neural network identification of people hidden from view with a single-pixel, single-photon detector,” Sci. Rep. 8, 11945 (2018).
[Crossref]

Chan, S.

Charbon, E.

Y. Maruyama and E. Charbon, “A time-gated 128 ×128 CMOS SPAD array for on-chip fluorescence detection,” in Proceedings International Image Sensor Workshop (IISW) (2011).

Charlebois, S.

F. Nolet, S. Parent, N. Roy, M.-O. Mercier, S. Charlebois, R. Fontaine, and J.-F. Pratte, “Quenching circuit and SPAD integrated in CMOS 65  nm with 7.8  ps FWHM single photon timing resolution,” Instruments 2, 19 (2018).
[Crossref]

Chen, C.

C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to see in the dark,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (2018).

Chen, Q.

C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to see in the dark,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (2018).

Chen, Y.

Y. Chen and E. Candes, “Solving random quadratic systems of equations is nearly as easy as solving linear systems,” in Advances in Neural Information Processing Systems (2015), pp. 739–747.

Cherukara, M. J.

M. J. Cherukara, Y. S. Nashed, and R. J. Harder, “Real-time coherent diffraction inversion using deep generative networks,” Sci. Rep. 8, 16520 (2018).
[Crossref]

Chi, Y.

H. Zhang, Y. Chi, and Y. Liang, “Provable non-convex phase retrieval with outliers: median truncated Wirtinger flow,” in Proc. International Conference on Machine Learning (2016), pp. 1022–1031.

Christensen, M. P.

A. Viswanath, P. Rangarajan, D. MacFarlane, and M. P. Christensen, “Indirect imaging using correlography,” in Computational Optical Sensing and Imaging (Optical Society of America, 2018), paper CM2E–3.

Combettes, P. L.

Cossairt, O.

A. K. Pediredla, M. Buttafava, A. Tosi, O. Cossairt, and A. Veeraraghavan, “Reconstructing rooms using photon echoes: a plane based model and reconstruction algorithm for looking around the corner,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2017).

A. Kappeler, S. Ghosh, J. Holloway, O. Cossairt, and A. Katsaggelos, “Ptychnet: CNN based Fourier ptychography,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2017), pp. 1712–1716.

Davis, J.

A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using transient imaging,” in Proceedings of IEEE International Conference on Computer Vision (2009), pp. 159–166.

Deng, M.

Diamond, S.

F. Heide, S. Diamond, M. Nießner, J. Ragan-Kelley, W. Heidrich, and G. Wetzstein, “ProxImaL: efficient image optimization using proximal algorithms,” ACM Trans. Graphics 35, 84 (2016).
[Crossref]

Dogariu, A.

M. Batarseh, S. Sukhov, Z. Shen, H. Gemar, R. Rezvani, and A. Dogariu, “Passive sensing around the corner using spatial coherence,” Nat. Commun. 9, 3629 (2018).
[Crossref]

Dorrington, A.

A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graphics 32, 167 (2013).
[Crossref]

Durand, F.

K. L. Bouman, V. Ye, A. B. Yedidia, F. Durand, G. W. Wornell, A. Torralba, and W. T. Freeman, “Turning corners into cameras: principles and methods,” in Proceedings of IEEE International Conference on Computer Vision (2017), Vol. 1, p. 8.

Dutton, N. A.

L. Parmesan, N. A. Dutton, N. J. Calder, A. J. Holmes, L. A. Grant, and R. K. Henderson, “A 9.8 µm sample and hold time to amplitude converter CMOS SPAD pixel,” in 44th European Solid State Device Research Conference (ESSDERC) (IEEE, 2014), pp. 290–293.

Eldar, Y. C.

G. Wang, G. B. Giannakis, and Y. C. Eldar, “Solving systems of random quadratic equations via truncated amplitude flow,” IEEE Trans. Inf. Theory 64, 773–794 (2018).
[Crossref]

Eliceiri, K.

M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23, 20997–21011 (2015).
[Crossref]

Faccio, D.

P. Caramazza, A. Boccolini, D. Buschek, M. Hullin, C. F. Higham, R. Henderson, R. Murray-Smith, and D. Faccio, “Neural network identification of people hidden from view with a single-pixel, single-photon detector,” Sci. Rep. 8, 11945 (2018).
[Crossref]

S. Chan, R. E. Warburton, G. Gariepy, J. Leach, and D. Faccio, “Non-line-of-sight tracking of people at long range,” Opt. Express 25, 10109–10117 (2017).
[Crossref]

Feng, S.

I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988).
[Crossref]

Fienup, J.

J. Fienup (private communication, 2017).

Fienup, J. R.

Fink, M.

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).
[Crossref]

Fischer, P.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.

Fontaine, R.

F. Nolet, S. Parent, N. Roy, M.-O. Mercier, S. Charlebois, R. Fontaine, and J.-F. Pratte, “Quenching circuit and SPAD integrated in CMOS 65 nm with 7.8 ps FWHM single photon timing resolution,” Instruments 2, 19 (2018).
[Crossref]

Fowlkes, C.

D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proceedings of IEEE International Conference on Computer Vision (2001), Vol. 2, pp. 416–423.

Freeman, W. T.

K. L. Bouman, V. Ye, A. B. Yedidia, F. Durand, G. W. Wornell, A. Torralba, and W. T. Freeman, “Turning corners into cameras: principles and methods,” in Proceedings of IEEE International Conference on Computer Vision (2017), Vol. 1, p. 8.

Freund, I.

I. Freund, “Looking through walls and around corners,” Phys. A 168, 49–65 (1990).
[Crossref]

I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988).
[Crossref]

Froustey, E.

K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26, 4509–4522 (2017).
[Crossref]

Gariepy, G.

S. Chan, R. E. Warburton, G. Gariepy, J. Leach, and D. Faccio, “Non-line-of-sight tracking of people at long range,” Opt. Express 25, 10109–10117 (2017).
[Crossref]

Gemar, H.

M. Batarseh, S. Sukhov, Z. Shen, H. Gemar, R. Rezvani, and A. Dogariu, “Passive sensing around the corner using spatial coherence,” Nat. Commun. 9, 3629 (2018).
[Crossref]

Gerchberg, R. W.

R. W. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

Ghosh, S.

A. Kappeler, S. Ghosh, J. Holloway, O. Cossairt, and A. Katsaggelos, “Ptychnet: CNN based Fourier ptychography,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2017), pp. 1712–1716.

Giannakis, G. B.

G. Wang, G. B. Giannakis, and Y. C. Eldar, “Solving systems of random quadratic equations via truncated amplitude flow,” IEEE Trans. Inf. Theory 64, 773–794 (2018).
[Crossref]

Gigan, S.

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).
[Crossref]

Gkioulekas, I.

S. Xin, S. Nousias, K. N. Kutulakos, A. C. Sankaranarayanan, S. G. Narasimhan, and I. Gkioulekas, “A theory of Fermat paths for non-line-of-sight shape reconstruction,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 6800–6809.

Gonglewski, J. D.

Goodman, R. S.

Göröcs, Z.

Y. Rivenson, Z. Göröcs, H. Günaydın, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
[Crossref]

Goy, A.

A. Goy, K. Arthur, S. Li, and G. Barbastathis, “Low photon count phase retrieval using deep learning,” Phys. Rev. Lett. 121, 243902 (2018).
[Crossref]

Goyal, V. K.

C. Saunders, J. Murray-Bruce, and V. K. Goyal, “Computational periscopy with an ordinary digital camera,” Nature 565, 472–475 (2019).
[Crossref]

Grant, L. A.

L. Parmesan, N. A. Dutton, N. J. Calder, A. J. Holmes, L. A. Grant, and R. K. Henderson, “A 9.8 µm sample and hold time to amplitude converter CMOS SPAD pixel,” in 44th European Solid State Device Research Conference (ESSDERC) (IEEE, 2014), pp. 290–293.

Gregson, J.

F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, “Low-budget transient imaging using photonic mixer devices,” ACM Trans. Graphics 32, 45 (2013).
[Crossref]

Guillén, I.

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572, 620–623 (2019).
[Crossref]

Günaydın, H.

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).
[Crossref]

Y. Rivenson, Z. Göröcs, H. Günaydın, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
[Crossref]

Gupta, H.

L. Boominathan, M. Maniparambil, H. Gupta, R. Baburajan, and K. Mitra, “Phase retrieval for Fourier ptychography under varying amount of measurements,” arXiv:1805.03593 (2018).

Gupta, M.

B. M. Smith, M. O’Toole, and M. Gupta, “Tracking multiple objects outside the line of sight using speckle imaging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 6258–6266.

Gupta, O.

O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).
[Crossref]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref]

Gutierrez, D.

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572, 620–623 (2019).
[Crossref]

A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: capturing and visualizing the propagation of light,” ACM Trans. Graphics 32, 44 (2013).
[Crossref]

Harder, R. J.

M. J. Cherukara, Y. S. Nashed, and R. J. Harder, “Real-time coherent diffraction inversion using deep generative networks,” Sci. Rep. 8, 16520 (2018).
[Crossref]

He, K.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 770–778.

Heide, F.

F. Heide, S. Diamond, M. Nießner, J. Ragan-Kelley, W. Heidrich, and G. Wetzstein, “ProxImaL: efficient image optimization using proximal algorithms,” ACM Trans. Graphics 35, 84 (2016).
[Crossref]

F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, “Low-budget transient imaging using photonic mixer devices,” ACM Trans. Graphics 32, 45 (2013).
[Crossref]

F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 3222–3229.

Heidmann, P.

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).
[Crossref]

Heidrich, W.

F. Heide, S. Diamond, M. Nießner, J. Ragan-Kelley, W. Heidrich, and G. Wetzstein, “ProxImaL: efficient image optimization using proximal algorithms,” ACM Trans. Graphics 35, 84 (2016).
[Crossref]

F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, “Low-budget transient imaging using photonic mixer devices,” ACM Trans. Graphics 32, 45 (2013).
[Crossref]

F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 3222–3229.

Henderson, R.

P. Caramazza, A. Boccolini, D. Buschek, M. Hullin, C. F. Higham, R. Henderson, R. Murray-Smith, and D. Faccio, “Neural network identification of people hidden from view with a single-pixel, single-photon detector,” Sci. Rep. 8, 11945 (2018).
[Crossref]

Henderson, R. K.

L. Parmesan, N. A. Dutton, N. J. Calder, A. J. Holmes, L. A. Grant, and R. K. Henderson, “A 9.8 µm sample and hold time to amplitude converter CMOS SPAD pixel,” in 44th European Solid State Device Research Conference (ESSDERC) (IEEE, 2014), pp. 290–293.

Higham, C. F.

P. Caramazza, A. Boccolini, D. Buschek, M. Hullin, C. F. Higham, R. Henderson, R. Murray-Smith, and D. Faccio, “Neural network identification of people hidden from view with a single-pixel, single-photon detector,” Sci. Rep. 8, 11945 (2018).
[Crossref]

Holloway, J.

A. Kappeler, S. Ghosh, J. Holloway, O. Cossairt, and A. Katsaggelos, “Ptychnet: CNN based Fourier ptychography,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2017), pp. 1712–1716.

Holmes, A. J.

L. Parmesan, N. A. Dutton, N. J. Calder, A. J. Holmes, L. A. Grant, and R. K. Henderson, “A 9.8 µm sample and hold time to amplitude converter CMOS SPAD pixel,” in 44th European Solid State Device Research Conference (ESSDERC) (IEEE, 2014), pp. 290–293.

Hullin, M.

P. Caramazza, A. Boccolini, D. Buschek, M. Hullin, C. F. Higham, R. Henderson, R. Murray-Smith, and D. Faccio, “Neural network identification of people hidden from view with a single-pixel, single-photon detector,” Sci. Rep. 8, 11945 (2018).
[Crossref]

Hullin, M. B.

J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6, 32491 (2016).
[Crossref]

F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, “Low-budget transient imaging using photonic mixer devices,” ACM Trans. Graphics 32, 45 (2013).
[Crossref]

F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 3222–3229.

Hutchison, T.

A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using transient imaging,” in Proceedings of IEEE International Conference on Computer Vision (2009), pp. 159–166.

Idell, P. S.

Jain, P.

P. Netrapalli, P. Jain, and S. Sanghavi, “Phase retrieval using alternating minimization,” in Advances in Neural Information Processing Systems (2013), pp. 2796–2804.

Jarabo, A.

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572, 620–623 (2019).
[Crossref]

A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: capturing and visualizing the propagation of light,” ACM Trans. Graphics 32, 44 (2013).
[Crossref]

Jin, K. H.

K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26, 4509–4522 (2017).
[Crossref]

Joshi, C.

A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: capturing and visualizing the propagation of light,” ACM Trans. Graphics 32, 44 (2013).
[Crossref]

Kadambi, A.

A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Trans. Graphics 35, 15 (2016).
[Crossref]

A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graphics 32, 167 (2013).
[Crossref]

Kamilov, U. S.

Kappeler, A.

A. Kappeler, S. Ghosh, J. Holloway, O. Cossairt, and A. Katsaggelos, “Ptychnet: CNN based Fourier ptychography,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2017), pp. 1712–1716.

Katsaggelos, A.

A. Kappeler, S. Ghosh, J. Holloway, O. Cossairt, and A. Katsaggelos, “Ptychnet: CNN based Fourier ptychography,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2017), pp. 1712–1716.

Katz, O.

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).
[Crossref]

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).
[Crossref]

Kellman, M. R.

M. R. Kellman, E. Bostan, N. Repina, M. Lustig, and L. Waller, “Physics-based learned design: optimized coded-illumination for quantitative phase imaging,” arXiv:1808.03571 (2018).

Kemp, Z.

Z. Kemp, “Propagation based phase retrieval of simulated intensity measurements using artificial neural networks,” J. Opt. 20, 045606 (2018).
[Crossref]

Kingma, D. P.

D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv:1412.6980 (2014).

Kirmani, A.

A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using transient imaging,” in Proceedings of IEEE International Conference on Computer Vision (2009), pp. 159–166.

Klein, J.

J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6, 32491 (2016).
[Crossref]

Knopp, J.

Koltun, V.

C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to see in the dark,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2018).

Kutulakos, K. N.

S. Xin, S. Nousias, K. N. Kutulakos, A. C. Sankaranarayanan, S. G. Narasimhan, and I. Gkioulekas, “A theory of Fermat paths for non-line-of-sight shape reconstruction,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 6800–6809.

La Manna, M.

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572, 620–623 (2019).
[Crossref]

Lagendijk, A.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Laurenzis, M.

J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6, 32491 (2016).
[Crossref]

Lawson, E.

A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: capturing and visualizing the propagation of light,” ACM Trans. Graphics 32, 44 (2013).
[Crossref]

R. Pandharkar, A. Velten, A. Bardagjy, E. Lawson, M. Bawendi, and R. Raskar, “Estimating motion and size of moving non-line-of-sight objects in cluttered environments,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2011), pp. 265–272.

Le, T. H.

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572, 620–623 (2019).
[Crossref]

Leach, J.

S. Chan, R. E. Warburton, G. Gariepy, J. Leach, and D. Faccio, “Non-line-of-sight tracking of people at long range,” Opt. Express 25, 10109–10117 (2017).
[Crossref]

Lee, J.

Li, G.

F. Wang, H. Wang, H. Wang, G. Li, and G. Situ, “Learning from simulation: an end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27, 25560–25572 (2019).
[Crossref]

M. Lyu, H. Wang, G. Li, S. Zheng, and G. Situ, “Learning-based lensless imaging through optically thick scattering media,” Adv. Photon. 1, 036002 (2019).
[Crossref]

Li, S.

A. Goy, K. Arthur, S. Li, and G. Barbastathis, “Low photon count phase retrieval using deep learning,” Phys. Rev. Lett. 121, 243902 (2018).
[Crossref]

Li, Y.

Liang, Y.

H. Zhang, Y. Chi, and Y. Liang, “Provable non-convex phase retrieval with outliers: median truncated Wirtinger flow,” in Proceedings of the International Conference on Machine Learning (2016), pp. 1022–1031.

Lindell, D. B.

D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graphics 38, 116 (2019).
[Crossref]

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light cone transform,” Nature 555, 338–341 (2018).
[Crossref]

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Real-time non-line-of-sight imaging,” in ACM SIGGRAPH 2018 Emerging Technologies (ACM, 2018), paper 14.

Liu, X.

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572, 620–623 (2019).
[Crossref]

Z. Wen, C. Yang, X. Liu, and S. Marchesini, “Alternating direction methods for classical and ptychographic phase retrieval,” Inverse Probl. 28, 115010 (2012).
[Crossref]

Luke, D. R.

Lustig, M.

M. R. Kellman, E. Bostan, N. Repina, M. Lustig, and L. Waller, “Physics-based learned design: optimized coded-illumination for quantitative phase imaging,” arXiv:1808.03571 (2018).

Lyu, M.

M. Lyu, H. Wang, G. Li, S. Zheng, and G. Situ, “Learning-based lensless imaging through optically thick scattering media,” Adv. Photon. 1, 036002 (2019).
[Crossref]

MacFarlane, D.

A. Viswanath, P. Rangarajan, D. MacFarlane, and M. P. Christensen, “Indirect imaging using correlography,” in Computational Optical Sensing and Imaging (Optical Society of America, 2018), paper CM2E.3.

Maeda, T.

T. Maeda, G. Satat, T. Swedish, L. Sinha, and R. Raskar, “Recent advances in imaging around corners,” arXiv:1910.05613 (2019).

Malik, J.

D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proceedings of IEEE International Conference on Computer Vision (2001), Vol. 2, pp. 416–423.

Maniparambil, M.

L. Boominathan, M. Maniparambil, H. Gupta, R. Baburajan, and K. Mitra, “Phase retrieval for Fourier ptychography under varying amount of measurements,” arXiv:1805.03593 (2018).

Marchesini, S.

S. Marchesini, Y.-C. Tu, and H.-T. Wu, “Alternating projection, ptychographic imaging and phase synchronization,” Appl. Comput. Harmon. Anal. 41, 815–851 (2016).
[Crossref]

Z. Wen, C. Yang, X. Liu, and S. Marchesini, “Alternating direction methods for classical and ptychographic phase retrieval,” Inverse Probl. 28, 115010 (2012).
[Crossref]

Martin, D.

D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proceedings of IEEE International Conference on Computer Vision (2001), Vol. 2, pp. 416–423.

Martín, J.

J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6, 32491 (2016).
[Crossref]

Maruyama, Y.

Y. Maruyama and E. Charbon, “A time-gated 128 × 128 CMOS SPAD array for on-chip fluorescence detection,” in Proceedings of the International Image Sensor Workshop (IISW) (2011).

Masia, B.

A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: capturing and visualizing the propagation of light,” ACM Trans. Graphics 32, 44 (2013).
[Crossref]

McCann, M. T.

K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26, 4509–4522 (2017).
[Crossref]

Mercier, M.-O.

F. Nolet, S. Parent, N. Roy, M.-O. Mercier, S. Charlebois, R. Fontaine, and J.-F. Pratte, “Quenching circuit and SPAD integrated in CMOS 65 nm with 7.8 ps FWHM single photon timing resolution,” Instruments 2, 19 (2018).
[Crossref]

Metzler, C. A.

C. A. Metzler, P. Schniter, A. Veeraraghavan, and R. G. Baraniuk, “prDeep: robust phase retrieval with a flexible deep network,” in Proceedings of the International Conference on Machine Learning (2018), pp. 3498–3507.

Michaeli, T.

Mitra, K.

L. Boominathan, M. Maniparambil, H. Gupta, R. Baburajan, and K. Mitra, “Phase retrieval for Fourier ptychography under varying amount of measurements,” arXiv:1805.03593 (2018).

Mosk, A. P.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Muirhead, R. J.

R. J. Muirhead, Aspects of Multivariate Statistical Theory (Wiley, 2009), Vol. 197.

Murray-Bruce, J.

C. Saunders, J. Murray-Bruce, and V. K. Goyal, “Computational periscopy with an ordinary digital camera,” Nature 565, 472–475 (2019).
[Crossref]

Murray-Smith, R.

P. Caramazza, A. Boccolini, D. Buschek, M. Hullin, C. F. Higham, R. Henderson, R. Murray-Smith, and D. Faccio, “Neural network identification of people hidden from view with a single-pixel, single-photon detector,” Sci. Rep. 8, 11945 (2018).
[Crossref]

Nam, J. H.

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572, 620–623 (2019).
[Crossref]

Narasimhan, S. G.

S. Xin, S. Nousias, K. N. Kutulakos, A. C. Sankaranarayanan, S. G. Narasimhan, and I. Gkioulekas, “A theory of Fermat paths for non-line-of-sight shape reconstruction,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 6800–6809.

Nashed, Y. S.

M. J. Cherukara, Y. S. Nashed, and R. J. Harder, “Real-time coherent diffraction inversion using deep generative networks,” Sci. Rep. 8, 16520 (2018).
[Crossref]

Nehme, E.

Netrapalli, P.

P. Netrapalli, P. Jain, and S. Sanghavi, “Phase retrieval using alternating minimization,” in Advances in Neural Information Processing Systems (2013), pp. 2796–2804.

Nießner, M.

F. Heide, S. Diamond, M. Nießner, J. Ragan-Kelley, W. Heidrich, and G. Wetzstein, “ProxImaL: efficient image optimization using proximal algorithms,” ACM Trans. Graphics 35, 84 (2016).
[Crossref]

Nolet, F.

F. Nolet, S. Parent, N. Roy, M.-O. Mercier, S. Charlebois, R. Fontaine, and J.-F. Pratte, “Quenching circuit and SPAD integrated in CMOS 65 nm with 7.8 ps FWHM single photon timing resolution,” Instruments 2, 19 (2018).
[Crossref]

Nousias, S.

S. Xin, S. Nousias, K. N. Kutulakos, A. C. Sankaranarayanan, S. G. Narasimhan, and I. Gkioulekas, “A theory of Fermat paths for non-line-of-sight shape reconstruction,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 6800–6809.

O’Toole, M.

D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graphics 38, 116 (2019).
[Crossref]

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light cone transform,” Nature 555, 338–341 (2018).
[Crossref]

B. M. Smith, M. O’Toole, and M. Gupta, “Tracking multiple objects outside the line of sight using speckle imaging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 6258–6266.

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Real-time non-line-of-sight imaging,” in ACM SIGGRAPH 2018 Emerging Technologies (ACM, 2018), paper 14.

Ozcan, A.

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).
[Crossref]

Y. Rivenson, Z. Göröcs, H. Günaydın, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
[Crossref]

Pandharkar, R.

R. Pandharkar, A. Velten, A. Bardagjy, E. Lawson, M. Bawendi, and R. Raskar, “Estimating motion and size of moving non-line-of-sight objects in cluttered environments,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2011), pp. 265–272.

Parent, S.

F. Nolet, S. Parent, N. Roy, M.-O. Mercier, S. Charlebois, R. Fontaine, and J.-F. Pratte, “Quenching circuit and SPAD integrated in CMOS 65 nm with 7.8 ps FWHM single photon timing resolution,” Instruments 2, 19 (2018).
[Crossref]

Parmesan, L.

L. Parmesan, N. A. Dutton, N. J. Calder, A. J. Holmes, L. A. Grant, and R. K. Henderson, “A 9.8 µm sample and hold time to amplitude converter CMOS SPAD pixel,” in 44th European Solid State Device Research Conference (ESSDERC) (IEEE, 2014), pp. 290–293.

Pediredla, A. K.

A. K. Pediredla, M. Buttafava, A. Tosi, O. Cossairt, and A. Veeraraghavan, “Reconstructing rooms using photon echoes: a plane based model and reconstruction algorithm for looking around the corner,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2017).

Peters, C.

J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6, 32491 (2016).
[Crossref]

Pratte, J.-F.

F. Nolet, S. Parent, N. Roy, M.-O. Mercier, S. Charlebois, R. Fontaine, and J.-F. Pratte, “Quenching circuit and SPAD integrated in CMOS 65 nm with 7.8 ps FWHM single photon timing resolution,” Instruments 2, 19 (2018).
[Crossref]

Ragan-Kelley, J.

F. Heide, S. Diamond, M. Nießner, J. Ragan-Kelley, W. Heidrich, and G. Wetzstein, “ProxImaL: efficient image optimization using proximal algorithms,” ACM Trans. Graphics 35, 84 (2016).
[Crossref]

Rangarajan, P.

A. Viswanath, P. Rangarajan, D. MacFarlane, and M. P. Christensen, “Indirect imaging using correlography,” in Computational Optical Sensing and Imaging (Optical Society of America, 2018), paper CM2E.3.

Raskar, R.

A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Trans. Graphics 35, 15 (2016).
[Crossref]

A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graphics 32, 167 (2013).
[Crossref]

A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: capturing and visualizing the propagation of light,” ACM Trans. Graphics 32, 44 (2013).
[Crossref]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref]

O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).
[Crossref]

M. Tancik, G. Satat, and R. Raskar, “Flash photography for data-driven hidden scene recovery,” arXiv:1810.11710 (2018).

T. Maeda, G. Satat, T. Swedish, L. Sinha, and R. Raskar, “Recent advances in imaging around corners,” arXiv:1910.05613 (2019).

R. Pandharkar, A. Velten, A. Bardagjy, E. Lawson, M. Bawendi, and R. Raskar, “Estimating motion and size of moving non-line-of-sight objects in cluttered environments,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2011), pp. 265–272.

A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using transient imaging,” in Proceedings of IEEE International Conference on Computer Vision (2009), pp. 159–166.

Ren, S.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 770–778.

Repina, N.

M. R. Kellman, E. Bostan, N. Repina, M. Lustig, and L. Waller, “Physics-based learned design: optimized coded-illumination for quantitative phase imaging,” arXiv:1808.03571 (2018).

Reza, S. A.

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572, 620–623 (2019).
[Crossref]

Rezvani, R.

M. Batarseh, S. Sukhov, Z. Shen, H. Gemar, R. Rezvani, and A. Dogariu, “Passive sensing around the corner using spatial coherence,” Nat. Commun. 9, 3629 (2018).
[Crossref]

Rivenson, Y.

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).
[Crossref]

Y. Rivenson, Z. Göröcs, H. Günaydın, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
[Crossref]

Ronneberger, O.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.

Rosenbluh, M.

I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988).
[Crossref]

Roy, N.

F. Nolet, S. Parent, N. Roy, M.-O. Mercier, S. Charlebois, R. Fontaine, and J.-F. Pratte, “Quenching circuit and SPAD integrated in CMOS 65 nm with 7.8 ps FWHM single photon timing resolution,” Instruments 2, 19 (2018).
[Crossref]

Sanghavi, S.

P. Netrapalli, P. Jain, and S. Sanghavi, “Phase retrieval using alternating minimization,” in Advances in Neural Information Processing Systems (2013), pp. 2796–2804.

Sankaranarayanan, A. C.

S. Xin, S. Nousias, K. N. Kutulakos, A. C. Sankaranarayanan, S. G. Narasimhan, and I. Gkioulekas, “A theory of Fermat paths for non-line-of-sight shape reconstruction,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 6800–6809.

Satat, G.

T. Maeda, G. Satat, T. Swedish, L. Sinha, and R. Raskar, “Recent advances in imaging around corners,” arXiv:1910.05613 (2019).

M. Tancik, G. Satat, and R. Raskar, “Flash photography for data-driven hidden scene recovery,” arXiv:1810.11710 (2018).

Saunders, C.

C. Saunders, J. Murray-Bruce, and V. K. Goyal, “Computational periscopy with an ordinary digital camera,” Nature 565, 472–475 (2019).
[Crossref]

Schniter, P.

C. A. Metzler, P. Schniter, A. Veeraraghavan, and R. G. Baraniuk, “prDeep: robust phase retrieval with a flexible deep network,” in Proceedings International Conference on Machine Learning (2018), pp. 3498–3507.

Shapiro, J. H.

Shechtman, Y.

Shen, Z.

M. Batarseh, S. Sukhov, Z. Shen, H. Gemar, R. Rezvani, and A. Dogariu, “Passive sensing around the corner using spatial coherence,” Nat. Commun. 9, 3629 (2018).
[Crossref]

Shi, B.

A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Trans. Graphics 35, 15 (2016).
[Crossref]

Shulkind, G.

Silberberg, Y.

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).
[Crossref]

Sinha, A.

Sinha, L.

T. Maeda, G. Satat, T. Swedish, L. Sinha, and R. Raskar, “Recent advances in imaging around corners,” arXiv:1910.05613 (2019).

Situ, G.

F. Wang, H. Wang, H. Wang, G. Li, and G. Situ, “Learning from simulation: an end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27, 25560–25572 (2019).
[Crossref]

M. Lyu, H. Wang, G. Li, S. Zheng, and G. Situ, “Learning-based lensless imaging through optically thick scattering media,” Adv. Photon. 1, 036002 (2019).
[Crossref]

Small, E.

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).
[Crossref]

Smith, B. M.

B. M. Smith, M. O’Toole, and M. Gupta, “Tracking multiple objects outside the line of sight using speckle imaging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 6258–6266.

Streeter, L.

A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graphics 32, 167 (2013).
[Crossref]

Strohmer, T.

E. J. Candes, T. Strohmer, and V. Voroninski, “Phaselift: exact and stable signal recovery from magnitude measurements via convex programming,” Commun. Pure Appl. Math. 66, 1241–1274 (2013).
[Crossref]

Sukhov, S.

M. Batarseh, S. Sukhov, Z. Shen, H. Gemar, R. Rezvani, and A. Dogariu, “Passive sensing around the corner using spatial coherence,” Nat. Commun. 9, 3629 (2018).
[Crossref]

Sun, J.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (2016), pp. 770–778.

Sun, Y.

Swedish, T.

T. Maeda, G. Satat, T. Swedish, L. Sinha, and R. Raskar, “Recent advances in imaging around corners,” arXiv:1910.05613 (2019).

Tal, D.

D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proceedings of IEEE International Conference on Computer Vision (2001), Vol. 2, pp. 416–423.

Tancik, M.

M. Tancik, G. Satat, and R. Raskar, “Flash photography for data-driven hidden scene recovery,” arXiv:1810.11710 (2018).

Teng, D.

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).
[Crossref]

Thrampoulidis, C.

Tian, L.

Torralba, A.

F. Xu, G. Shulkind, C. Thrampoulidis, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26, 9945–9962 (2018).
[Crossref]

K. L. Bouman, V. Ye, A. B. Yedidia, F. Durand, G. W. Wornell, A. Torralba, and W. T. Freeman, “Turning corners into cameras: principles and methods,” in Proceedings of IEEE International Conference on Computer Vision (2017), Vol. 1, pp. 8.

Tosi, A.

M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23, 20997–21011 (2015).
[Crossref]

A. K. Pediredla, M. Buttafava, A. Tosi, O. Cossairt, and A. Veeraraghavan, “Reconstructing rooms using photon echoes: a plane based model and reconstruction algorithm for looking around the corner,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2017).

Tu, Y.-C.

S. Marchesini, Y.-C. Tu, and H.-T. Wu, “Alternating projection, ptychographic imaging and phase synchronization,” Appl. Comput. Harmon. Anal. 41, 815–851 (2016).
[Crossref]

Unser, M.

K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26, 4509–4522 (2017).
[Crossref]

van Putten, E. G.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Veeraraghavan, A.

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref]

O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).
[Crossref]

A. K. Pediredla, M. Buttafava, A. Tosi, O. Cossairt, and A. Veeraraghavan, “Reconstructing rooms using photon echoes: a plane based model and reconstruction algorithm for looking around the corner,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2017).

C. A. Metzler, P. Schniter, A. Veeraraghavan, and R. G. Baraniuk, “prDeep: robust phase retrieval with a flexible deep network,” in Proceedings International Conference on Machine Learning (2018), pp. 3498–3507.

Velten, A.

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572, 620–623 (2019).
[Crossref]

M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23, 20997–21011 (2015).
[Crossref]

A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: capturing and visualizing the propagation of light,” ACM Trans. Graphics 32, 44 (2013).
[Crossref]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref]

O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).
[Crossref]

R. Pandharkar, A. Velten, A. Bardagjy, E. Lawson, M. Bawendi, and R. Raskar, “Estimating motion and size of moving non-line-of-sight objects in cluttered environments,” in Proc. of IEEE International Conference on Computer Vision and Pattern Recognition (2011), pp. 265–272.

Viswanath, A.

A. Viswanath, P. Rangarajan, D. MacFarlane, and M. P. Christensen, “Indirect imaging using correlography,” in Computational Optical Sensing and Imaging (Optical Society of America, 2018), paper CM2E–3.

Voelz, D. G.

Voroninski, V.

E. J. Candes, T. Strohmer, and V. Voroninski, “Phaselift: exact and stable signal recovery from magnitude measurements via convex programming,” Commun. Pure Appl. Math. 66, 1241–1274 (2013).
[Crossref]

Vos, W. L.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Waller, L.

M. R. Kellman, E. Bostan, N. Repina, M. Lustig, and L. Waller, “Physics-based learned design: optimized coded-illumination for quantitative phase imaging,” arXiv:1808.03571 (2018).

Wang, F.

Wang, G.

G. Wang, G. B. Giannakis, and Y. C. Eldar, “Solving systems of random quadratic equations via truncated amplitude flow,” IEEE Trans. Inf. Theory 64, 773–794 (2018).
[Crossref]

Wang, H.

Warburton, R. E.

Weiss, L. E.

Welch, P.

P. Welch, “The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms,” IEEE Trans. Audio Electroacoust. 15, 70–73 (1967).
[Crossref]

Wen, Z.

Z. Wen, C. Yang, X. Liu, and S. Marchesini, “Alternating direction methods for classical and ptychographic phase retrieval,” Inverse Probl. 28, 115010 (2012).
[Crossref]

Wetzstein, G.

D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graphics 38, 116 (2019).
[Crossref]

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light cone transform,” Nature 555, 338–341 (2018).
[Crossref]

F. Heide, S. Diamond, M. Nießner, J. Ragan-Kelley, W. Heidrich, and G. Wetzstein, “Proximal: efficient image optimization using proximal algorithms,” ACM Trans. Graphics 35, 84 (2016).
[Crossref]

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Real-time non-line-of-sight imaging,” in ACM SIGGRAPH 2018 Emerging Technologies (ACM, 2018), paper 14.

Whyte, R.

A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graphics 32, 167 (2013).
[Crossref]

Willwacher, T.

O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).
[Crossref]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref]

Wong, F. N. C.

Wornell, G. W.

F. Xu, G. Shulkind, C. Thrampoulidis, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26, 9945–9962 (2018).
[Crossref]

K. L. Bouman, V. Ye, A. B. Yedidia, F. Durand, G. W. Wornell, A. Torralba, and W. T. Freeman, “Turning corners into cameras: principles and methods,” in Proceedings of IEEE International Conference on Computer Vision (2017), Vol. 1, pp. 8.

Wu, D.

A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: capturing and visualizing the propagation of light,” ACM Trans. Graphics 32, 44 (2013).
[Crossref]

Wu, H.-T.

S. Marchesini, Y.-C. Tu, and H.-T. Wu, “Alternating projection, ptychographic imaging and phase synchronization,” Appl. Comput. Harmon. Anal. 41, 815–851 (2016).
[Crossref]

Xia, Z.

Xiao, L.

F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 3222–3229.

Xin, S.

S. Xin, S. Nousias, K. N. Kutulakos, A. C. Sankaranarayanan, S. G. Narasimhan, and I. Gkioulekas, “A theory of Fermat paths for non-line-of-sight shape reconstruction,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 6800–6809.

Xu, F.

Xu, J.

C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to see in the dark,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (2018).

Xue, Y.

Yang, C.

Z. Wen, C. Yang, X. Liu, and S. Marchesini, “Alternating direction methods for classical and ptychographic phase retrieval,” Inverse Probl. 28, 115010 (2012).
[Crossref]

Ye, V.

K. L. Bouman, V. Ye, A. B. Yedidia, F. Durand, G. W. Wornell, A. Torralba, and W. T. Freeman, “Turning corners into cameras: principles and methods,” in Proceedings of IEEE International Conference on Computer Vision (2017), Vol. 1, pp. 8.

Yedidia, A. B.

K. L. Bouman, V. Ye, A. B. Yedidia, F. Durand, G. W. Wornell, A. Torralba, and W. T. Freeman, “Turning corners into cameras: principles and methods,” in Proceedings of IEEE International Conference on Computer Vision (2017), Vol. 1, pp. 8.

Zeman, J.

Zhang, H.

H. Zhang, Y. Chi, and Y. Liang, “Provable non-convex phase retrieval with outliers: median truncated Wirtinger flow,” in Proc. International Conference on Machine Learning (2016), pp. 1022–1031.

Zhang, X.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (2016), pp. 770–778.

Zhang, Y.

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).
[Crossref]

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
[Crossref]

Zhao, H.

A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Trans. Graphics 35, 15 (2016).
[Crossref]

Zheng, S.

M. Lyu, H. Wang, G. Li, S. Zheng, and G. Situ, “Learning-based lensless imaging through optically thick scattering media,” Adv. Photon. 1, 036002 (2019).
[Crossref]

ACM Trans. Graphics (6)

A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: capturing and visualizing the propagation of light,” ACM Trans. Graphics 32, 44 (2013).
[Crossref]

D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graphics 38, 116 (2019).
[Crossref]

F. Heide, S. Diamond, M. Nießner, J. Ragan-Kelley, W. Heidrich, and G. Wetzstein, “Proximal: efficient image optimization using proximal algorithms,” ACM Trans. Graphics 35, 84 (2016).
[Crossref]

F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, “Low-budget transient imaging using photonic mixer devices,” ACM Trans. Graphics 32, 45 (2013).
[Crossref]

A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graphics 32, 167 (2013).
[Crossref]

A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Trans. Graphics 35, 15 (2016).
[Crossref]

Adv. Photon. (1)

M. Lyu, H. Wang, G. Li, S. Zheng, and G. Situ, “Learning-based lensless imaging through optically thick scattering media,” Adv. Photon. 1, 036002 (2019).
[Crossref]

Appl. Comput. Harmon. Anal. (1)

S. Marchesini, Y.-C. Tu, and H.-T. Wu, “Alternating projection, ptychographic imaging and phase synchronization,” Appl. Comput. Harmon. Anal. 41, 815–851 (2016).
[Crossref]

Appl. Opt. (1)

Commun. Pure Appl. Math. (1)

E. J. Candes, T. Strohmer, and V. Voroninski, “Phaselift: exact and stable signal recovery from magnitude measurements via convex programming,” Commun. Pure Appl. Math. 66, 1241–1274 (2013).
[Crossref]

IEEE Trans. Audio Electroacoust. (1)

P. Welch, “The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms,” IEEE Trans. Audio Electroacoust. 15, 70–73 (1967).
[Crossref]

IEEE Trans. Image Process. (1)

K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26, 4509–4522 (2017).
[Crossref]

IEEE Trans. Inf. Theory (1)

G. Wang, G. B. Giannakis, and Y. C. Eldar, “Solving systems of random quadratic equations via truncated amplitude flow,” IEEE Trans. Inf. Theory 64, 773–794 (2018).
[Crossref]

Instruments (1)

F. Nolet, S. Parent, N. Roy, M.-O. Mercier, S. Charlebois, R. Fontaine, and J.-F. Pratte, “Quenching circuit and SPAD integrated in CMOS 65  nm with 7.8  ps FWHM single photon timing resolution,” Instruments 2, 19 (2018).
[Crossref]

Inverse Probl. (2)

D. R. Luke, “Relaxed averaged alternating reflections for diffraction imaging,” Inverse Probl. 21, 37 (2004).
[Crossref]

Z. Wen, C. Yang, X. Liu, and S. Marchesini, “Alternating direction methods for classical and ptychographic phase retrieval,” Inverse Probl. 28, 115010 (2012).
[Crossref]

J. Opt. (1)

Z. Kemp, “Propagation based phase retrieval of simulated intensity measurements using artificial neural networks,” J. Opt. 20, 045606 (2018).
[Crossref]

J. Opt. Soc. Am. A (1)

Light Sci. Appl. (1)

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).
[Crossref]

Nat. Commun. (2)

M. Batarseh, S. Sukhov, Z. Shen, H. Gemar, R. Rezvani, and A. Dogariu, “Passive sensing around the corner using spatial coherence,” Nat. Commun. 9, 3629 (2018).
[Crossref]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref]

Nat. Photonics (2)

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).
[Crossref]

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).
[Crossref]

Nature (4)

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572, 620–623 (2019).
[Crossref]

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light cone transform,” Nature 555, 338–341 (2018).
[Crossref]

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

C. Saunders, J. Murray-Bruce, and V. K. Goyal, “Computational periscopy with an ordinary digital camera,” Nature 565, 472–475 (2019).
[Crossref]

Opt. Eng. (1)

J. R. Fienup and P. S. Idell, “Imaging correlography with sparse arrays of detectors,” Opt. Eng. 27, 279778 (1988).
[Crossref]

Opt. Express (6)

Opt. Lett. (3)

Optica (5)

Optik (1)

R. W. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

Optik (Stuttgart) (1)

R. Bates, “Fourier phase problems are uniquely solvable in mute than one dimension. I: underlying theory,” Optik (Stuttgart) 61, 247–262 (1982).

Phys. A (1)

I. Freund, “Looking through walls and around corners,” Phys. A 168, 49–65 (1990).
[Crossref]

Phys. Rev. Lett. (2)

A. Goy, K. Arthur, S. Li, and G. Barbastathis, “Low photon count phase retrieval using deep learning,” Phys. Rev. Lett. 121, 243902 (2018).
[Crossref]

I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988).
[Crossref]

Sci. Rep. (3)

M. J. Cherukara, Y. S. Nashed, and R. J. Harder, “Real-time coherent diffraction inversion using deep generative networks,” Sci. Rep. 8, 16520 (2018).
[Crossref]

J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6, 32491 (2016).
[Crossref]

P. Caramazza, A. Boccolini, D. Buschek, M. Hullin, C. F. Higham, R. Henderson, R. Murray-Smith, and D. Faccio, “Neural network identification of people hidden from view with a single-pixel, single-photon detector,” Sci. Rep. 8, 11945 (2018).
[Crossref]

Other (28)

K. L. Bouman, V. Ye, A. B. Yedidia, F. Durand, G. W. Wornell, A. Torralba, and W. T. Freeman, “Turning corners into cameras: principles and methods,” in Proceedings of IEEE International Conference on Computer Vision (2017), Vol. 1, pp. 8.

B. M. Smith, M. O’Toole, and M. Gupta, “Tracking multiple objects outside the line of sight using speckle imaging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 6258–6266.

M. Tancik, G. Satat, and R. Raskar, “Flash photography for data-driven hidden scene recovery,” arXiv:1810.11710 (2018).

T. Maeda, G. Satat, T. Swedish, L. Sinha, and R. Raskar, “Recent advances in imaging around corners,” arXiv:1910.05613 (2019).

L. Parmesan, N. A. Dutton, N. J. Calder, A. J. Holmes, L. A. Grant, and R. K. Henderson, “A 9.8  µm sample and hold time to amplitude converter CMOS SPAD pixel,” in 44th European Solid State Device Research Conference (ESSDERC) (IEEE, 2014), pp. 290–293.

Y. Maruyama and E. Charbon, “A time-gated 128 ×128 CMOS SPAD array for on-chip fluorescence detection,” in Proceedings International Image Sensor Workshop (IISW) (2011).

F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 3222–3229.

C. A. Metzler, P. Schniter, A. Veeraraghavan, and R. G. Baraniuk, “prDeep: robust phase retrieval with a flexible deep network,” in Proceedings International Conference on Machine Learning (2018), pp. 3498–3507.

J. Fienup (private communication, 2017).

R. J. Muirhead, Aspects of Multivariate Statistical Theory (Wiley, 2009), Vol. 197.

D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proceedings of IEEE International Conference on Computer Vision (2001), Vol. 2, pp. 416–423.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (2016), pp. 770–778.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention (Springer, 2015), pp. 234–241.

A. Kappeler, S. Ghosh, J. Holloway, O. Cossairt, and A. Katsaggelos, “Ptychnet: CNN based Fourier ptychography,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2017), pp. 1712–1716.

L. Boominathan, M. Maniparambil, H. Gupta, R. Baburajan, and K. Mitra, “Phase retrieval for Fourier ptychography under varying amount of measurements,” arXiv:1805.03593 (2018).

Y. Chen and E. Candes, “Solving random quadratic systems of equations is nearly as easy as solving linear systems,” in Advances in Neural Information Processing Systems (2015), pp. 739–747.

P. Netrapalli, P. Jain, and S. Sanghavi, “Phase retrieval using alternating minimization,” in Advances in Neural Information Processing Systems (2013), pp. 2796–2804.

H. Zhang, Y. Chi, and Y. Liang, “Provable non-convex phase retrieval with outliers: median truncated Wirtinger flow,” in Proc. International Conference on Machine Learning (2016), pp. 1022–1031.

A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using transient imaging,” in Proceedings of IEEE International Conference on Computer Vision (2009), pp. 159–166.

A. Viswanath, P. Rangarajan, D. MacFarlane, and M. P. Christensen, “Indirect imaging using correlography,” in Computational Optical Sensing and Imaging (Optical Society of America, 2018), paper CM2E–3.

A. K. Pediredla, M. Buttafava, A. Tosi, O. Cossairt, and A. Veeraraghavan, “Reconstructing rooms using photon echoes: a plane based model and reconstruction algorithm for looking around the corner,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2017).

R. Pandharkar, A. Velten, A. Bardagjy, E. Lawson, M. Bawendi, and R. Raskar, “Estimating motion and size of moving non-line-of-sight objects in cluttered environments,” in Proc. of IEEE International Conference on Computer Vision and Pattern Recognition (2011), pp. 265–272.

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Real-time non-line-of-sight imaging,” in ACM SIGGRAPH 2018 Emerging Technologies (ACM, 2018), paper 14.

S. Xin, S. Nousias, K. N. Kutulakos, A. C. Sankaranarayanan, S. G. Narasimhan, and I. Gkioulekas, “A theory of Fermat paths for non-line-of-sight shape reconstruction,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 6800–6809.

M. R. Kellman, E. Bostan, N. Repina, M. Lustig, and L. Waller, “Physics-based learned design: optimized coded-illumination for quantitative phase imaging,” arXiv:1808.03571 (2018).

C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to see in the dark,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (2018).

D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv:1412.6980 (2014).

https://github.com/ricedsp/Deep_Inverse_Correlography

Supplementary Material (1)

Supplement 1: Supplemental document.



Figures (9)

Fig. 1. NLoS imaging setup. A camera uses light scattered off of a rough wall, known as a virtual detector, to reconstruct an image of the hidden object. When using a continuous-wave laser, the camera records speckle. Inset: NLoS correlography estimates an object’s autocorrelation from speckle images, then recovers the object’s shape from this autocorrelation estimate by solving a PR problem.

Fig. 2. Long-exposure NLoS correlography example. 25 nonoverlapping $400 \times 400$ speckle subimages were drawn from each of 50 distinct, 1 s exposure-length speckle images. These subimages were then used to estimate the hidden object’s autocorrelation (middle) using Eqs. (8) and (9). HIO [21] was then used to reconstruct the object’s albedo (right). (a) Hidden object; (b) speckle subimage; (c) estimate of $r \star r$; (d) estimate of $r$.

Fig. 3. Short-exposure NLoS correlography example. 25 nonoverlapping $400 \times 400$ speckle subimages were drawn from each of 2 distinct, 1/8 s exposure-length speckle images. These subimages were then used to estimate the hidden object’s autocorrelation (middle) using Eqs. (8) and (9). HIO [21] was then used to reconstruct the object’s albedo (right). (a) Hidden object; (b) speckle subimage; (c) estimate of $r \star r$; (d) estimate of $r$.

Fig. 4. Spending the photon budget. Simulated autocorrelation estimates of a Lambertian $1\;{\rm cm} \times 1\;{\rm cm}$ letter “V” using (b) 100 measurements of 0.1 s; (c) 10 measurements of 1 s; and (d) 1 measurement of 10 s. With many short-exposure measurements, the spatially invariant noise dominates. With too few measurements, shot-noise-like signal-dependent noise from the finite-sample error dominates. (a) Ground truth; (b) $100 \times 0.1\;{\rm s}$; (c) $10 \times 1\;{\rm s}$; (d) $1 \times 10\;{\rm s}$.

Fig. 5. Distribution of experimental autocorrelation estimates. Variance versus mean with varying exposure and fixed $N = 128$ (left), and variance over mean with fixed exposure and varying $N$ (right). As predicted by our model, Eq. (11), the variance of our $r \star r$ estimate grows quadratically with respect to its mean, and the ratio between the variance and the mean grows linearly with respect to $\frac{1}{N}$.

Fig. 6. Unstructured training data. Examples of images formed with a Canny edge detector (top) and their associated noisy autocorrelations (bottom).

Fig. 7. Noisy estimates of (a) $r \star r$, (b) $|{\cal F}(r)|$, and (c) the associated $r$. $r \star r$ and $r$ share similar features, whereas $|{\cal F}(r)|$ and $r$, for the most part, do not.

Fig. 8. Experimental setup. Light passes from the laser, to the virtual source, to the hidden object, to the virtual detector, and finally to the camera.

Fig. 9. Simulated and experimental reconstructions with varying exposure lengths. Because it is more robust to noise, the CNN-based method can operate with far less light, and thus at higher frame rates, than a system relying on traditional PR algorithms like HIO [21] or Alt-Min [24]. (The vertical/horizontal lines visible in the experimental short-exposure autocorrelation estimates are the result of correlated, fixed-pattern read noise between pixels. The 7 and F were measured at different orientations.)
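The captions above reference HIO phase retrieval for recovering $r$ from its autocorrelation estimate. For illustration only, here is a minimal numpy sketch of a Fienup-style hybrid input-output loop; the function name `hio`, the fixed support mask, the nonnegativity constraint, and all parameter values are assumptions for this example, not the paper's implementation:

```python
import numpy as np

def hio(magnitude, support, n_iters=200, beta=0.9, seed=0):
    """Hybrid input-output phase retrieval from Fourier magnitudes.

    magnitude : 2D array of measured Fourier magnitudes |F(r)|.
    support   : boolean mask of pixels where the object may be nonzero.
    """
    rng = np.random.default_rng(seed)
    # Initialize with the measured magnitudes and random phases.
    init_phase = np.exp(2j * np.pi * rng.random(magnitude.shape))
    g = np.real(np.fft.ifft2(magnitude * init_phase))
    for _ in range(n_iters):
        # Fourier-domain step: keep the current phase, impose |F(r)|.
        G = np.fft.fft2(g)
        g_prime = np.real(np.fft.ifft2(magnitude * np.exp(1j * np.angle(G))))
        # Object-domain step: accept g' where the constraints hold;
        # elsewhere, push the iterate away from the violation.
        ok = support & (g_prime >= 0)
        g = np.where(ok, g_prime, g - beta * g_prime)
    return g
```

In correlography, the input magnitudes would come from the autocorrelation estimate via the Wiener-Khinchin relation $\mathcal{F}(r \star r) = |\mathcal{F}(r)|^2$, i.e., the square root of the (clipped-to-nonnegative) Fourier transform of the estimated $r \star r$.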

Equations (15)


$ {\cal F}(r \star r) = |{\cal F}(r)|^2 , $

$ E_{\rm VS}^{\rm in}(x_{\rm VS}) = 1 . $

$ E_{\rm VS}^{\rm out}(x_{\rm VS}) = e^{j \theta_{\rm VS}^{\rm out}(x_{\rm VS})} , $

$ E_{O}^{\rm in} \propto {\cal F}(E_{\rm VS}^{\rm out}) , $

$ E_{O}^{\rm out}(x_O) = E_{O}^{\rm in}(x_O) \, r(x_O) , $

$ E_{\rm VD}^{\rm in} \propto {\cal F}(E_{O}^{\rm out}) . $

$ I \propto |{\cal F}(E_{O}^{\rm out})|^2 . $

$ \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} |{\cal F}^{-1}(I_n)|^2 = r \star r(\Delta x_O) + \delta(\Delta x_O) \Big[ \int_{x_1} r(x_1) \, {\rm d}x_1 \Big]^2 , $

$ \lim_{N \to \infty} \Big| \frac{1}{N} \sum_{n=1}^{N} {\cal F}^{-1}(I_n) \Big|^2 = \delta(\Delta x_O) \Big[ \int_{x_1} r(x_1) \, {\rm d}x_1 \Big]^2 . $

$ \widehat{r \star r}(\Delta x) \triangleq \frac{1}{N} \sum_{n=1}^{N} |{\cal F}^{-1}(I_n)|^2 (\Delta x) . $

$ \widehat{r \star r}(\Delta x) \sim {\cal N}\big( \mu(\Delta x), \sigma^2(\Delta x) \big) $

$ \widehat{r \star r}(\Delta x) \sim {\cal N}\big( S(\Delta x), \tfrac{\gamma}{N} S^2(\Delta x) \big) , $

$ S(\Delta x) = H(r \star r(\Delta x) + b) , $

$ S(\Delta x) = r \star r(\Delta x) + b . $

$ \widehat{r \star r}(\Delta x) \sim {\cal N}\big( r \star r(\Delta x) + b, \tfrac{\gamma}{N} (r \star r(\Delta x) + b)^2 \big) \approx r \star r(\Delta x) + {\cal N}\big( 0, \tfrac{\gamma}{N} (r \star r(\Delta x))^2 \big) + {\cal N}\big( b, \tfrac{\gamma}{N} b^2 \big) . $
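The two limiting relations above suggest a finite-$N$ estimator: average $|{\cal F}^{-1}(I_n)|^2$ over the speckle frames and subtract $\big|\frac{1}{N}\sum_n {\cal F}^{-1}(I_n)\big|^2$ to suppress the delta-function background. A sketch in numpy; the function name and the synthetic-frame usage are illustrative assumptions, not the paper's code:

```python
import numpy as np

def estimate_autocorrelation(frames):
    """Estimate r * r from a stack of speckle intensity images I_n.

    The mean of |F^{-1}(I_n)|^2 converges to r * r plus a delta-like
    background term; the squared magnitude of the mean of F^{-1}(I_n)
    converges to that background alone, so their difference gives a
    (noisy, finite-N) estimate of the autocorrelation.
    """
    inv = np.array([np.fft.ifft2(I) for I in frames])
    mean_power = np.mean(np.abs(inv) ** 2, axis=0)
    background = np.abs(np.mean(inv, axis=0)) ** 2
    # Center the zero-shift term for display, as in the figures above.
    return np.fft.fftshift(mean_power - background)
```

In the experiments of Figs. 2 and 3, `frames` would hold the nonoverlapping $400 \times 400$ speckle subimages cut from the raw camera exposures.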
