Abstract

The adaptive optics (AO) technique is widely used to compensate for ocular aberrations and improve imaging resolution. However, intraocular scatter, speckle noise, and other factors still degrade the quality of retinal images. To improve image quality effectively without increasing the complexity of the imaging system, image deblurring can be applied as a post-processing step. In this study, we propose a conditional adversarial network-based method that directly learns an end-to-end mapping between blurry and restored AO retinal images. The proposed model was validated on both synthetically generated and real AO retinal images. The restoration results on synthetic images were evaluated with the metrics of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), perceptual distance, and the error rate of cone counting. Moreover, the blind image quality index (BIQI) was used as a no-reference image quality assessment (NR-IQA) algorithm to evaluate the restoration results on real AO retinal images. The experimental results indicate that the images restored by the proposed method are sharper and have a higher signal-to-noise ratio (SNR) than those produced by other state-of-the-art methods, which is of great practical significance for clinical research and analysis.
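Of the full-reference metrics named above, PSNR is the most straightforward to reproduce. As a rough illustration only, the following minimal NumPy sketch computes PSNR between a reference patch and a noise-degraded copy; the synthetic images and noise level here are hypothetical stand-ins, not the paper's data or method:

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# Stand-ins for a ground-truth and a degraded 8-bit grayscale image patch.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
degraded = np.clip(reference + rng.normal(0, 8, reference.shape), 0, 255).astype(np.uint8)

print(f"PSNR = {psnr(reference, degraded):.2f} dB")
```

For Gaussian noise of standard deviation 8 on 8-bit data, the PSNR lands near 30 dB; SSIM, perceptual distance, and BIQI require more involved implementations and are typically taken from existing libraries.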

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement



2019 (1)

W. Li, Y. He, W. Kong, F. Gao, J. Wang, and G. Shi, “Enhancement of Retinal Image From Line-Scanning Ophthalmoscope Using Generative Adversarial Networks,” IEEE Access 7, 99830–99841 (2019).
[Crossref]

2018 (2)

B. Davidson, A. Kalitzeos, J. Carroll, A. Dubra, S. Ourselin, M. Michaelides, and C. Bergeles, “Automatic Cone Photoreceptor Localisation in Healthy and Stargardt Afflicted Retinas Using Deep Learning,” Sci. Rep. 8(1), 7911 (2018).
[Crossref]

J. Lu, B. Gu, X. Wang, and Y. Zhang, “High speed adaptive optics ophthalmoscopy with an anamorphic point spread function,” Opt. Express 26(11), 14356–14374 (2018).
[Crossref]

2017 (2)

2016 (1)

Y. He, G. Deng, L. Wei, X. Li, J. Yang, G. Shi, and Y. Zhang, “Design of a Compact, Bimorph Deformable Mirror-Based Adaptive Optics Scanning Laser Ophthalmoscope,” Adv. Exp. Med. Biol. 923, 375–383 (2016).
[Crossref]

2015 (1)

2014 (1)

M. N. Muthiah, C. Gias, F. K. Chen, J. Zhong, Z. McClelland, F. B. Sallo, T. Peto, P. J. Coffey, and L. da Cruz, “Cone photoreceptor definition on adaptive optics retinal imaging,” Br. J. Ophthalmol. 98(8), 1073–1079 (2014).
[Crossref]

2013 (1)

R. F. Cooper, C. S. Langlo, A. Dubra, and J. Carroll, “Automatic detection of modal spacing (Yellott’s ring) in adaptive optics scanning light ophthalmoscope images,” Ophthalmic Physiol. Opt. 33(4), 540–549 (2013).
[Crossref]

2011 (3)

H. Li, J. Lu, G. Shi, and Y. Zhang, “Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy,” Opt. Commun. 284(13), 3258–3263 (2011).
[Crossref]

J. Arines, “Partially compensated deconvolution from wavefront sensing images of the eye fundus,” Opt. Commun. 284(6), 1548–1552 (2011).
[Crossref]

H. Song, C. T. Y. Ping, Z. Zhangyi, A. E. Elsner, and S. A. Burns, “Variation of Cone Photoreceptor Packing Density with Retinal Eccentricity and Age,” Invest. Ophthalmol. Visual Sci. 52(10), 7376–7384 (2011).
[Crossref]

2010 (1)

A. K. Moorthy and A. C. Bovik, “A Two-Step Framework for Constructing Blind Image Quality Indices,” IEEE Signal Process. Lett. 17(5), 513–516 (2010).
[Crossref]

2007 (1)

2006 (1)

2005 (1)

V. Nourrit, B. Vohnsen, and P. Artal, “Blind deconvolution for high-resolution confocal scanning laser ophthalmoscopy,” J. Opt. 7(10), 585–592 (2005).
[Crossref]

2004 (1)

2002 (3)

1998 (1)

1997 (1)

Acosta, A.

C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, and Z. Wang, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 105–114.

Aitken, A. P.

C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, and Z. Wang, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 105–114.

Arines, J.

J. Arines, “Partially compensated deconvolution from wavefront sensing images of the eye fundus,” Opt. Commun. 284(6), 1548–1552 (2011).
[Crossref]

Arjovsky, M.

M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein gan,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 1–32.

Artal, P.

V. Nourrit, B. Vohnsen, and P. Artal, “Blind deconvolution for high-resolution confocal scanning laser ophthalmoscopy,” J. Opt. 7(10), 585–592 (2005).
[Crossref]

F. Vargasmartin, P. M. Prieto, and P. Artal, “Correction of the aberrations in the human eye with a liquid-crystal spatial light modulator: limits to performance,” J. Opt. Soc. Am. A 15(9), 2552–2562 (1998).
[Crossref]

Asad, M.

A. Lazareva, M. Asad, and G. G. Slabaugh, “Learning to Deblur Adaptive Optics Retinal Images,” in International Conference on Image Analysis and Recognition (2017), pp. 497–506.

Ba, J.

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proceedings of International Conference for Learning Representations (ICLR) (2015), pp. 1–15.

Bao, H.

C. Rao, Y. Tian, and H. Bao, AO-Based High Resolution Image Post-Process (InTech, 2012).

Bara, S.

Bengio, Y.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (2014), pp. 2672–2680.

Bergeles, C.

B. Davidson, A. Kalitzeos, J. Carroll, A. Dubra, S. Ourselin, M. Michaelides, and C. Bergeles, “Automatic Cone Photoreceptor Localisation in Healthy and Stargardt Afflicted Retinas Using Deep Learning,” Sci. Rep. 8(1), 7911 (2018).
[Crossref]

Bigelow, C. E.

Bottou, L.

M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein gan,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 1–32.

Bovik, A. C.

A. K. Moorthy and A. C. Bovik, “A Two-Step Framework for Constructing Blind Image Quality Indices,” IEEE Signal Process. Lett. 17(5), 513–516 (2010).
[Crossref]

Bradley, A.

L. N. Thibos, A. Bradley, and X. Hong, “A statistical model of the aberration structure of normal, well-corrected eyes,” Oph. Phys. Optics 22(5), 427–433 (2002).
[Crossref]

Budzan, V.

O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, and J. Matas, “DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 8183–8192.

Burns, S. A.

Caballero, J.

C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, and Z. Wang, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 105–114.

Campbell, M. C. W.

Carroll, J.

B. Davidson, A. Kalitzeos, J. Carroll, A. Dubra, S. Ourselin, M. Michaelides, and C. Bergeles, “Automatic Cone Photoreceptor Localisation in Healthy and Stargardt Afflicted Retinas Using Deep Learning,” Sci. Rep. 8(1), 7911 (2018).
[Crossref]

R. F. Cooper, C. S. Langlo, A. Dubra, and J. Carroll, “Automatic detection of modal spacing (Yellott’s ring) in adaptive optics scanning light ophthalmoscope images,” Ophthalmic Physiol. Opt. 33(4), 540–549 (2013).
[Crossref]

Chen, F. K.

M. N. Muthiah, C. Gias, F. K. Chen, J. Zhong, Z. McClelland, F. B. Sallo, T. Peto, P. J. Coffey, and L. da Cruz, “Cone photoreceptor definition on adaptive optics retinal imaging,” Br. J. Ophthalmol. 98(8), 1073–1079 (2014).
[Crossref]

Chen, Z.

Z. Chen and Y. Tong, “Face super-resolution through wasserstein gans,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 1–8.

Chintala, S.

M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein gan,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 1–32.

Christou, J. C.

Coffey, P. J.

M. N. Muthiah, C. Gias, F. K. Chen, J. Zhong, Z. McClelland, F. B. Sallo, T. Peto, P. J. Coffey, and L. da Cruz, “Cone photoreceptor definition on adaptive optics retinal imaging,” Br. J. Ophthalmol. 98(8), 1073–1079 (2014).
[Crossref]

Cooper, R. F.

R. F. Cooper, C. S. Langlo, A. Dubra, and J. Carroll, “Automatic detection of modal spacing (Yellott’s ring) in adaptive optics scanning light ophthalmoscope images,” Ophthalmic Physiol. Opt. 33(4), 540–549 (2013).
[Crossref]

Couprie, C.

M. Mathieu, C. Couprie, and Y. LeCun, “Deep multi-scale video prediction beyond mean square error,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 1–14.

Courville, A.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (2014), pp. 2672–2680.

Cunningham, A.

C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, and Z. Wang, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 105–114.

da Cruz, L.

M. N. Muthiah, C. Gias, F. K. Chen, J. Zhong, Z. McClelland, F. B. Sallo, T. Peto, P. J. Coffey, and L. da Cruz, “Cone photoreceptor definition on adaptive optics retinal imaging,” Br. J. Ophthalmol. 98(8), 1073–1079 (2014).
[Crossref]

Darrell, T.

D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 2536–2544.

Davidson, B.

B. Davidson, A. Kalitzeos, J. Carroll, A. Dubra, S. Ourselin, M. Michaelides, and C. Bergeles, “Automatic Cone Photoreceptor Localisation in Healthy and Stargardt Afflicted Retinas Using Deep Learning,” Sci. Rep. 8(1), 7911 (2018).
[Crossref]

Deng, G.

Y. He, G. Deng, L. Wei, X. Li, J. Yang, G. Shi, and Y. Zhang, “Design of a Compact, Bimorph Deformable Mirror-Based Adaptive Optics Scanning Laser Ophthalmoscope,” Adv. Exp. Med. Biol. 923, 375–383 (2016).
[Crossref]

Devaney, N.

Donahue, J.

D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 2536–2544.

Donnelly, W. J.

Dubra, A.

B. Davidson, A. Kalitzeos, J. Carroll, A. Dubra, S. Ourselin, M. Michaelides, and C. Bergeles, “Automatic Cone Photoreceptor Localisation in Healthy and Stargardt Afflicted Retinas Using Deep Learning,” Sci. Rep. 8(1), 7911 (2018).
[Crossref]

R. F. Cooper, C. S. Langlo, A. Dubra, and J. Carroll, “Automatic detection of modal spacing (Yellott’s ring) in adaptive optics scanning light ophthalmoscope images,” Ophthalmic Physiol. Opt. 33(4), 540–549 (2013).
[Crossref]

Efros, A. A.

P. Isola, J. Zhu, T. Zhou, and A. A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 1–17.

R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in European Conference on Computer Vision (ECCV) (Springer, 2016), pp. 649–666.

D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 2536–2544.

R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The unreasonable effectiveness of deep features as a perceptual metric,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 586–595.

Elsner, A. E.

Fei, X.

Ferguson, D.

Ferguson, R. D.

Gao, F.

W. Li, Y. He, W. Kong, F. Gao, J. Wang, and G. Shi, “Enhancement of Retinal Image From Line-Scanning Ophthalmoscope Using Generative Adversarial Networks,” IEEE Access 7, 99830–99841 (2019).
[Crossref]

Gao, H.

X. Tao, H. Gao, Y. Wang, X. Shen, J. Wang, and J. Jia, “Scale-recurrent Network for Deep Image Deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 1–9.

Gias, C.

M. N. Muthiah, C. Gias, F. K. Chen, J. Zhong, Z. McClelland, F. B. Sallo, T. Peto, P. J. Coffey, and L. da Cruz, “Cone photoreceptor definition on adaptive optics retinal imaging,” Br. J. Ophthalmol. 98(8), 1073–1079 (2014).
[Crossref]

Goodfellow, I.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (2014), pp. 2672–2680.

Gu, B.

Hammer, D. X.

He, Y.

W. Li, Y. He, W. Kong, F. Gao, J. Wang, and G. Shi, “Enhancement of Retinal Image From Line-Scanning Ophthalmoscope Using Generative Adversarial Networks,” IEEE Access 7, 99830–99841 (2019).
[Crossref]

Y. Wang, Y. He, L. Wei, X. Li, J. Yang, H. Zhou, and Y. Zhang, “Bimorph deformable mirror based adaptive optics scanning laser ophthalmoscope for retina imaging in vivo,” Chin. Opt. Lett. 15(12), 121102 (2017).
[Crossref]

Y. He, G. Deng, L. Wei, X. Li, J. Yang, G. Shi, and Y. Zhang, “Design of a Compact, Bimorph Deformable Mirror-Based Adaptive Optics Scanning Laser Ophthalmoscope,” Adv. Exp. Med. Biol. 923, 375–383 (2016).
[Crossref]

Hebert, T. J.

Hong, M.

X. Yu, Y. Qu, and M. Hong, “Underwater-GAN: Underwater Image Restoration via Conditional Generative Adversarial Network,” in International Conference on Pattern Recognition (ICPR) (2018), pp. 66–75.

Hong, X.

L. N. Thibos, A. Bradley, and X. Hong, “A statistical model of the aberration structure of normal, well-corrected eyes,” Oph. Phys. Optics 22(5), 427–433 (2002).
[Crossref]

Huszar, F.

C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, and Z. Wang, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 105–114.

Iftimia, N.

Isola, P.

P. Isola, J. Zhu, T. Zhou, and A. A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 1–17.

R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in European Conference on Computer Vision (ECCV) (Springer, 2016), pp. 649–666.

R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The unreasonable effectiveness of deep features as a perceptual metric,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 586–595.

Jia, J.

X. Tao, H. Gao, Y. Wang, X. Shen, J. Wang, and J. Jia, “Scale-recurrent Network for Deep Image Deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 1–9.

Kalitzeos, A.

B. Davidson, A. Kalitzeos, J. Carroll, A. Dubra, S. Ourselin, M. Michaelides, and C. Bergeles, “Automatic Cone Photoreceptor Localisation in Healthy and Stargardt Afflicted Retinas Using Deep Learning,” Sci. Rep. 8(1), 7911 (2018).
[Crossref]

Katsaggelos, A. K.

A. Lucas, S. L. Tapia, R. Molina, and A. K. Katsaggelos, “Generative Adversarial Networks and Perceptual Losses for Video Super-Resolution,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 51–55.

Kim, T. H.

S. Nah, T. H. Kim, and K. M. Lee, “Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 257–265.

Kingma, D. P.

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proceedings of International Conference for Learning Representations (ICLR) (2015), pp. 1–15.

Kong, W.

W. Li, Y. He, W. Kong, F. Gao, J. Wang, and G. Shi, “Enhancement of Retinal Image From Line-Scanning Ophthalmoscope Using Generative Adversarial Networks,” IEEE Access 7, 99830–99841 (2019).
[Crossref]

Krahenbuhl, P.

D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 2536–2544.

Kupyn, O.

O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, and J. Matas, “DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 8183–8192.

Langlo, C. S.

R. F. Cooper, C. S. Langlo, A. Dubra, and J. Carroll, “Automatic detection of modal spacing (Yellott’s ring) in adaptive optics scanning light ophthalmoscope images,” Ophthalmic Physiol. Opt. 33(4), 540–549 (2013).
[Crossref]

Lazareva, A.

A. Lazareva, M. Asad, and G. G. Slabaugh, “Learning to Deblur Adaptive Optics Retinal Images,” in International Conference on Image Analysis and Recognition (2017), pp. 497–506.

LeCun, Y.

M. Mathieu, C. Couprie, and Y. LeCun, “Deep multi-scale video prediction beyond mean square error,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 1–14.

Ledig, C.

C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, and Z. Wang, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 105–114.

Lee, K. M.

S. Nah, T. H. Kim, and K. M. Lee, “Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 257–265.

Li, C.

C. Li and M. Wand, “Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks,” in European Conference on Computer Vision (ECCV) (2016), pp. 702–716.

Li, H.

H. Li, J. Lu, G. Shi, and Y. Zhang, “Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy,” Opt. Commun. 284(13), 3258–3263 (2011).
[Crossref]

J. Lu, H. Li, L. Wei, G. Shi, and Y. Zhang, “Retina imaging in vivo with the adaptive optics confocal scanning laser ophthalmoscope,” in IEEE International Conference on Photonics (2009), 7519.

Li, W.

W. Li, Y. He, W. Kong, F. Gao, J. Wang, and G. Shi, “Enhancement of Retinal Image From Line-Scanning Ophthalmoscope Using Generative Adversarial Networks,” IEEE Access 7, 99830–99841 (2019).
[Crossref]

Li, X.

Y. Wang, Y. He, L. Wei, X. Li, J. Yang, H. Zhou, and Y. Zhang, “Bimorph deformable mirror based adaptive optics scanning laser ophthalmoscope for retina imaging in vivo,” Chin. Opt. Lett. 15(12), 121102 (2017).
[Crossref]

Y. He, G. Deng, L. Wei, X. Li, J. Yang, G. Shi, and Y. Zhang, “Design of a Compact, Bimorph Deformable Mirror-Based Adaptive Optics Scanning Laser Ophthalmoscope,” Adv. Exp. Med. Biol. 923, 375–383 (2016).
[Crossref]

Liang, J.

Lu, J.

J. Lu, B. Gu, X. Wang, and Y. Zhang, “High speed adaptive optics ophthalmoscopy with an anamorphic point spread function,” Opt. Express 26(11), 14356–14374 (2018).
[Crossref]

H. Li, J. Lu, G. Shi, and Y. Zhang, “Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy,” Opt. Commun. 284(13), 3258–3263 (2011).
[Crossref]

J. Lu, H. Li, L. Wei, G. Shi, and Y. Zhang, “Retina imaging in vivo with the adaptive optics confocal scanning laser ophthalmoscope,” in IEEE International Conference on Photonics (2009), 7519.

Lucas, A.

A. Lucas, S. L. Tapia, R. Molina, and A. K. Katsaggelos, “Generative Adversarial Networks and Perceptual Losses for Video Super-Resolution,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 51–55.

Marcos, S.

Mariotti, L.

Matas, J.

O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, and J. Matas, “DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 8183–8192.

Mathieu, M.

M. Mathieu, C. Couprie, and Y. LeCun, “Deep multi-scale video prediction beyond mean square error,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 1–14.

McClelland, Z.

M. N. Muthiah, C. Gias, F. K. Chen, J. Zhong, Z. McClelland, F. B. Sallo, T. Peto, P. J. Coffey, and L. da Cruz, “Cone photoreceptor definition on adaptive optics retinal imaging,” Br. J. Ophthalmol. 98(8), 1073–1079 (2014).
[Crossref]

Michaelides, M.

B. Davidson, A. Kalitzeos, J. Carroll, A. Dubra, S. Ourselin, M. Michaelides, and C. Bergeles, “Automatic Cone Photoreceptor Localisation in Healthy and Stargardt Afflicted Retinas Using Deep Learning,” Sci. Rep. 8(1), 7911 (2018).
[Crossref]

Miller, D. T.

J. Liang, D. R. Williams, and D. T. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” J. Opt. Soc. Am. A 14(11), 2884–2892 (1997).
[Crossref]

D. T. Miller and A. Roorda, “Adaptive Optics in Retinal Microscopy and Vision,” in Handbook of Optics (McGaw-Hill, 2009).

Mirza, M.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (2014), pp. 2672–2680.

Mishkin, D.

O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, and J. Matas, “DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 8183–8192.

Molina, R.

A. Lucas, S. L. Tapia, R. Molina, and A. K. Katsaggelos, “Generative Adversarial Networks and Perceptual Losses for Video Super-Resolution,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 51–55.

Moorthy, A. K.

A. K. Moorthy and A. C. Bovik, “A Two-Step Framework for Constructing Blind Image Quality Indices,” IEEE Signal Process. Lett. 17(5), 513–516 (2010).
[Crossref]

Muthiah, M. N.

M. N. Muthiah, C. Gias, F. K. Chen, J. Zhong, Z. McClelland, F. B. Sallo, T. Peto, P. J. Coffey, and L. da Cruz, “Cone photoreceptor definition on adaptive optics retinal imaging,” Br. J. Ophthalmol. 98(8), 1073–1079 (2014).
[Crossref]

Mykhailych, M.

O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, and J. Matas, “DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 8183–8192.

Nah, S.

S. Nah, T. H. Kim, and K. M. Lee, “Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 257–265.

Nourrit, V.

V. Nourrit, B. Vohnsen, and P. Artal, “Blind deconvolution for high-resolution confocal scanning laser ophthalmoscopy,” J. Opt. 7(10), 585–592 (2005).
[Crossref]

Osindero, S.

M. Mirza and S. Osindero, “Conditional Generative Adversarial Nets,” arXiv:1411.1784 (2014), pp. 1–7.

Ourselin, S.

B. Davidson, A. Kalitzeos, J. Carroll, A. Dubra, S. Ourselin, M. Michaelides, and C. Bergeles, “Automatic Cone Photoreceptor Localisation in Healthy and Stargardt Afflicted Retinas Using Deep Learning,” Sci. Rep. 8(1), 7911 (2018).
[Crossref]

Ozair, S.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (2014), pp. 2672–2680.

Pathak, D.

D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 2536–2544.

Peto, T.

M. N. Muthiah, C. Gias, F. K. Chen, J. Zhong, Z. McClelland, F. B. Sallo, T. Peto, P. J. Coffey, and L. da Cruz, “Cone photoreceptor definition on adaptive optics retinal imaging,” Br. J. Ophthalmol. 98(8), 1073–1079 (2014).
[Crossref]

Ping, C. T. Y.

H. Song, C. T. Y. Ping, Z. Zhangyi, A. E. Elsner, and S. A. Burns, “Variation of Cone Photoreceptor Packing Density with Retinal Eccentricity and Age,” Invest. Ophthalmol. Visual Sci. 52(10), 7376–7384 (2011).
[Crossref]

Pouget-Abadie, J.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (2014), pp. 2672–2680.

Prieto, P. M.

Qu, Y.

X. Yu, Y. Qu, and M. Hong, “Underwater-GAN: Underwater Image Restoration via Conditional Generative Adversarial Network,” in International Conference on Pattern Recognition (ICPR) (2018), pp. 66–75.

Queener, H. M.

Rao, C.

C. Rao, Y. Tian, and H. Bao, AO-Based High Resolution Image Post-Process (InTech, 2012).

Romeroborja, F.

Roorda, A.

D. T. Miller and A. Roorda, “Adaptive Optics in Retinal Microscopy and Vision,” in Handbook of Optics (McGraw-Hill, 2009).

Sallo, F. B.

M. N. Muthiah, C. Gias, F. K. Chen, J. Zhong, Z. McClelland, F. B. Sallo, T. Peto, P. J. Coffey, and L. da Cruz, “Cone photoreceptor definition on adaptive optics retinal imaging,” Br. J. Ophthalmol. 98(8), 1073–1079 (2014).
[Crossref]

Shechtman, E.

R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The unreasonable effectiveness of deep features as a perceptual metric,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 586–595.

Shen, X.

X. Tao, H. Gao, Y. Wang, X. Shen, J. Wang, and J. Jia, “Scale-recurrent Network for Deep Image Deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 1–9.

Shi, G.

W. Li, Y. He, W. Kong, F. Gao, J. Wang, and G. Shi, “Enhancement of Retinal Image From Line-Scanning Ophthalmoscope Using Generative Adversarial Networks,” IEEE Access 7, 99830–99841 (2019).
[Crossref]

Y. He, G. Deng, L. Wei, X. Li, J. Yang, G. Shi, and Y. Zhang, “Design of a Compact, Bimorph Deformable Mirror-Based Adaptive Optics Scanning Laser Ophthalmoscope,” Adv. Exp. Med. Biol. 923, 375–383 (2016).
[Crossref]

H. Li, J. Lu, G. Shi, and Y. Zhang, “Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy,” Opt. Commun. 284(13), 3258–3263 (2011).
[Crossref]

J. Lu, H. Li, L. Wei, G. Shi, and Y. Zhang, “Retina imaging in vivo with the adaptive optics confocal scanning laser ophthalmoscope,” in IEEE International Conference on Photonics (2009), 7519.

Simonyan, K.

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Proceedings of the International Conference on Learning Representations (ICLR) (2015), pp. 1–14.

Slabaugh, G. G.

A. Lazareva, M. Asad, and G. G. Slabaugh, “Learning to Deblur Adaptive Optics Retinal Images,” in International Conference on Image Analysis and Recognition (2017), pp. 497–506.

Song, H.

H. Song, C. T. Y. Ping, Z. Zhangyi, A. E. Elsner, and S. A. Burns, “Variation of Cone Photoreceptor Packing Density with Retinal Eccentricity and Age,” Invest. Ophthalmol. Visual Sci. 52(10), 7376–7384 (2011).
[Crossref]

Tao, X.

X. Tao, H. Gao, Y. Wang, X. Shen, J. Wang, and J. Jia, “Scale-recurrent Network for Deep Image Deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 1–9.

Tapia, S. L.

A. Lucas, S. L. Tapia, R. Molina, and A. K. Katsaggelos, “Generative Adversarial Networks and Perceptual Losses for Video Super-Resolution,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 51–55.

Tejani, A.

C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, and Z. Wang, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 105–114.

Theis, L.

C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, and Z. Wang, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 105–114.

Thibos, L. N.

L. N. Thibos, A. Bradley, and X. Hong, “A statistical model of the aberration structure of normal, well-corrected eyes,” Ophthalmic Physiol. Opt. 22(5), 427–433 (2002).
[Crossref]

Tian, Y.

C. Rao, Y. Tian, and H. Bao, AO-Based High Resolution Image Post-Process (InTech, 2012).

Tong, Y.

Z. Chen and Y. Tong, “Face super-resolution through wasserstein gans,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 1–8.

Totz, J.

C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, and Z. Wang, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 105–114.

Tumbar, R.

Ustun, T. E.

Vargasmartin, F.

Vohnsen, B.

V. Nourrit, B. Vohnsen, and P. Artal, “Blind deconvolution for high-resolution confocal scanning laser ophthalmoscopy,” J. Opt. 7(10), 585–592 (2005).
[Crossref]

Wand, M.

C. Li and M. Wand, “Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks,” in European Conference on Computer Vision (ECCV) (2016), pp. 702–716.

Wandell, B. A.

B. A. Wandell, Foundations of Vision (Sinauer Associates Inc, 1995), Chap. 3.

Wang, J.

W. Li, Y. He, W. Kong, F. Gao, J. Wang, and G. Shi, “Enhancement of Retinal Image From Line-Scanning Ophthalmoscope Using Generative Adversarial Networks,” IEEE Access 7, 99830–99841 (2019).
[Crossref]

X. Tao, H. Gao, Y. Wang, X. Shen, J. Wang, and J. Jia, “Scale-recurrent Network for Deep Image Deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 1–9.

Wang, O.

R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The unreasonable effectiveness of deep features as a perceptual metric,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 586–595.

Wang, X.

J. Lu, B. Gu, X. Wang, and Y. Zhang, “High speed adaptive optics ophthalmoscopy with an anamorphic point spread function,” Opt. Express 26(11), 14356–14374 (2018).
[Crossref]

Wang, Y.

Y. Wang, Y. He, L. Wei, X. Li, J. Yang, H. Zhou, and Y. Zhang, “Bimorph deformable mirror based adaptive optics scanning laser ophthalmoscope for retina imaging in vivo,” Chin. Opt. Lett. 15(12), 121102 (2017).
[Crossref]

X. Tao, H. Gao, Y. Wang, X. Shen, J. Wang, and J. Jia, “Scale-recurrent Network for Deep Image Deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 1–9.

Wang, Z.

C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, and Z. Wang, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 105–114.

Warde-Farley, D.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (2014), pp. 2672–2680.

Wei, L.

Y. Wang, Y. He, L. Wei, X. Li, J. Yang, H. Zhou, and Y. Zhang, “Bimorph deformable mirror based adaptive optics scanning laser ophthalmoscope for retina imaging in vivo,” Chin. Opt. Lett. 15(12), 121102 (2017).
[Crossref]

Y. He, G. Deng, L. Wei, X. Li, J. Yang, G. Shi, and Y. Zhang, “Design of a Compact, Bimorph Deformable Mirror-Based Adaptive Optics Scanning Laser Ophthalmoscope,” Adv. Exp. Med. Biol. 923, 375–383 (2016).
[Crossref]

J. Lu, H. Li, L. Wei, G. Shi, and Y. Zhang, “Retina imaging in vivo with the adaptive optics confocal scanning laser ophthalmoscope,” in IEEE International Conference on Photonics (2009), 7519.

Williams, D. R.

J. Liang, D. R. Williams, and D. T. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” J. Opt. Soc. Am. A 14(11), 2884–2892 (1997).
[Crossref]

Xu, B.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (2014), pp. 2672–2680.

Yang, J.

Y. Wang, Y. He, L. Wei, X. Li, J. Yang, H. Zhou, and Y. Zhang, “Bimorph deformable mirror based adaptive optics scanning laser ophthalmoscope for retina imaging in vivo,” Chin. Opt. Lett. 15(12), 121102 (2017).
[Crossref]

Y. He, G. Deng, L. Wei, X. Li, J. Yang, G. Shi, and Y. Zhang, “Design of a Compact, Bimorph Deformable Mirror-Based Adaptive Optics Scanning Laser Ophthalmoscope,” Adv. Exp. Med. Biol. 923, 375–383 (2016).
[Crossref]

Yu, X.

X. Yu, Y. Qu, and M. Hong, “Underwater-GAN: Underwater Image Restoration via Conditional Generative Adversarial Network,” in International Conference on Pattern Recognition (ICPR) (2018), pp. 66–75.

Yun, D.

X. Fei, J. Zhao, H. Zhao, D. Yun, and Y. Zhang, “Deblurring adaptive optics retinal images using deep convolutional neural networks,” Biomed. Opt. Express 8(12), 5675–5687 (2017).
[Crossref]

Zhang, R.

R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in European Conference on Computer Vision (ECCV) (Springer, 2016), pp. 649–666.

R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The unreasonable effectiveness of deep features as a perceptual metric,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 586–595.

Zhang, Y.

J. Lu, B. Gu, X. Wang, and Y. Zhang, “High speed adaptive optics ophthalmoscopy with an anamorphic point spread function,” Opt. Express 26(11), 14356–14374 (2018).
[Crossref]

Y. Wang, Y. He, L. Wei, X. Li, J. Yang, H. Zhou, and Y. Zhang, “Bimorph deformable mirror based adaptive optics scanning laser ophthalmoscope for retina imaging in vivo,” Chin. Opt. Lett. 15(12), 121102 (2017).
[Crossref]

X. Fei, J. Zhao, H. Zhao, D. Yun, and Y. Zhang, “Deblurring adaptive optics retinal images using deep convolutional neural networks,” Biomed. Opt. Express 8(12), 5675–5687 (2017).
[Crossref]

Y. He, G. Deng, L. Wei, X. Li, J. Yang, G. Shi, and Y. Zhang, “Design of a Compact, Bimorph Deformable Mirror-Based Adaptive Optics Scanning Laser Ophthalmoscope,” Adv. Exp. Med. Biol. 923, 375–383 (2016).
[Crossref]

H. Li, J. Lu, G. Shi, and Y. Zhang, “Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy,” Opt. Commun. 284(13), 3258–3263 (2011).
[Crossref]

J. Lu, H. Li, L. Wei, G. Shi, and Y. Zhang, “Retina imaging in vivo with the adaptive optics confocal scanning laser ophthalmoscope,” in IEEE International Conference on Photonics (2009), 7519.

Zhangyi, Z.

H. Song, C. T. Y. Ping, Z. Zhangyi, A. E. Elsner, and S. A. Burns, “Variation of Cone Photoreceptor Packing Density with Retinal Eccentricity and Age,” Invest. Ophthalmol. Visual Sci. 52(10), 7376–7384 (2011).
[Crossref]

Zhao, H.

X. Fei, J. Zhao, H. Zhao, D. Yun, and Y. Zhang, “Deblurring adaptive optics retinal images using deep convolutional neural networks,” Biomed. Opt. Express 8(12), 5675–5687 (2017).
[Crossref]

Zhao, J.

X. Fei, J. Zhao, H. Zhao, D. Yun, and Y. Zhang, “Deblurring adaptive optics retinal images using deep convolutional neural networks,” Biomed. Opt. Express 8(12), 5675–5687 (2017).
[Crossref]

Zhong, J.

M. N. Muthiah, C. Gias, F. K. Chen, J. Zhong, Z. McClelland, F. B. Sallo, T. Peto, P. J. Coffey, and L. da Cruz, “Cone photoreceptor definition on adaptive optics retinal imaging,” Br. J. Ophthalmol. 98(8), 1073–1079 (2014).
[Crossref]

Zhou, H.

Y. Wang, Y. He, L. Wei, X. Li, J. Yang, H. Zhou, and Y. Zhang, “Bimorph deformable mirror based adaptive optics scanning laser ophthalmoscope for retina imaging in vivo,” Chin. Opt. Lett. 15(12), 121102 (2017).
[Crossref]

Zhou, T.

P. Isola, J. Zhu, T. Zhou, and A. A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 1–17.

Zhu, J.

P. Isola, J. Zhu, T. Zhou, and A. A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 1–17.

Zisserman, A.

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014), pp. 1–14.

Adv. Exp. Med. Biol. (1)

Y. He, G. Deng, L. Wei, X. Li, J. Yang, G. Shi, and Y. Zhang, “Design of a Compact, Bimorph Deformable Mirror-Based Adaptive Optics Scanning Laser Ophthalmoscope,” Adv. Exp. Med. Biol. 923, 375–383 (2016).
[Crossref]

Biomed. Opt. Express (1)

X. Fei, J. Zhao, H. Zhao, D. Yun, and Y. Zhang, “Deblurring adaptive optics retinal images using deep convolutional neural networks,” Biomed. Opt. Express 8(12), 5675–5687 (2017).
[Crossref]

Br. J. Ophthalmol. (1)

M. N. Muthiah, C. Gias, F. K. Chen, J. Zhong, Z. McClelland, F. B. Sallo, T. Peto, P. J. Coffey, and L. da Cruz, “Cone photoreceptor definition on adaptive optics retinal imaging,” Br. J. Ophthalmol. 98(8), 1073–1079 (2014).
[Crossref]

Chin. Opt. Lett. (1)

Y. Wang, Y. He, L. Wei, X. Li, J. Yang, H. Zhou, and Y. Zhang, “Bimorph deformable mirror based adaptive optics scanning laser ophthalmoscope for retina imaging in vivo,” Chin. Opt. Lett. 15(12), 121102 (2017).
[Crossref]

IEEE Access (1)

W. Li, Y. He, W. Kong, F. Gao, J. Wang, and G. Shi, “Enhancement of Retinal Image From Line-Scanning Ophthalmoscope Using Generative Adversarial Networks,” IEEE Access 7, 99830–99841 (2019).
[Crossref]

IEEE Signal Process. Lett. (1)

A. K. Moorthy and A. C. Bovik, “A Two-Step Framework for Constructing Blind Image Quality Indices,” IEEE Signal Process. Lett. 17(5), 513–516 (2010).
[Crossref]

Invest. Ophthalmol. Visual Sci. (1)

H. Song, C. T. Y. Ping, Z. Zhangyi, A. E. Elsner, and S. A. Burns, “Variation of Cone Photoreceptor Packing Density with Retinal Eccentricity and Age,” Invest. Ophthalmol. Visual Sci. 52(10), 7376–7384 (2011).
[Crossref]

J. Opt. (1)

V. Nourrit, B. Vohnsen, and P. Artal, “Blind deconvolution for high-resolution confocal scanning laser ophthalmoscopy,” J. Opt. 7(10), 585–592 (2005).
[Crossref]

J. Opt. Soc. Am. A (5)

Ophthalmic Physiol. Opt. (2)

L. N. Thibos, A. Bradley, and X. Hong, “A statistical model of the aberration structure of normal, well-corrected eyes,” Ophthalmic Physiol. Opt. 22(5), 427–433 (2002).
[Crossref]

R. F. Cooper, C. S. Langlo, A. Dubra, and J. Carroll, “Automatic detection of modal spacing (Yellott’s ring) in adaptive optics scanning light ophthalmoscope images,” Ophthalmic Physiol. Opt. 33(4), 540–549 (2013).
[Crossref]

Opt. Commun. (2)

J. Arines, “Partially compensated deconvolution from wavefront sensing images of the eye fundus,” Opt. Commun. 284(6), 1548–1552 (2011).
[Crossref]

H. Li, J. Lu, G. Shi, and Y. Zhang, “Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy,” Opt. Commun. 284(13), 3258–3263 (2011).
[Crossref]

Opt. Express (3)

Opt. Lett. (1)

Sci. Rep. (1)

B. Davidson, A. Kalitzeos, J. Carroll, A. Dubra, S. Ourselin, M. Michaelides, and C. Bergeles, “Automatic Cone Photoreceptor Localisation in Healthy and Stargardt Afflicted Retinas Using Deep Learning,” Sci. Rep. 8(1), 7911 (2018).
[Crossref]

Other (23)

J. Lu, H. Li, L. Wei, G. Shi, and Y. Zhang, “Retina imaging in vivo with the adaptive optics confocal scanning laser ophthalmoscope,” in IEEE International Conference on Photonics (2009), 7519.

C. Rao, Y. Tian, and H. Bao, AO-Based High Resolution Image Post-Process (InTech, 2012).

A. Lazareva, M. Asad, and G. G. Slabaugh, “Learning to Deblur Adaptive Optics Retinal Images,” in International Conference on Image Analysis and Recognition (2017), pp. 497–506.

M. Mathieu, C. Couprie, and Y. LeCun, “Deep multi-scale video prediction beyond mean square error,” in Proceedings of the International Conference on Learning Representations (ICLR) (2016), pp. 1–14.

D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 2536–2544.

R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in European Conference on Computer Vision (ECCV) (Springer, 2016), pp. 649–666.

B. A. Wandell, Foundations of Vision (Sinauer Associates Inc, 1995), Chap. 3.

D. T. Miller and A. Roorda, “Adaptive Optics in Retinal Microscopy and Vision,” in Handbook of Optics (McGraw-Hill, 2009).

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (2014), pp. 2672–2680.

O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, and J. Matas, “DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 8183–8192.

X. Yu, Y. Qu, and M. Hong, “Underwater-GAN: Underwater Image Restoration via Conditional Generative Adversarial Network,” in International Conference on Pattern Recognition (ICPR) (2018), pp. 66–75.

M. Mirza and S. Osindero, “Conditional Generative Adversarial Nets,” arXiv:1411.1784 (2014), pp. 1–7.

P. Isola, J. Zhu, T. Zhou, and A. A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 1–17.

C. Li and M. Wand, “Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks,” in European Conference on Computer Vision (ECCV) (2016), pp. 702–716.

X. Tao, H. Gao, Y. Wang, X. Shen, J. Wang, and J. Jia, “Scale-recurrent Network for Deep Image Deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 1–9.

M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein GAN,” in Proceedings of the International Conference on Machine Learning (ICML) (2017), pp. 1–32.

Z. Chen and Y. Tong, “Face super-resolution through wasserstein gans,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 1–8.

C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, and Z. Wang, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 105–114.

S. Nah, T. H. Kim, and K. M. Lee, “Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 257–265.

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Proceedings of the International Conference on Learning Representations (ICLR) (2015), pp. 1–14.

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proceedings of the International Conference on Learning Representations (ICLR) (2015), pp. 1–15.

R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The unreasonable effectiveness of deep features as a perceptual metric,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 586–595.

A. Lucas, S. L. Tapia, R. Molina, and A. K. Katsaggelos, “Generative Adversarial Networks and Perceptual Losses for Video Super-Resolution,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 51–55.



Figures (7)

Fig. 1. Proposed model architecture. The model is composed of a U-Net-based generator and a PatchGAN-based discriminator. (conv: convolutional layer; BN: batch normalization layer; ReLU: rectified linear unit; LReLU: leaky rectified linear unit; Tanh: hyperbolic tangent function)

Fig. 2. Deblurring results of synthetic AO retinal images simulated at different eccentricities and noise levels. (a1)–(a3) Ground truth; (b1) blurry image (0.3 mm eccentricity from the foveal center, Gaussian noise with a standard deviation of 0.03); (b2) blurry image (0.5 mm eccentricity, standard deviation of 0.02); (b3) blurry image (1.0 mm eccentricity, standard deviation of 0.05). Restored images from (c1)–(c3) the ALM; (d1)–(d3) the DeblurGAN method; (e1)–(e3) the SRNdeblur method; and (f1)–(f3) the proposed method. (Zoom in on the figure for a closer look at the restoration quality.)

Fig. 3. Results of cone detection on synthetic retinal images. (a) Original blurry image (with small residual wavefront aberrations and Gaussian noise with a standard deviation of 0.01). Cone detection on (b) ground truth (1.5 mm eccentricity from the foveal center); (c) the blurry image; and images restored by (d) the ALM, (e) the DeblurGAN method, (f) the SRNdeblur method, and (g) the proposed method.

Fig. 4. Results of cone detection on synthetic retinal images. (a) Original blurry image (with large residual wavefront aberrations and Gaussian noise with a standard deviation of 0.05). Cone detection on (b) ground truth (1.0 mm eccentricity from the foveal center); (c) the blurry image; and images restored by (d) the ALM, (e) the DeblurGAN method, (f) the SRNdeblur method, and (g) the proposed method.

Fig. 5. Deblurring results of real retinal images captured by the AOSLO system [40]. (a) Original image (0.8 mm eccentricity from the foveal center). Restored images from (b) the ALM; (c) the DeblurGAN method; (d) the SRNdeblur method; and (e) the proposed method. (f) Corresponding image average power spectra. (Zoom in on the figure for a closer look at the restoration quality.)

Fig. 6. Deblurring results of real retinal images captured by the AOSLO system [41]. (a) Original image (1.2 mm eccentricity from the foveal center). Restored images from (b) the ALM; (c) the DeblurGAN method; (d) the SRNdeblur method; and (e) the proposed method. (f) Corresponding image average power spectra. (Zoom in on the figure for a closer look at the restoration quality.)

Fig. 7. Deblurring results of real retinal images captured by the AOSLO system [41]. (a) Original image (1.8 mm eccentricity from the foveal center). Restored images from (b) the ALM; (c) the DeblurGAN method; (d) the SRNdeblur method; and (e) the proposed method. (f) Corresponding image average power spectra. (Zoom in on the figure for a closer look at the restoration quality.)

Tables (8)

Table 1. Performance comparison with state-of-the-art methods tested on synthetic retinal images (* Data are expressed as mean ± standard deviation)

Table 2. Statistical comparison of the CV of different metrics for different methods (* Confidence interval is 95%)

Table 3. Performance comparison with state-of-the-art methods tested on real retinal images (* Data are expressed as mean ± standard deviation)

Table 4. Comparison of image restoration results for real retinal images captured from different systems (* Data are expressed as mean ± standard deviation)

Table 5. Comparison of image restoration results for real retinal images at different eccentricities (* Data are expressed as mean ± standard deviation)

Table 6. Configuration details of the convolutional layers in the proposed generator network

Table 7. Configuration details of the convolutional layers in the proposed discriminator network

Table 8. Configuration details of the convolutional layers in the state-of-the-art AO deblurring model [10]

Equations (16)

Equations on this page are rendered with MathJax.

$$\mathrm{PSF}=\left\lVert \mathrm{FFT}\left\{P(x,y)\exp\left(i\frac{2\pi}{\lambda}\varphi(x,y)\right)\right\}\right\rVert^{2}$$

$$P(x,y)=\begin{cases}1, & x^{2}+y^{2}\leq r^{2}\\ 0, & \text{otherwise}\end{cases}$$

$$\varphi(x,y)=\sum_{i} a_{i} Z_{i}(x,y),$$
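The PSF model above combines a binary circular pupil function P(x,y) with an aberration phase map φ(x,y) built from weighted Zernike polynomials. A minimal numpy sketch of this pipeline is shown below; note that a few toy polynomial terms stand in for the true Zernike basis Z_i, and the wavelength factor 2π/λ is folded into the phase coefficients:

```python
import numpy as np

def pupil_psf(n=64, r_frac=0.45, coeffs=None):
    """Toy PSF simulation: PSF = |FFT{P(x,y) * exp(i * phase)}|^2.

    `coeffs` weights a few simple polynomial phase terms standing in
    for Zernike polynomials Z_i with coefficients a_i.
    """
    y, x = np.mgrid[-1:1:complex(0, n), -1:1:complex(0, n)]
    rho2 = x ** 2 + y ** 2
    pupil = (rho2 <= r_frac ** 2).astype(float)      # binary pupil P(x, y)
    if coeffs is None:
        coeffs = [0.0]
    # stand-in basis: defocus-like rho^2, astigmatism-like x*y and x^2 - y^2
    basis = [rho2, x * y, x ** 2 - y ** 2]
    phi = sum(a * z for a, z in zip(coeffs, basis))  # phase map phi(x, y)
    field = pupil * np.exp(1j * 2 * np.pi * phi)     # complex pupil field
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()                           # normalize total energy
```

With zero coefficients this reduces to the diffraction-limited PSF of the circular aperture; nonzero coefficients broaden and distort it, which is how blurry training images can be synthesized from sharp ones.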
$$\min_{G}\max_{D}\;\mathbb{E}_{I_{B}\sim p(I_{B}),\,z\sim p(z)}\!\left[-D\big(I_{B},G(I_{B},z)\big)\right]+\mathbb{E}_{I_{B}\sim p(I_{B}),\,I_{R}\sim p(I_{R})}\!\left[D(I_{B},I_{R})\right]$$

$$L=L_{GAN}+\alpha L_{pixel}+\beta L_{perceptual},$$

$$L_{GAN}=\sum_{n=1}^{N}-D_{\theta_{D}}\big(G_{\theta_{G}}(I_{B})\big).$$

$$L_{pixel}=\frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\left|(I_{R})_{x,y}-\big(G_{\theta_{G}}(I_{B})\big)_{x,y}\right|,$$

$$L_{perceptual}=\frac{1}{W_{i,j}H_{i,j}}\sum_{x=1}^{W_{i,j}}\sum_{y=1}^{H_{i,j}}\Big(\phi_{i,j}(I_{R})_{x,y}-\phi_{i,j}\big(G_{\theta_{G}}(I_{B})\big)_{x,y}\Big)^{2},$$
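The combined objective L = L_GAN + αL_pixel + βL_perceptual can be sketched numerically as follows. The weights alpha and beta below are illustrative placeholders rather than the values used in the paper, and the VGG feature extractor φ_{i,j} is replaced by precomputed feature arrays:

```python
import numpy as np

def pixel_l1(i_r, i_g):
    """L_pixel: mean absolute error between ground-truth and generated images."""
    return np.mean(np.abs(i_r - i_g))

def perceptual_mse(f_r, f_g):
    """L_perceptual: MSE between feature maps (stand-ins for VGG phi_{i,j})."""
    return np.mean((f_r - f_g) ** 2)

def total_loss(l_gan, i_r, i_g, f_r, f_g, alpha=100.0, beta=10.0):
    """L = L_GAN + alpha * L_pixel + beta * L_perceptual (weights illustrative)."""
    return l_gan + alpha * pixel_l1(i_r, i_g) + beta * perceptual_mse(f_r, f_g)
```

When generated and ground-truth images (and their features) coincide, the pixel and perceptual terms vanish and only the adversarial term remains, which is the behavior the weighted sum is meant to encode.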
$$\mathrm{MSE}=\frac{\sum_{n=1}^{N}(x_{n}-y_{n})^{2}}{N}$$

$$\mathrm{PSNR}=10\times\lg\left(\frac{255^{2}}{\mathrm{MSE}}\right)$$

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_{x}\mu_{y}+c_{1})(2\sigma_{xy}+c_{2})}{(\mu_{x}^{2}+\mu_{y}^{2}+c_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+c_{2})},$$

$$d(x,y)=\sum_{l}\frac{1}{H_{l}W_{l}}\sum_{h,w}\left\lVert w_{l}\odot\big(\hat{x}_{hw}^{\,l}-\hat{y}_{hw}^{\,l}\big)\right\rVert_{2}^{2},$$
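The PSNR and SSIM definitions above translate directly to numpy. The sketch below assumes 8-bit images (hence the 255 peak value) and applies the SSIM formula globally over the whole image; practical implementations such as scikit-image's evaluate it over local windows and average:

```python
import numpy as np

def psnr(x, y):
    """PSNR = 10 * log10(255^2 / MSE) for 8-bit images."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM; c1, c2 are the conventional stabilizing constants."""
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give SSIM of 1 and infinite PSNR; maximally different constant images (0 vs. 255) give a PSNR of 0 dB, which bounds the metric's useful range.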
$$\mathrm{Error}=\frac{\left|\,coneNo-truthNo\,\right|}{truthNo}\times 100\%,$$

$$\mathrm{BIQI}=\sum_{i=1}^{5}p_{i}q_{i}$$

$$\mathrm{Time}\sim O\left(\sum_{l=1}^{D}M_{l}^{2}K_{l}^{2}C_{l}^{in}C_{l}^{out}\right)$$

$$M=\frac{N-K+2P}{S}+1$$
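The output-size relation M = (N − K + 2P)/S + 1, which feeds the M_l terms in the complexity estimate above, can be checked with a small helper. This is a hypothetical utility for verifying the layer configurations in Tables 6–8, not code from the paper:

```python
def conv_out(n, k, p=0, s=1):
    """Output spatial size M = (N - K + 2P) / S + 1 for a conv layer.

    n: input size N, k: kernel size K, p: padding P, s: stride S.
    Raises if the parameters do not yield an integer output size.
    """
    m, rem = divmod(n - k + 2 * p, s)
    if rem != 0:
        raise ValueError("non-integer output size for these parameters")
    return m + 1
```

For example, a 3x3 kernel with padding 1 and stride 1 preserves the input size, while stride 2 roughly halves it.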
