Abstract

This paper proposes an image-based wavefront sensing approach using deep learning that is applicable to both point sources and arbitrary extended scenes, while the training process requires no simulated or real extended scenes. Rather than directly recovering phase information from image-plane intensities, we first extract a special feature in the frequency domain that is independent of the original object and determined only by the phase aberrations (a pair of phase-diversity images is needed in this process). A deep long short-term memory (LSTM) network (a variant of the recurrent neural network) is then introduced to establish an accurate non-linear mapping between the extracted feature image and the phase aberrations. Simulations and an experiment demonstrate the effectiveness and accuracy of the proposed approach. Further discussion demonstrates the superior non-linear fitting capacity of the deep LSTM compared with Resnet 18 (a variant of the convolutional neural network) for the specific problem encountered in this paper. The effect of the incoherency of light on the accuracy of the recovered wavefront phase is also quantitatively discussed. This work will contribute to the application of deep learning to image-based wavefront sensing and high-resolution image reconstruction.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement






Figures (14)

Fig. 1 Illustration of the extracted features for different objects in the presence of the same phase aberrations. (a) and (b) show two extended scenes, the corresponding in-focus and defocused images, and the feature images obtained with Eq. (7). (c) shows a point source, the corresponding in-focus and defocused images, and the extracted feature image. The feature images extracted from the different objects are clearly identical.
Fig. 2 Diagram of the unfolded basic RNN.
Fig. 3 Diagram of the LSTM unit.
Fig. 4 Sketch map of the object-independent wavefront sensing approach using deep LSTM networks. The feature image extracted from a pair of focal-plane images is first decomposed into a series of patches (each patch comprises one row of the feature image). These patches form a sequence, which serves as the input of a deep LSTM network. The output sequence of the deep LSTM network is then flattened into a vector and passed through a fully-connected layer to obtain a set of aberration coefficients.
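The row-by-row sequence framing described in the Fig. 4 caption can be sketched in a few lines of NumPy. This is only an illustrative shape check: a plain tanh RNN cell with random, untrained weights stands in for the trained deep LSTM layers, and the feature-image size (32), hidden size (16), and number of Zernike coefficients (9) are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_h, n_coef = 32, 16, 9   # feature image size, hidden size, Zernike terms (assumed)

feature_image = rng.random((n, n))
sequence = [feature_image[t] for t in range(n)]   # one patch per row, as in Fig. 4

# Minimal recurrent pass (an untrained tanh RNN cell stands in for the LSTM layers).
W_xh = 0.1 * rng.standard_normal((d_h, n))
W_hh = 0.1 * rng.standard_normal((d_h, d_h))
outputs, h = [], np.zeros(d_h)
for x_t in sequence:
    h = np.tanh(W_xh @ x_t + W_hh @ h)
    outputs.append(h)

# The output sequence is concatenated into one vector, and a fully-connected
# layer maps it to the aberration coefficients.
v = np.concatenate(outputs)                       # length n * d_h
W_fc = 0.01 * rng.standard_normal((n_coef, v.size))
coeffs = W_fc @ v
print(coeffs.shape)  # (9,)
```

In the paper the recurrent weights are learned from simulated PSFs; here they only demonstrate how a 2-D feature image becomes a 1-D sequence problem.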
Fig. 5 Application procedure of the object-independent image-based wavefront sensing approach using a deep LSTM network. Although the network is trained using simulated PSFs, it is applicable to unknown extended scenes.
Fig. 6 Distributions of the error between the targets and the actual outputs of the deep LSTM network for the training set (a) and test set (b). The deep LSTM network accurately fits the non-linear mapping between the extracted feature image and the phase aberrations.
Fig. 7 Simulation results of image reconstruction with the recovered aberration coefficients. The resolution of the reconstructed images is effectively improved, which indirectly demonstrates the accuracy of the recovered aberration coefficients.
Fig. 8 Sketch map of the experimental setup. The scattering matter can be any semitransparent solution used to convert the parallel laser light into scattered light.
Fig. 9 Experimental results of image reconstruction with the recovered aberrations. The resolution of the reconstructed images is effectively improved compared with the original aberrated images, which indirectly demonstrates the accuracy of the recovered aberration coefficients.
Fig. 10 Distributions of the error between the targets and the actual outputs of Resnet 18 for the training set (a) and test set (b). Comparing this figure with Fig. 6 shows that the fitting accuracy of Resnet 18 is much lower than that of the deep LSTM for the problem considered in this paper.
Fig. 11 Distributions of the error between the targets and the actual outputs of the deep LSTM for the training set (a) and test set (b) when the 2nd–21st Fringe Zernike coefficients are considered. Comparing this figure with Fig. 6 shows that the fitting accuracy of the deep LSTM is not sensitive to the number of outputs.
Fig. 12 Distributions of the error between the targets and the actual outputs of Resnet 18 for the training set (a) and test set (b) when the 2nd–21st Fringe Zernike coefficients are considered. Comparing this figure with Fig. 11 shows that the fitting accuracy of Resnet 18 is much lower than that of the deep LSTM for the problem considered in this paper. Moreover, its fitting accuracy decreases noticeably as the number of outputs increases.
Fig. 13 Distributions of the error between the targets and the actual outputs of the deep LSTM for the training set (a) and test set (b) when the 2nd–37th Fringe Zernike coefficients are considered. While the accuracy does decrease as the number of aberration coefficients increases, the degradation remains slight for the deep LSTM network.
Fig. 14 Influence of the incoherency of light on the accuracy of the deep LSTM network trained with the coherent model, for four cases with different numbers and ranges of aberration coefficients. (a), (b), (c), and (d) show the mean absolute error of the recovered aberration coefficients for Case 1, Case 2, Case 3, and Case 4, respectively. The accuracy decreases as the spectral bandwidth increases, yet it is nearly unaffected when the bandwidth is comparatively small (<150 nm).

Tables (3)

Table 1 Comparison between the introduced aberration coefficients (A) and the aberration coefficients recovered from simulated aberrated extended scenes (B)

Table 2 Comparison between the aberration coefficients obtained with the deep LSTM network (A) and those recovered using the traditional phase diversity algorithm (B) for the four pairs of images

Table 3 Four cases considered in the simulations for demonstrating the influence of the incoherency of incident light when the deep LSTM network is trained on the coherent model

Equations (10)


(1)  $i = o \ast s$

(2)  $I = O S$

(3)  $\dfrac{I_a}{I_b} = \dfrac{S_a}{S_b}$

(4)  $\dfrac{\mathcal{F}\{ i_a \}}{\mathcal{F}\{ i_b \}} = \dfrac{P_a(\psi) \star P_a(\psi)}{P_b(\psi) \star P_b(\psi)}$

(5)  $P_a(\psi) = p \exp\{ j\psi \}$

(6)  $P_b(\psi) = p \exp\{ j(\psi + \Delta\psi) \}$

(7)  $f = \dfrac{\mathcal{F}\{ i_a \}}{\mathcal{F}\{ i_b \}}$

(8)  $f_0 = \left| \mathcal{F}^{-1}\!\left\{ \dfrac{\mathcal{F}\{ i_a \}}{\mathcal{F}\{ i_b \}} \right\} \right|$

Here $\ast$ denotes convolution, $\mathcal{F}$ the Fourier transform, and $\star$ in Eq. (4) the autocorrelation of the generalized pupil function $P$ (whose autocorrelation gives the OTF $S$).
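The object independence asserted by Eqs. (3) and (7) can be checked numerically: blur two very different objects with the same in-focus/diversity PSF pair, form the frequency-domain ratio, and compare. The snippet below is a minimal NumPy sketch; the grid size, aberration and defocus phases, and the conditioning threshold that skips near-zero spectral bins are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Circular pupil p and phases (radians): psi is the unknown aberration,
# dpsi the known defocus diversity (Eqs. 5-6).
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
pupil = (x**2 + y**2) <= 1.0
psi = 0.5 * (x**2 - y**2) * pupil
dpsi = 2.0 * (x**2 + y**2 - 0.5) * pupil

def psf(phase):
    """Incoherent PSF: squared modulus of the Fourier transform of P = p exp(j*phase)."""
    field = pupil * np.exp(1j * phase)
    p = np.abs(np.fft.fft2(field))**2
    return p / p.sum()

s_a, s_b = psf(psi), psf(psi + dpsi)        # in-focus / diversity PSFs

def blur(obj, s):
    """Image formation i = o * s via the convolution theorem (Eqs. 1-2)."""
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(s)))

def feature(ia, ib):
    """Feature f = F{ia}/F{ib} (Eq. 7), evaluated only on well-conditioned bins."""
    Ia, Ib = np.fft.fft2(ia), np.fft.fft2(ib)
    mask = np.abs(Ib) > 1e-3 * np.abs(Ib).max()
    f = np.zeros_like(Ia)
    f[mask] = Ia[mask] / Ib[mask]
    return f, mask

# Two very different extended objects imaged through the same aberrated system.
obj1 = rng.random((n, n)) + 1.0
obj2 = np.zeros((n, n)); obj2[16:48, 16:48] = 1.0; obj2 += 0.1

f1, m1 = feature(blur(obj1, s_a), blur(obj1, s_b))
f2, m2 = feature(blur(obj2, s_a), blur(obj2, s_b))

m = m1 & m2
print(np.allclose(f1[m], f2[m], rtol=1e-4, atol=1e-6))
```

On the jointly well-conditioned bins the object spectrum $O$ cancels exactly, so the two feature images agree to numerical precision, which is the property that lets the network train on point sources alone.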
(9)  $h_t = H\!\left( W_{xh} x_t + W_{hh} h_{t-1} + b_h \right), \qquad o_t = W_{ho} h_t + b_o$

(10)
$F_t = \sigma\!\left( W_{xF} x_t + W_{hF} h_{t-1} + b_F \right)$
$I_t = \sigma\!\left( W_{xI} x_t + W_{hI} h_{t-1} + b_I \right)$
$O_t = \sigma\!\left( W_{xO} x_t + W_{hO} h_{t-1} + b_O \right)$
$G_t = \tanh\!\left( W_{xG} x_t + W_{hG} h_{t-1} + b_G \right)$
$c_t = c_{t-1} \circ F_t + I_t \circ G_t$
$h_t = \tanh(c_t) \circ O_t$

where $\sigma$ is the logistic sigmoid and $\circ$ denotes the element-wise (Hadamard) product.
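A single LSTM update following Eq. (10) is short enough to write out directly. The NumPy sketch below uses illustrative sizes and random untrained weights; for compactness each gate's weight matrix acts on the concatenation $[x_t; h_{t-1}]$, which is algebraically equivalent to the separate $W_{x\cdot}$ and $W_{h\cdot}$ terms in Eq. (10).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM update per Eq. (10): forget/input/output gates, candidate G,
    then the cell-state and hidden-state updates."""
    z = np.concatenate([x_t, h_prev])     # [x_t; h_{t-1}]
    F = sigmoid(W["F"] @ z + b["F"])      # forget gate F_t
    I = sigmoid(W["I"] @ z + b["I"])      # input gate  I_t
    O = sigmoid(W["O"] @ z + b["O"])      # output gate O_t
    G = np.tanh(W["G"] @ z + b["G"])      # candidate state G_t
    c = c_prev * F + I * G                # c_t = c_{t-1} ∘ F_t + I_t ∘ G_t
    h = np.tanh(c) * O                    # h_t = tanh(c_t) ∘ O_t
    return h, c

# Toy usage: run a short random sequence through the cell.
rng = np.random.default_rng(1)
d_in, d_h = 8, 4
W = {k: 0.1 * rng.standard_normal((d_h, d_in + d_h)) for k in "FIOG"}
b = {k: np.zeros(d_h) for k in "FIOG"}
h, c = np.zeros(d_h), np.zeros(d_h)
for t in range(5):
    h, c = lstm_step(rng.standard_normal(d_in), h, c, W, b)
print(h.shape)  # (4,)
```

The multiplicative forget/input gating of $c_t$ is what lets the cell carry information across the long row sequence of the feature image without the vanishing gradients of the basic RNN in Eq. (9).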
