Abstract

This paper introduces an algorithm for registering the reflection in an eye image with an image of the surrounding scene. Although many image registration algorithms already exist, this task remains difficult because of the nonlinear distortion introduced at the eye surface and the heavy noise contributed by iris texture, eyelids, eyelashes, and their shadows. To overcome these issues, we developed a registration method that combines an aspherical eye model, which simulates the nonlinear distortion based on eye geometry, with a two-step iterative registration strategy that obtains dense feature-point correspondences and thereby achieves accurate registration over the entire image region. We compiled a database of eye reflection and scene images from four subjects in indoor and outdoor scenes and compared registration performance under different asphericity conditions. The results show that the proposed approach, using the aspherical cornea model, performs accurate registration with an average accuracy of 1.05 deg. This work is relevant to eye image analysis in general and enables novel applications and scenarios.
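The aspherical cornea model mentioned above is commonly expressed as a conic surface characterized by an apical radius of curvature and an asphericity parameter Q (Q = 0 yields a sphere; Q < 0 a prolate ellipsoid that flattens toward the periphery). The minimal sketch below illustrates how asphericity changes the surface height, and hence the reflection geometry, away from the corneal apex. The function name and the values R = 7.8 mm and Q = −0.26 are illustrative textbook figures for the anterior cornea, not parameters taken from this paper.

```python
import math

def corneal_sag(r, R=7.8, Q=-0.26):
    """Sagitta (surface height, mm) of a conic cornea model at radial
    distance r (mm) from the apex.

    R: apical radius of curvature; Q: asphericity.
    Q = 0 gives a sphere, Q < 0 a prolate (peripherally flattening) surface.
    """
    return r * r / (R * (1.0 + math.sqrt(1.0 - (1.0 + Q) * r * r / (R * R))))

# The spherical and aspherical surfaces agree at the apex but diverge
# toward the periphery, which is where reflection distortion differs most.
for r in (0.0, 2.0, 4.0):
    print(f"r={r:.1f} mm  sphere={corneal_sag(r, Q=0.0):.4f}"
          f"  prolate={corneal_sag(r, Q=-0.26):.4f}")
```

Because the two surfaces deviate only peripherally, a registration method that assumes a purely spherical cornea can still align the central reflection well while accumulating error toward the limbus, which motivates modeling the asphericity explicitly.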

© 2016 Optical Society of America



[Crossref]

Wang, H.

H. Wang, S. Lin, X. Liu, and S. B. Kang, “Separating reflections in human iris images for illumination estimation,” in Proceedings of International Conference on Computer Vision (ICCV) (2005), Vol. 2, pp. 1691–1698.

Wang, Y.

L. Ma, T. Tan, S. Member, Y. Wang, and D. Zhang, “Personal identification based on iris texture analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 1519–1533 (2003).
[Crossref]

Welk, M.

M. Backes, T. Chen, M. Dürmuth, H. P. A. Lensch, and M. Welk, “Tempest in a teapot: Compromising reflections revisited,” in Proceedings of IEEE Symposium on Security and Privacy (SP) (2009), pp. 315–327.

Wilson, C.

M. Guillon, D. P. Lydon, and C. Wilson, “Corneal topography: a clinical model,” Ophthal. Physiol. Opt. 6, 47–56 (1986).
[Crossref]

Winfield, D.

D. Li, D. Winfield, and D. J. Parkhurst, “Starburst: a hybrid algorithm for video-based eye tracking combining feature-based and model-based approaches,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR)-Workshops (IEEE, 2005), pp. 79.

Wood, E.

E. Wood, T. Baltrušaitis, X. Zhang, Y. Sugano, P. Robinson, and A. Bulling, “Rendering of eyes for eye-shape registration and gaze estimation,” in 2015 IEEE International Conference on Computer Vision (ICCV) (IEEE, 2015), pp. 3756–3764.

Ying, J.

J. Ying, B. Wang, and M. Shi, “Anterior corneal asphericity calculated by the tangential radius of curvature,” J. Biomed. Opt. 17, 0750051 (2012).
[Crossref]

Yuen, J.

C. Liu, J. Yuen, A. Torralba, J. Sivic, and W. T. Freeman, “SIFT flow: dense correspondence across different scenes,” in Proceedings of European Conference on Computer Vision (ECCV) (Springer-Verlag, 2008), pp. 28–42.

Zhang, D.

L. Ma, T. Tan, S. Member, Y. Wang, and D. Zhang, “Personal identification based on iris texture analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 1519–1533 (2003).
[Crossref]

Zhang, X.

E. Wood, T. Baltruaitis, X. Zhang, Y. Sugano, P. Robinson, and A. Bulling, “Rendering of eyes for eye-shape registration and gaze estimation,” in 2015 IEEE International Conference on Computer Vision (ICCV) (IEEE, 2015), pp. 3756–3764.

Zhang, Z.

Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000).
[Crossref]

Zheng, B.

K. Fujiwara, K. Nishino, J. Takamatsu, B. Zheng, and K. Ikeuchi, “Locally rigid globally non-rigid surface registration,” in Proceedings of IEEE International Conference on Computer Vision (ICCV) (2011), pp. 1527–1534.

Biomed. Opt. Express (1)

Commun. ACM (1)

M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM 24, 381–395 (1981).
[Crossref]

Comput. Vis. Image Underst. (1)

K. W. Bowyer, K. Hollingsworth, and P. J. Flynn, “Image understanding for iris biometrics: a survey,” Comput. Vis. Image Underst. 110, 281–307 (2008).
[Crossref]

Found. Trends Comput. Graph. Vis. (1)

P. Sturm, S. Ramalingam, J.-P. Tardif, S. Gasparini, and J. Barreto, “Camera models and fundamental concepts used in geometric computer vision,” Found. Trends Comput. Graph. Vis. 6, 1–183 (2011).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (5)

Z. He, T. Tan, Z. Sun, and X. Qiu, “Toward accurate and fast iris segmentation for iris biometrics,” IEEE Trans. Pattern Anal. Mach. Intell. 31, 1670–1684 (2009).
[Crossref]

L. Ma, T. Tan, S. Member, Y. Wang, and D. Zhang, “Personal identification based on iris texture analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 1519–1533 (2003).
[Crossref]

J. G. Daugman, “High confidence visual recognition of persons by a test of statistical independence,” IEEE Trans. Pattern Anal. Mach. Intell. 15, 1148–1161 (1993).
[Crossref]

J. J. McAuley, T. S. Caetano, and M. S. Barbosa, “Graph rigidity, cyclic belief propagation, and point pattern matching,” IEEE Trans. Pattern Anal. Mach. Intell. 30, 2047–2054 (2008).
[Crossref]

Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000).
[Crossref]

Image Vis. Comput. (1)

T. Tan, Z. He, and Z. Sun, “Efficient and robust segmentation of noisy iris images for non-cooperative iris recognition,” Image Vis. Comput. 28, 223–230 (2010).
[Crossref]

Int. J. Comput. Sci. Appl. (1)

M. V. S. Sakharkar and S. Gupta, “Image stitching techniques-an overview,” Int. J. Comput. Sci. Appl. 6, 324–330 (2013).

Int. J. Comput. Vis. (2)

K. Nishino and S. K. Nayar, “Corneal imaging system: environment from eyes,” Int. J. Comput. Vis. 70, 23–40 (2006).
[Crossref]

D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis. 60, 91–110 (2004).

IPSJ Trans. Comput. Vis. Appl. (1)

C. Nitschke, A. Nakazawa, and H. Takemura, “Corneal imaging revisited: an overview of corneal reflection analysis and applications,” IPSJ Trans. Comput. Vis. Appl. 5, 1–18 (2013).
[Crossref]

J. Biomed. Opt. (1)

J. Ying, B. Wang, and M. Shi, “Anterior corneal asphericity calculated by the tangential radius of curvature,” J. Biomed. Opt. 17, 0750051 (2012).
[Crossref]

J. Field Rob. (1)

J. Civera, O. G. Grasa, A. J. Davison, and J. Montiel, “1-point RANSAC for extended Kalman filtering: application to real-time structure from motion and visual odometry,” J. Field Rob. 27, 609–631 (2010).
[Crossref]

J. Opt. Soc. Am. A (1)

J. WSCG (1)

A. Hast and J. Nysjö, “Optimal RANSAC-towards a repeatable algorithm for finding the optimal set,” J. WSCG 21, 21–30 (2013).

Mach. Vis. Appl. (1)

D. Nistér, “Preemptive RANSAC for live structure and motion estimation,” Mach. Vis. Appl. 16, 321–329 (2005).
[Crossref]

Ophthal. Physiol. Opt. (1)

M. Guillon, D. P. Lydon, and C. Wilson, “Corneal topography: a clinical model,” Ophthal. Physiol. Opt. 6, 47–56 (1986).
[Crossref]

Other (21)

A. Nakazawa, C. Nitschke, and T. Nishida, “Non-calibrated and real-time human view estimation using a mobile corneal imaging camera,” in International Conference on Multimedia & Expo Workshops (ICMEW) (IEEE, 2015), pp. 1–6.

O. Chum and J. Matas, “Matching with PROSAC-progressive sample consensus,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2005), Vol. 1, pp. 220–226.

E. Ask, O. Enqvist, and F. Kahl, “Optimal geometric fitting under the truncated l2-norm,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2013), pp. 1722–1729.

E. Ask, O. Enqvist, L. Svarm, F. Kahl, and G. Lippolis, “Tractable and reliable registration of 2d point sets,” in Computer Vision—ECCV 2014, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, eds., Vol. 8689 of Lecture Notes in Computer Science (Springer, 2014), pp. 393–406.

A. Nakazawa, “Noise stable image registration using random resample consensus,” in 2016 International Conference on Pattern Recognition (ICPR) (IAPR, 2016).

B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in International Joint Conference on Artificial Intelligence (1981), Vol. 81, pp. 674–679.

D. Li, D. Winfield, and D. J. Parkhurst, “Starburst: a hybrid algorithm for video-based eye tracking combining feature-based and model-based approaches,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR)-Workshops (IEEE, 2005), pp. 79.

E. Wood, T. Baltrušaitis, X. Zhang, Y. Sugano, P. Robinson, and A. Bulling, “Rendering of eyes for eye-shape registration and gaze estimation,” in 2015 IEEE International Conference on Computer Vision (ICCV) (IEEE, 2015), pp. 3756–3764.

D. Scaramuzza, F. Fraundorfer, and R. Siegwart, “Real-time monocular visual odometry for on-road vehicles with 1-point RANSAC,” in IEEE International Conference on Robotics and Automation (ICRA) (IEEE, 2009), pp. 4293–4299.

J. J. Moré, “The Levenberg-Marquardt algorithm: implementation and theory,” in Numerical Analysis (Springer, 1978), pp. 105–116.

H. Bay, T. Tuytelaars, and L. V. Gool, “SURF: speeded up robust features,” in Proceedings of European Conference on Computer Vision (ECCV), A. Leonardis, H. Bischof, and A. Pinz, eds., Vol. 3951 in Lecture Notes in Computer Science (Springer, 2006), pp. 404–417.

J. Matas, O. Chum, M. Urban, and T. Pajdla, “Robust wide baseline stereo from maximally stable extremal regions,” in Proceedings of British Machine Vision Conference (BMVC) (BMVA, 2002), pp. 36.1–36.10.

K. Fujiwara, K. Nishino, J. Takamatsu, B. Zheng, and K. Ikeuchi, “Locally rigid globally non-rigid surface registration,” in Proceedings of IEEE International Conference on Computer Vision (ICCV) (2011), pp. 1527–1534.

C. Liu, J. Yuen, A. Torralba, J. Sivic, and W. T. Freeman, “SIFT flow: dense correspondence across different scenes,” in Proceedings of European Conference on Computer Vision (ECCV) (Springer-Verlag, 2008), pp. 28–42.

H. Wang, S. Lin, X. Liu, and S. B. Kang, “Separating reflections in human iris images for illumination estimation,” in Proceedings of International Conference on Computer Vision (ICCV) (2005), Vol. 2, pp. 1691–1698.

M. Backes, T. Chen, M. Dürmuth, H. P. A. Lensch, and M. Welk, “Tempest in a teapot: Compromising reflections revisited,” in Proceedings of IEEE Symposium on Security and Privacy (SP) (2009), pp. 315–327.

A. Nakazawa and C. Nitschke, “Point of gaze estimation through corneal surface reflection in an active illumination environment,” in Proceedings of European Conference on Computer Vision (ECCV) (Springer-Verlag, 2012), pp. 159–172.

K. Nishino, P. N. Belhumeur, and S. K. Nayar, “Using eye reflections for face recognition under varying illumination,” in Proceedings of IEEE International Conference on Computer Vision (ICCV) (2005), pp. 519–526.

C. Nitschke and A. Nakazawa, “Super-resolution from corneal images,” in Proceedings of British Machine Vision Conference (BMVC) (BMVA, 2012), pp. 22.1–22.12.

H.-Y. Shum and R. Szeliski, “Construction of panoramic image mosaics with global and local alignment,” in Panoramic Vision (Springer, 2001), pp. 227–268.

S. Lovegrove and A. J. Davison, “Real-time spherical mosaicing using whole image alignment,” in Computer Vision—ECCV 2010 (Springer, 2010), pp. 73–86.

Figures (14)

Fig. 1. Overview of the algorithm.
Fig. 2. (a) Cross section of the human eye and (b) a geometric eye model with two overlapping surfaces for the eyeball and cornea. (c) Spherical (Q = 0) and aspherical shapes (Q = −0.1 to −0.5) for the geometric corneal model. The asphericity of the corneal surface is Q = −0.33 ± 0.1.
Fig. 3. 3D eye pose estimation from the projected limbus (iris contour).
Fig. 4. Light ray E(u) is reflected at the aspherical corneal surface point U and reaches the image point u.
Fig. 5. Relation between the eye reflection and scene images and their EMs.
Fig. 6. Initial registration algorithm using RANRESAC.
Fig. 7. Corneal imaging camera.
Fig. 8. Experimental results. In the original scene images, green crosses mark the key points (ground-truth points) used for evaluating registration errors. Yellow crosses and yellow lines in the resulting images indicate the corresponding key points and their errors with respect to the ground-truth points.
Fig. 9. Sensitivity of the warping function with respect to lens and asphericity. (a) Result using the corneal imaging camera; (b) result using an SLR. In both cases, the left plots show the projections of grid points in an eye image onto the scene image, where red, green, and blue markers indicate the projections for Q = −0.4, −0.2, and 0.0, respectively. The right graphs visualize the relation between the angle from the projection center and the angular difference of the projected points among the different asphericities. The difference increases with the angle from the projection center.
Fig. 10. Residual errors after the initial and fine registrations. The fine registration decreases the errors at every asphericity.
Fig. 11. Residual errors with respect to image position (Q = −0.4). The x-axis shows the angle from the image center and the y-axis the residual error. Red crosses show the result of the initial registration; green asterisks show the result after the fine registration.
Fig. 12. PoG estimation using the proposed approach.
Fig. 13. PoG estimation result in an outdoor city scene. Green squares show PoGs and red lines show the gaze trajectories.
Fig. 14. Peripheral vision estimation results for two scenes. The pictures show the viewing angles overlaid on an eye reflection image (left) and a scene image (right). The center of the circular contours marks the PoG at 0 deg, from which contours are drawn at 10 deg increments.

Tables (3)

Table 1. Corneal Reflection and Scene Dataset

Table 2. Experimental Results (Accuracy)

Table 3. Experimental Results (Registration Robustness), Showing the Number of Frames Where the Registration Succeeds

Equations (12)

$$X^2 + Y^2 + pZ^2 = r_0^2, \qquad p = 1 + Q. \tag{1}$$

$$A_e(\mathbf{u}) = \frac{K_e^{-1}\mathbf{u}}{\left\| K_e^{-1}\mathbf{u} \right\|}, \tag{2}$$

$$T_{CI} = \begin{bmatrix} R_{CI} & C \\ \mathbf{0} & 1 \end{bmatrix}, \qquad R_{CI} = R_z(\varphi)\, R_x(\tau), \qquad T_{IC} = T_{CI}^{-1}, \qquad R_{IC} = R_{CI}^{-1}, \tag{3}$$

$$U = t_1\, R_{IC} A_e(\mathbf{u}) + T_{IC} \left[\, 0 \;\; 0 \;\; 0 \;\; 1 \,\right]^T. \tag{4}$$
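As a numerical illustration of Eqs. (1), (2), and (4), the sketch below back-projects a pixel into a unit viewing ray and intersects that ray with the aspherical surface. It assumes numpy; the radius r0 = 7.7 mm and asphericity Q = −0.33 are illustrative corneal-model values, and the helper names are hypothetical, not from the paper:

```python
import numpy as np

def backproject(u, K):
    """Eq. (2): unit viewing ray A_e(u) for pixel u under intrinsics K."""
    ray = np.linalg.inv(K) @ np.array([u[0], u[1], 1.0])
    return ray / np.linalg.norm(ray)

def asphere_intersect(o, d, r0=7.7, Q=-0.33):
    """Nearest t > 0 with o + t*d on the surface X^2 + Y^2 + p*Z^2 = r0^2,
    p = 1 + Q (Eq. (1)); returns the surface point U as in Eq. (4)."""
    p = 1.0 + Q
    w = np.array([1.0, 1.0, p])                 # quadric weights (1, 1, p)
    a = np.sum(w * d * d)
    b = 2.0 * np.sum(w * o * d)
    c = np.sum(w * o * o) - r0 ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                             # ray misses the surface
    roots = [(-b - np.sqrt(disc)) / (2.0 * a),
             (-b + np.sqrt(disc)) / (2.0 * a)]
    ts = [t for t in roots if t > 0.0]
    return None if not ts else o + min(ts) * d
```

Firing a ray down the optical axis from Z = 100 lands on the corneal apex at Z = r0/√(1 + Q) ≈ 9.41 for these values, confirming that a negative Q stretches the apex outward relative to a sphere.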
$$E(\mathbf{u}) = -R_{IC} A_e(\mathbf{u}) + 2 \bigl( R_{IC} A_e(\mathbf{u}) \cdot N(U) \bigr) N(U), \qquad N(U) = \frac{\left[\, X_u \;\; Y_u \;\; pZ_u \,\right]^T}{\sqrt{X_u^2 + Y_u^2 + p^2 Z_u^2}}, \tag{5}$$

$$E(\mathbf{u}) = R\, A_s(\mathbf{v}), \qquad A_s(\mathbf{v}) = \frac{K_s^{-1} \left[\, \mathbf{v}^T \;\; 1 \,\right]^T}{\left\| K_s^{-1} \left[\, \mathbf{v}^T \;\; 1 \,\right]^T \right\|}, \tag{6}$$

$$R_i = \left[\, \hat{E}_x \;\; \hat{E}_y \;\; \hat{E}_z \,\right] \left[\, \hat{A}_x \;\; \hat{A}_y \;\; \hat{A}_z \,\right]^{-1}, \tag{7}$$

$$\begin{aligned} E_x &= E(\mathbf{u}_i), \qquad E_y = E(\mathbf{u}_i) \times \bigl( E(\mathbf{u}_i) \times E'(\mathbf{u}_i, \theta_i^u) \bigr), \qquad E_z = E(\mathbf{u}_i) \times E'(\mathbf{u}_i, \theta_i^u), \\ E'(\mathbf{u}, \theta^u) &= E\bigl(\mathbf{u} + h(\theta^u)\bigr) - E(\mathbf{u}), \\ A_x &= A_s(\mathbf{v}_i), \qquad A_y = A_s(\mathbf{v}_i) \times \bigl( A_s(\mathbf{v}_i) \times A_s'(\mathbf{v}_i, \theta_i^v) \bigr), \qquad A_z = A_s(\mathbf{v}_i) \times A_s'(\mathbf{v}_i, \theta_i^v), \\ A_s'(\mathbf{v}, \theta^v) &= A_s\bigl(\mathbf{v} + h(\theta^v)\bigr) - A_s(\mathbf{v}), \qquad h(\theta) = \left[\, \cos\theta \;\; \sin\theta \,\right]^T. \end{aligned} \tag{8}$$

$$W_i(\mathbf{u}) \simeq \frac{\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} K_s R_i^{-1} E(\mathbf{u})}{\left[\, 0 \;\; 0 \;\; 1 \,\right] K_s R_i^{-1} E(\mathbf{u})}. \tag{9}$$
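A minimal numpy sketch of the specular reflection in Eq. (5) and the perspective re-projection in Eq. (9). Here d is the viewing ray in eye coordinates and n the outward surface normal; treating the reflected vector as the propagation direction of the incident light is an assumption of this sketch, and the function names are hypothetical:

```python
import numpy as np

def surface_normal(U, Q=-0.33):
    """Outward unit normal of X^2 + Y^2 + p*Z^2 = r0^2 at surface point U,
    as in Eq. (5): proportional to (X, Y, p*Z)."""
    p = 1.0 + Q
    n = np.array([U[0], U[1], p * U[2]])
    return n / np.linalg.norm(n)

def reflect(d, n):
    """Mirror the viewing ray d about the unit normal n; the result is the
    propagation direction of the incident light (Eq. (5) pattern)."""
    return -d + 2.0 * np.dot(d, n) * n

def warp_to_scene(E, Ks, R):
    """Eq. (9): pinhole projection of ray E into the scene image, taking the
    first two rows over the third (perspective divide)."""
    x = Ks @ R.T @ E            # R^{-1} = R^T for a rotation matrix
    return x[:2] / x[2]
```

Since a mirror reflection is orthogonal, `reflect` preserves the ray's length, which the test below checks alongside a head-on reflection at the apex.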
$$P(W_i) = P_t(W_i) \cdot P_o(W_i) \propto \exp\!\left( -\frac{\sum_{j=1}^{K} \left\| F(\mathbf{u}_j^*) - F(\mathbf{v}_j^*) \right\|^2}{2\sigma_t^2} \right) \exp\!\left( -\frac{\alpha \sum_{j=1}^{K} \bigl( 1 - \langle \hat{E}'(\mathbf{u}_j^*, \theta_{u_j^*}),\, \hat{A}_s'(\mathbf{v}_j^*, \theta_{v_j^*}) \rangle \bigr)^2}{2\sigma_o^2} \right), \qquad \mathbf{v}_j^* = W_i(\mathbf{u}_j^*), \tag{10}$$

$$i^* = \operatorname*{argmax}_i \bigl( P(W_i) \bigr), \tag{11}$$

$$s_i^v = \frac{1}{2\sqrt{2}} \left\| W_i\bigl( \mathbf{u} + [\, 1 \;\; 1 \,]^T \bigr) - W_i\bigl( \mathbf{u} - [\, 1 \;\; 1 \,]^T \bigr) \right\| \cdot s_i^u. \tag{12}$$
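The hypothesis scoring of Eqs. (10) and (11) can be sketched as a log-likelihood combining a texture term (descriptor distances over the K sampled correspondences) and an orientation term (agreement of local gradient directions). The values of σ_t, σ_o, and α below are illustrative placeholders, not values taken from the paper, and the descriptor F is left abstract:

```python
import numpy as np

def ranresac_loglik(F_u, F_v, cos_ori, sigma_t=0.2, sigma_o=0.2, alpha=1.0):
    """log P(W_i) = log P_t + log P_o for one warp hypothesis (Eq. (10)).
    F_u, F_v: (K, D) descriptor arrays at the correspondences;
    cos_ori:  (K,) cosine of the local orientation difference per pair."""
    log_pt = -np.sum((F_u - F_v) ** 2) / (2.0 * sigma_t ** 2)
    log_po = -alpha * np.sum((1.0 - cos_ori) ** 2) / (2.0 * sigma_o ** 2)
    return log_pt + log_po

def best_hypothesis(logliks):
    """Eq. (11): keep the warp hypothesis with maximum likelihood."""
    return int(np.argmax(logliks))
```

A perfect hypothesis (identical descriptors, aligned orientations) scores 0, and any mismatch only lowers the log-likelihood, so `best_hypothesis` implements the argmax of Eq. (11) directly on the scores.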
