Abstract

We present a novel approach to foveated imaging based on dual-aperture optics that superimpose two images on a single sensor, attaining a pronounced foveal function with reduced optical complexity. Each image captures the scene at a different magnification, so the system simultaneously records a wide field of view and high acuity in a central region. This approach enables arbitrary magnification ratios with a relatively simple system, which would be impossible using conventional optical design, and is important in applications where the cost per pixel is high. The acquired superimposed image can be processed to perform enhanced object tracking and recognition over a wider field of view and at increased angular resolution for a given limited pixel count. Alternatively, image reconstruction can be used to separate the image components, enabling the reconstruction of a foveated image for display. We demonstrate these concepts through ray-tracing simulation of practical optical systems with computational recovery.
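The dual-magnification superposition described above can be illustrated with a toy numerical model. The function below is a hypothetical sketch, not the paper's implementation: it assumes a square scene, an equal 50/50 beam-splitter weighting, and simple box-filter binning in place of real optics.

```python
import numpy as np

def superimpose(scene, mag_ratio=3, sensor=64):
    """Toy model of dual-aperture superposition on one sensor.

    The wide channel bins the whole (square) scene down to the sensor
    grid; the narrow channel images only the central 1/mag_ratio of the
    scene, so the fovea is sampled mag_ratio times more finely. The beam
    splitter is modelled as an equal-weight sum (assumed 50/50 split).
    """
    h, w = scene.shape
    fw = h // sensor                      # wide-channel binning factor
    wide = scene[:sensor * fw, :sensor * fw] \
        .reshape(sensor, fw, sensor, fw).mean(axis=(1, 3))
    ch = h // mag_ratio                   # side of the central crop seen by the narrow channel
    y0 = (h - ch) // 2
    crop = scene[y0:y0 + ch, y0:y0 + ch]  # square scene assumed
    fn = ch // sensor                     # narrow-channel binning factor
    narrow = crop[:sensor * fn, :sensor * fn] \
        .reshape(sensor, fn, sensor, fn).mean(axis=(1, 3))
    detector = 0.5 * wide + 0.5 * narrow  # superimposed detection
    return detector, wide, narrow
```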

© 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement




Supplementary Material (4)

» Visualization 1: Video sequence through a superimposed multi-resolution imaging system.
» Visualization 2: Video sequence through a superimposed multi-resolution imaging system, and computational separation of image components.
» Visualization 3: Objects moving within the field of view of a superimposed multi-resolution imaging system.
» Visualization 4: Object tracking in a superimposed multi-resolution imaging system.



Figures (9)

Fig. 1 Optical design of the proposed system. The design integrates narrow-FOV (upper) and wide-FOV (lower) channels that are superimposed on the same sensor. Each optical channel consists of three germanium lenses. The narrow-FOV channel is folded by a mirror inserted after the first lens element, and a semireflective beam splitter combines both images onto the detector.
Fig. 2 Geometrical distortion at the detector plane for (a) the wide-FOV channel and (b) the narrow-FOV channel; and (c) radial distortion plot. In (a) and (b) the thick black rectangle denotes the area of the sensor.
Fig. 3 Field-dependent relative illumination for both the wide-FOV and narrow-FOV channels. Values are plotted as a function of radial field, measured in pixels from the center of the detector array. The dotted lines denote the edge of the detector vertically, horizontally and diagonally.
Fig. 4 Polychromatic MTFs for (a) the wide-FOV channel and (b) the narrow-FOV channel. In each graph the tangential (T) and sagittal (S) MTFs are plotted for the diffraction-limited, on-axis and off-axis (at the vertical edge of the detector) cases. The on-axis and off-axis fields correspond to the blue and red fields traced in Fig. 1.
Fig. 5 Simulation pipeline of the image formation and acquisition. Post-detection image processing is also indicated, with reference to Section 4 and Section 5.
Fig. 6 Ground-truth scenes imaged by (a,d) the wide-FOV channel and (b,e) the narrow-FOV channel, and (c,f) simulation of the superimposed detection. See Visualization 1 for a video sequence of the scene in (a)–(c).
Fig. 7 Image recovery results (separation of the wide-FOV and narrow-FOV components) for the example frames shown in Fig. 6(c) and Fig. 6(d). The first row shows ground-truth data, the second row the output of the pixel-recursive algorithm, the third row the system-matrix-based algorithm performing the perturbed Lucy-Richardson recovery, and the fourth row the results of the sharpness transfer. The latter two transfer detail to the narrow-FOV reconstruction, as can be appreciated in the close-up views. See Visualization 2 for a video sequence of the separation results for the scene in (a), employing the perturbed Lucy-Richardson approach.
Fig. 8 Appraisal of object tracking and recognition. In the frame, three objects (cars) are within the wide-FOV view, and two move horizontally, eventually entering the narrow-FOV view (see Visualization 3). A template-matching algorithm based on the sum of absolute differences is used to track the cars. The metric score is plotted over the image in (a), where the peaks identify car locations (see Visualization 4 for a view of the metric score over the sequence). Peaks in the metric score are used to track the cars and are labeled by the dashed-line squares in (b) in blue, red and yellow. Close-up views of the red- and blue-labeled cars are reproduced in (c). The white dotted-line rectangle in (b) shows the area of the narrow-FOV view. The locations of the cars identified in blue and red are also plotted in the narrow-FOV view with the continuous-line squares in (b), and are further reproduced in (d), where it can be appreciated how the letters 'A' and 'B' on the side of each car can be recognized thanks to the higher angular resolution of the narrow-FOV channel, in contrast to the close-up views in (c).
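The sum-of-absolute-differences template matching used for the tracking in Fig. 8 can be sketched as a minimal brute-force search; the function and argument names below are illustrative, not from the paper.

```python
import numpy as np

def sad_map(image, template):
    """Sum-of-absolute-differences score at every valid offset.

    Lower scores indicate better matches; the tracker locates objects
    at the minima (a score plotted inverted would show matches as the
    peaks described in Fig. 8).
    """
    ih, iw = image.shape
    th, tw = template.shape
    out = np.empty((ih - th + 1, iw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # L1 distance between the template and the image patch at (y, x)
            out[y, x] = np.abs(image[y:y + th, x:x + tw] - template).sum()
    return out
```

In practice the same metric is usually computed with an optimized library routine rather than explicit Python loops; the loops here only make the definition explicit.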
Fig. 9 Illustration of high-frequency detail content. For the synthetic image (a) there is no overlap of the edges and the information of both images is preserved completely in the superimposed image; for the natural image (b) there is some overlap, but statistically the edges are not superimposed. The traces underneath show the density of edge pixels (using Sobel edge detection).
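The edge-pixel density shown under Fig. 9 can be estimated with Sobel gradients. The snippet below is a simple stand-in, computed per image column; the threshold value is an arbitrary assumption, not a parameter from the paper.

```python
import numpy as np

def edge_density_per_column(img, thresh=1.0):
    """Fraction of edge pixels in each (valid) image column.

    Correlates the image with the two 3x3 Sobel kernels, thresholds the
    gradient magnitude, and averages the resulting binary edge map down
    each column.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                   # vertical gradient
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)          # gradient magnitude
    return (mag > thresh).mean(axis=0)
```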

Tables (1)


Table 1 Specifications of the dual-FOV imaging system.

Equations (10)


(1) $I_D(x,y) = r_W(x,y)\,I_W(x,y) + r_N(x,y)\,I_N(x,y)$

(2) $I_W(x_1,y_1) = I_N(x_0,y_0), \quad I_W(x_2,y_2) = I_N(x_1,y_1), \quad I_W(x_3,y_3) = \cdots$

(3) $I_W(i) = \dfrac{I_D(i) - \frac{r_N(i)}{r_W(i+1)}\,I_W(i+1)}{r_W(i)}$

(4) $y = D(P_W + P_N)\,x + e$

(5) $x_{n+1} = \operatorname{diag}(x_n)\,(DP_W)^{\mathsf{T}}\left(\operatorname{diag}\!\left(D(P_W + P_N)\,x_n\right)\right)^{-1} y$

(6) $\hat{x}_W = DP_W\,\hat{x}$

(7) $\hat{x}_N = DP_N\,\hat{x}$

(8) $x_n \leftarrow x_n + P_N^{\mathrm{inv}}\,P_W\,x_n$

(9) $\tilde{x}_W = \mathcal{G}_\sigma[\hat{x}_W]$

(10) $\tilde{x}_N = 2y - \tilde{x}_W$
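As a sketch of the multiplicative (perturbed) Lucy-Richardson update listed above, the function below uses small dense matrices standing in for the system matrices $DP_W$ and $DP_N$; the names, sizes and iteration count are illustrative only, since the paper's system matrices are large and image-sized.

```python
import numpy as np

def perturbed_rl(DPW, DPN, y, n_iter=100):
    """Multiplicative update x <- diag(x) (DPW)^T diag((DPW + DPN) x)^{-1} y.

    DPW and DPN are dense stand-ins for the detector-sampled projection
    matrices of the wide and narrow channels; y is the superimposed
    detector image flattened to a vector.
    """
    A = DPW + DPN                        # forward model of the superposition
    x = np.ones(A.shape[1])              # nonnegative initial estimate
    for _ in range(n_iter):
        # ratio of measured to predicted data, backprojected through the
        # wide channel only (the "perturbed" aspect of the update)
        x = x * (DPW.T @ (y / (A @ x)))
    return x
```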
