Abstract

We describe a novel method to track targets in a large field of view. This method simultaneously images multiple, encoded sub-fields of view onto a common focal plane. Sub-field encoding enables target tracking by creating a unique connection between target characteristics in superposition space and the target’s true position in real space. This is accomplished without reconstructing a conventional image of the large field of view. Potential encoding schemes include spatial shift, rotation, and magnification. We discuss each of these encoding schemes, but the main emphasis of the paper, and all of its examples, is one-dimensional spatial shift encoding. System performance is evaluated in terms of two criteria: average decoding time and probability of decoding error. We study these performance criteria as a function of the resolution of the encoding scheme and the signal-to-noise ratio. Finally, we include simulation and experimental results demonstrating our tracking method.

©2009 Optical Society of America
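
To make the superposition idea in the abstract concrete, the following minimal Python sketch (ours, not the authors'; the sub-FOV count, sub-FOV width, and target position are assumed example values) folds an un-coded set of sub-fields of view onto a single sensor line and lists every object-space position consistent with the folded measurement, illustrating the positional ambiguity that the sub-field encoding is designed to resolve.

```python
import numpy as np

# Assumed example parameters: 4 sub-fields of view (sub-FOVs), each 100 pixels
# wide, all imaged onto the same 100-pixel sensor line.
N_SUB, W = 4, 100

def superposition(target_x):
    """Fold every sub-FOV onto one W-pixel line (un-coded superposition)."""
    meas = np.zeros(W)
    meas[target_x % W] += 1.0   # every sub-FOV maps its target to the same local pixel
    return meas

def candidate_positions(meas):
    """All object-space positions consistent with the folded measurement."""
    local = np.flatnonzero(meas)
    return [k * W + int(x) for k in range(N_SUB) for x in local]

m = superposition(target_x=257)    # true target sits in sub-FOV 2
print(candidate_positions(m))      # [57, 157, 257, 357]: a 4-fold positional ambiguity
```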


Supplementary Material (3)

» Media 1: AVI (881 KB)     
» Media 2: AVI (1248 KB)     
» Media 3: AVI (3896 KB)     

Figures (14)

Fig. 1. Multiple-lens camera capable of performing both pan/tilt and multiplexed operations.
Fig. 2. Schematic representation of encoding schemes: (a) Spatial shift (two-dimensional case shown), (b) rotation, and (c) magnification.
Fig. 3. Optical setup capable of performing multiplexed operations with spatial shift encoding for (a) N_s = 2 and (b) N_s = 8.
Fig. 4. Binary combiner in a log sequence arrangement for multiplexing 8 sub-FOVs each with an angular range of 7.5°.
Fig. 5. (a) The area of interest (large FOV) for tracking targets along with the reference coordinate system. The large FOV is sub-divided into 4 non-overlapping sub-FOVs in the x-direction. In this un-coded case the object space and the FOV are the same. (b) Superposition space. (c) Hypothesis space: From the superposition space it is not possible to tell which sub-FOV the target belongs to. This ambiguity in target location leads to the hypothesis that the target could be in any of the 4 sub-FOVs.
Fig. 6. (a) A portion of an encoded object space with the target in the region of overlap between two adjacent sub-FOVs, along with its distance from the two boundaries of the overlap. The loss in area coverage due to encoding is also shown. (b) Superposition space with two target copies, or ghosts. The one on the left corresponds to fov_i and the one on the right to fov_{i-1}. Also depicted is the relation between the distance measures l_1 and l_2 in the object space and the separation between the target ghosts in the superposition space.
Fig. 7. Three targets in the object space with the same velocities and y-coordinates, depicting two scenarios that result in the same superposition-space data. (a) Scenario 1: Two targets in the object space, with Target 1 in the non-overlapping region of fov_1 and Target 2 in the region of overlap between fov_1 and fov_2. (b) Scenario 2: A single target (Target 3) in the overlap between the sub-FOVs fov_2, fov_3, and fov_4. (c) The same superposition space arising from the two scenarios. (i) Ghost 1 and Ghost 3 in the superposition space are ghosts of Target 2 in the object space, while Ghost 2 corresponds to Target 1 in the object space. (ii) Ghosts 1 through 3 in the superposition space are ghosts of Target 3 in the object space.
Fig. 8. (Media 1) Movie showing the error in decoding two targets with the same velocity, the same y-coordinate, and a separation that is an element of the set S_2. Correct decoding is possible only when the targets begin to differ in their velocities. (Movie size: 0.9 MB)
Fig. 9. (a) Encoded object space with 4 sub-FOVs. The target is in fov_0; its local fov_0 x-coordinate lies between W_fov − O/2 and W_fov. (b) Superposition space with a single target, indicating that the target is in a non-overlapping sub-FOV region. (c) Hypothesis space with 4 potential targets. The third hypothesized target, however, lies in a sub-FOV overlap, which, if true, would produce ghosts in the superposition space. The absence of these ghosts rules out this hypothesized target as a potential true target (a toy sketch of this pruning appears after the figure list).
Fig. 10. Examples of valid target patterns.
Fig. 11. (Media 2) Movie showing the decoding of 4 targets that appear at random times, for random durations, at random locations. The movie shows the successful decoding of all 4 targets. Each frame shows the encoded object space (δ = δ_max), the corresponding measured superposition space (with the static background subtracted out), and the resulting hypothesis space. (Movie size: 1.25 MB)
Fig. 12. Plot of decoding time as a function of the area coverage efficiency η. Error bars represent ±1 standard deviation of the decoding time about the mean.
Fig. 13. Plot of decoding error versus area coverage efficiency for different SNRs and different values of M, the number of sub-FOVs that overlap.
Fig. 14. Experimental data movie showing the successful decoding of a target moving through the object space. Initial target ambiguity is reduced using the “missing ghosts” logic and is completely removed as the target enters the region of overlap. (Media 3) (Movie size: 3.9 MB)
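
The “missing ghosts” reasoning described in the caption of Fig. 9 can be illustrated with a small toy model. The sketch below is a hypothetical Python rendering of that pruning step, not the authors' implementation: the sub-FOV count, width, shift step, and target position are assumed values, and the linearly growing overlap widths are only one possible realization of spatial shift encoding.

```python
import numpy as np

# Assumed toy parameters: 4 sub-FOVs of width 100 pixels; the overlap of sub-FOV k
# with its left neighbour grows as k * DELTA, so every overlap width is distinct.
N_SUB, W, DELTA = 4, 100, 10
OVERLAP = [0] + [k * DELTA for k in range(1, N_SUB)]
START = np.cumsum([0] + [W - OVERLAP[k] for k in range(1, N_SUB)])  # sub-FOV origins

def superpose(target_x):
    """Binary superposition-space line: one ghost per sub-FOV that sees the target."""
    meas = np.zeros(W, dtype=bool)
    for k in range(N_SUB):
        local = target_x - START[k]
        if 0 <= local < W:
            meas[local] = True
    return meas

def decode(meas):
    """Keep only (sub-FOV, position) hypotheses whose implied ghosts are all present."""
    ghosts = {int(i) for i in np.flatnonzero(meas)}
    valid = []
    for x in ghosts:                                  # observed local coordinate
        for k in range(N_SUB):                        # hypothesized sub-FOV
            g = int(START[k]) + x                     # hypothesized global position
            implied = {g - int(START[j]) for j in range(N_SUB)
                       if 0 <= g - START[j] < W}      # ghosts this hypothesis predicts
            if implied <= ghosts:                     # "missing ghosts" test
                valid.append(g)
    return sorted(set(valid))

m = superpose(target_x=75)     # target in a non-overlapping part of fov_0
print(decode(m))               # [75, 165, 315]: the hypothesis landing in an overlap
                               # (global position 245) is ruled out by its missing ghost
```

In this toy run the measurement contains a single ghost, so four positions are initially hypothesized; the one that would fall inside the fov_2/fov_3 overlap is discarded because its second ghost is absent, mirroring the reduction from four to three hypotheses described in the Fig. 9 caption.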

Equations (6)


\[ A_e = 2A\left(1 - \tan\frac{\phi}{4}\right)\frac{1}{(1-\beta)S}, \]
\[ \eta = 1 - \frac{\alpha (N_s - 2)}{2 N_s}, \]
\[ r = N_s - \frac{\alpha (N_s - 2)}{2}, \]
\[ \eta = \left(1 - \frac{\alpha_x (N_x - 2)}{2 N_x}\right)\left(1 - \frac{\alpha_y (N_y - 2)}{2 N_y}\right), \]
\[ r = \left(N_x - \frac{\alpha_x (N_x - 2)}{2}\right)\left(N_y - \frac{\alpha_y (N_y - 2)}{2}\right), \]
\[ \mathrm{var}[\hat{x}] \ge \frac{1}{\mathrm{SNR}\cdot B_{\mathrm{rms}}}, \]
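
As a quick numerical check of the one-dimensional expressions for η and r above (reading α as the overlap parameter and N_s as the number of sub-FOVs; the example values below are assumed), note that the two quantities satisfy r = N_s·η:

```python
# Assumed example values: alpha is the overlap parameter, n the number of sub-FOVs.
def eta_1d(alpha, n):
    """1-D area coverage efficiency, as in the eta expression above."""
    return 1.0 - alpha * (n - 2) / (2 * n)

def r_1d(alpha, n):
    """1-D r expression above; equals n * eta_1d(alpha, n)."""
    return n - alpha * (n - 2) / 2.0

alpha, n = 0.2, 8
print(eta_1d(alpha, n))   # 0.925
print(r_1d(alpha, n))     # 7.4  (= n * eta, as expected)
```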
