Abstract

We present a multi-focus image fusion method based on dictionary learning with a rolling guidance filter, applicable to multi-focus images both with and without accurate registration. First, we learn a dictionary from several classical multi-focus images blurred by a rolling guidance filter. Next, we present a model for focus-region identification that applies the learned dictionary to the input images to obtain corresponding focus feature maps. We then determine an initial decision map by comparing the differences between the focus feature maps. Finally, the initial decision map is optimized and applied to the input images to obtain the fused image. Experimental results demonstrate that the proposed algorithm is competitive with the current state of the art and superior to several representative methods, whether the input images are well registered or mis-registered.
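To make the pipeline above concrete, the sketch below implements its two key ingredients in Python: the rolling guidance filter of Zhang et al. (a Gaussian small-structure removal step followed by iterative joint bilateral filtering), and a patch-wise focus feature map computed as the l1 norm of OMP sparse codes, from which a block-level decision map is taken by comparison. This is a minimal single-channel illustration, not the authors' exact model: the random unit-norm dictionary stands in for a learned K-SVD dictionary, and the parameter names (`sigma_s`, `sigma_r`, `patch`, `k`) and the l1-norm focus measure are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import SparseCoder

def joint_bilateral(src, guide, sigma_s, sigma_r, radius):
    """Joint bilateral filter: range weights come from `guide`, values from `src`."""
    H, W = src.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    src_p = np.pad(src, radius, mode='reflect')
    gui_p = np.pad(guide, radius, mode='reflect')
    out = np.empty_like(src)
    for i in range(H):
        for j in range(W):
            patch = src_p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            gpatch = gui_p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(gpatch - guide[i, j])**2 / (2 * sigma_r**2))
            w = spatial * rng_w
            out[i, j] = (w * patch).sum() / w.sum()
    return out

def rolling_guidance_filter(img, sigma_s=2.0, sigma_r=0.1, iters=3):
    """Rolling guidance filter [28]: Gaussian blur removes small structures,
    then repeated joint bilateral filtering (previous result as guidance)
    recovers the large-scale edges."""
    guide = gaussian_filter(img, sigma_s)
    for _ in range(iters):
        guide = joint_bilateral(img, guide, sigma_s, sigma_r, int(3 * sigma_s))
    return guide

def focus_feature_map(img, D, patch=8, k=5):
    """l1 norm of OMP sparse codes per non-overlapping patch, used here as a
    stand-in focus measure (the paper's exact feature model may differ)."""
    H, W = img.shape
    X = np.array([img[i:i + patch, j:j + patch].ravel()
                  for i in range(0, H - patch + 1, patch)
                  for j in range(0, W - patch + 1, patch)])
    X = X - X.mean(axis=1, keepdims=True)        # drop DC, keep detail energy
    coder = SparseCoder(dictionary=D, transform_algorithm='omp',
                        transform_n_nonzero_coefs=k)
    codes = coder.transform(X)
    return np.abs(codes).sum(axis=1).reshape(H // patch, W // patch)

rng = np.random.default_rng(0)
D = rng.standard_normal((128, 64))               # random unit-norm atoms stand
D /= np.linalg.norm(D, axis=1, keepdims=True)    # in for a K-SVD dictionary

imgA = rng.random((32, 32))                      # "in focus" source
imgB = rolling_guidance_filter(imgA)             # defocus-like smoothed source
# True where imgA's block looks more in focus than imgB's.
decision = focus_feature_map(imgA, D) > focus_feature_map(imgB, D)
```

The brute-force double loop in `joint_bilateral` keeps the sketch readable; a practical implementation would vectorize it or use an approximate bilateral filter, since the filter is applied several times per image.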

© 2017 Optical Society of America


References


  1. Y. Liu, S. Liu, and Z. Wang, “Multi-focus image fusion with dense SIFT,” Inf. Fusion 23, 139–155 (2015).
    [Crossref]
  2. Q. Zhang and B. Guo, “Multifocus image fusion using the nonsubsampled contourlet transform,” Signal Process. 89, 1334–1346 (2009).
    [Crossref]
  3. S. Li, X. Kang, J. Hu, and B. Yang, “Image matting for fusion of multi-focus images in dynamic scenes,” Inf. Fusion 14, 147–162 (2013).
    [Crossref]
  4. X. Bai, Y. Zhang, F. Zhou, and B. Xue, “Quadtree-based multi-focus image fusion using a weighted focus-measure,” Inf. Fusion 22, 105–118 (2015).
    [Crossref]
  5. T. Wan, C. Zhu, and Z. Qin, “Multifocus image fusion based on robust principal component analysis,” Pattern Recognit. Lett. 34, 1001–1008 (2013).
    [Crossref]
  6. Z. Zhou, S. Li, and B. Wang, “Multi-scale weighted gradient-based fusion for multi-focus images,” Inf. Fusion 20, 60–72 (2014).
    [Crossref]
  7. Y. Liu, J. Jin, Q. Wang, Y. Shen, and X. Dong, “Region level based multi-focus image fusion using quaternion wavelet and normalized cut,” Signal Process. 97, 9–30 (2014).
    [Crossref]
  8. N. Kausar and A. Majid, “Random forest-based scheme using feature and decision levels information for multi-focus image fusion,” Pattern Anal. Appl. 19, 221–236 (2016).
    [Crossref]
  9. B. Yang and S. Li, “Multifocus image fusion and restoration with sparse representation,” IEEE Trans. Instrum. Meas. 59, 884–892 (2010).
    [Crossref]
  10. Y. Liu, S. Liu, and Z. Wang, “A general framework for image fusion based on multi-scale transform and sparse representation,” Inf. Fusion 24, 147–164 (2015).
    [Crossref]
  11. Y. Liu and Z. Wang, “Simultaneous image fusion and denoising with adaptive sparse representation,” IET Image Process. 9, 347–357 (2015).
    [Crossref]
  12. M. Nejati, S. Samavi, and S. Shirani, “Multi-focus image fusion using dictionary-based sparse representation,” Inf. Fusion 25, 72–84 (2015).
    [Crossref]
  13. M. Kim, D. K. Han, and H. Ko, “Joint patch clustering-based dictionary learning for multimodal image fusion,” Inf. Fusion 27, 198–214 (2016).
    [Crossref]
  14. Y. Li, F. Li, B. Bai, and Q. Shen, “Image fusion via nonlocal sparse K-SVD dictionary learning,” Appl. Opt. 55, 1814–1823 (2016).
    [Crossref]
  15. S. Li, B. Yang, and J. Hu, “Performance comparison of different multiresolution transforms for image fusion,” Inf. Fusion 12, 74–84 (2011).
    [Crossref]
  16. V. N. Gangapure, S. Banerjee, and A. S. Chowdhury, “Steerable local frequency based multispectral multifocus image fusion,” Inf. Fusion 23, 99–115 (2015).
    [Crossref]
  17. H. Li, B. Manjunath, and S. Mitra, “Multisensor image fusion using the wavelet transform,” Graph. Models Image Process. 57, 235–245 (1995).
    [Crossref]
  18. Q. Miao, C. Shi, P. Xu, M. Yang, and Y. Shi, “A novel algorithm of image fusion using shearlets,” Opt. Commun. 284, 1540–1547 (2011).
    [Crossref]
  19. Y. Yang, S. Tong, S. Huang, and P. Lin, “Dual-tree complex wavelet transform and image block residual-based multi-focus image fusion in visual sensor networks,” Sensors 14, 22408–22430 (2014).
    [Crossref]
  20. B. Yu, B. Jia, L. Ding, Z. Cai, Q. Wu, R. Law, J. Huang, L. Song, and S. Fu, “Hybrid dual-tree complex wavelet transform and support vector machine for digital multi-focus image fusion,” Neurocomputing 182, 1–9 (2016).
    [Crossref]
  21. S. Li and B. Yang, “Multifocus image fusion using region segmentation and spatial frequency,” Image Vis. Comput. 26, 971–979 (2008).
    [Crossref]
  22. V. Aslantas and R. Kurban, “Fusion of multi-focus images using differential evolution algorithm,” Expert Syst. Appl. 37, 8861–8870 (2010).
    [Crossref]
  23. X. Xia, S. Fang, and Y. Xiao, “High resolution image fusion algorithm based on multi-focused region extraction,” Pattern Recognit. Lett. 45, 115–120 (2014).
    [Crossref]
  24. S. Li, X. Kang, and J. Hu, “Image fusion with guided filtering,” IEEE Trans. Image Process. 22, 2864–2875 (2013).
    [Crossref]
  25. X. Yan, H. Qin, J. Li, H. Zhou, and T. Yang, “Multi-focus image fusion using a guided-filter-based difference image,” Appl. Opt. 55, 2230–2239 (2016).
    [Crossref]
  26. H. Yin, Y. Li, Y. Chai, Z. Liu, and Z. Zhu, “A novel sparse-representation-based multi-focus image fusion approach,” Neurocomputing 216, 216–229 (2016).
    [Crossref]
  27. Q. Zhang and M. Levine, “Robust multi-focus image fusion using multi-task sparse representation and spatial context,” IEEE Trans. Image Process. 25, 2045–2058 (2016).
    [Crossref]
  28. Q. Zhang, X. Shen, L. Xu, and J. Jia, “Rolling guidance filter,” in European Conference on Computer Vision (ECCV, 2014), pp. 815–830.
  29. M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. Signal Process. 54, 4311–4322 (2006).
    [Crossref]
  30. A. M. Bruckstein, D. L. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” SIAM Rev. 51, 34–81 (2009).
    [Crossref]
  31. J. Zhao, R. Laganiere, and Z. Liu, “Performance assessment of combinative pixel-level image fusion based on an absolute feature measurement,” Int. J. Innovat. Comput. Inf. Control 6, 1433–1447 (2007).
  32. C. S. Xydeas and V. S. Petrovic, “Objective image fusion performance measure,” Electron. Lett. 36, 308–309 (2000).
    [Crossref]
  33. M. Hossny, S. Nahavandi, and D. Creighton, “Comments on ‘information measure for performance of image fusion’,” Electron. Lett. 44, 1066–1067 (2008).
    [Crossref]
  34. C. Yang, J. Zhang, X. Wang, and X. Liu, “A novel similarity based quality metric for image fusion,” Inf. Fusion 9, 156–160 (2008).
    [Crossref]
  35. Y. Chen and R. Blum, “A new automated quality assessment algorithm for image fusion,” Image Vis. Comput. 27, 1421–1432 (2009).
    [Crossref]
  36. http://home.ustc.edu.cn/~liuyu1/ .
  37. http://mansournejati.ece.iut.ac.ir/content/lytro-multi-focus-dataset .

[Crossref]

S. Li, B. Yang, and J. Hu, “Performance comparison of different multiresolution transforms for image fusion,” Inf. Fusion 12, 74–84 (2011).
[Crossref]

V. N. Gangapure, S. Banerjee, and A. S. Chowdhury, “Steerable local frequency based multispectral multifocus image fusion,” Inf. Fusion 23, 99–115 (2015).
[Crossref]

Y. Liu, S. Liu, and Z. Wang, “A general framework for image fusion based on multi-scale transform and sparse representation,” Inf. Fusion 24, 147–164 (2015).
[Crossref]

Z. Zhou, S. Li, and B. Wang, “Multi-scale weighted gradient-based fusion for multi-focus images,” Inf. Fusion 20, 60–72 (2014).
[Crossref]

Y. Liu, S. Liu, and Z. Wang, “Multi-focus image fusion with dense SIFT,” Inf. Fusion 23, 139–155 (2015).
[Crossref]

S. Li, X. Kang, J. Hu, and B. Yang, “Image matting for fusion of multi-focus images in dynamic scenes,” Inf. Fusion 14, 147–162 (2013).
[Crossref]

X. Bai, Y. Zhang, F. Zhou, and B. Xue, “Quadtree-based multi-focus image fusion using a weighted focus-measure,” Inf. Fusion 22, 105–118 (2015).
[Crossref]

Int. J. Innovat. Comput. Inf. Control (1)

J. Zhao, R. Laganiere, and Z. Liu, “Performance assessment of combinative pixel-level image fusion based on an absolute feature measurement,” Int. J. Innovat. Comput. Inf. Control 6, 1433–1447 (2007).

Neurocomputing (2)

H. Yin, Y. Li, Y. Chai, Z. Liu, and Z. Zhu, “A novel sparse-representation-based multi-focus image fusion approach,” Neurocomputing 216, 216–229 (2016).
[Crossref]

B. Yu, B. Jia, L. Ding, Z. Cai, Q. Wu, R. Law, J. Huang, L. Song, and S. Fu, “Hybrid dual-tree complex wavelet transform and support vector machine for digital multi-focus image fusion,” Neurocomputing 182, 1–9 (2016).
[Crossref]

Opt. Commun. (1)

Q. Miao, C. Shi, P. Xu, M. Yang, and Y. Shi, “A novel algorithm of image fusion using shearlets,” Opt. Commun. 284, 1540–1547 (2011).
[Crossref]

Pattern Anal. Appl. (1)

N. Kausar and A. Majid, “Random forest-based scheme using feature and decision levels information for multi-focus image fusion,” Pattern Anal. Appl. 19, 221–236 (2016).
[Crossref]

Pattern Recognit. Lett. (2)

T. Wan, C. Zhu, and Z. Qin, “Multifocus image fusion based on robust principal component analysis,” Pattern Recognit. Lett. 34, 1001–1008 (2013).
[Crossref]

X. Xia, S. Fang, and Y. Xiao, “High resolution image fusion algorithm based on multi-focused region extraction,” Pattern Recognit. Lett. 45, 115–120 (2014).
[Crossref]

Sensors (1)

Y. Yang, S. Tong, S. Huang, and P. Lin, “Dual-tree complex wavelet transform and image block residual-based multi-focus image fusion in visual sensor networks,” Sensors 14, 22408–22430 (2014).
[Crossref]

SIAM Rev. (1)

A. M. Bruckstein, D. L. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” SIAM Rev. 51, 34–81 (2009).
[Crossref]

Signal Process. (2)

Q. Zhang and B. Guo, “Multifocus image fusion using the nonsubsampled contourlet transform,” Signal Process. 89, 1334–1346 (2009).
[Crossref]

Y. Liu, J. Jin, Q. Wang, Y. Shen, and X. Dong, “Region level based multi-focus image fusion using quaternion wavelet and normalized cut,” Signal Process. 97, 9–30 (2014).
[Crossref]

Other (3)

http://home.ustc.edu.cn/~liuyu1/ .

http://mansournejati.ece.iut.ac.ir/content/lytro-multi-focus-dataset .

Q. Zhang, X. Shen, L. Xu, and J. Jia, “Rolling guidance filter,” in European Conference on Computer Vision (ECCV, 2014), pp. 815–830.


Figures (10)

Fig. 1. Overall framework of the proposed multi-focus image fusion.
Fig. 2. Three examples of the rolling guidance filter. The top row shows the input images; the bottom row shows the corresponding filtered images (σ_s = 3, σ_r = 2, t = 5).
Fig. 3. Four source images filtered by a rolling guidance filter, serving as learning sources.
Fig. 4. Three pairs of multi-focus images. (a1), (a2), and (a3) are the input images focused on the left part; (b1), (b2), and (b3) are the corresponding input images focused on the right part.
Fig. 5. Fusion results for Figs. 4(a1) and 4(b1). (a) NSCT, (b) ASR, (c) NSCT_SR, (d) GF, (e) DSIFT, (f) DLRGF.
Fig. 6. Normalized difference images between Fig. 4(a1) and each of the fused images in Fig. 5.
Fig. 7. Fusion results for Figs. 4(a2) and 4(b2). (a) NSCT, (b) ASR, (c) NSCT_SR, (d) GF, (e) DSIFT, (f) DLRGF.
Fig. 8. Normalized difference images between Fig. 4(b2) and each of the fused images in Fig. 7.
Fig. 9. Fusion results for Figs. 4(a3) and 4(b3). (a) NSCT, (b) ASR, (c) NSCT_SR, (d) GF, (e) DSIFT, (f) DLRGF.
Fig. 10. Normalized difference images between Fig. 4(b3) and each of the fused images in Fig. 9.

Tables (1)

Table 1. Performance of Different Fusion Methods on Multi-Focus Images with Registration and Mis-Registration

Equations (14)


$$J^{t+1}(p) = \frac{1}{K_p} \sum_{q \in N(p)} \exp\!\left( -\frac{\|p-q\|^2}{2\sigma_s^2} - \frac{\|J^t(p)-J^t(q)\|^2}{2\sigma_r^2} \right) I(q),$$

$$K_p = \sum_{q \in N(p)} \exp\!\left( -\frac{\|p-q\|^2}{2\sigma_s^2} - \frac{\|J^t(p)-J^t(q)\|^2}{2\sigma_r^2} \right),$$

$$J^1 = \frac{1}{K_p} \sum_{q \in N(p)} \exp\!\left( -\frac{\|p-q\|^2}{2\sigma_s^2} \right) I(q),$$

$$K_p = \sum_{q \in N(p)} \exp\!\left( -\frac{\|p-q\|^2}{2\sigma_s^2} \right),$$
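Equations (1)–(4) define the rolling guidance filter: a Gaussian pre-filter $J^1$ removes small structures, and $t$ joint bilateral iterations, each guided by the previous iterate $J^t$, recover large-scale edges from the input $I$. The following is a minimal NumPy sketch of those formulas using brute-force neighborhoods, not the authors' implementation; the function name and the `radius` heuristic are illustrative choices.

```python
import numpy as np

def rolling_guidance_filter(I, sigma_s=3.0, sigma_r=2.0, t=5, radius=None):
    """Sketch of the rolling guidance filter, Eqs. (1)-(4).

    Step 1 (Eqs. 3-4): Gaussian smoothing of I gives J^1.
    Step 2 (Eqs. 1-2): t joint bilateral iterations, each guided by
    the previous iterate J^t while always filtering the input I.
    """
    if radius is None:
        radius = int(3 * sigma_s)           # heuristic window radius
    I = I.astype(np.float64)
    k = 2 * radius + 1

    # Spatial Gaussian weights exp(-||p-q||^2 / 2*sigma_s^2) over N(p).
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2.0 * sigma_s**2))

    # All (k x k) neighborhoods of I, via edge-replicated padding.
    Ipad = np.pad(I, radius, mode='edge')
    patches = np.lib.stride_tricks.sliding_window_view(Ipad, (k, k))

    # J^1: plain Gaussian filtering of I (small-structure removal).
    J = (patches * spatial).sum(axis=(-2, -1)) / spatial.sum()

    # Edge recovery: joint bilateral filtering guided by J^t.
    for _ in range(t):
        Jpad = np.pad(J, radius, mode='edge')
        Jpatch = np.lib.stride_tricks.sliding_window_view(Jpad, (k, k))
        rng = np.exp(-(Jpatch - J[..., None, None])**2 / (2.0 * sigma_r**2))
        w = spatial * rng                   # combined weights of Eq. (1)
        J = (w * patches).sum(axis=(-2, -1)) / w.sum(axis=(-2, -1))  # K_p
    return J
```

Because each output pixel is a convex combination of input pixels, the result always stays within the input's value range.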
$$\min_{x_i} \|y_i - D x_i\|_2^2 \quad \text{s.t.} \quad \|x_i\|_0 \le k_0,\; 1 \le i \le N,$$

$$\hat{x}_i = \arg\min_{x_i} \|x_i\|_1 \quad \text{s.t.} \quad \|y_i - D x_i\|_2 \le \delta,$$

$$f = \|\hat{x}_i\|_0.$$
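Equations (5)–(7) pose the sparse coding behind the focus measure: each patch $y_i$ is fit over the learned dictionary $D$ under a sparsity constraint, and the patch's focus feature $f$ is the number of nonzero coefficients in its code $\hat{x}_i$. A minimal orthogonal matching pursuit sketch of this step follows; it is illustrative only, and the helper names `omp` and `focus_feature` are mine, not the paper's.

```python
import numpy as np

def omp(D, y, tol=1e-3, k_max=None):
    """Greedy sparse coding of y over dictionary D (cf. Eqs. 5-6).

    Repeatedly picks the atom most correlated with the residual,
    re-fits the selected atoms by least squares, and stops when the
    residual norm drops below tol or k_max atoms are used.
    """
    n_atoms = D.shape[1]
    if k_max is None:
        k_max = n_atoms
    idx, r = [], y.astype(np.float64).copy()
    x = np.zeros(n_atoms)
    while np.linalg.norm(r) > tol and len(idx) < k_max:
        idx.append(int(np.argmax(np.abs(D.T @ r))))   # best-matching atom
        sub = D[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)  # re-fit support
        r = y - sub @ coef
    if idx:
        x[idx] = coef
    return x

def focus_feature(x_hat):
    """Focus feature of a patch: the l0 'norm' of its code, Eq. (7)."""
    return int(np.count_nonzero(x_hat))
```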
$$W_{O,1}(i) = f_{O,1}(i), \quad O \in \{1,2\},$$

$$f_{O,1}(i) = \|\hat{x}_{O,i}\|_0,$$

$$W_{O,2} = F_{RG}(W_{O,1}, \sigma_s, \sigma_r, t).$$
$$W(x,y) = \begin{cases} 1, & \text{if } W_{1,2}(x,y) \ge W_{2,2}(x,y) \\ 0, & \text{otherwise}, \end{cases}$$

$$W^* = \mathrm{imclose}(W, S_e),$$

$$I_F(x,y) = W^*(x,y)\, I_1(x,y) + \big(1 - W^*(x,y)\big)\, I_2(x,y),$$

$$I_{ND}^* = \frac{I_{ND} - I_{\min}}{I_{\max} - I_{\min}}.$$
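Equations (11)–(14) turn the filtered feature maps into the fused image: compare the two maps pixel-wise to form the binary decision map, clean it with morphological closing (imclose with structuring element $S_e$), blend the two registered sources, and min-max normalize difference images for display. A hedged NumPy/SciPy sketch is below; the function names are mine, and edge-replicated padding is used to approximate MATLAB's imclose border handling with SciPy's `binary_closing`.

```python
import numpy as np
from scipy.ndimage import binary_closing

def fuse(I1, I2, W1, W2, se=np.ones((5, 5), dtype=bool)):
    """Decision-map fusion of two registered sources, Eqs. (11)-(13).

    W1, W2 play the role of the filtered feature maps W_{O,2};
    se is the structuring element S_e of the closing step.
    """
    W = W1 >= W2                                  # Eq. (11): decision map
    r = se.shape[0] // 2
    Wp = np.pad(W, r, mode='edge')                # pad so closing is not
    W_star = binary_closing(Wp, structure=se)     # eroded at the borders
    W_star = W_star[r:-r, r:-r].astype(I1.dtype)  # Eq. (12): W*
    return W_star * I1 + (1 - W_star) * I2        # Eq. (13): fused image

def normalize_difference(I_nd):
    """Eq. (14): min-max rescaling of a difference image to [0, 1]."""
    lo, hi = I_nd.min(), I_nd.max()
    return (I_nd - lo) / (hi - lo) if hi > lo else np.zeros_like(I_nd)
```

When one map dominates everywhere, the fused output reduces to the corresponding source image, which is a quick sanity check for the blending step.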
