
Bayesian’s probabilistic strategy for feature fusion from visible and infrared images

  • Original article
  • Published in The Visual Computer

Abstract

This article introduces a novel approach, and a first attempt, at fusing visible and infrared images based on multi-scale decomposition and salient feature map detection. The proposed technique integrates the bidimensional empirical mode decomposition (BEMD) strategy with a Bayesian probabilistic fusion strategy. The proposed mechanism can effectively handle uncertainty in challenging source pairs and retain maximum detail from the sources at a multi-scale level. BEMD-level features are extracted and combined through the Bayesian probabilistic fusion strategy to produce several salient feature maps from the infrared and visible source images; these maps preserve the common information and suppress the superfluous information of the source images at various scales. Combining these salient feature maps generates an image that conveys complete information about the target scene with reduced artifacts. The performance of the proposed algorithm is evaluated on the benchmark "TNO" database, using both visual analysis and quantitative assessment. The efficiency of the proposed technique is corroborated against seventeen existing state-of-the-art (SOTA) techniques and found to be effective. For the quantitative assessment, we use the four most-cited quantitative evaluation measures: mutual information for the discrete cosine features \((\text {FMI}_\textrm{dct})\), the amount of artifacts added during the fusion process \((N_\textrm{abf})\), the structural similarity index \((\text {SSIM}_a)\), and the edge preservation index \((\text {EPI}_a)\). The proposed algorithm attains the best average values: Avg. \(\text {FMI}_\textrm{dct}\) = 0.39863, Avg. \(N_\textrm{abf}\) = 0.00102, Avg. \(\text {SSIM}_a\) = 0.77820, and Avg. \(\text {EPI}_a\) = 0.78404. The proposed scheme also outperforms the competing SOTA techniques on all considered quantitative evaluation measures, with gains ranging from at least 3% up to 94%.
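To make the pipeline concrete, the following is a minimal, hypothetical Python sketch of the general idea described above. It is not the authors' algorithm: BEMD is approximated here by a simple Gaussian base/detail split, and the Bayesian probabilistic weighting by normalized local-energy saliency maps.

import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def two_scale_split(img, sigma=2.0):
    # Crude stand-in for one BEMD level: low-frequency base plus detail residue.
    base = gaussian_filter(img, sigma)
    return base, img - base

def local_energy(detail, win=7, eps=1e-8):
    # Local energy of the detail layer, used as an unnormalized saliency "likelihood".
    return uniform_filter(detail ** 2, size=win) + eps

def fuse(vis, ir):
    base_v, det_v = two_scale_split(vis)
    base_i, det_i = two_scale_split(ir)
    s_v, s_i = local_energy(det_v), local_energy(det_i)
    w_v = s_v / (s_v + s_i)                # posterior-style per-pixel weight
    fused_detail = w_v * det_v + (1.0 - w_v) * det_i
    fused_base = 0.5 * (base_v + base_i)   # plain average of the base layers
    return fused_base + fused_detail

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vis, ir = rng.random((64, 64)), rng.random((64, 64))
    print(fuse(vis, ir).shape)             # (64, 64)

A faithful implementation would replace two_scale_split with a full BEMD (several intrinsic mode functions per image) and derive the per-pixel weights from the probabilistic model in the paper; the normalized-saliency weighting above only mimics the posterior-weight idea.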




Data Availability Statement

The authors confirm that the data supporting the findings of the technique proposed in the article titled "Bayesian's Probabilistic Strategy for Feature Fusion from Visible and Infrared Images" are openly available in the benchmark "TNO" database, accessible at https://figshare.com/articles/dataset/TNO_Image_Fusion_Dataset/1008029. The database was last accessed by the authors on 10 March 2023. The code will be made publicly available soon.
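For readers reproducing the evaluation, a hypothetical loader for locally downloaded TNO visible/infrared pairs might look like the sketch below; the directory layout and file names (scene/VIS.png, scene/IR.png) are assumptions, not the dataset's actual structure, so adapt the paths to however you unpacked the figshare archive.

from pathlib import Path
import numpy as np
from PIL import Image

def load_pair(root, scene):
    # Return a (visible, infrared) pair as grayscale float arrays in [0, 1].
    # File names below are hypothetical placeholders for the TNO layout.
    scene_dir = Path(root) / scene
    vis = np.asarray(Image.open(scene_dir / "VIS.png").convert("L"), dtype=np.float64) / 255.0
    ir = np.asarray(Image.open(scene_dir / "IR.png").convert("L"), dtype=np.float64) / 255.0
    return vis, ir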

Notes

  1. https://figshare.com/articles/dataset/TNO_Image_Fusion_Dataset/1008029.

  2. https://github.com/manojpanda1986/Fused-images.


Author information


Corresponding author

Correspondence to Badri Narayan Subudhi.

Ethics declarations

Conflict of interest

The authors, Manoj Kumar Panda, T. Veerakumar, Badri Narayan Subudhi, and Vinit Jakhetiya, each declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Panda, M.K., Thangaraj, V., Subudhi, B.N. et al. Bayesian’s probabilistic strategy for feature fusion from visible and infrared images. Vis Comput 40, 4221–4233 (2024). https://doi.org/10.1007/s00371-023-03078-4

