An interactive deep model combined with Retinex for low-light visible and infrared image fusion

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Directly fusing low-light visible and infrared images rarely produces fusion results that contain both rich structural details and salient infrared targets, owing to the limitations imposed by extreme illumination. The resulting fused images typically neither describe the scene accurately nor suit machine perception. Consequently, a novel image fusion framework combined with Retinex theory, termed LLVIFusion, is designed for low-light visible and infrared image fusion. Specifically, LLVIFusion is trained with a two-stage strategy. First, a new interactive fusion network is trained to generate initial fusion results with more informative features. Within this network, features are continuously reused inside each branch, and feature information from different branches interacts constantly through the designed fusion blocks, which allows the fusion network not only to avoid losing information but also to strengthen it for subsequent processing. An adaptive weight-based loss function is further proposed to guide the training of the fusion network. Next, a refinement network incorporating Retinex theory is introduced to optimize the visibility of the initial fusion results and obtain fusion results of high visual quality. Compared with 14 state-of-the-art methods, LLVIFusion achieves the best values on all six objective measures on the LLVIP and MF-Net datasets, and obtains two best and two second-best values on the TNO dataset. These experimental results show that LLVIFusion successfully performs low-light visible and infrared image fusion and also produces good fusion results under normal illumination. The code of LLVIFusion is available at https://github.com/govenda/LLVIFusion.
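The sketch below illustrates how such a two-stage pipeline could be organized: an interactive fusion network whose fusion blocks exchange features between the visible and infrared branches, followed by a Retinex-style refinement stage that estimates an illumination map and recovers a brighter result. It is a minimal PyTorch-style sketch under assumed module names, layer sizes, and a simplified Retinex step; it is not the authors' released implementation (see the repository linked above for the official code).

```python
# Minimal sketch of a two-stage low-light fusion pipeline in the spirit of
# LLVIFusion. Module structure, layer sizes, and the Retinex decomposition
# are illustrative assumptions, not the authors' released code.
import torch
import torch.nn as nn


class FusionBlock(nn.Module):
    """Exchanges information between the visible and infrared branches."""

    def __init__(self, channels: int):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(inplace=True)
        )

    def forward(self, feat_vis, feat_ir):
        shared = self.mix(torch.cat([feat_vis, feat_ir], dim=1))
        # Feature reuse: each branch keeps its own features and adds the shared ones.
        return feat_vis + shared, feat_ir + shared


class InteractiveFusionNet(nn.Module):
    """Stage 1: produces an initial fused image from visible and infrared inputs."""

    def __init__(self, channels: int = 32, n_blocks: int = 3):
        super().__init__()
        self.embed_vis = nn.Conv2d(1, channels, 3, padding=1)
        self.embed_ir = nn.Conv2d(1, channels, 3, padding=1)
        self.blocks = nn.ModuleList(FusionBlock(channels) for _ in range(n_blocks))
        self.decode = nn.Conv2d(2 * channels, 1, 3, padding=1)

    def forward(self, vis, ir):
        f_vis, f_ir = self.embed_vis(vis), self.embed_ir(ir)
        for block in self.blocks:
            f_vis, f_ir = block(f_vis, f_ir)
        return torch.sigmoid(self.decode(torch.cat([f_vis, f_ir], dim=1)))


class RetinexRefineNet(nn.Module):
    """Stage 2: estimates an illumination map and brightens the initial fusion."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.illum = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, fused, eps: float = 1e-4):
        illumination = self.illum(fused)
        reflectance = fused / (illumination + eps)  # Retinex assumption: I = R * L
        return reflectance.clamp(0, 1)


if __name__ == "__main__":
    vis = torch.rand(1, 1, 128, 128)   # low-light visible image (grayscale)
    ir = torch.rand(1, 1, 128, 128)    # infrared image
    stage1, stage2 = InteractiveFusionNet(), RetinexRefineNet()
    refined = stage2(stage1(vis, ir))
    print(refined.shape)               # torch.Size([1, 1, 128, 128])
```

In this sketch the refinement stage simply divides the initial fusion by the estimated illumination; the paper's refinement network and its adaptive weight-based loss are more elaborate, so the code should be read only as a structural outline of the two-stage idea.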

Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgements

This work was primarily supported by the National Natural Science Foundation of China under Grant Nos. 62066047, 61966037, and 61463052, and in part by the Innovation Research Foundation for Graduate Students of Yunnan University (KC-2222245, KC-22221913).

Author information

Corresponding author

Correspondence to Dongming Zhou.

Ethics declarations

Conflict of interest

To the best of our knowledge, the named authors have no conflict of interest, financial or otherwise.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Wang, C., Zang, Y., Zhou, D. et al. An interactive deep model combined with Retinex for low-light visible and infrared image fusion. Neural Comput & Applic 35, 11733–11751 (2023). https://doi.org/10.1007/s00521-023-08314-5
