Abstract
Directly fusing low-light visible and infrared images rarely yields fusion results that contain rich structural details and critical infrared targets, owing to the limitations imposed by extreme illumination. The resulting fused images typically neither describe the scene accurately nor suit machine perception. Consequently, a novel image fusion framework that incorporates Retinex theory, termed LLVIFusion, is designed for low-light visible and infrared image fusion. Specifically, LLVIFusion is trained with a two-stage strategy. First, a new interactive fusion network is trained to generate initial fusion results with more informative features. Within the interactive fusion network, features are continuously reused within each branch, and feature information from different branches is constantly exchanged through the designed fusion blocks, which allows the fusion network not only to avoid losing information but also to strengthen the information for subsequent processing. Furthermore, an adaptive weight-based loss function is proposed to guide the training of the fusion network. Second, a refinement network incorporating Retinex theory is introduced to improve the visibility of the initial fusion results and thereby obtain fusion results of high visual quality. Compared with 14 state-of-the-art methods, LLVIFusion achieves the best values on all six objective measures on the LLVIP and MF-Net datasets, and obtains two best and two second-best values on the TNO dataset. These experimental results show that LLVIFusion successfully performs low-light visible and infrared image fusion and also produces good fusion results under normal illumination. The code of LLVIFusion is available at https://github.com/govenda/LLVIFusion.
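To make the two ideas summarized above more concrete, the following is a minimal sketch, not the authors' released implementation (which is linked above). It assumes PyTorch, and all names such as FusionBlock and retinex_refine are hypothetical. It only illustrates, in simplified form, a fusion block through which the visible and infrared branches exchange features while reusing their own, and a Retinex-style refinement that models an image as the product of reflectance and illumination (I = R x L), so that dividing by an estimated illumination map brightens dark regions.

    # Minimal sketch (PyTorch assumed); module and function names are hypothetical.
    import torch
    import torch.nn as nn

    class FusionBlock(nn.Module):
        """Exchanges information between the visible and infrared branches.

        Each branch keeps (reuses) its own features via a residual connection
        and also receives the other branch's features, so information is
        neither lost nor kept isolated in a single branch.
        """

        def __init__(self, channels: int):
            super().__init__()
            # 2 * channels in: the branch's own features plus the other branch's.
            self.vis_conv = nn.Sequential(
                nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            )
            self.ir_conv = nn.Sequential(
                nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            )

        def forward(self, vis_feat: torch.Tensor, ir_feat: torch.Tensor):
            # Cross-branch interaction: each branch sees both feature maps.
            mixed = torch.cat([vis_feat, ir_feat], dim=1)
            # Residual connections reuse each branch's own features.
            vis_out = vis_feat + self.vis_conv(mixed)
            ir_out = ir_feat + self.ir_conv(mixed)
            return vis_out, ir_out

    def retinex_refine(fused: torch.Tensor, illumination: torch.Tensor,
                       eps: float = 1e-4) -> torch.Tensor:
        """Retinex-style refinement: since I = R * L, the reflectance is R = I / L.

        Dividing the initial fusion result by an estimated (low) illumination map
        brightens dark regions; the actual refinement network learns this mapping.
        """
        return fused / (illumination + eps)

In this sketch, stacking several FusionBlock instances corresponds to the repeated feature reuse and cross-branch exchange described in the abstract, while retinex_refine stands in for the second-stage refinement network.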
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Acknowledgements
This work was primarily supported by the National Natural Science Foundation of China under Grant Nos. 62066047, 61966037 and 61463052, and in part by the Innovation Research Foundation for Graduate Students of Yunnan University (KC-2222245, KC-22221913).
Ethics declarations
Conflict of interest
To the best of our knowledge, the named authors have no conflict of interest, financial or otherwise.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Wang, C., Zang, Y., Zhou, D. et al. An interactive deep model combined with Retinex for low-light visible and infrared image fusion. Neural Comput & Applic 35, 11733–11751 (2023). https://doi.org/10.1007/s00521-023-08314-5