Research Article: Image Dehazing Based On Improved Color Channel Transfer and Multiexposure Fusion
Advances in Multimedia
Volume 2023, Article ID 8891239, 10 pages
https://doi.org/10.1155/2023/8891239
Research Article
Image Dehazing Based on Improved Color Channel Transfer and
Multiexposure Fusion
Shaojin Ma,1 Weiguo Pan,1 Hongzhe Liu,1 Songyin Dai,1 Bingxin Xu,1 Cheng Xu,1 Xuewei Li,2 and Huaiguang Guan3

1Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing 100101, China
2School of Economics and Management, Beijing Jiaotong University, Beijing 100044, China
3CATARC (Tianjin) Automotive Engineering Research Institute Co., Ltd., Tianjin 300300, China
Received 30 December 2022; Revised 17 March 2023; Accepted 2 May 2023; Published 15 May 2023
Copyright © 2023 Shaojin Ma et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Image dehazing is one of the problems that urgently need to be solved in the field of computer vision. In recent years, more and more algorithms have been applied to image dehazing and have achieved good results. However, the image after dehazing still suffers from color distortion, contrast and saturation disorders, and other challenges. In order to solve these problems, this paper proposes an effective image dehazing method based on improved color channel transfer and multiexposure image fusion. First, the image is preprocessed using a color channel transfer method based on k-means. Second, gamma correction is introduced on the basis of guided filtering to obtain a series of multiexposure images, and the obtained multiexposure images are fused into a dehazed image through a Laplacian pyramid fusion scheme based on local similarity of adaptive weights. Finally, contrast and saturation corrections are performed on the dehazed image. Experimental verification is carried out on synthetic foggy images and natural foggy images, and the results verify that the proposed method is superior to existing dehazing algorithms in both subjective and objective aspects.
subsequent physical model-based dehazing algorithm, and many researchers have carried out extensive and in-depth research on the basis of the atmospheric scattering model to continuously improve the level of image dehazing.

In recent years, dehazing algorithms based on deep learning have shown better performance. At present, two types of deep learning-based dehazing algorithms are widely studied: one uses deep learning methods to estimate some parameters of atmospheric physical models to restore images [9], and the other uses a neural network to directly restore the input foggy image and obtain the dehazed image, which is often referred to as end-to-end dehazing in deep learning [10, 11].

Different from existing dehazing methods based on atmospheric scattering models, the proposed method adopts Laplace pyramid decomposition to retain the structural information of the image. In order to obtain a fog-free image, the area with the best visual quality is collected from each image for image fusion, and the color channel transfer algorithm is used to effectively retain the color information in the image.

The main contributions of this paper are as follows:

(a) In order to prevent the color dullness and distortion that may occur in the image after dehazing, we propose a color transfer module to compensate for the color loss of the dehazed image. The color transfer module converts the image data from RGB space to lαβ space and then uses color channel transfer between images to restore the color information of the dehazed image.

(b) An image dehazing algorithm based on a Laplace pyramid fusion scheme via local similarity of adaptive weights is proposed, which first artificially underexposes hazy images through a series of gamma correction operations. With a multiscale Laplace fusion scheme, the multiple exposure images are combined into a fog-free result, extracting the best-quality areas from each image and merging them into a single fog-free output.

(c) In order to demonstrate the dehazing performance of the proposed method, extensive experiments were carried out on datasets of indoor/outdoor synthetic foggy images and natural foggy images, and better results were achieved in both subjective and objective aspects.

2. Related Work

Foggy images suffer from blurry details, low contrast, and loss of important image information, and preprocessing of foggy images can often improve dehazing performance. The literature [12] proposes color channel transfer, which utilizes a reference image derived from the source image to transfer information from an important color channel to an attenuated color channel to compensate for the loss of information. However, this method needs to be combined with other dehazing methods to improve their dehazing performance in special color scenes.

The establishment of the atmospheric scattering model [13] explains the formation process of images in foggy weather and lays a foundation for subsequent defogging work [14, 15]. He et al. [8] proposed the dark channel prior (DCP) based on the atmospheric scattering model and prior knowledge. In general, DCP has a good effect on dehazing natural scene images, but the theory is ineffective in bright areas, such as the sky, water, and the surfaces of white objects, resulting in inaccurately calculated transmission rates, excessive enhancement of the recovered image, and a darker overall effect. After that, He et al. [16] proposed the guided filtering algorithm, an edge-preserving filter built on simple box blurring that is not affected by the degree and radius of blurring, so its real-time performance is greatly improved. In the fields of image deraining and denoising, guided filtering can also achieve good results. Raanan [17] proposed a dehazing method from the perspective of image color lines, assuming that the transmission in a local area is consistent and that the color lines in the haze-free area pass through the origin and move along the ambient light; this characteristic is used to estimate the local transmission and the global ambient light.

Compared with traditional methods, deep learning methods mainly learn the transmission rate from labeled training datasets or directly learn the mapping from foggy images to the corresponding fog-free images. For example, Proximal Dehaze-Net [18] first designed two prior-based iterative optimization algorithms using proximal operators and then expanded the iterative algorithm into a dehazing network by using convolutional neural networks to learn the proximal operators. DehazeNet [9] uses a deep architecture based on convolutional neural networks to estimate the transmission rate in the atmospheric scattering model. Ren et al. [19] proposed a multiscale deep convolutional neural network for recovering foggy images. This process often wastes a lot of computation time; if the depth estimation of the hazy scene is not accurate, the image after dehazing is prone to artifacts in edge areas or exhibits color distortion, affecting the visual effect. Zhang and Tao [20] proposed FAMED-Net, a fast and accurate multiscale end-to-end convolutional dehazing network that can calculate haze-free images at multiple scales. FFA-Net [21] is an end-to-end feature fusion attention network in which attention mechanisms focus on more informative features. Hong et al. [22] proposed a knowledge distillation network (KDDN) that uses the teacher network for an image reconstruction task and enables the student network to imitate this process. LKD-Net [23] improves dehazing performance by increasing the size of the convolution kernel to obtain a larger receptive field. Deep learning-based dehazing methods have shown excellent performance and achieved great success. However, training deep learning models to good performance is cumbersome: not only is a large labeled dataset required, but the training process is also time-consuming. Moreover, debugging deep learning models is relatively difficult, which increases the workload.
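The channel-transfer preprocessing discussed above can be illustrated with a minimal statistics-matching sketch. This follows the classical global color transfer formulation (match each channel's mean and standard deviation to a reference); the paper's variant additionally clusters pixels with k-means and operates in lαβ space, both of which are omitted here. The function name and the use of NumPy are our own choices, not part of the original method.

```python
import numpy as np

def channel_transfer(source, reference):
    """Global color channel transfer: match the per-channel mean and
    standard deviation of `source` to those of `reference`.

    Both inputs are float arrays of shape (H, W, C); the color space
    (RGB, lalphabeta, ...) is left to the caller.
    """
    result = np.empty(source.shape, dtype=np.float64)
    for c in range(source.shape[2]):
        src = source[..., c].astype(np.float64)
        ref = reference[..., c].astype(np.float64)
        s_std = src.std()
        scale = ref.std() / s_std if s_std > 0 else 1.0
        # shift to zero mean, rescale spread, re-center on the reference
        result[..., c] = (src - src.mean()) * scale + ref.mean()
    return result
```

After the transfer, each channel of the output has exactly the reference channel's mean and spread, which is what compensates an attenuated channel with information from a stronger one.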
[Figure: Overall pipeline of the proposed method. The original image is preprocessed by k-means-based color channel transfer; guided filtering and gamma corrections (γ = 2, 4, 5) generate a series of underexposed images; multiexposure blending produces the result layer, which is refined by contrast enhancement and saturation correction to yield the dehazed image.]
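The two middle stages of this pipeline can be sketched as follows. The gamma values mirror the ones shown in the figure; the well-exposedness weight is a Mertens-style stand-in for the paper's adaptive local-similarity weights, and all names are our own assumptions rather than the authors' implementation.

```python
import numpy as np

def multiexposure_stack(image, gammas=(2.0, 4.0, 5.0)):
    """Artificially under-expose a hazy image with a series of gamma
    corrections (gamma > 1 darkens). `image` is a float array in [0, 1]."""
    image = np.clip(image.astype(np.float64), 0.0, 1.0)
    return [image ** g for g in gammas]

def _blur(a):
    # separable 1-2-1 binomial blur with edge replication
    p = np.pad(a, 1, mode="edge")
    v = (p[:-2] + 2.0 * p[1:-1] + p[2:]) / 4.0              # vertical pass
    return (v[:, :-2] + 2.0 * v[:, 1:-1] + v[:, 2:]) / 4.0  # horizontal pass

def _down(a):
    return _blur(a)[::2, ::2]

def _up(a, shape):
    u = np.repeat(np.repeat(a, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]
    return _blur(u)

def fuse_exposures(stack, levels=3):
    """Blend a multiexposure stack (grayscale float images in [0, 1])
    with per-level Laplacian pyramid fusion."""
    # per-pixel "well-exposedness" weights, normalized across the stack
    w = [np.exp(-((im - 0.5) ** 2) / (2.0 * 0.2 ** 2)) for im in stack]
    total = np.sum(w, axis=0) + 1e-12
    w = [wi / total for wi in w]
    fused = None
    for im, wi in zip(stack, w):
        gp_w, gp_i = [wi], [im]          # Gaussian pyramids of weight and image
        for _ in range(levels - 1):
            gp_w.append(_down(gp_w[-1]))
            gp_i.append(_down(gp_i[-1]))
        lp = [gp_i[k] - _up(gp_i[k + 1], gp_i[k].shape)
              for k in range(levels - 1)] + [gp_i[-1]]   # Laplacian pyramid
        contrib = [l * g for l, g in zip(lp, gp_w)]      # weight each level
        fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]
    out = fused[-1]                       # collapse the blended pyramid
    for k in range(levels - 2, -1, -1):
        out = fused[k] + _up(out, fused[k].shape)
    return np.clip(out, 0.0, 1.0)
```

Blending in the Laplacian domain, rather than averaging pixels directly, is what lets each pyramid level take its content from whichever exposure is best at that scale without visible seams.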
where F is the original image, F_{i,j} is the i-th subimage in the j-th scale, and h_{i,j} is the Gaussian kernel function for scale i and subimage j.

(2) CLAHE processing is carried out for each scale image. First, the image is divided into small blocks; then, the pixels within each block are histogram equalized; finally, the values of the pixels within the block are interpolated.
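That per-block procedure can be sketched compactly: clip each block's histogram, redistribute the clipped excess, then equalize with the block's own CDF. The interpolation between neighbouring block mappings is deliberately omitted here for brevity, so this toy version leaves block seams that real CLAHE smooths away; the function name and parameters are our own.

```python
import numpy as np

def clahe_blocks(img, tile=8, clip_frac=0.01):
    """Simplified CLAHE on a uint8 grayscale image: tile the image,
    clip each tile's histogram, redistribute the clipped excess, and
    equalize each tile with its own CDF (no inter-tile interpolation)."""
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = img[y:y + tile, x:x + tile]
            hist = np.bincount(block.ravel(), minlength=256).astype(np.float64)
            limit = max(clip_frac * block.size, 1.0)          # clip limit per bin
            excess = np.clip(hist - limit, 0.0, None).sum()
            hist = np.minimum(hist, limit) + excess / 256.0   # redistribute excess
            cdf = hist.cumsum() / hist.sum()
            out[y:y + tile, x:x + tile] = (cdf[block] * 255.0).astype(np.uint8)
    return out
```

The clip limit is what distinguishes CLAHE from plain per-block equalization: it caps how steep the local mapping can become and therefore how much noise is amplified in flat regions.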
Figure 4: Comparison of the dehazing effect on real scene images. From left to right: the original image, the results of AMEF, CAP, CODHWT, FADE, MAME, and DePAMEF, and our method.
which reduces the visual effect of the image. The AMEF and CODHWT methods can effectively reconstruct sharp images from foggy images. Through the sky areas in rows 6 and 8 of Figure 4, the background color after dehazing by the AMEF method is closer to the original image than that of the CODHWT method. Both the MAME and DePAMEF methods achieve better performance in detail visibility and preservation of fog-free areas, but the image after DePAMEF dehazing retains residual haze, resulting in increased color artifacts in the area where the house and the sky meet.

The algorithm proposed in this paper compensates for the loss between channels through the color channel transfer method before dehazing and effectively reduces the interference between channels; the essence of the image is clearly restored after dehazing, the buildings and vehicles in the distance are clearly visible, and the details are obvious. Spatial linear saturation adjustment and contrast correction are applied to the multiexposure image fusion, and the image after dehazing is more in line with human visual observation.

4.3. Objective Evaluation. In order to analyze the subtle differences in the images, this paper uses PSNR [36], SSIM [37], FSIM [38], and GMSD [39] for objective evaluation.

Zhang et al. [38] proposed FSIM, arguing that the human visual system mainly understands images based on low-level features; it combines phase consistency, color features, gradient features, and chromaticity features to measure the local structural information of images. GMSD was proposed by Xue et al. [39] in 2014, showing that gradient maps are sensitive to image distortion and that distorted images with different structures suffer different degrees of quality degradation; on this basis, they proposed a full-reference image evaluation
Figure 5: The visualization effect of dehazing in the synthetic haze scene. From left to right: the original image, the results of AMEF, CAP, CODHWT, FADE, MAME, and DePAMEF, our method, and the real fog-free image.
method, which has the characteristics of high accuracy and a low amount of calculation.

PSNR evaluates image quality by calculating the pixel error between the original image and the dehazed image. The PSNR value is larger when the error between the dehazed image and the original image is smaller. The calculation of PSNR is shown in the following equation:

PSNR = 10 × log10 (MAX² / MSE),  (13)

where MSE is the mean squared error and MAX² is the square of the maximum pixel value of the original image.

SSIM is used to measure the similarity between the original image and the dehazed image. SSIM uses the mean value to estimate brightness, the standard deviation to estimate contrast, and the covariance to measure structural similarity, as shown in the following equation. The larger the SSIM value, the less distorted the image, indicating better results after dehazing.

SSIM = ((2μx μy + C1)(2σxy + C2)) / ((μx² + μy² + C1)(σx² + σy² + C2)),  (14)

where μx and μy represent the means of x and y, σx² and σy² represent the variances of x and y, respectively, σxy is the covariance between x and y, and C1 and C2 are constant coefficients.

FSIM is based on phase consistency and gradient amplitude. The larger the value, the closer the dehazed image is to the original image. GMSD is designed primarily to provide credible evaluation capability while using metrics that minimize computational overhead.

We calculate the PSNR of the different methods for the processed images. From Table 1 and Figure 5, it can be seen that both MAME and the proposed method achieve good results in removing dense fog, and compared with MAME, our proposed method can effectively remove dense fog while restoring the color information of the sky area. In addition, compared with the other methods, the method proposed in this paper achieves better results.

The SSIM values for the images in Figure 5 are shown in Table 2. As can be seen from the table, AMEF, CAP, and the proposed method obtain higher SSIM values. It can be seen from Table 2 that the SSIM value of the proposed method reaches 0.9073, the best performance. For the Tiananmen image in Figure 5, the SSIM value of the method in this paper is 0.9192, second only to CAP.
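Under the definitions above, the metrics can be sketched in a few lines of NumPy. The SSIM here is evaluated globally over the whole image rather than averaged over sliding Gaussian windows as in the standard implementation, and the GMSD constant is the value commonly reported for the original method; the function names are our own.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Equation (13): peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, c1=6.5025, c2=58.5225):
    """Equation (14) over the whole image (a single window); the
    constants correspond to K1 = 0.01, K2 = 0.03 at max_val = 255."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2.0 * mx * my + c1) * (2.0 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)
    return num / den

def gmsd(ref, test, c=170.0):
    """GMSD [39]: standard deviation of the gradient-magnitude
    similarity map (Prewitt gradients); 0 means no distortion."""
    def grad_mag(a):
        p = np.pad(a.astype(np.float64), 1, mode="edge")
        gx = ((p[:-2, 2:] + p[1:-1, 2:] + p[2:, 2:])
              - (p[:-2, :-2] + p[1:-1, :-2] + p[2:, :-2])) / 3.0
        gy = ((p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:])
              - (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:])) / 3.0
        return np.sqrt(gx ** 2 + gy ** 2)
    g1, g2 = grad_mag(ref), grad_mag(test)
    gms = (2.0 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)
    return gms.std()
```

Note the directions of the scores: PSNR and SSIM are higher-is-better, while GMSD is a deviation measure, so lower values indicate a result closer to the reference.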
As shown in Table 3, the method proposed in this paper is superior to the other dehazing methods in recovering image structure. This is because the multiexposure fusion dehazing method fuses images with different exposure levels and better preserves the structural features of the image.

Table 4 shows the calculated FSIM values. It can be seen from the table that the dehazed image produced by this method has a high similarity with the original haze-free image, and the FSIM score is greater than 0.90. This is because we use gamma correction to acquire images with different exposure levels and perform multiscale fusion using the classical Laplace pyramid method. The method proposed in this article attempts to obtain the best exposure for each area, so the FSIM score of the image is high.

5. Conclusion

dehazing images, so as to retain more image details. Through comparative experiments with other mainstream dehazing methods, the results show that the proposed method obtains a good dehazing effect on both light-fog and dense-fog images and achieves good results on various evaluation performance indicators. In future work, it is necessary to further optimize the complexity of the algorithm and improve its practicability. In addition, it is also possible to start from fog and haze images in various scenarios and perform targeted defogging processing to obtain better effects.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.
Hongzhe Liu performed funding acquisition; Shaojin Ma and Weiguo Pan performed methodology; Shaojin Ma and Cheng Xu provided software; Xuewei Li performed supervision; Shaojin Ma and Huaiguang Guan performed validation and visualization; Shaojin Ma wrote the original draft; Shaojin Ma and Weiguo Pan wrote, reviewed, and edited the manuscript.

Acknowledgments

This work was supported by the Beijing Natural Science Foundation (4232026), the National Natural Science Foundation of China (grant nos. 62272049, 62171042, 61871039, 62102033, and 62006020), the Key Project of the Science and Technology Plan of the Beijing Education Commission (KZ202211417048), the Project of Construction and Support for High-Level Innovative Teams of Beijing Municipal Institutions (no. BPHR20220121), the Collaborative Innovation Center of Chaoyang (no. CYXC2203), and the Scientific Research Projects of Beijing Union University (grant nos. ZK10202202, BPHR2020DZ02, ZK40202101, and ZK120202104).

References

[1] S. S. Sengar and S. Mukhopadhyay, "Moving object detection based on frame difference and W4," Signal, Image and Video Processing, vol. 11, no. 7, pp. 1357–1364, 2017.
[2] S. S. Sengar and S. Mukhopadhyay, "Moving object area detection using normalized self adaptive optical flow," Optik, vol. 127, no. 16, pp. 6258–6267, 2016.
[3] S. S. Sengar and S. Mukhopadhyay, "Moving object detection using statistical background subtraction in wavelet compressed domain," Multimedia Tools and Applications, vol. 79, no. 9-10, pp. 5919–5940, 2020.
[4] S. S. Sengar and S. Mukhopadhyay, "Foreground detection via background subtraction and improved three-frame differencing," Arabian Journal for Science and Engineering, vol. 42, no. 8, pp. 3621–3633, 2017.
[5] S. S. Sengar and S. Mukhopadhyay, "Motion segmentation-based surveillance video compression using adaptive particle swarm optimization," Neural Computing & Applications, vol. 32, no. 15, pp. 11443–11457, 2020.
[6] S. Singh Sengar, "Motion segmentation based on structure-texture decomposition and improved three frame differencing," in Proceedings of the Artificial Intelligence Applications and Innovations, pp. 609–622, Hersonissos, Greece, May 2019.
[7] D. Zhang, "Wavelet transform," in Fundamentals of Image Data Mining, pp. 35–44, Springer, Cham, Switzerland, 2019.
[8] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, 2011.
[9] B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, "DehazeNet: an end-to-end system for single image haze removal," IEEE Transactions on Image Processing, vol. 25, no. 11, pp. 5187–5198, 2016.
[10] Y. Cao, J. Xu, S. Lin, F. Wei, and H. Hu, "GCNet: non-local networks meet squeeze-excitation networks and beyond," in Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, South Korea, October 2019.
[11] D. Li, N. Ma, and Y. Gao, "Future vehicles: learnable wheeled robots," Science China Information Sciences, vol. 63, no. 9, p. 8, Article ID 193201, 2020.
[12] C. O. Ancuti, C. Ancuti, C. De Vleeschouwer, and M. Sbert, "Color channel transfer for image dehazing," IEEE Signal Processing Letters, vol. 26, no. 9, pp. 1413–1417, 2019.
[13] S. K. Nayar and S. G. Narasimhan, "Vision in bad weather," in Proceedings of the Seventh IEEE International Conference on Computer Vision, IEEE, Kerkyra, Greece, September 1999.
[14] D. Berman, T. Treibitz, and S. Avidan, "Non-local image dehazing," in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Las Vegas, NV, USA, June 2016.
[15] Q. Zhu, J. Mai, and L. Shao, "A fast single image haze removal algorithm using color attenuation prior," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3522–3533, 2015.
[16] K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397–1409, 2013.
[17] F. Raanan, "Dehazing using color-lines," ACM Transactions on Graphics, vol. 34, no. 1, 2014.
[18] Y. Dong and J. Sun, "Proximal Dehaze-Net: a prior learning-based deep network for single image dehazing," in Proceedings of the European Conference on Computer Vision, Springer, Cham, Switzerland, September 2018.
[19] W. Ren, S. Liu, H. Zhang, J. Pan, and X. Cao, "Single image dehazing via multi-scale convolutional neural networks," in Proceedings of the European Conference on Computer Vision, pp. 154–169, Springer, Cham, Switzerland, September 2016.
[20] J. Zhang and D. Tao, "FAMED-Net: a fast and accurate multi-scale end-to-end dehazing network," IEEE Transactions on Image Processing, vol. 29, pp. 72–84, 2020.
[21] X. Qin, Z. Wang, Y. Bai, X. Xie, and H. Jia, "FFA-Net: feature fusion attention network for single image dehazing," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 7, pp. 11908–11915, 2020.
[22] M. Hong, Y. Xie, C. Li, and Y. Qu, "Distilling image dehazing with heterogeneous task imitation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3462–3471, Seattle, WA, USA, June 2020.
[23] P. Luo, G. Xiao, X. Gao, and S. Wu, "LKD-Net: large kernel convolution network for single image dehazing," 2022, https://arxiv.org/abs/2209.01788.
[24] R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk, "Frequency-tuned salient region detection," in Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Miami, FL, USA, June 2009.
[25] J. Vazquez-Corral and M. Bertalmío, "Simultaneous blind gamma estimation," IEEE Signal Processing Letters, vol. 22, no. 9, pp. 1316–1320, 2015.
[26] A. Galdran, "Image dehazing by artificial multiple-exposure image fusion," Signal Processing, vol. 149, pp. 135–147, 2018.
[27] Y. Liu, L. Wang, J. Cheng, C. Li, and X. Chen, "Multi-focus image fusion: a survey of the state of the art," Information Fusion, vol. 64, pp. 71–91, 2020.
[28] Z. Zhu, X. He, G. Qi, Y. Li, B. Cong, and Y. Liu, "Brain tumor segmentation based on the fusion of deep semantics and edge information in multimodal MRI," Information Fusion, vol. 91, pp. 376–387, 2023.
[29] M. Ravikumar, P. G. Rachana, B. J. Shivaprasad, and D. S. Guru, "Enhancement of mammogram images using CLAHE and bilateral filter approaches," in Cybernetics, Cognition and Machine Learning Applications, pp. 261–271, Springer, Singapore, 2021.