
Hindawi

Advances in Multimedia
Volume 2023, Article ID 8891239, 10 pages
https://doi.org/10.1155/2023/8891239

Research Article
Image Dehazing Based on Improved Color Channel Transfer and
Multiexposure Fusion

Shaojin Ma,1 Weiguo Pan,1 Hongzhe Liu,1 Songyin Dai,1 Bingxin Xu,1 Cheng Xu,1 Xuewei Li,2 and Huaiguang Guan3

1 Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing 100101, China
2 Beijing Jiaotong University, School of Economics and Management, Beijing 100044, China
3 CATARC (Tianjin) Automotive Engineering Research Institute Co., Ltd., Tianjin 300300, China

Correspondence should be addressed to Weiguo Pan; ldtweiguo@buu.edu.cn

Received 30 December 2022; Revised 17 March 2023; Accepted 2 May 2023; Published 15 May 2023

Academic Editor: Sandeep Singh Sengar

Copyright © 2023 Shaojin Ma et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Image dehazing is one of the problems that urgently need to be solved in the field of computer vision. In recent years, more and more algorithms have been applied to image dehazing and have achieved good results. However, dehazed images still suffer from color distortion and disordered contrast and saturation, among other challenges. To address these problems, this paper proposes an effective image dehazing method based on improved color channel transfer and multiexposure image fusion. First, the image is preprocessed using a color channel transfer method based on k-means. Second, gamma correction is introduced on the basis of guided filtering to obtain a series of multiexposure images, and the obtained multiexposure images are fused into a dehazed image through a Laplacian pyramid fusion scheme based on local similarity with adaptive weights. Finally, contrast and saturation corrections are performed on the dehazed image. Experiments on synthetic hazy images and natural hazy images verify that the proposed method is superior to existing dehazing algorithms in both subjective and objective terms.

1. Introduction

With the popularization and rapid development of computer technology, computer vision is widely used in various fields such as object detection [1–4], image segmentation [5, 6], and face recognition. Affected by smoggy weather, the images acquired by camera equipment usually show color shift, low visibility, and decreased contrast and saturation, which seriously affect subsequent computer vision tasks. Therefore, dehazing images is an important research direction in computer vision. In recent years, many researchers have studied image dehazing algorithms from multiple directions. These algorithms are mainly divided into three categories: dehazing based on image enhancement, on physical models, and on deep learning.

The dehazing algorithm based on image enhancement improves the quality of the image by enhancing the contrast and strengthening the edge and detail information of the image, but image information can be lost through excessive enhancement. This kind of method is mainly divided into two categories: global enhancement and local enhancement. Among the global enhancement methods, there are algorithms based on histogram equalization, homomorphic filtering, and Retinex theory. Among the local enhancement methods, the wavelet transform algorithm decomposes the image and processes it through local features, so that the image is enhanced at multiple scales and the useful information is amplified [7].

The dehazing algorithm based on the physical model often relies on the atmospheric scattering model [8]. It mainly focuses on solving the parameters in the model, and through the mapping relationship, the inverse operation is performed according to the formation process of the foggy image to restore the clear image. The atmospheric scattering model is the cornerstone of the
subsequent physical model-based dehazing algorithms, and many researchers have carried out extensive and in-depth research on the basis of the atmospheric scattering model to continuously improve the level of image dehazing.

In recent years, dehazing algorithms based on deep learning have shown better performance. At present, two types of deep learning-based dehazing algorithms are widely studied: one uses deep learning methods to estimate some parameters of atmospheric physical models to restore images [9], and the other uses a neural network to directly restore the input foggy image and obtain the dehazed image, which is often referred to as end-to-end dehazing [10, 11].

Different from the existing dehazing methods based on atmospheric scattering models, the proposed method adopts Laplace pyramid decomposition to retain the structural information of the image. In order to obtain a fog-free image, the area with the best visual quality is collected from each image for image fusion, and the color channel transfer algorithm is used to effectively retain the color information in the image.

The main contributions of this paper are as follows:

(a) In order to prevent the dull colors and distortion that may occur in the image after dehazing, we propose a color transfer module to compensate for the color loss of the dehazed image. The color transfer module converts the image data from RGB space to lαβ space and then uses color channel transfer between images to restore the color information of the dehazed image.

(b) An image dehazing algorithm based on a Laplace pyramid fusion scheme via local similarity with adaptive weights is proposed, which first artificially underexposes hazy images through a series of gamma correction operations. With a multiscale Laplace fusion scheme, the multiple exposure images are combined into a fog-free result, extracting the best-quality areas from each image and merging them into a single fog-free output.

(c) In order to prove the dehazing performance of the proposed method, extensive experiments were carried out on datasets of indoor/outdoor synthetic foggy images and natural foggy images, and better results were achieved in both subjective and objective aspects.

2. Related Work

Foggy images lead to blurry image details, low contrast, and loss of important image information, and preprocessing of foggy images can often improve dehazing performance. The literature [12] proposes color channel shifting, which utilizes a reference image derived from a source image to transfer information from an important color channel to an attenuated color channel to compensate for the loss of information. However, this method needs to be combined with other dehazing methods to improve their dehazing performance in special color scenes.

The establishment of the atmospheric scattering model [13] explains the formation process of images in foggy weather and lays a foundation for subsequent defogging work [14, 15]. He et al. [8] proposed the dark channel prior (DCP) based on the atmospheric scattering model and prior knowledge. In general, DCP has a good effect on dehazing natural scene images, but the theory is ineffective in bright areas, such as the sky, water, and the surfaces of white objects, resulting in inaccurately calculated transmission rates, excessive enhancement of the recovered image, and a darker effect. After that, He et al. [16] proposed a guided filtering algorithm, which relies on simple box blurring and is not affected by the degree and radius of blurring, so its real-time performance is greatly improved; it is a conformal filtering algorithm. In the fields of image deraining and denoising, guided filtering can also achieve good results. Raanan [17] proposed a dehazing method based on color lines from the perspective of image color lines, assuming that the transmission in a local area is consistent and that the color lines in the nonfoggy area pass through the origin and move along the ambient light; this characteristic is used to estimate local transmission and global ambient light.

Compared with traditional methods, deep learning methods mainly learn the transmission rate from labeled training datasets or directly learn the mapping from foggy images to the corresponding fog-free images. For example, Proximal Dehaze-Net [18] first designed two prior-based iterative optimization algorithms using proximal operators and then expanded the iterative algorithm into a dehazing network by using convolutional neural networks to learn the proximal operators. DehazeNet [9] uses a deep architecture based on convolutional neural networks to estimate the transmission rate in the atmospheric scattering model. Ren et al. [19] proposed a multiscale deep convolutional neural network for recovering foggy images. This process often wastes a lot of computation time; if the depth estimation of the scene is not accurate, the dehazed image is prone to artifacts in edge areas or shows color distortion, affecting the visual effect. Zhang and Tao [20] proposed FAMED-Net, a multiscale convolutional dehazing network with a global edge, which can quickly and accurately compute haze-free images end-to-end at multiple scales. FFA-Net [21] is an end-to-end feature fusion attention network in which the attention mechanisms focus on the more informative features. Hong et al. [22] proposed a knowledge distillation network (KDDN) that uses the teacher network for an image reconstruction task and lets the student network imitate this process. LKD-NE [23] improves performance by increasing the size of the convolution kernel to use a larger receptive field, thereby enhancing the network's dehazing effect. Deep learning-based dehazing methods have shown excellent performance and achieved great success. However, training deep learning models to good performance is cumbersome: not only is a large labeled dataset required, but the training process is also time-consuming. Moreover, debugging deep learning models is relatively difficult, which increases the workload.

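To make the prior-based baseline concrete, the statistic behind DCP [8] can be sketched in a few lines of NumPy: in haze-free outdoor patches, at least one color channel is close to zero somewhere in every local window. The patch size and test image below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def dark_channel(image, patch=15):
    """Per-pixel minimum over the RGB channels, followed by a minimum
    filter over a local patch -- the statistic behind the DCP."""
    h, w, _ = image.shape
    min_rgb = image.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark

# For a haze-free scene, at least one channel is dark in every window,
# so the dark channel stays near zero; haze lifts it toward the airlight.
img = np.zeros((8, 8, 3))
img[..., 0] = 0.9  # saturated red: green/blue remain dark
print(dark_channel(img).max())  # -> 0.0
```

A DCP pipeline would then estimate the transmission from this map and recover the scene via the atmospheric scattering model; those steps are omitted here.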
3. Proposed Method

In this paper, an image dehazing algorithm based on color channel transfer and multiexposure fusion is proposed, as shown in Figure 1, which effectively restores the saturation and contrast information of the image while retaining its color characteristics. The algorithm first uses k-means to cluster and color-transfer the pixel intensities of the image in the lαβ color space. Second, guided filtering is applied to the multiexposure images obtained by gamma correction, and the dehazed image is obtained by Laplace pyramid fusion. Finally, contrast and saturation are corrected by an improved adaptive histogram equalization and a spatial linear saturation adjustment, respectively.

3.1. Improved Color Channel Transfer Method. In the process of image dehazing, in order to avoid the interference of a certain spectrum, the proposed method establishes a reference image by transferring the color channels of the input image, with the following formula:

R(x) = G(x) + D(x) + S(x) × I(x),  (1)

where G(x) is the uniform grayscale image (50%) and D(x) is the detail layer of the input image, used to calculate the significance mapping of the input image. We employ an effective technique proposed by Aganta [24] to introduce a bias against the dominant color between the feature map and the initial image, helping to restore the initial colors. The detail layer D(x) is obtained by subtracting the Gaussian-blurred image from the input image, as shown in the following equations:

D(x) = I(x) − Iwhc(x),  (2)

S(x) = ‖Iμ − Iwhc(x)‖,  (3)

where Iwhc(x) is the original image processed by a 5 × 5 Gaussian kernel, Iμ is the mean vector of the initial image, and ‖·‖ is the L2 norm.

Color channel shifting is used for dehazing preprocessing, with the most pronounced effect in extreme conditions such as multiple light sources, underwater images, and night images. In order to improve the effect of color channel transfer preprocessing on daytime images, this paper introduces the k-means algorithm to adjust the standard deviations of the source image and the reference image in the color channel transfer, clusters the pixel intensities of each image in the color space, uses the Euclidean distance to determine the centroids of the two most similar images, and computes the statistics only within each region.

3.2. Gamma Correction. In computer vision, pixel intensity values are proportional to the exposure level, so gamma correction can adjust the image exposure level by using different coefficients γ [25]. The gamma correction is shown as follows:

Iγ(x) = ε × I(x)^γ,  (4)

where ε and γ are the coefficients of gamma correction. When the coefficient γ < 1, as shown in Figure 2, overexposure makes the hue of high-brightness objects in the image too bright, and the smoothness of object edges tends to degrade. When γ > 1, as shown in Figure 3, the contrast of the underexposed image is enhanced, and more detail can be obtained in the image. Therefore, we choose γ values of 2, 3, 4, and 5 to artificially generate underexposed images.

3.3. Laplace Pyramid Decomposition and Local Energy Features. The Laplace pyramid is a simple and effective multiscale, multiresolution image processing method, which is based on the Gaussian decomposition of the image and contains the difference information between adjacent layers of two Gaussian pyramids. A dehazing algorithm using Laplace pyramid fusion can better improve the dehazing effect [26] and retain higher spatial resolution and image detail, as shown in the following equation:

J(x) = Σk=1..K Wk(x) × Ek(x),  (5)

where K is the number of available images Ek(x) with different exposures and J(x) is a well-exposed image produced by combining the different correctly exposed regions of the Ek(x). The weights Wk(x) are normalized.

In this paper, a fusion method based on local energy features is used to assign the weight values in the Laplace pyramid. The local energy feature is defined as follows:

S(i, j) = Σm Σn C(i + m, j + n)²,  (6)

For position (i, j) of the image, the local energy of the point is the sum of squares of the pixel values in the window centered on that point. Local energy features can effectively represent areas of an image with rich detail; in general, areas that contain fine detail carry a lot of energy. In the process of regional fusion, if the energy difference between two images is too large, the matching degree is small, so we only choose the larger part. The specific steps are as follows:

(a) Choose an appropriate threshold.
(b) Calculate the local energy map of each image after Laplace pyramid decomposition.
(c) Calculate the local covariance of the fused images to represent the similarity.
(d) If the matching degree at a point is less than the threshold, the image with the higher energy at that point is selected and the rest are discarded.
(e) If the matching degree at a point is greater than the threshold, the weights are assigned according to the energy: the weight of the lower-energy image is Wmin = 0.5 × (1 − (1 − cor)/(1 − e)) and the weight of the higher-energy image is Wmax = 1 − Wmin.
[Figure 1 shows the proposed framework: the original image is preprocessed by color channel transfer guided by the k-means algorithm; gamma-corrected exposures (γ = 2, 3, 4, 5) are decomposed with a Laplace pyramid; local energy feature maps drive a fusion of local similarity based on adaptive weights; and multiexposure blending is followed by contrast enhancement and saturation correction to produce the dehazed image.]

Figure 1: The proposed framework.

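As a reading aid for Figure 1, the overall flow can be outlined in code. Every stage below is a deliberately simplified stand-in for the corresponding subsection (channel-mean balancing for 3.1, gradient-magnitude weights for 3.3/3.4, a global contrast stretch for 3.5/3.6), so it mirrors only the structure of the pipeline, not its actual operators:

```python
import numpy as np

def dehaze_sketch(hazy, gammas=(2, 3, 4, 5)):
    """Structural stand-in for the Figure 1 pipeline (not the paper's code)."""
    # 3.1 color channel transfer (stand-in: balance per-channel means)
    balanced = hazy - hazy.mean(axis=(0, 1)) + hazy.mean()
    # 3.2 artificial underexposure stack, eq. (4) with eps = 1
    stack = [np.clip(balanced, 0, 1) ** g for g in gammas]
    # 3.3/3.4 weighted fusion (stand-in: gradient-magnitude weights)
    weights = []
    for e in stack:
        gy, gx = np.gradient(e.mean(axis=2))
        weights.append(np.hypot(gx, gy) + 1e-6)
    w = np.stack(weights)
    w /= w.sum(axis=0)
    fused = sum(wi[..., None] * e for wi, e in zip(w, stack))
    # 3.5/3.6 contrast stretch (stand-in for CLAHE + saturation step)
    lo, hi = fused.min(), fused.max()
    return (fused - lo) / (hi - lo + 1e-6)

out = dehaze_sketch(np.random.default_rng(0).random((16, 16, 3)))
print(out.shape)  # (16, 16, 3), values normalized to [0, 1]
```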
Figure 2: Image overexposure processing. The γ values from left to right are 0.2, 0.3, and 0.4.

Figure 3: Image underexposure processing. The γ values from left to right are 2, 3, and 4.

3.4. Local Similarity Fusion Method Based on Adaptive Weights. The fusion method based on local similarity is a multiexposure image fusion method. According to this method, if two pixels have similar local neighborhoods in different images, they can be regarded as the same pixel and fused into a high dynamic range (HDR) image [27, 28]. In this paper, a local similarity fusion method based on adaptive weights is adopted. By adding adaptive weights to the local similarity fusion method, the weights can be adjusted adaptively according to the gradient information of different pixels, so as to better balance the contributions of the different images and make the HDR image more balanced and natural. The details of the algorithm are as follows:

(1) For each pixel, the Manhattan distance metric is selected to calculate its local neighborhood in multiple images, and the gradient information of the pixel values is used to calculate the weight, so as to better preserve image details.
(2) The mean square error method is used to calculate the similarity of each pixel's local neighborhood in the different images.
(3) The pixel with the highest similarity among the different images is selected for fusion, and the weighted average method is used to obtain the final pixel value.

The adaptive weighting method takes the gradient information of each pixel into account when calculating the weight. Supposing that the gradient value of the ith image at position j is Gi,j, the weight calculation formula of this method is as follows:

wi,j = (Gi,j + ϵ)^α / Σk=1..N (Gk,j + ϵ)^α,  (7)

where i indexes the image, j indexes the pixel position, and there are N images in total. ϵ is a small positive number used to avoid a zero divisor, and α is a hyperparameter that controls the degree of nonlinearity of the weight. The greater the final weight wi,j, the greater the contribution of the ith image at position j.

On the basis of local similarity fusion, the adaptive weight method can be introduced to further improve the fusion effect. In this method, gradient information is used to calculate the weights in order to better preserve image details. In addition, the fusion method based on local similarity can be combined with other extension methods, such as multiscale fusion and local tone mapping, to further improve the fusion effect.

3.5. Multiscale CLAHE Method. In order to further retain more detailed information of the dehazed image, this paper uses the CLAHE algorithm to process the dehazed image. In CLAHE, multiscale processing can further improve the enhancement effect. By analyzing the image at different scales, extracting feature information at different levels can effectively improve the contrast and details of the image after defogging, while avoiding excessive noise enhancement [29].

The details of CLAHE are as follows:

(1) The original image is divided into multiple scales, which can be layered using methods such as the Gaussian or Laplacian pyramid. At the bottom of the pyramid, the size of the image is the largest and more detail can be obtained, while as the number of layers increases, the size of the image gradually decreases and the details gradually become blurred:

Fi,j(x, y) = F(x, y) × hi,j(x, y),  (8)
where F is the original image, Fi,j is the ith subimage at the jth scale, and hi,j is the Gaussian kernel function for scale i and subimage j.

(2) CLAHE processing is carried out for each scale image. First, the image is divided into small blocks; then, the pixels within each block are histogram-equalized; and finally, the values of the pixels within each block are interpolated:

H′i,j(k) = { 0, if Hi,j(k) ≤ Ci,j;  Ci,j + ((Hi,j(k) − Ci,j) / (1 − Ci,j)) × (K − Ci,j), if Hi,j(k) > Ci,j },  (9)

where Ci,j is the cumulative distribution function of the pixel values in the subimage and K is the maximum value of the histogram.

(3) In the locally enhanced subimages, the boundaries of each subimage are smoothed using an interpolation method:

F′i,j(x, y) = (1 / wi,j(x, y)) × Σ(u,v)∈Si,j(x,y) wi,j(u, v) × Fi,j(u, v),  (10)

where F′i,j is the smoothed subimage, wi,j is the interpolation weight, and Si,j is the interpolation window.

(4) The final enhanced image is obtained by combining the enhanced results of all scales:

F′(x, y) = Σi=1..n Σj=1..m F′i,j(x, y),  (11)

where F′ is the final enhanced image, n is the number of scales, and m is the number of subimages at each scale.

3.6. Spatial Linear Saturation Adjustment. The multiscale CLAHE method can take the detailed information of images at different scales into account, making the contrast enhancement more balanced and natural. At the same time, multiscale processing can also avoid problems such as excessive enhancement or distortion that may occur during CLAHE processing. However, multiscale processing increases computational complexity and storage space, which is what we address next. According to the CAP dehazing algorithm [15], the difference between the brightness and saturation of the image changes with the fog concentration. Based on this theory, Zhu [30] proposed a method to enhance image dehazing performance and robustness and to balance color saturation during the dehazing process, as shown in the following equation:

τs = (VF − (ωF/ωI)(VI − SI)) / SF,  (12)

where VF = Σi=1..m,j=1..n vFij and VI = Σi=1..m,j=1..n vIij, in which vFij and vIij are the brightness of pixel (i, j) in the fused image F and in the foggy image I, respectively; SF = Σi=1..m,j=1..n sFij and SI = Σi=1..m,j=1..n sIij, where sFij and sIij are the saturation of pixel (i, j) in the fused image F and in the foggy image I, respectively; ωF is the difference between the brightness and saturation of the fused image F; and ωI is the difference between the brightness and saturation of the foggy image I.

4. Experimental Results and Analysis

4.1. Parameter Settings and Datasets. The experimental computer is configured with an Intel(R) Core(TM) i7-10875U CPU @ 2.30 GHz and 16.00 GB RAM. In the improved color channel transfer algorithm, equation (5) uses a Gaussian kernel, and a k value of 5 is taken to initialize the cluster centers to ensure that the cluster centers lie in the data space. During the gamma correction phase, the artificial exposure values are fixed at γ ∈ {2, 3, 4, 5}.

It is difficult to collect real fog-free images together with contrasting foggy images in dehazing research, so artificial synthesis of foggy images is usually required. In this paper, the D-hazy synthetic fog dataset [31] and foggy images collected in real scenes are mainly used to test and compare the performance of the algorithm on outdoor images. D-hazy contains 35 pairs of foggy images and corresponding fog-free outdoor images (ground truth). The variation range of the atmospheric light is 0.8–1.0, and the variation range of the scattering parameter is 0.04–0.2. To compare with previous state-of-the-art methods, we used the PSNR, SSIM, GMSD, and FSIM indicators for comparison tests on a dataset containing 500 indoor images and 500 outdoor images.

4.2. Subjective Evaluation. In this part, the CAP [15], AMEF [26], CODHWT [32], FADE [33], MAME [34], and DePAMEF [35] algorithms are compared with the proposed algorithm.

Comparing rows 2 and 7 of Figure 4, it can be seen that the CAP method shows good dehazing performance in mist areas, but in rows 1, 10, and 11 of Figure 4, as the fog concentration increases, the dehazing performance of the CAP method gradually decreases, the texture details of white objects (row 9) become blurred, and some details in the image are difficult to read, such as the texture of branches (rows 3 and 4). From the second and ninth rows of Figure 4, it can be seen that the FADE method is accompanied by color distortion and loss of detail while dehazing,

Figure 4: Comparison of real-scene image dehazing effects. From left to right: the original image; the processed results of AMEF, CAP, CODHWT, FADE, MAME, and DePAMEF; and our method.

which reduces the visual effect of the image. The AMEF and CODHWT methods can effectively reconstruct sharp images from foggy images. From the sky area in rows 6 and 8 of Figure 4, the background color of the image dehazed by the AMEF method is closer to the original image than that of the CODHWT method. Both the MAME and DePAMEF methods achieve better performance in detail visibility and preservation of fog-free areas, but the image after DePAMEF dehazing retains residual haze, resulting in increased color artifacts in the area where the house and sky meet.

The algorithm proposed in this paper compensates for the loss between channels through the color channel transfer method before dehazing and effectively reduces the interference between channels; the essence of the image is clearly restored after dehazing, the buildings and vehicles in the distance are clearly visible, and the details are obvious. Spatial linear saturation adjustment and contrast correction are applied to the multiexposure image fusion, and the dehazed image is more in line with human visual observation.

4.3. Objective Evaluation. In order to analyze the subtle differences in the images, this paper uses PSNR [36], SSIM [37], FSIM [38], and GMSD [39] for objective evaluation.

Zhang et al. [38] proposed FSIM, arguing that the human visual system mainly understands images based on low-level features, and combined phase consistency, color features, gradient features, and chromaticity features to measure the local structural information of images. GMSD was proposed by Xue [39] in 2014, who showed that gradient maps are sensitive to image distortion and that distorted images with different structures have different degrees of quality degradation, so as to propose a full-reference image evaluation

Table 1: PSNR index comparison of dehazing algorithms.


Methods Bus Pavilion Viaduct Pedestrian Tiananmen Lake
AMEF 22.337 16.732 17.676 16.113 17.862 17.487
CAP 21.032 19.214 21.638 15.999 25.671 26.785
CODHWT 16.703 16.615 18.070 11.869 25.316 24.430
FADE 16.576 16.264 14.924 12.985 18.571 19.265
MAME 16.367 15.873 17.702 11.499 17.221 117.968
DePAMEF 21.169 16.469 18.531 17.461 20.997 18.580
Ours 22.963 20.199 22.764 17.899 23.271 21.253

Figure 5: The visualization effect of dehazing in the synthetic haze scene. From left to right: the original image; the processed results of AMEF, CAP, CODHWT, FADE, MAME, and DePAMEF; our method; and the real fog-free image.

method, which has the characteristics of high accuracy and a low amount of calculation.

PSNR evaluates image quality by calculating the pixel error between the original image and the dehazed image. The PSNR value is more significant when the error between the dehazed image and the original image is smaller. The calculation of PSNR is shown in the following equation:

PSNR = 10 × log10(MAX² / MSE),  (13)

where MSE is the mean squared error and MAX is the maximum pixel value of the original image.

SSIM is used to measure the similarity between the original image and the dehazed image. SSIM uses the mean value to estimate brightness, the standard deviation to estimate contrast, and the covariance to measure structural similarity, as shown in the following equation. The more significant the SSIM value, the less distorted the image, indicating better results after dehazing:

SSIM = ((2μxμy + C1)(2σxy + C2)) / ((μx² + μy² + C1)(σx² + σy² + C2)),  (14)

where μx and μy represent the means of x and y and σx² and σy² represent the variances of x and y, respectively. σxy is the covariance between x and y, and C1 and C2 are constant coefficients.

FSIM is based on phase consistency and gradient amplitude. The larger the value, the closer the dehazed image is to the original image. GMSD is designed primarily to provide credible evaluation capability with metrics that minimize computational overhead.

We calculated the PSNR of the different methods, shown in Table 1. It can be seen from Figure 5 that both MAME and the proposed method achieve good results in removing dense fog, and compared with MAME, our method effectively removes dense fog while restoring the color information of the sky area. In addition, compared on the other images, the method proposed in this paper achieves better results.

The SSIM values for the images in Figure 5 are shown in Table 2. As can be seen from the table, AMEF, CAP, and the proposed method obtain higher SSIM values. It can be seen from Table 2 that the SSIM value of the proposed method reaches 0.9073, which is the best performance. For the Tiananmen image in Figure 5, the SSIM value of the method in this paper is 0.9192, second only to CAP.
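Equations (13) and (14) can be checked with a short sketch. Note that SSIM is normally averaged over local windows; the global variant below evaluates equation (14) once over the whole image, and the stabilizers C1 and C2 follow the common (0.01 × MAX)² and (0.03 × MAX)² convention, which the paper does not specify:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Eq. (13): PSNR = 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Eq. (14) evaluated over the whole image (an assumption: the
    standard SSIM averages this over sliding local windows)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = np.tile(np.arange(16.0), (16, 1))
print(round(psnr(a, a + 4.0), 2))       # -> 36.09 (MSE = 16)
print(round(ssim_global(a, a), 6))      # -> 1.0 for identical images
```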

Table 2: SSIM index comparison of dehazing algorithms.


Methods Bus Pavilion Viaduct Pedestrian Tiananmen Lake
AMEF 0.837 0.837 0.872 0.868 0.925 0.922
CAP 0.886 0.886 0.936 0.859 0.982 0.942
CODHWT 0.832 0.832 0.909 0.736 0.906 0.940
FADE 0.734 0.734 0.792 0.762 0.788 0.800
MAME 0.885 0.885 0.881 0.865 0.854 0.848
DePAMEF 0.781 0.781 0.887 0.874 0.905 0.906
Ours 0.892 0.914 0.927 0.986 0.937 0.962

Table 3: GMSD index comparison of dehazing algorithms.


Methods Bus Pavilion Viaduct Pedestrian Tiananmen Lake
AMEF 0.085 0.106 0.111 0.091 0.069 0.071
CAP 0.054 0.061 0.057 0.064 0.068 0.033
CODHWT 0.058 0.089 0.088 0.125 0.042 0.049
FADE 0.167 0.180 0.159 0.163 0.180 0.171
MAME 0.132 0.117 0.116 0.117 0.122 0.139
DePAMEF 0.116 0.136 0.115 0.110 0.098 0.097
Ours 0.065 0.053 0.071 0.066 0.049 0.037

Table 4: FSIM index comparison of dehazing algorithms.


Methods Bus Pavilion Viaduct Pedestrian Tiananmen Lake
AMEF 0.921 0.925 0.910 0.917 0.957 0.931
CAP 0.949 0.946 0.928 0.910 0.983 0.979
CODHWT 0.920 0.910 0.924 0.828 0.980 0.961
FADE 0.818 0.818 0.826 0.832 0.820 0.792
MAME 0.882 0.932 0.892 0.901 0.901 0.854
DePAMEF 0.895 0.897 0.899 0.911 0.929 0.900
Ours 0.962 0.955 0.920 0.941 0.986 0.969

As shown in Table 3, the method proposed in this paper is superior to the other dehazing methods at recovering image structure. This is because the multiexposure fusion dehazing method fuses images with different exposure levels and therefore better preserves the structural features of the image.

Table 4 shows the calculated FSIM values. It can be seen from the table that the dehazed images produced by this method have a high similarity to the original haze-free images, with FSIM scores greater than 0.90. This is because we use gamma correction to acquire images with different exposure levels and fuse them at multiple scales with the classical Laplacian pyramid method. The proposed method attempts to obtain the best exposure for each area, so the FSIM score of the image is high.
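The steps described above, generating gamma-corrected under-exposures and fusing them through a Laplacian pyramid, can be sketched generically as follows. This is not the paper's exact scheme: the gamma values, the Mertens-style well-exposedness weight, and the three-level pyramid depth are illustrative assumptions, and the improved fusion rule and the k-means color channel transfer step are omitted.

```python
import numpy as np

def down(img):   # 2x2 box filter + decimation (even dimensions assumed)
    return 0.25 * (img[::2, ::2] + img[1::2, ::2] + img[::2, 1::2] + img[1::2, 1::2])

def up(img):     # nearest-neighbour upsampling back to the finer level
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def gauss_pyr(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        img = down(img)
        pyr.append(img)
    return pyr

def lap_pyr(img, levels):
    pyr = []
    for _ in range(levels - 1):
        nxt = down(img)
        pyr.append(img - up(nxt))   # detail band
        img = nxt
    pyr.append(img)                 # low-frequency residual
    return pyr

def fuse_exposures(images, levels=3):
    """Blend an exposure series with per-level Laplacian-pyramid weighting."""
    # Mertens-style well-exposedness weight: favour mid-range pixels.
    ws = [np.exp(-((im - 0.5) ** 2) / (2 * 0.2 ** 2)) for im in images]
    total = np.sum(ws, axis=0) + 1e-12
    ws = [w / total for w in ws]                       # normalise per pixel
    wpyrs = [gauss_pyr(w, levels) for w in ws]
    lpyrs = [lap_pyr(im, levels) for im in images]
    fused = [sum(w[l] * p[l] for w, p in zip(wpyrs, lpyrs)) for l in range(levels)]
    out = fused[-1]                                    # collapse the pyramid
    for l in range(levels - 2, -1, -1):
        out = up(out) + fused[l]
    return out

# A hazy grayscale image on [0, 1] and an artificial under-exposure series I**gamma.
rng = np.random.default_rng(0)
hazy = rng.uniform(0.3, 1.0, size=(64, 64))            # haze lifts intensities
exposures = [hazy ** g for g in (1.0, 1.8, 2.6, 3.4)]  # gamma > 1 darkens
dehazed = fuse_exposures(exposures)
```

With two identical inputs the weights are equal at every level and the pyramid collapse is exact, so the pipeline reduces to the identity, which makes a convenient sanity check.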
5. Conclusion

In this paper, an artificial multiexposure image fusion algorithm for single image dehazing is proposed. First, a color channel transfer method based on k-means is used to compensate for the channel with serious information loss. Then, artificial gamma correction produces a series of underexposed images, which are fused into a dehazed image with the improved Laplacian pyramid fusion scheme. Finally, in order to obtain a better visual effect after dehazing, contrast and saturation correction are applied to enhance the dehazed images, so as to retain more image details. Through comparative experiments with other mainstream dehazing methods, the results show that the proposed method can obtain a good dehazing effect on both light-fog and dense-fog images, and that it achieves good results on the various evaluation performance indicators. In future work, it is necessary to further optimize the complexity of the algorithm and improve its practicability. In addition, it is also possible to start from fog and haze images of various scenarios and perform targeted defogging to obtain better effects.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

Shaojin Ma and Weiguo Pan conceptualized the study; Huaiguang Guan performed data curation; Songyin Dai and Bingxin Xu performed formal analysis; Weiguo Pan and Hongzhe Liu acquired funding; Shaojin Ma and Weiguo Pan developed the methodology; Shaojin Ma and Cheng Xu provided software; Xuewei Li supervised the work; Shaojin Ma and Huaiguang Guan performed validation and visualization; Shaojin Ma wrote the original draft; and Shaojin Ma and Weiguo Pan reviewed and edited the manuscript.

Acknowledgments

This work was supported by the Beijing Natural Science Foundation (4232026); the National Natural Science Foundation of China (grant nos. 62272049, 62171042, 61871039, 62102033, and 62006020); the Key Project of the Science and Technology Plan of the Beijing Education Commission (KZ202211417048); the Project of Construction and Support for High-Level Innovative Teams of Beijing Municipal Institutions (no. BPHR20220121); the Collaborative Innovation Center of Chaoyang (no. CYXC2203); and the Scientific Research Projects of Beijing Union University (grant nos. ZK10202202, BPHR2020DZ02, ZK40202101, and ZK120202104).
