ICMLA Dehazing CameraReady
Dehazing
Laura A. Martinho, João M. B. Cavalcanti, José L. S. Pio
Institute of Computing (ICOMP), Univ. Federal do Amazonas (UFAM), Manaus, Brazil
lam2@icomp.ufam.edu.br, john@icomp.ufam.edu.br, josepio@icomp.ufam.edu.br
Felipe G. Oliveira
Institute of Exact Sciences and Technology (ICET)
Univ. Federal do Amazonas (UFAM)
Itacoatiara, Brazil
felipeoliveira@ufam.edu.br
Abstract—Foggy or hazy images result from light scattering and absorption by atmospheric particles. Intensity transformation techniques offer solutions to this problem, but parameter selection significantly impacts the quality of the final image. In this paper we propose the APL method, a parameter learning approach for enhancing hazy images using Convolutional Neural Networks (CNNs), resulting in a new ISP learning-based pipeline. A set of intensity transformation techniques is applied, combined with image quality metrics, to define parameters for hazy image enhancement. A CNN regression model is employed to learn the problem and estimate parameters for the transformation stage. The best dehazing parameters are determined and used to enhance the quality of degraded images. Experiments are conducted on three datasets of hazy images, including two datasets available in the literature and one proposed dataset of real-world foggy images. Results are evaluated against other dehazing methods using full-reference (PSNR and SSIM) and non-reference (NIQE and BRISQUE) metrics, demonstrating the high accuracy in image dehazing achieved by our method. The proposed source code and dataset are available here.

Index Terms—Image dehazing, Haze removal, Adaptive parameter learning, Deep Learning.

This work was carried out with the support of the Coordination for the Improvement of Higher Education Personnel - Brazil (CAPES-PROEX) - Financing Code 001. This work was partially funded by the Amazonas State Research Support Foundation - FAPEAM - through the PDPG/CAPES project, and Cal-Comp Electronics - through the PROVIR R&D project.

I. INTRODUCTION

The continuous advances in imaging technology have enabled several applications in different scenarios, in particular image analysis and manipulation. Computer vision has addressed different problems related to the analysis and interpretation of images, such as classification, detection and tracking of objects [1]. However, analyzing and interpreting images are extremely challenging tasks, especially in outdoor environments [2]. Images captured outdoors can easily be affected by natural environmental effects, such as variation in lighting, shadow and reflection. Images with fog represent an even more complex challenge, as they typically have low contrast, visibility and saturation. Therefore, hazy images can pose a great challenge for several applications related to image analysis and interpretation [3].

The purpose of image dehazing is to enhance the visual quality of degraded images by reducing the impact of weather conditions. This is achieved by reducing the obscuring effects of these elements, thereby improving clarity, contrast and the overall visibility of objects and details within the image. Image dehazing is particularly crucial in several applications, including remote sensing, surveillance and semantic segmentation, where clear and accurate visual information is essential for more precise analysis and interpretation [4].

Well-known image processing techniques such as color adjustment, gamma correction and exposure enhancement can be combined, as an Image Signal Processing (ISP) pipeline, to dehaze images with satisfactory results. However, due to the number of parameters these techniques require, the results are usually focused on a particular set of image features and they require manual parameter adjustments [5], [1].

Considering that a Convolutional Neural Network (CNN) can automatically and adaptively learn features from a dataset, it can be used to learn the needed parameters for the ISP image processing techniques mentioned earlier. The idea is that, by learning the parameters, the resulting model can produce an image dehazing technique with more precise visual results [6].

In this paper we propose an image dehazing approach, called APL, that removes fog effects from images of outdoor environments using a CNN. To achieve this, a CNN regression model is employed to enhance image quality by estimating the optimal parameters for the ISP pipeline, aiming at the enhancement of hazy images. Next, a series of intensity transformation methods is applied in order to reduce the degradation in hazy images. In the experiments we used two well-established hazy image datasets. Additionally, a comparative analysis was performed, comparing our results with other well-established state-of-the-art algorithms in the field. The results validate the effectiveness of the APL approach for
hazy image restoration, highlighting a significant improvement in the accuracy and clarity of hazy images. We also explored potential applications of the method and its adaptability to real-world scenarios, such as security and surveillance monitoring or aerial photography and drones.

Our main contributions are summarized as follows:

• We propose a CNN-based approach to learn and estimate the best parameters for a novel adaptive ISP pipeline in the process of enhancing the quality of hazy images. The proposed regression process acquires knowledge from different fog conditions (such as saturation, low contrast, lighting, scattering and blur in haze), enabling the estimation of optimal parameters for different hazy images. The proposed APL approach presents high accuracy even in different haze conditions, with a lightweight learning process;

• We propose the Aerial Hazy Images (AHI) dataset, a challenging dataset composed of over 4700 hazy images, acquired in urban and natural outdoor environments. The proposed dataset comprises hazy images with intense fog, low visibility and scattering. To the best of our knowledge this is the first dataset of hazy images from urban and natural outdoor environments composed of aerial images.

II. RELATED WORK

Image dehazing is crucial for improving the quality of images captured in foggy environments. This enhancement process improves the quality of outdoor degraded images, leading to better contrast, visibility and saturation. The applications of dehazing are diverse, including aerial and aquatic navigation, monitoring and surveillance [7]. Dehazing techniques can be divided into three categories: image enhancement-based, physical model-based and deep learning approaches.

Image Enhancement Methods. Image enhancement-based dehazing algorithms improve image quality by amplifying contrast and reinforcing edge and detail information. However, they may lead to potential information loss due to excessive enhancement. A comprehensive survey in [1] explores enhancement, restoration and fusion techniques in image dehazing. In contrast, [8] proposes minimal modifications for a compact dehazing method, redefining strategies for optimization. The work in [9] presents a paradigm based on unsupervised source-free domain adaptation, emphasizing multi-resolution operation. Additionally, [6] introduces a simplified global and local feature combiner for image dehazing, focusing on a lightweight design. Finally, [7] describes an adaptive single-image dehazing method based on sky segmentation, effective for weak effects and color change, with the potential for local fog effect amplification. These techniques showcase the ongoing need for improvement, emphasizing the challenge of effectively removing distortions in images and finding optimal parameters for image enhancement.

Physical Model Methods. Dehazing algorithms based on physical model methods rely on the atmospheric scattering model [10], [11], [12], emphasizing parameter resolution. Employing a mapping relationship, they conduct the inverse operation based on the foggy image formation process, restoring a clear image. In [11], a self-guided image dehazing technique is introduced, emphasizing progressive feature fusion and end-to-end training to exploit useful information from the foggy image itself. The approach in [12] explores fog removal in remote sensing images, incorporating a Generative Adversarial Network with enhanced attention. Despite these advances, continuous refinement is needed, highlighting challenges in effectively removing haze from images.

Deep Learning Methods. In recent years, deep learning applications in dehazing algorithms have shown significant performance advancements. Two notable categories include using deep learning for atmospheric parameter estimation in image restoration [6], [2] and employing neural networks for direct foggy-to-dehazed image transformation, known as end-to-end dehazing in deep learning [3], [13]. [4] introduces an innovative approach using density and depth decomposition, focusing on low-light scenarios and evaluating effectiveness through metrics like PSNR, SSIM and CIEDE2000. In [14], the authors propose an analysis of combined spatial and frequency domains through a deep Fourier transform. [10] presents a combination of Separable Hybrid Attention (SHA) and a density encoding matrix, achieving a deep learning-based dehazing network architecture. [15] proposes a single-image dehazing method through test-time training with an auxiliary network, incorporating GDN, MSBDN and CycleGAN. The work of [16] introduces "USID-Net", emphasizing unsupervised single-image fog removal with disentangled representations. Additionally, [17] presents a fog removal network with a light-invariant design, evaluated using MSE, PSNR and SSIM. Despite these advancements, there is a growing need for further improvement, reflecting challenges in effectively removing haze, particularly in natural phenomena with diverse and unknown haze conditions.

Our approach is a deep learning method which combines image enhancement techniques by finding the optimal (or near-optimal) parameters for image dehazing. More specifically, we deploy a CNN to find the parameters for color and gamma correction and exposure adjustment, achieving an adaptive ISP pipeline. The proposed APL approach presents more generalization capacity and less sensitivity to noise and lighting variation compared to complex, end-to-end deep learning models [18].

III. METHODOLOGY

In this section, we describe the APL approach, an adaptive technique for parameter learning through a CNN in the image dehazing process. Our method consists of three primary steps: i) Pre-Learning, which finds the best parameters by an iterative process based on image quality metrics; ii) Learning, which learns the relationship between visual features and the image enhancement parameters; and iii) Intensity Transformation, which applies the best estimated parameters for image
dehazing, achieving an adaptive ISP pipeline. Figure 1 presents an overview of the proposed approach.

A. Problem Statement

[Parameter Learning for Image Dehazing] Let I = {i1, i2, ..., in} be a set of training hazy images. Let P = {p1, p2, ..., pm} be a set of the best parameters for image enhancement, found through image quality metrics, for training. Let D be a Deep Learning regression model, trained on hazy images (I) and image enhancement parameters (P). Also, let T = {t1, t2, ..., tw} be a set of intensity transformation methods for image enhancement. Our goal is to estimate the best image dehazing parameters B = {b1, b2, ..., bj}, for a set of unknown hazy images U = {u1, u2, ..., uk}, using the predictive model D.

B. Pre-Learning

In this step, a series of image processing methods is used. Image segmentation, color adjustment, gamma correction and exposure enhancement methods are used to address haziness in images. The chosen image processing methods were selected due to their efficiency in minimizing problems related to low contrast, visibility and saturation in hazy images [4], [15].

1) Image Segmentation: Image segmentation is performed in order to identify areas that contain low-visibility detail in the images, such as patterns or edges that may be obscured by haze, to be enhanced later through image transformations.

Adaptive Thresholding. This method calculates distinct thresholds for different regions within an image by considering a localized region around each pixel. The threshold L is defined by the equation:

L = µ(R) × (1 + K × (σ(R)/Q − 1)) (1)

where R is a region of size 11×11 around each pixel, µ(R) is the mean of the intensities within R, σ(R) is the standard deviation of the intensities within R, Q is the dynamic range of the standard deviation, and the parameter K takes positive values [19]. Therefore, a segmented image (IS) is created, highlighting details like edges and corners using the Adaptive Thresholding method.

Enhancing Selected Areas. In this step, features like edges and corners, highlighted in IS, are used to enhance areas of interest in hazy images. A sharpening transformation is applied to darker pixels, which represent regions containing important details of the image. The sharpening method is defined by the equation:

IE = I + (IS − ISG) (2)

where I is the original hazy image, IS is the segmented image and ISG is the segmented image smoothed with a Gaussian filter with a kernel size of 5×5. With this step, images with enhanced features (IE) are obtained.

2) Color Correction: In this step, a color correction algorithm is applied. This algorithm performs histogram stretching in each color channel (R, G, B). This technique utilizes information from the linear curve, including minimum and maximum values, to generate a 256-value look-up table for histogram equalization.

The image is normalized to the [min, max] range through a pixel value transformation using the function C, as defined by the equation:

C(IE) = ((IE − (IE)min) × (max − min)) / ((IE)max − (IE)min) × α (3)

where IE represents the input image to be transformed, and (IE)min and (IE)max denote the minimum and maximum gray values in the segmented transformed image, respectively. Additionally, min and max correspond to the new gray value range and α corresponds to the color correction degree. With this step, a color compensation is performed, resulting in color adjusted images (IC).

3) Gamma Correction: Starting from the color corrected images (IC), in this stage the image pixel intensities are scaled from the range [0, 255] to [0, 1.0] and the gamma correction equation is applied:

IG = (IC)^(1/γ) (4)

where IG is the image with the corrected gamma factor and γ corresponds to the gamma constant value.

4) Exposure Adjustment: In the last transformation, starting from the gamma corrected images (IG), an exposure refining step is carried out, yielding the final output of the proposed approach. Each output pixel value depends only on the corresponding input pixel value, as defined in the equations below:

Iϵ = ϵ × IG, if IG = 0 or IG = 255; Iϵ = IG, otherwise (5)

Iι = Iϵ − 127, if Iϵ = 0 or Iϵ = 255; Iι = Iϵ, otherwise (6)

where IG is the gamma corrected image, Iϵ represents the image obtained in the first operation of exposure adjustment and Iι is the final image in the exposure adjustment process. The value 127 is subtracted to center the image around zero: on the 8-bit scale, 127 is the mid-point between black (0) and white (255), and this normalization facilitates further processing, especially when algorithms or models are sensitive to the input scale.

To determine the best parameters for the learning process, the quality metrics PSNR and SSIM are employed. For this, an iterative process empirically determines the optimal parameters by varying a range of values for the color adjustment (α), gamma correction (γ) and exposure (ϵ) intensity transformations. Thereby, the parameters that are applied to the whole image and provide the optimal image quality measurements are defined. In addition, the order of the intensity transformation techniques was defined empirically, which presented high accuracy in the dehazing process.
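The Pre-Learning stage described above can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the released APL code: the function names and parameter grids are our own, PSNR alone is used as the selection score (the paper combines PSNR and SSIM), and the segmentation/sharpening stage is omitted for brevity.

```python
import numpy as np

def color_correct(img, alpha, lo=0.0, hi=255.0):
    """Eq. (3): per-channel histogram stretching to [lo, hi],
    weighted by the color correction degree alpha."""
    out = np.empty_like(img, dtype=np.float64)
    for c in range(img.shape[2]):
        ch = img[..., c].astype(np.float64)
        mn, mx = ch.min(), ch.max()
        rng = (mx - mn) if mx > mn else 1.0
        out[..., c] = (ch - mn) * (hi - lo) / rng * alpha
    return np.clip(out, 0.0, 255.0)

def gamma_correct(img, gamma):
    """Eq. (4): scale to [0, 1], apply I^(1/gamma), scale back."""
    return np.clip((img / 255.0) ** (1.0 / gamma), 0.0, 1.0) * 255.0

def adjust_exposure(img, eps):
    """Eqs. (5)-(6): scale extreme (0 or 255) pixels by eps,
    then re-centre any remaining extremes by subtracting 127."""
    out = img.astype(np.float64).copy()
    extreme = (out <= 0.0) | (out >= 255.0)
    out[extreme] = eps * out[extreme]
    extreme = (out <= 0.0) | (out >= 255.0)
    out[extreme] = out[extreme] - 127.0
    return np.clip(out, 0.0, 255.0)

def dehaze(img, alpha, gamma, eps):
    """Apply the intensity transformations in the order used above."""
    return adjust_exposure(gamma_correct(color_correct(img, alpha), gamma), eps)

def psnr(a, b):
    """Full-reference PSNR on the 8-bit scale."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 99.0 if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def grid_search(hazy, reference, alphas, gammas, epsilons):
    """Pre-Learning: exhaustively pick the (alpha, gamma, eps) triple
    that maximises image quality against the haze-free reference."""
    best, best_score = None, -np.inf
    for a in alphas:
        for g in gammas:
            for e in epsilons:
                score = psnr(dehaze(hazy, a, g, e), reference)
                if score > best_score:
                    best, best_score = (a, g, e), score
    return best, best_score
```

The (hazy image, best parameters) pairs produced this way are then used as training targets for the CNN regression model of the Learning step.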
Fig. 1: Overview of the proposed methodology for image dehazing of foggy images.
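The segmentation stage of the pipeline in Figure 1 (Eqs. (1) and (2)) can likewise be sketched in plain NumPy. This is a sketch under stated assumptions: the 11×11 window comes from the paper, while K = 0.2 and Q = 128 are typical Sauvola-style values assumed here for illustration, and a separable 5-tap Gaussian stands in for the 5×5 smoothing filter.

```python
import numpy as np

def local_stats(gray, k=11):
    """Mean and std over a k x k window via edge padding and stacking
    (adequate for a sketch; an integral image would be faster)."""
    pad = k // 2
    p = np.pad(gray.astype(np.float64), pad, mode="edge")
    windows = np.stack([p[i:i + gray.shape[0], j:j + gray.shape[1]]
                        for i in range(k) for j in range(k)])
    return windows.mean(axis=0), windows.std(axis=0)

def sauvola_segment(gray, K=0.2, Q=128.0, k=11):
    """Eq. (1): L = mu(R) * (1 + K * (sigma(R)/Q - 1)); pixels darker
    than the local threshold are flagged as detail regions."""
    mu, sigma = local_stats(gray, k)
    L = mu * (1.0 + K * (sigma / Q - 1.0))
    return (gray < L).astype(np.float64) * 255.0

def gaussian_blur5(img, sigma=1.0):
    """Separable 5-tap Gaussian, standing in for the 5x5 smoothing."""
    x = np.arange(-2, 3, dtype=np.float64)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    H, W = img.shape
    p = np.pad(img, 2, mode="edge")
    tmp = sum(k[i] * p[:, i:i + W] for i in range(5))   # horizontal pass
    return sum(k[i] * tmp[i:i + H, :] for i in range(5))  # vertical pass

def sharpen(I, IS, sigma=1.0):
    """Eq. (2): I_E = I + (I_S - I_SG), an unsharp-style enhancement
    that adds the residual of the segmented mask back to the input."""
    return np.clip(I + (IS - gaussian_blur5(IS, sigma)), 0.0, 255.0)
```

Here `sauvola_segment` produces the binary detail mask IS and `sharpen` adds its unsharp residual back to the hazy input, yielding the feature-enhanced image IE consumed by the color correction stage.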
Figures 3, 4 and 5 present the comparison methods on the AHI non-reference dataset and on the RESIDE-SOTS and O-Haze full-reference datasets, respectively.

Non-Reference Datasets. In Figure 3, we evaluate our APL method using the AHI dataset, composed of 4710 images without reference data, due to the lack of haze-free (reference) images, given its nature of acquisition. Unlike existing methods, our approach excels in eliminating haze, preventing color distortion and recovering original details. The BRISQUE and NIQE metrics, recognized for their effectiveness in assessing non-reference image quality, consistently provide lower scores, confirming the superior haze restoration quality achieved by our method.

Full-Reference Datasets. In Figure 4, we evaluate our dehazing method using the RESIDE-SOTS dataset, which provides ground truth for reference. Our approach surpasses existing methods, achieving haze-free results that often outperform the reference ground truth. Quality metrics affirm the superior performance of our approach. Figure 5 extends the comparison to the O-Haze dataset, where our method generates dehazed results for challenging hazy images, outperforming established methods. Quality metrics reinforce the overall superiority of our approach in this context as well.

2) Quantitative Improvements: Real-World Hazy Images. For real-world natural hazy images, a comprehensive evaluation was conducted on the AHI dataset, without reference images. The methodologies that provided their algorithms were evaluated on the dataset, utilizing the NIQE and BRISQUE
quality metrics. The outcomes, as summarized in Table I, demonstrate the superiority of our algorithm across both metrics when applied in this context.

Fig. 5: Qualitative Evaluation for Full-Reference Images from the O-Haze Dataset.

TABLE I: Comparison on Real-World Scenarios from the AHI Dataset

Methods        | Venue & Year | NIQE↓  | BRISQUE↓
DCP [27]       | TPAMI 2011   | 0.917  | 23.712
CycleGAN [13]  | ICCV 2017    | 0.862  | 22.315
GDN [22]       | ICCV 2019    | 0.895  | 24.127
MSBDN [24]     | CVPR 2020    | 0.839  | 20.893
DW-GAN [23]    | CVPR 2021    | 0.901  | 27.491
DeHamer [25]   | CVPR 2022    | 0.8042 | 20.0153
C2PNET [26]    | CVPR 2023    | 0.809  | 20.067
APL (Ours)     |              | 0.802  | 20.011

Synthetic Hazy Images. The PSNR, SSIM, NIQE and BRISQUE quality metrics were applied to synthetic hazy scenes using two benchmark datasets with reference images, RESIDE-SOTS and O-Haze. Table II presents the average results, demonstrating the superiority of our approach across all evaluated metrics. In real-world scenarios with synthetic haze (SOTS), our results outperform MSBDN, DeHamer and C2PNET, the best comparison techniques. In environments with fog generated by smoke machines (O-Haze), our model consistently showcases its efficacy across diverse dataset configurations.

In the proposed adaptive parameter estimation, the model learns parameters to compensate for the degradation in hazy images, assuming color, gamma and exposure as prior transformations to effectively restore the images. In our approach we define the referred transformations, through extensive investigation, as the most important for restoring hazy images, achieving relevant results, unlike end-to-end comparison learning methods that need to learn the best transformations to be applied to hazy images, which may not be achieved. Additionally, it is worth mentioning that the significant results were obtained using a lightweight learning model, in contrast to more modern methods. Finally, the generalization capacity was demonstrated through the use of the proposed model on three different datasets with different fog conditions, highlighting the best results in all scenarios.

C. Ablation Study

In our ablation study, we systematically examined the impact of key components in our network architecture: segmentation (L), color correction (α), gamma correction (γ) and exposure adjustment (ϵ), previously defined in Section III. Each component was individually removed to assess its contribution. Table III details the ablation study on the O-Haze full-reference dataset, providing the performance metrics for each scenario and highlighting their influence on the overall outcomes.

Segmentation (L). In Figure 6 (b), without L, there is a noticeable decline in all scores and details remain unresolved, emphasizing the importance of the segmentation module in preserving image quality.

Color Correction (α). In Figure 6 (c), removal of color correction (α) leads to a decrease in scores, highlighting its impact on color fidelity, since our method not only adjusts color deviations but also equalizes histograms, correcting details and restoring clarity.

Gamma Correction (γ). Omitting gamma correction (γ) results in a substantial drop in all scores. This underscores the critical role of gamma correction in maintaining image fidelity, as can be seen in Figure 6 (d). Fog effects persist, impacting overall image clarity.

Exposure Adjustment (ϵ). Performance without exposure adjustment (ϵ) exhibits a significant score decrease. Exposure adjustment, as shown in Figure 6 (e), plays an essential role in dehazing by darkening fog spots, contributing significantly to overall image enhancement.

These visual changes highlight the distinct roles of each module and provide insights for potential optimizations to further enhance our approach's capabilities. This iterative process of analysis and optimization is crucial for refining and adapting our method for improved performance.

Fig. 6: The images in (b)-(f) are dehazed results of (a) by each step of our approach.

V. CONCLUSION

In this paper we presented an approach for image dehazing. We proposed the APL, a Deep Learning-based approach for
TABLE II: Comparison on Synthetic Scenarios

Methods        |             SOTS               |            O-Haze
               | PSNR↑  SSIM↑  NIQE↓  BRISQUE↓  | PSNR↑  SSIM↑  NIQE↓  BRISQUE↓
DCP [27]       | 23.184 0.734  0.872  22.103    | 20.571 0.724  0.818  24.569
CycleGAN [13]  | 24.107 0.672  0.849  21.942    | 24.107 0.672  0.849  21.942
GDN [22]       | 22.305 0.718  0.825  23.491    | 25.609 0.657  0.879  22.713
MSBDN [24]     | 28.682 0.939  0.817  19.705    | 28.577 0.815  0.768  19.110
DW-GAN [23]    | 25.219 0.642  0.865  24.501    | 21.897 0.729  0.809  20.332
DeHamer [25]   | 28.495 0.947  0.819  19.083    | 28.599 0.818  0.757  19.116
C2PNET [26]    | 25.398 0.972  0.820  19.088    | 28.645 0.809  0.760  19.023
APL (Ours)     | 28.742 0.980  0.815  19.071    | 28.746 0.824  0.745  19.013

TABLE III: Ablation Study on the O-Haze Full-Reference Dataset

Modules | PSNR↑  | SSIM↑ | NIQE↓ | BRISQUE↓
w/o L   | 13.952 | 0.489 | 1.984 | 32.369
w/o α   | 12.154 | 0.689 | 1.862 | 29.315
w/o γ   | 13.455 | 0.669 | 1.064 | 27.071
w/o ϵ   | 18.997 | 0.792 | 0.997 | 23.009

the estimation of the best parameters in the hazy image enhancement process. Thereby, it is possible to optimize the use of image enhancement techniques, using the best found parameters in the dehazing operation. Our approach is an alternative to the use of image enhancement techniques and physical models, which cannot adapt to unknown haze conditions. It is also superior to generative models, which cannot handle unpredictable and extreme variations of natural phenomena during training, as shown in the experimental results. Additionally, we have also created a new hazy image dataset, with challenging scenarios and conditions. The experiments and obtained results demonstrated the robustness and accuracy of our method for image dehazing.

REFERENCES

[1] X. Guo, Y. Yang, C. Wang, and J. Ma, "Image dehazing via enhancement, restoration, and fusion: A survey," Information Fusion, vol. 86-87, pp. 146–170, 2022.
[2] Y. Song, Z. He, H. Qian, and X. Du, "Vision transformers for single image dehazing," IEEE Transactions on Image Processing, vol. 32, pp. 1927–1941, 2023.
[3] S. An et al., "A comprehensive survey on image dehazing for different atmospheric scattering models," Multimedia Tools and Applications, 2023.
[4] Y. Yang, C. Wang, R. Liu, L. Zhang, X. Guo, and D. Tao, "Self-augmented unpaired image dehazing via density and depth decomposition," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 2037–2046.
[5] B. Benjdira, A. M. Ali, and A. Koubaa, "Streamlined global and local features combinator (SGLC) for high resolution image dehazing," in IEEE/CVF CVPR Workshops, 2023, pp. 1854–1863.
[6] J. Meng, Y. Li, H. Liang, and Y. Ma, "Single-image dehazing based on two-stream convolutional neural network," Journal of Artificial Intelligence and Technology, pp. 100–110, 2022.
[7] C. Wang and S. Hu, "Adaptive single image defogging based on sky segmentation," Multimedia Tools and Applications, 2023.
[8] Y. Song, Y. Zhou, H. Qian, and X. Du, "Rethinking performance gains in image dehazing networks," arXiv preprint arXiv:2209.11448, 2022.
[9] H. Yu, J. Huang, Y. Liu, Q. Zhu, M. Zhou, and F. Zhao, "Source-free domain adaptation for real-world image dehazing," in Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 6645–6654.
[10] T. Ye, Y. Zhang, M. Jiang, L. Chen, Y. Liu, S. Chen, and E. Chen, "Perceiving and modeling density for image dehazing," in Computer Vision – ECCV 2022, S. Avidan, G. Brostow, M. Cissé, G. M. Farinella, and T. Hassner, Eds. Springer Nature Switzerland, 2022, pp. 130–145.
[11] H. Bai, J. Pan, X. Xiang, and J. Tang, "Self-guided image dehazing using progressive feature fusion," IEEE Transactions on Image Processing, vol. 31, pp. 1217–1229, 2022.
[12] Y. Zheng, J. Su, S. Zhang, M. Tao, and L. Wang, "Dehaze-AGGAN: Unpaired remote sensing image dehazing using enhanced attention-guide generative adversarial networks," IEEE Transactions on Geoscience and Remote Sensing, pp. 1–13, 2022.
[13] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in IEEE International Conference on Computer Vision (ICCV), 2017.
[14] H. Yu, N. Zheng, M. Zhou, J. Huang, Z. Xiao, and F. Zhao, "Frequency and spatial dual guidance for image dehazing," in Computer Vision – ECCV 2022, 2022, pp. 181–198.
[15] H. Liu, Z. Wu, L. Li, S. Salehkalaibar, J. Chen, and K. Wang, "Towards multi-domain single image dehazing via test-time training," in CVPR, 2022, pp. 5831–5840.
[16] J. Li, Y. Li, L. Zhuo, L. Kuang, and T. Yu, "USID-Net: Unsupervised single image dehazing network via disentangled representations," IEEE Transactions on Multimedia, pp. 3587–3601, 2023.
[17] A. Ali, A. Ghosh, and S. S. Chaudhuri, "LIDN: A novel light invariant image dehazing network," Engineering Applications of Artificial Intelligence, vol. 126, p. 106830, 2023.
[18] T. Glasmachers, "Limits of end-to-end learning," in Proceedings of the 9th Asian Conference on Machine Learning, 2017, pp. 17–32.
[19] J. Sauvola and M. Pietikäinen, "Adaptive document image binarization," Pattern Recognition, vol. 33, pp. 225–236, 2000.
[20] B. Li, W. Ren, D. Fu, D. Tao, D. Feng, W. Zeng, and Z. Wang, "Benchmarking single-image dehazing and beyond," IEEE Transactions on Image Processing, vol. 28, 2019.
[21] C. O. Ancuti, C. Ancuti, R. Timofte, and C. D. Vleeschouwer, "O-HAZE: a dehazing benchmark with real hazy and haze-free outdoor images," in IEEE CVPR, NTIRE Workshop, 2018.
[22] X. Liu, Y. Ma, Z. Shi, and J. Chen, "GridDehazeNet: Attention-based multi-scale network for image dehazing," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 7314–7323.
[23] M. Fu, H. Liu, Y. Yu, J. Chen, and K. Wang, "DW-GAN: A discrete wavelet transform GAN for nonhomogeneous dehazing," in IEEE/CVF CVPR Workshops (CVPRW), 2021.
[24] H. Dong, J. Pan, L. Xiang, Z. Hu, X. Zhang, F. Wang, and M.-H. Yang, "Multi-scale boosted dehazing network with dense feature fusion," in IEEE/CVF CVPR, 2020, pp. 2154–2164.
[25] C. Guo, Q. Yan, S. Anwar, R. Cong, W. Ren, and C. Li, "Image dehazing transformer with transmission-aware 3D position embedding," in IEEE/CVF CVPR, 2022, pp. 5802–5810.
[26] Y. Zheng, J. Zhan, S. He, J. Dong, and Y. Du, "Curricular contrastive regularization for physics-aware single image dehazing," in IEEE/CVF CVPR, 2023.
[27] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, pp. 2341–2353, 2011.
[28] U. Sara, M. Akter, and M. S. Uddin, "Image quality assessment through FSIM, SSIM, MSE and PSNR—a comparative study," Journal of Computer and Communications, 2019.
[29] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, pp. 600–612, 2004.
[30] A. Mittal, R. Soundararajan, and A. C. Bovik, "Making a "completely blind" image quality analyzer," IEEE Signal Processing Letters, vol. 20, 2013.
[31] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, 2012.