Thesis Master
A Thesis
Submitted to the College of Sciences / Mustansiriyah University as
a Partial Fulfillment of the requirements for the Degree of Master
in Computer Science
By
Safa Burhan Abdulsada
Supervised By
Assistant Professor
Dr. Asmaa Sadiq Abul-Jabbar
In the Name of Allah, the Most Gracious, the Most Merciful
Supervisor Certification
Signature:
Name: Dr. Asmaa Sadiq Abul-Jabbar
Title: Assistant Professor
Date: / /2022
Signature:
Name: Dr. Thekra Hydar Ali Abbas
Title: Assistant Professor
Date: / /2022
Acknowledgments
I thank God for this blessing and for the success in completing this work.
All appreciation and gratitude to my supervisor Assist. Prof. Dr. Asmaa Sadiq Abul-Jabbar for her advice, valuable time, continuous support and effort.
My sincere thanks and gratitude to my mother and all my family, who were my support and help in the hard times, and to my friends who have been there to help me from the beginning.
Special thanks to all the staff of the Computer Science Department, Al-Mustansiriyah University, for their help and efforts.
Dedication
To my wonderful mom
To my dear father, God have mercy on him
To my support in this life my brothers and sisters
To my dear friends
To everyone who helped me and wished me the
best
SAFA
Abstract
The performance of the current approach is validated using comprehensive underwater images (bluish, greenish and foggy) and is also compared with recent work on the same dataset using a number of statistical metrics. The visual and statistical results prove that the proposed approach markedly outperforms the state-of-the-art techniques on the same dataset.
List of contents
2.5.1 Laplacian Contrast Weight (𝑊𝐿 ) ………………………………38
3.2.2.1 Gamma Correction ............................................................61
3.2.2.2 Image Sharpening ..............................................................62
3.2.3 Homomorphic Filter ...................................................................63
3.2.4 Calculation of Weight Maps.............................................65
3.2.4.1 Local Contrast Weight .......................................................65
3.2.4.2 Saliency Weight …………………………………………66
3.2.4.3 Saturation Weight ……………………………………….67
Chapter Five: Conclusion and Suggestion for Future Work
5.1 Introduction ………………………...………………………………89
5.2 Conclusion ………………………………………………………….89
5.3 Suggestion for future work …………………………………………90
List of Tables
List of Figures
(4.2) The original bluish images in the first column; the second column displays the result after applying the gamma correction algorithm; the third column represents the result after applying the sharpening algorithm; and the final result of the proposed method is in the last column ……… 76
(4.3) The original greenish images in the first column; the second column displays the result after applying the gamma correction algorithm; the third column represents the result after applying the sharpening algorithm; and the final result of the proposed method is in the last column ……… 78
List of Abbreviations
Chapter One
General Introduction
1.1 Introduction
Figure (1.1): Underwater light absorption [2]
Figure (1.2): Underwater optical image [4]
Figure (1.3): Effect of artificial lighting in deep sea water [5]
dehazing and color cast. Generally, these methods are divided into four categories: methods based on the frequency domain, methods based on the spatial domain, methods based on color constancy, and methods based on fusion [6, 8].
1.1.2 Spatial Domain based Method
The general spatial-domain operation can be written as f(i, j) = T[g(i, j)], where g(i, j) represents the source image, f(i, j) represents the resulting image, and T is an operator defined over a fixed neighborhood about the location (i, j). Some of the contrast enhancement methods are:
c. The Hybrid Cumulative Histogram Equalization (HCHE): it can be used to overcome the drawback of detail loss in dark regions and the problem of over-enhancement. However, the global histogram equalization problem still exists due to noise amplification in relatively homogeneous regions [10].
d. Contrast Limited Adaptive Histogram Equalization (CLAHE): it solves the problem resulting from the amplification of noise in relatively homogeneous regions by working on small areas, calculating a separate histogram for each section of the image, and using them to redistribute the image's brightness values [11]. A minimal code sketch is given below.
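To illustrate how CLAHE operates on small regions, the following is a minimal Python/OpenCV sketch that applies it to the lightness channel only; the clip limit and tile size are illustrative assumptions, not settings taken from this thesis.

import cv2

def clahe_enhance(bgr_image, clip_limit=2.0, tile_size=(8, 8)):
    # Work on the L (lightness) channel so the color channels are not distorted.
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_size)
    l_eq = clahe.apply(l)                      # per-tile, clip-limited equalization
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)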
b. White Balancing: because of the medium attenuation properties, white balancing is a method used to remove the unwanted color cast. Its main purpose is to compensate for the loss of the red color [8].
c. Retinex Method: Retinex is a color correction application that uses color theory to improve image clarity and color fidelity. It is computationally complex and takes more computing time. It is mostly used in image enhancement and defogging [8].
1.2 Literature Survey
maps applied to enhance the visibility of the underwater scene. This approach is suitable for improving a single underwater image and does not need specialized hardware or any knowledge about the physical characteristics of the underwater medium [15].
Ancuti et al. [2018] suggested a novel method to reduce the degradation in underwater images. Two input images were obtained from the initial degraded input image by color compensation and a white-balanced version. Then, the two input images and their weight maps were fused to enhance the color contrast and the transfer of edges to the output image. The quantitative and qualitative evaluation shows that the enhanced images and videos have better exposure of dark areas, sharper edges and improved global contrast [18].
images are fused together using the Laplacian pyramid to get the optimized output image [19].
Zhang et al. [2020] suggested a new technique for single underwater image enhancement using fusion technology to overcome underwater image problems (for example: color deviation, faint contrast, and blurring). Firstly, dark channel prior dehazing and the white balance algorithm were used to preprocess the original image, then color correction and contrast enhancement were applied to obtain two single images. In the final step, the enhanced image was obtained by using a multiscale fusion strategy to fuse the features based on contrast, saliency and exposure weight maps. The results indicated a good performance [22].
fusion was performed on the enhanced images to get the final enhanced image. The qualitative and quantitative results indicate an effective enhancement, especially in dark areas [24].
Table (1.1) summarizes the prior works based on the above review:
Table (1.1): An Overview of the Literature Survey

1. Ancuti et al. [2011]
   Technique: Fusion of a series of single input images
   Advantage: It is easy and clear and does not need extra information such as hardware, and the results can be easily evaluated
   Disadvantage: Noise amplifies with increasing depth

2. Ancuti et al. [2012]
   Technique: Weight maps and a fusion approach of two input images
   Advantage: It can be used with various devices, the time consumed for implementation is very short, in addition to the clear improvement of contrast, color and image details
   Disadvantage: Some limitations in the case of images captured in deep scenes with strong and artificial lighting

3. Gao et al. [2016]
   Technique: Color Correction and Histogram Equalization
   Advantage: It can effectively improve the image quality
   Disadvantage: Over-equalizing of one channel (the red channel as an example)

4. Biswas [2017]
   Technique: Fusion based on White Balance and Contrast Stretching
   Advantage: Its results are clear and easy to understand, and there is no effect of green and blue tint, even for images taken at depth
   Disadvantage: Difficult to apply it to videos in the future

5. Zhang et al. [2017]
   Technique: Retinex approach
   Advantage: It can improve the color and visibility of the image in addition to giving a natural appearance
   Disadvantage: Low accuracy in deep-sea research and operations

6. Ancuti et al. [2018]
   Technique: Fusion based on Color Compensation and White Balance
   Advantage: Visibly improves image quality in addition to improving contrast, color and detail of the image
   Disadvantage: The color cannot be completely restored and some fog remains, especially in scenes far from the camera

7. Luo et al. [2019]
   Technique: CLAHE and Homomorphic Filter
   Advantage: Optimizes color, brightness, and contrast while preserving image detail
   Disadvantage: The MSE and PSNR results are not optimal

8. Sethi et al. [2019]
   Technique: Fusion based on Histogram Equalization and Contrast Stretching
   Advantage: The color, contrast, and blurring of the resulting image have been improved
   Disadvantage: With large fog, it is difficult to restore the colors of the image

9. Patel et al. [2020]
   Technique: Gaussian and Laplacian fusion pyramid
   Advantage: Improves the quality of underwater images as well as blurry images on land
   Disadvantage: Some results still include fog, in addition to the lack of statistical results

10. Xiong et al. [2020]
    Technique: Beer-Lambert Mathematical Law
    Advantage: The restored image is better in terms of color correction with the least time consumed
    Disadvantage: Some low statistical results

11. Zhang et al. [2020]
    Technique: Fusion based on weight maps
    Advantage: Correcting the color, contrast and appearance of the image, in addition to noise suppression
    Disadvantage: The proposed approach was applied on hazy underwater images only
1.3 Problem Statement
1.4 Aims of the Thesis
2. A study of the derivation of two images from the white-balanced version of the source image, applying a series of independent processes to enhance the color and contrast of the final image. At last, the two images are fused based on the multi-scale fusion algorithm.
1.5 Thesis Outline
CHAPTER TWO
Background Theories
2.1 Introduction
Underwater images suffer from color attenuation, foggy details, faint contrast and a bluish or greenish color cast resulting from light scattering and absorption in the water environment. These factors, in turn, affect underwater image quality. Image processing is a method for improving the visibility of the input image so that it is clear to the recipient. This involves increasing the intensity of the image, sharpening the edges, color correction, removing noise, filtering, and so on. In this chapter, the methods that have been used to improve underwater images will be presented, including methods in both the spatial domain and the frequency domain.
Because of the nature of the underwater environment, red light fades as depth increases, resulting in blue to grey-like images, as shown in figure (2.1). Despite the above, we cannot really know all the effects of water, because many (if not all) of the above factors are always changing [22, 27].
Figure (2.1): The underwater lighting conditions that can produce color
changes [27]
2.2.1 Color Compensation
The color compensation method eliminates artifacts caused by a significantly non-uniform color spectrum distribution in photographs shot in cloudy nighttime situations, underwater, or under non-uniform lighting. The method is based on the hypothesis that, in these challenging circumstances, at least one color channel's information is nearly completely lost, making typical boosting methods susceptible to noise and color shifting [29].
To compensate for the loss in the red channel, the following observations have been relied upon (a code sketch of the resulting compensation is given after the list):
1- Compared with the red and blue channels, the green channel is still visible underwater. In clear water, it is true that light with a long wavelength, such as red light, loses energy quickest.
2- Since the green channel provides information about the opponent color of the red channel, it is necessary to compensate for the greater attenuation of the red color compared to the green one. As a result, a portion of the green channel is added to the red channel to compensate for the red channel's attenuation. Initially, a portion of both blue and green was added to the red color. However, it has been observed that using only the green channel information provides the best way to restore the entire color spectrum and to preserve the natural color of the background (water areas).
3- According to the gray world hypothesis (channels have equal mean values before attenuation), the difference indicates the contrast/unbalance between the red and green attenuation, so the compensation must be equivalent to the difference between the mean values of green and red.
4- The improvement of red must affect the pixels with tiny red values, and should not alter pixels which already contain a strong red component, in order to prevent saturation of the red channel during the Gray World step that follows the red loss compensation. In other terms, the information from the green channel should not be transferred to areas where the information from the red channel is still essential.
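The observations above can be translated into a simple per-pixel compensation of the red channel. The sketch below follows the commonly used formulation of Ancuti et al.; the parameter alpha and the exact form are assumptions here, not necessarily the implementation used in this thesis.

import numpy as np

def compensate_red(rgb, alpha=1.0):
    img = rgb.astype(np.float64) / 255.0              # normalize to [0, 1]
    r, g = img[..., 0], img[..., 1]
    r_mean, g_mean = r.mean(), g.mean()
    # A fraction of the green channel is added to the red one; the correction is
    # proportional to the gap between the channel means (gray-world idea, note 3)
    # and is damped where red is already strong (the (1 - r) factor, note 4).
    r_comp = r + alpha * (g_mean - r_mean) * (1.0 - r) * g
    out = img.copy()
    out[..., 0] = np.clip(r_comp, 0.0, 1.0)
    return (out * 255).astype(np.uint8)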
2.2.2 White Balance (WB)
A color cast is an excessive presence of an undesirable color that has a
particular effect on the entire photographic image where the effect of color
discrimination and identification underwater is associated to the depth. The
most important difficulty with underwater images is the blue-green tint must
be corrected. The white balancing is used to compensate the color cast created
by absorption of colors with deepness where the essential role of the white
balance is to make the neutral colors of an image accurate. Some of the color
balancing algorithms are Grey world, Sensor Correlation and Robust Auto
White balance. [31, 32].
The gray world methods is depend on the idea that the scene’s average
reflectance with rich color changes is achromatic. That’s mean, the three color
channels 𝑅𝑎𝑣𝑔 , 𝐺𝑎𝑣𝑔 , and 𝐵𝑎𝑣𝑔 should all have the same average [33].
where R̄, Ḡ, B̄ are the average components of the (R, G, B) channels. After that, the gain K of the three channels is calculated using the following equations:

K_R = K / R̄ ……………………………………………… (2.3)

K_G = K / Ḡ ……………………………………………… (2.4)

K_B = K / B̄ ……………………………………………… (2.5)

Then the resulting value for each pixel of the RGB channels in the image is calculated by the following equations:

R′ = R × K_R ……………………………………………… (2.6)

G′ = G × K_G ……………………………………………… (2.7)

B′ = B × K_B ……………………………………………… (2.8)
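A minimal NumPy sketch of the gray world balance of equations (2.3)-(2.8) is shown below. The definition of the reference gain K is not reproduced in this excerpt, so taking it as the mean of the three channel averages is an assumption.

import numpy as np

def gray_world(rgb):
    img = rgb.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)    # R_bar, G_bar, B_bar
    k = channel_means.mean()                           # assumed reference gain K
    gains = k / channel_means                          # K_R, K_G, K_B   (eqs. 2.3-2.5)
    balanced = img * gains                             # R' = R*K_R, ...  (eqs. 2.6-2.8)
    return np.clip(balanced, 0, 255).astype(np.uint8)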
Histogram equalization variants include global histogram equalization, local histogram equalization, adaptive histogram equalization, and contrast limited adaptive histogram equalization [35, 36].
S = c·r^γ ……………………………………………………… (2.9)
A collection of transformation curves can be produced by only changing the value of γ.
Figure (2.2): The plot of the equation S = c·r^γ with different values of γ (the value of c = 1 in all cases) [38].
A curve with values of γ > 1 has the inverse effect of one with values of γ < 1, as shown in Figure (2.3). Finally, when γ = c = 1, equation (2.9) reduces to an identity transformation. A power law governs the behavior of a number of devices used for imaging, printing, and displaying. The power-law exponent is generally known as gamma [38, 39, 40].
Figure (2.3): Applying the power law transformation to an image [39]
monitors and/or monitor settings. Therefore, the appropriate way to store images on websites is to use a gamma value that is an "average" for the different monitors and computers that users are expected to have in the open market at any given time [41].
g(x, y) = (1 / √(2πσ²)) · e^(−(x² + y²) / (2σ²)) ……………………… (2.10)
where σ² is the variance of the Gaussian filter and sigma (σ) identifies the amount of smoothing; the value of sigma affects the Gaussian kernel coefficients. A higher sigma means more variance is permitted around the mean, and a smaller sigma means less variance is allowed around the mean. As a result, while a large filter variance is effective at smoothing an image, it also corrupts the image's edges and critical structure [44].
In the histogram equalization approach, the probability density function (pdf) is manipulated. In other words, the histogram equalization approach converts a particular image's pdf into a uniform probability density function that extends from the lowest pixel value (equal to zero) to the highest pixel value (L-1). If the probability density function is a continuous function, this is simple to apply. However, the pdf will be a discrete function as long as we deal with a digital image. Assume an image x with a dynamic range for the intensity r_k ranging from 0 (black) to L-1 (white). The probability based on the histogram, p(r_k), can be used to estimate this pdf as follows:

p(r_k) = n_k / (M × N), k = 0, 1, …, L-1 ……………………………… (2.11)

where n_k is the number of pixels with intensity r_k and M × N is the total number of pixels. After that, the cumulative distribution function (cdf) can be obtained from the pdf as:

cdf = Σ_{k=0}^{L-1} p(r_k) ……………………………………………… (2.12)

To obtain the output pixel intensity, the value p(s_k) is multiplied by L-1 and then rounded to the closest integer value [46].
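The pdf/cdf mechanics above can be written compactly for an 8-bit grayscale image (L = 256); the following NumPy sketch is illustrative only.

import numpy as np

def equalize_histogram(gray):
    L = 256
    hist = np.bincount(gray.ravel(), minlength=L)        # n_k for each intensity r_k
    pdf = hist / gray.size                               # p(r_k)  (eq. 2.11)
    cdf = np.cumsum(pdf)                                 # running sum of p(r_k)  (eq. 2.12)
    mapping = np.round(cdf * (L - 1)).astype(np.uint8)   # scale by L-1 and round
    return mapping[gray]                                 # remap every pixel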
2.4 Homomorphic Filter
1. Reflection and Illumination Model
2. Basic Homomorphic Filter
Then,
Z(u, v) = 𝐹𝐼 (𝑢, 𝑣) + 𝐹𝑟 (𝑢, 𝑣) … … … … … … … … … … . (2.19)
By defining

i′(x, y) = φ⁻¹{F_i(u, v) H(u, v)} ……………………………… (2.24)

and

r′(x, y) = φ⁻¹{F_r(u, v) H(u, v)} ……………………………… (2.25)
Lastly, because the z(x, y) was obtained by using the natural logarithm
of the source image, the reverse operation is applied by using the exponential
of the filtered result to get the output image:
where

i₀(x, y) = e^(i′(x, y)) ………………………………………… (2.30)

and

r₀(x, y) = e^(r′(x, y)) ………………………………………… (2.31)
H(u, v) = (γ_H − γ_L) [1 − e^(−c (D²(u, v) / D₀²))] + γ_L …………… (2.32)
In this work, the Fast Local Laplacian Filter (FLLF) has been utilized. The steps of the FLLF can be described as follows [50, 51]:
1- To process the input image I, FLLF employs a point-wise nonlinearity function r(·) that depends on the Gaussian pyramid coefficient g = G_l[I](x, y), where r is the remapping function, l represents the level of the Gaussian pyramid and (x, y) is the location of the pixel. This method produces many intermediate images for different values of g.
2- FLLF calculates all the output coefficients L_l[O](x, y) of the Laplacian pyramid of the converted image by integrating all of these intermediate images.
3- The algorithm collapses the output pyramid L[O] to get the output image O.
Generally, the salient area of an image is one that changes significantly within the image (for example in shape, texture or color). The human visual system can easily detect these areas. To keep these areas and improve the contrast of the image, a saliency weight is utilized. Saliency is a technique for enhancing features that are missed in the images. Because of the saturation map, the contrast of the image will be diminished slightly, and the other contrast enhancement techniques cannot improve edge areas, so it is essential to use the saliency weight to enhance the image quality [17, 52].
W_S,k(i, j) = [L_k(i, j) − L_m,k(i, j)]² + [a_k(i, j) − a_m,k(i, j)]² + [m_k(i, j) − m_m,k(i, j)]² ……………… (2.33)

where W_S,k(i, j) refers to the saliency weight, k is the index of the input image, L_k(i, j) refers to the brightness value of the input in the Lab color space, L_m,k(i, j) refers to the mean brightness value of the input image in the Lab color space, and the mean values of the a and b color channels are referred to as a_m,k(i, j) and m_m,k(i, j), respectively [53].
In practice, the three weight maps for each input are combined into a single weight map as follows. An aggregated weight map W_I is initially calculated for every input I by adding the three weight maps W_L, W_S and W_Sat. The weights of every pixel in every map are then divided by the sum of the weights of the same pixel across all maps, which normalizes the K aggregated maps on a pixel-by-pixel basis. The normalized weight map W̄_I for every input is calculated as:

W̄_I(i, j) = W_I(i, j) / Σ_{k=1}^{K} W_k(i, j)

A small code sketch of this aggregation and normalization is given below.
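A minimal sketch of this aggregation and normalization for K inputs follows; the small epsilon that guards against division by zero is an added assumption.

import numpy as np

def normalize_weights(laplacian_w, saliency_w, saturation_w, eps=1e-12):
    # Each argument is a list of K arrays of shape (H, W), one per input image.
    aggregated = [wl + ws + wsat                        # W_I = W_L + W_S + W_Sat
                  for wl, ws, wsat in zip(laplacian_w, saliency_w, saturation_w)]
    total = np.sum(aggregated, axis=0) + eps            # per-pixel sum over the K inputs
    return [w / total for w in aggregated]              # normalized maps sum to 1 per pixel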
2.6 Dual Pyramid
2.6.1 Gaussian Pyramid
First, the image is convolved with a low-pass filter (for example, the 4th binomial filter b4 = [1, 4, 6, 4, 1] / 16). Second, the output is sub-sampled by a factor of two. Each level is created by filtering the preceding level with the 4th binomial filter using a stride of 2 (in each dimension). When used recursively, this technique generates a series of images, each one smaller and lower in resolution than the previous one [46, 56, 57]. The following figure presents the steps of the Gaussian pyramid.
2.6.2 Laplacian Pyramid
L_i = g_i − EXPAND(g_{i+1}) ………………………………………… (2.36)
    = g_i − g′_{i+1} ……………………………………………………… (2.37)

The following figure presents the steps of the Gaussian pyramid and its corresponding Laplacian pyramid.
Figure (2.7): Four levels of the Gaussian and Laplacian pyramids. The first row represents the Gaussian pyramid images; the Laplacian pyramid is shown in the bottom row, where each image represents the difference between the corresponding level and the next one in the Gaussian pyramid [55].
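The construction described above can be sketched with OpenCV's pyrDown/pyrUp, which use a 5-tap kernel close to the binomial filter; the number of levels is an illustrative choice, not the value used in this thesis.

import cv2
import numpy as np

def build_pyramids(image, levels=5):
    # Work in float so that negative Laplacian coefficients are preserved.
    gaussian = [image.astype(np.float32)]
    for _ in range(levels - 1):
        gaussian.append(cv2.pyrDown(gaussian[-1]))       # low-pass filter + subsample by 2
    laplacian = []
    for i in range(levels - 1):
        up = cv2.pyrUp(gaussian[i + 1],
                       dstsize=(gaussian[i].shape[1], gaussian[i].shape[0]))
        laplacian.append(gaussian[i] - up)               # L_i = g_i - EXPAND(g_{i+1})
    laplacian.append(gaussian[-1])                       # keep the coarsest level as-is
    return gaussian, laplacian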
2.7 Multi-Scale Fusion
In terms of image fusion, equation (2.38) can be used for the simplest fusion of two sets of input images. However, this process will cause artifacts to appear in the final images. In this work, a fusion technique based on multi-scale Laplacian pyramid decomposition has been used to prevent this problem.
fusion(x, y) = Σ_{n=1}^{N} W_n(x, y) I_n(x, y) ……………………… (2.38)
For each input image version, the Laplace operator is used to obtain the first level of the pyramid. Then, by down-sampling that level, the second level of the image is created, and so on. Likewise, to get the Gaussian pyramid of the normalized weight image W̄_n, the weight map is filtered using the low-pass Gaussian filter kernel G, corresponding to each level of the Laplacian pyramid. So, each level l of the multi-scale fusion pyramid can be written as:

F_l(x, y) = Σ_{n=1}^{N} G_l{W̄_n(x, y)} · L_l{I_n(x, y)}

where G_l{·} and L_l{·} denote level l of the Gaussian and Laplacian pyramids, respectively. A code sketch of this fusion is given below.
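The fusion rule above can be sketched as follows for single-channel float inputs of equal size in the range 0-255. It reuses the build_pyramids helper from the earlier pyramid sketch; the number of levels and the final clipping are illustrative assumptions.

import cv2
import numpy as np

def multiscale_fusion(inputs, weights, levels=5):
    # inputs: list of N images; weights: list of N normalized weight maps W_bar_n.
    fused = None
    for img, w in zip(inputs, weights):
        _, lap = build_pyramids(img.astype(np.float32), levels)   # L_l{I_n}
        gauss_w = [w.astype(np.float32)]
        for _ in range(levels - 1):
            gauss_w.append(cv2.pyrDown(gauss_w[-1]))              # G_l{W_bar_n}
        blended = [gw * l for gw, l in zip(gauss_w, lap)]         # per-level product
        fused = blended if fused is None else [f + b for f, b in zip(fused, blended)]
    # Collapse the fused pyramid from the coarsest level back to full resolution.
    out = fused[-1]
    for lvl in range(levels - 2, -1, -1):
        out = cv2.pyrUp(out, dstsize=(fused[lvl].shape[1], fused[lvl].shape[0])) + fused[lvl]
    return np.clip(out, 0, 255).astype(np.uint8)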
2.8 Datasets
Underwater image enhancement has received a lot of attention due to its importance in different fields. In recent years, new underwater image enhancement algorithms have been suggested. However, these techniques are primarily tested on synthetic datasets or a small number of real-world images. As a result, it is unknown how these algorithms would perform on real-world images or how to track progress in the field [58].
performance of different algorithms used to enhance underwater images [59, 60]. The dataset is available at https://li-chongyi.github.io/proj_benchmark.html [60].
2.9.2 Patch Based Contrast Quality Index (PCQI)
Average intensity, signal strength, and signal structure are three adaptable and conceptually independent elements that are employed to represent every image patch. Although the original image may not have good contrast, it is considered the reliable source of structural information. Therefore, it is important to separate the representation of the structure from the average intensity and signal strength, so that the distortion of each can be measured independently. The suggested method not only predicts the test image's overall contrast quality, but also generates a local quality map that shows regional variations in quality over space. The overall index is obtained as the average over all patches of the product of the three components, PCQI = (1/M) Σ_{j=1}^{M} q_i(j) · q_c(j) · q_s(j), where q_i(i, j), q_c(i, j), and q_s(i, j) are the mean intensity, contrast change, and structural distortion components, respectively [62].
2.9.3 Average Gradient (AG)
Here m, n refer to the width and height of the image, and ∇xF(i, j), ∇yF(i, j) refer to the differences of F(i, j) along the x and y axes, respectively. More detailed information in the defogged image is obtained as the AG value increases [63].
evaluates the red-green (RG) and the yellow-blue (YB) opponent color components:

RG = R − G ………………………………………………… (2.43)

YB = (R + G)/2 − B ……………………………………… (2.44)
where N represents the total number of pixels of the component (RG), and these pixels are arranged as X₁ < X₂ < ⋯ < X_N. T_R = α_R · N and T_L = α_L · N are the numbers of the largest and smallest pixel values to be neglected, and µ_RG refers to the chroma intensity. An average value of the RG and YB color components close to zero indicates an effective white balance, meaning that none of the colors is dominant. A higher variance is associated with a wider dynamic range and is computed using the following equation:

σ²_RG = (1/N) Σ_{X=1}^{N} (Intensity_RG(X) − µ_RG)² ……………… (2.46)
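The following sketch computes the RG and YB components of equations (2.43)-(2.44) and the variance of equation (2.46); the trimming fractions alpha_l and alpha_r used for the trimmed mean are illustrative values, since equation (2.45) is not reproduced in this excerpt.

import numpy as np

def colorfulness_stats(rgb, alpha_l=0.1, alpha_r=0.1):
    img = rgb.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    rg = (r - g).ravel()                          # RG = R - G            (2.43)
    yb = ((r + g) / 2.0 - b).ravel()              # YB = (R + G)/2 - B    (2.44)
    stats = {}
    for name, chan in (("RG", rg), ("YB", yb)):
        x = np.sort(chan)
        t_l, t_r = int(alpha_l * x.size), int(alpha_r * x.size)
        mu = x[t_l:x.size - t_r].mean()           # asymmetric alpha-trimmed mean
        var = np.mean((chan - mu) ** 2)           # variance, eq. (2.46)
        stats[name] = (mu, var)
    return stats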
Underwater Image Sharpness Measure (𝑼𝑰𝑺𝑴)
Sharpness represents the edges and details of the image, and it is definitely
preferable to obtain an image with prominent edges. Underwater images
suffer from deterioration and attenuation as a result of absorption and
scattering. To measure the sharpness of the image, the enhancement measure
estimation (EME) was used:
EME = (2 / (n·m)) Σ_{k=1}^{n} Σ_{l=1}^{m} log(I_Max,k,l / I_Min,k,l) ………………… (2.48)

Here the image is divided into n × m blocks and the maximum and minimum pixel values are found in each one; I_Max,k,l / I_Min,k,l refers to the contrast ratio per block.
Underwater Image Contrast Measure (UIConM)
Log AMEE = (2 / (n·m)) Σ_{k=1}^{n} Σ_{l=1}^{m} [(I_Max,k,l − I_Min,k,l) / (I_Max,k,l + I_Min,k,l)] · log[(I_Max,k,l − I_Min,k,l) / (I_Max,k,l + I_Min,k,l)] ………… (2.50)
2.9.5 Underwater Color Image Quality Metric (UCIQE)
Underwater images suffer from high color intensity, low contrast, and deterioration. As an appropriate metric to measure the quality of an underwater image, the underwater color image quality metric (UCIQE), which is based on the chroma, contrast, and saturation measures in the Lab color space, is used:
Chapter Three
3.1 Introduction
maps are applied to the two output images. After that, the two images obtained from the contrast enhancement step are passed to the Laplacian pyramid, while the two images resulting from the weight maps are processed using the Gaussian pyramid, and lastly the two final images are fused using multi-scale fusion. The following figure shows the complete scheme of the proposed system.
Figure (3.2): Block diagram of the Proposed System.
3.2.1 Color Correction
Algorithm (3.1)// Color Compensation
3.2.1.1 White Balance
The primary goal of proper white balance is to make the natural colors of an image accurate, which is where the element "white" of the term comes from. In this work, the gray world algorithm is used.
This algorithm adjusts the illumination of the image. At first, the averages of the three channels are calculated; then the gain of each channel is calculated and the final values of each channel are found, in addition to estimating the illumination based on the gray assumption and balancing the colors of the RGB image.
In this work, after the color correction step, two versions of the enhanced image are derived for the fusion process: the first with image sharpening and the second with gamma correction.
3.2.2.1 Gamma Correction
It is a non-linear process that known as the power law transformation
and constitutes a major and important component of digital imaging devices.
It represents the relationship between illumination and pixels values Shadows.
Images captured by imaging equipment do not appear as they do to the human
eye, therefore the gamma correction was used to improve the exposure
process. When gamma value is less than one, the image tends to be darker,
and vice versa, it will appear brighter if the gamma value is greater than one,
while the output image looks the same in the case of the input if the gamma
value is one. The main steps of the gamma correction shown in the algorithm
(3.3).
correctly, since changing the gamma value affects not only the intensity of the
color image itself but the ratios of red to green to blue.
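A minimal sketch of the power-law transform S = c·r^γ on a normalized RGB image is shown below; the gamma value of 1.2 is illustrative only, not the value used in this thesis.

import numpy as np

def gamma_correct(rgb, gamma=1.2, c=1.0):
    r = rgb.astype(np.float64) / 255.0            # normalize intensities to [0, 1]
    s = c * np.power(r, gamma)                    # power-law transform, eq. (2.9)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)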
In this algorithm, a Gaussian filter is applied to the gray-world input image in order to reduce the noise in the image, so a more accurate and realistic image can be obtained. After that, histogram equalization is applied; it enhances the contrast by spreading the pixel intensity values over the dynamic range of the image.
Algorithm (3.4) // Homomorphic Filter
Input: Gamma corrected image, r, c // [r, c] are the image size
Output: Enhanced Image
Begin
Step 1: Convert the image to the log domain using equation (2.16)
Step 2: Convert the image to the frequency domain by the Discrete Fourier Transform (fast Fourier transform) using equation (2.19)
Step 3: Apply the high-pass filter using equation (2.32)
Step 4: Apply the inverse fast Fourier transform using equation (2.26)
Step 5: Invert the log transform using equation (2.29)
End.
The homomorphic approach is simple to apply and gives a good result, especially when the source image is captured in low illumination conditions. The homomorphic filter algorithm includes a logarithmic transform, the Discrete Fourier Transform (DFT), the filter H(u, v), the Inverse Discrete Fourier Transform (IDFT) and an exponential transform, depending on the type of filter. A minimal sketch of these steps is given below.
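The steps of algorithm (3.4) can be sketched as follows for a grayscale image. The filter parameters gamma_h, gamma_l, c and d0 are illustrative values, and log(1 + x) is used instead of log(x) to avoid taking the logarithm of zero; neither choice is taken from the thesis.

import numpy as np

def homomorphic_filter(gray, gamma_h=1.5, gamma_l=0.5, c=1.0, d0=30.0):
    img = gray.astype(np.float64) / 255.0
    z = np.log1p(img)                                   # Step 1: log domain
    Z = np.fft.fftshift(np.fft.fft2(z))                 # Step 2: centered FFT
    rows, cols = gray.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    d2 = u[:, None] ** 2 + v[None, :] ** 2              # D^2(u, v)
    H = (gamma_h - gamma_l) * (1 - np.exp(-c * d2 / d0 ** 2)) + gamma_l   # eq. (2.32)
    s = np.real(np.fft.ifft2(np.fft.ifftshift(Z * H)))  # Steps 3-4: filter and inverse FFT
    out = np.expm1(s)                                   # Step 5: invert the log transform
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)   # rescale for display
    return (out * 255).astype(np.uint8)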
3.2.4 Calculation Weights Maps
In this work, three weight maps are used; they are introduced in the following sections:
End.
In this algorithm the local Laplacian filter is used. At first, two parameters are defined for local enhancement: sigma for detail processing and alpha for contrast. Secondly, the intensity of the image is filtered by processing each channel separately. After that, the three channels are concatenated. Finally, the image is converted to grayscale.
In this step, the saliency weight map is used to increase the quality of the underwater image, because the other weight maps (Laplacian contrast, saturation) are not enough to improve the edge regions of the image.
The saturation weight is applied using equation (2.35). After that, the weight maps of each input image are combined into a single weight map. An aggregated weight map is obtained for every input using equation (2.36).
3.2.5 Dual Pyramid
Image information is highlighted at several spatial scales. A pyramid, based on multi-resolution analysis, is a data structure used in exposure fusion to process and analyze images at multiple spatial scales. In this work, the Gaussian and Laplacian pyramids are used.
This algorithm convolves the input image with a low-pass (binomial) filter and considers the result as the first level. After that, the result is subsampled by a factor of 2. Each subsequent level is created by filtering the previous level with the same binomial filter.
End for
End.
End for
// Pyramid reconstruction for each channel
For each level = 10 down to 2
Step 4: Add the two adjoining levels after resizing
End for
Step 5: Concatenate the three channels
Step 6: Display the final result
End.
For each input image version, the Laplace operator is used to obtain the first layer of the pyramid. Then, by down-sampling that layer, the second layer is created, and so on. Likewise, to obtain the Gaussian pyramid of the normalized weight image, the weight map is filtered using the low-pass Gaussian (binomial) filter, corresponding to each layer of the Laplacian pyramid. Finally, the inverse Laplacian transform is applied to the output to obtain an image with more details, based on the reconstructed Laplacian pyramid decomposition map.
Chapter Four
Experimental Results
4.1 Introduction
In this chapter, to prove the effectiveness of the proposed system, a detailed explanation of each step will be introduced. In the beginning, the colors are corrected using the color compensation and white balance algorithms; thus a less distorted image is obtained by eliminating the predominance of the green and blue colors and compensating for the loss of the red color. Then, the image is processed in two different ways. The image sharpening algorithm is used for edge detection and also to restore the color that has been lost in the input image; in the other branch, the gamma correction and homomorphic filter algorithms are used to make the image brighter and enhance the illumination. Then, weight maps are applied to both images separately for feature extraction and to highlight the details. Finally, the multi-scale fusion method has been adopted, using the Laplacian and Gaussian pyramids, to avoid the artifacts of the simple fusion method and to obtain a higher quality image. As a result, the suggested method gave better results in most cases.
4.2 System Specifications
The following figure shows samples of the images that are used in the first and second cases. Figure (4.1 a) presents samples of greenish, bluish and foggy underwater images that are used for comparison with recent papers, while figure (4.1 b) presents samples of underwater images that have been used to evaluate the proposed work.
4.4 Results and Evaluation Metrics
Figure (4.2) presents the results of three steps of our proposed method applied to a set of bluish images. The result of the gamma correction appears in the second column after applying the color correction algorithms to the input images, where the loss of the red color has been compensated. According to the visual results, the image appears darker after applying the gamma correction in an attempt to recreate correct colors, where all the image pixels are adjusted, although the resulting image may appear more saturated. The third column presents the result of the sharpening step; the edges of the objects (such as fish, divers, plants) as well as the texture of the images appear sharper and clearer to the human eye. The results of our suggested method in the last column show that the restored images are better than the input images in terms of clarity and color.
[Figure (4.2) image grid: rows Blue1 to Blue5; columns: Input Images, Gamma Correction, Sharpening Images, Fusion Results.]
Figure (4.2): The original bluish images are in the first column; the second column displays the results after applying the gamma correction algorithm; the third column represents the results after applying the sharpening algorithm; and the final results of the proposed method are in the last column.
In figure (4.3), a set of greenish images is used where the green color dominates the captured images. The visual results show a clear enhancement in adjusting the colors of the images and extracting the details of the scene from the objects and the background. The last column shows the final results obtained after the fusion process.
[Figure (4.3) image grid: rows Green1 to Green5; columns: Input Images, Gamma Correction, Sharpening Images, Fusion Results.]
[Figure (4.4) image grid: rows Fogy1 to Fogy5; columns: Input Images, Gamma Correction, Sharpening Images, Fusion Results.]
Figure (4.4): The original foggy images are in the first column; the second column displays the results after applying the gamma correction algorithm; the third column represents the results after applying the sharpening algorithm; and the final results of the proposed method are in the last column.
[Image grid for randomly selected underwater images: rows Random1 to Random5; columns: Input Image, Gamma Correction, Sharpening Image, Fusion Result.]
Despite the ability of the human eye to distinguish the accuracy and clarity of an image, it does not give accurate results that can be relied upon. Therefore, tables (4.1), (4.2), (4.3) and (4.4) depict the numerical scores in terms of the IE, UCIQE and UIQM metrics to provide an inclusive assessment of the effectiveness of the proposed method. The IE, UIQM and UCIQE results indicate high values for all types of images, where the values of IE and UIQM can be observed to be close across all types of selected images, with a slight difference in the third indicator.
[Tables (4.1) to (4.4): evaluation of the selected image sets in terms of the IE, UCIQE and UIQM metrics.]
Figure (4.6): The upper row displays the original bluish underwater images (Blue1 to Blue4) and the lower row displays the results of the proposed system.
Figure (4.7): The upper row displays the original greenish underwater images (Green1 to Green4) and the lower row displays the results of the proposed system.
Figure (4.8): The upper row displays the original foggy underwater images and the lower row displays the results of the proposed system.
Statistically, to compare the performance of the proposed system on the three groups of underwater images (bluish, greenish and foggy), three metrics are chosen to evaluate the underwater image quality: information entropy (IE), average gradient (AG) and the patch-based contrast quality index (PCQI). The primary purpose of AG is to denote the image's sharpness, IE mostly refers to the average amount of information and can be utilized to characterize how colorful underwater images are, and PCQI primarily assesses the human eye's ability to perceive contrast in underwater images.
Table (4.5): The results of the evaluation comparison with the bluish underwater images.

Images     Zhang et al. (2022)            Proposed system
           IE      PCQI    AG             IE      PCQI    AG
Blue1      7.908   1.181   10.89          7.8140  1.181   10.5386
Blue2      7.610   1.145   9.227          7.9039  1.1993  10.1335
Blue3      7.798   1.236   13.68          7.8831  1.1989  8.4881
Blue4      7.795   1.168   7.601          7.8583  1.199   11.3174
Average    7.777   1.182   10.34          7.8640  1.1942  10.119
Images     Zhang et al. (2022)            Proposed system
           IE      PCQI    AG             IE      PCQI    AG
Fogy1      7.936   1.183   8.85           7.7796  1.1989  10.5199
Fogy2      7.847   1.22    7.405          7.8221  1.1988  9.834
Fogy3      7.723   1.292   17.85          7.7918  1.199   9.3507
Fogy4      7.821   1.238   9.028          7.8678  1.1996  15.8179
Average    7.832   1.233   10.78          7.815   1.1985  11.38
Chapter Five
Conclusion and Suggestion for Future Work
5.1 Introduction
This work aims to create a system to restore underwater images through feature extraction. This part highlights some of the important conclusions that have been drawn from this study, in addition to some suggestions that can help to obtain further improved underwater images.
5.2 Conclusion
1. The proposed method is one of the approaches that achieved great success in restoring underwater images, as supported by the obtained results.
2. The proposed system used a set of real images selected from the Underwater Image Enhancement Benchmark (UIEB) dataset to evaluate the steps of the proposed work.
3. The proposed system used a white balance algorithm for the color correction of images, addressing the problem of the bluish-green tint that appears in underwater scenes.
4. The proposed system used two different and separate methods for contrast enhancement. The first method used the gamma correction and homomorphic filter algorithms; the second method used an image sharpening algorithm. This gives better results than processing the image in a single way.
5. Three weight maps (Laplacian contrast weight, saliency weight, saturation weight) were applied to both images: first, to overcome the defects resulting from the contrast improvement step, and secondly, for feature extraction from the underwater image.
6. The Gaussian filter is used for further feature extraction and is more appropriate in the fusion process.
7. The Laplacian filter is used to highlight and extract the features and details of an image.
8. The multi-scale fusion process was used instead of simple image fusion to overcome the defects resulting from the latter, as it collects the important information from a set of images and fuses them into a single image that contains all the valuable information.
Important points that can help to further improve this work in the future can be identified:
References
By
Safa Burhan Abdulsada
Supervised by
Asst. Prof. Dr. Asmaa Sadiq Abul-Jabbar