
Republic of Iraq

Ministry of Higher Education


and Scientific Research
Mustansiriyah University
College of Sciences
Department of Computer Science

Reconstruction of underwater images based on


features extraction

A Thesis
Submitted to the College of Sciences / Mustansiriyah University as
a Partial Fulfillment of the requirements for the Degree of Master
in Computer Science

By
Safa Burhan Abdulsada

Supervised By
Assistant Professor
Dr. Asmaa Sadiq Abul-Jabbar

2022 A.D. / 1444 A.H.

In the name of Allah, the Most Gracious, the Most Merciful

"We raise in degrees whom We will, and above every possessor of knowledge is one more knowing." (Yusuf 12:76)

Almighty Allah has spoken the truth
Supervisor Certification

I certify that this thesis, entitled "Reconstruction of underwater
images based on features extraction", carried out by "Safa Burhan
Abdulsada", was prepared under my supervision at the Department of
Computer Science, College of Science, Mustansiriyah University, as partial
fulfillment of the requirements for the degree of Master of Science in
Computer Science.

Signature:
Name: Dr. Asmaa Sadiq Abul-Jabbar
Title: Assistant Professor
Date: / /2022

Recommendation of the Head of the Department


In view of the available recommendations, I forward this thesis for
debate by the examination committee.

Signature:
Name: Dr. Thekra Hydar Ali Abbas
Title: Assistant Professor
Date: / /2022

Acknowledgments
I thank God for the blessing and success in completing
this work.
All appreciation and gratitude to my supervisor, Assist.
Prof. Dr. Asmaa Sadiq Abul-Jabbar, for her advice,
valuable time, continuous support, and effort.
My sincere thanks and gratitude to my mother and all
my family, who were my support and help in the hard
times, and to my friends who have been there to help
me from the beginning.
Special thanks to all the staff of the Computer Science
Department, Al-Mustansiriyah University, for their help
and efforts.

Safa Burhan Abdulsada

Dedication
To my wonderful mom
To my dear father, may God have mercy on him
To my support in this life, my brothers and sisters
To my dear friends
To everyone who helped me and wished me the
best

SAFA

Abstract

The physical properties of various underwater environments, such as
scattering, absorption, and the gradual loss of color, in addition to the effects
of marine plankton, turbid water, and other factors, are the main reasons for
the deterioration of underwater images. All of these factors contribute to the
difficulty of extracting features from underwater scenes. Due to the urgent
need and the widespread interest in image processing in various fields such as
marine science, archaeology, and investigations, this field has witnessed wide
activity and become a widely discussed topic.

In this work, underwater image restoration methods are addressed;
they are classified into two categories: restoration in the spatial domain
and restoration in the frequency domain. For each method in a particular
domain there is a corresponding method in the other domain, but previous
studies have shown that enhancing the image in the spatial domain is more
accurate and easier.

In this study, a new system is built to reconstruct underwater
images based on feature extraction. First, the image colors are improved
using color correction algorithms (color compensation and white balance).
Second, two different images are derived from the resulting image and
processed in two different ways: one is improved using an image sharpening
algorithm, while the other is processed using gamma correction; then both
images are filtered using a homomorphic filter. Next come the feature
extraction steps, applying four weight maps to the two images, and finally
both of them are fused using a multi-scale fusion process.

The performance of the proposed approach is validated using
comprehensive underwater images (bluish, greenish, and foggy) and
compared with recent work on the same dataset using a number of statistical
metrics. The visual and statistical results prove that the proposed
approach markedly outperforms the state-of-the-art techniques on the same
dataset.

List of contents

Chapter One: General Introduction

1.1 Introduction …………………………………………………………..1


1.1.1. Frequency domain based method ……………………………...5
1.1.2. Spatial domain based method ………………………………….6
1.1.3. Color Constancy based method ………………………………..7
1.1.4. Fusion based method …………………………………………..8
1.2. Literature Survey ……………………………………………………9
1.3. Problem statement ………………………………………………….18
1.4. Objectives ………………………………………………………….19
1.5. Thesis Outline ……………………………………………………….20

Chapter Two: Background Theories

2.1 Introduction ………………………………………………………...21

2.2 Color Correction ……………………………………………………21

2.2.1 Color Compensation ………………………………………….23

2.2.2 White Balance ……………………………………………...…25

2.3 Contrast Enhancement ………………………………….…………..26

2.3.1 Gamma Correction …………………………………………….27

2.3.2 Image Sharpening ………...…………………………………..30

2.4 Homomorphic Filter ...................................................................…... 34

2.5 Weights Maps Calculation …………………………………………..38

2.5.1 Laplacian Contrast Weight (𝑊𝐿 ) ………………………………38

2.5.2 Saliency Weight (𝑊𝑆 ) ……………………………………….....39

2.5.3 Saturation Weight (𝑊𝑆𝑎𝑡 ) ………………………………...........40

2.6 Dual Pyramid ……………………………………………………..... 41

2.6.1 Gaussian Pyramid ……………………………………………..42

2.6.2 Laplacian Pyramid …………………………………..……….. 43

2.7 Multi-Scale Fusion ………………………………………………… 45

2.8 Datasets ……………………………………………………………..46

2.9 Metrics of Performance Evaluation …………………………………47

2.9.1 Information Entropy ………………......................................... 47

2.9.2 Patch Based Contrast Quality Index …………………………..48

2.9.3 Average Gradient ……………………………………………..49

2.9.4 Underwater Image Quality Measure..........................................49

2.9.5 Underwater Color Image Quality Metric……………...………53

Chapter three: System Design and Implementation

3.1 Introduction …………………………………………………………54


3.2 The Framework of the Suggested System ………..............................55
3.2.1 Color Correction ………………………………………......…..57
3.2.1.1 Color Compensation ………………………………….....57
3.2.1.2 White Balance ...................................................................59
3.2.2 Contrast Enhancement ................................................................60

3.2.2.1 Gamma Correction ............................................................61
3.2.2.2 Image Sharpening ..............................................................62
3.2.3 Homomorphic Filter ...................................................................63
3.2.4 Weights Maps Calculation .............................................65
3.2.4.1 Local Contrast Weight .......................................................65
3.2.4.2 Saliency Weight …………………………………………66
3.2.4.3 Saturation Weight ……………………………………….67

3.2.5 Dual Pyramid …………………………………………………..68


3.2.5.1 Gaussian Pyramid ………………………………………...68
3.2.5.2 Laplacian pyramid ………………………………………..69

3.2.6 Multi-scale Fusion ……………………………………………..70

Chapter Four: Experimental Results

4.1 Introduction …………………………………………………………72

4.2 System Specifications ……………….……………………….73

4.3 Underwater Image Dataset ………………………………..73

4.4 Result and Evaluation Metrics ………………….…………………..75

4.5 Comparison with previous works …………………………………. 83

Chapter Five: Conclusion and Suggestion for Future Work
5.1 Introduction ………………………...………………………………89
5.2 Conclusion ………………………………………………………….89
5.3 Suggestion for future work …………………………………………90

List of Tables

(1.1) An overview of the literature survey …………………………………15
(4.1) Evaluation metrics of the set of bluish images ………………………81
(4.2) Evaluation metrics of the set of greenish images ……………………82
(4.3) Evaluation metrics of the set of foggy images ………………………82
(4.4) Evaluation metrics of the set of random images ……………………83
(4.5) Results of the evaluation comparison of the bluish underwater images …87
(4.6) Results of the evaluation comparison of the greenish underwater images …87
(4.7) Results of the evaluation comparison of the foggy underwater images …88
(4.8) Average of the evaluation comparison of (bluish, greenish, foggy) underwater images …88

List of Figures

(1.1) Underwater light absorption …………………………………………2
(1.2) UOI model ……………………………………………………………3
(1.3) The effect of artificial lighting in deep sea water ……………………4
(2.1) The underwater lighting conditions that can produce color changes …22
(2.2) The plot of the equation S = c r^γ with different values of γ (c = 1 in all cases) …28
(2.3) Applying the power-law transformation to an image …………………29
(2.4) Histogram equalization: a) the actual image, b) the image after histogram equalization, c) the histograms of the images in the middle …32
(2.5) Basic steps of the homomorphic filter ………………………………35
(2.6) Applying the Gaussian pyramid to the "Lady" image; the input image is at level zero …42
(2.7) Four steps of the Gaussian and Laplacian pyramids; the first row represents the Gaussian pyramid images, and the Laplacian pyramid is shown in the lower row, representing the difference between the considered level and the next one in the Gaussian pyramid …44
(3.1) Block diagram of the proposed system ………………………………56
(4.1) (UIEB) real underwater images: a) samples of underwater images used for comparison, b) samples of underwater images for this work …74
(4.2) The original bluish image in the first column; the second column displays the result after applying the gamma correction algorithm; the third column shows the result after applying the sharpening algorithm; the final result of the proposed method is in the last column …76
(4.3) The original greenish image in the first column; the second column displays the result after applying the gamma correction algorithm; the third column shows the result after applying the sharpening algorithm; the final result of the proposed method is in the last column …78
(4.4) The original foggy images in the first column; the second column displays the result after applying the gamma correction algorithm; the third column shows the result after applying the sharpening algorithm; the final result of the proposed method is in the last column …79
(4.5) The original random images in the first column; the second column displays the result after applying the gamma correction algorithm; the third column shows the result after applying the sharpening algorithm; the final result of the proposed method is in the last column …80
(4.6) The upper row displays the original bluish underwater images and the lower row displays the results of the proposed system …84
(4.7) The upper row displays the original greenish underwater images and the lower row displays the results of the proposed system …85
(4.8) The upper row displays the original foggy underwater images and the lower row displays the results of the proposed system …85

List of Abbreviations

UOI Underwater Optical Imaging


HE Histogram Equalization
AHE Adaptive Histogram Equalization
HCHE Hybrid Cumulative Histogram Equalization
CLAHE Contrast Limited Adaptive Histogram Equalization
WB White Balance
GW Gray World
DFT Discrete Fourier Transform
IDFT Inverse Discrete Fourier Transform
𝑊𝐿 Laplacian Contrast Weight
FLLF Fast Local Laplacian Filter
𝑊𝑆 Saliency Weight
𝑊𝑆𝑎𝑡 Saturation Weight
RUIE Real world Underwater Image Enhancement
UIQS Underwater Image Quality Set
UCCS Underwater Color Cast Set
UHTS Underwater Higher-level Task-driven Set
IE Information Entropy
PCQI Patch based Contrast Quality Index
AG Average Gradient
UIQM Underwater Image Quality Measure

𝑈𝐼𝐶𝑀 Underwater Images Colorfulness Measure


𝑈𝐼𝑆𝑀 Underwater Image Sharpness Measure
UIConM Underwater Image Contrast Measure
UCIQE Underwater Color Image Quality Evaluation Metric
UIEB Underwater Image Enhancement Benchmark

Chapter One

General Introduction

1.1 Introduction

When images are captured in a turbid medium, such as underwater,
cloudy, or noisy environments, the scene's visibility is dramatically reduced.
This is because scattering in the medium has a direct effect on the
brightness of a point in the scene. Objects in the background and particles in
underwater environments, in particular, suffer from low visibility, poor
contrast, and color loss. Recently, there has been widespread interest in
restoring images degraded by such environmental conditions. Recovery of
these types of degraded images is critical for a variety of applications,
such as marine biology research, archaeology, ocean engineering, and
surveillance [1].
The two most influential factors in the quality of underwater
images are absorption (which removes light energy) and scattering (which
changes the light direction). Some wavelengths of light are absorbed faster
than others; light with different wavelengths has different attenuation. As
shown in figure (1.1), red light, with the lowest frequency and longest
wavelength, is absorbed first and disappears at a depth of about 5 meters,
followed by orange, which disappears at a depth of about 10 meters, yellow at
about 20 meters, and green at about 30 meters; blue, which has the shortest
wavelength, can spread deepest underwater. As a result of this difference in
absorption, images captured underwater have a cyan color cast [2, 3].

Figure (1.1): Underwater light absorption [2]

The scattering of light in the underwater medium falls into two
categories: forward scattering (light randomly reflected from the object
toward the camera), which leads to foggy underwater images, and backward
scattering (light deviated by the water into the camera lens), which generally
affects the resolution of the underwater image. As shown in figure (1.2),
water is not the only cause of absorption and scattering; floating particles
visible underwater and dissolved organic materials also play a major role in
increasing these effects [4].

Figure (1.2): Underwater optical image [4]

One way to increase underwater visibility is to add an artificial light
source to the imaging devices to provide the necessary lighting for the dark
underwater scene, as shown in figure (1.3); however, this adds its own
complication to the problem. Artificial light tends to produce non-uniform
illumination, resulting in a bright spot in the middle of the scene [5].

Figure (1.3): Effect of artificial lighting in deep sea water [5]

Underwater images can be improved using special software and
hardware. Hardware devices such as cameras can be very expensive, consume
a lot of energy, and are not suitable for all underwater environments. Therefore,
researchers have suggested many theories and algorithms to improve
underwater images; these can be divided into two groups: image restoration
and image enhancement, each with its benefits and drawbacks.
In image restoration approaches, real scenes are recovered from degraded
underwater images using a physical model, giving more realistic images.
Image enhancement approaches, which are not based on a physical
model, enhance the quality of the original scene using image
processing methods, and the resulting image is not the real one. These methods
are often faster, simpler, and better suited to image processing [6, 7].

Recently, methods for enhancing underwater images have been
classified according to the physical properties of water, such as color contrast,
dehazing, and color cast. Generally, these methods are divided into four
categories: methods based on the frequency domain, methods based on the
spatial domain, methods based on color constancy, and methods based on
fusion [6, 8].

1.1.1 Frequency Domain based Method

In the frequency domain, the low-frequency components correspond to flat
background regions and the high-frequency components represent the edges in
the image. Transform-domain methods such as the homomorphic filter, the
wavelet transform, high-pass filters, and low-pass filters are applied to process
underwater images. The homomorphic filter treats the image as a combination
of a reflectance component and an illumination component; it is used to
improve poor illumination and to separate high- and low-frequency components
by applying a logarithmic transformation. The wavelet transform decomposes
the image into waves, producing different scales of information. Low-pass
filtering has been used as a noise removal method by suppressing
high-frequency components, while high-pass filtering has been used as a detail
preservation method by suppressing low-frequency components [2, 6, 8].
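To make the homomorphic filter concrete, the following is a minimal NumPy sketch of the idea described above: take the logarithm, filter in the frequency domain with a high-emphasis transfer function, invert the transform, and exponentiate. The Gaussian transfer function and the parameter names (gamma_l, gamma_h, d0) are illustrative assumptions, not the exact filter used in this thesis.

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=1.5, d0=30.0):
    """High-emphasis homomorphic filtering of a grayscale image in [0, 1].

    Attenuates low frequencies (illumination) by gamma_l and boosts
    high frequencies (reflectance) by gamma_h.
    """
    rows, cols = img.shape
    # The log transform turns the multiplicative illumination-reflectance
    # model into an additive one, so the two can be filtered separately.
    log_img = np.log1p(img.astype(np.float64))
    # Centered frequency-domain representation.
    F = np.fft.fftshift(np.fft.fft2(log_img))
    # Gaussian high-emphasis transfer function.
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H = (gamma_h - gamma_l) * (1 - np.exp(-D2 / (2 * d0 ** 2))) + gamma_l
    # Filter, invert the transform, and undo the log.
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
    return np.clip(np.expm1(filtered), 0.0, 1.0)
```

Choosing gamma_l below 1 and gamma_h above 1 suppresses the slowly varying illumination while boosting reflectance detail.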

1.1.2 Spatial Domain based Method

In the spatial domain, image contrast is enhanced by highlighting
details and extracting important information to improve human visual
perception. The spatial domain is the set of pixels that make up an image.
Image enhancement operators work to improve the detection of details of
interest by humans or machines; these operators play a main role in edge
sharpening, noise suppression, and smoothing. Spatial domain methods use
spatial filters that generally involve direct manipulation of the pixel values
at a specific point in the image or in the neighborhood around the point
(i, j). A spatial domain process is defined by the following equation [9]:

g(i, j) = T[f(i, j)] ………………………………………………………… (1.1)

where f(i, j) represents the source image, g(i, j) represents the result
image, and T is an operator defined over a fixed neighborhood around
location (i, j). Some contrast enhancement methods are:

a. Histogram Equalization (HE): A traditional contrast enhancement
method that has a clear effect in improving normal images by spreading
the intensity values. However, it suffers from noise in some areas [8].
b. Adaptive Histogram Equalization (AHE): Based on histogram
equalization, it redistributes the brightness values of the image and is
appropriate for local contrast enhancement, noise suppression, and
highlighting of details [10].
c. The Hybrid Cumulative Histogram Equalization (HCHE): It can be
used to overcome the drawback of losing detail in dark regions and the
problem of over-enhancement. However, the global histogram
equalization problem still exists due to noise amplification in relatively
homogeneous regions [10].
d. Contrast Limited Adaptive Histogram Equalization (CLAHE): It
solves the problem of noise amplification in relatively homogeneous
regions by working on small areas, calculating a histogram for each
section of the image, and using them to redistribute the image's
brightness values [11].
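As an illustration of method (a) above, the following is a minimal sketch of global histogram equalization for an 8-bit grayscale image (pure NumPy; the function name is illustrative). CLAHE in method (d) applies the same mapping per tile, with a clip limit on each local histogram.

```python
import numpy as np

def histogram_equalization(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero cumulative count
    # Map each gray level through the normalized cumulative histogram,
    # stretching the occupied intensity range onto [0, 255].
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]
```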

1.1.3 Color Constancy based Method

The purpose of applying color constancy to an image is to reduce the
effect of the (potentially non-white) illumination in order to produce color
data that more accurately describes the image's physical content. Generally,
color constancy amounts to estimating the illuminant of the scene. Once such
data has been computed, it can be used to render the scene as it would appear
under a canonical light source [12]. Some color correction methods are:

a. Gray World Method: The easiest way to achieve color constancy is to
compute a single statistic from the entire image, which is then used as
an estimate of the lighting, assumed uniform over the entire area of
interest. In physical terms, this theory assumes that the mean of all the
scene's reflectances is gray [12].
b. White Balancing: Because of the medium's attenuation properties,
white balancing is used to remove the unwanted color cast. Its main
purpose is to compensate for the loss of red color [8].
c. Retinex Method: Retinex is a color correction approach that uses
color theory to improve image clarity and color fidelity. It is relatively
complicated and requires more computing time. It is mostly used in
image enhancement and defogging [8].
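The gray world method in item (a) reduces to scaling each channel toward the image's global mean. The following sketch assumes an RGB image with values in [0, 1]; the epsilon guard and final clipping are implementation conveniences, not part of the method's definition.

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance for an RGB image in [0, 1].

    Scales each channel so its mean matches the global mean,
    assuming the average scene reflectance is achromatic (gray).
    """
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gray = channel_means.mean()
    # Per-channel gain pulls every channel toward the common gray level;
    # a tiny epsilon avoids division by zero on a dead channel.
    gains = gray / (channel_means + 1e-8)
    return np.clip(img * gains, 0.0, 1.0)
```

For underwater images this boosts the attenuated red channel the most, since its mean is usually far below the global mean.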

1.1.4 Fusion based Method

The fundamental fusion rules are based on the Laplacian or
Gaussian pyramid. First, the two images to be processed are resampled
to the same size; then a forward wavelet transform separates them into
sub-images that have the same resolution at the same level but different
resolutions at different levels. Information fusion is then carried out
using the high-frequency sub-images of the decomposed images, and
finally the resulting image is produced using the inverse wavelet
transform. The result of image fusion is a single image better suited
to both human and artificial perception, as well as to other image
processing tasks [2, 13].
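The pyramid-style fusion described above can be sketched as follows. For simplicity, a 2x2 box filter stands in for the Gaussian smoothing kernel and nearest-neighbour expansion stands in for proper interpolation; the function names and the single weight map are illustrative assumptions, not the exact scheme used in this thesis.

```python
import numpy as np

def _downsample(img):
    """2x2 box-filter then decimate: a crude Gaussian-pyramid step."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def _upsample(img, shape):
    """Nearest-neighbour expansion back to the given shape."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def fuse_laplacian(a, b, weight_a, levels=3):
    """Fuse two same-size grayscale images with a Laplacian pyramid.

    Each Laplacian level of the output is a weighted blend of the inputs'
    Laplacian levels; the Gaussian pyramid of the weight map drives the
    blend at every scale, then the pyramid is collapsed back to an image.
    """
    ga, gb, gw = [a], [b], [weight_a]
    for _ in range(levels):
        ga.append(_downsample(ga[-1]))
        gb.append(_downsample(gb[-1]))
        gw.append(_downsample(gw[-1]))
    # Blend the coarsest Gaussian level directly.
    fused = gw[-1] * ga[-1] + (1 - gw[-1]) * gb[-1]
    for k in range(levels - 1, -1, -1):
        # Laplacian level = Gaussian level minus the upsampled coarser level.
        la = ga[k] - _upsample(ga[k + 1], ga[k].shape)
        lb = gb[k] - _upsample(gb[k + 1], gb[k].shape)
        blended = gw[k] * la + (1 - gw[k]) * lb
        fused = _upsample(fused, ga[k].shape) + blended
    return fused
```

When the weight map is 1 everywhere the pyramid collapses back to the first input exactly, which is a handy sanity check for any implementation of this scheme.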

1.2 Literature Survey

In recent years, scholars in different fields, such as marine research,
robotics, and underwater archaeology, have turned their attention to
underwater image restoration and enhancement. As a result, several
techniques have been devised to enhance underwater images. Some related
research is introduced below:

Ancuti et al. [2011] produced a novel technique for enhancing
the visibility of underwater images. It is a single-image approach based
on fusing a series of input images derived from the source image. The
method was designed to overcome the various problems of underwater
images using weight maps that distinguish areas with low visibility, and
consists of three phases. First, several input images containing the
details of interest are derived and carried into the restored result. Second,
weight maps that evaluate the locally important information are defined.
Lastly, a classical multi-scale fusion approach combines them into the
final output. The advantage of this approach is that the underwater image
can be improved reliably even if the distance map has not been computed
beforehand. The results indicate that the suggested technique is simple
and does not need additional information [14].

Ancuti et al. [2012] proposed a new method to improve
underwater images and videos based on a fusion approach applied to a series
of input images and weight measures extracted from the degraded image.
Two input images were obtained by color correction and contrast
enhancement to overcome the obstacles of the underwater environment, and
weight maps were applied to enhance the visibility of the underwater scene.
This approach is suitable for improving a single underwater image and does
not need specialized hardware or any knowledge about the physical
characteristics of the underwater medium [15].

Gao et al. [2016] suggested a bright channel prior for underwater
scenes, derived from the idea of the dark channel prior, to enhance and
restore underwater images. In the first step, the underwater image is
restored by evaluating and correcting the bright channel, estimating the
atmospheric light, and estimating and refining the transmittance images.
In the second step, histogram equalization is applied to correct the
color distortion. The experimental results proved that the suggested
approach efficiently increases image quality [7].

Biswas [2017] introduced a simple fusion-based underwater image
enhancement technique to improve the quality of underwater images. It
aims to enhance the color and image contrast using auto white balance
and contrast stretching. This method greatly improved the visibility of
underwater images [16].

Zhang et al. [2017] suggested a new technique to improve
underwater images based on color correction and illumination adjustment.
First, an effective color enhancement technique is adopted to remove the
color cast. Then a Retinex model is used for illumination adjustment, by
estimating an illumination map and applying gamma correction to it. The
visual results indicated good performance of the method with low processing
complexity [17].

Ancuti et al. [2018] suggested a novel method to correct the
degradation in underwater images. Two input images are obtained from the
initial degraded input by color compensation and a white-balanced version.
Then the two input images and their weight maps are fused to enhance color
contrast and to transfer edges to the output image. Quantitative and
qualitative evaluation shows that the enhanced images and videos have better
exposure of dark areas, sharper edges, and improved global contrast [18].

Luo et al. [2019] proposed an effective technique to enhance
underwater images in both the spatial and frequency domains, addressing
problems such as unbalanced illumination, blurred images, and poor contrast
caused by noise. In the first step, the local contrast of the various regions of
the input image is enhanced, based on the pixel distribution in each area, by
applying the Contrast Limited Adaptive Histogram Equalization (CLAHE)
method in the spatial domain. In the second step, in the frequency domain,
homomorphic filtering is used to reduce noise and improve the details of the
processed image [3].

Sethi et al. [2019] suggested a new method called Fusion of
Underwater Image Enhancement and Restoration (FUIER) to improve and
restore underwater images; it can handle different conditions such as color
cast removal, contrast enhancement, and dehazing. Two single images are
created: the first by applying histogram equalization to the RGB channels,
the second by contrast stretching of the RGB channels followed by dehazing.
Finally, the two images are fused using a Laplacian pyramid to get the
optimized output image [19].

Patel et al. [2020] suggested a way to enhance underwater
images using color balance and Laplacian-Gaussian pyramid fusion. The
method seeks to correct the color distortion by removing the bluish-green
tint of the underwater image caused by the various distortions resulting from
light attenuation. After color balancing, Laplacian and Gaussian pyramid
fusion is performed to obtain the final enhanced images. The results show
the superiority of this fusion method over modern methods [20].

Xiong et al. [2020] developed a model for enhancing a single
underwater image by mathematically extending the Beer-Lambert law. In
this model, the mean and variance of the original images are used as a basis
for correcting the color casts of underwater images. To restore the best
image detail, an efficient two-step strategy is used: first, a linear model
associated with the variance and mean locates the image areas that contain
more detail; second, to prevent partial over-enhancement and restore more
detail, a nonlinear adaptive weighting scheme is applied using the location
information extracted in the first step. The experimental results proved that
this approach achieves preferable structural restoration and good color
correction [21].

Zhang et al. [2020] suggested a new technique for single
underwater image enhancement that uses fusion technology to overcome
underwater image problems (for example, color deviation, faint contrast,
and blurring). First, dark channel prior dehazing and a white balance
algorithm preprocess the original image; then color correction and contrast
enhancement are applied to obtain two single images. In the final step, the
enhanced image is obtained using a multi-scale fusion strategy that fuses
features based on contrast, saliency, and exposure weight maps. The results
indicated good performance [22].

Sun et al. [2020] introduced an image enhancement algorithm to
solve the problems of the underwater environment using dark channel prior
theory and an underwater imaging model. A new technique is proposed that
adds color compensation and transmission to the B and G channels. The
experimental results showed that this method has a clear effect on improving
underwater images with less color deviation [23].

Zhou et al. [2020] suggested a new technique to enhance
underwater images by adopting bi-interval histogram equalization. Their
method comprises three stages: color correction, contrast enhancement, and
multi-scale fusion. In the color correction step, white point detection and
white point adjustment are applied. Second, the input image is decomposed
into high- and low-frequency components using homomorphic filtering; the
low-frequency component is improved by gamma correction while the
high-frequency one is enhanced by gradient-field bi-interval equalization.
Finally, multi-scale fusion is performed on the enhanced images to get the
final result. The qualitative and quantitative results indicate effective
enhancement, especially in dark areas [24].

Zhang et al. [2021] proposed a simple linear fusion approach to
enhance the high- and low-frequency components of underwater images,
using color correction and bi-interval contrast enhancement. First, a color
correction technique handles the color distortion; then the L channel is
separated into low- and high-frequency components using a Gaussian
low-pass filter. Finally, a bi-interval histogram based on an optimal
equalization threshold strategy and an S-shaped function is employed to
improve the details and image contrast. The quantitative and qualitative
results are good [25].

Xu et al. [2021] also introduced a Retinex-based underwater image
enhancement technique. First, the reflectance image is obtained by using a
bilateral filter to estimate the illumination image. Second, an
attenuation-map-guided gray world method is implemented to overcome
color distortion, and gamma correction improves the illumination image. In
the final step, the output image is obtained after image fusion as
post-processing. Qualitative and quantitative evaluation showed that the
achieved results perform well [26].

Table (1.1) summarizes the prior studies based on the above
review:

Table (1.1): An Overview of the Literature Survey

No. | Author | Technique | Advantage | Disadvantage

1. Ancuti et al. [2011]
   Technique: fusion of a series of single input images.
   Advantage: easy and clear; does not need extra information such as hardware; the results can be easily evaluated.
   Disadvantage: noise amplifies with increasing depth.

2. Ancuti et al. [2012]
   Technique: weight maps and a fusion approach with two input images.
   Advantage: can be used with various devices; the implementation time is very short; clear improvement of contrast, color, and image details.
   Disadvantage: some limitations for images captured in deep scenes with strong artificial lighting.

3. Gao et al. [2016]
   Technique: color correction and histogram equalization.
   Advantage: can effectively improve image quality.
   Disadvantage: over-equalizing of one channel (the red channel, for example).

4. Biswas [2017]
   Technique: fusion based on white balance and contrast stretching.
   Advantage: clear, easy-to-understand results with no green or blue tint, even for images taken at depth.
   Disadvantage: difficult to apply to videos in the future.

5. Zhang et al. [2017]
   Technique: Retinex approach.
   Advantage: improves the color and visibility of the image and gives a natural appearance.
   Disadvantage: low accuracy in deep-sea research and operations.

6. Ancuti et al. [2018]
   Technique: fusion based on color compensation and white balance.
   Advantage: visibly improves image quality, contrast, color, and detail.
   Disadvantage: the color cannot be completely restored and some fog remains, especially in scenes far from the camera.

7. Luo et al. [2019]
   Technique: CLAHE and homomorphic filter.
   Advantage: optimizes color, brightness, and contrast while preserving image detail.
   Disadvantage: the MSE and PSNR results are not optimal.

8. Sethi et al. [2019]
   Technique: fusion based on histogram equalization and contrast stretching.
   Advantage: the color, contrast, and blurring of the resulting image are improved.
   Disadvantage: with heavy fog, it is difficult to restore the colors of the image.

9. Patel et al. [2020]
   Technique: Gaussian and Laplacian fusion.
   Advantage: improves the quality of images.
   Disadvantage: some results still include fog, in addition to the lack
9 et.al laplacian fusion addition to the lack
underwater and blurry of statistical results.
[2020] pyramid
images on land
The restored image is better
Xiong Beer Lambert
in terms of color correction Some low statistical
10 and et.al Mathematical
with the least time results
[2020] Law
consumed
Zhang Fusion based on Correcting the color,
11 The proposed
and et.al weight maps contrast and appearance of approach was applied

16
[2020] the image, in addition to on hazy underwater
images only
noise suppression

Effectively improve the


sun and Some results with
dark channel image quality in terms of low quality and
12 et.al
prior theory clarity, contrast and clarity
[2020]
saturation
color correction, There are defects in
contrast the color cast,
Zhou et.al Increase the number of
13 enhancement, especially in images
[2020] local features
and multi-scale taken in the depths of
fusion the sea
linear fusion
Suppressing fog and It is not possible to
Zhang based on Color
improving blurry images in get good results in
14 and et.al correction and
addition to improving images with high
[2021] Bi-interval
visibility and color noise
histogram
Xu and Improve image quality in
Retinex There were over
15 et.al terms of color, contrast, and compensation in
approach
[2021] saturation some experiments

17
1.3 Problem Statement

As a general observation, the quality of an image captured in water is
constantly diminished. It loses the true tonal quality and contrast required to
recognize the image's object of interest. When the pixel intensity levels of
neighboring objects differ by only a few gray levels, the issue becomes more
difficult. This condition makes it difficult to extract finer details, and the
efficiency of the algorithms required to effectively extract data from the
images suffers as a result.

So, there is an urgent need for underwater images to be processed in
such a way that their tonal details are accurately represented. Underwater
imaging is widely used in different fields, such as aquatic life research, water
quality monitoring, military and security applications, and so on. Therefore,
any images or videos acquired to meet these objectives must be extremely
detailed.

1.4 Aims of the Thesis

This thesis aims to improve underwater image enhancement techniques so that
they are appropriate to the different circumstances of the underwater
environment. The following are the main objectives of this study:

1. Study and discuss the impact of light reflection in the aquatic
environment and how it affects image quality degradation.

2. Study the derivation of two images from the white-balanced version of
the source image, applying a series of independent processes to
enhance the color and contrast of the final image. At last, the two images
are fused based on the multi-scale fusion algorithm.

3. Compare the results with other state-of-the-art methods.

1.5 Thesis Outline

The remainder of this thesis is organized as follows:

Chapter 2: Introduces previous literature on underwater picture restoration
and background information on pre-processing and the various processing
techniques implemented in the thesis.

Chapter 3: Depicts the structure of the suggested system, discusses the
techniques for processing underwater images and feature extraction, and
describes the algorithms that were designed to meet the purpose.

Chapter 4: Introduces the benchmark dataset, describes the measurements
used to report the results of the research study, and evaluates the proposed
system's performance. Finally, the experimental results are listed and
assessed.

Chapter 5: Summarizes the main conclusions of the study as well as the final
test results, and offers some recommendations for future work.

CHAPTER TWO

Background Theories

2.1 Introduction
Underwater images suffer from color attenuation, foggy details, faint
contrast, and a bluish or greenish color cast resulting from light scattering
and absorption in the water environment. These factors, in turn, affect
underwater image quality. Image processing is a method for improving the
visibility of the input image so that it is clear to the recipient. This involves
increasing the density of the image, sharpening the edges, correcting color,
removing noise, filtering, and so on. In this chapter, the methods that have been
used to improve underwater images will be presented, including methods in
both the spatial domain and the frequency domain.

2.2 Color Correction

Because lighting conditions change severely in a turbid and refractive
medium such as the ocean, retrieving the true colors, or at least predictable
colors, of underwater images is a highly tough challenge for imaging systems.
Apart from sky reflection, the bright blue color of clear ocean water is caused
by selective absorption by water molecules. The water's filtration
characteristics are determined by its purity; the water becomes greener as
suspended and dissolved materials increase. The nature of the available light
is also influenced by the time of day and the cloud cover of the sky. Another
consideration is depth, since light is attenuated progressively as it travels
downward. Because of the nature of the underwater environment, red light
fades as depth increases, resulting in blue to grey-like images, as shown in
figure (2.1). Despite the above, the full effects of water can never really be
known, because many (if not all) of the above factors are always changing
[22,27].

Figure (2.1): The underwater lighting conditions that can produce color
changes [27]

Color correction of underwater images or video is a crucial challenge
in all image-based applications such as navigation, documentation, 3D
imaging, and so on. For these reasons, a number of image correction
approaches have been proposed. These methods have the advantage of
requiring no knowledge of the medium's physical parameters, although several
image changes can be done manually (such as histogram stretching) or
automatically by algorithms based on criteria proposed by computational
color constancy approaches. One of the most widely used criteria is the gray-
world hypothesis, which assumes that the acquired image's average color
must be gray [28].

2.2.1 Color Compensation
The color compensation method eliminates artifacts caused by a
significantly non-uniform color spectrum distribution in photographs shot in
cloudy nighttime situations, underwater, or under non-uniform lighting. The
method is based on the hypothesis that, in these challenging circumstances, at
least one color channel's information is nearly completely lost, making typical
boosting methods susceptible to noise and color shifting [29].

Compensation for the loss in the red channel is based on the following
observations:

1- Compared with the red and blue channels, the green channel is still
well preserved underwater. In clear water, light with a long
wavelength, such as red light, loses energy quickest.
2- Since the green channel provides information about the opponent
color of the red channel, it is necessary to compensate for the
greater attenuation of red compared to green. As a result, a portion
of the green channel is added to the red one to compensate for the
red channel's attenuation. Initially, a portion of both blue and green
was added to the red color; however, it has been observed that using
only the green channel information provides the best way to restore
the entire color spectrum and to preserve the natural color of the
background (water areas).
3- According to the gray-world hypothesis (channels have equal mean
values before attenuation), the difference between the mean values
of green and red indicates the contrast/unbalance between the red
and green attenuation, so the compensation must be proportional to
this difference.
4- The improvement of red must affect pixels with small red values and
should not alter pixels that already contain a strong red component,
in order to prevent saturation of the red channel during the gray-
world step that follows the red loss compensation. In other terms,
the information from the green channel should not be transferred to
areas where the information from the red channel is still significant.

Mathematically, the red channel I_rc is adjusted at each pixel location (x)
according to the above observations using the following equation:

I_rc(x) = I_r(x) + α · (Ī_g − Ī_r) · (1 − I_r(x)) · I_g(x) …… (2.1)

where I_r and I_g refer to the red and green channels of image I, respectively,
each channel being in the range [0, 1] after normalizing by the upper bound
of its dynamic range, while Ī_g and Ī_r refer to the mean values of the green
and red channels, and α is a constant value [18, 30].
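Equation (2.1) can be sketched in a few lines; the clipping to [0, 1] and the default value of α are assumptions, since the thesis gives no specific constant here:

```python
import numpy as np

def compensate_red(img, alpha=1.0):
    """Compensate red-channel attenuation following Eq. (2.1).

    img: float RGB array normalized to [0, 1]. alpha is the constant
    from the equation (its exact value here is an assumption).
    """
    r, g = img[..., 0], img[..., 1]
    r_mean, g_mean = r.mean(), g.mean()
    # I_rc(x) = I_r(x) + alpha * (mean(I_g) - mean(I_r)) * (1 - I_r(x)) * I_g(x)
    r_comp = r + alpha * (g_mean - r_mean) * (1.0 - r) * g
    out = img.copy()
    out[..., 0] = np.clip(r_comp, 0.0, 1.0)
    return out
```

Note how the (1 − I_r(x)) factor leaves strong red pixels almost untouched, exactly as observation 4 requires.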

2.2.2 White Balance (WB)
A color cast is an excessive presence of an undesirable color that affects
the entire photographic image, where the degree of color discrimination and
identification underwater is associated with depth. The most important
difficulty with underwater images is that the blue-green tint must be
corrected. White balancing is used to compensate for the color cast created
by the absorption of colors with depth, where the essential role of the white
balance is to make the neutral colors of an image accurate. Some of the color
balancing algorithms are Gray World, Sensor Correlation, and Robust Auto
White Balance [31, 32].

 Gray World (GW)

The gray world method is based on the idea that the average reflectance of a
scene with rich color variation is achromatic. That is, the averages of the three
color channels, R_avg, G_avg, and B_avg, should all be equal [33].

After color compensation, the GW algorithm is applied to the image.
This approach assumes that the average value of every channel of a color
image (RGB) is gray:

K = (R̄ + Ḡ + B̄) / 3 …… (2.2)

where R̄, Ḡ, B̄ are the average components of the (R, G, B) channels. After
that, the gains of the three channels are calculated using the following
equations:

K_R = K / R̄ …… (2.3)

K_G = K / Ḡ …… (2.4)

K_B = K / B̄ …… (2.5)

Then the resulting value for each pixel of the RGB channels in the image is
calculated by the following equations:

R′ = R × K_R …… (2.6)

G′ = G × K_G …… (2.7)

B′ = B × K_B …… (2.8)

where R, G, B represent the original pixel values of the image and R′, G′, B′
represent the adjusted values [18, 33, 34].
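The gray-world steps of Eqs. (2.2)-(2.8) translate directly into code; the final clipping is an added assumption to keep values displayable:

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance, Eqs. (2.2)-(2.8).

    img: float RGB array in [0, 1]. A sketch; real pipelines may clip
    or rescale differently.
    """
    means = img.reshape(-1, 3).mean(axis=0)   # R_bar, G_bar, B_bar
    k = means.mean()                          # K = (R_bar + G_bar + B_bar) / 3
    gains = k / means                         # K_R, K_G, K_B, Eqs. (2.3)-(2.5)
    return np.clip(img * gains, 0.0, 1.0)     # R' = R*K_R, etc., Eqs. (2.6)-(2.8)
```

Applied to a greenish-blue underwater frame, the gains boost the weak red channel and attenuate the dominant ones until all three channel means coincide.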

2.3 Contrast Enhancement

Image enhancement aims to improve the quality and contrast of the
considered image. It is a significant aspect of any individual's assessment of
image quality. Contrast enhancement techniques are widely employed in a
variety of applications where image quality is meaningful. The amount of
color or gray-level separation available among the different components of the
considered image is called the image contrast; it thus refers to the brightness
range of the image. Compared to images with a lower contrast level, images
with a higher contrast level generally show a greater range of colors. Contrast
enhancement is a technique for making visual characteristics more visible by
maximizing the colors available on the device. Several contrast enhancement
methods have been suggested in recent years to enhance the contrast of
images for diverse uses, such as gamma correction, histogram equalization,
global histogram equalization, local histogram equalization, adaptive
histogram equalization, and contrast-limited histogram equalization [35, 36].

2.3.1 Gamma Correction

Gamma correction is a nonlinear adjustment applied to all pixel values.
Usually, linear methods such as addition, subtraction, and multiplication are
performed on each pixel in image normalization. Gamma correction, in
contrast, is a nonlinear method applied to the pixels of the input image and
can cause saturation of the image being altered. It is important to keep the
gamma value moderate, i.e., it should not be a very small or a very large
value [37].

Gamma correction is described by a power law, with γ denoting the
exponent. The basic form of the power-law transformation is:

s = c · r^γ …… (2.9)

where c and γ are positive constants. Equation (2.9) is sometimes written as
s = c · (r + ε)^γ to account for an offset (that is, a measurable output when
the input is zero). Offsets, however, are usually a display calibration issue, so
the offset is usually neglected in equation (2.9). Figure (2.2) shows plots of
s versus r for varying values of γ. Power-law curves with fractional values
of γ map a narrow range of dark input values into a wider range of output
values, with the reverse being true for higher input levels.

A family of transformation curves can be generated simply by
changing the value of γ.

Figure (2.2): Plots of the equation s = c · r^γ for different values of γ (c = 1
in all cases) [38].

A curve with γ > 1 has the opposite effect of one with γ < 1, as shown
in Figure (2.3). Finally, when γ = c = 1, equation (2.9) reduces to the identity
transformation. A power law governs the behavior of a number of devices
used for imaging, printing, and displaying; the power-law exponent is
generally known as gamma [38, 39, 40].

Figure (2.3): Applying the power law transformation to an image [39]

Gamma correction is necessary for displaying an image accurately on a
computer screen. Images that have not been properly adjusted can appear
bleached out or, more commonly, overly dark. Reproducing colors correctly
also requires some understanding of gamma correction, since changing the
gamma value affects not only the intensity of a color image but also the
proportions of red to green to blue. As sharing digital photos over the Internet
has become popular in recent years, gamma correction has become
significantly more important. Some computer systems even have partial
gamma correction built in. Images published on a successful web site are
frequently viewed by millions of individuals, most of whom will have various
monitors and/or monitor settings. Therefore, the appropriate way to store
images on websites is to use a gamma value that is an "average" for the
different monitors and computers expected in the open market at any given
time [41].
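A minimal sketch of the power-law transform of Eq. (2.9); the clipping step is an assumption added for display purposes:

```python
import numpy as np

def gamma_correct(img, gamma, c=1.0):
    """Power-law transform s = c * r**gamma, Eq. (2.9).

    img: float array in [0, 1]. gamma < 1 brightens dark regions,
    gamma > 1 darkens them.
    """
    return np.clip(c * np.power(img, gamma), 0.0, 1.0)
```

For instance, an input of 0.25 with γ = 0.5 maps to 0.5, visibly lifting the shadows, which is why fractional γ values are useful for the dark regions of underwater images.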

2.3.2 Image Sharpening

Image contrast enhancement is associated with sharpening specific image
elements such as textures or edges, and is generally used to improve an
image's aesthetic appeal and to correct underexposure and overexposure. It
can also be used to restore color that has been lost. Contrast enhancement
algorithms are frequently used in an interactive manner, with the method
selected and its parameters changed based on the specific application at
hand [42].

The Gaussian filter has been widely employed in sharpening
techniques in image processing and computer vision. In most cases, it is used
for low-pass filtering; the Gaussian function can also be multiplied by a planar
cosine wave for directed filtering, and its first and second derivatives are
commonly utilized. Gaussian filters are the most extensively employed
among the many smoothing filters. The sharpening method aims to make
edges clearer while smoothing the image; this is done by applying a
convolution kernel to each pixel of the image. A Gaussian filter smooths
noise, but it also distorts the actual signal values [43]. The Gaussian function
in two dimensions is written as follows:

g(x, y) = (1 / √(2πσ²)) · e^(−(x² + y²) / (2σ²)) …… (2.10)

where σ² is the variance of the Gaussian filter. Sigma (σ) specifies the amount
of smoothing, and its value determines the Gaussian kernel coefficients. The
higher the sigma, the more variance is permitted around the mean; the smaller
the sigma, the less variance is allowed. As a result, while a large filter variance
is effective at smoothing an image, it also corrupts the image's edges and
critical structure [44].
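A common way to realize Gaussian-based sharpening is unsharp masking; the formulation below is an assumption, since the text only states that a Gaussian kernel is convolved with the image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=2.0, amount=1.0):
    """Sharpen by adding back the detail removed by a Gaussian blur.

    The unsharp-masking formulation here is an assumption; sigma and
    amount are illustrative defaults.
    """
    blurred = gaussian_filter(img, sigma=sigma)   # low-pass component
    detail = img - blurred                        # high-frequency residue
    return np.clip(img + amount * detail, 0.0, 1.0)
```

Larger sigma values sharpen coarser structures, matching the remark above that σ controls how much of the image's variation the smoothing step removes.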

After that, histogram equalization is applied to enhance the image
contrast by spreading the intensity values across the whole range. As this
process only adds extra pixels to the light regions of the image and subtracts
extra pixels from the dark regions, the histogram equalization approach is
inapplicable to images with unevenly illuminated backgrounds. The result is
an image with a high dynamic range. The goal of histogram equalization is to
properly distribute an image's contrast across the whole available dynamic
range [45]. Figure (2.4) shows the difference in image quality between the
traditional histogram and the result of applying histogram equalization.


Figure (2.4): Histogram equalization. a) The original image, b) the image
after histogram equalization, c) the histograms of the corresponding images
[45, 46]

In the histogram equalization approach, the probability density function
(pdf) is manipulated. In other words, histogram equalization converts a
particular image's pdf into a uniform probability density function that extends
from the lowest pixel value (zero) to the highest pixel value (L−1). If the
probability density function is a continuous function, this is simple to apply;
however, the pdf will be a discrete function as long as we deal with a digital
image. Assume an image x with intensities r_k ranging from 0 (black) to
L−1 (white). This pdf can be estimated from the histogram p(r_k) as follows:

pdf(x) = p(r_k) = (number of pixels with intensity r_k) / (total number of pixels in image x) …… (2.11)

After that, the cumulative distribution function (cdf) can be obtained from
the pdf as:

cdf(r_k) = Σ_{j=0}^{k} p(r_j) …… (2.12)

where p(r_j) represents the probability of an intensity value. The output pixel
of the histogram equalization process is then equal to the image's cdf,
expressed as:

p(s_k) = Σ_{j=0}^{k} p(r_j) …… (2.13)

To obtain the output intensity, p(s_k) is multiplied by L−1 and then rounded
to the nearest integer value [46].
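Equations (2.11)-(2.13) amount to building a lookup table from the cdf; the 256-level default below is an assumption for 8-bit images:

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Histogram equalization via the cdf, Eqs. (2.11)-(2.13).

    img: 2-D uint8 grayscale array. Returns the equalized image.
    """
    hist = np.bincount(img.ravel(), minlength=levels)
    pdf = hist / img.size                                 # Eq. (2.11)
    cdf = np.cumsum(pdf)                                  # Eq. (2.12)
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)   # scale by L-1, round
    return lut[img]                                       # map every pixel
```

The mapping is monotone, so pixel ordering is preserved while the intensities spread over the full [0, L−1] range.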

2.4 Homomorphic Filter
1. Reflection and Illumination Model

Generally, an image can be thought of as a 2D function of the form I(x, y),
with a positive scalar quantity at spatial coordinates (x, y). In the physical
sense, it is referred to as the original image. Working with grayscale images,
the values of an image formed by a physical process are equivalent to the
amount of energy radiated by the source. In other words, an image is a
function of the quantity of light reflected by the objects in the scene and is
made up of an array of measured light intensities. The intensity is a
combination of reflectance and illumination, where reflectance is the amount
of light reflected by the objects and illumination is the amount of light that
falls on the scene being observed. If illumination is represented as i(y, z) and
reflectance as r(y, z), then an image f(y, z) is given by:

f(y, z) = i(y, z) · r(y, z) …… (2.14)

Therefore, this reflection and illumination model of image formation can be
employed to improve an image captured in a low-illumination environment
[47, 48].

2. Basic Homomorphic Filter

The homomorphic filter approach can be used to improve contrast by
controlling the brightness of the image while at the same time normalizing it.
This approach works simply and gives good results, especially when the
source image is captured in low-illumination conditions. The homomorphic
filter algorithm includes a logarithmic transform, the Discrete Fourier
Transform (DFT), the filter H(u, v), the Inverse Discrete Fourier Transform
(IDFT), and an exponential transform, depending on the type of filter [49].

Figure (2.5): Basic steps of homomorphic filter [6]

The following equations relate to the homomorphic filter:

z(x, y) = ln f(x, y) …… (2.15)

        = ln i(x, y) + ln r(x, y) …… (2.16)

Then,

φ{z(x, y)} = φ{ln f(x, y)} …… (2.17)

           = φ{ln i(x, y)} + φ{ln r(x, y)} …… (2.18)

Z(u, v) = F_i(u, v) + F_r(u, v) …… (2.19)

where F_i(u, v) and F_r(u, v) are the Fourier transforms of ln i(x, y) and
ln r(x, y), respectively. After that, Z(u, v) is filtered by applying the filter
H(u, v), so that

S(u, v) = Z(u, v) · H(u, v) …… (2.20)

        = F_i(u, v) · H(u, v) + F_r(u, v) · H(u, v) …… (2.21)

The filtered image in the spatial domain is

s(x, y) = φ⁻¹{S(u, v)} …… (2.22)

        = φ⁻¹{F_i(u, v) · H(u, v)} + φ⁻¹{F_r(u, v) · H(u, v)} …… (2.23)

By defining

i′(x, y) = φ⁻¹{F_i(u, v) · H(u, v)} …… (2.24)

and

r′(x, y) = φ⁻¹{F_r(u, v) · H(u, v)} …… (2.25)

equation (2.22) can be written as:

s(x, y) = i′(x, y) + r′(x, y) …… (2.26)

Lastly, because z(x, y) was obtained by taking the natural logarithm of
the source image, the reverse operation is applied by taking the exponential
of the filtered result to get the output image:

g(x, y) = e^{s(x, y)} …… (2.27)

        = e^{i′(x, y)} · e^{r′(x, y)} …… (2.28)

        = i₀(x, y) · r₀(x, y) …… (2.29)

where

i₀(x, y) = e^{i′(x, y)} …… (2.30)

and

r₀(x, y) = e^{r′(x, y)} …… (2.31)

Here i₀(x, y) and r₀(x, y) represent the illumination and reflection
components of the output image.

In this application, the concept of the method is to separate the
illumination and reflection components as shown in equation (2.19), and then
apply the homomorphic filter function H(u, v) given by equation (2.32). The
illumination of an image has slow spatial variations, whereas the reflectance
component has sudden changes, especially at the boundaries between
dissimilar objects. Because of these properties, the low frequencies of the
Fourier transform of the logarithm of an image are associated with
illumination, whereas the high frequencies are associated with reflectance.
Although these associations are only rough approximations, they can be
useful in image filtering.

H(u, v) = (γ_H − γ_L) · [1 − e^{−c · (D²(u, v) / D₀²)}] + γ_L …… (2.32)

where γ_L and γ_H represent the minimum and maximum coefficient values,
respectively, D₀ represents the cut-off frequency, and c is a constant that
controls the sharpness of the filter slope [6, 48].
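The whole log-DFT-filter-IDFT-exponential chain of Figure (2.5) can be sketched as follows; all parameter defaults are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=2.0, d0=30.0, c=1.0):
    """High-frequency-emphasis homomorphic filtering, Eqs. (2.15)-(2.32).

    img: 2-D float array with values > 0. gamma_l < 1 attenuates
    illumination (low frequencies); gamma_h > 1 boosts reflectance.
    """
    z = np.log(img + 1e-6)                  # Eq. (2.15): log transform
    Z = np.fft.fftshift(np.fft.fft2(z))     # DFT, DC moved to the center
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2  # squared distance from center
    H = (gamma_h - gamma_l) * (1 - np.exp(-c * D2 / d0 ** 2)) + gamma_l  # Eq. (2.32)
    S = Z * H                               # Eq. (2.20): apply the filter
    s = np.real(np.fft.ifft2(np.fft.ifftshift(S)))
    return np.exp(s)                        # Eq. (2.27): exponential transform
```

Because the filtering happens on the logarithm, the multiplicative illumination and reflectance components of Eq. (2.14) are processed as an additive pair, which is the core idea of the method.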

2.5 Weight Maps Calculation

The weight maps are established to capture the spatial relations of
degraded areas. Each pixel's weight is calculated according to the object's
hue, contrast, and saturation [30]. Three weight maps have been employed:
the Laplacian contrast weight, the saliency weight, and the saturation weight.
The next part describes how each weight map is created.

2.5.1 Laplacian Contrast Weight (W_L)

The contrast is evaluated by computing the absolute value of a Laplacian
filter applied to each input luminance channel. This filter can be applied to
expand the depth of field of an image by ensuring that the image's edges and
textures receive high values. However, because it cannot discriminate
between ramp and flat areas, this weight alone is insufficient to restore
contrast; as a result, the saliency weight is also employed to solve this
issue [15].

In this work, the Fast Local Laplacian Filter (FLLF) has been utilized.
The steps of the FLLF can be described as follows [50, 51]:

1- To process the input image I, FLLF employs a point-wise nonlinearity
function r(·) that depends on the Gaussian pyramid coefficient
g = G_l[I](x, y), where r is the ramp function, l represents the level of
the Gaussian pyramid, and (x, y) is the location of the pixel. This
method produces many intermediate images for different values of g.
2- FLLF calculates every output coefficient L_l[O](x, y) of the Laplacian
pyramid of the transformed image by integrating all of these
intermediate images.
3- The algorithm collapses the output pyramid L[O] to get the output
image O.

2.5.2 Saliency Weight (W_S)

Generally, the salient area of an image is one that changes significantly
within the image (for example, in shape, texture, or color). The human visual
system can easily detect these areas. To keep these areas and improve the
contrast of the image, a saliency weight is utilized. Saliency is a technique for
enhancing features that are missed in the images. Because of the saturation
map, the contrast of the image will be diminished slightly, and other contrast
enhancement techniques cannot improve edge areas; it is therefore essential
to use the saliency weight to enhance image quality [17, 52].

In this work, the saliency weight calculation technique suggested by
Ancuti has been employed. The saliency weight can be defined by the
following equation:

W_{S,k}(i, j) = [L_k(i, j) − L_{m,k}(i, j)]² + [a_k(i, j) − a_{m,k}(i, j)]²
             + [b_k(i, j) − b_{m,k}(i, j)]² …… (2.33)

where W_{S,k}(i, j) refers to the saliency weight, k refers to the index of the
input image, L_k(i, j) refers to the brightness value of the input in the Lab
color space, L_{m,k}(i, j) refers to the mean brightness of the input image in
the Lab color space, and a_{m,k}(i, j) and b_{m,k}(i, j) refer to the mean
values of the a and b color channels, respectively [53].

2.5.3 Saturation Weight (W_Sat)

The saturation weight helps the fusion process to pick up chromatic
information from highly saturated areas; the image appears brighter with
saturated color. This weight map is evaluated based on the deviation between
each of the R_I, G_I, and B_I color channels and the luminance L_I at each
pixel [52]:

W_Sat = √( (1/3) · [(R_I − L_I)² + (G_I − L_I)² + (B_I − L_I)²] ) …… (2.34)

In practice, the three weight maps for each input are combined into a
single weight map as follows. An aggregated weight map W_I is first
calculated for every input I by adding the three weight maps W_L, W_S, and
W_Sat. The weights of every pixel in every map are then divided by the sum
of the weights of the same pixel across all maps, to normalize the K
aggregated maps on a pixel-by-pixel basis. The normalized weight map W̄_I
for each input is calculated as:

W̄_I = (W_I + δ) / (Σ_{k=1}^{K} W_k + K · δ) …… (2.35)

where δ represents a small regularization term that ensures that each input
contributes to the output [18, 53].
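A sketch of the saturation weight (Eq. 2.34) and the pixel-wise normalization (Eq. 2.35); taking the luminance L_I as the channel mean and the value of δ are assumptions:

```python
import numpy as np

def saturation_weight(img):
    """Saturation weight of Eq. (2.34) for a float RGB image in [0, 1]."""
    lum = img.mean(axis=2)                    # simple luminance estimate (assumed)
    dev = (img - lum[..., None]) ** 2
    return np.sqrt(dev.mean(axis=2))          # sqrt of (1/3) * sum of squares

def normalize_weights(weight_maps, delta=0.1):
    """Pixel-wise normalization of aggregated maps, Eq. (2.35).

    weight_maps: list of K 2-D arrays. delta is the small regularizer;
    its value here is an assumption.
    """
    stack = np.stack(weight_maps)
    denom = stack.sum(axis=0) + len(weight_maps) * delta
    return [(w + delta) / denom for w in weight_maps]
```

After normalization the K maps sum to one at every pixel, so the later fusion step computes a proper per-pixel convex combination of the inputs.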

2.6 Dual Pyramid

Pyramid-based exposure fusion achieves multi-resolution analysis at
various scales, spatial bands, and decomposition layers. The Gaussian
pyramid and the Laplacian pyramid are the most common types of pyramids
in use. The Gaussian pyramid divides the original image into levels, extracts
features effectively, and applies an appropriate fusion rule at each level;
however, during image convolution and down-sampling, much high-frequency
information is lost, resulting in blurring in the fused image. The Laplacian
pyramid divides the original image into spatial frequency bands, allowing
features and details to be highlighted and extracted based on the properties of
the associated decomposition layers. It does, however, frequently lead to
unpleasant transitions between bright and dark pixels. As a result, combining
the two pyramids is an alternative worth considering [54, 55].

2.6.1 Gaussian Pyramid

To create a multi-resolution version of an image, a recursive technique is
implemented, and a Gaussian filter is an appropriate candidate for this
purpose. It is an elegant and efficient approach for reducing the resolution of
an input image, and it consists of two steps: first, convolving the image with
a low-pass filter (for example, the 4th binomial filter b4 = [1, 4, 6, 4, 1] / 16);
second, sub-sampling the output by a factor of two. Each level is created by
filtering the preceding level with the 4th binomial filter with a stride of 2 (in
each dimension). When used recursively, this technique generates a series of
images, each one smaller and lower in resolution than the previous
[46, 56, 57]. The following figure presents the steps of the Gaussian pyramid.

Figure (2.6): Applying the Gaussian pyramid to the "Lady" image; the
input image is at level zero [56]
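The two-step reduce operation can be sketched with the b4 filter above; the reflect-padding choice at the borders is an assumption:

```python
import numpy as np
from scipy.ndimage import convolve1d

B4 = np.array([1, 4, 6, 4, 1]) / 16.0  # 4th binomial low-pass filter

def pyr_reduce(img):
    """One Gaussian-pyramid step: separable blur with b4, then subsample by 2."""
    blurred = convolve1d(convolve1d(img, B4, axis=0, mode='reflect'),
                         B4, axis=1, mode='reflect')
    return blurred[::2, ::2]

def gaussian_pyramid(img, levels):
    """Return [g0, g1, ..., g_{levels-1}], with g0 being the input image."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(pyr_reduce(pyr[-1]))
    return pyr
```

The separable two-pass convolution is equivalent to the 5x5 outer-product kernel but cheaper, which is why the binomial filter is given as a 1-D mask.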

2.6.2 Laplacian Pyramid

The Laplacian pyramid is a series of error images L₀, L₁, …, L_n, each
representing the difference between two Gaussian pyramid levels. Therefore:

L_i = g_i − EXPAND(g_{i+1}) …… (2.36)

    = g_i − g′_{i+1} …… (2.37)

where g′_{i+1} = EXPAND(g_{i+1}). For the last level, L_n = g_n, since
there is no image g_{n+1} to act as the prediction image for g_n, where g_n
represents the image at the final level [55, 56].

The following figure presents the steps of the Gaussian pyramid and the
corresponding Laplacian pyramid.

Figure (2.7): Four steps of the Gaussian and Laplacian pyramids. The top row
represents the Gaussian pyramid images; the Laplacian pyramid is shown in
the bottom row, where each image represents the difference between the
corresponding level and the next one in the Gaussian pyramid [55].
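Given a Gaussian pyramid, Eq. (2.36) can be sketched as follows; the spline-based zoom standing in for EXPAND is an assumption about the upsampling filter:

```python
import numpy as np
from scipy.ndimage import zoom

def laplacian_pyramid(gauss_pyr):
    """Build L_i = g_i - EXPAND(g_{i+1}), Eq. (2.36), from a Gaussian
    pyramid (list of arrays, finest level first). Linear-interpolation
    zoom stands in for EXPAND here.
    """
    lap = []
    for g, g_next in zip(gauss_pyr[:-1], gauss_pyr[1:]):
        expanded = zoom(g_next, 2, order=1)[:g.shape[0], :g.shape[1]]
        lap.append(g - expanded)       # band-pass residual at this level
    lap.append(gauss_pyr[-1])          # L_n = g_n (coarsest level kept as-is)
    return lap
```

Each L_i is a band-pass image holding only the detail lost between two Gaussian levels, which is exactly what the fusion rule of Section 2.7 operates on.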
2.7 Multi-Scale Fusion

In the term of image fusion, Equation (2.38) can be used for the
simplest images fusion processing for the two sets of input images. However,
this process will cause artifacts to appear in the final images. In this work,
uses a fusion technique based on multi-scale Laplacian pyramid
decomposition has been used to prevent this problem.
𝑁
𝑓𝑢𝑠𝑖𝑜𝑛(𝑥, 𝑦) = ∑ 𝑊𝑛 (𝑥, 𝑦)𝐼𝑛 (𝑥, 𝑦) … … … … … … … . . … (2.38)
𝑛=1

For each input image, the Laplacian operator is used to obtain the first
level of the pyramid; then, by down-sampling that level, the second level is
created, and so on. Likewise, to obtain the Gaussian pyramid of each
normalized weight map 𝑊𝑛, the weight map is filtered with the low-pass
Gaussian kernel function G at a scale corresponding to each level of the
Laplacian pyramid. The multi-scale fusion pyramid can then be written as
follows:

𝑝𝑦𝑟𝑎𝑚𝑖𝑑_𝑙(𝑖, 𝑗) = ∑_{𝑛=1}^{𝑁} 𝐺_𝑙{𝑊𝑛(𝑖, 𝑗)} 𝐿_𝑙{𝐼𝑛(𝑖, 𝑗)} … … … . … … . . (2.39)

where 𝐺_𝑙 and 𝐿_𝑙 denote the Gaussian and Laplacian pyramids at level 𝑙, and
N is the number of input images. The multi-scale Laplacian-pyramid-based
fusion ensures that only the desired features are carried into the final
image [42].

Finally, using the inverse Laplacian transform, a fused image rich in
information can be reconstructed from the fused Laplacian pyramid
decomposition map [54].
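Equation (2.39) followed by the pyramid collapse can be sketched end to end. A hedged NumPy sketch: the per-pixel normalisation of the weight maps and the nearest-neighbour EXPAND are implementation choices, not taken verbatim from the thesis.

```python
import numpy as np

B4 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0

def smooth(img):
    pad = np.pad(img, 2, mode="reflect")
    rows = sum(B4[k] * pad[:, k:k + img.shape[1]] for k in range(5))
    return sum(B4[k] * rows[k:k + img.shape[0], :] for k in range(5))

def reduce_(img):
    return smooth(img)[::2, ::2]

def expand(img, shape):
    up = np.repeat(np.repeat(img, 2, 0), 2, 1)[:shape[0], :shape[1]]
    return smooth(up)

def multiscale_fuse(inputs, weights, levels=3):
    """Eq (2.39): fused level l = sum_n G_l{W_n} * L_l{I_n}, then collapse."""
    # normalise the weight maps so they sum to one at every pixel
    wsum = sum(weights) + 1e-12
    weights = [w / wsum for w in weights]
    fused = []                               # one fused band per pyramid level
    for img, w in zip(inputs, weights):
        laps, gws = [], []
        for _ in range(levels - 1):
            down = reduce_(img)
            laps.append(img - expand(down, img.shape))   # L_l{I_n}
            gws.append(w)                                # G_l{W_n}
            img, w = down, reduce_(w)
        laps.append(img)
        gws.append(w)
        bands = [g * l for g, l in zip(gws, laps)]
        fused = bands if not fused else [f + b for f, b in zip(fused, bands)]
    # collapse the fused pyramid back into a single image
    out = fused[-1]
    for band in reversed(fused[:-1]):
        out = band + expand(out, band.shape)
    return out
```

Fusing two identical inputs with equal weights returns the input itself, which confirms that the decomposition and collapse are consistent.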

2.8 Datasets
Underwater image enhancement has received a lot of attention due to its
importance in different fields. In recent years, many new underwater image
enhancement algorithms have been suggested. However, these techniques are
primarily tested on synthetic datasets or on a small number of real-world
images. As a result, it is unknown how these algorithms would perform on
real-world images, or how progress in the field should be tracked [58].

The Real-world Underwater Image Enhancement (RUIE) dataset is divided into
three subsets: the Underwater Image Quality Set (UIQS), the Underwater Color
Cast Set (UCCS), and the Underwater Higher-level Task-driven Set (UHTS). The
underwater images were taken by 24 cameras positioned at fixed locations
over 3 hours, but the depth and illumination conditions of the scenes vary
due to changing tides and sunshine over time. UIQS includes 3,630 images for
evaluation, divided into five groups based on image quality as measured by
an underwater image quality measure. UCCS is made up of 300 images from UIQS
grouped according to their overall color cast, determined by the average b
channel intensity in the CIELab color space. UIQS and UCCS are good tools
for assessing the quality of filtered images. UHTS comprises 300 images of
three marine species (scallops, sea urchins, and sea cucumbers) and is mostly
used to test the effectiveness of specific applications such as detection and
classification [59].

In this work, the real-world Underwater Image Enhancement Benchmark
(UIEB) dataset was used. It consists of 980 images, taken under natural or
artificial light, collected from publications and online repositories or
captured by a group of authors. UIEB serves as a platform for evaluating the
performance of different underwater image enhancement algorithms [59, 60].
It is available at https://li-chongyi.github.io/proj_benchmark.html [60].

2.9 Metrics of Performance Evaluation

The performance of the metrics that evaluate underwater image quality is an
important component of image processing, analysis, and classification in
different fields [61].

The following metrics are used to evaluate the performance of the system:
information entropy (IE), the patch-based contrast quality index (PCQI), and
average gradient (AG) [24].

2.9.1 Information Entropy (IE)

IE describes the amount of information in an image and can be used to
determine its color richness. When the histogram of the image is uniform,
every grayscale value has an equal chance of occurring, the dynamic range of
the image is at its widest, and the IE value is the highest. In regions of
consistent grayscale and fog, the IE value is the lowest. IE is defined by
the following equation:

𝐼𝐸 = − ∑_{𝑖=0}^{255} 𝑝(𝑖) 𝑙𝑜𝑔2 𝑝(𝑖) … … … … … … … … … … … … … … … … (2.40)

where 𝑝(𝑖) is the probability of intensity 𝑖 in the image, i = 0 … 255.
Accordingly, the more color information an image carries, the greater its
visual impact and the higher its IE value [40, 45].
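Eq. (2.40) amounts to an entropy over the 256-bin grey-level histogram. A minimal sketch (the convention 0·log 0 = 0 is made explicit):

```python
import numpy as np

def information_entropy(gray):
    """IE = -sum_i p(i) * log2 p(i) over the 256 grey levels (Eq. 2.40)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))
```

A flat image has entropy 0; an image split evenly between two grey levels has entropy exactly 1 bit.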

2.9.2 Patch Based Contrast Quality Index (PCQI)

Every image patch is represented by three adaptable and conceptually
independent elements: average intensity, signal strength, and signal
structure. Although the original image may not have good contrast, it is
regarded as the reliable source of structural information. It is therefore
important to separate the representation of structure from average intensity
and signal strength, so that each kind of distortion can be measured
independently. The method not only predicts the test image's overall
contrast quality, but also generates a local quality map that shows regional
variations in quality over space.

𝑃𝐶𝑄𝐼(𝑖, 𝑗) = 𝑞𝑖(𝑖, 𝑗) ∙ 𝑞𝑐(𝑖, 𝑗) ∙ 𝑞𝑠(𝑖, 𝑗) … … … … … … … . (2.41)

where 𝑞𝑖(𝑖, 𝑗), 𝑞𝑐(𝑖, 𝑗), and 𝑞𝑠(𝑖, 𝑗) measure the mean intensity, contrast
change, and structural distortion, respectively [62].

2.9.3 Average Gradient (AG)

AG indicates the richness of image information and its sensitivity to minor
changes in detail. AG for an input image F can be written as the equation
below:

𝐴𝐺 = (1/((𝑚−1)(𝑛−1))) ∑_{𝑖=1}^{𝑚−1} ∑_{𝑗=1}^{𝑛−1} √((∇xF(𝑖, 𝑗))² + (∇yF(𝑖, 𝑗))²) … . . (2.42)

Here m and n refer to the width and height of the image, and ∇xF(i, j),
∇yF(i, j) are the differences of F(i, j) along the x and y axes. More
detailed information in the defogged image is obtained as the AG value
increases [63].
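Eq. (2.42) is the mean magnitude of the forward differences, which can be computed without any explicit loop:

```python
import numpy as np

def average_gradient(F):
    """Eq (2.42): mean magnitude of the forward differences over the image."""
    dx = F[1:, 1:] - F[:-1, 1:]       # difference along one axis
    dy = F[1:, 1:] - F[1:, :-1]       # difference along the other axis
    return float(np.mean(np.sqrt(dx**2 + dy**2)))
```

A constant image has AG = 0; a unit ramp has AG = 1, matching the equation term by term.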

2.9.4 Underwater Image Quality Measure (UIQM)

The UIQM consists of three main measures: the underwater image colorfulness
measure (𝑈𝐼𝐶𝑀), the underwater image sharpness measure (𝑈𝐼𝑆𝑀), and the
underwater image contrast measure (UIConM) [19]. These are discussed in
detail below:

 Underwater Image Colorfulness Measure (𝑼𝑰𝑪𝑴)

Various conditions of the underwater environment contribute to the
attenuation of underwater images in different ways, as the colors with the
longer wavelengths disappear first: starting with the red light, followed by
the green light, and finally the blue light with the shortest wavelength, so
the underwater image appears with a blue-green tint. To assess the
effectiveness of color correction methods, the UICM measure is used, which
evaluates the red-green (RG) and yellow-blue (YB) opponent color
components:

RG = R − G … … … … … … … … … … … … … … … … … … … … (2.43)

YB = (R + G)/2 − B … … … … … … . . … … … … … … … … … … . . (2.44)

Since underwater images frequently contain a lot of noise, the colorfulness
of underwater images is determined using alpha-trimmed statistical values:

µ_{𝑅𝐺} = (1/(𝑁 − 𝑇𝐿 − 𝑇𝑅)) ∑_{𝑥=𝑇𝐿+1}^{𝑁−𝑇𝑅} 𝐼𝑛𝑡𝑒𝑛𝑠𝑖𝑡𝑦_{𝑅𝐺}(𝑥) … … … … … … … . (2.45)

where N is the total number of pixels in the RG component, the pixel values
are arranged in ascending order 𝑋1 < 𝑋2 < ⋯ < 𝑋𝑁, and T R = αR · N and
T L = αL · N are the numbers of the largest and smallest pixel values to be
discarded. µ_{𝑅𝐺} refers to the chroma intensity; an average value of the RG
or YB component close to zero indicates an effective white balance, meaning
that none of the colors is dominant. A higher variance is associated with a
wider dynamic range, computed using the following equation:

𝜎²_{𝑅𝐺} = (1/𝑁) ∑_{𝑥=1}^{𝑁} (𝐼𝑛𝑡𝑒𝑛𝑠𝑖𝑡𝑦_{𝑅𝐺}(𝑥) − µ_{𝑅𝐺})² … … … … … … … . . (2.46)

Additionally, μ²_{𝑌𝐵} and 𝜎²_{𝑌𝐵} can be computed in the same manner. The
colorfulness coefficient of the underwater image can then be computed by the
following equation [64]:

𝑈𝐼𝐶𝑀 = −0.0268 √(μ²_{𝑅𝐺} + μ²_{𝑌𝐵}) + 0.1586 √(𝜎²_{𝑅𝐺} + 𝜎²_{𝑌𝐵}) … … (2.47)
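Eqs. (2.43)-(2.47) can be sketched as one function. A hedged illustration: the trimming fractions α_L = α_R = 0.1 and the use of the untrimmed variance are assumptions made for brevity, not values quoted from the thesis.

```python
import numpy as np

def uicm(img, alpha_l=0.1, alpha_r=0.1):
    """Colourfulness from alpha-trimmed statistics of RG and YB (Eqs 2.43-2.47)."""
    R = img[..., 0].astype(float)
    G = img[..., 1].astype(float)
    B = img[..., 2].astype(float)
    stats = []
    for comp in (R - G, (R + G) / 2 - B):      # Eq (2.43) and Eq (2.44)
        x = np.sort(comp.ravel())
        n = x.size
        tl, tr = int(alpha_l * n), int(alpha_r * n)
        mu = x[tl:n - tr].mean()               # alpha-trimmed mean (Eq 2.45)
        var = np.mean((comp - mu) ** 2)        # variance (Eq 2.46)
        stats.append((mu, var))
    (mu_rg, var_rg), (mu_yb, var_yb) = stats
    # Eq (2.47) with the published coefficients
    return -0.0268 * np.hypot(mu_rg, mu_yb) + 0.1586 * np.sqrt(var_rg + var_yb)
```

A perfectly grey image (R = G = B) has zero opponent-colour energy, so its UICM is zero.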

 Underwater Image Sharpness Measure (𝑼𝑰𝑺𝑴)

Sharpness captures the edges and details of the image, and an image with
prominent edges is clearly preferable. Underwater images suffer from
deterioration and attenuation as a result of absorption and scattering. To
measure the sharpness of the image, the enhancement measure estimation
(EME) is used:

𝐸𝑀𝐸 = (2/(𝑛 ∙ 𝑚)) ∑_{𝑘=1}^{𝑛} ∑_{𝑙=1}^{𝑚} log(𝐼_{𝑀𝑎𝑥,𝑘,𝑙} / 𝐼_{𝑀𝑖𝑛,𝑘,𝑙}) … … … … … … … … … . . (2.48)

Here the image is divided into 𝑛 × 𝑚 blocks; 𝐼_{𝑀𝑎𝑥,𝑘,𝑙} and 𝐼_{𝑀𝑖𝑛,𝑘,𝑙} are
the maximum and minimum pixel values in each block, so their ratio is the
contrast ratio per block. The underwater image sharpness measure can then be
computed by the following equation:

UISM = ∑_{𝑘=1}^{3} λ_𝑘 ∙ 𝐸𝑀𝐸(𝑔𝑟𝑎𝑦𝑠𝑐𝑎𝑙𝑒_𝑘) … … … … … … … … … … … (2.49)

where λ_𝑘 is the weight coefficient of the red, green, or blue component,
with λ𝑟 = 0.299, λ𝑔 = 0.587, λ𝑏 = 0.114 [65].
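Eqs. (2.48)-(2.49) can be sketched as below. Two caveats: the 4 × 4 block grid is an illustrative choice, and the published UISM applies EME to Sobel edge maps of each channel; for brevity this sketch applies it to the plain channels as the text above states.

```python
import numpy as np

def eme(channel, k1=4, k2=4):
    """Eq (2.48): (2 / (k1*k2)) * sum log(Imax / Imin) over k1 x k2 blocks."""
    h, w = channel.shape
    bh, bw = h // k1, w // k2
    total = 0.0
    for i in range(k1):
        for j in range(k2):
            block = channel[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            bmax, bmin = block.max(), block.min()
            if bmin > 0:                   # skip blocks that would divide by zero
                total += np.log(bmax / bmin)
    return 2.0 / (k1 * k2) * total

def uism(img):
    """Eq (2.49): weighted per-channel EME with the luminance weights."""
    lam = (0.299, 0.587, 0.114)
    return sum(l * eme(img[..., k].astype(float)) for k, l in enumerate(lam))
```

A constant image has unit contrast ratio in every block, so log(1) = 0 and UISM vanishes.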

 Underwater Image Contrast Measure (UIConM)

Contrast is associated with visual performance; contrast deterioration in
underwater images occurs due to backscatter. The logAMEE measure can be used
to measure the contrast of an underwater image:

log𝐴𝑀𝐸𝐸 = (2/(𝑛 ∙ 𝑚)) ∑_{𝑘=1}^{𝑛} ∑_{𝑙=1}^{𝑚} ((𝐼_{𝑀𝑎𝑥,𝑘,𝑙} − 𝐼_{𝑀𝑖𝑛,𝑘,𝑙})/(𝐼_{𝑀𝑎𝑥,𝑘,𝑙} + 𝐼_{𝑀𝑖𝑛,𝑘,𝑙})) ∙ log((𝐼_{𝑀𝑎𝑥,𝑘,𝑙} − 𝐼_{𝑀𝑖𝑛,𝑘,𝑙})/(𝐼_{𝑀𝑎𝑥,𝑘,𝑙} + 𝐼_{𝑀𝑖𝑛,𝑘,𝑙})) … . . (2.50)

The contrast measure of the underwater image can then be written as the
following equation [64]:

𝑈𝐼𝐶𝑜𝑛𝑀 = log𝐴𝑀𝐸𝐸(Intensity) … … … … … … … . . … … … . (2.51)

 Underwater Image Quality Measure(UIQM)

Underwater images can be described as a linear superposition of absorption
and scattering components. Particles in the water produce sharpness
degradation, color casting, and contrast reduction through absorption and
backscatter. It is therefore reasonable to use a linear model to generate
the overall underwater image quality measure, computed as follows:

UIQM = 𝑘1 ∙ 𝑈𝐼𝐶𝑀 + 𝑘2 ∙ 𝑈𝐼𝑆𝑀 + 𝑘3 ∙ UIConM … … … … … … … (2.52)

Here 𝑈𝐼𝐶𝑀, 𝑈𝐼𝑆𝑀, and UIConM are the colorfulness, sharpness, and contrast
measures of the underwater image, and k1, k2, k3 are coefficients that
control the relative importance of each measure and balance their values,
with k1 = 0.0282, k2 = 0.2953, k3 = 3.5753 [19].
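Eqs. (2.50)-(2.52) can be sketched as below. The 4 × 4 block grid in the logAMEE sketch is an illustrative assumption; zero-contrast blocks are skipped since m·log m tends to 0.

```python
import numpy as np

def log_amee(channel, k1=4, k2=4):
    """Eq (2.50): Michelson-style block contrast, log-weighted."""
    h, w = channel.shape
    bh, bw = h // k1, w // k2
    total = 0.0
    for i in range(k1):
        for j in range(k2):
            blk = channel[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].astype(float)
            mx, mn = blk.max(), blk.min()
            if mx > mn and mx + mn > 0:
                m = (mx - mn) / (mx + mn)   # Michelson contrast of the block
                total += m * np.log(m)
            # blocks with zero contrast contribute 0 (m * log m -> 0)
    return 2.0 / (k1 * k2) * total

def uiqm(uicm_v, uism_v, uiconm_v, c1=0.0282, c2=0.2953, c3=3.5753):
    """Eq (2.52): linear combination with the published coefficients."""
    return c1 * uicm_v + c2 * uism_v + c3 * uiconm_v
```

For example, unit component scores give UIQM = 0.0282 + 0.2953 + 3.5753.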

2.9.5 Underwater Color Image Quality Metric (UCIQE)

Underwater images suffer from color casts, low contrast, and deterioration.
To measure the quality of an underwater image, the underwater color image
quality metric, based on chroma, contrast, and saturation measures in the
CIELab color space, is used:

UCIQE = k1 × σ + k2 × con + k3 × μ … … … … … … … (2.53)

where σ represents the standard deviation of chroma, con represents the
luminance contrast, and μ represents the average saturation. (k1, k2, k3)
are the coefficients, with k1 = 0.4680, k2 = 0.2745, and k3 = 0.2576 [66].

Chapter Three

System Design and Implementation

3.1 Introduction

In this chapter, the proposed system for restoring underwater images is
presented. The overall framework of the proposed system is introduced, then
a general explanation of the algorithms applied in each step is presented,
in addition to a detailed description of the underwater images used in this
work.

3.2 The Framework of the Suggested System

The proposed system is designed to restore underwater images by feature
extraction, using a set of real images captured underwater. First, a number
of color correction algorithms are used to enhance the colors of the image.
This includes color compensation, which compensates for the loss in the red
channel, followed by a white balance algorithm that gives the underwater
image natural colors. Then, two different contrast enhancement methods are
used to enhance the illumination of the underwater image (gamma correction
and image sharpening), so that in total there are two images processed
separately by two different methods. The homomorphic frequency filter is
applied to the gamma correction result to correct non-uniform lighting and
sharpen the image. For feature extraction, three weight maps are applied to
the two output images. After that, the two images obtained from contrast
enhancement are passed to the Laplacian pyramid, while the two images
resulting from the weight maps are processed using the Gaussian pyramid, and
lastly the two final images are fused using multi-scale fusion. The
following figure shows the complete scheme of the proposed system.

Figure (3.2): Block diagram of the Proposed System.
3.2.1 Color Correction

The conditions of the underwater environment, such as scattering,
absorption, and reflection, obviously affect image quality. The biggest
problem faced in images captured underwater is the appearance of a
blue-green tint as a result of the scattering of light with increasing
depth, so restoring the true colors of the image is a great challenge for
many image-based applications. A set of color correction algorithms has
therefore been proposed; given the conditions under which an image is taken
in terms of environment, depth, time of day, and so on, no single color
correction algorithm is suitable for all types of underwater images. In the
proposed system, to get around this problem, the following two steps are
implemented:

First: Color Compensation

Second: White Balance

3.2.1.1 Color Compensation

Color compensation is used to reduce the defects caused by color loss due to
absorption in the underwater environment. Red light has the longest
wavelength and lowest frequency and is lost first, followed by green, then
blue with the shortest wavelength and highest frequency. As a result, the
image captured in the water appears with a bluish tone. Algorithm (3.1)
explains the main steps of the color compensation algorithm.

Algorithm (3.1)// Color Compensation

Input: Underwater Image


Output: Red color compensation image
Begin
Step1: Read the underwater input image
Step2: Separate the RGB channels
Step3: Calculate the mean of each channel
Step4: Calculate the red channel compensation using equation (2.1) // ∝ = 0.1
Step5: Concatenate the three channels
End.

The color compensation algorithm compensates for the loss of red resulting
from the wavelength-dependent absorption of light underwater. The red
channel attenuation is compensated by adding a portion of the green channel
to the red channel. First, the image is separated into its red, green, and
blue channels; then the means of the red and green channels are calculated;
after that, the red channel compensation is computed using equation (2.1);
finally, the three channels are concatenated.
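The steps above can be sketched in a few lines. Note the hedge: Eq. (2.1) is not reproduced here, so this sketch assumes the common Ancuti-style compensation I_rc = I_r + α(Ī_g − Ī_r)·I_g with α = 0.1; the actual form used in the thesis may differ.

```python
import numpy as np

def compensate_red(img, alpha=0.1):
    """Add a fraction of the green channel to the attenuated red channel.
    Assumed form of Eq. (2.1): I_rc = I_r + alpha * (mean_g - mean_r) * I_g."""
    img = img.astype(float) / 255.0          # work in [0, 1]
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    r_c = r + alpha * (g.mean() - r.mean()) * g
    out = np.stack([np.clip(r_c, 0, 1), g, b], axis=-1)
    return (out * 255).astype(np.uint8)
```

When the green channel is brighter than the red one, the red mean is pulled upward while green and blue are left (essentially) untouched.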

3.2.1.2 White Balance
The primary goal of proper white balance is to render the natural colors of
an image accurately, which is where the element “white” of the term comes
from. In this work the gray world algorithm is used.

 Gray World (GW)


It is one of the simplest estimation methods used. It relies on the
principle that a color-balanced image has an average color that is gray;
thus, the color cast can be estimated relative to gray. Algorithm (3.2)
shows the steps of the gray world algorithm.

Algorithm (3.2)// Gray World


Input: Color Compensation Image
Output: Color-balanced image
Begin
Step1: Compute the average value of every channel using equation (2.2)
Step2: Calculate the gain of each channel separately using equations
(2.3), (2.4) and (2.5)
Step3: Calculate the resulting value of each channel using equations
(2.6), (2.7) and (2.8)
Step4: Estimate the illumination of the image, assuming the average color
is gray, using the illumgray function
Step5: Adjust the color balance of the RGB image using the chromadapt function
End.

This algorithm adjusts the illumination of the image. First, the averages of
the three channels are calculated, then the gain of each channel, and then
the final values of each channel, in addition to estimating the illumination
based on gray and balancing the colors of the RGB image.
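The gain computation described above can be sketched without the MATLAB toolbox functions. A minimal NumPy illustration of the gray world principle (the `illumgray`/`chromadapt` steps of the thesis are replaced here by a direct per-channel gain, which is the same underlying idea):

```python
import numpy as np

def gray_world(img):
    """Scale each channel so its mean matches the global grey mean."""
    img = img.astype(float)
    means = img.reshape(-1, 3).mean(axis=0)       # per-channel averages
    gray = means.mean()                           # the assumed grey level
    gains = gray / (means + 1e-12)                # per-channel gains
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```

After correction the three channel means coincide (up to rounding), which is exactly the gray world assumption.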

In this work, after the color correction step, two versions of the enhanced
image are derived for the fusion process: the first with image sharpening
and the second with gamma correction.

3.2.2 Contrast Enhancement

Contrast enhancement improves the visibility of the scene by improving the
color and brightness between the objects themselves and the background.
Contrast is the difference in the color components of an image that makes
one element stand out from another, and the goal of the contrast enhancement
process is to improve image quality. A number of algorithms have been
proposed in recent years to enhance image contrast, such as image
sharpening, histogram equalization, global histogram equalization, local
histogram equalization, adaptive histogram equalization, and
contrast-limited histogram equalization.

3.2.2.1 Gamma Correction
Gamma correction is a non-linear process known as the power-law
transformation and is a major component of digital imaging devices. It
represents the relationship between illumination and pixel values. Images
captured by imaging equipment do not appear as they do to the human eye, so
gamma correction is used to improve the exposure. When the gamma value is
greater than one, the image tends to be darker; conversely, it appears
brighter if the gamma value is less than one, while the output image looks
the same as the input if the gamma value is one. The main steps of gamma
correction are shown in algorithm (3.3).

Algorithm(3.3)// Gamma Correction


Input: Gray World Image
Output: Enhancement Image
Begin
Step1: Define C as a constant value // c=1
Step2: Define r values to create a vector from 0 to 255 // r = 0..255
Step3: Apply the power-law equation using equation (2.9)
Step4: Rescale the result of the power-law equation by normalizing the
vector
Step5: The corrected image equals the evaluated gamma correction result
End.

This algorithm applies the power-law (gamma) transformation to the gray
world input image in order to recreate the colors of the image correctly,
since changing the gamma value affects not only the intensity of the color
image itself but also the ratios of red to green to blue.
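The power-law transform of Eq. (2.9) can be sketched as follows. The value γ = 0.7 in the default is an illustrative choice, not the thesis parameter:

```python
import numpy as np

def gamma_correct(img, gamma=0.7, c=1.0):
    """Power-law transform s = c * r^gamma on a normalised image (Eq. 2.9)."""
    r = img.astype(float) / 255.0
    s = c * np.power(r, gamma)
    return (np.clip(s, 0, 1) * 255).astype(np.uint8)
```

With γ < 1 the mid-tones are brightened while the endpoints 0 and 255 are fixed points of the transform.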

3.2.2.2 Image Sharpening

Sharpening gives the image a clear appearance. To increase the clarity of
the image, the edges are enhanced; in blurry images one cannot distinguish
between the background and the edges. As is known, contrast and intensity
change at an edge, and if this change is pronounced the image is clear.
Algorithm (3.5) shows the steps of image sharpening.

Algorithm(3.5)// Image Sharpening


Input: Gray World Image
Output: Enhancement Image
Begin
For i = 1 to m // m=30
Begin for
Step1: Apply the Gaussian filter to the input image using equation (2.10)
for each channel
Step2: Return the minimum smoothed image
End for
Step3: Subtract the Gaussian-filtered image from the input image
Step4: Calculate the histogram equalization for each channel using
equation (2.13)
Step5: Concatenate the three channels
End.

In this algorithm, a Gaussian filter is applied to the gray world input
image to reduce noise, so that a more accurate and realistic image can be
obtained. After that, histogram equalization is applied to enhance the
contrast by spreading the pixel intensity values over the dynamic range of
the image.

3.2.3 Homomorphic Filter


Homomorphic filters enhance the contrast of the image by controlling the
brightness while at the same time normalizing the image and removing
multiplicative noise. The reflectance and illumination components of an
image cannot be separated directly because they are combined
multiplicatively, but they can be separated linearly in the frequency domain
by taking the logarithm of the image intensity. Illumination variations can
be viewed as multiplicative noise and can be reduced by filtering in the log
domain. The high-frequency components are taken to represent the reflectance
in the image, while the low-frequency components represent the illumination;
to keep the image evenly lit, a high-pass filter in the log-intensity domain
can be used to boost the high frequencies while attenuating the low
frequencies. The main steps of the homomorphic filter are shown in algorithm
(3.4).

Algorithm(3.4)// Homomorphic Filter
Input: Gamma corrected image, r, c // [r, c] are image size
Output: Enhancement Image
Begin
Step1: Convert the image to the log domain using equation (2.16)
Step2: Convert the image to the frequency domain by the Discrete Fourier
Transform (fast Fourier transform) using equation (2.19)
Step3: Apply the high-pass filter using equation (2.32)
Step4: Apply the inverse fast Fourier transform using equation (2.26)
Step5: Invert the log transform using equation (2.29)
End.

The homomorphic approach works easily and gives good results, especially
when the source image is captured in low-illumination conditions. The
homomorphic filter algorithm includes a logarithmic transform, the Discrete
Fourier Transform (DFT), the filter H(u, v), the Inverse Discrete Fourier
Transform (IDFT), and an exponential transform, depending on the type of
filter.
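The log → FFT → H(u, v) → IFFT → exp pipeline can be sketched as below. A hedged illustration: the Gaussian high-frequency-emphasis filter and its parameters (γ_L, γ_H, D0) are one standard choice, not necessarily the filter defined by equations (2.16)-(2.32) in the thesis.

```python
import numpy as np

def homomorphic(gray, gamma_l=0.5, gamma_h=1.5, d0=10.0, c=1.0):
    """log -> FFT -> Gaussian high-frequency-emphasis H(u,v) -> IFFT -> exp."""
    g = np.log1p(gray.astype(float))               # log domain, log(1 + I)
    F = np.fft.fftshift(np.fft.fft2(g))            # centred spectrum
    r, cw = g.shape
    u = np.arange(r) - r // 2
    v = np.arange(cw) - cw // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2         # squared distance from centre
    # boost high frequencies (gain -> gamma_h), attenuate low (gain -> gamma_l)
    H = (gamma_h - gamma_l) * (1 - np.exp(-c * D2 / d0 ** 2)) + gamma_l
    out = np.fft.ifft2(np.fft.ifftshift(H * F)).real
    return np.expm1(out)                           # invert the log transform
```

Setting γ_L = γ_H = 1 makes H identically one, so the filter reduces to the identity; this is a useful check of the transform chain.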

3.2.4 Calculation Weights Maps

Contrast-enhancing methods deal with blurry images. The optical density of
each image differs according to the image blur, so it involves different
values for each pixel. The disadvantage of contrast enhancement methods is
the use of static, identical processing over the entire image. Therefore,
weight maps are used to optimize each region individually, so that the
resulting image is more salient and higher in contrast than the input image.

In this work, three weight maps are used, introduced in the following
sections:

3.2.4.1 Laplacian Contrast Weight (𝑾𝑳 )


The Laplacian filter is used for detecting the edges of the image. However,
this filter cannot discriminate between ramp and flat areas and is
insufficient on its own to restore contrast. Algorithm (3.6) describes the
steps of the Laplacian contrast weight.

Algorithm (3.6)// Laplacian Contrast Weight


Input: Homomorphic Image, Sharpening Image
Output : Enhancement Image
Begin
Step1: Define the filter parameters sigma and alpha // sigma = 0.2; alpha = 0.3
Step2: Filter the image using the local Laplacian filter, applying it to
the RGB channels separately
Step3: Concatenate the three channels
Step4: Convert the image to grayscale

End.

In this algorithm the local Laplacian filter is used. First, two parameters
are defined for local enhancement: sigma for detail processing and alpha for
contrast. Second, the image intensity is filtered by filtering each channel
separately. After that, the three channels are concatenated. Finally, the
image is converted to grayscale.

3.2.4.2 Saliency Weight (𝑾𝑺 )


The saliency map represents the prominence of each pixel by its brightness.
Also called a heat map, it is a grayscale image in which the “temperature”
marks the high-impact areas through which the objects in the image can be
highlighted. The purpose of saliency maps is to find and focus directly on
the prominent areas in the field of view, so they are used in various models
of visual attention. Algorithm (3.7) describes the main steps of the
saliency weight.
Algorithm (3.7)// Saliency Weight
Input: Homomorphic Image, Sharpening Image
Output : Enhancement Image
Begin
Step1: Filter the image using the imfilter function
Step2: Convert the image from RGB to the Lab color space
Step3: Compute the Lab average values
Step4: Compute the saliency map using equation (2.33)
End.

In this step, the saliency weight map is used to increase the quality of the
underwater image, because the other weight maps (Laplacian contrast,
saturation) are not enough to improve the edge regions of the image.

3.2.4.3 Saturation Weight (𝑾𝒔𝒂𝒕 )


The saturation map shows which areas of the image are the most colorful and
which are the least. In an image's saturation map, areas without color
appear black, while colored areas look brighter. The specific steps of the
saturation weight are given in algorithm (3.8).
Algorithm(3.8)// Saturation Weight
Input: Homomorphic Image, Sharpening Image
Output : Enhancement Image
Begin
Step1: Calculate the saturation weight for the input images using equation
(2.34)
Step2: Calculate the normalized weight using equation (2.35)
End.

The saturation weight is applied using equation (2.34). After that, the
weight maps of each input image are combined into a single weight map, and
an aggregated weight map is obtained for every input using equation (2.36).
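The saturation weight and the per-pixel normalisation can be sketched as below. A hedged illustration: equations (2.34)-(2.35) are not reproduced in this chapter, so the sketch assumes the common definitions (deviation of R, G, B from luminance, and weights normalised to sum to one per pixel).

```python
import numpy as np

def saturation_weight(img):
    """Assumed Eq (2.34): per-pixel RMS deviation of R,G,B from luminance."""
    img = img.astype(float) / 255.0
    lum = img.mean(axis=-1, keepdims=True)
    return np.sqrt(np.mean((img - lum) ** 2, axis=-1))

def normalize_weights(weights, eps=1e-12):
    """Assumed Eq (2.35): make the weight maps of all inputs sum to one."""
    total = sum(weights) + eps
    return [w / total for w in weights]
```

A grey pixel gets zero weight, a saturated pixel a positive one, and after normalisation the maps of the two fusion inputs sum to one everywhere.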

3.2.5 Dual Pyramid
Image information is spread over several spatial scales. Pyramid-based
exposure fusion, built on multi-resolution analysis, uses a data structure
for processing and analyzing images at multiple spatial scales. In this
work, the Gaussian and Laplacian pyramids were used.

3.2.5.1 Gaussian Pyramid


A Gaussian pyramid is an image processing method in which an image is
repeatedly blurred and reduced to several smaller sizes. This method is
widely used in computer vision and image processing: when applied, edge
detection on the blurred images becomes much easier, so that the computer
can identify objects automatically. The steps of the Gaussian pyramid are
given in algorithm (3.9).
Algorithm(3.9)// Gaussian Pyramid
Input: normalized weight images, level
Output: Enhancement Image
Begin
Step1: Define the number of levels
Step2: Convolve the input image with the binomial filter
Begin for
Step3: Subsample the result of the previous step by a factor of 2
Step4: Convolve the image with the binomial filter
End for
End.

This algorithm convolves the input image with a low-pass (binomial) filter
and considers the result the first level. It then subsamples the result by a
factor of 2; each subsequent level is created by filtering the previous
level with the binomial filter and subsampling again.

3.2.5.2 Laplacian Pyramid


The Laplacian pyramid is the difference between adjoining levels. It
separates the image into spatial frequency bands so that features and
details can be extracted and highlighted based on the characteristics of the
related decomposition layers. Algorithm (3.10) presents the steps of the
Laplacian pyramid.
Algorithm(3.10)// Laplacian Pyramid
Input: Image Sharpening and Homomorphic Image, level
Output: Enhancement Image
Begin
Step1: Consider the input image as level 1
For each level starting with level 2 // levels = 10
Begin for
Step2: Down-sample the image by a factor of 2
Step3: Store the result in a variable or array
End for
Calculate the difference of Gaussians
For each level up to level−1
Begin for
Step4: Find the difference between two adjoining levels after resizing the
levels using equation (2.36), and store the result directly

End for
End.

In the Laplacian pyramid, the image is down-sampled by a factor of two,
after which each level is expanded back to the size of the previous level.
Finally, the difference between the two adjoining levels is calculated.

3.2.6 Multi-scale Fusion


Image fusion technology is defined as gathering the useful information from
a number of images and fusing it into a smaller number of images, usually a
single image. This image is more accurate and informative than any of the
inputs and contains all the useful information. The main reason for the
fusion process is not just to obtain an image with less data, but to obtain
a more accurate image that is understandable by humans and by computer
vision. The steps of the fusion method are given in algorithm (3.11).
Algorithm(3.11)// Multi-Scale Fusion
Input: Laplacian and Gaussian Pyramid Images
Output: Final Enhancement Image
Begin
Step1: Calculate the Gaussian pyramid for the normalized weights
Step2: Calculate the Laplacian pyramid for each channel of the
sharpening and homomorphic images
For each level
Begin for
Step3: Apply the fusion process to each channel using the pyramid
equation (2.39)

End for
// Pyramid reconstruction for each channel
For each level = 10 down to 2
Step4: Add the two adjoining levels after resizing
End for
Step5: Concatenate the three channels
Step6: Display the final result
End.

For each input image, the Laplace operator is used to obtain the first layer
of the pyramid; then, by down-sampling that layer, the second layer is
created, and so on. To obtain the Gaussian pyramid of the normalized weight
image, the weight map is filtered using the low-pass Gaussian (binomial)
filter corresponding to each layer of the Laplacian pyramid. Finally, the
inverse Laplacian transform is applied to the output to obtain an image with
more detail, reconstructed from the fused Laplacian pyramid decomposition
map.

Chapter Four

Experimental Results

4.1 Introduction

In this chapter, to demonstrate the effect of the proposed system, a
detailed explanation of each step is introduced. In the beginning, the
colors are corrected using the color compensation and white balance
algorithms; a less distorted image is thus obtained by eliminating the
predominance of the green and blue colors and compensating for the loss of
red. Then the image is processed in two different ways: the image sharpening
algorithm is used for edge detection and to restore color lost in the input
image, while in the other branch the gamma correction and homomorphic filter
algorithms are used to make the image brighter and enhance the illumination.
Then, weight maps are applied to both images separately for feature
extraction and to highlight details. Finally, the multi-scale fusion method
is adopted, using the Laplacian and Gaussian pyramids, to avoid the
artifacts of the simple fusion method and obtain a higher-quality image. As
a result, the suggested method gave better results in most cases.

4.2 System Specifications

The proposed system requires both hardware and software tools. The
algorithms were implemented in Matlab (R2022a) under the Windows 10
operating system, on a PC with a Core i7-85654 processor and 8.00 GB of
RAM, using the real-world Underwater Image Enhancement Benchmark (UIEB)
dataset.

4.3 Underwater Image Dataset

In this work, a set of underwater images was selected from the Underwater
Image Enhancement Benchmark (UIEB) dataset to evaluate the steps of the
proposed work. These images are divided into two groups:

First: a number of images were selected to evaluate the proposed system.

Second: 12 images, including bluish, greenish, and foggy scenes, were used
for comparison with recent works.

The following figure shows samples of the images used in the two cases.
Figure (4.1a) presents samples of the greenish, bluish, and foggy underwater
images used for comparison with recent papers, while figure (4.1b) presents
samples of the underwater images used to evaluate the proposed work.


Figure (4.1): (UIEB) real underwater images. a) Samples of underwater
images used for comparison. b) Samples of underwater images used in this
work.

4.4 Results and Evaluation Metrics

In this subsection, several metrics are used to evaluate the quality of an
underwater image. Although the traditional metrics (such as IE) give
information about the performance of the different restoration algorithms,
they do not diagnose the specific problems of underwater images, so metrics
designed for evaluating underwater image quality (such as UIQM and UCIQE)
are also used. In this work, three groups of images (bluish, greenish,
foggy) captured underwater are selected. IE describes the amount of
information that can be used to determine the color richness of the image.
UCIQE evaluates the color quality of an underwater image based on the Lab
color space. UIQM consists of three metrics (𝑈𝐼𝐶𝑀, 𝑈𝐼𝑆𝑀, UIConM): UICM
evaluates the effectiveness of color correction methods, UISM measures the
sharpness of the restored image, and UIConM measures the contrast of the
restored image.

Figure (4.2) presents the results of the three steps of the proposed method applied to a set of bluish images. The result of gamma correction appears in the second column, after the color correction algorithms have been applied to the input images and the loss of red has been compensated. Visually, the image appears darker after gamma correction, which adjusts all pixel values in an attempt to recover correct colors, although the result may appear more saturated. The third column presents the result of sharpening: the edges of objects (such as fish, divers, and plants) as well as the image texture appear sharper and clearer to the human eye. The results of the proposed method in the last column show that the restored images are better than the input images in terms of clarity and color.
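The two derived inputs described above can be sketched as follows. This is a minimal illustration: the gamma value (2.2), the 3x3 box blur inside the unsharp mask, and the sharpening amount are illustrative assumptions, not the settings used in this work (which additionally pairs gamma correction with a homomorphic filter).

```python
import numpy as np

def gamma_correct(img, gamma=2.2):
    """Power-law mapping on an image scaled to [0, 1]; gamma > 1 darkens mid-tones."""
    return np.clip(img, 0.0, 1.0) ** gamma

def box_blur3(img):
    """3x3 mean filter with edge padding (a simple stand-in for a Gaussian blur)."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding back the detail removed by blurring: I + a * (I - blur(I))."""
    return np.clip(img + amount * (img - box_blur3(img)), 0.0, 1.0)
```

Both functions leave flat regions unchanged: gamma correction remaps intensities globally, while unsharp masking only boosts the difference between a pixel and its blurred neighborhood, i.e. the edges.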

[Figure 4.2 panels: rows Blue1-Blue5; columns: Input Image, Gamma Correction, Sharpened Image, Fusion Result]

Figure (4.2): The original bluish images in the first column; the second column displays the results after applying the gamma correction algorithm; the third column shows the results after applying the sharpening algorithm; and the final results of the proposed method appear in the last column.

In figure (4.3), a set of greenish images is used, where green dominates the captured scenes. The visual results show a clear improvement in the colors of the images and in extracting the details of the scene, both objects and background. The last column shows the final results obtained after the fusion process.

Likewise, in figure (4.4) the proposed method is applied to a set of foggy images that are unclear because of blur. The visual results at every step of the proposed system show a clear improvement in the various attributes of the image, such as color, content, and edges.

In practice, the system achieved high performance and good results not only on the above images, but also on a set of randomly captured underwater images, shown in figure (4.5), which have various colors and contain different underwater objects such as rocks, coral reefs, and several types of fish.

[Figure 4.3 panels: rows Green1-Green5; columns: Input Image, Gamma Correction, Sharpened Image, Fusion Result]

Figure (4.3): The original greenish images in the first column; the second column displays the results after applying the gamma correction algorithm; the third column shows the results after applying the sharpening algorithm; and the final results of the proposed method appear in the last column.

[Figure 4.4 panels: rows Foggy1-Foggy5; columns: Input Image, Gamma Correction, Sharpened Image, Fusion Result]

Figure (4.4): The original foggy images in the first column; the second column displays the results after applying the gamma correction algorithm; the third column shows the results after applying the sharpening algorithm; and the final results of the proposed method appear in the last column.

[Figure 4.5 panels: rows Random1-Random5; columns: Input Image, Gamma Correction, Sharpened Image, Fusion Result]

Figure (4.5): The original random images in the first column; the second column displays the results after applying the gamma correction algorithm; the third column shows the results after applying the sharpening algorithm; and the final results of the proposed method appear in the last column.

Although the human eye can judge the accuracy and clarity of an image, it does not give precise results that can be relied upon. Therefore, tables (4.1), (4.2), (4.3), and (4.4) report numerical scores in terms of the IE, UCIQE, and UIQM metrics to provide an inclusive assessment of the effectiveness of the proposed method. The IE, UCIQE, and UIQM results show high values for all types of images: the IE and UIQM values are close across all selected image types, with slightly larger differences in the third metric.

Images    IE        UCIQE      UIQM
Blue1     7.909     36.353     5.525
Blue2     7.885     30.823     4.392
Blue3     7.923     33.545     5.167
Blue4     9.735     25.074     5.449
Blue5     7.929     22.6581    5.539

Table (4.1): Evaluation metrics for the set of bluish images.

Images    IE        UCIQE      UIQM
Green1    7.7955    24.2223    5.3288
Green2    7.888     33.472     5.155
Green3    7.897     32.058     5.599
Green4    7.6539    19.5623    4.4675
Green5    7.8650    24.9937    5.7119

Table (4.2): Evaluation metrics for the set of greenish images.

Images    IE        UCIQE      UIQM
Foggy1    8.007     28.196     5.496
Foggy2    7.791     21.133     5.895
Foggy3    7.878     21.513     5.992
Foggy4    7.9248    23.3115    4.9755
Foggy5    7.899     23.874     5.327

Table (4.3): Evaluation metrics for the set of foggy images.

Images     IE       UCIQE     UIQM
Random1    7.987    39.379    4.583
Random2    7.868    25.590    4.841
Random3    8.013    38.003    5.207
Random4    7.847    44.689    5.8479
Random5    5.899    26.284    7.924

Table (4.4): Evaluation metrics for the set of random images.

4.5 Comparison with Previous Work

In this section, the results of the proposed system are compared with the work of Zhang et al., published in 2022. In that work, a set of real underwater images was selected from the Underwater Image Enhancement Benchmark (UIEB) dataset and classified into three groups: greenish, bluish, and hazy. These test images contain varied underwater detail and suffer from distorted colors and poor visibility.

Figure (4.6) presents the results of applying the proposed system to the four images of the first group (bluish underwater images), which are tinted in different shades of blue. The visual results indicate accurate restoration: most objects (such as fish) are recovered with good colors.

[Figure 4.6 panels: Blue1, Blue2, Blue3, Blue4]

Figure (4.6): The upper row displays the original bluish underwater images and the lower row displays the results of the proposed system.

Figure (4.7) presents the results of applying the proposed system to four greenish underwater images containing many small details, such as a diver's feet and clothing colors, the lines on fish bodies, and some details on the rocks, all of which are recovered appropriately.

Finally, figure (4.8) shows the results of the proposed system on the third group of underwater images (four foggy images). The visual results are good: the foggy images are enhanced, with clear content and clear, distinct colors.

[Figure 4.7 panels: Green1, Green2, Green3, Green4]

Figure (4.7): The upper row displays the original greenish underwater images and the lower row displays the results of the proposed system.

[Figure 4.8 panels: Foggy1, Foggy2, Foggy3, Foggy4]

Figure (4.8): The upper row displays the original foggy underwater images and the lower row displays the results of the proposed system.

Statistically, to compare the performance of the proposed system on the three groups of underwater images (bluish, greenish, and foggy), three metrics are chosen to evaluate underwater image quality: information entropy (IE), average gradient (AG), and the patch-based contrast quality index (PCQI). The primary purpose of AG is to denote the image's sharpness; IE refers to the average amount of information, which characterizes how colorful an underwater image is; and PCQI primarily assesses the human eye's ability to perceive contrast in underwater images.
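As an illustration, the average gradient can be sketched as follows. This follows a common formulation, the mean root-mean-square of the horizontal and vertical intensity differences, and is not necessarily the exact variant used in the compared works:

```python
import numpy as np

def average_gradient(gray):
    """AG = mean of sqrt((dx^2 + dy^2) / 2) over the image; larger means sharper."""
    g = np.asarray(gray, dtype=np.float64)
    dx = np.diff(g, axis=1)[:-1, :]   # horizontal differences, cropped to match dy
    dy = np.diff(g, axis=0)[:, :-1]   # vertical differences, cropped to match dx
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```

A blurred copy of an image typically scores lower than the original, which is what makes AG a useful proxy for restoration sharpness.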

Tables (4.5), (4.6), and (4.7) display the statistical comparison between the proposed system and the compared work on the three image groups (bluish, greenish, and foggy). In table (4.5), the proposed method obtains a higher average in both the IE and PCQI metrics, while its AG average is close to the compared result. In tables (4.6) and (4.7), the proposed system obtains averages close to the compared work in the IE and PCQI metrics and a higher result in AG. The average scores in terms of IE, AG, and PCQI are summarized in table (4.8).

           Zhang et al. (2022)           Proposed system
Images     IE       PCQI     AG          IE        PCQI      AG
Blue1      7.908    1.181    10.89       7.8140    1.181     10.5386
Blue2      7.610    1.145    9.227       7.9039    1.1993    10.1335
Blue3      7.798    1.236    13.68       7.8831    1.1989    8.4881
Blue4      7.795    1.168    7.601       7.8583    1.199     11.3174
Average    7.777    1.182    10.34       7.8640    1.1942    10.119

Table (4.5): Results of the evaluation comparison on the bluish underwater images.

           Zhang et al. (2022)           Proposed system
Images     IE       PCQI     AG          IE        PCQI      AG
Green1     7.875    1.173    7.758       7.8868    1.199     7.0383
Green2     7.887    1.189    7.367       7.8563    1.1989    7.3358
Green3     7.693    1.169    5.479       7.693     1.1988    7.087
Green4     7.819    1.199    6.815       7.819     1.199     6.9526
Average    7.818    1.182    6.104       7.814     1.199     7.103

Table (4.6): Results of the evaluation comparison on the greenish underwater images.

           Zhang et al. (2022)           Proposed system
Images     IE       PCQI     AG          IE        PCQI      AG
Foggy1     7.936    1.183    8.85        7.7796    1.1989    10.5199
Foggy2     7.847    1.22     7.405       7.8221    1.1988    9.834
Foggy3     7.723    1.292    17.85       7.7918    1.199     9.3507
Foggy4     7.821    1.238    9.028       7.8678    1.1996    15.8179
Average    7.832    1.233    10.78       7.815     1.1985    11.38

Table (4.7): Results of the evaluation comparison on the foggy underwater images.

           Zhang et al. (2022)           Proposed system
Images     IE       PCQI     AG          IE        PCQI      AG
Bluish     7.777    1.182    10.34       7.864     1.1942    10.119
Greenish   7.818    1.182    6.104       7.814     1.199     7.103
Foggy      7.832    1.233    10.78       7.815     1.1985    11.38

Table (4.8): Averages of the evaluation comparison on the (bluish, greenish, foggy) underwater images.

Chapter Five
Conclusions and Suggestions for Future Work

5.1 Introduction

This work aims to build a system for restoring underwater images based on feature extraction. This part highlights some of the important conclusions drawn from this study, in addition to some suggestions that can help obtain further improved underwater images.

5.2 Conclusions

The following conclusions were drawn from the work in this thesis:

1. The proposed method is one of the approaches that achieved notable success in restoring underwater images, as the results obtained confirm.
2. The proposed system used a set of real images selected from the Underwater Image Enhancement Benchmark (UIEB) dataset to evaluate the steps of the proposed work.
3. The proposed system used a white balance algorithm for color correction, addressing the bluish-green tint that appears in underwater scenes.
4. The proposed system used two different, separate methods for contrast enhancement: the first applies gamma correction and a homomorphic filter, while the second applies an image sharpening algorithm. This gives better results than processing the image in a single way.

5. Three weight maps (Laplacian contrast weight, saliency weight, and saturation weight) were applied to both images: first, to overcome the defects introduced by the contrast enhancement step, and second, to extract features from the underwater image.
6. A Gaussian filter is used for further feature extraction and is well suited to the fusion process.
7. A Laplacian filter is used to highlight and extract the features and details of an image.
8. A multi-scale fusion process was used instead of simple image fusion to overcome the defects of the latter; it collects the important information from a set of images and fuses it into a single image that contains all the valuable information.
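Points 5-8 above can be sketched as a minimal multi-scale (pyramid) fusion: each input is decomposed into a Laplacian pyramid, its normalized weight map into a Gaussian pyramid, the bands are blended level by level, and the blended pyramid is collapsed. This illustrative sketch assumes grayscale inputs with dimensions divisible by 2**levels, takes the weight maps as given, and uses a 2x2 box kernel in place of the Gaussian/binomial kernel of the thesis:

```python
import numpy as np

def down(img):
    """Blur-and-decimate by 2 using a 2x2 box kernel (binomial stand-in)."""
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def up(img):
    """Nearest-neighbour upsampling by 2 (approximate inverse of `down`)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def multiscale_fuse(inputs, weights, levels=3):
    """Blend `inputs` by their per-pixel `weights` across pyramid levels.

    Laplacian bands of the inputs are mixed with downsampled (Gaussian-
    pyramid) versions of the normalized weights, then the fused pyramid
    is collapsed back to full resolution.
    """
    total = sum(weights)
    weights = [w / (total + 1e-12) for w in weights]   # normalize: sum to 1
    bands = []
    for _ in range(levels):
        blended = sum(w * (i - up(down(i))) for i, w in zip(inputs, weights))
        bands.append(blended)                          # fused Laplacian band
        inputs = [down(i) for i in inputs]
        weights = [down(w) for w in weights]
    out = sum(w * i for i, w in zip(inputs, weights))  # fused residual level
    for band in reversed(bands):
        out = up(out) + band                           # collapse the pyramid
    return out
```

Blending per band rather than per pixel is what avoids the halos and seams of naive weighted averaging: sharp transitions in the weight maps are smoothed at coarse scales while fine detail is transferred at full resolution.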

5.3 Suggestions for Future Work

Important points that can help improve this work in the future:

1. Focus more on color correction and further feature extraction.
2. Integrate this work with moving imagery, such as video.
3. Apply the proposed system in real time.
4. Apply the proposed system to synthetic datasets with different depths and water environments.

Abstract

The various physical properties of underwater environments, such as scattering, absorption, and the gradual loss of color, together with the effects of marine plankton, turbid water, and other factors, are the main causes of the degradation of underwater images. All of this makes it difficult to extract features from underwater scenes. Given the pressing need for, and wide interest in, image processing across various fields such as marine science, archaeology, and investigations, this area has seen extensive activity and become a much-discussed topic.

This work addresses underwater image restoration methods, which are classified into two categories: restoration in the spatial domain and restoration in the frequency domain. For every method in one domain there is a corresponding method in the other, but previous studies have shown that image enhancement in the spatial domain is more readily accurate.

In this study, a new system was built to reconstruct underwater images based on feature extraction. First, the image colors are improved using color correction algorithms (color compensation and white balance). Second, two different images are derived from the result and processed in two different ways: one is enhanced with an image sharpening algorithm, while the other is processed with gamma correction; the images are then filtered with a homomorphic filter. Feature extraction follows, by applying four weight maps to both images; finally, the two are merged using a multi-scale fusion process.

The performance of the present approach is validated on comprehensive underwater images (bluish, greenish, and foggy) and also compared with previous works on the same dataset using a number of statistical metrics. The visual and statistical results demonstrate that the proposed approach markedly outperforms the state of the art on the same dataset.
Republic of Iraq
Ministry of Higher Education and Scientific Research
Mustansiriyah University
College of Sciences, Department of Computer Science

Underwater Image Enhancement Based on Feature Extraction

By
Safa Burhan Abdulsada

Supervised by
Asst. Prof. Dr. Asmaa Sadiq Abduljabbar

2022 A.D.    1444 A.H.
