
Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 14 March 2017 | doi:10.20944/preprints201703.0086.v1

Article

Contrast Limited Adaptive Histogram Equalization Based Fusion for Underwater Image Enhancement
Jinxiang Ma 1,2, Xinnan Fan 3,4,*, Simon X. Yang 5, Xuewu Zhang 3,4 and Xifang Zhu 2
1 College of Computer and Information, Hohai University, Nanjing 210098, China; mtmjx@126.com
2 School of Electric and Photo-electronic Engineering, Changzhou Institute of Technology, Changzhou 213032, China; zhuxf9966@163.com
3 College of IOT Engineering, Hohai University, Changzhou 213022, China; fanxn@hhuc.edu.cn
4 Jiangsu Provincial Collaborative Innovation Center of World Water Valley and Water Ecological Civilization, Nanjing 211100, China; lab_112@126.com
5 Advanced Robotics and Intelligent Systems (ARIS) Lab, School of Engineering, University of Guelph, Guelph, Ontario, N1G 2W1, Canada; syang@uoguelph.ca
* Correspondence: fanxn@hhuc.edu.cn; Tel.: +86-136-0612-9868

Abstract: To improve contrast and restore color for underwater images captured by camera sensors without suffering from insufficient detail and color cast, this article proposes a fusion algorithm for image enhancement in different color spaces based on contrast limited adaptive histogram equalization (CLAHE). The original color image is first converted from the RGB color space to two special color spaces, YIQ and HSI; the RGB-to-YIQ conversion is a linear transformation, while the RGB-to-HSI conversion is nonlinear. The algorithm then applies CLAHE separately in the YIQ and HSI color spaces to obtain two enhanced images: the luminance component (Y) in the YIQ color space and the intensity component (I) in the HSI color space are enhanced with the CLAHE algorithm. CLAHE has two key parameters, block size and clip limit, which mainly control the quality of the enhanced image. The enhanced YIQ and HSI images are then converted back to RGB. When the red, green, and blue components are not coherent in the resulting YIQ-RGB or HSI-RGB images, the three components are harmonized with the CLAHE algorithm in RGB space. Finally, a self-adaptive weight selection nonlinear image enhancement, combining a four-direction Sobel edge detector with the bounded general logarithm ratio operation, fuses the YIQ-RGB and HSI-RGB images into the final image. The enhancement fusion algorithm has two key factors, the average of the Sobel edge detector and the fusion coefficient, which determine its effect. A series of evaluation metrics, including mean, contrast, entropy, colorfulness metric (CM), mean square error (MSE), and peak signal-to-noise ratio (PSNR), is used to assess the proposed enhancement algorithm. The experimental results show that the proposed algorithm provides more detail enhancement and higher colorfulness restoration than other existing image enhancement algorithms, effectively suppresses noise interference, and improves the quality of underwater images.

Keywords: image enhancement; image fusion; color space; edge detector; underwater image

1. Introduction
In the digital image application field, images with high contrast and bright colors are a crucial prerequisite for a good understanding of real scenes, for example in the detection and classification of underwater dam cracks and in multitarget detection in complex environments [1,2]. Images with a higher contrast level usually display a larger degree of color-scale difference than lower-contrast ones [3]. Light plays a crucial role in generating images of satisfactory quality in photography. Strong light causes an image to have a washed-out appearance;
on the contrary, weak light leads to an image that is too dark to be visible. In both cases, the contrast of the image is low and its detailed textures are difficult to discern [4]. Underwater images may lose contrast and suffer degradation due to poor visibility conditions and effects such as light absorption, reflection, bending, and scattering, which result in dimness and distortion [5]. Furthermore, the poor sensitivity of charge-coupled device/complementary-metal-oxide-semiconductor (CCD/CMOS) sensors leads to images with excessively narrow dynamic ranges and renders their details unclear [4]. Serious disagreements often exist between the recorded color images and direct observation of the real underwater scenes. Image enhancement aims to make image details more visible and to highlight useful information by making the best use of the colors presented on display devices; it is used to improve the quality of an image for human visual perception [6]. Therefore, it is particularly important to design effective enhancement algorithms that improve contrast and restore color for degraded underwater images.
During the last decade, a large number of algorithms have been developed for contrast enhancement of images in various applications. Effective image enhancement algorithms can be divided into two main categories [7]: (1) image restoration based on physical models, and (2) image enhancement based on image processing techniques.
For the first category, the optimal estimate of an improved image is obtained by modeling and inverting the process of image degradation. More recently, the dark channel prior (DCP) theory proposed by He et al. directly estimates depth information based on the comparison between degraded and clear images [8]. Although some improved algorithms [9-11] based on DCP theory have achieved significant performance, the results restored from images captured under overcast conditions are still unsatisfactory, especially for images with large bright and cloudy zones.
The second category of image enhancement techniques directly improves contrast and highlights details by either global or local pixel processing, regardless of the cause of the color cast and image degradation.
Recently, Retinex, homomorphic, and wavelet multi-scale techniques have been popular for enhancing images, and they perform much better than traditional methods [12]. The Retinex theory was first introduced to image enhancement by Land et al. [13]. Several algorithms are based on Retinex theory, such as single-scale Retinex (SSR) [14], multi-scale Retinex (MSR) [15], multi-scale Retinex with color restoration (MSRCR) [16], and fast multi-scale Retinex (FMSR) [17]. Among them, the MSRCR method estimates the illumination of the input image using Gaussian surround filtering at different scales and conducts enhancement by applying color restoration followed by linear stretching to the logarithm of the reflectance. Although the MSRCR method has demonstrated a strong ability to provide dynamic range compression and color restoration while preserving most details, a large number of parameters are involved and set empirically, which limits the generalization ability and often results in pseudo halos and unnatural color [18].
The classical contrast enhancement method is histogram equalization (HE), which performs well on ordinary images such as human portraits or natural scenes [19]. This method increases the contrast of an image globally by spreading out the most frequent intensity values. However, it suffers from noise amplification in relatively homogeneous regions. HE has been generalized to a local form known as adaptive histogram equalization (AHE), which computes a histogram for each sub-image to redistribute the brightness values of the image. AHE is therefore suitable for improving the local contrast of an image and bringing out more details [19]. Some AHE algorithms have made important progress in suppressing noise and enhancing contrast. The hybrid cumulative histogram equalization (HCHE) can improve the enhancement effect on hot objects rather than the background [20]. The gap adjustment histogram equalization can solve the over-enhancement problem and alleviate feature loss in the dark regions of the image [21].
However, the problem of amplified noise in relatively homogeneous regions remains, as with global histogram equalization. To overcome this problem, contrast limited adaptive histogram equalization (CLAHE) was proposed. CLAHE is a well-known block-based processing method that overcomes the over-amplification of noise in homogeneous regions that occurs with standard histogram equalization. CLAHE differs from standard HE in that it operates on small regions of the image, called tiles, computing a histogram for each distinct section of the image and using them to redistribute the lightness values of the image [22-24].
The CLAHE enhancement algorithm can be operated in different color spaces, such as RGB, YIQ, and HSI. In the RGB color model, a color space is defined in terms of red (R), green (G), and blue (B) components. These three components are monochrome intensity images. Therefore, the RGB model is an ideal tool for color generation when images are captured by a color video camera or displayed on a color monitor screen [25]. In the RGB color model, CLAHE can be applied to each of the three components individually, and the full-color RGB result is obtained by recombining the individual R, G, and B components [5]. Although the RGB color space is best suited to displaying color images, it is not suitable for image analysis and processing because of the high degree of correlation between the three components. In the YIQ format, image data consist of three components: luminance (Y), hue (I), and saturation (Q). The first component, luminance, represents grayscale information, while the last two make up the chrominance (color information) [3]. The HSI color model describes colors in terms of hue (H), saturation (S), and intensity (I). For black and white, the dominant descriptor is intensity; hue and saturation make no difference when the intensity is at its maximum or minimum level [26].
The first advantage of the YIQ and HSI formats is that grayscale information is separated from color data, so the same signal can be used for both color and black-and-white sets. The second advantage is that they take advantage of human color-response characteristics. For the purpose of enhancing a color image, the hue of any pixel should not change: if the hue changes, the color changes, thereby distorting the image. Image enhancement must improve the visual quality of an image without distorting it [6].
This paper focuses on improving the visual quality of underwater color images, especially those captured under overcast or low-light conditions. To this end, we propose an improved CLAHE image enhancement based on adaptive image fusion of the YIQ and HSI color spaces. The contributions of this paper can be summarized as follows:
(1) It is proposed to use two different color space transformations for CLAHE enhancement: the linear RGB-YIQ transformation and the nonlinear RGB-HSI transformation.
(2) It is proposed to use an improved Euclidean norm to fuse the two individual CLAHE enhancement results, the YIQ-RGB and HSI-RGB images.
(3) It is proposed to use the bounded general logarithm ratio (GLR) operation with a four-direction Sobel edge detector to enhance the overall contrast of the image and obtain richer gradient details.
The remainder of this paper is organized as follows. The following section introduces related work, including the CLAHE algorithm, the linear RGB-YIQ transformation, the nonlinear RGB-HSI transformation, the four-direction Sobel edge detector, and the bounded GLR operation. Section 3 introduces our proposed algorithm, including CLAHE in different color spaces and the enhancement fusion of the YIQ-RGB and HSI-RGB color images. Section 4 presents the experimental results and a series of evaluation metrics to show the improvements. Section 5 summarizes our work.

2. Related Works
In this section, we introduce the original CLAHE algorithm; the RGB, YIQ, and HSI color spaces; the RGB-YIQ and RGB-HSI color space transformations; the improved Sobel edge detector; and the bounded GLR operation. The RGB-YIQ conversion is a linear transformation, while the RGB-HSI conversion is nonlinear. The Sobel edge detector describes the gradient information of the original image, whose value varies from pixel to pixel. With the help of the special bounded GLR operation, the Sobel edge detector can be used to enhance the contrast of the fused image.
2.1. CLAHE algorithm
CLAHE was originally applied to enhance low-contrast medical images [23,24]. CLAHE differs from ordinary AHE in its contrast limiting: it introduces a clip limit to overcome the noise amplification problem, limiting the amplification by clipping the histogram at a predefined value before computing the cumulative distribution function (CDF). In the CLAHE technique, the input image is divided into non-overlapping contextual regions called sub-images, tiles, or blocks. CLAHE has two key parameters, block size (BS) and clip limit (CL), which mainly control the enhanced image quality. The image becomes brighter as CL increases, because the input image has very low intensity and a larger CL makes its histogram flatter. As BS becomes larger, the dynamic range and the contrast of the image also increase. Determining the two parameters at the point of maximum entropy curvature produces subjectively good image quality [27].
The CLAHE method applies histogram equalization to each contextual region. The original histogram is clipped, and the clipped pixels are redistributed to each gray level. The redistributed histogram differs from an ordinary histogram because each pixel intensity is limited to a selected maximum, but the enhanced image and the original image have the same minimum and maximum gray values [24,28]. The CLAHE method consists of the following steps:
Step 1: Dividing the original intensity image into non-overlapping contextual regions. The total number of image tiles is $M \times N$; $8 \times 8$ is a good value for preserving the image chromatic data.
Step 2: Calculating the histogram of each contextual region according to the gray levels present in the region.
Step 3: Calculating the contrast-limited histogram of the contextual region according to the CL value. The average number of pixels per gray level is
$$N_{avg} = (N_{rX} \times N_{rY}) / N_{gray} \qquad (1)$$
where $N_{avg}$ is the average number of pixels, $N_{gray}$ is the number of gray levels in the contextual region, and $N_{rX}$ and $N_{rY}$ are the numbers of pixels in the X and Y dimensions of the contextual region.
The actual CL can be expressed as
$$N_{CL} = N_{clip} \times N_{avg} \qquad (2)$$
where $N_{CL}$ is the actual CL and $N_{clip}$ is the normalized CL in the range [0, 1]. If the number of pixels in a gray level is greater than $N_{CL}$, the pixels are clipped. With the total number of clipped pixels defined as $N_{\Sigma clip}$, the average number of remaining pixels to distribute to each gray level is
$$N_{avggray} = N_{\Sigma clip} / N_{gray} \qquad (3)$$
The histogram clipping rule is given by the following statements:
If $H_{region}(i) > N_{CL}$, then
$$H_{region\_clip}(i) = N_{CL} \qquad (4)$$
else if $H_{region}(i) + N_{avggray} > N_{CL}$, then
$$H_{region\_clip}(i) = N_{CL} \qquad (5)$$
else
$$H_{region\_clip}(i) = H_{region}(i) + N_{avggray} \qquad (6)$$
where $H_{region}(i)$ and $H_{region\_clip}(i)$ are the original and clipped histograms of each region at the i-th gray level.
Step 4: Redistributing the remaining pixels until they have all been distributed. The redistribution step is given by
$$Step = N_{gray} / N_{remain} \qquad (7)$$
where $N_{remain}$ is the remaining number of clipped pixels and Step is a positive integer of at least 1. The program searches from the minimum to the maximum gray level with the above step. If the number of pixels in a gray level is less than $N_{CL}$, the program distributes one pixel to that gray level. If not all pixels have been distributed when the search ends, the program calculates a new step according to Eq. (7) and starts a new search round until all remaining pixels are distributed.
Step 5: Enhancing intensity values in each region with the Rayleigh transform. The clipped histogram is transformed into a cumulative probability, $P_{input}(i)$, which is used to create the transfer function. The underwater image appears more natural when the Rayleigh distribution is used. The Rayleigh forward transform is given by
$$y(i) = y_{min} + \sqrt{2\alpha^2 \ln\left(\frac{1}{1 - P_{input}(i)}\right)} \qquad (8)$$
where $y_{min}$ is the lower bound of the pixel value and $\alpha$ is a scaling parameter of the Rayleigh distribution that is set depending on each input image. In this study, the $\alpha$ value of the Rayleigh function is set to 0.04. The output probability density of each intensity value can be expressed as
$$p(y(i)) = \frac{y(i) - y_{min}}{\alpha^2} \cdot \exp\left(-\frac{(y(i) - y_{min})^2}{2\alpha^2}\right) \quad \text{for } y(i) \geq y_{min} \qquad (9)$$
A higher $\alpha$ value results in more significant contrast enhancement in the image, but also increases the saturation value and amplifies noise.
Step 6: Reducing abrupt change effects. The output of the transfer function in Eq. (8) is re-scaled using a linear contrast stretch:
$$y(i) = \frac{x(i) - x_{min}}{x_{max} - x_{min}} \qquad (10)$$
where $x(i)$ is the input value from the transfer function, and $x_{min}$ and $x_{max}$ denote the minimum and maximum values of the transfer function.
Step 7: Calculating the new gray-level assignment of pixels within a sub-matrix contextual region using bilinear interpolation between four different mappings in order to eliminate boundary artifacts.
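For concreteness, the clipping and redistribution of Steps 3-4 can be sketched in a few lines of NumPy. The function below is an illustrative transcription of Eqs. (1)-(7) under our own naming (clip_histogram, n_clip, n_rx, n_ry), not the authors' MATLAB implementation; the saturation guard at the end is our addition for small clip limits.

```python
import numpy as np

def clip_histogram(hist, n_clip, n_rx, n_ry):
    """Clip a regional histogram at N_CL (Eqs. (1)-(2)) and redistribute
    the clipped pixels over the gray levels (Eqs. (3)-(7))."""
    hist = hist.astype(float)
    n_gray = hist.size                             # gray levels in the region
    n_avg = (n_rx * n_ry) / n_gray                 # Eq. (1)
    n_cl = n_clip * n_avg                          # Eq. (2)
    excess = np.sum(np.maximum(hist - n_cl, 0.0))  # total clipped pixels
    clipped = np.minimum(hist, n_cl)
    # Eqs. (4)-(6): add the per-level share N_avggray, re-clipping at N_CL
    clipped = np.minimum(clipped + excess / n_gray, n_cl)
    remain = hist.sum() - clipped.sum()
    # Eq. (7): sweep the gray levels with step N_gray / N_remain
    while remain >= 1:
        step = max(1, int(n_gray / remain))
        gave = 0
        for i in range(0, n_gray, step):
            if clipped[i] < n_cl and remain >= 1:
                clipped[i] += 1
                remain -= 1
                gave += 1
        if gave == 0:  # histogram saturated at N_CL everywhere; stop
            break
    return clipped
```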
2.2. Color spaces
Color spaces provide a method for specifying, ordering, and manipulating colors. The goal of a color model is to facilitate the specification of colors in a standardized way [29]. In general, a color space is a mathematical representation of a set of colors, and color spaces can be classified into three basic groups: those based on the human visual system (e.g., RGB, HSV, HSI), application-specific spaces (e.g., YCbCr, JPEG-YCbCr, YUV, YIQ), and CIE color spaces (e.g., CIELab) [30,31]. Within the first category, the most widely used color space in digital image capture and display is RGB. Phenomenal color spaces such as HSV (hue-saturation-value) and HSI, which are simple transformations of RGB space, also form part of this first category [30]. The HSV space is more akin to the human conceptual understanding of color [32]. The second category deals with application-based color spaces. This includes CMY (cyan-magenta-yellow), used in printing applications, and TV-related color spaces such as the National Television System Committee (NTSC) YIQ, YUV, and YCbCr [30]. The third category deals with the CIE color spaces. The International Commission on Illumination (CIE) specifies three color spaces, CIE XYZ, CIELab, and CIELuv, of which CIELab and CIELuv provide a perceptually uniform space [30].
Different color spaces usually display different color characteristics suitable for different visual tasks, such as detection, indexing, and recognition [33-36]. The choice of a suitable color space for color representation remains a challenge in color image processing research [37,38].
2.2.1. RGB color space


The RGB model is the primary, source color space. In the RGB model, a color space is defined in terms of the R, G, and B components, known as the primary colors. These three components are monochrome intensity images. In this model, a digital image consists of three planes of independent images, each of which stores the values of R, G, or B. The RGB model is a hardware-oriented color model in which the R, G, and B components are equivalent and strongly correlated; a change in one component therefore affects the others. It is an ideal tool for color generation when images are captured by a color video camera or displayed on a color monitor screen [25].
The RGB color space is the most common, found in computer systems as well as television and video, and it is widely used in computer graphics and imaging [39]. Most color spaces have been developed for specific applications, but all derive from the same concept: the trichromatic theory of the primary colors R, G, and B [40]. Other color spaces are usually calculated from the RGB color space via either linear or nonlinear transformations [33-36,41]. However, RGB is not very efficient when dealing with real-world images and is not appropriate for the entire spectrum of image processing tasks [39]. Color image processing is motivated by two important factors: first, a similarity to human vision, which is fully chromatic; and second, the additional information that chromaticity contributes to the analysis of images [29].
The RGB model, although computationally convenient, is not very useful for color specification and recognition [29]. It is a perceptually nonuniform color space, and one of its limitations is that the chrominance and intensity components are not explicitly defined [42]. Human beings do not recognize a color as an amount of R, G, or B components, but through the perceptual attributes of hue, saturation, and intensity [29]. The RGB model also has serious disadvantages for many types of image processing, such as enhancement, segmentation, or classification. Although the RGB model is best suited to displaying color images, preliminary results show that this space is not suitable for image analysis and processing because of the high degree of correlation between the R, G, and B components [29].
For the purpose of enhancing a color image, the hue of any pixel should not change: if the hue changes, the color changes, thereby distorting the image [6]. All colors are seen as variable combinations of the three primaries in the RGB color model, which is usually used to represent and display images. Several color models that decouple luminance and chromaticity are described in the following sections in terms of their relations to the RGB model [43]. One related direction is to extract color features using a multispace adaptive clustering algorithm while texture features are calculated using a multichannel texture decomposition scheme.
2.2.2. YIQ color space
The YIQ model is the color primary system adopted by NTSC for color television broadcasting.
Like RGB, the YIQ color space is a device-dependent color space which means the actual color you
see on your monitor depends on what kind of monitor you are using and what its settings are [44].
In the NTSC format, image data consists of three components: luminance (Y), hue (I), and saturation
(Q). The first component, luminance, represents gray scale information, while the last two
components make up chrominance (color information) [26].
YIQ Color space is widely used in the NSTC and PAL televisions of different countries. First
advantages of this format is that gray scale information is separated from color data, so the same
signal can be used for both color and black & white sets. Second advantage is that it takes
advantage of human color-response characteristics. The eye is more sensitive to changes in the
orange-blue (I) range than in the purple-green range (Q), therefore less bandwidth is required for Q
than for I [3].
In this color space, the Y component stands for luminance or brightness; the I component mainly tracks shifts from blue through purple to red (with increasing I), and the Q component mainly tracks the amount of green; the I and Q components jointly represent the chromatic attributes [44].
In addition, the NTSC YIQ representation is optimized with respect to the human visual system, so that the bandwidths of the I and Q components can be reduced without noticeable loss of visual quality [45,46].
As mentioned earlier, in the YIQ representation the chrominance components are separated from the luminance component; as a result, shadows and local inhomogeneities are generally better modeled than in the RGB color space. Colors with high degrees of similarity in RGB space may be difficult to distinguish, while the YIQ representation may provide much stronger discrimination [42]. Its purpose is to exploit certain characteristics of the human visual system to maximize the use of a fixed bandwidth.
The YIQ color space is defined by means of a linear transformation from the RGB color space [47]. The RGB-to-YIQ transformation is given as
$$\begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.596 & -0.274 & -0.322 \\ 0.211 & -0.523 & 0.312 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (11)$$
The decorrelation of the R, G, and B component images makes the Y, I, and Q component images complementary to each other [44].
The backward YIQ-to-RGB transformation is given as
$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1.000 & 0.9562 & 0.6214 \\ 1.000 & -0.2727 & -0.6468 \\ 1.000 & -1.1037 & 1.7006 \end{bmatrix} \begin{bmatrix} Y \\ I \\ Q \end{bmatrix} \qquad (12)$$
For RGB values in the range 0-255, Y has a range of 0-255, I has a range of 0 to ±152, and Q has a range of 0 to ±134. In the NTSC YIQ representation, the restoration of the Y component is critical because this component contains 85%-95% of the total energy and has a large bandwidth; the bandwidths of the I and Q components are much smaller than that of the Y component [48].
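As a quick illustration of Eqs. (11)-(12), both transforms reduce to fixed 3 × 3 matrix products. The sketch below (our naming) applies them per pixel to an H × W × 3 array of channels normalized to [0, 1].

```python
import numpy as np

# Eq. (11): forward RGB-to-YIQ matrix
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

# Eq. (12): backward YIQ-to-RGB matrix
YIQ2RGB = np.array([[1.000,  0.9562,  0.6214],
                    [1.000, -0.2727, -0.6468],
                    [1.000, -1.1037,  1.7006]])

def rgb_to_yiq(rgb):
    return rgb @ RGB2YIQ.T   # applies Eq. (11) to every pixel vector

def yiq_to_rgb(yiq):
    return yiq @ YIQ2RGB.T   # applies Eq. (12) to every pixel vector
```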

2.2.3. HSI color space


The HSI model is the most frequently used application-oriented color space. It is based on human visual perception theory and is suitable for describing and interpreting color. The HSI model defines a color space in terms of hue (H), saturation (S), and intensity (I) components, decoupling the achromatic information (I component) from the chromatic information (H and S components) in a color image. Each pixel of an image represented in this space thus carries three data: hue and saturation, which provide the color information, and intensity, which describes the brightness. It is therefore an ideal tool for developing image-processing algorithms based on color descriptions that are natural and intuitive to humans [31,49].
The HSI color space is a very important and attractive color model for image processing applications because it represents colors similarly to how the human eye senses them [26,28]. It is an application-oriented color model, and to some extent the H, S, and I components are independent of each other, so one component can be processed separately without affecting the others [50], which significantly simplifies the workload of image analysis and processing [25]. For the purpose of enhancing a color image, the hue of any pixel should not change: if the hue changes, the color changes, thereby distorting the image [6].
There are two reasons why the HSI color space is chosen for the CLAHE algorithm: first, compared with the RGB color space, the HSI color space is much closer to human perception of color; second, the intensity component is the weighted average of the three color channels and is less sensitive to noise [26,51].
The hue component describes the color itself as an angle in [0°, 360°]: 0° means red, 120° green, 240° blue, 60° yellow, and 300° magenta. The saturation component indicates how much the color is diluted with white; its range is [0, 1]. The intensity range is also [0, 1], where 0 means black and 1 means white [49].
The HSI space is calculated from the primary RGB color space via a nonlinear transformation. The conversion from RGB to HSI is given as [49]
$$I = \frac{1}{3}(R + G + B), \qquad S = 1 - \frac{3}{R + G + B}\min(R, G, B), \qquad H = \begin{cases} \theta & \text{if } B \leq G \\ 360^\circ - \theta & \text{if } B > G \end{cases} \qquad (13)$$
where
$$\theta = \arccos\left\{ \frac{\frac{1}{2}[(R - G) + (R - B)]}{\sqrt{(R - G)^2 + (R - B)(G - B)}} \right\} \qquad (14)$$
If R, G, and B have been normalized to the range [0, 1], then S and I are in the range [0, 1], and θ is the angle between the point and the red axis in the HSI color space.
Conversely, the conversion from HSI space back to RGB space is given as [26]
For the RG sector ($0^\circ \leq H < 120^\circ$):
$$B = I(1 - S), \qquad R = I\left[1 + \frac{S\cos H}{\cos(60^\circ - H)}\right], \qquad G = 3I - (R + B) \qquad (15)$$
For the GB sector ($120^\circ \leq H < 240^\circ$), with $H = H - 120^\circ$:
$$R = I(1 - S), \qquad G = I\left[1 + \frac{S\cos H}{\cos(60^\circ - H)}\right], \qquad B = 3I - (R + G) \qquad (16)$$
For the BR sector ($240^\circ \leq H < 360^\circ$), with $H = H - 240^\circ$:
$$G = I(1 - S), \qquad B = I\left[1 + \frac{S\cos H}{\cos(60^\circ - H)}\right], \qquad R = 3I - (G + B) \qquad (17)$$
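A direct NumPy transcription of the forward conversion, Eqs. (13)-(14), might look as follows; the small eps term and the clip on the arccos argument are our numerical guards for gray pixels, not part of the paper's formulas.

```python
import numpy as np

def rgb_to_hsi(rgb, eps=1e-8):
    """RGB (H x W x 3, channels in [0, 1]) to HSI per Eqs. (13)-(14)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0                                        # intensity
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))  # Eq. (14)
    h = np.where(b <= g, theta, 360.0 - theta)                    # Eq. (13)
    return np.stack([h, s, i], axis=-1)
```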

2.3. Sobel edge detector
The importance of edge detection arises from the fact that edges capture local features and provide useful information about an image. In images, edges are marked by discontinuities or significant variations in intensity or gray level, providing the location of object contours [52,53]. Edge detection, one of the fundamental problems of low-level image processing, plays a very important role in realizing complete vision-based understanding and monitoring systems for automatic scene analysis [54]. The quality of the detected edges plays an equally important role in realizing complex automated computer and machine vision systems [53,54].
An edge is a collection of connected pixels where the intensity level changes abruptly [26]. Edges in digital images are defined as the positions where the intensity or brightness of two neighboring pixels differs significantly [55]. Edges can usually be found in parts of an image where a transition occurs, either between different objects, between different regions, or between objects and the background; in this view, gradients are effective descriptors of edges [53]. Edges provide significant information about the objects present in the scene, which helps in achieving higher-level objectives such as segmentation, object recognition, and scene analysis [55].
For digital images, derivatives can be approximated with discrete differentiation, so first-order edge detectors are easy to implement and widely used. There are several methods for edge detection and extraction, such as the Sobel and Roberts operators and the Canny algorithm. The Prewitt and Sobel operators are examples of gradient-based edge detectors [56,57]. Among them, the Sobel operators are especially preferred because they combine differentiation with image smoothing and thus produce less fragmentary edge images [53]. The Sobel edge detector is more popular than simple gradient operators owing to its reduced noise sensitivity and easier implementation [58]. The Sobel operator is chosen in this paper because of its low computational cost and its ability to capture the direction of the edges.
The Sobel operator computes an approximation of the gradient of the image intensity function. The original Sobel filter uses two 3 × 3 spatial masks, which are convolved with the original image to approximate the gradient [55]. The Sobel edge detector [26] performs a spatial gradient measurement on an image and thus emphasizes regions of high spatial frequency, which correspond to edges. Typically, it is used to find the approximate absolute gradient magnitude at each pixel of an input grayscale image.
The original Sobel edge detection filter is a commonly used edge detector that computes an approximate gradient of the image intensity function. For each pixel in the image, it obtains the vertical and horizontal components of the gradient by convolution with two 3 × 3 spatial masks defined as
$$S_1 = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \qquad (18)$$
$$S_2 = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \qquad (19)$$
where $S_1$ is the vertical spatial mask in the 90° direction and $S_2$ is the horizontal one in the 0° direction.
The accuracy of the Sobel operator for edge detection is relatively low because it uses only two masks, detecting edges in the horizontal and vertical directions only. This problem can be overcome with the Sobel compass operator, which uses a larger set of masks with narrowly spaced orientations: four masks (0°, 45°, 90°, and 135°), each providing the edge strength along one of the four compass directions [26,53,55]. The two additional spatial masks can be expressed as
$$S_3 = \begin{bmatrix} 0 & 1 & 2 \\ -1 & 0 & 1 \\ -2 & -1 & 0 \end{bmatrix} \qquad (20)$$
$$S_4 = \begin{bmatrix} 2 & 1 & 0 \\ 1 & 0 & -1 \\ 0 & -1 & -2 \end{bmatrix} \qquad (21)$$
where $S_3$ is the spatial mask in the 45° direction and $S_4$ is the one in the 135° direction. To find the edges in all possible directions, all four masks (0°, 45°, 90°, and 135°) are applied to each pixel of the input image.
Let $Z(i,j)$ denote the 3 × 3 image neighbourhood of pixel $(i,j)$:
$$Z(i,j) = \begin{bmatrix} z(i-1,j-1) & z(i-1,j) & z(i-1,j+1) \\ z(i,j-1) & z(i,j) & z(i,j+1) \\ z(i+1,j-1) & z(i+1,j) & z(i+1,j+1) \end{bmatrix} \qquad (22)$$
where $z(i,j)$ denotes the original gray value of pixel $(i,j)$.
The masks compute the average gradient components across neighboring lines or columns, and the local edge strength is defined as the gradient magnitude, the $L^2$ norm of the corresponding gradient vector. The gradient components in the four directions can be expressed as
$$G_k(i,j) = \sum_{m=0}^{2}\sum_{n=0}^{2} z(i+m-1,\, j+n-1) \times S_k(m,n), \quad k = 1, 2, 3, 4 \qquad (23)$$
The gradient image at pixel $(i,j)$ is then defined as
$$g(i,j) = \sqrt{\sum_{k=1}^{4} G_k^2(i,j)} \qquad (24)$$
The gradient image is normalized as
$$g_n(i,j) = \frac{\log(g(i,j) + 1 + \delta_1)}{\log(\max(g(i,j)) + \delta_2)} \qquad (25)$$
where $\delta_1$ and $\delta_2$ are small positive disturbance quantities that ensure $g_n \in (0, 1)$.
With this gradient information, the adaptive gain of pixel $(i,j)$ can be expressed as [59]
$$\lambda(i,j) = 2^{a \times g_n(i,j)} + b \qquad (26)$$
where $a$ and $b$ are adjustable positive quantities that keep the average of $\lambda$ in the range $\bar{\lambda} \in (1, 4)$.
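Putting Eqs. (18)-(26) together, a compact sketch of the four-direction gradient magnitude and the adaptive gain could read as below. The default values a = 1.0 and b = 0.5 are illustrative choices (not the paper's) that keep the gain within (1, 4), and scipy.ndimage.convolve stands in for the mask convolutions.

```python
import numpy as np
from scipy.ndimage import convolve

# The four compass masks of Eqs. (18)-(21): 90, 0, 45 and 135 degrees.
MASKS = [np.array([[-1, -2, -1], [ 0,  0,  0], [ 1,  2,  1]], float),
         np.array([[-1,  0,  1], [-2,  0,  2], [-1,  0,  1]], float),
         np.array([[ 0,  1,  2], [-1,  0,  1], [-2, -1,  0]], float),
         np.array([[ 2,  1,  0], [ 1,  0, -1], [ 0, -1, -2]], float)]

def adaptive_gain(gray, a=1.0, b=0.5, d1=1e-6, d2=1e-6):
    """Four-direction Sobel magnitude (Eqs. (23)-(24)), log-normalized
    gradient (Eq. (25)), and adaptive gain lambda (Eq. (26))."""
    g = np.sqrt(sum(convolve(gray.astype(float), m) ** 2 for m in MASKS))
    g_n = np.log(g + 1 + d1) / np.log(g.max() + d2)
    return 2.0 ** (a * g_n) + b
```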
2.4. GLR model in bounded operation
Bounded operations keep the value domains of the input and output within a closed range, which solves the problem of overstepping the boundary. Three GLR models in bounded operation are introduced in this section: the add, subtraction, and multiplication models [59]. Let the gray value of the image be $I(i,j)$; the normalized gray value is given by
$$I_n(i,j) = \frac{I(i,j) + 1 + \delta_1}{M + \delta_2} \qquad (27)$$
where $\delta_1$ and $\delta_2$ are small positive disturbance quantities and $M = 256$ for an 8-bit gray image. For the normalized gray value $x = I_n(i,j) \in (0, 1)$, the nonlinear transform $p(x) = \frac{1-x}{x}$, the logarithmic transform $\varphi(x)$, and its inverse $\varphi^{-1}(x)$ are expressed as
$$\varphi(x) = \log[p(x)] = \log\left(\frac{1-x}{x}\right) \qquad (28)$$
$$\varphi^{-1}(x) = \frac{1}{e^x + 1} \qquad (29)$$
The symbols ⊕, ◎, and ⊗ denote the add, subtraction, and multiplication operations of the GLR model, defined as
$$x_1 \oplus x_2 = \varphi^{-1}[\varphi(x_1) + \varphi(x_2)] = \frac{1}{p(x_1)\,p(x_2) + 1} \qquad (30)$$
$$x_1 \,◎\, x_2 = \varphi^{-1}[\varphi(x_1) - \varphi(x_2)] = \frac{1}{p(x_1)\,p(x_2)^{-1} + 1} \qquad (31)$$
$$x \otimes r = \varphi^{-1}[r\,\varphi(x)] = \frac{1}{p(x)^r + 1} \qquad (32)$$
where $x_1$ and $x_2$ are two channel signals of the input image and $r$ is an arbitrary real number. The three GLR model operations are presented in Figure 1.

Figure 1. GLR model operations: (a) GLR add operation, $y = x \oplus r$; (b) GLR subtraction operation, $y = x \,◎\, r$; (c) GLR multiplication operation, $y = x \otimes r$.
The add and subtraction operations are inverses of each other in the GLR model. They can adjust the brightness of the image in either the low or the high gray-value segment, but the adjustments are not symmetrical between the two segments.
In the multiplication operation of the GLR model, under the condition $r > 1$, pixel values near the zero point of the GLR model ($x = 0.5$) are stretched, while pixel values far from the zero point are compressed [59]. This operation can adjust the brightness of the image in both the low and the high gray-value segments, and the adjustments are symmetrical between the two segments, which is very different from the add and subtraction operations above; this effect cannot be achieved with the traditional multiplication operation. The GLR operations are bounded and closed, which solves the problem of overstepping the boundary and makes the details of the enhanced image clearer and the overall contrast higher.
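Because Eqs. (28)-(32) are closed-form, the three GLR operations transcribe almost verbatim into code. The sketch below assumes the arrays x, x1, and x2 are already confined to the open interval (0, 1), as the normalization of Eq. (27) guarantees.

```python
import numpy as np

def p(x):
    return (1.0 - x) / x                  # nonlinear transform of Eq. (28)

def glr_add(x1, x2):
    return 1.0 / (p(x1) * p(x2) + 1.0)    # Eq. (30)

def glr_sub(x1, x2):
    return 1.0 / (p(x1) / p(x2) + 1.0)    # Eq. (31)

def glr_mul(x, r):
    return 1.0 / (p(x) ** r + 1.0)        # Eq. (32)
```

Note that every result stays inside (0, 1) by construction, which is exactly the boundedness property the text describes.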

3. Proposed Algorithm
An algorithm to enhance underwater images captured by CCD/CMOS camera sensors has to improve contrast and restore the chromatic information without suffering from color cast or deficient detail enhancement. In the proposed algorithm, the underwater image is first converted from the RGB color space to the YIQ color space with a linear transformation and to the HSI color space with a nonlinear transformation. The chromatic information (hue and saturation) and the brightness information are independent in the YIQ and HSI color spaces. Secondly, the brightness information is used to enhance the contrast with Rayleigh CLAHE, while the chromatic information is preserved: the luminance component (Y) of the YIQ image is enhanced with Rayleigh CLAHE to obtain the improved luminance component (Y1), and the intensity component (I) of the HSI image is enhanced with Rayleigh CLAHE to obtain the improved intensity component (I1). Then, the enhanced YIQ-space and HSI-space images are transformed back to RGB space to obtain the enhanced YIQ-RGB and HSI-RGB images. When the red, green, and blue components are not coherent in the YIQ-RGB or HSI-RGB images, the three components have to be harmonized. Finally, the YIQ-RGB and HSI-RGB images are combined into an enhanced fused RGB image with an adaptive Euclidean norm, using the GLR multiplication operation with the Sobel edge detector. The pipeline of our proposed algorithm is shown in Figure 2.

Figure 2. The pipeline of the proposed algorithm.

3.1. CLAHE in different color spaces


3.1.1. CLAHE in RGB color space
In the RGB color model, a digital image consists of three planes of independent images, each storing the values of R, G, or B. The algorithm of CLAHE in RGB color space can include the following steps: first, the original image is divided into three independent images, the R, G, and B images; then, the three independent images are separately enhanced with Rayleigh CLAHE to obtain improved R, G, and B images; finally, the improved R, G, and B images are combined into the enhanced CLAHE RGB color image.
The algorithm of CLAHE in RGB color space is not very difficult, and a more coherent and chromatic image is obtained in the end. This algorithm is useful for harmonizing the color image when the three R, G, and B components of the original image are seriously unbalanced, but the enhancement effects on contrast and information entropy are very limited.
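For reference, per-channel CLAHE in RGB space can be approximated with OpenCV's built-in CLAHE, which is a uniform-distribution variant rather than the Rayleigh variant of Section 2.1; note also that OpenCV's clipLimit is a multiplier, not the normalized CL in [0, 1] used in this paper.

```python
import cv2

def clahe_rgb(img_bgr, clip=2.0, tiles=(8, 8)):
    """Apply CLAHE to each 8-bit channel independently and recombine."""
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    return cv2.merge([clahe.apply(ch) for ch in cv2.split(img_bgr)])
```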
3.1.2. CLAHE in YIQ color space
The YIQ color space is defined by a linear transformation from the RGB color space. In the YIQ model, image data consist of three components: Y, I, and Q. The first component, Y, represents grayscale information, while the last two make up the chrominance (color information). Because the YIQ representation is optimized with respect to the human visual system, the YIQ color space is widely used in NTSC and PAL televisions in different countries [26].
The algorithm of CLAHE in YIQ color space can include the following steps:
Step 1: The three components R, G, and B of the RGB image are normalized to the range [0, 1]:
$$R_n(x,y) = \frac{R(x,y)}{255}, \quad G_n(x,y) = \frac{G(x,y)}{255}, \quad B_n(x,y) = \frac{B(x,y)}{255} \qquad (33)$$
Step 2: Linear transformation from the RGB color space to the YIQ color space:
$$\begin{bmatrix} Y_n \\ I_n \\ Q_n \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.596 & -0.274 & -0.322 \\ 0.211 & -0.523 & 0.312 \end{bmatrix} \begin{bmatrix} R_n \\ G_n \\ B_n \end{bmatrix} \qquad (34)$$
Step 3: The luminance component $Y_n$ of the YIQ image is enhanced with Rayleigh CLAHE to obtain the improved luminance component $Y_n'$.
Step 4: Linear transformation from the YIQ color space back to the RGB color space:
$$\begin{bmatrix} R_n' \\ G_n' \\ B_n' \end{bmatrix} = \begin{bmatrix} 1.000 & 0.9562 & 0.6214 \\ 1.000 & -0.2727 & -0.6468 \\ 1.000 & -1.1037 & 1.7006 \end{bmatrix} \begin{bmatrix} Y_n' \\ I_n \\ Q_n \end{bmatrix} \qquad (35)$$
Step 5: The normalized RGB image is scaled back to the range [0, 255]:
$$R' = 255 \times R_n', \quad G' = 255 \times G_n', \quad B' = 255 \times B_n' \qquad (36)$$
Step 6: The final output RGB image is calculated as
$$R_1 = \frac{R' - R'_{min}}{R'_{max} - R'_{min}}, \quad G_1 = \frac{G' - G'_{min}}{G'_{max} - G'_{min}}, \quad B_1 = \frac{B' - B'_{min}}{B'_{max} - B'_{min}} \qquad (37)$$
where $R'_{min} = \min\{R'\}$, $R'_{max} = \max\{R'\}$; $G'_{min} = \min\{G'\}$, $G'_{max} = \max\{G'\}$; $B'_{min} = \min\{B'\}$, $B'_{max} = \max\{B'\}$.
The CLAHE-enhanced output RGB image in YIQ color space is defined as the YIQ-RGB image, and its three components are defined as $R_1$, $G_1$, and $B_1$.

3.1.3. CLAHE in HSI color space


The HSI space is calculated from the primary RGB color space via a nonlinear transformation. The HSI color space is based on human visual perception theory and is suitable for describing and interpreting color. The HSI model defines a color space in terms of the H, S, and I components, decoupling the achromatic information (I component) from the chromatic information (H and S components) in a color image [49]. Compared with the RGB color space, the HSI color space is much closer to human perception of color; moreover, the intensity component is the weighted average of the three color channels and is less sensitive to noise [51]. The HSI model is therefore the most frequently used application-oriented color space.
The algorithm of CLAHE in HSI color space can include the following steps:
Step 1: The three original components R, G, and B of the RGB image are normalized to $R_n$, $G_n$, and $B_n$; the normalization equation is the same as Eq. (33).
Step 2: Nonlinear transformation from the RGB color space to the HSI color space:
$$I_{nn} = \frac{1}{3}(R_n + G_n + B_n), \quad S_n = 1 - \frac{3}{R_n + G_n + B_n}\min(R_n, G_n, B_n), \quad H_n = \begin{cases} \theta & \text{if } B_n \leq G_n \\ 360^\circ - \theta & \text{if } B_n > G_n \end{cases} \qquad (38)$$
where
$$\theta = \arccos\left\{ \frac{\frac{1}{2}[(R_n - G_n) + (R_n - B_n)]}{\sqrt{(R_n - G_n)^2 + (R_n - B_n)(G_n - B_n)}} \right\} \qquad (39)$$
Step 3: The intensity component $I_{nn}$ of the HSI image is enhanced with Rayleigh CLAHE to obtain the improved intensity component $I_{nn}'$.
Step 4: Nonlinear transformation from the HSI color space ($H_n S_n I_{nn}'$) back to the RGB color space ($R_n'' G_n'' B_n''$); see Eqs. (15)-(17).
Step 5: The normalized RGB image is scaled back to the range [0, 255]:
$$R'' = 255 \times R_n'', \quad G'' = 255 \times G_n'', \quad B'' = 255 \times B_n'' \qquad (40)$$
Step 6: The final output RGB image is calculated as
$$R_2 = \frac{R'' - R''_{min}}{R''_{max} - R''_{min}}, \quad G_2 = \frac{G'' - G''_{min}}{G''_{max} - G''_{min}}, \quad B_2 = \frac{B'' - B''_{min}}{B''_{max} - B''_{min}} \qquad (41)$$
where $R''_{min} = \min\{R''\}$, $R''_{max} = \max\{R''\}$; $G''_{min} = \min\{G''\}$, $G''_{max} = \max\{G''\}$; $B''_{min} = \min\{B''\}$, $B''_{max} = \max\{B''\}$.
The CLAHE-enhanced output RGB image in HSI color space is defined as the HSI-RGB image, and its three components are defined as $R_2$, $G_2$, and $B_2$.

3.2. Enhancement fusion of YIQ-RGB and HSI-RGB images


The two CLAHE-enhanced images, YIQ-RGB and HSI-RGB, are integrated using a Euclidean norm [5], and the fused image is then enhanced by the GLR multiplication operation with the Sobel edge detector. The CLAHE enhancement fusion can include the following steps:
Step 1: The fused image in RGB color space is calculated with a Euclidean norm as
$$RGB_f = \gamma \cdot \left[ \sqrt{R_1^2 + R_2^2},\ \sqrt{G_1^2 + G_2^2},\ \sqrt{B_1^2 + B_2^2} \right] \qquad (42)$$
where $\gamma$ is the fusion coefficient, in the range [0.50, 0.95]. The three components of the YIQ-HSI fused image are defined as $R_f$, $G_f$, and $B_f$, each in the range [0, 255]. The fusion coefficient $\gamma$ is chosen to keep the Mean of the enhanced fused image in the range [128-5, 128+5]; the fused image becomes brighter as $\gamma$ increases.
Step 2: The three components $R_f$, $G_f$, and $B_f$ of the RGB image are normalized as
$$R_{fn}(x,y) = \frac{R_f(x,y)}{255}, \quad G_{fn}(x,y) = \frac{G_f(x,y)}{255}, \quad B_{fn}(x,y) = \frac{B_f(x,y)}{255} \qquad (43)$$
Step 3: The three normalized components $R_{fn}$, $G_{fn}$, and $B_{fn}$ are independently subjected to the GLR multiplication operation with the Sobel edge detector:
$$R'_{fn}(x,y) = \lambda(x,y) \otimes R_{fn}(x,y) = \varphi^{-1}[\lambda(x,y)\,\varphi(R_{fn}(x,y))] = \frac{1}{p(R_{fn}(x,y))^{\lambda(x,y)} + 1}$$
$$G'_{fn}(x,y) = \lambda(x,y) \otimes G_{fn}(x,y) = \varphi^{-1}[\lambda(x,y)\,\varphi(G_{fn}(x,y))] = \frac{1}{p(G_{fn}(x,y))^{\lambda(x,y)} + 1} \qquad (44)$$
$$B'_{fn}(x,y) = \lambda(x,y) \otimes B_{fn}(x,y) = \varphi^{-1}[\lambda(x,y)\,\varphi(B_{fn}(x,y))] = \frac{1}{p(B_{fn}(x,y))^{\lambda(x,y)} + 1}$$
where $\lambda(x,y)$ is the adaptive gain of the Sobel edge detector at pixel $(x,y)$; the average of the Sobel edge detector is expressed as $\bar{\lambda}$.
Step 4: The image is scaled back as
$$\tilde{R} = 255 \times R'_{fn}, \quad \tilde{G} = 255 \times G'_{fn}, \quad \tilde{B} = 255 \times B'_{fn} \qquad (45)$$
Step 5: The final output RGB image is calculated as
$$R_{out} = \frac{\tilde{R} - \tilde{R}_{min}}{\tilde{R}_{max} - \tilde{R}_{min}}, \quad G_{out} = \frac{\tilde{G} - \tilde{G}_{min}}{\tilde{G}_{max} - \tilde{G}_{min}}, \quad B_{out} = \frac{\tilde{B} - \tilde{B}_{min}}{\tilde{B}_{max} - \tilde{B}_{min}} \qquad (46)$$
where $\tilde{R}_{min} = \min\{\tilde{R}\}$, $\tilde{R}_{max} = \max\{\tilde{R}\}$; $\tilde{G}_{min} = \min\{\tilde{G}\}$, $\tilde{G}_{max} = \max\{\tilde{G}\}$; $\tilde{B}_{min} = \min\{\tilde{B}\}$, $\tilde{B}_{max} = \max\{\tilde{B}\}$.
The three components of the CLAHE-enhanced fused image, $R_{out}$, $G_{out}$, and $B_{out}$, are combined into the final output RGB image $RGB_{out}$.
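The whole of Section 3.2 condenses into a short routine. The sketch below is a self-contained illustration, not the authors' code; the defaults gamma = 0.57, a = 1.0, and b = 0.5 are placeholders rather than the paper's tuned values.

```python
import numpy as np
from scipy.ndimage import convolve

def fuse_and_enhance(rgb1, rgb2, gamma=0.57, a=1.0, b=0.5):
    """Fuse two CLAHE results (Eq. (42)) and apply the GLR multiplication
    with the Sobel adaptive gain (Eqs. (43)-(46))."""
    fused = gamma * np.sqrt(rgb1.astype(float) ** 2 + rgb2.astype(float) ** 2)  # Eq. (42)
    fn = np.clip(fused / 255.0, 1e-6, 1.0 - 1e-6)    # Eq. (43), kept inside (0, 1)
    # Adaptive gain lambda(x, y) from the four-direction Sobel detector (Eqs. (24)-(26))
    masks = [np.array(m, float) for m in (
        [[-1, -2, -1], [0, 0, 0], [1, 2, 1]], [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],
        [[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], [[2, 1, 0], [1, 0, -1], [0, -1, -2]])]
    gray = fn.mean(axis=-1)
    g = np.sqrt(sum(convolve(gray, m) ** 2 for m in masks))
    g_n = np.log(g + 1 + 1e-6) / np.log(g.max() + 1e-6)
    lam = 2.0 ** (a * g_n) + b
    # Eq. (44): x ⊗ r = 1 / (p(x)^r + 1) with p(x) = (1 - x) / x, per channel
    p = (1.0 - fn) / fn
    out = 1.0 / (p ** lam[..., None] + 1.0)
    # Eqs. (45)-(46): stretch each channel back to [0, 255]
    lo, hi = out.min(axis=(0, 1)), out.max(axis=(0, 1))
    return (255.0 * (out - lo) / (hi - lo)).astype(np.uint8)
```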

4. Simulation Results and Discussions


To evaluate the proposed CLAHE fusion algorithm quantitatively, simulation experiments on
different underwater images were carried out.
4.1. Quantitative Metrics


To demonstrate the performance of the proposed CLAHE enhancement fusion algorithm, it is tested on different underwater sensing images. The proposed algorithm and other existing algorithms, namely He's DCP, MSR, MSRCR, RGB-CLAHE, YIQ-CLAHE, and HSI-CLAHE, are implemented in MATLAB (MATLAB 7.11, release 2010b) on a machine with 4 GB RAM and an Intel Core i3 processor. A series of quantitative metrics for a single enhanced color image, namely Mean, Contrast, Entropy, and the colorfulness metric (CM), is used to assess the enhancement algorithms. The Mean is the average brightness of the enhanced image; higher values of Contrast, Entropy, and CM imply better visual quality of the enhanced image. These four quantitative metrics are defined in Eqs. (47) to (50).
$$Mean = \frac{1}{3}(\mu_R + \mu_G + \mu_B) \qquad (47)$$
where $\mu_R$, $\mu_G$, and $\mu_B$ are the means of the three components R, G, and B of the improved image.
$$Contrast = \frac{1}{4}\sum_{k=1}^{4}\sum_{i=0}^{L-1}\sum_{j=0}^{L-1} (i - j)^2\, P(i, j; d, \theta_k) \qquad (48)$$
$$Entropy = -\frac{1}{4}\sum_{k=1}^{4}\sum_{i=0}^{L-1}\sum_{j=0}^{L-1} P(i, j; d, \theta_k) \log_{10} P(i, j; d, \theta_k) \qquad (49)$$
where $P(i, j; d, \theta_k)$ is the gray-level co-occurrence matrix (GLCM) of the image, $L$ is the number of gray levels ($L = 256$ for an 8-bit image), $d$ is the distance between two pixels ($d = 1$), and $\theta_k$ is the direction between the two pixels ($\theta_k = (k - 1) \times 45^\circ$, $k = 1, 2, 3, 4$).

CM is a no-reference image quality metric suggested by Susstrunk and Winkler [60]; it measures quality in terms of color enhancement. The metric is defined in the RGB color space as follows. Let the three components of a color image be denoted by R, G, and B, respectively [61], and consider $\alpha = R - G$ and $\beta = (R + G)/2 - B$; then the colorfulness of the image is defined as
$$CM = \sqrt{\sigma_\alpha^2 + \sigma_\beta^2} + 0.3 \times \sqrt{\mu_\alpha^2 + \mu_\beta^2} \qquad (50)$$
where $\sigma_\alpha$ and $\sigma_\beta$ are the standard deviations of $\alpha$ and $\beta$, respectively, and $\mu_\alpha$ and $\mu_\beta$ are their means.
The mean square error (MSE) and peak signal-to-noise ratio (PSNR) are the two error metrics used to compare the quality of the improved underwater images. The MSE represents the cumulative squared error between the improved image and the original image, whereas the PSNR represents a measure of the peak error. A good method produces lower MSE and higher PSNR values [5].
The MSE is calculated as
$$MSE = \frac{1}{H \times W}\sum_{x=1}^{H}\sum_{y=1}^{W} \left(I_1(x,y) - I_0(x,y)\right)^2 \qquad (51)$$
where $I_1$ and $I_0$ denote the improved image and the original image, respectively. The two images must have the same size, denoted $H \times W$.
The PSNR is computed from the MSE in Eq. (51) as
$$PSNR = 10\log_{10}\left(\frac{(L-1)^2}{MSE}\right) \ (dB) \qquad (52)$$
where $L$ is the number of gray levels ($L = 256$ for an 8-bit image). In general, an improved image is acceptable to human perception if its PSNR > 30 dB.
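The image-level metrics of Eqs. (50)-(52) are straightforward to compute; a minimal sketch (our naming) for 8-bit RGB inputs follows. The GLCM-based Contrast and Entropy of Eqs. (48)-(49) are omitted here, since they additionally require a co-occurrence-matrix routine.

```python
import numpy as np

def colorfulness(rgb):
    """Eq. (50): CM from the opponent channels alpha = R - G, beta = (R + G)/2 - B."""
    r, g, b = (rgb[..., c].astype(float) for c in range(3))
    alpha, beta = r - g, (r + g) / 2.0 - b
    return (np.sqrt(alpha.std() ** 2 + beta.std() ** 2)
            + 0.3 * np.sqrt(alpha.mean() ** 2 + beta.mean() ** 2))

def mse_psnr(improved, original, L=256):
    """Eqs. (51)-(52) for two same-sized images."""
    mse = np.mean((improved.astype(float) - original.astype(float)) ** 2)
    psnr = 10.0 * np.log10((L - 1) ** 2 / mse)
    return mse, psnr
```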

4.2. Experimental Original Images


Three original underwater images were chosen for this enhancement algorithm. They are shown in Figure 3, and their characteristics (image size, mean, contrast, entropy, and CM) are presented in Table 1. The three original underwater images show a landscape wall, power remains, and coral branches. Underwater images normally exhibit a high percentage of blue, followed by green and red; therefore, most underwater images appear bluish or greenish [62], given that blue and green are the dominant color channels forming the overall image color. Red is the inferior color channel, and its percentage is generally lower than those of the other two channels. The image in Figure 3(a) appears greenish, while those in Figures 3(b) and 3(c) appear bluish. The characteristics of the three original images can be described as follows [62,63]:
(a) The landscape wall image
- The landscape wall is a typical underwater construction; this image was taken at the archaeological site of Baia (Naples, Italy) at a depth of about 5 m. Two fish are swimming around the wall, but they can hardly be distinguished from the background because both are of almost the same color. The color cast is unusually serious, and the image appears greenish.
- The brightness is good. The contrast and entropy are both the highest among the three images, but almost all details are submerged in the greenish cast.
(b) The power remains image
- The power remains lie on the ocean bottom, and a diver is trying to enter the cockpit. The mean of the image is very low, so the image looks rather dark. The wheel hubs of the power can hardly be recognized in the degraded image.
- The image provides less detail information than the landscape wall image, since its contrast and entropy are both in the middle of the three images.
(c) The coral branches image
- The coral is a typical ocean organism. The image is seriously degraded; the coral branches can hardly be recognized from the background, because both have almost the same brightness and color.
- The contrast and entropy are both the lowest of the three images. The color cast is unusually serious, and the whole image is in acute dimness.

Figure 3. The original underwater images: (a) the landscape wall image; (b) the power remains image; (c) the coral branches image.

Table 1. The 5 quantitative metrics of the original images.

Image | Size | Mean | Contrast | Entropy | CM
Landscape wall | 624*413 | 120.30 | 842.26 | 7.64 | 37.55
Power remains | 624*413 | 64.84 | 250.60 | 6.73 | 53.00
Coral branches | 692*460 | 107.73 | 5.13 | 5.96 | 24.33

4.3. Simulation Results


4.3.1 Enhancement results of landscape wall image
The different enhancement results for the original landscape wall image are shown in Figure 4. The enhancement algorithms include DCP, MSR, MSRCR, RGB-CLAHE, YIQ-CLAHE, and HSI-CLAHE. The six quantitative metrics of the enhancement results in Figure 4 (mean, contrast, entropy, CM, MSE, and PSNR) are shown in Table 2. Because the contrast and entropy of the original image are both very high, traditional enhancement algorithms such as DCP, MSR, and MSRCR may have a rather weak effect on this underwater image.
The results of the proposed enhancement algorithm on the original landscape wall image with different CL and BS values are shown in Figure 5, and the corresponding six quantitative metrics are shown in Table 3. The relationships of Contrast, Entropy, and CM versus the average of the Sobel detector $\bar{\lambda}$ for the fused CLAHE landscape wall image (BS = 8*8, CL = 0.006, $\gamma = 0.57$) are shown in Figure 6. The relationships of Contrast, Entropy, and CM versus the fusion coefficient $\gamma$ for the fused CLAHE landscape wall image (BS = 8*8, CL = 0.006, $\bar{\lambda} = 1.1420$) are shown in Figure 7.

Figure 4. The different enhancement results for the original landscape wall image: (a) DCP; (b) MSR; (c) MSRCR; (d) RGB-CLAHE; (e) YIQ-CLAHE; (f) HSI-CLAHE.

Table 2. The 6 quantitative metrics of the enhancement results for the landscape wall image in Figure 4 (CLAHE: BS = 8*8, CL = 0.005).

Method | Mean | Contrast | Entropy | CM | MSE | PSNR
DCP | 115.90 | 885.18 | 7.58 | 38.82 | 9.17 | 41.48
MSR | 181.49 | 609.33 | 6.39 | 36.83 | 5742.39 | 10.77
MSRCR | 129.13 | 930.63 | 7.01 | 36.27 | 1287.02 | 20.05
RGB-CLAHE | 118.20 | 1250.99 | 7.62 | 38.75 | 412.36 | 23.04
YIQ-CLAHE | 117.95 | 1581.14 | 7.70 | 41.11 | 868.59 | 20.06
HSI-CLAHE | 119.37 | 1654.87 | 7.66 | 42.67 | 738.63 | 20.11

Figure 5. The proposed algorithm results for the original landscape wall image ($\gamma = 0.57$, $\bar{\lambda} = 1.1420$): (a) fused image (BS = 8*8, CL = 0.004); (b) fused image (BS = 8*8, CL = 0.006); (c) fused image (BS = 8*8, CL = 0.008); (d) fused image (CL = 0.006, BS = 5*5); (e) fused image (CL = 0.006, BS = 10*10); (f) fused image (CL = 0.006, BS = 20*20).

Table 3. The 6 quantitative metrics of the proposed algorithm for the landscape wall image in Figure 5.

Image | Mean | Contrast | Entropy | CM | MSE | PSNR
(a) Fused image | 124.80 | 1949.72 | 7.82 | 48.23 | 1029.12 | 18.73
(b) Fused image | 127.15 | 2232.36 | 7.83 | 51.28 | 1373.88 | 17.56
(c) Fused image | 127.86 | 2311.85 | 7.82 | 51.68 | 1510.23 | 17.16
(d) Fused image | 127.25 | 2085.20 | 7.83 | 52.02 | 1283.02 | 18.02
(e) Fused image | 126.62 | 2332.77 | 7.83 | 50.55 | 1414.06 | 17.30
(f) Fused image | 125.63 | 2495.89 | 7.82 | 47.67 | 1396.77 | 17.20



Figure 6. The relationships of Contrast, Entropy and CM vs. λ for the fused CLAHE landscape wall
image (BS=8*8, CL=0.006, γ = 0.57): (a) Contrast vs. λ; (b) Entropy vs. λ; (c) CM vs. λ.



Figure 7. The relationships of Contrast, Entropy and CM vs. γ for the fused CLAHE landscape wall
image (BS=8*8, CL=0.006, λ = 1.1420): (a) Contrast vs. γ; (b) Entropy vs. γ; (c) CM vs. γ.

The data-cursor ordinates of Contrast, Entropy, and CM in Figure 6 are the same as those in
Figure 7; they correspond to the quantitative metrics of Table 3(b). γ = 0.57 and λ = 1.1420
appear to be the best choice, giving the fused enhancement image the largest Contrast,
Entropy, and CM together with the smallest MSE.
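
The roles of λ and γ in Figures 6 and 7 can be illustrated with the sketch below. The paper's actual fusion uses a bounded general logarithm ratio (GLR) operation whose closed form is not restated in this section, so the code only demonstrates the self-adaptive weighting idea: per-pixel weights derived from 4-direction Sobel edge maps, mixed through the fusion coefficient γ, with λ read as the mean of the edge response. All names and the weighting formula are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve

# 4-direction Sobel kernels: 0, 45, 90 and 135 degrees.
K0   = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)
K90  = K0.T
K45  = np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], dtype=np.float64)
K135 = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], dtype=np.float64)

def sobel4(gray):
    """Mean magnitude over the four directions; its global average is
    read here as the paper's lambda (an assumption)."""
    mags = [np.abs(convolve(gray, k)) for k in (K0, K45, K90, K135)]
    edge = np.mean(mags, axis=0)
    return edge, float(edge.mean())

def fuse(yiq_rgb, hsi_rgb, gamma=0.57, eps=1e-12):
    """Self-adaptive weighted fusion of the two CLAHE-enhanced RGB images.

    Illustration only: per-pixel weights come from the Sobel edge maps
    and are mixed through gamma; the bounded GLR operation itself is
    not reproduced here.
    """
    e1, lam1 = sobel4(yiq_rgb.mean(axis=2))
    e2, lam2 = sobel4(hsi_rgb.mean(axis=2))
    w = (gamma * e1) / (gamma * e1 + (1.0 - gamma) * e2 + eps)
    fused = w[..., None] * yiq_rgb + (1.0 - w[..., None]) * hsi_rgb
    return np.clip(fused, 0.0, 1.0)
```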

4.3.2 Enhancement results of power remains image


The different enhancement results for the original power remains image are shown in Figure 8.
The enhancement algorithms include DCP, MSR, MSRCR, RGB-CLAHE, YIQ-CLAHE, and HSI-CLAHE.
The 6 quantitative metrics of the enhancement results in Figure 8 are shown in Table 4; the
metrics are mean, contrast, entropy, CM, MSE, and PSNR.
The results of the proposed enhancement algorithm on the original power remains image with
different CL and BS are shown in Figure 9, and the corresponding 6 quantitative metrics are
listed in Table 5. The relationships of Contrast, Entropy and CM vs. the average of the Sobel
detector λ for the fused CLAHE power remains image (BS=8*8, CL=0.008, γ = 0.78) are shown in
Figure 10, and vs. the fusion coefficient γ (BS=8*8, CL=0.008, λ = 1.0625) in Figure 11.



Figure 8. The different enhancement results for the original power remains image: (a) DCP; (b)
MSR; (c) MSRCR; (d) RGB-CLAHE; (e) YIQ-CLAHE; (f) HSI-CLAHE.

Table 4. The 6 quantitative metrics of enhancement results for power remains image in Figure 8
(CLAHE: BS=8*8, CL=0.007).

Algorithm    Mean     Contrast   Entropy   CM      MSE       PSNR (dB)
DCP          64.86    251.25     6.73      53.04   0.03      67.70
MSR          148.72   865.95     6.88      33.13   8995.05   8.68
MSRCR        128.20   1015.25    7.33      49.75   7000.91   11.30
RGB-CLAHE    81.46    548.26     6.59      35.65   856.92    19.13
YIQ-CLAHE    103.97   803.57     7.38      38.86   3219.56   13.76
HSI-CLAHE    96.99    913.11     7.01      41.27   2552.30   14.32



Figure 9. The proposed algorithm results for the original power remains image (γ = 0.78,
λ = 1.0625): (a) Fused image (BS=8*8, CL=0.006); (b) Fused image (BS=8*8, CL=0.008); (c) Fused
image (BS=8*8, CL=0.010); (d) Fused image (CL=0.008, BS=6*6); (e) Fused image (CL=0.008,
BS=12*12); (f) Fused image (CL=0.008, BS=24*24).

Table 5. The 6 quantitative metrics of the proposed algorithm for power remains image in Figure 9.

Image              Mean     Contrast   Entropy   CM      MSE       PSNR (dB)
(a) Fusion image   120.98   1067.62    7.40      51.02   4771.91   11.44
(b) Fusion image   129.37   1337.58    7.49      54.94   6375.61   10.23
(c) Fusion image   135.10   1532.84    7.53      57.69   7649.06   9.49
(d) Fusion image   131.97   1127.37    7.45      57.18   6562.71   10.10
(e) Fusion image   128.62   1553.56    7.49      53.14   6404.10   10.21
(f) Fusion image   128.80   1923.10    7.50      50.37   6588.51   10.09



Figure 10. The relationships of Contrast, Entropy and CM vs. λ for the fused CLAHE power remains
image (BS=8*8, CL=0.008, γ = 0.78): (a) Contrast vs. λ; (b) Entropy vs. λ; (c) CM vs. λ.



Figure 11. The relationships of Contrast, Entropy and CM vs. γ for the fused CLAHE power remains
image (BS=8*8, CL=0.008, λ = 1.0625): (a) Contrast vs. γ; (b) Entropy vs. γ; (c) CM vs. γ.

The data-cursor ordinates of Contrast, Entropy, and CM in Figure 10 are the same as those in
Figure 11; they correspond to the quantitative metrics of Table 5(b). γ = 0.78 and λ = 1.0625
appear to be the best choice, giving the fused enhancement image the largest Contrast,
Entropy, and CM together with the smallest MSE.

4.3.3 Enhancement results of coral branches image


The different enhancement results for the original coral branches image are shown in Figure 12.
The enhancement algorithms include DCP, MSR, MSRCR, RGB-CLAHE, YIQ-CLAHE, and HSI-CLAHE.
The 6 quantitative metrics of the enhancement results in Figure 12 are shown in Table 6; the
metrics are mean, contrast, entropy, CM, MSE, and PSNR.
The results of the proposed enhancement algorithm on the original coral branches image with
different CL and BS are shown in Figure 13, and the corresponding 6 quantitative metrics are
listed in Table 7. The relationships of Contrast, Entropy and CM vs. the average of the Sobel
detector λ for the fused CLAHE coral branches image (BS=8*8, CL=0.012, γ = 0.71) are shown in
Figure 14, and vs. the fusion coefficient γ (BS=8*8, CL=0.012, λ = 1.0073) in Figure 15.



Figure 12. The different enhancement results for the original coral branches image: (a) DCP; (b)
MSR; (c) MSRCR; (d) RGB-CLAHE; (e) YIQ-CLAHE; (f) HSI-CLAHE.

Table 6. The 6 quantitative metrics of enhancement results for coral branches image in Figure 12
(CLAHE: BS=8*8, CL=0.011).

Algorithm    Mean     Contrast   Entropy   CM      MSE       PSNR (dB)
DCP          109.83   7.46       6.11      25.30   37.03     43.39
MSR          115.22   40.21      6.78      56.08   7343.69   31.92
MSRCR        127.39   119.44     7.54      45.29   4065.14   17.71
RGB-CLAHE    108.70   29.36      6.74      21.63   901.84    32.61
YIQ-CLAHE    107.50   104.69     7.20      28.65   1299.24   21.29
HSI-CLAHE    108.87   107.83     7.37      32.12   1252.03   20.41



Figure 13. The proposed algorithm results for the original coral branches image (γ = 0.71,
λ = 1.0073): (a) Fused image (BS=8*8, CL=0.010); (b) Fused image (BS=8*8, CL=0.012); (c) Fused
image (BS=8*8, CL=0.014); (d) Fused image (CL=0.012, BS=5*5); (e) Fused image (CL=0.012,
BS=10*10); (f) Fused image (CL=0.012, BS=20*20).

Table 7. The 6 quantitative metrics of the proposed algorithm for coral branches image in Figure 13.

Image              Mean     Contrast   Entropy   CM      MSE       PSNR (dB)
(a) Fusion image   125.54   133.15     7.43      41.55   1865.39   16.69
(b) Fusion image   128.76   170.89     7.53      45.25   2508.40   15.60
(c) Fusion image   130.94   200.17     7.59      47.57   2990.98   14.97
(d) Fusion image   127.94   136.09     7.56      47.82   2493.84   15.30
(e) Fusion image   126.51   172.90     7.47      43.81   2096.31   16.06
(f) Fusion image   132.49   219.89     7.39      41.35   2507.79   15.87



Figure 14. The relationships of Contrast, Entropy and CM vs. λ for the fused CLAHE coral branches
image (BS=8*8, CL=0.012, γ = 0.71): (a) Contrast vs. λ; (b) Entropy vs. λ; (c) CM vs. λ.



Figure 15. The relationships of Contrast, Entropy and CM vs. γ for the fused CLAHE coral branches
image (BS=8*8, CL=0.012, λ = 1.0073): (a) Contrast vs. γ; (b) Entropy vs. γ; (c) CM vs. γ.

The data-cursor ordinates of Contrast, Entropy, and CM in Figure 14 are the same as those in
Figure 15; they correspond to the quantitative metrics of Table 7(b). γ = 0.71 and λ = 1.0073
appear to be the best choice, giving the fused enhancement image the largest Contrast,
Entropy, and CM together with the smallest MSE.
4.4. Discussion
The enhancement effect of the DCP algorithm on underwater images is very limited, especially
in terms of contrast improvement. The MSR algorithm can improve contrast only for an original
image with low contrast, and it can introduce serious color cast. The MSRCR algorithm can
improve contrast and entropy and restore color, but pushing it much further is difficult. Both
MSR and MSRCR suffer from noise amplification in local regions, which can produce serious
color mottling. The YIQ-CLAHE and HSI-CLAHE algorithms produce better enhancement than the
three preceding algorithms in terms of contrast and entropy. The RGB-CLAHE algorithm produces
a higher PSNR, and thus better fidelity for human perception, than the others except DCP.
The enhanced image becomes brighter, and its contrast, entropy, and CM all increase as CL is
increased, because the input image has very low intensity and a larger CL makes its histogram
flatter. As BS grows, the dynamic range becomes larger and the contrast increases further, but
the entropy and CM decrease. Overall, the image quality depends mainly on CL rather than BS.
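
These trends can be checked with a parameter sweep like the following, reusing the illustrative helpers sketched in Section 4.3.1 (clahe_in_two_spaces, fuse, quantitative_metrics); rgb is assumed to be a float RGB image in [0, 1].

```python
import numpy as np

# Sweep the clip limit at a fixed block size and watch the metrics grow,
# mirroring the observation that image quality depends mainly on CL.
for cl in (0.004, 0.006, 0.008, 0.010, 0.012, 0.014):
    yiq_rgb, hsi_rgb = clahe_in_two_spaces(rgb, clip_limit=cl, block=(8, 8))
    fused = fuse(yiq_rgb, hsi_rgb, gamma=0.57)
    m, c, e, cm = quantitative_metrics((fused * 255).astype(np.uint8))
    print(f"CL={cl:.3f}  mean={m:6.1f}  contrast={c:8.1f}  entropy={e:4.2f}  CM={cm:5.1f}")
```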
The contrast, entropy, and CM of the fused enhancement image grow as the average Sobel edge
detector λ increases, but the MSE may pass through a valley (local minimum). The contrast and
CM grow as the fusion coefficient γ increases, but the entropy may pass through a peak (local
maximum). The average Sobel edge detector and the fusion coefficient must therefore be chosen
carefully so that the fused image attains the largest Contrast, Entropy, and CM together with
the smallest MSE.
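
One hedged way to automate this trade-off is a small grid search over γ that weighs the three quality metrics against MSE, as sketched below; the paper itself reads the best values from the metric curves (Figures 7, 11, and 15), and the scoring weights here are purely illustrative. yiq_rgb, hsi_rgb, and the 8-bit original image original8 are assumed to come from the earlier sketches.

```python
import numpy as np

# Grid-search the fusion coefficient; the score normalizes each metric
# to a roughly comparable scale and penalizes MSE.  The weights are ad hoc.
best_gamma, best_score = None, -np.inf
for gamma in np.linspace(0.1, 0.9, 33):
    fused8 = (fuse(yiq_rgb, hsi_rgb, gamma=gamma) * 255).astype(np.uint8)
    _, contrast, entropy, cm = quantitative_metrics(fused8)
    mse, _ = mse_psnr(original8, fused8)
    score = contrast / 2000 + entropy / 8 + cm / 50 - mse / 2000
    if score > best_score:
        best_gamma, best_score = gamma, score
print(f"chosen fusion coefficient: {best_gamma:.2f}")
```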
In summary, the CLAHE algorithm has two key parameters, BS and CL, and the fusion enhancement
algorithm has another two, the average Sobel edge detector λ and the fusion coefficient γ.
These four parameters affect the quality of the final CLAHE enhancement fusion image and
should be chosen within a reasonable range. The quantitative metrics are integrated factors
for assessing the enhanced image, and they should be considered as a whole rather than one or
two in isolation.

5. Conclusions
Improving contrast and restoring color is an important but difficult task in underwater image
applications. Underwater images lose contrast and suffer degradation because of poor
visibility conditions and effects such as light absorption, reflection, bending, and
scattering, which lead to dimness and distortion. Existing image enhancement algorithms may
not improve contrast and restore color efficiently for underwater images. Thus, this paper
proposes a CLAHE enhancement fusion algorithm for underwater images. The proposed algorithm
consists of four steps: conversion from RGB to the YIQ and HSI color spaces; CLAHE enhancement
in the YIQ and HSI spaces; conversion from YIQ and HSI back to RGB; and fusion of the two
improved RGB images using the Euclidean norm and the GLR operation. Experimental results
obtained by processing various underwater images with different mean, contrast, and entropy
show that the proposed algorithm effectively improves contrast and restores color,
outperforming existing state-of-the-art image enhancement algorithms in both visual
performance and quantitative evaluation.
The main contributions of this study are two color space transformations for CLAHE
enhancement (RGB-YIQ and RGB-HSI), an improved Euclidean norm to fuse the two individual color
space CLAHE images, and an improved 4-direction Sobel edge detector combined with the GLR
operation. Four key parameters should be chosen to achieve high contrast and entropy in the
final CLAHE enhancement fusion image: BS and CL in the CLAHE algorithm, and the average Sobel
edge detector and fusion coefficient in the fusion enhancement algorithm. The effectiveness of
the enhancement is borne out by the objective quality metrics. For the underwater image with
high contrast and entropy, the contrast and entropy were improved by at least 131.25% and
2.36%; for the image with low contrast and entropy, the improvements reached 2495.52% and
24.66%, respectively (e.g., the contrast of the coral branches image rose from 5.13 in Table 1
to 133.15 in Table 7). These results indicate that the proposed algorithm provides underwater
image enhancement of the highest quality among the compared methods.
The proposed algorithm is applicable to degraded underwater images and to other remote-sensing
images that require visual enhancement of contrast and entropy. Its main limitations are that
it is sometimes more time-consuming than existing algorithms and that its PSNR is below 30 dB,
which is not fully acceptable for human perception. Therefore, our future work will focus on
accelerating the CLAHE enhancement fusion algorithm and on optimizing it for underwater images
captured under uneven illumination.
Acknowledgments: This work is jointly supported by the National Natural Science Foundation of China (No.
61573128, No. 61671202, No. 61475027 and No. 61273170); the National Key Research Foundation of China (No.
2016YFC0401606); the Fundamental Research Funds for the Central Universities of China (No. 2015B25214);
the Colleges and Universities in Jiangsu Province Plans to Graduate Research and Innovation (No.
CXZZ14_0140); and the Natural Science Foundation of Changzhou Institute of Technology (No. YN1514). The
authors would like to thank those who provided the valuable original underwater images.

Author Contributions: Xinnan Fan is the corresponding author who theoretically proposed the whole
algorithm, involved in writing and revising the manuscript; Jinxiang Ma implemented and improved the
algorithm, supervised the experimental work and wrote the whole paper; Simon X. Yang and Xuewu Zhang
investigated the literature, prepared the experimental setup, tested various methods on underwater images
and contributed to writing the paper; Xifang Zhu helped to implement the experiments and assisted in data
processing.

Conflicts of Interest: The authors declare no conflict of interest.

