Dip Unit2

The document discusses image enhancement techniques. It covers enhancing images in the spatial and frequency domains. Some key spatial domain techniques include histogram equalization, which spreads out the frequencies in an image to improve contrast, and arithmetic/logical operations between images like subtraction and averaging. Frequency domain techniques modify an image's Fourier transform. Color image processing and pseudocolor image processing are also covered. The goal of image enhancement is to process an image to make it more suitable for a specific application or visual interpretation.

Unit 2: Image Enhancement

1. Enhancement in Spatial Domain:

 • Basic gray level transformations
 • Histogram processing and equalization
 • Arithmetic and logical operations between images
 • Basics of spatial filtering; smoothing and sharpening spatial filters
2. Image Enhancement in Frequency Domain:

 • Smoothing and sharpening frequency domain filters

3. Fundamentals of color image processing:

 • Color models: RGB, CMY, YIQ, HSI

4. Pseudo Color Image Processing:

 • Intensity slicing
 • Gray level to color transformation
 • Basics of full color image processing
Image Enhancement
• Goal: Process an image so that the result
is more suitable than the original image
for a specific application
• Visual interpretation
• Problem oriented
Definition

• Image enhancement refers to accentuation or sharpening of image features such as edges, boundaries, or contrast to make a graphic display more useful for display and analysis.
• Image enhancement does not increase the inherent information content of the original image, but it increases the dynamic range of the chosen features so that they can be detected more easily.
Image enhancement example
Two categories
• There is no general theory of image
enhancement
• Spatial domain
– Direct manipulation of pixels
• Point processing
• Neighborhood processing

• Frequency domain
– Modify the Fourier transform of an image
Outline: spatial domain operations
• Background
• Gray level transformations
• Arithmetic/logic operations
Background
• Spatial domain processing
– g(x,y) = T[ f(x,y) ]
– f(x,y): input image
– g(x,y): output image
– T: an operator on f, defined over some neighborhood of (x,y)
Background (cont.)
* T applies to each pixel in the input image
* T operates over a neighborhood of (x,y)
Point processing
• 1x1 neighborhood
– Gray level transformation, or point processing
– s = T(r)

Examples: contrast stretching, thresholding
Neighborhood processing
• A larger predefined neighborhood
– Ex. 3x3 neighborhood
– mask, filters, kernels, templates, windows
– Mask processing or filtering
Some Basic Gray Level Transformations

• Image negatives (complement)


• Log transformation
• Power-law transform
• Piece-wise linear transform
• Gray level slicing
• Bit plane slicing
Some gray level transformations
Log transformations
• s = c·log(1 + r)
• Compresses the dynamic range of images with large variations in pixel values
Example: Log transformations
• log(fft2(I)): log of the Fourier transform, used to bring out spectrum detail hidden by its large dynamic range
Power-law transformations

• s = c·r^γ   (γ: gamma)
• γ < 1 and γ > 1 give opposite effects (see the examples below)
– Displays, printers, and scanners follow a power law
– Hence "gamma correction"
Example: Gamma correction

• A CRT's intensity-to-voltage response follows a power law, typically with γ between 1.8 and 2.5
• Pre-processing the input with the inverse exponent (e.g. s = r^(1/2.5)) cancels the display's r^2.5 response
Power-law with γ < 1:

• Expands dark gray levels (example curves: γ = 0.6, 0.4, 0.3)

Power-law with γ > 1:

• Expands light gray levels (example curves: γ = 3, 4, 5)
Piece-wise linear transformations

• Advantage: the piecewise function (defined by control points) can be arbitrarily complex
• Example: contrast stretching
Intensity or Gray-level slicing
• Highlighting a specific range of gray levels
Intensity or Gray-level slicing (cont…)
• Without background:
  s = s1 if A ≤ r ≤ B; 0 otherwise
• With background:
  s = s1 if A ≤ r ≤ B; r otherwise
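Both cases can be expressed in one small NumPy helper (a sketch; the function name and defaults are mine):

```python
import numpy as np

def gray_level_slice(image, a, b, s1=255, keep_background=True):
    """Highlight the range [a, b] with value s1; pixels outside the range
    either keep their value (with background) or become 0 (without)."""
    image = np.asarray(image)
    in_range = (image >= a) & (image <= b)
    background = image if keep_background else np.zeros_like(image)
    return np.where(in_range, s1, background)
```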
Bit-plane slicing
* Highlight the contribution of specific bits: an 8-bit image (gray levels 0–255) decomposes into eight binary bit-planes
Bit-plane slicing: example
Bit-planes 7 (most significant) down to 0 (least significant). The higher planes carry most of the visually significant data, which is useful for image compression.
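Extracting a bit plane is a single shift-and-mask operation (a NumPy sketch; the helper name is mine):

```python
import numpy as np

def bit_plane(image, plane):
    """Extract bit plane `plane` (0 = least significant, 7 = most
    significant) of an 8-bit image as a binary array."""
    return (np.asarray(image, dtype=np.uint8) >> plane) & 1
```

For example, the pixel value 193 = 11000001 in binary has bits set in planes 7, 6, and 0 only.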
Arithmetic/logic operations

• Logic operations
• Image subtraction
• Image averaging
Logic operations
• Logic operations: pixel-wise AND, OR, NOT
• The pixel gray level values are treated as strings of binary digits
  Ex. 193 => 11000001
• Use a binary mask to take out the region of interest (ROI) from an image
Logic operations: example
(Figures: images A and B, A AND B, A OR B.)
Image subtraction
• Difference image: g(x,y) = f(x,y) - h(x,y)
• Example: f is the original 8-bit image and h keeps only its 4 most significant bits; the (scaled) difference image shows what was lost
Image subtraction: scaling the difference image
• g(x,y) = f(x,y) - h(x,y)
– If f and h are 8-bit, then g(x,y) ∈ [-255, 255]
1. Add 255, then divide by 2: simple, but the result won't cover the full [0, 255] range
2. Subtract min(g), then multiply by 255/max(g): uses the full range
Be careful of the dynamic range after the image is processed.
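The second scaling method can be sketched as follows (a NumPy sketch; the function name is mine):

```python
import numpy as np

def scaled_difference(f, h):
    """g = f - h, rescaled by method 2: subtract min(g), stretch by 255/max."""
    g = np.asarray(f, dtype=np.int16) - np.asarray(h, dtype=np.int16)
    g = g - g.min()                 # now g >= 0
    if g.max() > 0:
        g = g * 255.0 / g.max()     # stretch to the full [0, 255] range
    return np.round(g).astype(np.uint8)
```

Note the signed intermediate type: subtracting two uint8 images directly would wrap around instead of going negative.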
Digital Subtraction: Angiography

• Medical imaging technique used to see


blood vessels:
• Take one X-ray
• Inject a contrast agent
• Take another X-ray (and hope the patient
hasn’t moved, or even breathed too much)
• Subtract the first (background) from the
second (background + vessels)
Image subtraction example: mask
mode radiography
• Inject contrast medium into bloodstream
(Figures: original image of the head and the difference image.)
Image averaging
• Noisy image: g(x,y) = f(x,y) + η(x,y), where f is the clear (original) image and η is the noise
• Suppose η(x,y) is uncorrelated and has zero mean
Image averaging: noise reduction
 Averaging over K noisy images g_i(x,y):

  ḡ(x,y) = (1/K) Σ_{i=1}^{K} g_i(x,y)

  E[ḡ(x,y)] = f(x,y)

  σ²_ḡ(x,y) = (1/K) σ²_η(x,y)  →  0 as K → ∞

(Figures: original image with Gaussian noise, and the results of averaging K = 8, 16, 64, and 128 frames.)
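The 1/K drop in noise variance is easy to verify numerically (a simulation sketch with zero-mean Gaussian noise; the image size, σ, and K values are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)      # clean image
K = 64
# K noisy observations g_i = f + eta_i, eta zero-mean Gaussian (sigma = 20)
noisy = [f + rng.normal(0.0, 20.0, f.shape) for _ in range(K)]
g_bar = np.mean(noisy, axis=0)    # averaged image

single_var = np.var(noisy[0] - f)   # about sigma^2 = 400
avg_var = np.var(g_bar - f)         # about sigma^2 / K = 6.25
```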
Histogram
The (intensity or brightness) histogram shows how many times a particular gray level (intensity) appears in an image.

For example, 0 = black, 255 = white.

Example 4x6 image (gray levels 0–6):

0 1 1 2 4 6
2 1 0 0 2 4
5 2 0 0 4 2
1 1 2 4 1 0

Its histogram counts the occurrences of each level 0, 1, 2, 3, 4, 5, 6.
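Counting the levels of the example image gives its histogram directly (a sketch using NumPy's bincount):

```python
import numpy as np

# The 4x6 example image with gray levels 0..6
img = np.array([
    [0, 1, 1, 2, 4, 6],
    [2, 1, 0, 0, 2, 4],
    [5, 2, 0, 0, 4, 2],
    [1, 1, 2, 4, 1, 0],
])
hist = np.bincount(img.ravel(), minlength=7)  # counts for levels 0..6
```

The counts necessarily sum to the number of pixels (24 here).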
Histogram Processing
• Histogram counts the number of
occurrences of each gray-level value
• Plot of frequency of occurrence as a
function of pixel value
• It is the equivalent of a probability density function (pdf)
Image Histograms
The histogram of an image shows us the
distribution of grey levels in the image
Massively useful in image processing, especially in
segmentation
(Plot: frequencies vs. grey levels.)
Histogram Examples (cont…)
Dark Image
Histogram Examples (cont…)
Bright Image
Histogram Examples (cont…)
High Contrast Image
Histogram Examples (cont…)
Low Contrast Image
Histogram
Low Contrast Image
An image has low contrast when the complete range of possible
values is not used. Inspection of the histogram shows this
lack of contrast.
Histogram Equalisation
Spreading out the frequencies in an
image (or equalising the image) is a
simple way to improve dark or washed
out images
Basic idea: find a map f(x) such that
the histogram of the modified
(equalized) image is flat (uniform).
The formula for histogram equalisation is:

  s_k = T(r_k) = Σ_{j=1}^{k} p_r(r_j) = Σ_{j=1}^{k} n_j / n

where
 • r_k: input intensity
 • s_k: processed intensity
 • k: the intensity range (e.g. 0.0 – 1.0)
 • n_j: the frequency of intensity j
 • n: the sum of all frequencies, i.e. the total number of pixels
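The formula maps directly to a lookup table built from the cumulative histogram (a NumPy sketch for 8-bit images; the function name is mine):

```python
import numpy as np

def equalize(image, levels=256):
    """Histogram equalisation via the cumulative sum of the normalised
    histogram: s_k = round((L - 1) * sum_{j<=k} n_j / n)."""
    image = np.asarray(image, dtype=np.uint8)
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / image.size    # cumulative p_r(r_j)
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[image]
```

A low-contrast image whose values cluster in a narrow band gets spread across the full range.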
Equalisation Examples
(Figures: example 1 and example 2.)
Mean, Variance, Standard Deviation
How and why does it work?
How to Adjust the Image?
• Histogram equalization
– Key motivation: the cumulative distribution function (cdf) of a random variable approximates a uniform distribution

Suppose h(t) is the histogram (pdf); then s(x) = Σ_{t=0}^{x} h(t)

(EE465: Introduction to Digital Image Processing)
Histogram Equalization

  y = s = Σ_{t=0}^{x} h(t)      (cumulative probability function)

  y = L · Σ_{t=0}^{x} h(t)      (uniform quantization to [0, L])

Note: Σ_t h(t) = 1

http://en.wikipedia.org/wiki/Inverse_transform_sampling
Histogram Specification/Matching
Given a target image B, how to modify a given image A such that the histogram of the modified A matches that of target image B?

Apply S⁻¹·T, where T equalizes histogram1 (of A) and S equalizes histogram2 (of B).

(EE565 Advanced Image Processing, Copyright Xin Li 2008)
Spatial Filtering
• Spatial operations are performed on the neighborhood of input pixels using SPATIAL MASKS or SPATIAL FILTERS.
• The mask that is applied to a subimage of the image is also called a KERNEL, TEMPLATE, or WINDOW.
• The values in the filter subimage are referred to as COEFFICIENTS rather than PIXELS.
The Spatial Filtering Process

A 3x3 neighbourhood of the image f(x, y),

  a b c
  d e f
  g h i

is combined with a 3x3 filter,

  r s t
  u v w
  x y z

to give the processed value of the centre pixel e:

  e_processed = v*e + r*a + s*b + t*c + u*d + w*f + x*g + y*h + z*i

The above is repeated for every pixel in the original image to generate the filtered image.
Spatial Filtering: Equation Form

  g(x, y) = Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s, t) f(x + s, y + t)

• Filtering can be given in equation form as shown above
• Notations are based on the neighbourhood and filter shown previously
Smoothing Spatial Filters
(Averaging Filters or Low Pass Filters)

 The output of the linear spatial filter is simply the average of the pixels in a neighbourhood around a central value:

  1/9 1/9 1/9
  1/9 1/9 1/9     (simple averaging filter)
  1/9 1/9 1/9

 As this process results in an image with reduced sharp transitions in gray levels, it is especially useful in removing noise from images.

 Also useful for highlighting gross detail.
Smoothing Spatial Filtering

Applying the 1/9 averaging filter to the 3x3 neighbourhood

  104 100 108
   99 106  98
   95  90  85

gives

  e = 1/9·(104 + 100 + 108 + 99 + 106 + 98 + 95 + 90 + 85) = 98.3333

The above is repeated for every pixel in the original image to generate the smoothed image.
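The worked example can be checked numerically in plain Python:

```python
# The 3x3 neighbourhood from the example above; its average should be
# 885 / 9 = 98.3333...
neighbourhood = [
    [104, 100, 108],
    [ 99, 106,  98],
    [ 95,  90,  85],
]
total = sum(sum(row) for row in neighbourhood)
average = total / 9
```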
Image Smoothing Example
The image at the top left
is an original image of
size 500*500 pixels
The subsequent images
show the image after
filtering with an averaging
filter of increasing sizes
 3, 5, 9, 15 and 35
Notice how detail begins
to disappear
Weighted Smoothing Filters
More effective smoothing filters can be generated by giving different pixels in the neighbourhood different weights in the averaging function:

  1/16 2/16 1/16
  2/16 4/16 2/16     (weighted averaging filter)
  1/16 2/16 1/16

 Pixels closer to the central pixel are more important
 Often referred to as weighted averaging
Another Smoothing Example
By smoothing the original image we get rid of
lots of the finer detail which leaves only the
gross features for thresholding

Original Image Smoothed Image Thresholded Image


Averaging Filter Vs. Median Filter
Example

(Figures: original image with noise; image after averaging filter; image after median filter.)

Filtering is often used to remove noise from


images
Sometimes a median filter works better than
an averaging filter
Simple Neighbourhood Operations
Example
x
123 127 128 119 115 130

140 145 148 153 167 172

133 154 183 192 194 191

194 199 207 210 198 195

164 170 175 162 173 151

y
Strange Things Happen At The
Edges!
At the edges of an image we are missing
pixels to form a neighbourhood
(Figure: at the image border, parts of the filter mask fall outside the image.)
Strange Things Happen At The
Edges! (cont…)
• There are a few approaches to dealing with missing
edge pixels:
– Omit missing pixels
• Only works with some filters
• Can add extra code and slow down processing
– Pad the image
• Typically with either all white or all black pixels
– Replicate border pixels
– Truncate the image
– Allow pixels to wrap around the image
• Can cause some strange image artefacts
Strange Things Happen At The
Edges! (cont…)
(Figures: original image; filtered with zero padding; filtered with replicated edge pixels; filtered with wrap-around edge pixels.)
Correlation & Convolution
The filtering we have been talking about so far is referred to as correlation, with the filter itself referred to as the correlation kernel.
Convolution is a similar operation, with just one subtle difference: the kernel is rotated by 180° before being applied. For the same 3x3 neighbourhood (a…i) and filter (r…z) as before:

  e_processed = v*e + z*a + y*b + x*c + w*d + u*f + t*g + s*h + r*i
Sharpening Spatial Filters
• Sharpening spatial filters seek to highlight
fine detail
– Remove blurring from images
– Highlight edges
• Sharpening filters are based on spatial
differentiation
Spatial Differentiation
Differentiation measures the rate of change of a
function
Let’s consider a simple 1 dimensional example
Spatial Differentiation
(Figure: an example function with regions A and B of different rates of change.)
1st Derivative
The formula for the 1st derivative of a function is as follows:

  ∂f/∂x = f(x + 1) - f(x)

It's just the difference between subsequent values and measures the rate of change of the function.
1st Derivative (cont…)

Input:          5 5 4 3 2 1 0 0 0 6 0 0 0 0 1 3 1 0 0 0 0 7 7 7 7
1st derivative:   0 -1 -1 -1 -1 -1 0 0 6 -6 0 0 0 1 2 -2 -1 0 0 0 7 0 0 0
2nd Derivative
The formula for the 2nd derivative of a function is as follows:

  ∂²f/∂x² = f(x + 1) + f(x - 1) - 2f(x)

It simply takes into account the values both before and after the current value.
2nd Derivative (cont…)

Input:          5 5 4 3 2 1 0 0 0 6 0 0 0 0 1 3 1 0 0 0 0 7 7 7 7
2nd derivative:   -1 0 0 0 0 1 0 6 -12 6 0 0 1 1 -4 1 1 0 0 7 -7 0 0
Using Second Derivatives For Image
Enhancement
• The 2nd derivative is more useful for image
enhancement than the 1st derivative
– Stronger response to fine detail
– Simpler implementation
– We will come back to the 1st order derivative later on
• The first sharpening filter we will look at is the
Laplacian
– Isotropic
– One of the simplest sharpening filters
– We will look at a digital implementation
The Laplacian
The Laplacian is defined as follows:

  ∇²f = ∂²f/∂x² + ∂²f/∂y²

where the partial 2nd order derivative in the x direction is defined as:

  ∂²f/∂x² = f(x + 1, y) + f(x - 1, y) - 2f(x, y)

and in the y direction as:

  ∂²f/∂y² = f(x, y + 1) + f(x, y - 1) - 2f(x, y)
The Laplacian (cont…)
So, the Laplacian can be given as follows:

  ∇²f = [f(x + 1, y) + f(x - 1, y) + f(x, y + 1) + f(x, y - 1)] - 4f(x, y)

We can easily build a filter based on this:

  0  1  0
  1 -4  1
  0  1  0
The Laplacian (cont…)
Applying the Laplacian to an image we get a
new image that highlights edges and other
discontinuities

(Figures: original image; Laplacian-filtered image; Laplacian-filtered image scaled for display.)
But That Is Not Very Enhanced!
The result of Laplacian filtering is not an enhanced image.
We have to do more work to get our final image: subtract the Laplacian result from the original image to generate the final sharpened image:

  g(x, y) = f(x, y) - ∇²f
Laplacian Image Enhancement

(Figures: original image minus Laplacian-filtered image equals sharpened image.)

In the final sharpened image, edges and fine detail are much more obvious.
Simplified Image Enhancement
The entire enhancement can be combined into a single filtering operation:

  g(x, y) = f(x, y) - ∇²f
          = f(x, y) - [f(x + 1, y) + f(x - 1, y) + f(x, y + 1) + f(x, y - 1) - 4f(x, y)]
          = 5f(x, y) - f(x + 1, y) - f(x - 1, y) - f(x, y + 1) - f(x, y - 1)
Simplified Image Enhancement (cont…)
This gives us a new filter which does the whole job for us in one step:

   0 -1  0
  -1  5 -1
   0 -1  0
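The one-step filter can be sketched directly with array slicing (a NumPy sketch; the function name and the edge-replication border handling are my own choices):

```python
import numpy as np

def laplacian_sharpen(image):
    """Sharpen with the combined kernel [[0,-1,0],[-1,5,-1],[0,-1,0]],
    i.e. g = 5f(x,y) - f(x+1,y) - f(x-1,y) - f(x,y+1) - f(x,y-1)."""
    image = np.asarray(image, dtype=np.float64)
    f = np.pad(image, 1, mode="edge")
    h, w = image.shape
    g = (5 * f[1:h + 1, 1:w + 1]
         - f[0:h, 1:w + 1] - f[2:h + 2, 1:w + 1]   # neighbours above/below
         - f[1:h + 1, 0:w] - f[1:h + 1, 2:w + 2])  # neighbours left/right
    return np.clip(g, 0, 255)
```

Flat regions are left unchanged (the Laplacian is zero there), while isolated bright details are amplified.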
Variants On The Simple Laplacian
There are several slightly different versions of the Laplacian that can be used:

  Simple Laplacian:     Variant of Laplacian:
   0  1  0               1  1  1
   1 -4  1               1 -8  1
   0  1  0               1  1  1

  One-step sharpening variant:
  -1 -1 -1
  -1  9 -1
  -1 -1 -1
1st Derivative Filtering
Implementing 1st derivative filters is difficult in practice.
For a function f(x, y), the gradient of f at coordinates (x, y) is given as the column vector:

  ∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ
1st Derivative Filtering (cont…)
The magnitude of this vector is given by:

  ∇f = mag(∇f) = [Gx² + Gy²]^(1/2) = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)

For practical reasons this can be simplified as:

  ∇f ≈ |Gx| + |Gy|
1st Derivative Filtering (cont…)
There is some debate as to how best to calculate these gradients, but we will use:

  ∇f ≈ |(z7 + 2z8 + z9) - (z1 + 2z2 + z3)| + |(z3 + 2z6 + z9) - (z1 + 2z4 + z7)|

which is based on these coordinates:

  z1 z2 z3
  z4 z5 z6
  z7 z8 z9
Sobel Operators
Based on the previous equations we can derive the Sobel operators:

  -1 -2 -1        -1  0  1
   0  0  0        -2  0  2
   1  2  1        -1  0  1

To filter an image it is filtered using both operators, and the results are added together.
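Applying both operators and summing the absolute responses can be sketched as follows (a NumPy sketch computing only interior pixels; the function name is mine):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def sobel_magnitude(image):
    """Approximate gradient magnitude |Gx| + |Gy| for interior pixels."""
    f = np.asarray(image, dtype=np.float64)
    h, w = f.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for s in range(3):
        for t in range(3):
            win = f[s:s + h - 2, t:t + w - 2]
            gx += SOBEL_X[s, t] * win
            gy += SOBEL_Y[s, t] * win
    return np.abs(gx) + np.abs(gy)
```

A vertical step edge produces a strong response in Gx and none in Gy, so the magnitude peaks along the edge, which is why Sobel filters are typically used for edge detection.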
Sobel Example
An image of a
contact lens which is
enhanced in order to
make defects (at
four and five o’clock
in the image) more
obvious

Sobel filters are typically used for edge detection


1 st & 2 nd Derivatives
• Comparing the 1st and 2nd derivatives we
can conclude the following:
– 1st order derivatives generally produce thicker
edges
– 2nd order derivatives have a stronger response
to fine detail e.g. thin lines
– 1st order derivatives have stronger response
to grey level step
– 2nd order derivatives produce a double
response at step changes in grey level
Image Enhancement in
Frequency Domain
Some Basic Frequency Domain
Filters
Low Pass Filter

High Pass Filter


Smoothing Frequency Domain
Filters
• Smoothing is achieved in the frequency domain by
dropping out the high frequency components
• The basic model for filtering is:
G(u,v) = H(u,v)F(u,v)
where F(u,v) is the Fourier transform of the image
being filtered and H(u,v) is the filter transform
function
• Low pass filters – only pass the low frequencies,
drop the high ones
Ideal Low Pass Filter
Simply cut off all high frequency components
that are a specified distance D0 from the origin of
the transform

changing the distance changes the behaviour of


the filter
Ideal Low Pass Filter (cont…)
The transfer function for the ideal low pass filter can be given as:

  H(u, v) = 1 if D(u, v) ≤ D0
            0 if D(u, v) > D0

where D(u, v) is the distance from the centre of the (shifted) transform:

  D(u, v) = [(u - M/2)² + (v - N/2)²]^(1/2)
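The full filtering pipeline G = H·F can be sketched with NumPy's FFT (a sketch; the function name and the use of fftshift to centre the spectrum are my own choices):

```python
import numpy as np

def ideal_lowpass(image, d0):
    """Filter in the frequency domain with an ideal low-pass H:
    1 within distance d0 of the centred origin, 0 outside."""
    f = np.asarray(image, dtype=np.float64)
    F = np.fft.fftshift(np.fft.fft2(f))        # centred spectrum
    m, n = f.shape
    u, v = np.meshgrid(np.arange(m) - m / 2, np.arange(n) - n / 2,
                       indexing="ij")
    H = (np.sqrt(u ** 2 + v ** 2) <= d0).astype(float)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
```

A constant image passes through unchanged (all its energy is at the DC term), and a cutoff larger than the spectrum radius leaves any image untouched.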
Ideal Low Pass Filter (cont…)

Above we show an image, its Fourier spectrum, and a series of ideal low pass filters of radius 5, 15, 30, 80 and 230 superimposed on top of it.
Ideal Low Pass Filter (cont…)
(Figures: original image, and the results of ideal low pass filtering with radii 5, 15, 30, 80 and 230.)
Butterworth Lowpass Filters
The transfer function of a Butterworth lowpass filter of order n with cutoff frequency at distance D0 from the origin is defined as:

  H(u, v) = 1 / (1 + [D(u, v)/D0]^(2n))
Butterworth Lowpass Filter (cont…)
(Figures: original image, and the results of filtering with Butterworth filters of order 2 and cutoff radii 5, 15, 30, 80 and 230.)
Gaussian Lowpass Filters
The transfer function of a Gaussian lowpass filter is defined as:

  H(u, v) = e^(-D²(u, v) / 2D0²)
Gaussian Lowpass Filters (cont…)
(Figures: original image, and the results of Gaussian lowpass filtering with cutoff radii 5, 15, 30, 85 and 230.)
Lowpass Filters Compared
(Figures: results of filtering with an ideal low pass filter of radius 15, a Butterworth filter of order 2 and cutoff radius 15, and a Gaussian filter with cutoff radius 15.)
Lowpass Filtering Examples
A low pass Gaussian filter is used to connect
broken text
Lowpass Filtering Examples (cont…)
Different lowpass Gaussian filters used to
remove blemishes in a photograph
Lowpass Filtering Examples (cont…)

(Figures: original image and its spectrum; the Gaussian lowpass filter; the processed image.)
Sharpening in the Frequency
Domain
• Edges and fine detail in images are
associated with high frequency
components
• High pass filters – only pass the high
frequencies, drop the low ones
• High pass filters are precisely the reverse of low pass filters, so:
Hhp(u, v) = 1 – Hlp(u, v)
Ideal High Pass Filters
The ideal high pass filter is given as:

  H(u, v) = 0 if D(u, v) ≤ D0
            1 if D(u, v) > D0

where D0 is the cut-off distance as before.
Ideal High Pass Filters (cont…)
(Figures: results of ideal high pass filtering with D0 = 15, 30 and 80.)
Butterworth High Pass Filters
The Butterworth high pass filter is given as:

  H(u, v) = 1 / (1 + [D0/D(u, v)]^(2n))

where n is the order and D0 is the cut-off distance as before.
Butterworth High Pass Filters (cont…)
(Figures: results of Butterworth high pass filtering of order 2 with D0 = 15, 30 and 80.)
Gaussian High Pass Filters
The Gaussian high pass filter is given as:

  H(u, v) = 1 - e^(-D²(u, v) / 2D0²)

where D0 is the cut-off distance as before.
Gaussian High Pass Filters (cont…)
(Figures: results of Gaussian high pass filtering with D0 = 15, 30 and 80.)
Highpass Filter Comparison
(Figures: results of ideal, Butterworth (order 2), and Gaussian high pass filtering, each with D0 = 15.)

Highpass Filtering Example
(Figures: original image; highpass filtering result; high-frequency-emphasis result; result after histogram equalisation.)
Laplacian In The Frequency Domain
(Figures: the Laplacian in the frequency domain; a 2-D image of the frequency-domain Laplacian; the inverse DFT of the frequency-domain Laplacian; a zoomed section compared to the spatial filter.)
Frequency Domain Laplacian Example
(Figures: original image; Laplacian-filtered image; Laplacian image scaled; enhanced image.)
(R,G,B) Parameterization of Full Color Images

Digital Image Types: Intensity Image
Intensity image or monochrome image: each pixel corresponds to light intensity, normally represented in gray scale (gray level).

Example gray scale values:

  10 10 16 28
   9  6 26 37
  15 25 13 22
  32 15 87 39
Digital Image Types: RGB Image
Color image or RGB image: each pixel contains a vector representing the red, green, and blue components, i.e. three overlaid component matrices, one per channel.
Image Types: Binary Image
Binary image or black and white image: each pixel contains one bit, where 1 represents white and 0 represents black.

Example binary data:

  0 0 0 0
  0 0 0 0
  1 1 1 1
  1 1 1 1
Color Image Fundamentals
In 1666 Sir Isaac Newton discovered that when a
beam of sunlight passes through a glass prism,
the emerging beam is split into a spectrum of
colors
Color Image Fundamentals
Color Image Fundamentals
Chromatic light spans the electromagnetic
spectrum from approximately 400 to 700 nm
As we mentioned before, human color vision is achieved through 6 to 7 million cones in each eye
Color Image Fundamentals
Approximately 65% of these cones are sensitive to red light, 33% to green light, and only about 2% to blue light
Absorption curves for the different cones have
been determined experimentally
Strangely these do not match the CIE standards
for red (700nm), green (546.1nm) and blue
(435.8nm) light as the standards were developed
before the experiments!
Color Image Fundamentals
Color Image Fundamentals
3 basic qualities are used to describe the quality
of a chromatic light source:
 Radiance: the total amount of energy that flows from
the light source (measured in watts)
 Luminance: the amount of energy an observer
perceives from the light source (measured in lumens)
 Note we can have high radiance, but low luminance
 Brightness: a subjective (practically unmeasurable)
notion that embodies the intensity of light
CIE Chromaticity Diagram
• Specifying colors systematically can be achieved using the CIE chromaticity diagram
• On this diagram the x-axis represents the proportion of red and the y-axis represents the proportion of green
• The proportion of blue used in a color is calculated as:
  z = 1 – (x + y)
CIE Chromaticity Diagram (cont…)
Green: 62% green, 25%
red and 13% blue
Red: 32% green, 67%
red and 1% blue
CIE Chromaticity Diagram (cont…)
• Any color located on the boundary of the chromaticity chart is fully saturated
• The point of equal energy has equal amounts of
each color and is the CIE standard for pure white
• Any straight line joining two points in the diagram
defines all of the different colors that can be
obtained by combining these two colors additively
• This can be easily extended to three points
CIE Chromaticity Diagram (cont…)
This means the entire
color range cannot be
displayed based on any
three colors
The triangle shows the
typical color gamut
produced by RGB
monitors
The strange shape is
the gamut achieved by
high quality color
printers
Color Models
• From the previous discussion it should be
obvious that there are different ways to
model color
• Models used in color image processing:
– RGB (Red Green Blue)
– HSI (Hue Saturation Intensity)
– YIQ
RGB
• In the RGB model each color appears in its
primary spectral components of red, green and
blue
• The model is based on a Cartesian coordinate
system
– RGB values are at 3 corners
– Cyan, magenta, and yellow are at three other corners
– Black is at the origin
– White is the corner furthest from the origin
– Different colors are points on or inside the cube
represented by RGB vectors
RGB (cont…)
RGB (cont…)
RGB (cont…)
The HSI Color Model
• RGB is useful for hardware
implementations.
• However, RGB is not a particularly intuitive
way in which to describe colors
• Rather when people describe colors they
tend to use hue, saturation and brightness
• RGB is great for color generation, but HSI
is great for color description
The HSI Color Model (cont…)
• The HSI model uses three measures to
describe colors:
– Hue: A color attribute that describes a pure
color (pure yellow, orange or red)
– Saturation: Gives a measure of how much a
pure color is diluted with white light
– Intensity: Brightness is nearly impossible to
measure because it is so subjective. Instead we
use intensity. Intensity is the same achromatic
notion that we have seen in grey level images
HSI, Intensity & RGB
• Intensity can be extracted from RGB images
– which is not surprising if we stop to think
about it
• Remember the diagonal on the RGB color
cube that we saw previously ran from black
to white
• Now consider if we stand this cube on the
black vertex and position the white vertex
directly above it
The HSI Color Model
The HSI Color Model (cont…)
The HSI Color Model (cont…)
Because the only important things are the angle and
the length of the saturation vector this plane is also
often represented as a circle or a triangle
HSI Model Examples
HSI Model Examples
Converting From RGB To HSI
Converting From HSI To RGB
Converting From HSI To RGB (cont…)
HSI & RGB
(Figures: the RGB color cube; the H, S, and I components of the RGB color cube.)

RGB -> HSI -> RGB
(Figures: an RGB image and its hue, saturation, and intensity components.)

RGB -> HSI -> RGB (cont…)
(Figures: hue, saturation, and intensity components recombined into the RGB image.)

HSI, Intensity & RGB (cont…)
HSI, Hue & RGB
YIQ color model
• This model was designed to separate chrominance from
luminance.
• This was a requirement in the early days of color
television when black-and-white sets still were expected
to pick up and display what were originally color pictures.
• The Y-channel contains luminance information
(sufficient for black-and-white television sets) while the I
and Q channels (in-phase and in-quadrature) carried the
color information.
• A color television set would take these three channels, Y,
I, and Q, and map the information back to R, G, and B
levels for display on a screen.
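The luminance/chrominance separation can be sketched as a matrix multiply (a sketch using one common form of the NTSC conversion matrix; exact coefficients vary slightly between references):

```python
import numpy as np

# One common form of the NTSC RGB -> YIQ matrix (coefficients differ
# slightly between references).
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y: luminance
    [0.596, -0.274, -0.322],   # I: in-phase chrominance
    [0.211, -0.523,  0.312],   # Q: quadrature chrominance
])

def rgb_to_yiq(rgb):
    """Convert an (..., 3) RGB array with values in [0, 1] to YIQ."""
    return np.asarray(rgb, dtype=np.float64) @ RGB_TO_YIQ.T
```

Note that any gray pixel (R = G = B) maps to I = Q = 0, so a black-and-white set reading only the Y channel still gets a faithful monochrome picture.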
YIQ color model
Pseudo Color Image Processing
Pseudo color (also called false color)
image processing consists of
assigning colors to grey values based
on a specific criterion
The principal use of pseudo color image processing is for human visualization
 Humans can discern between
thousands of color shades and
intensities, compared to only about two
dozen or so shades of grey
Pseudo Color Image Processing –
Intensity Slicing
• Intensity slicing and color coding is one of the
simplest kinds of pseudo color image processing
• First we consider an image as a 3D function
mapping spatial coordinates to intensities (that we
can consider heights)
• Now consider placing planes at certain levels
parallel to the coordinate plane
• If a value is one side of such a plane it is rendered in
one color, and a different color if on the other side
Pseudo Color Image Processing –
Intensity Slicing (cont..)
Pseudo color Image Processing –
Intensity Slicing (cont…)
• In general intensity slicing can be
summarized as:
– Let [0, L-1] represent the grey scale
– Let l0 represent black [f(x, y) = 0] and let lL-1
represent white [f(x, y) = L-1]
– Suppose P planes perpendicular to the intensity axis are defined at levels l1, l2, …, lP
– Assuming that 0 < P < L-1, the P planes partition the grey scale into P + 1 intervals V1, V2, …, VP+1
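The partition-and-assign scheme above can be sketched in a few lines (a NumPy sketch; the function name and example colors are mine):

```python
import numpy as np

def intensity_slice(image, levels, colors):
    """Pseudo colour by intensity slicing: P thresholds in `levels`
    partition the grey scale into P + 1 intervals, each mapped to the
    corresponding RGB triple in `colors` (len(colors) == len(levels) + 1)."""
    # np.digitize returns, per pixel, the index of the interval it falls in
    idx = np.digitize(np.asarray(image), levels)
    return np.asarray(colors)[idx]
```

For example, with thresholds at 100 and 200 a grey image is split into three intervals, each rendered in its own color.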
Functional Block Diagram for Pseudo
Color Image Processing

• The outputs are fed into the corresponding red,


green and blue inputs of an RGB color monitor
Transformation
functions used to
obtain images
Pseudo color enhancement using gray level to color
transformations (as shown in previous slide)
A Pseudo color coding approach used when
several monochrome images are available
Basics of full color image
processing

• Spatial masks for gray scale and RGB color images


A full color
image and its
various
components
Adjusting the intensity of an image using color transformations

(Figures: original image; result of decreasing image intensity by 30% (k = 0.7); the required RGB, CMY and HSI transformation functions.)
Components of a color circle
Color complement transformations

(Figures: original image; complement transformation functions; complement of the image based on the RGB mapping function; an approximation of the RGB complement using HSI transformations.)
