Dip Unit2
• Frequency domain
– Modify the Fourier transform of an image
Outline: spatial domain operations
• Background
• Gray level transformations
• Arithmetic/logic operations
Background
• Spatial domain processing
– g(x,y) = T[ f(x,y) ]
– f(x,y): input image
– g(x,y): output image
– T: operator
• Defined over some neighborhood of (x,y)
Background (cont.)
• T operates over a
neighborhood of (x,y)
Point processing
• 1x1 neighborhood
– Gray level transformation, or point processing
– s = T(r)
– Examples: contrast stretching, thresholding
Neighborhood processing
• A larger predefined neighborhood
– Ex. 3x3 neighborhood
– mask, filters, kernels, templates, windows
– Mask processing or filtering
Some Basic Gray Level Transformations
• Log transformation: s = c·log(1 + r)
Power-law transformations
• s = c·r^γ
• γ: gamma
– γ < 1: expands dark gray levels
– γ > 1: expands light gray levels
– Displays, printers, and
scanners follow a
power law
– Gamma correction
Example: Gamma correction
(Figure: gamma correction with γ = 0.6, 0.4, 0.3)
Power-law: γ > 1
• Expands light
gray levels
(Figure: γ = 3, 4, 5)
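The power-law transform above can be sketched in Python with NumPy (the function and variable names are ours; images are assumed normalized to [0, 1]):

```python
import numpy as np

def gamma_correct(image, gamma, c=1.0):
    """Power-law transform s = c * r**gamma on a normalized image."""
    return c * image ** gamma

r = np.array([0.0, 0.25, 0.5, 1.0])
dark_boost = gamma_correct(r, 0.4)    # gamma < 1: expands dark levels
light_expand = gamma_correct(r, 3.0)  # gamma > 1: expands light levels
```

Note that 0 and 1 are fixed points of the transform; only the mid-tones move.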
Piece-wise linear transformations
• Advantage: the
piecewise function
can be arbitrarily
complex
(Figure: contrast stretching curve with control points)
Intensity or Gray-level slicing
• Highlighting a specific range of gray levels
Intensity or Gray-level slicing
• Without background:
s = S1 if A ≤ r ≤ B; 0 otherwise
• With background:
s = S1 if A ≤ r ≤ B; r otherwise
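A minimal NumPy sketch of both slicing variants (the helper names are ours; S1 defaults to 255):

```python
import numpy as np

def slice_without_background(img, a, b, s1=255):
    """Set the range [a, b] to s1, everything else to 0."""
    out = np.zeros_like(img)
    out[(img >= a) & (img <= b)] = s1
    return out

def slice_with_background(img, a, b, s1=255):
    """Set the range [a, b] to s1, keep other gray levels unchanged."""
    out = img.copy()
    out[(img >= a) & (img <= b)] = s1
    return out

img = np.array([10, 120, 150, 200], dtype=np.uint8)
no_bg = slice_without_background(img, 100, 180)
with_bg = slice_with_background(img, 100, 180)
```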
Bit-plane slicing
• Highlight specific bits
– Ex. a pixel with value 10010100₂ contributes a 1 to planes 7, 4, and 2
(Figure: the 8 bit-planes of an image with gray levels 0~255)
Bit-plane slicing: example
• Useful for image
compression
(Figure: bit-planes 7 (most significant) down to 0 (least significant))
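Bit-plane extraction is just a shift and a mask; a small NumPy sketch (names are ours):

```python
import numpy as np

def bit_plane(img, k):
    """Extract bit-plane k (0 = least significant) of an 8-bit image."""
    return (img >> k) & 1

img = np.array([[193, 148], [0, 255]], dtype=np.uint8)
plane7 = bit_plane(img, 7)  # most significant bit
plane0 = bit_plane(img, 0)  # least significant bit
```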
Arithmetic/logic operations
• Logic operations
• Image subtraction
• Image averaging
Logic operations
• Logic operations: pixel-wise AND, OR, NOT
• The pixel gray level values are treated as
strings of binary digits
Ex. 193 => 11000001
(Figure: A AND B, A OR B)
Image subtraction
f: original (8 bits); h: f reduced to its 4 most significant bits
• Difference image
g(x,y)=f(x,y)-h(x,y)
Image subtraction: scaling the
difference image
• g(x,y)=f(x,y)-h(x,y)
– f and h are 8-bit => g(x,y) ∈ [-255, 255]
1. (1) add 255, (2) divide by 2
• Simple, but the result need not cover the full [0, 255] range
2. (1) subtract min(g), (2) multiply by 255/max(g)
• Uses the full [0, 255] range
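Both scaling methods can be sketched as follows (the function names are ours):

```python
import numpy as np

def scale_shift(g):
    """Method 1: (g + 255) / 2 -- cheap, but may not span [0, 255]."""
    return (g + 255) // 2

def scale_full(g):
    """Method 2: subtract min, scale by 255/max -- uses the full range."""
    g = g.astype(np.float64) - g.min()
    return (g * 255.0 / g.max()).astype(np.uint8)

f = np.array([200, 100, 50], dtype=np.int32)
h = np.array([100, 100, 150], dtype=np.int32)
g = f - h              # difference image, values in [-255, 255]
m1 = scale_shift(g)
m2 = scale_full(g)
```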
Image averaging
• Noisy observations: g_i(x,y) = f(x,y) + η_i(x,y)
• Average of K observations:
ḡ(x,y) = (1/K) Σ_{i=1..K} g_i(x,y)
• E{ ḡ(x,y) } = f(x,y)
• σ²_ḡ(x,y) = (1/K) σ²_η(x,y)
– the noise variance shrinks as K grows
(Figure: original image with added Gaussian noise, and averaging results for K = 8, 16, 64, 128)
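The variance reduction can be checked numerically; a sketch assuming zero-mean Gaussian noise with standard deviation 20 (all names and parameters here are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)  # noise-free reference image

def average_noisy(f, K, sigma=20.0):
    """Average K noisy observations g_i = f + eta_i."""
    acc = np.zeros_like(f)
    for _ in range(K):
        acc += f + rng.normal(0.0, sigma, f.shape)
    return acc / K

# mean absolute error falls roughly as sigma / sqrt(K)
err8 = np.abs(average_noisy(f, 8) - f).mean()
err128 = np.abs(average_noisy(f, 128) - f).mean()
```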
Histogram
The (intensity or brightness) histogram shows how many
times a particular gray level (intensity) appears in an image.
Example 4×6 image (gray levels 0–6):
0 1 1 2 4 6
2 1 0 0 2 4
5 2 0 0 4 2
1 1 2 4 1 0
(Figure: the image and its histogram over levels 0–6)
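The histogram of the example image above can be computed directly with NumPy's bincount:

```python
import numpy as np

img = np.array([[0, 1, 1, 2, 4, 6],
                [2, 1, 0, 0, 2, 4],
                [5, 2, 0, 0, 4, 2],
                [1, 1, 2, 4, 1, 0]])

# one bin per gray level 0..6
hist = np.bincount(img.ravel(), minlength=7)
```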
Histogram Processing
• Histogram counts the number of
occurrences of each gray-level value
• Plot of frequency of occurrence as a
function of pixel value
• It is the equivalent of a Probability Density
Function (pdf)
Image Histograms
The histogram of an image shows us the
distribution of grey levels in the image
Massively useful in image processing, especially in
segmentation
(Plot: frequencies vs. grey levels)
Histogram Examples (cont…)
Dark Image
Histogram Examples (cont…)
Bright Image
Histogram Examples (cont…)
High Contrast Image
Histogram Examples (cont…)
Low Contrast Image
Histogram
Low Contrast Image
An image has low contrast when the complete range of possible
values is not used. Inspection of the histogram shows this
lack of contrast.
Histogram Equalisation
Spreading out the frequencies in an
image (or equalising the image) is a
simple way to improve dark or washed
out images
Basic idea: find a map f(x) such that
the histogram of the modified
(equalized) image is flat (uniform).
The formula for histogram
equalisation is
s_k = T(r_k) = Σ_{j=1..k} p_r(r_j)
where
r_k: input intensity
s_k: processed intensity
Equivalently, y = s = Σ_{t=0..x} h(t) is the
cumulative probability function; note that
Σ_{t=0..L-1} h(t) = 1
http://en.wikipedia.org/wiki/Inverse_transform_sampling
EE465: Introduction to Digital Image
Processing
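A sketch of histogram equalisation via the scaled cumulative distribution (the lookup-table approach and all names are ours; L = 256 is assumed):

```python
import numpy as np

def equalize(img, L=256):
    """Map each level through the scaled CDF: s_k = (L-1) * sum_{j<=k} p(j)."""
    hist = np.bincount(img.ravel(), minlength=L)
    cdf = hist.cumsum() / img.size          # cumulative probability
    lut = np.round((L - 1) * cdf).astype(np.uint8)
    return lut[img]                          # apply as a lookup table

img = np.array([[52, 55, 61], [59, 79, 61], [85, 170, 61]], dtype=np.uint8)
eq = equalize(img)  # gray levels spread over the full [0, 255] range
```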
Histogram Specification/Matching
Given a target image B, how to modify a given image A such that
the histogram of the modified A can match that of target image B?
(Diagram: equalize A with mapping T and B with mapping S; the matching map is S⁻¹∘T)
EE565 Advanced Image Processing, Copyright Xin Li 2008
Spatial Filtering
• Spatial operations are performed on the
neighborhood of input pixels using SPATIAL
MASKS or SPATIAL FILTERS .
• The mask that is used on the sub image of
the image is called KERNEL, TEMPLATE or
WINDOW.
• The values in the filter sub image are
referred to as COEFFICIENTS rather than
PIXELS.
The Spatial Filtering Process
Origin at the top left; x runs across and y runs down the image f(x, y).
Simple 3×3 neighbourhood:      3×3 filter:
a b c                          r s t
d e f                          u v w
g h i                          x y z
e_processed = r*a + s*b + t*c +
              u*d + v*e + w*f +
              x*g + y*h + z*i
g(x, y) = Σ_{s=-a..a} Σ_{t=-b..b} w(s, t) f(x+s, y+t)
• Filtering can be given in equation form as shown above, for a filter of size (2a+1)×(2b+1)
• Notations are based on the image shown above
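The double sum above can be sketched directly (a naive loop that skips border pixels; names are ours):

```python
import numpy as np

def correlate(img, w):
    """Linear spatial filtering: g(x,y) = sum_{s,t} w(s,t) f(x+s, y+t).
    Border pixels are skipped, so the output shrinks by the kernel radius."""
    a, b = w.shape[0] // 2, w.shape[1] // 2
    out = np.zeros((img.shape[0] - 2 * a, img.shape[1] - 2 * b))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            out[x, y] = np.sum(w * img[x:x + w.shape[0], y:y + w.shape[1]])
    return out

# 3x3 averaging filter on the neighbourhood used later in the slides
f = np.array([[104, 100, 108],
              [ 99, 106,  98],
              [ 95,  90,  85]], dtype=np.float64)
w = np.full((3, 3), 1.0 / 9.0)
e = correlate(f, w)[0, 0]  # about 98.33
```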
Smoothing Spatial Filters
(Averaging Filters or Low Pass filters)
The output of the linear
spatial filter is simply
the average of the
pixels in a
neighbourhood around
a central value.
Simple averaging filter:
1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9
As this process results
in an image with
reduced sharp
transitions in gray
levels, it is especially
useful in removing
noise from images.
3×3 neighbourhood of the original image:
104 100 108
 99 106  98
 95  90  85
3×3 smoothing filter: all coefficients 1/9
e = 1/9*106 + 1/9*104 + 1/9*100 + 1/9*108 +
    1/9*99 + 1/9*98 +
    1/9*95 + 1/9*90 + 1/9*85
  = 98.3333
The above is repeated for every pixel in the
original image to generate the smoothed image
Image Smoothing Example
The image at the top left
is an original image of
size 500*500 pixels
The subsequent images
show the image after
filtering with an averaging
filter of increasing sizes
3, 5, 9, 15 and 35
Notice how detail begins
to disappear
Weighted Smoothing Filters
More effective smoothing
filters can be generated by
allowing different pixels in
the neighbourhood
different weights in the
averaging function.
Weighted averaging filter:
1/16 2/16 1/16
2/16 4/16 2/16
1/16 2/16 1/16
Pixels closer to the
central pixel are more
important.
Often referred to as
weighted averaging.
Another Smoothing Example
By smoothing the original image we get rid of
lots of the finer detail which leaves only the
gross features for thresholding
Strange Things Happen At The
Edges!
At the edges of an image we are missing
pixels to form a neighbourhood
(Figure: at image borders, part of the neighbourhood falls outside the image f(x, y))
Strange Things Happen At The
Edges! (cont…)
• There are a few approaches to dealing with missing
edge pixels:
– Omit missing pixels
• Only works with some filters
• Can add extra code and slow down processing
– Pad the image
• Typically with either all white or all black pixels
– Replicate border pixels
– Truncate the image
– Allow pixels to wrap around the image
• Can cause some strange image artefacts
Strange Things Happen At The
Edges! (cont…)
Filtered Image:
Zero Padding
Filtered Image:
Wrap Around Edge Pixels
Correlation & Convolution
The filtering we have been talking about so far
is referred to as correlation with the filter itself
referred to as the correlation kernel
Convolution is a similar operation, with just
one subtle difference
Simple 3×3 neighbourhood:      3×3 filter:
a b c                          r s t
d e f                          u v w
g h i                          x y z
e_processed = z*a + y*b + x*c +
              w*d + v*e + u*f +
              t*g + s*h + r*i
(i.e. the kernel is rotated by 180° before being applied)
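The kernel-flip relationship can be sketched as follows (valid-region only, 3×3 kernels; names are ours):

```python
import numpy as np

def correlate(img, w):
    """Valid-region correlation with a 3x3 kernel (no padding)."""
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            out[x, y] = np.sum(w * img[x:x + 3, y:y + 3])
    return out

def convolve(img, w):
    """Convolution = correlation with the kernel rotated 180 degrees."""
    return correlate(img, w[::-1, ::-1])

img = np.arange(9.0).reshape(3, 3)
w = np.array([[1.0, 2, 3], [4, 5, 6], [7, 8, 9]])
c1 = correlate(img, w)[0, 0]
c2 = convolve(img, w)[0, 0]
```

For a symmetric kernel (e.g. the averaging or Laplacian filters) the two operations coincide, which is why the distinction is often glossed over.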
Sharpening Spatial Filters
• Sharpening spatial filters seek to highlight
fine detail
– Remove blurring from images
– Highlight edges
• Sharpening filters are based on spatial
differentiation
Spatial Differentiation
Differentiation measures the rate of change of a
function
Let’s consider a simple 1 dimensional example
Spatial Differentiation
1 st Derivative
The formula for the 1st derivative of a function
is as follows:
∂f/∂x = f(x+1) − f(x)
It's just the difference between subsequent
values and measures the rate of change of the
function
1 st Derivative (cont…)
f:  5 5 4 3 2 1 0 0 0 6 0 0 0 0 1 3 1 0 0 0 0 7 7 7 7
f': 0 -1 -1 -1 -1 -1 0 0 6 -6 0 0 0 1 2 -2 -1 0 0 0 7 0 0 0
2 nd Derivative
The formula for the 2nd derivative of a function
is as follows:
∂²f/∂x² = f(x+1) + f(x−1) − 2f(x)
f:   5 5 4 3 2 1 0 0 0 6 0 0 0 0 1 3 1 0 0 0 0 7 7 7 7
f'': -1 0 0 0 0 1 0 6 -12 6 0 0 1 1 -4 1 1 0 0 7 -7 0 0
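Both derivative sequences above can be reproduced with simple array slicing:

```python
import numpy as np

f = np.array([5, 5, 4, 3, 2, 1, 0, 0, 0, 6, 0, 0, 0, 0,
              1, 3, 1, 0, 0, 0, 0, 7, 7, 7, 7], dtype=np.int64)

d1 = f[1:] - f[:-1]                # f(x+1) - f(x)
d2 = f[2:] + f[:-2] - 2 * f[1:-1]  # f(x+1) + f(x-1) - 2 f(x)
```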
Using Second Derivatives For Image
Enhancement
• The 2nd derivative is more useful for image
enhancement than the 1st derivative
– Stronger response to fine detail
– Simpler implementation
– We will come back to the 1st order derivative later on
• The first sharpening filter we will look at is the
Laplacian
– Isotropic
– One of the simplest sharpening filters
– We will look at a digital implementation
The Laplacian
The Laplacian is defined as follows:
∇²f = ∂²f/∂x² + ∂²f/∂y²
where the partial 2nd order derivative in the x
direction is defined as follows:
∂²f/∂x² = f(x+1, y) + f(x−1, y) − 2f(x, y)
and in the y direction:
∂²f/∂y² = f(x, y+1) + f(x, y−1) − 2f(x, y)
The Laplacian (cont…)
So, the Laplacian can be given as follows:
∇²f = [f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1)] − 4f(x, y)
We can easily build a filter based on this
0 1 0
1 -4 1
0 1 0
The Laplacian (cont…)
Applying the Laplacian to an image we get a
new image that highlights edges and other
discontinuities
g(x, y) = f(x, y) − ∇²f(x, y)
Laplacian Image Enhancement
(Original image − Laplacian filtered image = Sharpened image)
g(x, y) = f(x, y) − [f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y)]
        = 5f(x, y) − f(x+1, y) − f(x−1, y) − f(x, y+1) − f(x, y−1)
Simplified Image Enhancement
(cont…)
This gives us a new filter which does the whole
job for us in one step
0 -1 0
-1 5 -1
0 -1 0
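The one-step composite kernel can be sketched as follows (valid-region only, no padding; names are ours):

```python
import numpy as np

def sharpen(img):
    """One-step Laplacian sharpening with the composite kernel
    0 -1 0 / -1 5 -1 / 0 -1 0 (output shrinks by the border)."""
    k = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float64)
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            out[x, y] = np.sum(k * img[x:x + 3, y:y + 3])
    return out

# a flat region is left unchanged; a step edge over/undershoots,
# which is what visually sharpens the transition
step = np.zeros((5, 5))
step[:, 3:] = 10.0
edge_out = sharpen(step)
```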
Variants On The Simple Laplacian
There are lots of slightly different versions of
the Laplacian that can be used:
Simple Laplacian:       Variant of Laplacian:
0  1  0                 1  1  1
1 -4  1                 1 -8  1
0  1  0                 1  1  1
-1 -1 -1
-1  9 -1
-1 -1 -1
1 st Derivative Filtering
Implementing 1st derivative filters is difficult in
practice
For a function f(x, y) the gradient of f at
coordinates (x, y) is given as the column vector:
∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ
1 st Derivative Filtering (cont…)
The magnitude of this vector is given by:
∇f = mag(∇f) = [Gx² + Gy²]^(1/2) = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)
For practical reasons this can be simplified as:
∇f ≈ |Gx| + |Gy|
1 st Derivative Filtering (cont…)
There is some debate as to how best to calculate
these gradients but we will use:
∇f ≈ |(z7 + 2z8 + z9) − (z1 + 2z2 + z3)| + |(z3 + 2z6 + z9) − (z1 + 2z4 + z7)|
which is based on these coordinates
z1 z2 z3
z4 z5 z6
z7 z8 z9
Sobel Operators
Based on the previous equations we can derive
the Sobel Operators
-1 -2 -1        -1  0  1
 0  0  0        -2  0  2
 1  2  1        -1  0  1
To filter an image, it is filtered using both operators
and the results are added together
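Applying both Sobel operators and combining with |Gx| + |Gy| might look like this (a naive valid-region loop; names are ours):

```python
import numpy as np

sobel_y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])  # responds to horizontal edges
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # responds to vertical edges

def grad_mag(img):
    """|Gx| + |Gy| gradient magnitude over the valid region."""
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            win = img[x:x + 3, y:y + 3]
            out[x, y] = abs(np.sum(sobel_x * win)) + abs(np.sum(sobel_y * win))
    return out

img = np.zeros((5, 5))
img[:, 2:] = 1.0        # vertical step edge
mag = grad_mag(img)     # strong response at the edge, zero in flat regions
```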
Sobel Example
An image of a
contact lens which is
enhanced in order to
make defects (at
four and five o’clock
in the image) more
obvious
Ideal Low Pass Filter
The transfer function of an ideal low pass filter is:
H(u, v) = 1 if D(u, v) ≤ D0
          0 if D(u, v) > D0
where D(u,v) is given as:
D(u, v) = [(u − M/2)² + (v − N/2)²]^(1/2)
Ideal Low Pass Filter (cont…)
(Results of filtering with ideal low pass filters of radius 80 and radius 230)
Butterworth Lowpass Filters
The transfer function of a Butterworth lowpass
filter of order n with cutoff frequency at
distance D0 from the origin is defined as:
H(u, v) = 1 / (1 + [D(u, v)/D0]^(2n))
Butterworth Lowpass Filter (cont…)
(Original image; results of filtering with Butterworth filters of order 2 and cutoff radius 5, 80, and 230)
Gaussian Lowpass Filters
The transfer function of a Gaussian lowpass
filter is defined as:
H(u, v) = e^(−D²(u,v) / 2D0²)
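Frequency-domain filtering with the Gaussian transfer function can be sketched with NumPy's FFT (names are ours; the spectrum is centred with fftshift to match the D(u, v) definition above):

```python
import numpy as np

def gaussian_lowpass(shape, d0):
    """H(u,v) = exp(-D^2(u,v) / (2 D0^2)), centred at (M/2, N/2)."""
    M, N = shape
    u = np.arange(M)[:, None] - M / 2
    v = np.arange(N)[None, :] - N / 2
    return np.exp(-(u ** 2 + v ** 2) / (2 * d0 ** 2))

def filter_image(img, H):
    """Multiply the centred spectrum by H and transform back."""
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
H = gaussian_lowpass(img.shape, d0=8)
smooth = filter_image(img, H)  # high-frequency detail is attenuated
```

Since H is 1 at the DC term, the image mean is preserved while the variance drops.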
Gaussian Lowpass Filters (cont…)
(Original image; results of filtering with Gaussian filters of cutoff radius 5 and 15, compared with the ideal low pass filter of radius 15 and the Butterworth filter of order 2 and cutoff radius 15)
Lowpass Filtering Examples
A low pass Gaussian filter is used to connect
broken text
Lowpass Filtering Examples (cont…)
Different lowpass Gaussian filters used to
remove blemishes in a photograph
Lowpass Filtering Examples (cont…)
(Original image and its spectrum; Gaussian lowpass filter; processed image)
Sharpening in the Frequency
Domain
• Edges and fine detail in images are
associated with high frequency
components
• High pass filters – only pass the high
frequencies, drop the low ones
• High pass filters are precisely the
reverse of low pass filters, so:
Hhp(u, v) = 1 – Hlp(u, v)
Ideal High Pass Filters
The ideal high pass filter is given as:
H(u, v) = 0 if D(u, v) ≤ D0
          1 if D(u, v) > D0
where D0 is the cut off distance as before
Ideal High Pass Filters (cont…)
(Results of Butterworth high pass filtering of order 2 with D0 = 15 and D0 = 80)
Gaussian High Pass Filters
H(u, v) = 1 − e^(−D²(u,v) / 2D0²)
(Results of Gaussian high pass filtering with D0 = 15 and D0 = 80)
Laplacian in the Frequency Domain
(Figure: the Laplacian in the frequency domain; the inverse DFT of the frequency-domain Laplacian; a zoomed section of the image compared to the spatial filter)
Frequency Domain Laplacian Example
(Original image; Laplacian filtered image; Laplacian image scaled; enhanced image)
(R,G,B) Parameterization of Full
Color Images
Grayscale Images
Digital Image Types : Intensity
Image
Intensity image or monochrome image
each pixel corresponds to light intensity
normally represented in gray scale (gray
level).
RGB components
(Figure: pixel-value matrices of the R, G, and B components)
Image Types : Binary Image
Binary image or black and white image
Each pixel contains one bit:
1 represents white
0 represents black
Binary data
0 0 0 0
0 0 0 0
1 1 1 1
1 1 1 1
Color Image Fundamentals
In 1666 Sir Isaac Newton discovered that when a
beam of sunlight passes through a glass prism,
the emerging beam is split into a spectrum of
colors
Color Image Fundamentals
Color Image Fundamentals
Chromatic light spans the electromagnetic
spectrum from approximately 400 to 700 nm
As we mentioned before human color vision is
achieved through 6 to 7 million cones in each eye
Color Image Fundamentals
Approximately 65% of these cones are sensitive
to red light, 33% to green light and only about 2% to blue light
Absorption curves for the different cones have
been determined experimentally
Strangely these do not match the CIE standards
for red (700nm), green (546.1nm) and blue
(435.8nm) light as the standards were developed
before the experiments!
Color Image Fundamentals
Color Image Fundamentals
3 basic qualities are used to describe the quality
of a chromatic light source:
Radiance: the total amount of energy that flows from
the light source (measured in watts)
Luminance: the amount of energy an observer
perceives from the light source (measured in lumens)
Note we can have high radiance, but low luminance
Brightness: a subjective (practically unmeasurable)
notion that embodies the intensity of light
CIE Chromaticity Diagram
• Specifying colors systematically can be
achieved using the CIE chromaticity diagram
• On this diagram the x-axis represents the
proportion of red and the y-axis represents
the proportion of green.
• The proportion of blue used in a color is
calculated as:
z = 1 – (x + y)
CIE Chromaticity Diagram (cont…)
Green: 62% green, 25%
red and 13% blue
Red: 32% green, 67%
red and 1% blue
CIE Chromaticity Diagram (cont…)
• Any color located on the boundary of the chromaticity
chart is fully saturated
• The point of equal energy has equal amounts of
each color and is the CIE standard for pure white
• Any straight line joining two points in the diagram
defines all of the different colors that can be
obtained by combining these two colors additively
• This can be easily extended to three points
CIE Chromaticity Diagram (cont…)
This means the entire
color range cannot be
displayed based on any
three colors
The triangle shows the
typical color gamut
produced by RGB
monitors
The strange shape is
the gamut achieved by
high quality color
printers
Color Models
• From the previous discussion it should be
obvious that there are different ways to
model color
• Models used in color image processing:
– RGB (Red Green Blue)
– HSI (Hue Saturation Intensity)
– YIQ
RGB
• In the RGB model each color appears in its
primary spectral components of red, green and
blue
• The model is based on a Cartesian coordinate
system
– RGB values are at 3 corners
– Cyan magenta and yellow are at three other corners
– Black is at the origin
– White is the corner furthest from the origin
– Different colors are points on or inside the cube
represented by RGB vectors
RGB (cont…)
RGB (cont…)
RGB (cont…)
The HSI Color Model
• RGB is useful for hardware
implementations.
• However, RGB is not a particularly intuitive
way in which to describe colors
• Rather when people describe colors they
tend to use hue, saturation and brightness
• RGB is great for color generation, but HSI
is great for color description
The HSI Color Model (cont…)
• The HSI model uses three measures to
describe colors:
– Hue: A color attribute that describes a pure
color (pure yellow, orange or red)
– Saturation: Gives a measure of how much a
pure color is diluted with white light
– Intensity: Brightness is nearly impossible to
measure because it is so subjective. Instead we
use intensity. Intensity is the same achromatic
notion that we have seen in grey level images
HSI, Intensity & RGB
• Intensity can be extracted from RGB images
– which is not surprising if we stop to think
about it
• Remember the diagonal on the RGB color
cube that we saw previously ran from black
to white
• Now consider if we stand this cube on the
black vertex and position the white vertex
directly above it
The HSI Color Model
The HSI Color Model (cont…)
The HSI Color Model (cont…)
Because the only important things are the angle and
the length of the saturation vector this plane is also
often represented as a circle or a triangle
HSI Model Examples
HSI Model Examples
Converting From RGB To HSI
Converting From HSI To RGB
Converting From HSI To RGB (cont…)
HSI & RGB
RGB
Hue
Image
Saturation Intensity
RGB -> HSI -> RGB (cont…)
Hue
Saturation
Intensity RGB
Image
HSI, Intensity & RGB (cont…)
HSI, Hue & RGB
YIQ color model
• This model was designed to separate chrominance from
luminance.
• This was a requirement in the early days of color
television when black-and-white sets still were expected
to pick up and display what were originally color pictures.
• The Y-channel contains luminance information
(sufficient for black-and-white television sets) while the I
and Q channels (in-phase and in-quadrature) carried the
color information.
• A color television set would take these three channels, Y,
I, and Q, and map the information back to R, G, and B
levels for display on a screen.
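The RGB↔YIQ mapping described above is a 3×3 linear transform; a sketch using the commonly quoted NTSC coefficients (exact values vary slightly between references):

```python
import numpy as np

# NTSC RGB -> YIQ matrix (commonly quoted coefficients; an assumption,
# since the slides do not give the exact values)
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(rgb):
    return RGB2YIQ @ rgb

def yiq_to_rgb(yiq):
    """A television set inverts the same matrix to recover R, G, B."""
    return np.linalg.inv(RGB2YIQ) @ yiq

# grays carry no chrominance: I and Q are (numerically) zero
gray = rgb_to_yiq(np.array([0.5, 0.5, 0.5]))
```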
YIQ color model
Pseudo Color Image Processing
Pseudo color (also called false color)
image processing consists of
assigning colors to grey values based
on a specific criterion
The principal use of pseudo color
image processing is for human
visualization
Humans can discern between
thousands of color shades and
intensities, compared to only about two
dozen or so shades of grey
Pseudo Color Image Processing –
Intensity Slicing
• Intensity slicing and color coding is one of the
simplest kinds of pseudo color image processing
• First we consider an image as a 3D function
mapping spatial coordinates to intensities (that we
can consider heights)
• Now consider placing planes at certain levels
parallel to the coordinate plane
• If a value is on one side of such a plane it is rendered in
one color, and a different color if on the other side
Pseudo Color Image Processing –
Intensity Slicing (cont..)
Pseudo color Image Processing –
Intensity Slicing (cont…)
• In general intensity slicing can be
summarized as:
– Let [0, L-1] represent the grey scale
– Let l0 represent black [f(x, y) = 0] and let lL-1
represent white [f(x, y) = L-1]
– Suppose P planes perpendicular to the intensity
axis are defined at levels l1, l2, …, lP
– Assuming that 0 < P < L-1 then the P planes
partition the grey scale into P + 1 intervals V1, V2,
…,VP+1
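Intensity slicing into the P + 1 intervals described above can be sketched with np.digitize (the levels and colors here are arbitrary examples, not from the slides):

```python
import numpy as np

def pseudo_color(img, levels, colors):
    """Assign one RGB color to each of the P+1 intervals defined by
    P slicing levels l1 < l2 < ... < lP (colors has P+1 entries)."""
    idx = np.digitize(img, levels)   # interval index 0..P per pixel
    return np.asarray(colors)[idx]   # map indices to RGB triples

img = np.array([[10, 100], [180, 250]], dtype=np.uint8)
levels = [85, 170]                                  # P = 2 planes -> 3 intervals
colors = [(0, 0, 255), (0, 255, 0), (255, 0, 0)]    # blue, green, red
rgb = pseudo_color(img, levels, colors)
```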
Functional Block Diagram for Pseudo
Color Image Processing
(Figure: original image, and the result of decreasing image intensity by 30% (k = 0.7))
(Figure: original image; complement transformation functions; complement of the image based on the RGB mapping function; an approximation of the RGB complement using HSI transformations)