Dip Module2 2018 PDF

The document discusses image enhancement techniques in the spatial domain. It defines image enhancement as improving visual quality degraded by acquisition issues. The spatial domain operates directly on pixel values using neighborhood operations such as filters. Point operations modify single pixels, while neighborhood operations apply a mask/filter over an area. Common point operations include thresholding, contrast stretching, and intensity transformations such as negatives, logarithms, and power laws. The document provides examples of these spatial-domain techniques.
MODULE 2

Spatial Domain: Some Basic Intensity Transformation Functions, Histogram Processing, Fundamentals of Spatial Filtering, Smoothing Spatial Filters, Sharpening Spatial Filters.
[Text: Chapter 3: Sections 3.2 to 3.6]

Frequency Domain: Preliminary Concepts, The Discrete Fourier Transform (DFT) of Two Variables, Properties of the 2-D DFT, Filtering in the Frequency Domain, Image Smoothing and Image Sharpening Using Frequency Domain Filters, Selective Filtering.
[Text: Chapter 4: Sections 4.2, 4.5 to 4.10]
Image Enhancement in the Spatial Domain

Introduction
What is image enhancement?
– A process of enhancing the visual quality of images degraded by a non-ideal image acquisition process (e.g., motion blur, out-of-focus optics, poor illumination, coarse quantization, etc.).
Image visual quality assessment:
– Objective quality metrics (e.g., MSE) might not always match subjective quality scores.
– The human visual system (HVS) is the ultimate judge.

Principal Objective of Enhancement

Process an image so that the result is more suitable than the original image for a specific application. Suitability depends on the application: a method that is quite useful for enhancing one image may not be the best approach for enhancing another.
Two Domains

Spatial domain (image plane):
– Techniques based on direct manipulation of pixels in an image.
Frequency domain:
– Techniques based on modifying the Fourier transform of an image.
There are also enhancement techniques based on various combinations of methods from these two categories.

Good Images

For human vision:
– The visual evaluation of image quality is a highly subjective process.
– It is hard to standardize the definition of a good image.
For machine perception:
– The evaluation task is easier.
– A good image is one that gives the best machine recognition results.
A certain amount of trial and error is usually required before a particular image enhancement approach is selected.
Spatial Domain

• Procedures that operate directly on pixels:
g(x, y) = T[f(x, y)]
where
– f(x, y) is the input image,
– g(x, y) is the processed image,
– T is an operator on f defined over some neighborhood of (x, y).

Usha B S

Mask/Filter

The neighborhood of a point (x, y) can be defined using a square/rectangular (most common) or circular subimage area centered at (x, y). The center of the subimage is moved from pixel to pixel, starting at the top-left corner.
Point Processing

• Neighborhood = 1×1 pixel.
• g depends only on the value of f at (x, y).
• T becomes a gray-level (intensity) transformation function: s = T(r), where
– r = gray level of f(x, y)
– s = gray level of g(x, y)

Enhancement by Point Processing

These methods are based only on the intensity of single pixels:
– r denotes the pixel intensity before processing;
– s denotes the pixel intensity after processing.

Point Operations Overview

Point operations are zero-memory operations in which a given gray level r ∈ [0, L-1] is mapped to another gray level s ∈ [0, L-1] according to a transformation
s = T(r)
(L = 256 for 8-bit grayscale images.)
Thresholding

Produces a two-level (binary) image.
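The thresholding point operation can be sketched in a few lines of NumPy (an illustrative addition, not from the slides); m is the threshold and L the number of gray levels:

```python
import numpy as np

def threshold(img, m, L=256):
    """Two-level output: gray levels below m map to 0, the rest to L-1."""
    return np.where(img >= m, L - 1, 0)
```

Applied to an 8-bit image, every pixel at or above m becomes white (255) and every pixel below m becomes black (0).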
Identity ("Lazy Man") Operation

s = r

The identity transformation has no influence on visual quality at all.
Contrast Stretching

Produces higher contrast than the original by:
– darkening the levels below m in the original image;
– brightening the levels above m in the original image.

Three Basic Gray-Level Transformation Functions

– Linear: the negative and identity transformations.
– Logarithmic: the log and inverse-log transformations.
– Power-law: the nth power and nth root transformations.
Image Negatives

• For an image with gray levels in the range [0, L-1], where L = 2^n; n = 1, 2, …
• Negative transformation: s = L - 1 - r
• Reverses the intensity levels of an image.
• Suitable for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.
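A minimal NumPy sketch of the digital negative (illustrative, not from the slides):

```python
import numpy as np

def negative(img, L=256):
    """Digital negative: s = L - 1 - r."""
    return (L - 1) - img.astype(np.int64)
```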
Log Transformations

s = c log(1 + r)
• c is a constant and r ≥ 0.
• The log curve maps a narrow range of low gray-level values in the input image into a wider range of output levels; the opposite is true of higher input values.
• Used to expand the values of dark pixels in an image while compressing the higher-level values.
• It compresses the dynamic range of images with large variations in pixel values. Example: a Fourier spectrum image can have an intensity range from 0 to 10^6 or higher; without compression, the significant detail is lost in the display.
Example of a Logarithm Image

A Fourier spectrum with range 0 to 1.5 × 10^6; after applying the log transformation with c = 1, the range is 0 to 6.2.

Range Compression

s = c log10(1 + r), e.g., with c = 100.
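The log transformation can be sketched in NumPy (illustrative; choosing c so that the maximum input maps exactly to L-1 is a common convention assumed here, not taken from the slides):

```python
import numpy as np

def log_transform(img, L=256):
    """s = c*log(1 + r), with c chosen so the maximum input maps to L-1."""
    c = (L - 1) / np.log1p(img.max())        # np.log1p(x) computes log(1 + x)
    return c * np.log1p(img.astype(np.float64))
```

Note how a dark input level is pushed toward the bright end of the scale, as the slides describe.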
Inverse Logarithm Transformations

• Do the opposite of the log transformation.
• Used to expand the values of high (bright) pixels in an image while compressing the darker-level values.

Power-Law Transformations

s = c r^γ
• c and γ are positive constants.
• Power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels.
• c = γ = 1 gives the identity function.
• This family of transformations is also the basis of gamma correction.
Gamma Correction

• Cathode ray tube (CRT) devices have an intensity-to-voltage response that is a power function, with γ varying from 1.8 to 2.5, so the displayed picture appears darker than intended.
• Gamma correction is done by preprocessing the image before inputting it to the monitor with s = c r^(1/γ), e.g., 1/γ = 1/2.5 = 0.4.
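A small NumPy sketch of gamma correction (illustrative; normalizing r to [0, 1] before applying the power law is an assumption consistent with the slides' use of r in [0, 1]):

```python
import numpy as np

def gamma_correct(img, gamma, c=1.0, L=256):
    """s = c * r**gamma, computed on intensities normalized to [0, 1]."""
    r = img.astype(np.float64) / (L - 1)     # normalize gray levels to [0, 1]
    s = np.clip(c * r ** gamma, 0.0, 1.0)    # keep the output in range
    return (s * (L - 1)).astype(np.uint8)    # back to [0, L-1]
```

With gamma = 1/2.5 = 0.4 the image is brightened before display, compensating for a CRT's darkening response.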
Another Example: MRI

(a) A magnetic resonance image of an upper thoracic human spine with a fracture dislocation and spinal cord impingement. The picture is predominantly dark, so an expansion of gray levels is desirable: this needs γ < 1.
(b) Result after power-law transformation with γ = 0.6, c = 1.
(c) Transformation with γ = 0.4 (best result).
(d) Transformation with γ = 0.3 (below the acceptable level).

Effect of Decreasing Gamma

When γ is reduced too much, the image begins to lose contrast to the point where it takes on a slight "washed-out" look, especially in the background.

Another Example

(a) The image has a washed-out appearance; it needs a compression of gray levels: this needs γ > 1.
(b) Result after power-law transformation with γ = 3.0 (suitable).
(c) Transformation with γ = 4.0 (suitable).
(d) Transformation with γ = 5.0 (high contrast; the image has areas that are too dark, and some detail is lost).
Some Simple Intensity Transformations

Piecewise-Linear Transformation Functions:
– Contrast stretching
– Gray-level slicing
– Bit-plane slicing

Advantage:
– The form of piecewise functions can be arbitrarily complex.
Disadvantage:
– Their specification requires considerably more user input.

• Low-contrast images can occur due to:
– poor or non-uniform lighting conditions;
– non-linearity in the imaging sensors;
– the small dynamic range of the imaging sensors.
• The slope of the transformation is chosen greater than unity in the region of stretch.
• The parameters a and b are obtained by examining the histogram.
Contrast Stretching

s = α·r                for 0 ≤ r < a
s = β·(r - a) + s_a    for a ≤ r < b
s = γ·(r - b) + s_b    for b ≤ r < L

Example parameters: a = 50, b = 150, α = 0.2, β = 2, γ = 1, s_a = 30, s_b = 200.

For the two-point form through (r1, s1) and (r2, s2):
• If s1 = r1 and s2 = r2, there is no change in the gray-level values.
• If r1 = r2, s1 = 0 and s2 = L-1, it becomes a thresholding function.
• In general, r1 ≤ r2 and s1 ≤ s2.
Contrast Stretching

Increases the dynamic range of the gray levels in the image.
(b) A low-contrast image: can result from poor illumination, lack of dynamic range in the imaging sensor, or even a wrong setting of the lens aperture during image acquisition.
(c) Result of contrast stretching with (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1).
(d) Result of thresholding.
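The two-point contrast stretch through (r1, s1) and (r2, s2) can be sketched with np.interp (illustrative; the assumption that the transformation also passes through (0, 0) and (L-1, L-1), as in the figures, is built in):

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear map through (0,0), (r1,s1), (r2,s2), (L-1,L-1)."""
    return np.interp(img, [0, r1, r2, L - 1], [0, s1, s2, L - 1])
```

Setting (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1) stretches the occupied range to the full gray scale.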
Gray-Level Slicing

Highlights a specific range of gray levels in an image:
– display a high value for all gray levels in the range of interest and a low value for all other gray levels.
(a) A transformation that highlights the range [A, B] of gray levels and reduces all others to a constant level.
(b) A transformation that highlights the range [A, B] but preserves all other levels.

Clipping Function

If most of the gray levels in the image are in [u1, u2], the following mapping increases the image contrast.
Bit-Plane Slicing

• Highlights the contribution made to total image appearance by specific bits.
• Suppose each pixel is represented by 8 bits (one byte): bit-plane 7 is the most significant and bit-plane 0 the least significant.
• The higher-order bits (the top 4) contain the majority of the visually significant data.
• The other planes contribute to the finer details in the image.
• Useful for analyzing the relative importance played by each bit of the image.
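Extracting a bit plane is a one-liner in NumPy (illustrative sketch, not from the slides):

```python
import numpy as np

def bit_plane(img, k):
    """Binary image for bit plane k (0 = least significant, 7 = most)."""
    return (img >> k) & 1
```

Summing all eight planes weighted by 2^k reconstructs the original pixel values exactly.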
Example

The (binary) image for bit-plane 7 of an 8-bit fractal image can be obtained by processing the input image with a thresholding gray-level transformation:
– map all levels between 0 and 127 to 0;
– map all levels between 128 and 255 to 255.

8 Bit Planes

Bit-planes 7 (most significant) down to 0 (least significant) of the fractal image.

Summary of Point Operations

• So far, we have discussed various forms of the mapping function T(r) that lead to different enhancement results (MATLAB function: imadjust).
• The natural question is: how do we select an appropriate T(r) for an arbitrary image?
• One systematic solution is based on the histogram information of an image: histogram equalization and specification.
Histogram-Based Enhancement

The histogram of an image represents the relative frequency of occurrence of the various gray levels in the image (MATLAB function: imhist).

Why Histogram?

The example image (a baby in a cradle) looks dark; its histogram reveals that the image is under-exposed.

Another Example

The histogram of an over-exposed image.
Histogram Processing

• The histogram of a digital image with gray levels in the range [0, L-1] is the discrete function h(rk) = nk, where
– rk is the kth gray level;
– nk is the number of pixels in the image having gray level rk.

Normalized Histogram

• Dividing each histogram value at gray level rk by the total number of pixels in the image, n, gives
p(rk) = nk / n, for k = 0, 1, …, L-1
• p(rk) is an estimate of the probability of occurrence of gray level rk.
• The sum of all components of a normalized histogram equals 1.
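The histogram and normalized histogram can be sketched in NumPy (illustrative; np.bincount plays the role of counting the nk):

```python
import numpy as np

def histogram(img, L=256):
    """h(rk) = nk: count of pixels at each gray level."""
    return np.bincount(img.ravel(), minlength=L)

def normalized_histogram(img, L=256):
    """p(rk) = nk / n: an estimate of the probability of gray level rk."""
    return histogram(img, L) / img.size
```

Running this on the 4×4 example image from the slides reproduces the counts 6, 5, 4, 1 at levels 2 to 5.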
Example

A 4×4 image with gray scale [0, 9]:

2 3 3 2
4 2 4 3
3 2 3 5
2 4 2 4

Its histogram:

Gray level:    0 1 2 3 4 5 6 7 8 9
No. of pixels: 0 0 6 5 4 1 0 0 0 0

Histogram Processing

Histograms are the basis for numerous spatial-domain processing techniques and are used effectively for image enhancement. The information inherent in histograms is also useful in image compression and segmentation.
Example

Dark image: the components of the histogram are concentrated on the low side of the gray scale.
Bright image: the components of the histogram are concentrated on the high side of the gray scale.
Low-contrast image: the histogram is narrow and centered toward the middle of the gray scale, giving a dull, washed-out gray look.
High-contrast image: the histogram covers a broad range of the gray scale, and the distribution of pixels is not too far from uniform, with very few vertical lines much higher than the others.
Histogram Equalization

Goal: to obtain a uniform histogram for the output image.
Since a low-contrast image's histogram is narrow and centered toward the middle of the gray scale, distributing the histogram over a wider range improves the quality of the image. We do this by adjusting the probability density function of the original histogram so that the probability is spread equally.

Image Example: before and after histogram equalization.
Histogram Transformation

• Let r be the input gray level to be enhanced, r ∈ [0, 1].
• The output transformed pixel values are represented as s = T(r), 0 ≤ r ≤ 1 (every r is mapped to a value s).
• Assumption: T(r) satisfies the following conditions:
(a) T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1;
(b) 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.

Two Conditions on T(r)

Condition (a):
– Single-valued (one-to-one) guarantees that the inverse transformation exists.
– Monotonicity preserves the increasing order from black to white in the output image, so it cannot produce a negative image.
Condition (b):
– 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1 guarantees that the output gray levels lie in the same range as the input levels.
• The inverse transformation from s back to r is
r = T⁻¹(s), 0 ≤ s ≤ 1

Probability Density Function

The gray levels in an image may be viewed as random variables in the interval [0, 1]. The PDF is one of the fundamental descriptors of a random variable.

pdf

The probability density function (pdf) of a random variable x is defined as the derivative of the cumulative distribution function (cdf):
p(x) = dF(x)/dx, where F(x) = P(X ≤ x)
The pdf satisfies the following properties: p(x) ≥ 0 for all x, and the integral of p(x) over the full range equals 1.
If a random variable x is transformed by a monotonic transformation function T(x) to produce a new random variable y, the probability density function of y can be obtained from knowledge of T(x) and the probability density function of x, as follows:
p_y(y) = p_x(x) |dx/dy|
where the vertical bars signify the absolute value.

Random Variables

A function T(x) is monotonically increasing if T(x1) < T(x2) for x1 < x2, and monotonically decreasing if T(x1) > T(x2) for x1 < x2. The preceding equation is valid if T(x) is an increasing or a decreasing monotonic function.
Applied to Images

• Let pr(r) denote the PDF of the random variable r of the input image, and ps(s) the PDF of the random variable s of the output image.
• If pr(r) and T(r) are known and T⁻¹(s) satisfies condition (a), then ps(s) can be obtained using the formula
ps(s) = pr(r) |dr/ds|    (1)
• Thus the PDF of the transformed variable s is determined by the gray-level PDF of the input image and by the chosen transformation function.
Transformation Function

A transformation function of particular importance is the cumulative distribution function (CDF) of the random variable r:
s = T(r) = ∫₀ʳ pr(w) dw
where w is a dummy variable of integration. Note that T(r) depends on pr(r).

Cumulative Distribution Function

The CDF is the integral of a probability function (always positive), i.e., the area under the function:
– The CDF is always a single-valued, monotonically increasing function, so it satisfies condition (a).
– Therefore the CDF can be used as a transformation function.
Finding ps(s) from a Given T(r)

ds/dr = dT(r)/dr = d/dr [ ∫₀ʳ pr(w) dw ] = pr(r)

Substituting into ps(s) = pr(r) |dr/ds| yields

ps(s) = pr(r) · (1 / pr(r)) = 1, where 0 ≤ s ≤ 1

ps(s)

• Since ps(s) is a probability density function, it must be zero outside the interval [0, 1], because its integral over all values of s must equal 1.
• ps(s) is a uniform probability density function; it is always uniform, independent of the form of pr(r).

Thus s = T(r) = ∫₀ʳ pr(w) dw yields a random variable s characterized by a uniform probability density function, ps(s) = 1 for 0 ≤ s ≤ 1.
Discrete Transformation Function

The probability of occurrence of gray level rk in an image is approximated by
pr(rk) = nk / n, where k = 0, 1, …, L-1
The discrete version of the transformation is
sk = T(rk) = Σⱼ₌₀ᵏ pr(rj) = Σⱼ₌₀ᵏ nj / n, where k = 0, 1, …, L-1
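The discrete transformation can be sketched in NumPy (illustrative; scaling sk by L-1 and rounding to the nearest integer follows the worked 4×4 example in these slides):

```python
import numpy as np

def equalize(img, L):
    """Discrete histogram equalization: sk = round((L-1) * sum_{j<=k} nj/n)."""
    cdf = np.cumsum(np.bincount(img.ravel(), minlength=L)) / img.size
    mapping = np.round(cdf * (L - 1)).astype(int)  # sk for each level rk
    return mapping[img]                            # apply the lookup table
```

On the slides' 4×4 image with gray scale [0, 9], this maps level 2 to 3, level 3 to 6, level 4 to 8 and level 5 to 9.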
Histogram Equalization

• An output image is obtained by mapping each pixel with level rk in the input image into a corresponding pixel with level sk in the output image.
• In the discrete case it cannot be proved in general that this transformation will produce the discrete equivalent of a uniform probability density function, i.e., a uniform histogram.

Example: before and after histogram equalization.

Example: before and after histogram equalization. Here the quality is not improved much, because the original image already has a broad gray-level scale.
Example

The 4×4 image with gray scale [0, 9] and its histogram, as before:

2 3 3 2
4 2 4 3
3 2 3 5
2 4 2 4

Gray level (j):   0  1  2      3      4      5   6   7   8   9
nj:               0  0  6      5      4      1   0   0   0   0
Σ nj (j = 0..k):  0  0  6      11     15     16  16  16  16  16
sk = Σ nj / n:    0  0  6/16   11/16  15/16  1   1   1   1   1
sk × 9:           0  0  3.375  6.19   8.44   9   9   9   9   9
rounded:          0  0  3      6      8      9   9   9   9   9
Example

The output image after histogram equalization (gray scale [0, 9]):

3 6 6 3
8 3 8 6
6 3 6 9
3 8 3 8

Note

It is clearly seen that:
– Histogram equalization distributes the gray levels so as to reach the maximum gray level (white), because the cumulative distribution function equals 1 for 0 ≤ r ≤ L-1.
– If the cumulative counts of gray levels are only slightly different, they may be mapped to slightly different, or even the same, gray levels, since the processed gray levels of the output image must be rounded to integers.
– Thus the discrete transformation function cannot guarantee a one-to-one mapping.
Histogram Matching (Specification)

Histogram equalization has the disadvantage that it can generate only one type of output image. With histogram specification, we can specify the shape of the histogram that we wish the output image to have; it does not have to be a uniform histogram.
Consider the Continuous Domain

• Let pr(r) denote the continuous PDF of gray level r of the input image.
• Let pz(z) denote the desired (specified) continuous PDF of gray level z of the output image.
• Let s be a random variable with the property
s = T(r) = ∫₀ʳ pr(w) dw   (histogram equalization)
where w is a dummy variable of integration.

Next, define a random variable z with the property
G(z) = ∫₀ᶻ pz(t) dt = s   (histogram equalization)
where t is a dummy variable of integration. Thus s = T(r) = G(z), and therefore z must satisfy
z = G⁻¹(s) = G⁻¹[T(r)]
Assuming G⁻¹ exists and satisfies conditions (a) and (b), we can map an input gray level r to an output gray level z.
Procedure Summary

1. Obtain the transformation function T(r) by calculating the histogram equalization of the input image:
s = T(r) = ∫₀ʳ pr(w) dw
2. Obtain the transformation function G(z) by calculating the histogram equalization of the desired density function:
G(z) = ∫₀ᶻ pz(t) dt = s
3. Obtain the inverse transformation function G⁻¹:
z = G⁻¹(s) = G⁻¹[T(r)]
4. Obtain the output image by applying the processed gray levels from the inverse transformation function to all the pixels in the input image.
Example

Assume an image has the gray-level probability density function
pr(r) = -2r + 2 for 0 ≤ r ≤ 1, and 0 elsewhere
(so that ∫₀¹ pr(w) dw = 1).

We would like to apply histogram specification with the desired probability density function
pz(z) = 2z for 0 ≤ z ≤ 1, and 0 elsewhere
(so that ∫₀¹ pz(w) dw = 1).
Step 1: obtain the transformation function T(r):
s = T(r) = ∫₀ʳ pr(w) dw = ∫₀ʳ (-2w + 2) dw = -r² + 2r
This is a one-to-one mapping function.

Step 2: obtain the transformation function G(z):
G(z) = ∫₀ᶻ 2w dw = z²

Step 3: obtain the inverse transformation function G⁻¹. From G(z) = T(r):
z² = -r² + 2r
z = √(2r - r²)
We can guarantee that 0 ≤ z ≤ 1 when 0 ≤ r ≤ 1.
Discrete Formulation

sk = T(rk) = Σⱼ₌₀ᵏ pr(rj) = Σⱼ₌₀ᵏ nj / n,  k = 0, 1, 2, …, L-1
G(zk) = Σᵢ₌₀ᵏ pz(zi) = sk,  k = 0, 1, 2, …, L-1
zk = G⁻¹[T(rk)] = G⁻¹(sk),  k = 0, 1, 2, …, L-1
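The discrete formulation can be sketched in NumPy (illustrative; using np.searchsorted to realize G⁻¹, i.e., to find the smallest zk with G(zk) ≥ sk, is one common convention, not the only one):

```python
import numpy as np

def match_histogram(img, target_pdf, L):
    """zk = G^{-1}(T(rk)): map input levels toward a specified histogram."""
    s = np.cumsum(np.bincount(img.ravel(), minlength=L)) / img.size  # T(rk)
    G = np.cumsum(target_pdf)                                        # G(zk)
    z = np.searchsorted(G, s).clip(0, L - 1)   # smallest z with G(z) >= s
    return z[img]
```

As a sanity check, matching an image against its own normalized histogram leaves the occupied gray levels unchanged.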
Example

The image of a Mars moon is dominated by large, dark areas, resulting in a histogram characterized by a large concentration of pixels in the dark end of the gray scale.

Image Equalization

Histogram equalization does not make the result look better than the original image. Considering the histogram of the result, the net effect of this method is to map a very narrow interval of dark pixels into the upper end of the gray scale of the output image. As a consequence, the output image is light and has a washed-out appearance.

Solving the Problem

Since the problem with the histogram-equalization transformation function was caused by a large concentration of pixels with levels near 0 in the original image, a reasonable approach is to modify the histogram of that image (histogram specification) so that it does not have this property.
Histogram Specification

(1) The transformation function G(z) is obtained from
G(zk) = Σᵢ₌₀ᵏ pz(zi) = sk,  k = 0, 1, 2, …, L-1
(2) followed by the inverse transformation G⁻¹(s).

Result Image and Its Histogram

Notice that the low end of the output histogram has shifted right toward the lighter region of the gray scale, as desired.

Note

Histogram specification is a trial-and-error process. There are no general rules for specifying histograms; one must resort to case-by-case analysis for any given enhancement task.
Note

Histogram processing methods are global, in the sense that pixels are modified by a transformation function based on the gray-level content of the entire image. Sometimes we need to enhance details over small areas in an image; this is called local enhancement.

Local Enhancement

a) Original image (slightly blurred to reduce noise).
b) Global histogram equalization (enhances noise and slightly increases contrast, but the structure is not changed).
c) Local histogram equalization using a 7×7 neighborhood (reveals the small squares inside the larger ones of the original image).
Procedure: define a square or rectangular neighborhood and move its center from pixel to pixel. At each location, the histogram of the points in the neighborhood is computed, and either a histogram-equalization or a histogram-specification transformation function is obtained. Another approach, used to reduce computation, is to utilize non-overlapping regions, but it usually produces an undesirable checkerboard effect.

Explaining the Result in (c)

Basically, the original image consists of many small squares inside the larger dark ones. However, the small squares were too close in gray level to the larger ones, and their sizes were too small to influence global histogram equalization significantly. When the local enhancement technique is used, it reveals these small areas. Note also the finer noise texture that results from local processing with relatively small neighborhoods.
Enhancement Using Arithmetic/Logic Operations

Arithmetic/logic operations are performed on a pixel-by-pixel basis between two or more images, except the NOT operation, which is performed on a single image.

Logic Operations

Logic operations on gray-level images treat the pixel values as binary numbers: light represents a binary 1 and dark represents a binary 0. The NOT operation is equivalent to the negative transformation.

Example of the AND Operation: original image, AND image mask, and the result of the AND operation.

Example of the OR Operation: original image, OR image mask, and the result of the OR operation.
Image Subtraction

g(x, y) = f(x, y) - h(x, y)
• Enhances the differences between images.

Example:
a) Original fractal image.
b) Result of setting the four lower-order bit planes to zero (refer to bit-plane slicing): the higher planes contribute significant detail, while the lower planes contribute more to fine detail. Image (b) is nearly identical visually to image (a), with a very slight drop in overall contrast due to less variability of the gray-level values.
c) Difference between (a) and (b) (nearly black).
d) Histogram equalization of (c) (performs a contrast-stretching transformation).

Mask-Mode Radiography (Image Sequence)

h(x, y) is the mask: an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source.
f(x, y) is an X-ray image taken after injecting a contrast medium (iodine) into the patient's bloodstream.
Images are captured at TV rates, so the doctor can watch how the medium propagates through the various arteries in the area being observed (the effect of subtraction) in a movie-like mode.
Note: in the difference image, the background is dark because it changes little between the two images, while the difference area is bright because it changes significantly.
Note

We may have to adjust the gray scale of the subtracted image to [0, 255] (if 8 bits are used):
– first, find the minimum gray value of the subtracted image;
– second, find the maximum gray value of the subtracted image;
– shift the values so the minimum becomes 0, then scale the rest to the interval [0, 255] by multiplying each value by 255/max.
Subtraction is also used in the segmentation of moving pictures to track changes: after subtracting the sequenced images, what is left should be the moving elements in the image, plus noise.
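The rescaling steps above can be sketched in NumPy (illustrative, not from the slides):

```python
import numpy as np

def subtract_rescaled(f, h, L=256):
    """g = f - h, shifted so min -> 0, then scaled so max -> L-1."""
    g = f.astype(np.float64) - h.astype(np.float64)
    g -= g.min()                    # set the minimum to 0
    if g.max() > 0:
        g *= (L - 1) / g.max()      # scale the rest by (L-1)/max
    return g.astype(np.uint8)
```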
Contents

Next, we will look at spatial filtering techniques:
– What is spatial filtering?
– Smoothing spatial filters.
– Sharpening spatial filters.
– Combining spatial enhancement methods.

Neighbourhood Operations

Neighbourhood operations simply operate on a larger neighbourhood of pixels than point operations. Neighbourhoods are mostly a rectangle around a central pixel (x, y); any size of rectangle and any shape of filter are possible. For each pixel in the original image, the outcome is written to the same location in the target image.

Simple Neighbourhood Operations

Examples:
– Min: set the pixel value to the minimum in the neighbourhood.
– Max: set the pixel value to the maximum in the neighbourhood.
The Spatial Filtering Process

A simple 3×3 neighbourhood of pixels
a b c
d e f
g h i
is combined with a 3×3 filter w
j k l
m n o
p q r
to give
e_processed = n·e + j·a + k·b + l·c + m·d + o·f + p·g + q·h + r·i
The above is repeated for every pixel in the original image to generate the filtered image.

Spatial Filtering: Equation Form

g(x, y) = Σ_{s=-a..a} Σ_{t=-b..b} w(s, t) · f(x+s, y+t)

for a mask w of size (2a+1)×(2b+1) centered at (x, y). (Images taken from Gonzalez & Woods, Digital Image Processing, 2002.)
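The filtering equation can be sketched directly in NumPy (illustrative; border handling by edge replication is an assumption here, since the slides defer the edge question to a later section):

```python
import numpy as np

def spatial_filter(img, w):
    """g(x,y) = sum_{s,t} w(s,t) f(x+s, y+t), with replicated borders."""
    a, b = w.shape[0] // 2, w.shape[1] // 2
    f = np.pad(img.astype(np.float64), ((a, a), (b, b)), mode='edge')
    g = np.zeros(img.shape, dtype=np.float64)
    rows, cols = img.shape
    for s in range(-a, a + 1):          # slide the mask over every offset
        for t in range(-b, b + 1):
            g += w[s + a, t + b] * f[s + a:s + a + rows, t + b:t + b + cols]
    return g
```

With the 3×3 averaging filter (all coefficients 1/9), the center of the slides' example neighbourhood comes out as 98.3333.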
Smoothing Spatial Filters

One of the simplest spatial filtering operations we can perform is a smoothing operation:
– simply average all of the pixels in a neighbourhood around a central value;
– especially useful for removing noise from images;
– also useful for highlighting gross detail.
The simple 3×3 averaging filter has all nine coefficients equal to 1/9.

Smoothing Spatial Filtering

Applying the 3×3 averaging filter to the neighbourhood
104 100 108
 99 106  98
 95  90  85
gives
e = 1/9 · (106 + 104 + 100 + 108 + 99 + 98 + 95 + 90 + 85) = 98.3333
The above is repeated for every pixel in the original image to generate the smoothed image.
of
19
Image Smoothing Example
The image at the top left
Images taken from Gonzalez & Woods, Digital Image Processing (2002)

is an original image of
size 500*500 pixels
The subsequent images
show the image after
filtering with an averaging
filter of increasing sizes
– 3, 5, 9, 15 and 35
Notice how detail begins
to disappear
Images taken from Gonzalez & Woods, Digital Image Processing (2002)

of
19
121
Image Smoothing Example
Images taken from Gonzalez & Woods, Digital Image Processing (2002)

of
19
122
Image Smoothing Example
Images taken from Gonzalez & Woods, Digital Image Processing (2002)

of
19
123
Image Smoothing Example
Images taken from Gonzalez & Woods, Digital Image Processing (2002)

of
19
124
Image Smoothing Example
Images taken from Gonzalez & Woods, Digital Image Processing (2002)

of
19
125
Image Smoothing Example
Images taken from Gonzalez & Woods, Digital Image Processing (2002)

of
19
126
Image Smoothing Example
127
of
19
Weighted Smoothing Filters

More effective smoothing filters can be generated by allowing different pixels in the neighbourhood different weights in the averaging function
– Pixels closer to the central pixel are more important
– Often referred to as a weighted averaging

Weighted averaging filter:
1/16 2/16 1/16
2/16 4/16 2/16
1/16 2/16 1/16
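Two properties of this kernel are worth verifying: the weights still sum to 1 (so flat regions pass through unchanged), and the centre pixel carries four times the weight of a corner pixel. A small sketch:

```python
import numpy as np

# The weighted averaging filter from the slide, written as integers / 16.
weighted = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]], dtype=float) / 16

print(weighted.sum())                   # weights sum to 1.0
print(weighted[1, 1] / weighted[0, 0])  # centre weight is 4x a corner weight
```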
Another Smoothing Example

By smoothing the original image we get rid of lots of the finer detail which leaves only the gross features for thresholding

Original Image | Smoothed Image | Thresholded Image
* Image taken from Hubble Space Telescope
Averaging Filter Vs. Median Filter Example

Original Image With Noise | Image After Averaging Filter | Image After Median Filter

Filtering is often used to remove noise from images
Sometimes a median filter works better than an averaging filter
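A one-dimensional sketch makes the difference concrete: with impulse (salt-and-pepper) noise, a single extreme value drags the average away from the true level, while the median simply discards it. The values below are illustrative, not from the text.

```python
import numpy as np

# A neighbourhood contaminated by a single salt-noise impulse (255).
window = np.array([10, 12, 11, 255, 10, 13, 11, 12, 10], dtype=float)

print(np.mean(window))    # the impulse drags the average far above 10-13
print(np.median(window))  # 11.0 - the median ignores the outlier
```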

(The noisy original, the averaging-filtered result, and the median-filtered result are then shown individually at full size)
Strange Things Happen At The Edges!

At the edges of an image we are missing pixels to form a neighbourhood

Strange Things Happen At The Edges! (cont…)

There are a few approaches to dealing with missing edge pixels:
– Omit missing pixels
  • Only works with some filters
  • Can add extra code and slow down processing
– Pad the image
  • Typically with either all white or all black pixels
– Replicate border pixels
– Truncate the image
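The padding and border-replication strategies above map directly onto NumPy's `np.pad` modes. A minimal sketch on a one-row "image":

```python
import numpy as np

strip = np.array([[5, 6, 7]])

print(np.pad(strip, 1, mode="constant"))  # pad with black (zero) pixels
print(np.pad(strip, 1, mode="edge"))      # replicate the border pixels
```

With zero padding the borders of the filtered image are artificially darkened; replication avoids that bias at the cost of slightly "smearing" the border values.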
Correlation & Convolution

The filtering we have been talking about so far is referred to as correlation, with the filter itself referred to as the correlation kernel
Convolution is a similar operation, with just one subtle difference: the kernel is rotated by 180° before the products are formed

Simple 3*3 Pixel        3*3 Filter
Neighbourhood:
a b c                   r s t
d e f                   u v w
g h i                   x y z

eprocessed = v*e + z*a + y*b + x*c +
             w*d + u*f + t*g + s*h + r*i

For symmetric filters it makes no difference
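The 180° rotation only matters for asymmetric kernels, which a short sketch can demonstrate (the pixel values and kernel here are illustrative):

```python
import numpy as np

neigh = np.arange(1, 10, dtype=float).reshape(3, 3)  # pixels a..i as 1..9
kernel = np.array([[0, 1, 0],
                   [0, 0, 2],
                   [0, 0, 0]], dtype=float)          # deliberately asymmetric

correlation = np.sum(kernel * neigh)
convolution = np.sum(np.flip(kernel) * neigh)        # kernel rotated 180 deg

print(correlation, convolution)  # 14.0 16.0 - they differ
```

For a symmetric kernel (such as the averaging or Laplacian filters in this section) `np.flip(kernel)` equals `kernel`, so correlation and convolution coincide.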
Sharpening Spatial Filters

Previously we have looked at smoothing filters which remove fine detail
Sharpening spatial filters seek to highlight fine detail
– Remove blurring from images
– Highlight edges
Sharpening filters are based on spatial differentiation
Spatial Differentiation

Differentiation measures the rate of change of a function
Let's consider a simple 1 dimensional example

(Figure: an image scan line plotted as a 1-D function, with two points A and B marked on it)
1st Derivative

The formula for the 1st derivative of a function is as follows:

∂f/∂x = f(x + 1) − f(x)

It's just the difference between subsequent values and measures the rate of change of the function
1st Derivative (cont…)

Image strip f(x):
5 5 4 3 2 1 0 0 0 6 0 0 0 0 1 3 1 0 0 0 0 7 7 7 7

1st derivative f'(x):
0 -1 -1 -1 -1 -1 0 0 6 -6 0 0 0 1 2 -2 -1 0 0 0 7 0 0 0
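The first-derivative values of the strip can be reproduced with a one-liner, since `np.diff` computes exactly f(x + 1) − f(x):

```python
import numpy as np

f = np.array([5, 5, 4, 3, 2, 1, 0, 0, 0, 6, 0, 0, 0,
              0, 1, 3, 1, 0, 0, 0, 0, 7, 7, 7, 7])

d1 = np.diff(f)  # d1[x] = f(x + 1) - f(x)
print(d1)
```

Note the constant-slope ramp gives a constant response (−1), the isolated point gives a 6 / −6 pair, and the step up to 7 gives a single strong response.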
2nd Derivative

The formula for the 2nd derivative of a function is as follows:

∂²f/∂x² = f(x + 1) + f(x − 1) − 2f(x)

Simply takes into account the values both before and after the current value
2nd Derivative (cont…)

Image strip f(x):
5 5 4 3 2 1 0 0 0 6 0 0 0 0 1 3 1 0 0 0 0 7 7 7 7

2nd derivative f''(x):
-1 0 0 0 0 1 0 6 -12 6 0 0 1 1 -4 1 1 0 0 7 -7 0 0
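The second-derivative values can likewise be reproduced by applying f(x + 1) + f(x − 1) − 2f(x) at every interior point of the strip:

```python
import numpy as np

f = np.array([5, 5, 4, 3, 2, 1, 0, 0, 0, 6, 0, 0, 0,
              0, 1, 3, 1, 0, 0, 0, 0, 7, 7, 7, 7])

# f(x + 1) + f(x - 1) - 2 f(x), evaluated at the interior points
d2 = f[2:] + f[:-2] - 2 * f[1:-1]
print(d2)
```

Note the very strong response at the isolated point (6, −12, 6) and the double response (7, −7) at the step, both of which the comparison slide below relies on.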
1st and 2nd Derivative

(Plots of the image strip f(x), its 1st derivative f'(x), and its 2nd derivative f''(x), shown together for comparison)
Using Second Derivatives For Image Enhancement

The 2nd derivative is more useful for image enhancement than the 1st derivative
– Stronger response to fine detail
– Simpler implementation
– We will come back to the 1st order derivative later on
The first sharpening filter we will look at is the Laplacian
– Isotropic
– One of the simplest sharpening filters
– We will look at a digital implementation
The Laplacian

The Laplacian is defined as follows:

∇²f = ∂²f/∂x² + ∂²f/∂y²

where the partial 2nd order derivative in the x direction is defined as follows:

∂²f/∂x² = f(x + 1, y) + f(x − 1, y) − 2f(x, y)

and in the y direction as follows:

∂²f/∂y² = f(x, y + 1) + f(x, y − 1) − 2f(x, y)
The Laplacian (cont…)

So, the Laplacian can be given as follows:

∇²f = [f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1)] − 4f(x, y)

We can easily build a filter based on this:

0  1  0
1 -4  1
0  1  0
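The behaviour of this kernel is easy to check on two extreme neighbourhoods: it responds with zero in a flat region and strongly at an isolated bright pixel (the values below are illustrative):

```python
import numpy as np

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

flat = np.full((3, 3), 10)   # constant region
spike = flat.copy()
spike[1, 1] = 100            # isolated bright pixel

print(np.sum(laplacian * flat))   # 0: no response in flat areas
print(np.sum(laplacian * spike))  # -360: strong response at a discontinuity
```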
The Laplacian (cont…)

Applying the Laplacian to an image we get a new image that highlights edges and other discontinuities

Original Image | Laplacian Filtered Image | Laplacian Filtered Image Scaled for Display
But That Is Not Very Enhanced!

The result of a Laplacian filtering is not an enhanced image
We have to do more work in order to get our final image
Subtract the Laplacian result from the original image to generate our final sharpened enhanced image

g(x, y) = f(x, y) − ∇²f
Laplacian Image Enhancement

Original Image − Laplacian Filtered Image = Sharpened Image

In the final sharpened image edges and fine detail are much more obvious
Simplified Image Enhancement

The entire enhancement can be combined into a single filtering operation:

g(x, y) = f(x, y) − ∇²f
        = f(x, y) − [f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4f(x, y)]
        = 5f(x, y) − f(x + 1, y) − f(x − 1, y) − f(x, y + 1) − f(x, y − 1)
Simplified Image Enhancement (cont…)

This gives us a new filter which does the whole job for us in one step:

 0 -1  0
-1  5 -1
 0 -1  0
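That the one-step filter equals "original minus Laplacian" can be verified numerically on a single neighbourhood (the pixel values below are illustrative):

```python
import numpy as np

sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]])
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

neigh = np.array([[10, 10, 10],
                  [10, 50, 10],
                  [10, 10, 10]], dtype=float)

one_step = np.sum(sharpen * neigh)                  # single filtering pass
two_step = neigh[1, 1] - np.sum(laplacian * neigh)  # f(x, y) - Laplacian
print(one_step, two_step)  # identical by construction
```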
Variants On The Simple Laplacian

There are lots of slightly different versions of the Laplacian that can be used:

Simple Laplacian     Variant of Laplacian
0  1  0              1  1  1
1 -4  1              1 -8  1
0  1  0              1  1  1

Combined one-step sharpening filter for the variant:
-1 -1 -1
-1  9 -1
-1 -1 -1
Unsharp Mask & Highboost Filtering

Using a sequence of linear spatial filters in order to get a sharpening effect:
– Blur the image
– Subtract the blurred image from the original to produce a mask
– Add the resulting mask to the original image

Highboost Filtering

In highboost filtering the mask is multiplied by a weight k > 1 before it is added back, boosting the contribution of the detail
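The blur / subtract / add sequence can be sketched on a 1-D signal. This is a minimal illustration (the signal values and the 3-point box blur are choices of this example; `np.convolve` zero-pads the ends):

```python
import numpy as np

f = np.array([10, 10, 10, 50, 50, 50], dtype=float)    # a step edge

blurred = np.convolve(f, np.ones(3) / 3, mode="same")  # 1. blur
mask = f - blurred                                     # 2. subtract from original
k = 1.0     # k = 1 gives unsharp masking; k > 1 gives highboost filtering
g = f + k * mask                                       # 3. add the mask back

print(g)  # under- and overshoot around the step make the edge look sharper
```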
1st Derivative Filtering

Implementing 1st derivative filters is difficult in practice
For a function f(x, y) the gradient of f at coordinates (x, y) is given as the column vector:

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ
1st Derivative Filtering (cont…)

The magnitude of this vector is given by:

∇f = mag(∇f) = (Gx² + Gy²)^(1/2) = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)

For practical reasons this can be simplified as:

∇f ≈ |Gx| + |Gy|
1st Derivative Filtering (cont…)

There is some debate as to how best to calculate these gradients but we will use:

∇f ≈ |(z7 + 2z8 + z9) − (z1 + 2z2 + z3)| + |(z3 + 2z6 + z9) − (z1 + 2z4 + z7)|

which is based on these coordinates:

z1 z2 z3
z4 z5 z6
z7 z8 z9
Sobel Operators

Based on the previous equations we can derive the Sobel Operators:

-1 -2 -1        -1  0  1
 0  0  0        -2  0  2
 1  2  1        -1  0  1

To filter an image it is filtered using both operators, the results of which are added together
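A sketch of the two operators on a single neighbourhood containing a vertical step edge; as expected, only the (z3 + 2z6 + z9) − (z1 + 2z4 + z7) component responds (the pixel values are illustrative):

```python
import numpy as np

sobel_x = np.array([[-1, -2, -1],   # (z7 + 2z8 + z9) - (z1 + 2z2 + z3)
                    [ 0,  0,  0],
                    [ 1,  2,  1]])
sobel_y = np.array([[-1, 0, 1],     # (z3 + 2z6 + z9) - (z1 + 2z4 + z7)
                    [-2, 0, 2],
                    [-1, 0, 1]])

# A vertical step edge: dark on the left, bright on the right.
neigh = np.array([[0, 0, 9],
                  [0, 0, 9],
                  [0, 0, 9]])

gx = np.sum(sobel_x * neigh)
gy = np.sum(sobel_y * neigh)
grad = abs(gx) + abs(gy)        # the |Gx| + |Gy| approximation
print(gx, gy, grad)             # 0 36 36
```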
Sobel Example

An image of a contact lens which is enhanced in order to make defects (at four and five o'clock in the image) more obvious

Sobel filters are typically used for edge detection
1st & 2nd Derivatives

Comparing the 1st and 2nd derivatives we can conclude the following:
– 1st order derivatives generally produce thicker edges
– 2nd order derivatives have a stronger response to fine detail e.g. thin lines
– 1st order derivatives have a stronger response to a grey level step
– 2nd order derivatives produce a double response at step changes in grey level
Combining Spatial Enhancement Methods

Successful image enhancement is typically not achieved using a single operation
Rather we combine a range of techniques in order to achieve a final result
This example will focus on enhancing the bone scan to the right
Combining Spatial Enhancement Methods (cont…)

(a) Original bone scan
(b) Laplacian filter of bone scan (a)
(c) Sharpened version of bone scan achieved by subtracting the Laplacian (b) from the original (a)
(d) Sobel filter of bone scan (a)
Combining Spatial Enhancement Methods (cont…)

(e) Image (d) smoothed with a 5*5 averaging filter
(f) The product of (c) and (e), which will be used as a mask
(g) Sharpened image which is the sum of (a) and (f)
(h) Result of applying a power-law transformation to (g)
Combining Spatial Enhancement Methods (cont…)

Compare the original and final images