Module 4 Fullnotes
MODULE 4
(IMAGE RESTORATION & IMAGE SEGMENTATION)
The image degradation process is modelled in the spatial domain as

$$g(x,y) = h(x,y) \star f(x,y) + \eta(x,y)$$

and, equivalently, in the frequency domain as

$$G(u,v) = H(u,v)\,F(u,v) + N(u,v)$$

where the terms in capital letters are the Fourier transforms of the corresponding terms in the spatial-domain equation.
These two equations are the basis for most of the restoration material in this chapter.
Contd..
$$g(x,y) = f(x,y) + \eta(x,y)$$
where f(x, y) is the original image pixel, η(x, y)
is the noise term and g(x, y) is the resulting noisy pixel
➢If we can estimate the model on which the noise in an image is based, it helps us work out how to restore the image.
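As a rough illustration of this additive noise model, the sketch below adds Gaussian noise and salt-and-pepper (impulse) noise to an 8-bit image array. The function names and the default noise parameters (sigma, p) are illustrative assumptions, not part of the original notes.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(f, sigma=20.0):
    # g(x, y) = f(x, y) + eta(x, y), with eta drawn from a zero-mean Gaussian PDF.
    eta = rng.normal(0.0, sigma, f.shape)
    return np.clip(f.astype(np.float64) + eta, 0, 255).astype(np.uint8)

def add_salt_and_pepper(f, p=0.05):
    # Impulse noise: roughly a fraction p of the pixels is forced to 0 (pepper) or 255 (salt).
    g = f.copy()
    u = rng.random(f.shape)
    g[u < p / 2] = 0          # pepper
    g[u > 1 - p / 2] = 255    # salt
    return g
```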
Types of Noise Models
➢The type of noise determines the best type of filter for removing it.
➢Noise may be considered a random variable, characterized by a probability density function (PDF).
Types of Noise Models
1. Salt and pepper noise:
Salt-and-pepper (impulse) noise appears as randomly scattered white (salt) and black (pepper) pixels; its PDF consists of two spikes, one at each extreme of the intensity range.
[Figure: an example image and its histogram. Images taken from Gonzalez & Woods, Digital Image Processing (2002).]
Mean Filters
Arithmetic Mean Filter
➢The arithmetic mean filter replaces each pixel by the average of the pixels in the $m \times n$ neighbourhood $S_{xy}$ centred at $(x, y)$:

$$\hat{f}(x,y) = \frac{1}{mn} \sum_{(s,t)\in S_{xy}} g(s,t)$$
➢The related harmonic mean filter, $\hat{f}(x,y) = \dfrac{mn}{\sum_{(s,t)\in S_{xy}} \frac{1}{g(s,t)}}$, works well for salt noise but fails for pepper noise. It also does well with other kinds of noise, such as Gaussian noise.
Mean filters(contd..)
Contraharmonic Mean filter
➢The contraharmonic mean filter yields a restored image based on the expression

$$\hat{f}(x,y) = \frac{\sum_{(s,t)\in S_{xy}} g(s,t)^{\,Q+1}}{\sum_{(s,t)\in S_{xy}} g(s,t)^{\,Q}}$$

where $Q$ is called the order of the filter. Positive values of $Q$ eliminate pepper noise and negative values of $Q$ eliminate salt noise.
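A minimal sketch of the contraharmonic mean filter, assuming 8-bit input and using SciPy's uniform_filter to form the local sums (the 1/mn scaling cancels in the ratio). Setting Q = 0 gives the arithmetic mean filter and Q = -1 the harmonic mean filter; the function name and default window size are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def contraharmonic_mean(img, size=3, Q=1.5):
    # Q > 0 reduces pepper noise, Q < 0 reduces salt noise.
    g = img.astype(np.float64) + 1e-8            # small offset avoids division by zero at 0-valued pixels
    num = uniform_filter(g ** (Q + 1), size)     # local mean of g^(Q+1)  (i.e. local sum divided by mn)
    den = uniform_filter(g ** Q, size)           # local mean of g^Q
    return num / den                             # the 1/mn factors cancel, leaving sum g^(Q+1) / sum g^Q
```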
[Figure: a worked numerical example showing the intensity values f(x, y) of a small image neighbourhood.]
Order Statistics Filters
Median Filter

$$\hat{f}(x,y) = \operatorname*{median}_{(s,t)\in S_{xy}} \{\, g(s,t) \,\}$$
➢Excellent at noise removal, without the smoothing effects that can occur with
other smoothing filters
➢Particularly good when salt and pepper noise is present
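A short sketch of median filtering using SciPy; the function name and window size are illustrative. scipy.ndimage.median_filter replaces each pixel with the median of its neighbourhood, exactly as in the formula above.

```python
from scipy.ndimage import median_filter

def remove_impulse_noise(noisy, size=3):
    # Each output pixel is the median of the size x size neighbourhood S_xy in the input image.
    return median_filter(noisy, size=size)
```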
Order Statistics Filters(contd..)
Max and Min Filter
Max Filter: replaces the value of a pixel by the maximum value in its specified neighbourhood:

$$\hat{f}(x,y) = \max_{(s,t)\in S_{xy}} \{\, g(s,t) \,\}$$

Min Filter: replaces the value of a pixel by the minimum value in its neighbourhood:

$$\hat{f}(x,y) = \min_{(s,t)\in S_{xy}} \{\, g(s,t) \,\}$$
➢Max filter is good for pepper noise and min is good for salt noise
Order Statistics Filters(contd..)
Midpoint Filter
➢The midpoint filter simply computes the midpoint between
the maximum and minimum values in the area encompassed
by the filter
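A sketch of the max, min and midpoint filters built on SciPy's rank filters; the midpoint output is half the sum of the local maximum and minimum, matching the description above. Function names and the default 3×3 window are illustrative.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def max_filter(img, size=3):
    # Replaces each pixel with the maximum of its neighbourhood (reduces pepper noise).
    return maximum_filter(img, size=size)

def min_filter(img, size=3):
    # Replaces each pixel with the minimum of its neighbourhood (reduces salt noise).
    return minimum_filter(img, size=size)

def midpoint_filter(img, size=3):
    # Midpoint between the local maximum and minimum values.
    g = img.astype(np.float64)
    return 0.5 * (maximum_filter(g, size=size) + minimum_filter(g, size=size))
```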
[Figure: images (a) and (b) are easy to threshold, but (c) is difficult because of the high standard deviation of the noise.]
The role of illumination and reflectance
Here the second image represents a nonuniform illumination. Multiplying the first image by the second gives a nonuniformly illuminated image. As seen in the corresponding histogram images, nonuniform illumination affects the thresholding badly. This is also true in the case of nonuniform reflectance.
Basic Global Thresholding
➢When the intensity distributions of objects and background
pixels are sufficiently distinct, it is possible to use a single
(global) threshold applicable over the entire image.
➢In most applications, there is usually enough variability
between images that, even if global thresholding is a suitable
approach, an algorithm capable of estimating automatically
the threshold value for each image is required.
Contd..
➢The following iterative algorithm can be used for this purpose:
1. Select an initial estimate for the global threshold, T.
2. Segment the image using T. This produces two groups of pixels: G1, consisting of pixels with intensity values greater than T, and G2, consisting of pixels with values less than or equal to T.
3. Compute the average (mean) intensities m1 and m2 of the pixels in G1 and G2, respectively.
4. Compute a new threshold value T = (m1 + m2)/2.
5. Repeat steps 2 through 4 until the difference between successive values of T is smaller than a predefined parameter ∆T.
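A minimal sketch of the iterative algorithm above, assuming a 2-D NumPy array of intensities; the function name and the default ∆T value are illustrative.

```python
import numpy as np

def basic_global_threshold(img, delta_T=0.5):
    T = img.mean()                          # step 1: the average intensity is a good initial estimate
    while True:
        g1 = img[img > T]                   # step 2: pixels brighter than T
        g2 = img[img <= T]                  # ...and pixels at or below T
        m1 = g1.mean() if g1.size else T    # step 3: mean intensity of each group
        m2 = g2.mean() if g2.size else T
        T_new = 0.5 * (m1 + m2)             # step 4: new threshold midway between the means
        if abs(T_new - T) < delta_T:        # step 5: stop once T changes by less than delta_T
            return T_new
        T = T_new
```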
Contd..
➢The parameter ∆T is used to control the number of iterations in situations where speed is an important issue. In general, the larger ∆T is, the fewer iterations the algorithm will perform.
➢The initial threshold must be chosen greater than the minimum and less than the maximum intensity level in the image. The average intensity of the image is a good initial choice for T.
Optimum Global Thresholding
Using Otsu’s Method
➢The method is optimum in the sense that it maximizes the
between-class variance, a well-known measure used in statistical
discriminant analysis.
➢The basic idea is that well-thresholded classes should be distinct
with respect to the intensity values of their pixels and, conversely,
that a threshold giving the best separation between classes in
terms of their intensity values would be the best (optimum)
threshold.
➢In addition to its optimality, Otsu’s method has the important
property that it is based entirely on computations performed on
the histogram of an image, an easily obtainable 1-D array.
Otsu’s Algorithm
Step 1: Compute the normalized histogram of the input image. Denote the components of the histogram by $w_i$, $i = 0, 1, 2, \ldots, L-1$, where $L$ is the number of intensity levels.

$$w_i = \frac{\text{number of pixels with intensity level } i}{\text{total number of pixels}}$$
Step 2: Suppose we select a threshold intensity value $k$, where $0 < k < L-1$, and use it to threshold the input image into two classes (background and foreground): the background class consists of all pixels with intensity values in the range $[0, k]$, and the foreground class consists of the pixels with values in the range $[k+1, L-1]$.
Contd..
Threshold T = 1: background class (i ≤ 1) and foreground class (i > 1)

Background (i ≤ 1):
Intensity   No. of pixels   wi
0           9               9/36
1           6               6/36

Foreground (i > 1):
Intensity   No. of pixels   wi
2           4               4/36
3           5               5/36
4           8               8/36
5           4               4/36
Contd..
Wb= (9/36)+(6/36)=(9+6)/36=15/36
Wf= (4/36)+(5/36)+ (8/36)+ (4/36)=(4+5+8+4)/36
=21/36
Contd..
➢Compute the cumulative mean of each class:

$$m_b = \frac{(0)(9/36) + (1)(6/36)}{15/36} = \frac{(0)(9) + (1)(6)}{15} = \frac{6}{15}$$

$$m_f = \frac{(2)(4/36) + (3)(5/36) + (4)(8/36) + (5)(4/36)}{21/36} = \frac{(2)(4) + (3)(5) + (4)(8) + (5)(4)}{21} = \frac{75}{21}$$
Contd..
➢Find the between-class variance for the threshold T = 1:

$$\sigma_B^2 = \left(\frac{15}{36}\right)\!\left(\frac{21}{36}\right)\!\left(\frac{6}{15} - \frac{75}{21}\right)^{2} \approx 2.44$$
Tutorial
➢Consider an image with six gray levels (0–5), of size 6×6, together with its histogram.
The between-class variance for $C$ classes is

$$\sigma_B^2 = \sum_{k=1}^{C} W_k \,(\mu_k - \mu_G)^2$$

where $W_k = \sum_{i \in \text{class } k} w_i$, $\mu_k$ is the cumulative mean of class $k$, and $\mu_G = \sum_{i=0}^{L-1} i\, w_i$ is the global mean.
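A sketch of Otsu's single-threshold computation working directly on the histogram. It uses the closed form σ_B²(k) = [μ_G·W_b(k) − m(k)]² / [W_b(k)(1 − W_b(k))], which for two classes is equivalent to the W_b·W_f·(m_b − m_f)² expression used in the worked example; the function name is illustrative.

```python
import numpy as np

def otsu_sigma_b(hist):
    # Between-class variance sigma_B^2(k) for every candidate threshold k (background = [0, k]).
    w = hist / hist.sum()                 # normalized histogram w_i
    L = len(w)
    Wb = np.cumsum(w)                     # background class probability W_b(k)
    m = np.cumsum(np.arange(L) * w)       # cumulative mean up to level k
    m_G = m[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        return (m_G * Wb - m) ** 2 / (Wb * (1.0 - Wb))

# Histogram of the 6x6, 6-level worked example above.
hist = np.array([9, 6, 4, 5, 8, 4], dtype=float)
sigma_b = otsu_sigma_b(hist)
print(sigma_b[1])               # ~2.44, matching the hand computation for the threshold T = 1
print(np.nanargmax(sigma_b))    # k*, the threshold with the largest between-class variance
```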
Contd..
➢Although this method can be used to segment the image into any number of segments, in practice it is used with at most two threshold values.
➢Applications that require more than two
thresholds generally are solved using more than
just intensity values. Instead, the approach is to use
additional descriptors (e.g., color) and the
application is cast as a pattern recognition problem.
Contd..
➢The procedure starts by selecting the first value of k1 (that
value is 1 because looking for a threshold at 0 intensity makes
no sense; also, keep in mind that the increment values are
integers because we are dealing with intensities).
➢Next, k2 is incremented through all of its values greater than k1 and less than L-1, and the between-class variance is calculated for each combination.
➢Then k1 is incremented to its next value and k2 is again incremented through all of its values greater than k1. This process repeats until k1 = L-3.
➢The result of this process is a 2-D array of between-class variance values, one for each pair (k1, k2).
Contd..
➢The last step is to look for the maximum value in this array.
The values of k1 and k2 corresponding to that maximum are
the optimum thresholds k1* and k2*.
➢The thresholding can then be performed as

$$g(x,y) = \begin{cases} a, & f(x,y) \le k_1^{*} \\ b, & k_1^{*} < f(x,y) \le k_2^{*} \\ c, & f(x,y) > k_2^{*} \end{cases}$$

where $a$, $b$ and $c$ are any three distinct intensity values.
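A sketch of the exhaustive two-threshold search described above, evaluating σ_B² = Σ W_k(μ_k − μ_G)² for every pair k1 < k2. The function name is illustrative, and the histogram is assumed to be an array of counts per intensity level.

```python
import numpy as np

def otsu_two_thresholds(hist):
    w = hist / hist.sum()
    L = len(w)
    i = np.arange(L)
    m_G = (i * w).sum()                                 # global mean
    best, k_star = -1.0, (None, None)
    for k1 in range(1, L - 2):                          # k1 runs from 1 up to L-3
        for k2 in range(k1 + 1, L - 1):                 # k2 runs over all values greater than k1, below L-1
            sigma_b = 0.0
            for lo, hi in ((0, k1 + 1), (k1 + 1, k2 + 1), (k2 + 1, L)):
                Wk = w[lo:hi].sum()                     # class probability W_k
                if Wk > 0:
                    mu_k = (i[lo:hi] * w[lo:hi]).sum() / Wk
                    sigma_b += Wk * (mu_k - m_G) ** 2   # W_k (mu_k - mu_G)^2
            if sigma_b > best:
                best, k_star = sigma_b, (k1, k2)
    return k_star                                       # the optimum thresholds (k1*, k2*)
```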
Variable Thresholding
➢To improve the result of thresholding, we can use variable thresholding, in which the value of T changes over the image.
➢Various techniques for choosing variable thresholds:
1. Image partitioning
2. Variable thresholding based on local image properties
3. Using moving averages
Image partitioning
➢One of the simplest approaches to variable thresholding is to subdivide the image into nonoverlapping rectangles and apply a thresholding technique (e.g., Otsu's method) separately to each rectangle.
➢This approach is used to compensate for non-uniformities
in illumination and/or reflectance.
➢An important consideration with this method is that the rectangles must be chosen small enough that the illumination of each is approximately uniform.
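A sketch of image partitioning with a per-rectangle Otsu threshold, assuming an 8-bit (uint8) image; the block size, function names, and inlined Otsu computation are illustrative.

```python
import numpy as np

def otsu_threshold(tile):
    # Otsu threshold for one rectangle (assumes uint8 intensities, 0..255).
    if tile.min() == tile.max():
        return int(tile.min())                          # constant block: nothing to separate
    hist = np.bincount(tile.ravel(), minlength=256).astype(float)
    w = hist / hist.sum()
    Wb = np.cumsum(w)
    m = np.cumsum(np.arange(256) * w)
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (m[-1] * Wb - m) ** 2 / (Wb * (1.0 - Wb))
    return int(np.nanargmax(sigma_b))

def blockwise_threshold(img, block=(64, 64)):
    # Subdivide into nonoverlapping rectangles, chosen small enough that the
    # illumination inside each is approximately uniform, and threshold each one.
    out = np.zeros(img.shape, dtype=bool)
    bh, bw = block
    for y in range(0, img.shape[0], bh):
        for x in range(0, img.shape[1], bw):
            tile = img[y:y + bh, x:x + bw]
            out[y:y + bh, x:x + bw] = tile > otsu_threshold(tile)
    return out
```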
Example
Contd..
➢Image subdivision generally works well when the objects
of interest and the background occupy regions of reasonably
comparable size
➢When this is not the case, as in the image given below, the method typically fails because of the likelihood of subdivisions containing only object or only background pixels.
Variable thresholding based on
local image properties
➢A more general approach than the image subdivision
method is to compute a threshold at every point (x, y) in the
image based on one or more specified properties computed
in a neighborhood of (x,y).
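One common choice of local threshold (used in Gonzalez & Woods) is T(x, y) = a·σ_xy + b·m_xy, where m_xy and σ_xy are the mean and standard deviation of the neighbourhood of (x, y). The sketch below computes these local statistics with SciPy; the constants a and b and the window size are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_threshold(img, size=15, a=30.0, b=1.0):
    g = img.astype(np.float64)
    m = uniform_filter(g, size)                                   # local mean m_xy
    var = np.maximum(uniform_filter(g ** 2, size) - m ** 2, 0.0)  # local variance (clamped at 0)
    sigma = np.sqrt(var)                                          # local standard deviation sigma_xy
    T = a * sigma + b * m                                         # variable threshold T(x, y)
    return g > T                                                  # segmented (binary) output
```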
Using moving averages
➢A moving average of the intensities is computed along each scan line as the image is traversed; here z_{k-n} represents the intensity at the point n samples back from the current point.
Contd..
➢Because a moving average is computed for every point in the image, segmentation is implemented using the threshold T = b·m(x, y), where b is a constant and m(x, y) is the moving average at point (x, y) in the input image.
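A sketch of moving-average thresholding, assuming intensities stored row by row. Gonzalez & Woods scan in a zig-zag pattern to reduce directional bias, but a plain row-wise scan is used here to keep the sketch short; the window length n and constant b are illustrative.

```python
import numpy as np

def moving_average_threshold(img, n=20, b=0.5):
    g = img.astype(np.float64)
    out = np.zeros(g.shape, dtype=bool)
    for r, row in enumerate(g):
        padded = np.concatenate([np.zeros(n), row])   # samples before the line start are taken as 0
        m = np.empty(row.size)
        acc = 0.0
        for k in range(row.size):
            acc += (padded[k + n] - padded[k]) / n    # running mean of the last n samples (z_{k-n+1} .. z_k)
            m[k] = acc
        out[r] = row > b * m                          # threshold T = b * m(x, y)
    return out
```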
Region Growing
➢Region growing is a procedure that groups pixels or subregions
into larger regions.
➢The simplest of these approaches is pixel aggregation, which starts with a set of “seed” points and from these grows regions by appending to each seed point those neighboring pixels that have similar properties (such as gray level, texture, color, or shape).
➢Region growing based techniques are better than the edge-
based techniques in noisy images where edges are difficult to
detect.
[Figures: 4-connectivity and 8-connectivity neighbourhoods.]
Issues in implementing Region
growing approach
1. How to find seed points?
• This can often be based on the nature of the problem.
For example, consider an 8-bit X-ray image of a weld (the horizontal dark region) containing several cracks and porosities (the bright regions running horizontally through the center of the image). We illustrate the use of region growing by segmenting the defective weld regions.
Contd..
➢ From the physics of the problem, we know that cracks and porosities will
attenuate X-rays considerably less than solid welds, so we expect the regions
containing these types of defects to be significantly brighter than other parts of
the X-ray image. We can extract the seed points by thresholding the original
image, using a threshold set at a high percentile (here we use the gray-level value 254). The thresholded image is shown on the right.
Contd..
➢ If the result of these computations shows clusters of values (not single points), the pixels whose properties place them near the centroid of these clusters can be used as seeds. The result is shown below.
Contd..
2. How to select the similarity criteria?
The selection of similarity criteria depends not only on the problem under consideration, but also on the type of image data available. For example, the analysis of land-use satellite imagery depends heavily on the use of color; this problem would be significantly more difficult, or even impossible, to solve without the inherent information available in color images. When the images are monochrome, region analysis must be carried out with a set of descriptors based on intensity levels and spatial properties. An example of such a criterion is to append a pixel to a region if the absolute difference between its intensity and that of the seed is below a threshold and the pixel is 8-connected to at least one pixel already in the region, as in the sketch below.
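A sketch of seeded region growing with the criterion just described: a pixel is appended when it is 8-connected to the region and its intensity differs from the seed value by at most T. The function name, the breadth-first traversal, and the default T are illustrative.

```python
import numpy as np
from collections import deque

def region_grow(img, seeds, T=20):
    # seeds: list of (row, col) seed coordinates; returns an integer label image (0 = unassigned).
    g = img.astype(np.float64)
    labels = np.zeros(g.shape, dtype=int)
    for label, (sy, sx) in enumerate(seeds, start=1):
        seed_val = g[sy, sx]
        labels[sy, sx] = label
        queue = deque([(sy, sx)])
        while queue:
            y, x = queue.popleft()
            for dy in (-1, 0, 1):                      # visit the 8-connected neighbours
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < g.shape[0] and 0 <= nx < g.shape[1]
                            and labels[ny, nx] == 0
                            and abs(g[ny, nx] - seed_val) <= T):
                        labels[ny, nx] = label         # similar enough: append to the region
                        queue.append((ny, nx))
    return labels
```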
Region Split-and-Merge
Algorithm (contd..)
➢The second stage of the algorithm is merging.
➢If only splitting is used, the final partition normally contains adjacent regions with identical properties.
➢This drawback can be remedied by allowing merging as well as splitting.
➢During the merging phase, we check whether the combination of adjacent regions in the quadtree satisfies the predicate. If it does, we merge those regions into a single region.
➢This process continues until no further merging is possible.
Contd..
➢The algorithm can be summarized as follows:
1. Split into four disjoint quadrants any region R_i for which the predicate Q(R_i) = FALSE.
2. When no further splitting is possible, merge any adjacent regions R_j and R_k for which Q(R_j ∪ R_k) = TRUE.
3. Stop when no further merging is possible.
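A compact sketch of the split-and-merge procedure summarized above, using a simple predicate (the region's intensity standard deviation must be small). The quadtree is represented implicitly by the list of leaf rectangles, and the merge pass repeatedly joins 4-adjacent regions whose union still satisfies the predicate. The predicate, minimum block size, and function names are illustrative, and the merge loop favours clarity over speed.

```python
import numpy as np

def predicate(region, max_std=10.0):
    # Q(R) is TRUE when the region's intensities are nearly uniform.
    return region.std() <= max_std

def split(img, y, x, h, w, min_size, leaves):
    # Split any region for which Q is FALSE into four quadrants, down to min_size.
    region = img[y:y + h, x:x + w]
    if predicate(region) or h <= min_size or w <= min_size:
        leaves.append((y, x, h, w))
        return
    h2, w2 = h // 2, w // 2
    split(img, y,      x,      h2,     w2,     min_size, leaves)
    split(img, y,      x + w2, h2,     w - w2, min_size, leaves)
    split(img, y + h2, x,      h - h2, w2,     min_size, leaves)
    split(img, y + h2, x + w2, h - h2, w - w2, min_size, leaves)

def split_and_merge(img, min_size=2):
    leaves = []
    split(img, 0, 0, img.shape[0], img.shape[1], min_size, leaves)
    labels = np.zeros(img.shape, dtype=int)            # give every quadtree leaf its own label
    for i, (y, x, h, w) in enumerate(leaves, start=1):
        labels[y:y + h, x:x + w] = i
    merged = True
    while merged:                                      # merge adjacent regions while Q(Rj U Rk) is TRUE
        merged = False
        pairs = set()
        pairs.update(zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()))   # horizontally adjacent labels
        pairs.update(zip(labels[:-1, :].ravel(), labels[1:, :].ravel()))   # vertically adjacent labels
        for a, b in pairs:
            if a != b and predicate(img[(labels == a) | (labels == b)]):
                labels[labels == b] = a                # merge the two regions into one
                merged = True
                break                                  # recompute adjacency after each merge
    return labels
```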