Module 4 Fullnotes

This document covers image processing techniques focusing on image restoration and segmentation, detailing noise models, filtering methods, and edge detection. It discusses various types of noise, such as salt and pepper, Gaussian, and Rayleigh noise, along with their respective filtering techniques including mean filters, order statistic filters, and adaptive filters. The document emphasizes the importance of understanding the degradation process to effectively restore images and improve their quality.

IMAGE PROCESSING TECHNIQUE

MODULE 4
(IMAGE RESTORATION & IMAGE SEGMENTATION)

IMAGE DEGRADATION MODEL, NOISE MODELS, MEAN FILTERS,


ORDER STATISTIC FILTER, ADAPTIVE FILTERS.

EDGE DETECTION, GRADIENT OPERATORS, LAPLACE OPERATORS


AND ZERO CROSSINGS.

THRESHOLDING, BASIC GLOBAL THRESHOLDING, OPTIMUM


GLOBAL THRESHOLDING USING OTSU METHOD, MULTIPLE
THRESHOLDS, VARIABLE THRESHOLDING, MULTIVARIABLE
THRESHOLDING.

REGION-BASED APPROACH TO SEGMENTATION.


Image Restoration
Image degradation model, Noise models, Mean
Filters, Order Statistic filter, Adaptive filters.
What is Image Restoration?
➢Restoration attempts to recover an image that has been
degraded, by using prior knowledge of the degradation phenomenon
➢Identify the degradation process and attempt to reverse it
➢Similar to image enhancement, but more objective
A Model of the Image
Degradation/Restoration Process

➢ Here, image degradation is modelled as applying a degradation
function H, together with an additive noise term η(x,y), to an input
image f(x,y) to produce a degraded image g(x,y)
➢ Likewise, image restoration is modelled as producing an estimate f̂(x,y)
from the degraded image by using some knowledge
about the degradation function H, and some knowledge about the additive
noise term η(x,y). We want the estimate f̂(x,y) to be as close as possible to the
original input image f(x,y)
Contd..
➢If H is a linear, position-invariant process, then the degraded image
g(x,y) is given in the spatial domain by

g(x,y) = h(x,y) ⋆ f(x,y) + η(x,y)

where h is the spatial representation of the degradation function
and the operation ⋆ is convolution.
➢We know that convolution in the spatial domain is analogous to
multiplication in the frequency domain, so we may write the model in an
equivalent frequency domain representation:

G(u,v) = H(u,v)F(u,v) + N(u,v)

where the terms in capital letters are the Fourier transforms of the
corresponding terms in the spatial domain equation.
These two equations are the bases for most of the restoration
material in this chapter.
Contd..

In the following sections, we assume that H


is the identity operator, and we deal only with
degradations due to noise
Noise and Images
➢The sources of noise in digital images arise
during image acquisition (digitization) and
transmission
◦ Imaging sensors can be affected by ambient conditions
◦ Interference can be added
to an image during transmission
Noise Models
➢We can consider a noisy image to be modelled as follows:

g(x,y) = f(x,y) + η(x,y)

where f(x,y) is the original image pixel, η(x,y)
is the noise term and g(x,y) is the resulting noisy pixel
➢If we can estimate the model that the noise in an image is
based on, this will help us to figure out how to restore the
image
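The additive model above translates directly into code. Below is a minimal NumPy sketch, not part of the original notes: the function names, the fixed seed, and the clipping to [0, 255] are illustrative choices. Gaussian noise adds a normally distributed η(x,y); salt-and-pepper noise replaces a fraction Pa of pixels with the dark value a and a fraction Pb with the bright value b.

```python
import numpy as np

def add_gaussian_noise(f, mean=0.0, sigma=10.0, rng=None):
    """Return g = f + eta, with eta drawn from a Gaussian PDF."""
    rng = np.random.default_rng(0) if rng is None else rng
    eta = rng.normal(mean, sigma, f.shape)
    return np.clip(f + eta, 0, 255)

def add_salt_pepper_noise(f, pa=0.05, pb=0.05, a=0, b=255, rng=None):
    """Corrupt a fraction pa of pixels to a (pepper) and pb to b (salt)."""
    rng = np.random.default_rng(0) if rng is None else rng
    g = f.astype(float).copy()
    u = rng.random(f.shape)
    g[u < pa] = a                     # pepper: dark impulses
    g[(u >= pa) & (u < pa + pb)] = b  # salt: bright impulses
    return g
```

These corrupted images are what the mean, order-statistic, and adaptive filters in the following sections try to restore.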
Types of Noise Models
➢Type of noise determines best types of filters for
removing it.
➢Here noise is considered a random variable,
characterized by a probability density
function (PDF)
Types of Noise Models
1. Salt and pepper noise:

➢Randomly scattered black + white pixels


➢Also called impulse noise, shot noise , binary
noise , Data-drop-out or spike noise
➢Caused by sudden sharp disturbance
➢The PDF of (bipolar) impulse noise is given by

p(z) = Pa  for z = a
p(z) = Pb  for z = b
p(z) = 0   otherwise
Contd..

➢ If b>a intensity b will appear as a light dot in the image.


Conversely, level a will appear like a dark dot.
➢ Normally the value of b is taken as the highest possible value for a
bright pixel (say 255) and the value of a as the lowest value for a
dark pixel (say 0)
➢ If either Pa or Pb is zero, the impulse noise is called unipolar.
Types of Noise Models
2. Gaussian noise

➢ Idealized form of white noise added to image,


normally distributed
➢The PDF of a Gaussian random variable, z, is given by

p(z) = (1 / (√(2π) σ)) e^(−(z − μ)² / (2σ²))

where z represents intensity, μ is the mean and σ is the standard deviation
Types of Noise Models
3. Rayleigh noise
➢The PDF of Rayleigh noise is given by

p(z) = (2/b)(z − a) e^(−(z − a)²/b)  for z ≥ a
p(z) = 0                             for z < a

➢The mean and variance of this density are given by

μ = a + √(πb/4)   and   σ² = b(4 − π)/4

Types of Noise Models
4. Erlang (gamma) noise

➢The PDF of Erlang noise is given by

p(z) = (a^b z^(b−1) / (b − 1)!) e^(−az)  for z ≥ 0
p(z) = 0                                 for z < 0

where a > 0 and b is a positive integer; the mean is b/a and the variance is b/a²

Types of Noise Models
5. Exponential noise
➢The PDF of exponential noise is given by

p(z) = a e^(−az)  for z ≥ 0
p(z) = 0          for z < 0

where a > 0; the mean is 1/a and the variance is 1/a²
Types of Noise Models
6. Uniform noise
➢The PDF of uniform noise is given by

p(z) = 1/(b − a)  for a ≤ z ≤ b
p(z) = 0          otherwise

with mean (a + b)/2 and variance (b − a)²/12
Types of Noise Models
7. Periodic Noise
➢ Caused by disturbances of a periodic Nature
➢Periodic noise in an image arises typically from
electrical or electromechanical interference during
image acquisition
➢Salt and pepper, Gaussian and other noise can be
cleaned using spatial filters
➢Periodic noise can be cleaned Using frequency
domain filtering
The PDFs provide useful tools for modeling a broad
range of noise corruption situations found in practice
❖For example, Gaussian noise arises in an image due to factors such as
electronic circuit noise and sensor noise due to poor illumination
and/or high temperature.
❖The Rayleigh density is helpful in characterizing noise phenomena in
range imaging.
❖The exponential and gamma densities find application in laser
imaging.
❖Impulse noise is found in situations where quick transients, such as
faulty switching, take place during imaging.
❖The uniform density is perhaps the least descriptive of practical
situations. However, the uniform density is quite useful as the basis for
numerous random number generators that are used in simulations
Noise Example
Images taken from Gonzalez & Woods, Digital Image Processing (2002)

The test pattern used here is ideal for demonstrating the addition of noise:
the original figures show the test image, its histogram, and the result of
adding noise based on each of the models above.

Restoration in the Presence of Noise
Only—Spatial Filtering

➢ The noise terms are unknown, so subtracting them from g(x,y) or G(u,v) is
not a realistic option for restoring the image.
➢ Mainly we use spatial filtering operations
Restoration in the Presence of Noise
Only—Spatial Filtering

➢Main filters used in spatial filtering are

1. Mean filters
2. Order statistic filters
3. Adaptive filters
Mean filters
➢We can use spatial Mean filters of different kinds
to remove different kinds of noise.
(i) Arithmetic mean filter
(ii)Geometric Mean
(iii)Harmonic Mean
(iv)Contraharmonic Mean
Mean filters(contd..)
Arithmetic Mean filter
➢The arithmetic mean filter is a very simple one and is calculated as
follows:

f̂(x,y) = (1/mn) Σ_{(s,t)∈S_xy} g(s,t)

➢ie, each pixel is replaced by the average of the pixels in the neighbourhood
of size m*n
➢This is implemented as the simple smoothing filter
➢Blurs the image to remove noise
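The averaging formula above can be sketched as a direct (unoptimized) NumPy loop. The edge-replicated padding and the function name are choices made for this illustration, not something the notes prescribe:

```python
import numpy as np

def arithmetic_mean_filter(g, m=3, n=3):
    """Replace each pixel by the average of its m*n neighbourhood,
    using edge-replicated padding at the borders."""
    g = g.astype(float)
    padded = np.pad(g, ((m // 2, m // 2), (n // 2, n // 2)), mode='edge')
    out = np.zeros_like(g)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            out[x, y] = padded[x:x + m, y:y + n].mean()
    return out
```

A constant image passes through unchanged, while an isolated impulse is spread (blurred) over its 3*3 neighbourhood.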
Mean filters(contd..)
Geometric Mean filter
➢An image restored using a geometric mean filter is
given by the expression

f̂(x,y) = [ Π_{(s,t)∈S_xy} g(s,t) ]^(1/mn)

➢Here, each restored pixel is given by the product of
the pixels in the subimage window, raised to the
power 1/mn
Mean filters(contd..)
Harmonic Mean filter
➢The harmonic mean filtering operation is given by the
expression

f̂(x,y) = mn / Σ_{(s,t)∈S_xy} (1 / g(s,t))

➢Works well for salt noise, but fails for pepper noise. Also
does well for other kinds of noise such as Gaussian noise
Mean filters(contd..)
Contraharmonic Mean filter
➢The contraharmonic mean filter yields a restored image
based on the expression

f̂(x,y) = Σ_{(s,t)∈S_xy} g(s,t)^(Q+1) / Σ_{(s,t)∈S_xy} g(s,t)^Q

➢Q is the order of the filter and adjusting its value changes
the filter’s behaviour
➢Positive values of Q eliminate pepper noise
➢Negative values of Q eliminate salt noise
➢It cannot do both simultaneously
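Because the contraharmonic expression reduces to the arithmetic mean at Q = 0 and to the harmonic mean at Q = -1, one sketch covers the whole family. This is a minimal illustration; the small epsilon guarding against division by zero on pepper (zero-valued) pixels is an implementation choice made here, not part of the notes:

```python
import numpy as np

def contraharmonic_filter(g, Q, m=3, n=3):
    """Contraharmonic mean: sum g^(Q+1) / sum g^Q over each m*n window.
    Q > 0 removes pepper noise, Q < 0 removes salt noise; Q = 0 gives
    the arithmetic mean and Q = -1 the harmonic mean."""
    g = g.astype(float)
    pad = np.pad(g, ((m // 2,) * 2, (n // 2,) * 2), mode='edge')
    out = np.zeros_like(g)
    eps = 1e-12  # avoid 0^Q division problems on pepper pixels
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            w = pad[x:x + m, y:y + n] + eps
            out[x, y] = (w ** (Q + 1)).sum() / (w ** Q).sum()
    return out
```

With a positive Q, a single pepper pixel in a bright neighbourhood is pulled back up toward the surrounding intensity, as the notes state.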
Question: Find the new pixel value at position (2,2)
using the various mean filters for image
restoration, assuming a filter size of (3*3).
[Refer to the original image f(x,y) and filtered image grids on the slide;
the pixel grids did not survive extraction.]
Order Statistics Filters
➢Spatial filters that are based on ordering the pixel


values that make up the neighbourhood operated
on by the filter
➢Useful spatial filters include
❑Median filter
❑Max and min filter
❑Midpoint filter
❑Alpha trimmed Mean filter
Order Statistics Filters(contd..)
Median Filter
➢ It replaces the value of a pixel by the median of the intensity levels in the
neighborhood of that pixel:

f̂(x,y) = median_{(s,t)∈S_xy} {g(s,t)}

➢Excellent at noise removal, without the smoothing effects that can occur with
other smoothing filters
➢Particularly good when salt and pepper noise is present
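A hedged sketch of the median filter, in the same loop style as the mean filters above (the edge padding is again an assumption of this illustration):

```python
import numpy as np

def median_filter(g, m=3, n=3):
    """Replace each pixel with the median of its m*n neighbourhood."""
    pad = np.pad(g.astype(float), ((m // 2,) * 2, (n // 2,) * 2), mode='edge')
    out = np.zeros(g.shape)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            out[x, y] = np.median(pad[x:x + m, y:y + n])
    return out
```

Unlike the arithmetic mean, isolated salt or pepper impulses on a flat region are removed completely rather than smeared into their neighbours.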
Order Statistics Filters(contd..)
Max and Min Filter
Max Filter: It replaces the value of a pixel by the maximum value in its specified
neighbourhood

f̂(x,y) = max_{(s,t)∈S_xy} {g(s,t)}

Min Filter: It replaces the value of a pixel by the minimum value in its specified
neighbourhood

f̂(x,y) = min_{(s,t)∈S_xy} {g(s,t)}

➢Max filter is good for pepper noise and min is good for salt noise
Order Statistics Filters(contd..)
Midpoint Filter
➢The midpoint filter simply computes the midpoint between
the maximum and minimum values in the area encompassed
by the filter

f̂(x,y) = (1/2) [ max_{(s,t)∈S_xy} {g(s,t)} + min_{(s,t)∈S_xy} {g(s,t)} ]

➢Note that this filter combines order statistics and averaging.
It works best for randomly distributed noise, like Gaussian
or uniform noise
Order Statistics Filters(contd..)
Alpha-Trimmed Mean Filter
➢Suppose that we delete the d/2 lowest and the d/2 highest
intensity values in the neighborhood S_xy, and let g_r(s,t) represent the
remaining mn − d pixels.
➢A filter formed by averaging these remaining pixels is
called an alpha-trimmed mean filter:

f̂(x,y) = (1/(mn − d)) Σ_{(s,t)∈S_xy} g_r(s,t)

➢The alpha-trimmed filter is useful in situations involving
multiple types of noise, such as a combination of salt-and-
pepper and Gaussian noise
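The trimming step above can be sketched by sorting each window and slicing off the extremes. This is an illustrative implementation (d must be even and less than mn; the function name and padding are assumptions). Note that d = 0 reduces to the arithmetic mean filter, and large d approaches the median filter:

```python
import numpy as np

def alpha_trimmed_mean_filter(g, m=3, n=3, d=4):
    """Drop the d/2 lowest and d/2 highest values in each m*n window,
    then average the remaining mn - d pixels."""
    pad = np.pad(g.astype(float), ((m // 2,) * 2, (n // 2,) * 2), mode='edge')
    out = np.zeros(g.shape)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            w = np.sort(pad[x:x + m, y:y + n].ravel())
            out[x, y] = w[d // 2: m * n - d // 2].mean()
    return out
```

Even d = 2 is enough to discard a single impulse per window on an otherwise flat region.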
Question: Find the new pixel value at position
(1,3) using the various order statistic filters for
image restoration, assuming a filter size of (3*3).
Adaptive Filters
➢Adaptive filters are a special type of filters
whose behavior changes based on statistical
characteristics of the image inside the filter
region .
➢Two examples of adaptive filters
1. Adaptive, local noise reduction filter
2. Adaptive median filter
Adaptive, local noise Reduction
filter
➢The simplest statistical measures of a random
variable are its mean and variance.
➢The mean gives a measure of average intensity in
the region over which the mean is computed, and the
variance gives a measure of contrast in that region.
Suppose our filter is to operate on a local region S_xy. The
response of the filter at any point (x, y) on which the region is
centered is to be based on four quantities:
Contd..
(a) g(x,y), the value of the noisy image at (x, y)
(b) σ_η², the variance of the noise corrupting f(x,y) to form g(x,y)
(c) m_Sxy, the local mean of the pixels in S_xy
(d) σ²_Sxy, the local variance of the pixels in S_xy

❑The algorithm makes the assumption that σ_η² <= σ²_Sxy

Contd..
➢The algorithm works as follows:
1. If σ_η² is zero, the filter should return simply the value of
g(x,y). This is the trivial, zero-noise case in which g(x,y) is
equal to f(x,y).
2. If the local variance σ²_Sxy is high relative to σ_η², the filter
should return a value close to g(x, y). A high local variance
typically is associated with edges, and these should be
preserved.
3. If the two variances are equal, we want the filter to return
the arithmetic mean value of the pixels in S_xy. This condition
occurs when the local area has the same properties as the
overall image, and local noise is to be reduced simply by
averaging.
Contd..
➢An adaptive expression for obtaining f̂(x,y) based on these
assumptions may be written as

f̂(x,y) = g(x,y) − (σ_η² / σ²_Sxy) [ g(x,y) − m_Sxy ]

➢In terms of overall noise reduction, the adaptive filter achieves
results similar to the arithmetic and geometric mean filters.
However, the image filtered with the adaptive filter is much
sharper.
➢Disadvantage: Filter complexity is higher in adaptive filters than in
normal filters
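The adaptive expression and the three rules above can be sketched as follows. The capping of the variance ratio at 1 (so the filter never over-corrects when the local variance falls below the noise variance) is a common implementation choice assumed here, as are the function name and padding:

```python
import numpy as np

def adaptive_local_noise_filter(g, noise_var, m=3, n=3):
    """f_hat = g - (noise_var / local_var) * (g - local_mean),
    with the ratio capped at 1 when local_var < noise_var."""
    g = g.astype(float)
    pad = np.pad(g, ((m // 2,) * 2, (n // 2,) * 2), mode='edge')
    out = np.zeros_like(g)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            w = pad[x:x + m, y:y + n]
            local_mean = w.mean()
            local_var = w.var()
            # constant window: g(x,y) == local_mean, ratio is irrelevant
            ratio = 1.0 if local_var == 0 else min(noise_var / local_var, 1.0)
            out[x, y] = g[x, y] - ratio * (g[x, y] - local_mean)
    return out
```

With noise_var = 0 the filter returns g unchanged (rule 1); with a very large noise_var the ratio saturates at 1 and the filter returns the local mean (rule 3).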
Adaptive median filter

➢The adaptive median filter changes (increases) the


size of S xy during filter operation, depending on
certain conditions listed below;
➢Consider the following notation:
z_min = minimum intensity value in S_xy
z_max = maximum intensity value in S_xy
z_med = median of the intensity values in S_xy
z_xy  = intensity value at coordinates (x,y)
S_max = maximum allowed size of S_xy
Contd..
➢The adaptive median-filtering algorithm works in two
stages, denoted stage A and stage B, as follows:

Stage A: If z_min < z_med < z_max, go to stage B.
         Else, increase the size of the window S_xy.
         If the window size <= S_max, repeat stage A;
         else, output z_med.
Stage B: If z_min < z_xy < z_max, output z_xy;
         else, output z_med.
Contd..
➢The key to understanding the mechanics of this
algorithm is to keep in mind that it has three main
purposes:
1. To remove salt-and-pepper (impulse) noise
2. To provide smoothing of other noise that may not be
impulsive
3. To reduce distortion, such as excessive thinning or
thickening of object boundaries
➢The values z_min and z_max are considered statistically by the
algorithm to be “impulse-like” noise components, even if these are not the
lowest and highest possible pixel values in the image
Contd..(Algorithm explanation)
➢At stage A, the algorithm first checks whether the median value is one of the
extreme values (z_min or z_max), ie, whether the median value represents salt or
pepper. If it is not, then we go to stage B.
➢If the median value is one of the extreme values, then we increase the size of the
window and repeat stage A until the size of the window reaches the maximum
specified value. When the window size reaches the maximum value, it returns the
median value as the new pixel value.
➢In stage B, we check whether the original value of the center pixel is one of
the extreme values. If not (meaning it is not salt or pepper noise), we retain
its original value. By not changing these “intermediate-level” points, distortion
is reduced in the image. If the value is one of the extreme values, then we
replace it with the median value (like normal median filtering).
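The two-stage logic above can be sketched directly, growing the window per pixel until the median is no longer an extreme value or S_max is reached. The function name, the default S_max = 7, and the edge padding are assumptions of this illustration:

```python
import numpy as np

def adaptive_median_filter(g, s_max=7):
    """Stage A grows the window until z_med is not an extreme value
    (or s_max is reached); stage B keeps z_xy if it is not itself extreme."""
    g = g.astype(float)
    half_max = s_max // 2
    pad = np.pad(g, half_max, mode='edge')
    out = np.zeros_like(g)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            s = 3
            while True:
                h = s // 2
                w = pad[x + half_max - h: x + half_max + h + 1,
                        y + half_max - h: y + half_max + h + 1]
                z_min, z_med, z_max = w.min(), np.median(w), w.max()
                if z_min < z_med < z_max:           # stage A passed -> stage B
                    z_xy = g[x, y]
                    out[x, y] = z_xy if z_min < z_xy < z_max else z_med
                    break
                s += 2                              # grow the window
                if s > s_max:                       # give up: return median
                    out[x, y] = z_med
                    break
    return out
```

On a gradient image with a single salt impulse, the impulse is replaced by the local median while noise-free "intermediate-level" pixels keep their original values, illustrating purposes 1 and 3 above.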
Comparison between
Normal Median filtering and Adaptive
Median filtering
1. The adaptive median filter works well even if the spatial density (Pa or
Pb) of the impulse noise is large, whereas normal median filtering does
not.
2. The adaptive median filter retains certain pixel values as such
while performing filtering, but in median filtering all pixel
values are changed.
3. The adaptive filter can change the filter window size
Image segmentation
Edge Detection, gradient operators, Laplace operators and zero crossings.

Thresholding, Basic Global Thresholding, Optimum global thresholding


using Otsu method, Multiple thresholds, Variable thresholding,
Multivariable thresholding.

Region-Based Approach to Segmentation


Image Segmentation

© 1992–2008 R. C. Gonzalez & R. E. Woods

➢Segmentation subdivides an image R into n regions R1, R2, ..., Rn such that:
(a) ∪ Ri = R
(b) Ri is a connected set, i = 1, 2, ..., n
(c) Ri ∩ Rj = ∅ for all i and j, i ≠ j
(d) Q(Ri) = TRUE for i = 1, 2, ..., n
(e) Q(Ri ∪ Rj) = FALSE for any adjacent regions Ri and Rj
where Q(Rk) is a logical predicate defined over the points in region Rk
Contd..
➢Condition (a) indicates that the segmentation must be complete; that is,
every pixel must be in a region.
➢ Condition (b) requires that points in a region be connected in some predefined
sense (e.g., the points must be 4- or 8-connected).
➢Condition (c) indicates that the regions must be disjoint.
➢ Condition (d) deals with the properties that must be satisfied by the
pixels in a segmented region
➢Finally, condition (e) indicates that two adjacent regions Ri and Rj must be
different in the sense of predicate Q
➢ Two approaches for segmentation:
1. Edge based segmentation: This approach is to
partition an image based on abrupt changes in
intensity, such as edges.
2. Region based segmentation :These are based on
partitioning an image into regions that are similar
according to a set of predefined criteria.
Thresholding, region growing, and region splitting
and merging are examples of methods in this
category



Region based Segmentation
Thresholding Techniques
Histogram of an image

The histogram plots the number of pixels in the image (vertical


axis) with a particular brightness value (horizontal axis)
The basics of intensity thresholding

➢The basis of intensity-level thresholding is to select a
threshold intensity value T that separates objects from the
background. Then any point (x,y) in the image at which
f(x,y) > T is called an object point; otherwise, the point is
called a background point.
➢In other words, the segmented image g(x,y) is given by

g(x,y) = 1 if f(x,y) > T
g(x,y) = 0 if f(x,y) <= T
Contd..
➢ Based on the value of T, thresholding can be classified as

1. Global thresholding: when T is a constant over the entire image
2. Variable thresholding (also called local or regional thresholding): when
the value of T changes over an image
3. Dynamic or adaptive thresholding: when the value of T depends
on the spatial coordinates (x,y) of the image
4. Multiple thresholding: more than one threshold is used. For
example, the usage of two thresholds T1 and T2 on an image can be
written as

g(x,y) = a if f(x,y) > T2
g(x,y) = b if T1 < f(x,y) <= T2
g(x,y) = c if f(x,y) <= T1

where a, b and c are three distinct intensity values
Contd..
➢key factors affecting the thresholding are:
(1) The separation between peaks in the image histogram(the
further apart the peaks are, the better the chances of separating the
modes)
(2) The noise content in the image (the modes broaden as noise
increases);
(3) The relative sizes of objects and background;
(4) The uniformity of the illumination source;
(5) The uniformity of the reflectance properties of the image.
The role of noise in image thresholding

Images (a) and (b) are easy to
threshold, but (c) is
difficult because of the
high standard deviation of
the noise
The role of illumination and reflectance

Here the second image represents a nonuniform
illumination pattern. Multiplying the first
image by the second gives a non-uniformly
illuminated image. As seen in the
corresponding histograms, non-uniform
illumination affects the thresholding badly.
This is also true in the case of non-uniform
reflectance.
Basic Global Thresholding
➢When the intensity distributions of objects and background
pixels are sufficiently distinct, it is possible to use a single
(global) threshold applicable over the entire image.
➢In most applications, there is usually enough variability
between images that, even if global thresholding is a suitable
approach, an algorithm capable of estimating automatically
the threshold value for each image is required.
Contd..
➢The following iterative algorithm can be used for this
purpose:

1. Select an initial estimate for the global threshold, T.
2. Segment the image using T. This produces two groups of pixels:
G1, consisting of pixels with intensity values > T, and G2, consisting
of pixels with values <= T.
3. Compute the average (mean) intensity values m1 and m2 for the pixels
in G1 and G2, respectively.
4. Compute a new threshold value T = (1/2)(m1 + m2).
5. Repeat steps 2 through 4 until the difference between successive
values of T is smaller than a predefined parameter ∆T.
Contd..
➢Parameter ∆T is used to control the number of
iterations in situations where speed is an important
issue. In general, the larger ∆T is, the fewer
iterations the algorithm will perform.
➢The initial threshold must be chosen greater than
the minimum and less than the maximum intensity level
in the image. The average intensity of the image is a
good initial choice for T.
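The iterative algorithm above can be sketched in a few lines. The default ∆T of 0.5 and the function name are assumptions of this illustration; the image mean seeds T, as the notes recommend:

```python
import numpy as np

def basic_global_threshold(image, delta_t=0.5):
    """Iterate T = (m1 + m2) / 2 until the change in T is below delta_t."""
    f = image.astype(float)
    t = f.mean()                       # average intensity: good initial T
    while True:
        g1, g2 = f[f > t], f[f <= t]   # segment with the current T
        t_new = 0.5 * (g1.mean() + g2.mean())
        if abs(t_new - t) < delta_t:
            return t_new
        t = t_new
```

On a clean bimodal image with modes at 20 and 200, the threshold converges between the two modes after the first iteration.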
Optimum Global Thresholding
Using Otsu’s Method
➢The method is optimum in the sense that it maximizes the
between-class variance, a well-known measure used in statistical
discriminant analysis.
➢The basic idea is that well- thresholded classes should be distinct
with respect to the intensity values of their pixels and, conversely,
that a threshold giving the best separation between classes in
terms of their intensity values would be the best (optimum)
threshold.
➢In addition to its optimality, Otsu’s method has the important
property that it is based entirely on computations performed on
the histogram of an image, an easily obtainable 1-D array
Otsu’s Algorithm
Step 1: Compute the normalized histogram of the input image.
Denote the components of the histogram by w_i, i = 0, 1, 2, ..., L − 1,
where L is the number of intensity levels.

w_i = (no. of pixels with intensity level i) / (total no. of pixels)

Step 2: Suppose we take a threshold intensity value k, where
0 < k < L − 1, and use it to threshold the input image into two classes
(foreground and background): the background class consists of all the
pixels in the image with intensity values in the range [0, k] and the
foreground class consists of the pixels with values in the
range [k+1, L−1]
Contd..
Step 3: Compute the cumulative sum W_b and cumulative mean μ_b of the
background class:

W_b = Σ_{i=0}^{k} w_i = (total no. of pixels in background) / (total no. of pixels)

μ_b = ( Σ_{i=0}^{k} i · w_i ) / W_b

Step 4: Similarly, compute the cumulative sum W_f and cumulative mean μ_f of
the foreground class:

W_f = Σ_{i=k+1}^{L−1} w_i = (total no. of pixels in foreground) / (total no. of pixels)

μ_f = ( Σ_{i=k+1}^{L−1} i · w_i ) / W_f
Contd..
Step 5: Compute the between-class variance

σ_B²(k) = W_b · W_f · (μ_b − μ_f)²

Step 6: Repeat steps 2 to 5 for all values of k, ie 0 < k < L − 1
Step 7: Obtain the Otsu threshold k* as the value of k for
which σ_B²(k) is maximum. If the maximum is not unique,
obtain k* by averaging the values of k corresponding to the
various maxima detected.
Contd..
Step 8: Compute the average intensity of the entire image (i.e., the global mean),
given by

μ_G = Σ_{i=0}^{L−1} i · w_i

Step 9: Obtain the separability measure, which gives a
quantitative estimate of the separability of the classes, which in turn gives an idea of the
ease of thresholding a given image:

η(k*) = σ_B²(k*) / σ_G²

where σ_G² = Σ_{i=0}^{L−1} (i − μ_G)² · w_i is the global variance
Example: Calculation of between class variance
Consider a 6 (0..5) gray level image with size (6*6)
and its histogram
Contd…
➢The normalized histogram values of each
gray level are
w0 = 9/36
w1 = 6/36
w2 = 4/36
w3 = 5/36
w4 = 8/36
w5 = 4/36
Contd..
➢Suppose we threshold at T = 1 (here we take the assumption that
gray levels greater than the threshold are foreground pixels; smaller or equal
values are taken as background pixels)

T = 1

Background (i <= 1):              Foreground (i > 1):
Intensity  No of pixels  w_i      Intensity  No of pixels  w_i
0          9             9/36     2          4             4/36
1          6             6/36     3          5             5/36
                                  4          8             8/36
                                  5          4             4/36
Contd..

W_b = (9/36) + (6/36) = (9+6)/36 = 15/36
W_f = (4/36) + (5/36) + (8/36) + (4/36) = (4+5+8+4)/36 = 21/36
Contd..
➢Compute the cumulative means

μ_b = ((0·9/36) + (1·6/36)) / (15/36) = ((0·9) + (1·6)) / 15 = 6/15

μ_f = ((2·4/36) + (3·5/36) + (4·8/36) + (5·4/36)) / (21/36)
    = ((2·4) + (3·5) + (4·8) + (5·4)) / 21 = 75/21
Contd..
➢Find the between-class variance for this split (T = 1)

σ_B² = (15/36) · (21/36) · ((6/15) − (75/21))²
     = 2.44
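The worked example above can be checked in code. The sketch below is an illustration (the function name is an assumption); it follows the convention used in the example, with the background being levels [0, k]:

```python
import numpy as np

def between_class_variance(w, k):
    """sigma_B^2(k) for a normalized histogram w, background = levels [0, k]."""
    w = np.asarray(w, dtype=float)
    levels = np.arange(len(w))
    wb, wf = w[:k + 1].sum(), w[k + 1:].sum()
    if wb == 0 or wf == 0:
        return 0.0
    mu_b = (levels[:k + 1] * w[:k + 1]).sum() / wb
    mu_f = (levels[k + 1:] * w[k + 1:]).sum() / wf
    return wb * wf * (mu_b - mu_f) ** 2

# normalized histogram of the 6*6, 6-level example image
w = np.array([9, 6, 4, 5, 8, 4]) / 36.0
```

Evaluating at k = 1 reproduces the 2.44 above; sweeping k over all valid values (steps 6 and 7) then picks out the Otsu threshold for this histogram.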
Tutorial
➢Consider a 6 (0..5) gray level image with size (6*6)
and its histogram

➢Find between class variance when T=3


How to improve thresholding
operation ??
1. By performing image smoothing before thresholding
to remove noise
2. Instead of using the histogram of the original image to
find the threshold value, we can use the histogram of the
edge image (an image showing only the edges) to
find the threshold using Otsu’s method. This
improves performance when an image is composed of a
small object on a large background area (or vice versa)
Multiple Thresholds
➢The thresholding method can be extended to an arbitrary
number of thresholds. ie, instead of using a single threshold
, we can use more than one threshold values to segment the
image into segments.
➢This is because the between class variance measure can be
generalized for any number of classes.
Contd.
➢The general form of the between-class variance for C
classes can be written as

σ_B² = Σ_{k=1}^{C} W_k (μ_k − μ_G)²

where
W_k = Σ_{i∈class k} w_i
μ_k = ( Σ_{i∈class k} i · w_i ) / W_k   (cumulative mean of class k)
μ_G = Σ_{i=0}^{L−1} i · w_i             (global mean)
Contd..
➢Although this method can be used to segment the
image into any number of segments, practically we
use this method only up to 2 threshold values.
➢Applications that require more than two
thresholds generally are solved using more than
just intensity values. Instead, the approach is to use
additional descriptors (e.g., color) and the
application is cast as a pattern recognition problem
Contd..
➢The procedure starts by selecting the first value of k1 (that
value is 1 because looking for a threshold at 0 intensity makes
no sense; also, keep in mind that the increment values are
integers because we are dealing with intensities).
➢Next,k2 is incremented through all its values greater than k1
and less than L-1,and Between class variance for each
combination is calculated
➢Then k1 is incremented to its next value and k2 is
incremented again through all its values greater than k1.This
process repeats until k1=L-3
➢The result of this process is a 2-D array of values σ_B²(k1, k2)
Contd..
➢The last step is to look for the maximum value in this array.
The values of k1 and k2 corresponding to that maximum are
the optimum thresholds k1* and k2*
➢The thresholding can then be performed as

g(x,y) = a if f(x,y) <= k1*
g(x,y) = b if k1* < f(x,y) <= k2*
g(x,y) = c if f(x,y) > k2*
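The exhaustive search over (k1, k2) described above can be sketched as follows, using the generalized between-class variance with C = 3 classes. The function names and the tie-breaking (first maximum found) are assumptions of this illustration:

```python
import numpy as np

def triple_class_variance(w, k1, k2):
    """Between-class variance for three classes split at k1 < k2:
    levels [0, k1], [k1+1, k2] and [k2+1, L-1]."""
    w = np.asarray(w, dtype=float)
    i = np.arange(len(w))
    mu_g = (i * w).sum()                 # global mean
    var = 0.0
    for lo, hi in ((0, k1), (k1 + 1, k2), (k2 + 1, len(w) - 1)):
        wk = w[lo:hi + 1].sum()
        if wk > 0:
            mu_k = (i[lo:hi + 1] * w[lo:hi + 1]).sum() / wk
            var += wk * (mu_k - mu_g) ** 2
    return var

def best_two_thresholds(w):
    """Exhaustive search over all 1 <= k1 < k2 < L-1 (the 2-D array search)."""
    L = len(w)
    return max(((k1, k2) for k1 in range(1, L - 2)
                for k2 in range(k1 + 1, L - 1)),
               key=lambda ks: triple_class_variance(w, *ks))
```

On a histogram with three well-separated modes, the search recovers thresholds that isolate the modes into the three classes.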
Variable Thresholding
➢To improve the result of thresholding , we can use
variable thresholding in which When the value of T
changes over an image.
➢Various techniques for choosing variable
thresholds:
1. Image partitioning
2. Variable thresholding based on local
image properties
3. Using moving averages
Image partitioning
➢One of the simplest approaches to variable thresholding is
to subdivide an image into nonoverlapping rectangles and
apply a thresholding technique (e.g., Otsu’s method) separately to
these rectangles
➢This approach is used to compensate for non-uniformities
in illumination and/or reflectance.
➢One important thing while choosing this method is that, the
rectangles are chosen small enough so that the illumination
of each is approximately uniform.
Example
Contd..
➢Image subdivision generally works well when the objects
of interest and the background occupy regions of reasonably
comparable size
➢When this is not the case as in the image given below, the
method typically fails because of the likelihood of
subdivisions containing only object or background pixels
Variable thresholding based on
local image properties
➢A more general approach than the image subdivision
method is to compute a threshold at every point(x,y) in the
image based on one or more specified properties computed
in a neighborhood of (x,y).

➢Two quantities that are quite useful for determining the local
threshold at a pixel are the mean and standard deviation of the pixel
values in the neighborhood of that pixel, because they are
descriptors of local average intensity and contrast.
Let σ_xy and m_xy denote the standard deviation and mean value of
the set of pixels contained in a neighborhood S_xy centered at coordinates (x,y) in
an image.
The following are common forms of variable, local thresholds:

T_xy = a·σ_xy + b·m_xy

and

T_xy = a·σ_xy + b·m_G

where a and b are nonnegative constants and m_G is the global image mean
Contd..
➢Significant power (with a modest increase in computation) can be added to
local thresholding by using predicates based on the parameters computed in
the neighborhoods of (x,y)
Contd..
➢Although this may seem like a laborious process, modern algorithms
and hardware allow for fast neighborhood processing, especially for
common functions such as logical and arithmetic operations.
Using moving averages
➢A special case of the local thresholding method just
discussed is based on computing a moving average along
scan lines of an image. This implementation is quite useful
in document processing, where speed is a fundamental
requirement.
➢The scanning typically is carried out line by line in a zigzag
pattern to reduce illumination bias.
➢ The moving average at a point at time k+1 is calculated
over the ‘n’ most recent data points.
Contd..
➢Suppose m(k+1) represents the moving average at time k+1. It can be
calculated using the equation

m(k+1) = (1/n) Σ_{i=k+2−n}^{k+1} z_i

➢This can also be represented recursively as

m(k+1) = m(k) + (1/n) (z_{k+1} − z_{k+1−n})

Here z_{k+1−n} represents the intensity at the point going back n points from the
current point
Contd..
➢Because a moving average is computed for every point in
the image, segmentation is implemented using the threshold
T = b · m(x,y), where b is a constant and m(x,y) is the
moving average at point (x, y) in the input image.

➢This addresses the issue of varying light intensity, which
global thresholding methods struggle to handle. Typically,
this algorithm performs well with relatively small or thin
objects, such as text images obscured by
speckles or by sinusoidally varying shading.
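The zigzag scan and running average above can be sketched as follows. The window-list implementation, default n and b, and the function name are assumptions of this illustration:

```python
import numpy as np

def moving_average_threshold(image, n=5, b=0.5):
    """Scan line by line in a zigzag, keep a running average of the last
    n intensities, and threshold each pixel against T = b * m."""
    f = image.astype(float)
    out = np.zeros(f.shape, dtype=np.uint8)
    window = []                       # the n most recent intensities
    for r in range(f.shape[0]):
        # zigzag: even rows left-to-right, odd rows right-to-left
        cols = range(f.shape[1]) if r % 2 == 0 else range(f.shape[1] - 1, -1, -1)
        for c in cols:
            window.append(f[r, c])
            if len(window) > n:
                window.pop(0)
            m = sum(window) / len(window)
            out[r, c] = 1 if f[r, c] > b * m else 0
    return out
```

On a bright page with a dark stroke, the stroke pixels fall below b times the local average and are labeled 0 while the background stays 1, which is the behaviour exploited in document processing.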
Multivariable Thresholding
➢Thus far, we have been concerned with thresholding based on a
single variable: gray-scale intensity.
➢In some cases, a sensor can make available more than one
variable to characterize each pixel in an image, and thus allow
multivariable thresholding.
➢A notable example is color imaging, where red (R), green
(G),and blue (B) components are used to form a composite color
image .
➢In this case, each “pixel” is characterized by three values, and
can be represented as a 3-D vector z = (z1, z2, z3), whose
components are the RGB colors at a point..
Contd..
➢Suppose the threshold values corresponding to z1, z2, z3
are a1, a2, a3 respectively, forming a prototype vector a = (a1, a2, a3)
➢To apply thresholding based on the values of these three
components, we usually use a distance measure.
The commonly used ones are
1. Euclidean distance

D(z, a) = ||z − a|| = [ (z − a)^T (z − a) ]^(1/2)
Contd..
2. Mahalanobis distance

D(z, a) = [ (z − a)^T C^(−1) (z − a) ]^(1/2)

where C is the covariance matrix of the z vectors
Contd..
➢And thresholding is applied as

g(x,y) = 1 if D(z, a) <= T
g(x,y) = 0 if D(z, a) > T

➢where T is a threshold, and it is understood that the distance computation is
performed at all coordinates in the input image to generate the corresponding
segmented values in g. Note that the inequalities in this equation are the opposite
of the inequalities we used for thresholding a single variable.
➢The reason is that the equation D(z, a) = T defines a volume, and it is more intuitive
to think of segmented pixel values as being contained within the volume and background
pixel values as being on the surface or outside the volume.
Region-Based Segmentation
➢Segmentation techniques that are based on
finding the regions directly

1. Region Growing (Bottom-up approach)


2. Region Split-and-merge (Top-down approach)

5/6/2024 97
Region Growing
➢Region growing is a procedure that groups pixels or subregions
into larger regions.
➢The simplest of these approaches is pixel aggregation, which
starts with a set of “seed” points and from these grows regions
by appending to each seed point those neighboring pixels that
have similar properties (such as gray level, texture, color, shape).
➢Region growing based techniques are better than the edge-
based techniques in noisy images where edges are difficult to
detect.

4-connectivity

8-connectivity

Issues in implementing Region
growing approach
1. How to find Seed points??
• Can often be chosen based on the nature of the
problem.
For example, consider an 8-bit
X-ray image of a weld (the horizontal
dark region) containing several cracks
and porosities (the bright regions
running horizontally through the center
of the image). We illustrate the use of
region growing by segmenting the
defective weld regions
Contd..
➢ From the physics of the problem, we know that cracks and porosities will
attenuate X-rays considerably less than solid welds, so we expect the regions
containing these types of defects to be significantly brighter than other parts of
the X-ray image. We can extract the seed points by thresholding the original
image, using a threshold set at a high percentile (here we use the gray-level
value 254). The thresholded image is shown on the right side.
Contd..
➢ If the result of these computations shows clusters of values (not single points),
the pixels whose properties place them near the centroid of these clusters can
be used as seeds. The result is shown below:
Contd..
2. How to select the similarity criteria?
The selection of similarity criteria depends not only on the
problem under consideration, but also on the type of image data available.
For example, the analysis of land-use satellite imagery depends heavily on
the use of color. This problem would be significantly more difficult, or even
impossible, to solve without the inherent information available in color images.
When the images are monochrome, region analysis must be carried out with
a set of descriptors based on intensity levels and spatial properties. An example
can be written as

Q = TRUE  if the absolute difference of the intensities
          between the seed and the pixel at (x,y) is <= T
    FALSE otherwise
Contd..
Also, we have to properly specify the connectivity (4-connectivity, 8-connectivity,
etc.) used for region growing.
Contd..
3. How to formulate a stopping criterion?
The general strategy is that region growth should
stop when no more pixels satisfy the criteria for
inclusion in that region
Region Growing
Basic algorithm

1.Seed Selection: Region growing starts with the selection of


one or more seed points. These seed points are typically chosen
based on some predefined criteria, such as user input or
automatically determined based on certain characteristics of
the image.
2.Pixel Similarity Criteria: A similarity criterion is defined to
determine whether a neighboring pixel should be included in
the growing region. This criterion could be based on intensity
values, color values (for color images), texture features, or any
other relevant image properties.

Contd..

3.Growing Process: Starting from the seed point(s), the algorithm


iteratively examines neighboring pixels. If a neighboring pixel satisfies the
similarity criterion, it is added to the growing region, and its neighbors are
also examined in the subsequent iterations.
4.Termination Condition: The growing process continues until no more
pixels can be added to the region. This termination condition could be
based on reaching the image boundary, exceeding a certain size threshold
for the region, or when no more pixels meet the similarity criterion.
5.Region Formation: Once the growing process is complete, all the pixels
that were included in the region form a connected component,
representing a segmented region in the image.
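The five steps above can be sketched as a breadth-first flood fill using the intensity-difference predicate Q from the similarity-criteria discussion. The function name, the per-seed comparison against the seed's own intensity, and the boolean output mask are assumptions of this illustration:

```python
import numpy as np
from collections import deque

def region_grow(image, seeds, T, connectivity=4):
    """Grow regions from seed points, adding any 4- (or 8-) connected
    neighbour whose absolute intensity difference from its seed is <= T."""
    f = image.astype(float)
    grown = np.zeros(f.shape, dtype=bool)
    if connectivity == 4:
        nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:
        nbrs = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)]
    for seed in seeds:
        q = deque([seed])
        seed_val = f[seed]
        grown[seed] = True
        while q:                              # growing process
            r, c = q.popleft()
            for dr, dc in nbrs:
                rr, cc = r + dr, c + dc
                if (0 <= rr < f.shape[0] and 0 <= cc < f.shape[1]
                        and not grown[rr, cc]
                        and abs(f[rr, cc] - seed_val) <= T):
                    grown[rr, cc] = True
                    q.append((rr, cc))
        # termination: queue empty means no more pixels satisfy Q
    return grown
```

Seeded inside a bright blob on a dark background, the grown mask covers exactly the blob: the predicate rejects every neighbour across the intensity boundary.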
Region Splitting and Merging (Top
down approach)
➢In this approach we subdivide an image initially
into a set of arbitrary, disjoint regions and then
merge and/or split the regions in an attempt to
satisfy the conditions of segmentation.
Region Split-and-Merge
Algorithm
The algorithm operates in two stages:
➢The first stage is the splitting stage.
➢We start with the entire image region R. Suppose Q(R) is a predicate
condition to be satisfied in a region. If Q(R) is false, then
we divide the image into quadrants.
➢Then calculate Q(Ri) for each sub-region. If Q(Ri) is FALSE for any
quadrant, we subdivide that quadrant into subquadrants, and
so on.
➢This process continues until all regions satisfy Q. This
particular splitting technique has a convenient representation
in the form of so-called quadtrees (trees in which each node
has exactly four descendants)
Quadtree Representation

Region Split-and-Merge
Algorithm (contd..)
➢The second stage of the algorithm is merging.
➢If only splitting is used, the final partition normally
contains adjacent regions with identical properties.
➢This drawback can be remedied by allowing merging as
well as splitting.
➢During the merging phase, we check whether the
combination of adjacent regions in the quadtree
satisfies the predicate. If satisfied, then we merge those
regions into a single region.
➢This process continues until no further merging is
possible.
Contd..
➢The algorithm can be summarized as:
1. Split into four disjoint quadrants any region Ri for which Q(Ri) = FALSE.
2. When no further splitting is possible, merge any adjacent regions Rj and
Rk for which Q(Rj ∪ Rk) = TRUE.
3. Stop when no further merging is possible.