
DIP 15EC72 MODULE-2

Q1. What is the importance of image enhancement in image processing? Explain in brief any
two point processing techniques implemented in image processing. (10 marks)

Explain some of the widely used gray-level transformations. (10 marks)

Explain: (i) Contrast stretching (ii) Gray-level slicing (iii) Bit-plane slicing. (06 marks)

Explain the power law transformation and piece-wise linear contrast stretching with a neat
graphical illustration. (10 marks)

Objectives of enhancement

1. To process an image so that the result is more suitable than the original image for a
specific application.
2. The suitableness is up to each application.
3. A method which is quite useful for enhancing an image may not necessarily be the best
approach for enhancing another image.
4. To improve the interpretability or perception of information in images.
5. Provides better input for other automated image processing techniques.

The enhancement doesn't increase the inherent information content of the data, but it increases the
dynamic range of the chosen features so that they can be detected easily.

Spatial domain: Enhancement by point processing

The three basic intensity transformation functions are

1. Linear functions: negative and identity transformations
2. Logarithmic functions: log and inverse-log transformations
3. Power-law functions: nth power and nth root transformations


1. Identity function
• Output intensities are identical to input intensities.
• It is included in the graph for completeness.
2. Image negatives
• Negative transformation: s = L-1-r
• Reverses the intensity levels of an image.
• Suitable for enhancing white or gray detail embedded in dark regions of an image, especially
when the black areas are dominant in size.

Application

Negatives of digital images are useful in numerous applications, such as displaying medical images
and photographing a screen with monochrome positive film with the idea of using the resulting
negatives as normal slides.
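
The notes contain no code, so the following is a minimal Python/NumPy sketch (an assumption, not part of the original material) of the negative transformation s = L-1-r for an 8-bit grayscale image:

```python
# Minimal sketch of the negative transformation s = (L-1) - r,
# assuming an 8-bit grayscale image (L = 256) stored as a uint8 NumPy array.
import numpy as np

def negative(img: np.ndarray, L: int = 256) -> np.ndarray:
    # s = (L - 1) - r, applied element-wise to every pixel
    return (L - 1 - img.astype(np.int32)).astype(np.uint8)

# Example on a tiny synthetic image:
img = np.array([[0, 64, 128, 255]], dtype=np.uint8)
print(negative(img))   # -> [[255 191 127   0]]
```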

3. Log transformation

Log transformation S = C log (1+r)

where C is a constant and r ≥ 0

• Sometimes the dynamic range of a processed image far exceeds the capability of the
display device, in which case only the brightest parts of the images are visible on the
display screen.


• An effective way to compress the dynamic range of pixel values is to apply a log
transformation.

• The log transformation maps a narrow range of low intensity values in the input into a
wider range of output levels.
• It is used to expand the values of dark pixels in an image while the higher-level values are
compressed. The opposite is true of the inverse log transformation.
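
A minimal NumPy sketch of the log transformation s = c log(1 + r) follows (illustrative; the choice of c so that the output spans [0, 255] is an assumption, not stated in the notes):

```python
# Minimal sketch of the log transformation s = c*log(1 + r) for an 8-bit image.
import numpy as np

def log_transform(img: np.ndarray, L: int = 256) -> np.ndarray:
    r = img.astype(np.float64)
    c = (L - 1) / np.log(L)          # scales the maximum input value to L-1
    s = c * np.log(1.0 + r)
    return np.clip(s, 0, L - 1).astype(np.uint8)
```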
4. Power-Law (Gamma) Transformations

Power-law transformations have the basic form

S = C r^γ

where C and γ are positive constants. A variety of devices used for image capture, printing, and
display respond according to a power law. By convention, the exponent in the power-law equation is
referred to as gamma.

Unlike the log function, by changing the value of γ we obtain a family of possible transformations.
The curves generated with values of γ > 1 have exactly the opposite effect as those generated with
values of γ < 1. The process used to correct these power-law response phenomena is called gamma
correction.
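
A minimal NumPy sketch of the power-law (gamma) transformation is given below; normalizing the input to [0, 1] and taking c = 1 are illustrative assumptions:

```python
# Minimal sketch of the power-law (gamma) transformation s = c * r^gamma.
import numpy as np

def gamma_transform(img: np.ndarray, gamma: float, L: int = 256) -> np.ndarray:
    r = img.astype(np.float64) / (L - 1)   # normalize to [0, 1]
    s = np.power(r, gamma)                 # gamma < 1 brightens, gamma > 1 darkens
    return np.round(s * (L - 1)).astype(np.uint8)

# Typical gamma correction for a display with gamma of about 2.2:
# corrected = gamma_transform(img, 1 / 2.2)
```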

Application

Gamma correction is important if displaying an image accurately on a computer screen is of
concern. Gamma correction has become increasingly important as the use of digital images over

the Internet has increased. In addition to gamma correction, power-law transformations are very
useful for general-purpose contrast manipulation.

5. Piecewise-Linear Transformation

A complementary approach to the above-mentioned methods is to use piecewise linear functions.


One of the simplest piecewise-linear functions is the contrast-stretching
transformation.

(i) Contrast-stretching transformation and Thresholding Function

Contrast-stretching is a process that expands the range of intensity levels in an image so that it spans
the full intensity range of the recording medium or display device.

Poor contrast is the most common defect in images; it is caused by a reduced and/or nonlinear
amplitude range or by poor lighting conditions.

Thresholding is one of the segmentation techniques that generates a binary image (a binary image
is one whose pixels have only two values – 0 and 1 and thus requires only one bit to store pixel
intensity) from a given grayscale image by separating it into two regions based on a threshold value.
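
A minimal NumPy sketch of piecewise-linear contrast stretching and of thresholding follows. The breakpoints (r1, s1) and (r2, s2) below are illustrative values chosen here, not values prescribed by the notes:

```python
# Minimal sketch of piecewise-linear contrast stretching and thresholding
# for an 8-bit grayscale NumPy array.
import numpy as np

def contrast_stretch(img, r1=70, s1=10, r2=180, s2=245, L=256):
    r = img.astype(np.float64)
    # Three linear segments: [0, r1], (r1, r2], (r2, L-1]
    s = np.piecewise(
        r,
        [r <= r1, (r > r1) & (r <= r2), r > r2],
        [lambda r: s1 / r1 * r,
         lambda r: s1 + (s2 - s1) / (r2 - r1) * (r - r1),
         lambda r: s2 + (L - 1 - s2) / (L - 1 - r2) * (r - r2)])
    return np.clip(s, 0, L - 1).astype(np.uint8)

def threshold(img, t=128):
    # Produces a binary image: 1 where the pixel exceeds t, 0 elsewhere
    return (img > t).astype(np.uint8)
```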

(ii) Intensity-level slicing /gray level slicing


Fig. (a): This transformation highlights intensities in the range [A, B] and maps all other
intensities to a lower range (producing a binary image).
Fig. (b): This transformation highlights intensities in the range [A, B] and preserves all other
intensities.

When the objective is to highlight a specific range of gray levels in an image then intensity slicing
is preferred. There are several ways of doing level slicing, but most of them are variations of two
basic themes.
i) One approach is to display a high value for all gray levels in the range of interest
and a low value for all other gray levels. This transformation, shown in Fig. (a) above,
produces a binary image.
ii) The second approach is to brighten the desired range of gray levels while preserving
the background and gray-level tonalities in the image. This transformation is shown
in Fig. (b).
Application: Enhancing features such as masses of water in satellite imagery and enhancing flaws
in X-ray images.
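
A minimal NumPy sketch of the two slicing variants described above (the function names are mine, for illustration only):

```python
# Minimal sketch of the two intensity-level slicing variants for an 8-bit
# image, with a range of interest [A, B].
import numpy as np

def slice_binary(img, A, B, high=255, low=0):
    # Variant (a): high value inside [A, B], low value elsewhere -> binary image
    return np.where((img >= A) & (img <= B), high, low).astype(np.uint8)

def slice_preserve(img, A, B, high=255):
    # Variant (b): brighten [A, B], leave all other intensities unchanged
    out = img.copy()
    out[(img >= A) & (img <= B)] = high
    return out
```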

(iii) Bit-plane slicing


• Bit-plane slicing is the decomposition of an image into a set of binary images (bit planes);
processing these binary planes individually can be done with very low time complexity.
• Digitally, an image is represented in terms of pixels, and these pixels can be expressed further
in terms of bits. Separating a digital image into its bit planes is useful for analyzing the
relative importance of each bit in the image, a process that aids in determining the adequacy
of the number of bits used to quantize each pixel. This type of decomposition is also useful for image
compression.
• Each pixel in an image is represented by 8 bits. Imagine the image is composed of 8, 1-bit
planes ranging from bit plane 0 (LSB) to bit plane 7 (MSB).
• Plane 0 contains the lowest-order bit of every pixel in the image, and plane 7 contains the
highest-order bits.
• For example, bit plane 7 can be obtained by processing the input image with a thresholding gray-
level transformation function that maps all levels between 0 and 127 to one level (e.g. 0)
and maps all levels from 128 to 255 to another (e.g. 255); see the sketch below.
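
A minimal NumPy sketch of extracting a bit plane from an 8-bit image:

```python
# Minimal sketch of bit-plane slicing for an 8-bit image: plane k is the
# k-th bit of every pixel (0 = LSB, 7 = MSB).
import numpy as np

def bit_plane(img: np.ndarray, k: int) -> np.ndarray:
    # Extract bit k of each pixel; the result is a binary image of 0s and 1s
    return (img >> k) & 1

# Plane 7 is equivalent to thresholding at 128:
# (bit_plane(img, 7) == (img >= 128)).all() is True for uint8 images
```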

Q2. What is a histogram? How does histogram of the following image look like:

(i) Dark Image (ii) Bright image (iii) Low contrast image (iv) High contrast Image

Histograms are

• simple to calculate
• Give information about the kind (global appearance) of image and its properties.
• Used for Image enhancement


• Used for Image compression


• Used for Image segmentation

• Can be used for real time processing

• Can be used for image statistics

The histogram of a digital image with L total possible intensity levels in the range [0,L-1] is defined
as the discrete function:

h(rk) = nk

Where rk is the kth intensity level in the interval [0,L-1]

nk is the number of pixels in the image whose intensity level is rk

Example: consider a 4X4 image

2 3 3 2
4 2 4 3
3 2 3 5
2 4 2 4

rk   nk   nk/16
2    6    6/16 = 0.375
3    5    5/16 = 0.3125
4    4    4/16 = 0.25
5    1    1/16 = 0.0625

Normalized histograms: can be obtained by dividing all elements of h(rk ) by the total number
of pixels in the image:

P(rk) = nk / (M.N)


for k = 0, 1, 2, …, L-1

where M.N is the total number of pixels in the image

k nk P(rk)
2 6 6/16= 0.375
3 5 5/16= 0.3125
4 4 4/16= 0.25
5 1 1/16= 0.0625
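
A minimal NumPy sketch reproducing this table (the use of np.bincount is my choice, for illustration):

```python
# Minimal sketch of computing h(rk) = nk and the normalized histogram
# P(rk) = nk / (M*N) for the 4x4 example above.
import numpy as np

img = np.array([[2, 3, 3, 2],
                [4, 2, 4, 3],
                [3, 2, 3, 5],
                [2, 4, 2, 4]])

L = 8                                        # assume gray levels 0..7 for this example
h = np.bincount(img.ravel(), minlength=L)    # nk for each level k
p = h / img.size                             # normalized histogram P(rk)
print(h)   # counts: 0, 0, 6, 5, 4, 1, 0, 0
print(p)   # values: 0, 0, 0.375, 0.3125, 0.25, 0.0625, 0, 0
```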

The histogram of a dark image has its components concentrated on the low (dark) side of the
intensity scale, while the histogram of a bright image is biased toward the high side of the scale.

A low contrast image will result in a histogram with a large volume of pixels concentrated along a
relatively narrow range of tones.

A high contrast image will often produce a histogram with a broad distribution along the tonal
range, or several narrow prominences set far apart.

Q3.


Q2. Explain histogram equalization technique for image enhancement. Also give the digital
formulation for the same. (10 marks)

--------------------------------------------------------------------------------------------------------------

Histogram Equalization

Histogram equalization is a method which increases the dynamic range of the gray levels in a low-
contrast image so that they cover the full range of gray levels.

Histogram equalization is achieved with a transformation function T(r), which can be defined as the
Cumulative Distribution Function (CDF) of the Probability Density Function (PDF) of the gray levels
in a given image (the histogram of an image can be considered an approximation of the PDF of that
image).

Continuous Case

The intensity levels in an image may be viewed as random variables in the interval [0, L -1]. A
fundamental descriptor of a random variable is its probability density function (PDF).

Let Pr (r) and Ps (s) denote the probability density functions of r and s. A fundamental result from
basic probability theory is that if Pr (r) and T(r) are known, and T(r) is continuous and differentiable
over the range of values of interest, then the PDF of the transformed variable s can be obtained
using the formula.

Ps(s) = Pr(r) |dr/ds| ............................................. (1)

s = T(r) = (L − 1) ∫₀ʳ Pr(w) dw ................................. (2)   (area under the PDF curve)


where w is a dummy variable of integration.

ds/dr = dT(r)/dr .................................................. (3)   (Leibniz rule)

      = (L − 1) d/dr [ ∫₀ʳ Pr(w) dw ] ............................ (4)

      = (L − 1) Pr(r) ............................................. (5)

Substituting equation (5) into equation (1):

Ps(s) = Pr(r) |dr/ds|
      = Pr(r) | 1 / ((L − 1) Pr(r)) | ............................. (6)
      = 1 / (L − 1),   0 ≤ s ≤ L − 1 .............................. (7)

Equation (7) shows that Ps(s) is always uniform, independent of the form of Pr(r).

1. Problem

For a given 4x4 image having gray scales between [0,7] perform histogram equalization and draw
the histogram of the image before and after equalization.

2 3 3 2
4 2 4 3
3 2 3 5
2 4 2 4


rk (gray level)   nk   Pr(rk) = nk/(M.N)   sk       Rounded sk
0                 0    0                   0        0
1                 0    0                   0        0
2                 6    0.3750              2.625    3
3                 5    0.3125              4.8125   5
4                 4    0.2500              6.5625   7
5                 1    0.0625              7        7
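
A minimal NumPy sketch of discrete histogram equalization reproducing this worked example (the mapping sk = round((L-1) × cumulative sum of Pr(rj)) follows the table above; the helper name is mine):

```python
# Minimal sketch of discrete histogram equalization for the 4x4 example, L = 8.
import numpy as np

def equalize(img: np.ndarray, L: int) -> np.ndarray:
    h = np.bincount(img.ravel(), minlength=L)
    p = h / img.size                                 # Pr(rk)
    cdf = np.cumsum(p)                               # cumulative distribution
    s = np.round((L - 1) * cdf).astype(img.dtype)    # mapping rk -> sk
    return s[img]                                    # apply the mapping to every pixel

img = np.array([[2, 3, 3, 2],
                [4, 2, 4, 3],
                [3, 2, 3, 5],
                [2, 4, 2, 4]])
print(equalize(img, L=8))
# Levels 2, 3, 4, 5 map to 3, 5, 7, 7, as in the table above.
```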

Q3. What is histogram Matching? Explain the development and implementation of the
method. (10 marks)

Histogram Matching

• Histogram equalization yields a uniform pdf only.


• What if we want to obtain a histogram other than uniform?
• In this case, we specify (1) the input image and (2) the desired histogram; this is histogram
matching. The histogram specification (histogram matching) method develops a gray-level
transformation such that the histogram of the output image matches the pre-specified
histogram of a target image.



Problem-1




Problem-2

Q4. Explain Local histogram processing (4 marks)

Local histogram processing

• Histogram equalization/specification are global methods.


• The intensity transformation is computed using pixels from the entire image.
• Global transformations are not appropriate for enhancing small details in an image. The
number of pixels in these areas might be very small, contributing very little to the
computation of the global transformation.

Steps to be followed for Local histogram processing


1. Define a window (neighborhood) and move its center from pixel to pixel.
2. At each location, compute the histogram of the points in the neighborhood and obtain the
histogram equalization or histogram specification transformation.
3. Map the intensity of the pixel centered in the neighborhood.
4. Move to the next location and repeat the procedure.
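
A minimal NumPy sketch of this procedure (local histogram equalization, remapping only the centre pixel of each window; window size, padding mode and function name are illustrative choices):

```python
# Minimal sketch of local histogram equalization with a sliding window,
# assuming an 8-bit image.
import numpy as np

def local_hist_eq(img: np.ndarray, win: int = 3, L: int = 256) -> np.ndarray:
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            block = padded[i:i + win, j:j + win]              # local neighborhood
            p = np.bincount(block.ravel(), minlength=L) / block.size
            cdf = np.cumsum(p)
            # map only the centre pixel using the local transformation
            out[i, j] = np.round((L - 1) * cdf[img[i, j]])
    return out
```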

Q5. Explain the basic concept of spatial filtering in image enhancement. (5 marks)

Basics of Spatial Filtering

• The concept of filtering has its roots in the use of the Fourier transform for signal processing
in the so-called frequency domain.
• The term spatial filtering refers to filtering operations that are performed directly on the pixels of
an image.

Mechanics of spatial filtering

• The process consists simply of moving the filter mask from point to point in an image.
• At each point (x, y), the response of the filter is calculated using a predefined
relationship.

Linear spatial filtering


• The result is the sum of products of the mask coefficients with the corresponding pixels
directly under the mask
• The coefficient w(0,0) coincides with image value f(x,y), indicating that the mask is
centered at (x,y) when the computation of sum of products takes place.
• For a mask of size mxn, we assume that m=2a+1 and n=2b+1, where a and b are nonnegative
integers. Then m and n are odd.
• In general, linear filtering of an image f of size MxN with a filter mask of size mxn is given
by the expression:

g(x, y) = Σ (s = −a to a) Σ (t = −b to b) w(s, t) f(x + s, y + t)
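
A minimal NumPy sketch of this sum-of-products operation (zero padding at the borders is my assumption; the notes do not specify a border treatment):

```python
# Minimal sketch of linear spatial filtering (correlation):
# g(x,y) = sum_s sum_t w(s,t) f(x+s, y+t) for an m x n mask, m=2a+1, n=2b+1.
import numpy as np

def linear_filter(f: np.ndarray, w: np.ndarray) -> np.ndarray:
    a, b = w.shape[0] // 2, w.shape[1] // 2
    padded = np.pad(f.astype(np.float64), ((a, a), (b, b)), mode='constant')
    g = np.zeros_like(f, dtype=np.float64)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            region = padded[x:x + w.shape[0], y:y + w.shape[1]]
            g[x, y] = np.sum(w * region)   # sum of products of coefficients and pixels
    return g
```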


• The process of linear filtering is similar to the frequency-domain concept called
“convolution”.

Simplify expression

R = w1 z1 + w2 z2 + … + wmn zmn = Σ (k = 1 to mn) wk zk

For a mask of size 3x3:

W1 W2 W3
W4 W5 W6
W7 W8 W9

R = w1 z1 + w2 z2 + … + w9 z9 = Σ (k = 1 to 9) wk zk

where the w’s are the mask coefficients and the z’s are the values of the image gray levels corresponding
to those coefficients.

Spatial filters can be classified as

1) linear filters
2) nonlinear filters

Nonlinear spatial filtering

• Nonlinear spatial filters also operate on neighborhoods, and the mechanics of sliding a
mask past an image are the same as was just outlined.
• The filtering operation is based directly on the values of the pixels in the neighborhood
under consideration, rather than explicitly using coefficients in a sum-of-products manner.

Q7. Explain smoothing filters in spatial domain. (5 marks)


Smoothing Spatial Filters

Smoothing filters are used for blurring and for noise reduction.
• Blurring is used in preprocessing steps, such as removal of small details from an image prior to
object extraction, and bridging of small gaps in lines or curves.
• Noise reduction can be accomplished by blurring.

Types of Smoothing Filter

There are 2 types of smoothing spatial filters

1. Linear filters – based on operations (sums of products) performed on the image pixels.
2. Order-statistics (nonlinear) filters – based on ranking the pixels.

The output of a linear smoothing (averaging) spatial filter is simply the average of the pixels contained
in the neighborhood of the filter mask. The idea is to replace the value of every pixel in an image by
the average of the gray levels in the neighborhood defined by the filter mask.

This process results in an image with reduced sharp transitions in intensities.

There are two such masks:
• The standard averaging filter
• The weighted averaging filter

Averaging Filter
A major use of averaging filters is the reduction of irrelevant detail in an image. An mxn mask has
a normalizing constant equal to 1/(mn). It is also known as a low-pass filter. A spatial averaging
filter in which all coefficients are equal is called a box filter. The value of every pixel in the image is
replaced by the average of the gray levels in the neighborhood defined by the filter mask.
3x3 standard average filter (x 1/9):
1 1 1
1 1 1
1 1 1

3x3 weighted average filter (x 1/16):
1 2 1
2 4 2
1 2 1
• Used for blurring and noise reduction
• Used to remove small details in an image
• Used to reduce false contours
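
A minimal NumPy sketch of the two smoothing masks above; applying them reuses the linear_filter sketch from the spatial-filtering section (an assumption of this write-up, not part of the notes):

```python
# Minimal sketch of the 3x3 standard (box) and weighted averaging masks.
import numpy as np

box_3x3 = np.ones((3, 3)) / 9.0               # standard average (box) filter
weighted_3x3 = np.array([[1, 2, 1],
                         [2, 4, 2],
                         [1, 2, 1]]) / 16.0   # weighted average filter

# Example usage with the earlier linear_filter sketch:
# smoothed = linear_filter(img, box_3x3)
```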


Q8. Explain the following order statistics filters in spatial domain. (8 marks)

Nonlinear filter/ order-Statistics Filter


Order-statistics filters are nonlinear spatial filters. They are based on ordering (ranking) the pixels
contained in the image area encompassed by the filter and replacing the value of the center pixel
with the value determined by the ranking result.
• The filter selects one value from the ranked samples in the window.
• Edges are better preserved than with linear filters.
• Best suited for “salt-and-pepper” noise.

Types of order-statistics filters

Different types of order-statistics filters are

1. Minimum filter
2. Maximum filter
3. Median filter

Median filter

• The median filter is a sliding-window spatial filter.

• It replaces the value of the center pixel with the median of the intensity values in the
neighborhood of that pixel.

• Median filtering is a nonlinear operation often used in image processing to reduce "salt and
pepper" noise. A median filter is more effective than convolution when the goal is to
simultaneously reduce noise and preserve edges.

• Median filters are particularly effective in the presence of impulse noise, also called
‘salt-and-pepper’ noise because of its appearance as white and black dots superimposed on an
image.

• For every pixel, a 3x3 neighborhood with the pixel as center is considered. In median
filtering, the value of the pixel is replaced by the median of the pixel values in the 3x3
neighborhood.


Example

• Arrange the nine neighborhood intensities in ascending order:
1, 1, 1, 3, 3, 3, 7, 7, 8
• Find the median value (the 5th of the nine sorted values):
the median value = 3
• Replace the center pixel, i.e. intensity 8, by 3:
8 → 3
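
A minimal NumPy sketch of 3x3 median filtering (edge-replication padding is my choice for illustration):

```python
# Minimal sketch of 3x3 median filtering: each pixel is replaced by the
# median of its 3x3 neighborhood.
import numpy as np

def median_filter(img: np.ndarray, win: int = 3) -> np.ndarray:
    pad = win // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            block = padded[i:i + win, j:j + win]
            out[i, j] = np.median(block)   # rank the 9 values and take the middle one
    return out
```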

Q9. Compute the median value of the marked pixels shown in figure. Using 3X3 mask (4
marks)

Step 1.18,19,22,22,24,32,33,34,128 median value is 24, replace 128 by 24

Step 2. 19,22,24,24,25,31,32,33,172 median value is 25, replace 24 by 25

Step 3. 25,25,26,28,31,32,32,33,172 median value is 31, replace 172 by 31

Step 4. 23,24,25,26,26,28,31,31,32, median value is 26, replace 26 by 26

Q10. Explain sharpening spatial filter. (6 marks)

Sharpening filter

• Used to highlight fine detail in an image or to enhance detail that has been blurred.


• Sharpening is, in a sense, the inverse of averaging: it is accomplished by spatial
differentiation, which measures the difference between a pixel and its neighborhood.

Difference operator

• The strength of the response of a derivative operator is proportional to the degree of
discontinuity of the image at the point at which the operator is applied.
• Image differentiation enhances edges and other discontinuities (including noise) and
de-emphasizes areas with slowly varying gray-level values.

First and second order difference of 1D

The basic definition of the first-order derivative of a one-dimensional function f(x) is the difference

df/dx = f(x + 1) − f(x)

The second-order derivative of a one-dimensional function f(x) is the difference

d²f/dx² = f(x + 1) + f(x − 1) − 2 f(x)

Example


Illustration of first order and second order derivative of a 1-D function representing
horizontal intensity profile of an image

First-order derivative

• Generally produces thicker edges in an image
• Generally has a stronger response to a gray-level step
• Zero in flat segments
• Nonzero at the onset of a gray-level step or ramp
• Nonzero along the entire ramp

Second-order derivative

• Has a stronger response to fine detail (thin lines, isolated points)
• Zero in flat segments
• Nonzero at the onset and end of a gray-level step or ramp
• Zero along ramps of constant slope

Comparison between first-order and second-order derivatives

The first-order derivative is nonzero along the entire ramp, while the second-order derivative is nonzero
only at the onset and end of the ramp. For an isolated point, the response at and around the point is
much stronger for the second-order derivative than for the first-order derivative.
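
A minimal NumPy sketch computing the first and second differences of an illustrative 1-D intensity profile (the profile values below are made up to show a ramp and an isolated point):

```python
# Minimal sketch of the first and second differences of a 1-D profile,
# using the definitions df/dx = f(x+1)-f(x) and d2f/dx2 = f(x+1)+f(x-1)-2f(x).
import numpy as np

f = np.array([6, 6, 6, 5, 4, 3, 2, 2, 2, 8, 2, 2], dtype=float)  # ramp, then an isolated point
first = f[1:] - f[:-1]                     # f(x+1) - f(x)
second = f[2:] + f[:-2] - 2 * f[1:-1]      # f(x+1) + f(x-1) - 2 f(x)
print(first)    # nonzero along the entire ramp and at the isolated point
print(second)   # nonzero only at the onset/end of the ramp and (strongly) at the point
```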


Q11 Explain Laplacian spatial filter. (10 marks)

Sharpening using a Laplacian operator

It is a derivative operator:

– It highlights gray-level discontinuities in an image.

– It de-emphasizes regions with slowly varying gray levels.

It tends to produce images that have grayish edge lines and other discontinuities, all superimposed
on a dark, featureless background.

The Laplacian operator is given by

∇²f = ∂²f(x, y)/∂x² + ∂²f(x, y)/∂y²

∂²f(x, y)/∂x² = f(x + 1, y) + f(x − 1, y) − 2 f(x, y)

∂²f(x, y)/∂y² = f(x, y + 1) + f(x, y − 1) − 2 f(x, y)

∇²f(x, y) = f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4 f(x, y)


Laplacian mask using the 4 neighbors:

0  1  0
1 -4  1
0  1  0

Laplacian mask using the diagonal neighbors:

1  0  1
0 -4  0
1  0  1


Laplacian mask implemented using 4 and diagonal neighbors

1  1  1
1 -8  1
1  1  1
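
A minimal NumPy sketch of Laplacian sharpening with the 4-neighbor mask above, reusing the linear_filter sketch from the spatial-filtering section (an assumption of this write-up, not part of the notes):

```python
# Minimal sketch of Laplacian sharpening: g = f - laplacian(f) when the
# centre coefficient of the mask is negative.
import numpy as np

laplacian_4 = np.array([[0,  1, 0],
                        [1, -4, 1],
                        [0,  1, 0]], dtype=float)

def laplacian_sharpen(img: np.ndarray) -> np.ndarray:
    f = img.astype(np.float64)
    lap = linear_filter(f, laplacian_4)   # second-derivative response
    g = f - lap                           # subtract because the centre coefficient is negative
    return np.clip(g, 0, 255).astype(np.uint8)
```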

Q12. Explain High boost filtering. (6 marks)

Unsharp masking and High boost filtering

• The high-boost filter can be used to enhance the high-frequency components while still keeping
the low-frequency components.

• A high-boost filter is composed of an all-pass filter and an edge-detection filter (such as the
Laplacian). Thus, it emphasizes edges and results in a sharpened image.

• The high-boost filter is a simple sharpening operator in signal and image processing.

• It is used for amplifying the high-frequency components of signals and images. The
amplification is achieved via a procedure which subtracts a smoothed (blurred) version of the
image from the original one.
• The unsharp masking filter (high-boost filter) removes the blurred parts and enhances the
edges.

Steps for unsharp masking and high-boost filtering

1. Blur the original image.
2. Subtract the blurred image from the original image (the result is called the mask).
3. Add the mask to the original image.

Let f(x, y) denote the original image and f̄(x, y) denote the blurred image.

g_mask(x, y) = f(x, y) − f̄(x, y)

g(x, y) = f(x, y) + k · g_mask(x, y)


k = 1: unsharp masking.

When k > 1 the process is referred to as high-boost filtering.

Choosing k < 1 de-emphasizes the contribution of the unsharp mask.
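
A minimal NumPy sketch of unsharp masking and high-boost filtering; blurring with the 3x3 box filter and reusing the linear_filter sketch defined earlier are illustrative assumptions:

```python
# Minimal sketch of unsharp masking (k = 1) and high-boost filtering (k > 1).
import numpy as np

def high_boost(img: np.ndarray, k: float = 1.0) -> np.ndarray:
    f = img.astype(np.float64)
    blurred = linear_filter(f, np.ones((3, 3)) / 9.0)   # step 1: blur the original
    mask = f - blurred                                   # step 2: g_mask = f - f_blurred
    g = f + k * mask                                     # step 3: add k times the mask
    return np.clip(g, 0, 255).astype(np.uint8)

# k = 1 gives unsharp masking; k > 1 gives high-boost filtering.
```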

