
Image Enhancement

Introduction
• The objective of enhancement is to process
an image so that it appears more suitable
than the original image for a specific application.
• Approaches fall into two categories:
– Spatial domain based approach
• Direct manipulation of pixel values
– Frequency domain based approach
• Transform the image into the Fourier/DCT domain and then
apply the method
Introduction
• During digitization, transmission,
scanning, etc., some form of
degradation appears at the output.
– If the reason behind this is known, then the
original image can be recovered by the
inverse operation. This is called image restoration.
– If no such knowledge or information is available,
then the quality can still be improved for some
application by suitable processing. This is known
as image enhancement.
Introduction
• Enhancement of quality => increasing the
dominance of some features (at the cost
of suppressing some other features).
• The enhancement techniques are divided
into three categories:
– Contrast intensification
– Noise cleaning or smoothing
– Edge sharpening
Simple processing
• Transpose
– B(i,j) = A(j,i)
Simple processing
• Flip Vertical
– B(i,M-1-j) = A(i,j)
Simple processing
• Cropping
B(k,l) = A(n1+k,n2+l); k= 0 to N1-1 and l=0 to N2-1
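These simple operations map directly to array indexing. A minimal sketch in Python/NumPy (not from the slides; the array A and the crop parameters n1, n2, N1, N2 are illustrative):

import numpy as np

A = np.random.randint(0, 256, (4, 6), dtype=np.uint8)   # illustrative N x M image

B_transpose = A.T              # B(i,j) = A(j,i)
B_flip      = A[:, ::-1]       # B(i, M-1-j) = A(i,j)
B_crop      = A[1:1+2, 2:2+3]  # B(k,l) = A(n1+k, n2+l) with n1=1, n2=2, N1=2, N2=3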
Sample mean and variance
1 N 1 M 1
• Mean (ma) = 
MN i 0 j 0
A(i, j )

• Std. dev(σ) = N 1 M 1 2

  A(i, j)  m 
1
a
MN i 0 j 0
Principal Objective of
Enhancement
• Process an image so that the result will be
more suitable than the original image for a
specific application.

• Suitability is judged by each application.

• A method which is quite useful for enhancing
one image may not necessarily be the best
approach for enhancing another image.
Application feedback process
Domain of enhancement
• Spatial Domain : (image plane)
– Techniques are based on direct manipulation
of pixels in an image
• Frequency Domain :
– Techniques are based on modifying the
Fourier transform of an image
• There are some enhancement techniques
based on various combinations of
methods from these two categories.
Image enhancement domain
Spatial Domain Methods
The value of a pixel at location (x,y) in the enhanced image is
the result of performing some operation on the pixels in the
neighborhood of (x,y) in the input image.

[Diagram: input image f → neighborhood around (x,y) → operator T → output image g(x,y)]


For computational reasons the neighborhood is usually
square but it can be any shape.
Image enhancement
• There is no general theory of image
enhancement; viewer is the ultimate judge.
• The spatial domain process is expressed
by
g(x,y) = T[f(x,y)] where f is the input image
and g is the processed image. T is an
operator on f, defined on some neighbors
of (x,y).
The neighborhood can be a square or rectangular area
centered at (x,y);
choices of neighborhood
Enhancement techniques
• Point operations : Each pixel is modified by an
equation that is not dependent on other pixel
values
• Mask operations : Each pixel is modified
according to the values in a small neighborhood
(subimage)
• Global operations : All pixel values in the image
are taken into consideration
Spatial domain processing methods include all three
types, whereas frequency domain operations are
global operations.
Point Processing
• We will now utilize a “function” g(l) (l = 0,…,255) to
generate a new image B from a given image A via:
B(i,j) = g(A(i, j)); i = 0,…,N-1; j = 0,…,M-1
• The function g(l) operates on each image pixel or
each image point independently.
• In general the resulting image g(A(i,j)) may not be
an image matrix, i.e., it may be the case that
g(A(i,j)) does not belong to {0,…,255} for some
(i,j).
• Thus we will have to make sure we obtain an
image B that is an image matrix.
• The histograms hA(l) and hB(l) will play important
roles.
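A minimal sketch of applying a point function g(l) and then clipping the result back into {0,…,255} so B is a valid image matrix (Python/NumPy assumed; the brightness-shift g used here is illustrative):

import numpy as np

def apply_point_function(A, g):
    # Apply g to every pixel independently, then clip to a valid image matrix.
    B = g(A.astype(np.float64))
    return np.clip(np.round(B), 0, 255).astype(np.uint8)

A = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
B = apply_point_function(A, lambda l: l + 40)   # illustrative point function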
Contrast Enhancement (point
processing)
Produce higher contrast
than the original by
• darkening the levels
below m in the
original image
• brightening the levels
above m in the
original image
Thresholding (point processing)
• Produce a two-level
(binary) image
• Before transformation
pixel value is r and
after transformation
value is s.
Basic gray level transformation
• Linear function:
Negative and
identity
transformations
• Logarithm function:
Log and inverse-log
transformation
• Power-law function:
nth power and nth
root transformation
functions
Identity point function
• Let g(l) = l (l = 0,…,255).
B(i,j) = g(A(i,j)); i = 0,…,N-1; j = 0,…,M-1
= A(i,j)
• In this case g(A(i,j)) belongs to {0,…,255}
so no further processing is necessary to
ensure B is an image matrix.
• Note also that B = A and hence hA(l)=hB(l).
Digital Negative
• Let g(l) = 255-l (l = 0,…,255).
• B(i,j) = g(A(i,j)) = 255 − A(i,j);
i = 0,…,N-1; j = 0,…,M-1
• Suitable for enhancing
white or gray detail
embedded in dark
regions of an image,
especially when the black
area is dominant in size.
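A one-line sketch of the digital negative in Python/NumPy (assumed, not from the slides):

import numpy as np

A = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
B = 255 - A        # g(l) = 255 - l; the result stays in {0,...,255}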
Digital Negative Contd.
Application of digital negation
[Figure: original mammogram and its negative image; the negative gives a better view for analyzing the image.]
Log Transformations
• s = c log (1+r) c is a
constant and r >=0
• Log curve maps a
narrow range of low
gray-level values in the
input image into a wider
range of output levels.
• Used to expand the
values of dark pixels in
an image while
compressing the higher
level values.
Log Transformations
• Compresses the dynamic range of
images with large variations in pixel
values.
• Example of image with dynamic range:
Fourier spectrum image.
• It can have an intensity range from 0 to 10^6
or higher.
• We can’t see the significant degree of
detail as it will be lost in the display.
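A minimal sketch of the log transform, with c chosen so the output spans [0, 255] (Python/NumPy assumed; this choice of c is one common convention, not prescribed by the slides):

import numpy as np

def log_transform(A):
    r = A.astype(np.float64)
    c = 255.0 / np.log(1.0 + max(r.max(), 1.0))   # scale so the maximum maps to 255
    s = c * np.log(1.0 + r)                       # s = c * log(1 + r)
    return np.clip(np.round(s), 0, 255).astype(np.uint8)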
Example of Logarithm Image
Inverse Logarithm Transformations
• Do opposite to the Log Transformations
• Used to expand the values of high pixels
in an image while compressing the darker-
level values.
Power-Law Transformations
• s = c·r^γ or s = c·(r+ε)^γ
• c and γ are positive constants
• Power-law curves with fractional values of
γ map a narrow range of dark input values
into a wider range of output values, with
the opposite being true for higher values of
input levels.
• c = γ = 1 Identity function
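A minimal sketch of the power-law (gamma) transformation with r normalized to [0, 1] (Python/NumPy assumed; the c and gamma values are illustrative):

import numpy as np

def power_law(A, c=1.0, gamma=0.5):
    r = A.astype(np.float64) / 255.0        # normalize r to [0, 1]
    s = c * np.power(r, gamma)              # s = c * r^gamma
    return np.clip(np.round(255.0 * s), 0, 255).astype(np.uint8)

# gamma < 1 expands dark values (brightens); gamma > 1 compresses them (darkens)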
Power function with c = 1
Output of the power function
[Figure: original image and outputs for gamma = 3.0 and gamma = 4.0.]
Linear transformations
• Identity function, negative function
(single linear functions)
• Piecewise-Linear Transformation
Functions
– Advantage
• The form of piecewise functions can be arbitrarily
complex
– Disadvantage:
• Their specification requires considerably more user
input
Piecewise-Linear Transformation
Functions
• s = T(r), built from three linear segments with
break points l1 and l2:
s = alpha1 * l                                           for 0 <= l < l1
s = alpha2 * (l - l1) + alpha1 * l1                      for l1 <= l < l2
s = alpha3 * (l - l2) + alpha2 * (l2 - l1) + alpha1 * l1 for l2 <= l <= L-1
• A slope alpha > 1 stretches the corresponding range,
alpha < 1 compresses it, and alpha = 1 leaves it
unchanged (identity).
Contrast Stretching
• increase the dynamic
range of the gray
levels in the image
• a low-contrast image :
result from poor
illumination, lack of
dynamic range in the
imaging sensor, or
even a wrong setting of
the lens aperture during
image acquisition.
Contrast Stretching
• The locations (r1,s1) and (r2,s2) control the
shape of the transformation.
– If r1 = s1 and r2 = s2, the transformation is
the identity function.
– If r1 = r2, s1 = 0 and s2 = L-1, it becomes a
thresholding function.
Contrast Stretching
• In general r1 <= r2 and s1 <= s2
– This gives a single-valued and
monotonically increasing function
– It preserves the order of gray levels and
reduces the chance of artifacts
– Normally, r1 = rmin, s1 = 0
and r2 = rmax, s2 = L-1
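A minimal sketch of this piecewise-linear contrast stretch controlled by (r1,s1) and (r2,s2) (Python/NumPy assumed; not from the slides):

import numpy as np

def contrast_stretch(A, r1, s1, r2, s2, L=256):
    # Piecewise-linear mapping through (0,0), (r1,s1), (r2,s2), (L-1,L-1).
    r = A.astype(np.float64)
    s = np.empty_like(r)
    lo, hi = r <= r1, r >= r2
    mid = ~lo & ~hi
    s[lo]  = (s1 / max(r1, 1)) * r[lo]
    s[mid] = s1 + (s2 - s1) * (r[mid] - r1) / max(r2 - r1, 1)
    s[hi]  = s2 + (L - 1 - s2) * (r[hi] - r2) / max(L - 1 - r2, 1)
    return np.clip(np.round(s), 0, L - 1).astype(np.uint8)

# Typical usage: contrast_stretch(A, A.min(), 0, A.max(), 255)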
Piecewise-Linear discontinuous
Transformation Functions

0 to l1
0 to l1 – l2
0 to L-1– l2
Gray-level slicing
• Highlighting a specific
range of gray levels in an
image
– Display a high value of all
gray levels in the range of
interest
• One transformation
highlights the range [A,B] of
gray levels and reduces
all others to a constant
level
• Another transformation
highlights the range [A,B]
but preserves all other
levels
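A minimal sketch of both slicing variants (Python/NumPy assumed; the parameter names are illustrative):

import numpy as np

def slice_gray_levels(img, lo, hi, high_val=255, preserve_background=True):
    # Highlight pixels whose gray level lies in [lo, hi].
    in_range = (img >= lo) & (img <= hi)
    if preserve_background:
        out = img.copy()           # variant (b): keep all other levels unchanged
    else:
        out = np.zeros_like(img)   # variant (a): push all other levels to a constant
    out[in_range] = high_val
    return out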
Results
[Figure: input image and the gray-level-sliced output.]
Bit-plane slicing
• Highlighting the
contribution made to total
image appearance by
specific bits
• Suppose each pixel is
represented by 8 bits
• Higher-order bits contain
the majority of the visually
significant data
• Useful for analyzing the
relative importance
played by each bit of the
image
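A minimal sketch of extracting a single bit plane (Python/NumPy assumed):

import numpy as np

def bit_plane(img, k):
    # Bit plane k (0 = least significant, 7 = most significant), scaled to 0/255 for display.
    return ((img >> k) & 1) * 255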
Bit-plane slicing
[Figure: original image and its eight bit planes, from plane 7 (most significant) down to plane 0.]
Simple Image Statistics-
Histogram
• Let S be a set and define #S as the cardinality of
this set
– The histogram hA(l) (l=0,…,255) of the image A is
defined as:
hA(l) = #{(i,j) | A(i,j)=l, i=0,…,N-1; j=0,…,M-1}
Σ_{l=0..255} hA(l) = number of pixels in A

for (l = 0; l < 256; l++)
    H[l] = 0;
for (i = 0; i < N; i++)
    for (j = 0; j < M; j++)
        H[A(i,j)]++;
Histogram
[Figures: three example images A and their histograms hA(l).]
Normalized Histogram
• Divide the histogram count nk at gray level rk
by the total number of pixels, n
– P(rk) = nk / n
• P(rk) gives an estimate of the probability
of occurrence of gray level rk
• The sum of all components of a
normalized histogram is equal to 1
Histogram Processing
• The basis for numerous spatial domain
processing techniques.
• Used effectively for image enhancement
• Information inherent in histograms also is
useful in image compression and
segmentation.
Example
Example
Histogram Equalization
• Contrast stretching means histogram stretching
• the low-contrast image’s histogram is narrow
and centered toward the middle of the gray
scale, if we distribute the histogram to a wider
range the quality of the image will be improved.
• We can do it by adjusting the probability density
function of the original histogram of the image so
that the probabilities are spread equally.
Histogram transformation
• s = T(r) where 0 <= r
<=1

• T(r) satisfies
– T(r) is single-valued
and monotonically
increasing in the
interval 0 <= r <= 1
– 0 <= T(r) <= 1 for 0
<= r <= 1
2 Conditions of T(r)
• Single-valued (one-to-one relationship)
guarantees that the inverse transformation will
exist
• Monotonicity condition preserves the increasing
order from black to white in the output image
thus it won’t cause a negative image
• 0 <=T(r) <=1 for 0 <= r <= 1 guarantees that the
output gray levels will be in the same range as
the input levels.
• The inverse transformation from s back to r is
r = T^(-1)(s) ; 0 <= s <= 1
Probability Density Function
• The gray levels in an image may be
viewed as random variables in the interval
[0,1]
• PDF is one of the fundamental descriptors
of a random variable
Discrete
transformation function
• The probability of occurrence of gray level in an
image is approximated by
pr(rk) = nk/n , k= 0 to L-1

• The discrete version of transformation


sk = T(rk) = p0 + p1 + p2 + … + pk
= (n0 + n1 + n2 + … + nk)/n

• Thus, an output image is obtained by mapping


each pixel with level rk in the input image into a
corresponding pixel with level sk in the output
image
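A minimal sketch of this discrete histogram-equalization mapping (Python/NumPy assumed; rounding sk back into {0,…,L-1} is a common convention, not spelled out on the slide):

import numpy as np

def equalize_histogram(A, L=256):
    hist = np.bincount(A.ravel(), minlength=L)     # nk for each gray level
    p = hist / A.size                              # pr(rk) = nk / n
    cdf = np.cumsum(p)                             # sk = pr(r0) + ... + pr(rk)
    s = np.round((L - 1) * cdf).astype(np.uint8)   # map sk back to {0,...,L-1}
    return s[A]                                    # apply the mapping to every pixel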
Example
Example
Example (Contd.)
Example (Contd.)
Note
• Histogram equalization spreads the
gray levels so that they can reach the maximum
gray level (white), because the cumulative
distribution function reaches 1 at r = L-1.
• The discrete transformation function cannot
guarantee a one-to-one mapping
relationship.
Histogram Matching (Specification)
• Histogram equalization can generate only
one type of output image.
• With Histogram Specification, we can
specify the shape of the histogram that we
wish the output image to have.
• It doesn’t have to be a uniform histogram
Note
• Histogram specification is a trial-and-error
process.
• There are no rules for specifying
histograms, and one must resort to
analysis on a case-by-case basis for any
given enhancement task.
Note
• Histogram processing methods are global
processing, in the sense that pixels are
modified by a transformation function
based on the gray-level content of an
entire image.

• Sometimes, we may need to enhance


details over small areas in an image,
which is called a local enhancement.
Local enhancement
• Global method is computationally intensive.
• The concepts can be used, but in local sense.
That is consider a square or rectangular
neighborhood and move the center of this area
from pixel to pixel.
• At each location, the histogram of the points in the
neighborhood is computed and either histogram
equalization or histogram specification
transformation function is obtained.
• Another approach used to reduce computation is
to utilize non-overlapping regions, but it usually
produces an undesirable checkerboard effect.
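A minimal sketch of the non-overlapping-region variant mentioned above (Python/NumPy assumed; the tile size is illustrative, and this version can show the checkerboard effect):

import numpy as np

def local_equalize_tiles(A, tile=32, L=256):
    # Equalize the histogram of each non-overlapping tile independently.
    B = A.copy()
    for i in range(0, A.shape[0], tile):
        for j in range(0, A.shape[1], tile):
            block = A[i:i+tile, j:j+tile]
            hist = np.bincount(block.ravel(), minlength=L)
            cdf = np.cumsum(hist) / block.size
            mapping = np.round((L - 1) * cdf).astype(np.uint8)
            B[i:i+tile, j:j+tile] = mapping[block]
    return B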
Local enhancement

Basically, the original image consists of


many small squares inside the larger
dark ones.
However, the small squares were too
close in gray level to the larger ones, and
their sizes were too small to influence
global histogram equalization significantly.
So, when we use the local enhancement technique, it reveals
the small areas.
Enhancement using
Arithmetic/Logic Operations
• Arithmetic/logic operations are performed on a
pixel-by-pixel basis between two or more
images,
• except the NOT operation, which is performed
only on a single image.
• Logical AND or OR operations are used
for masking (Region of interest)
Example of AND Operation
Example of OR Operation
Image subtraction
• g(x,y) = f(x,y) – h(x,y)
• enhancement of the differences between
the images.
Image Subtraction
a). original fractal image
b). result of setting the four lower-order bit planes to
zero
• refer to the bit-plane slicing
• the higher planes contribute significant detail
• the lower planes contribute more to fine detail
• image b) is nearly identical visually to image
a), with a very slight drop in overall contrast
due to less variability of the gray-level values in
the image.
c). difference between a). and b). (nearly black)
d). histogram equalization of c). (perform contrast
stretching transformation)
Note
• We may have to adjust the gray-scale of the
subtracted image to be [0, 255] (if 8-bit is used)
– first, find the minimum gray value of the subtracted
image
– second, find the maximum gray value of the subtracted
image
– set the minimum value to zero and the maximum to
be 255
– the remaining values are scaled into the interval [0,
255] by multiplying each value (after subtracting the
minimum) by 255/max, where max is the shifted maximum
• Subtraction is also used in segmentation of
moving pictures to track the changes
– after subtracting successive images, what is left should
be the moving elements in the image, plus noise
Image Averaging
• consider a noisy image g(x,y) formed by the
addition of noise η(x,y) to an original image
f(x,y)
g(x,y) = f(x,y) + η(x,y)
– if the noise has zero mean and is uncorrelated at (x,y),
then it can be shown that the
image formed by averaging K different noisy
images,

g_bar(x,y) = (1/K) · Σ_{i=1..K} g_i(x,y),

has an expected value equal to f(x,y), with the noise
variance reduced by a factor of K.
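A minimal sketch of averaging K noisy realizations of the same scene (Python/NumPy assumed; the stack layout is illustrative):

import numpy as np

def average_noisy_images(noisy_stack):
    # noisy_stack: array of shape (K, N, M) holding K noisy versions of the same scene.
    g_bar = noisy_stack.astype(np.float64).mean(axis=0)   # (1/K) * sum of g_i(x,y)
    return np.clip(np.round(g_bar), 0, 255).astype(np.uint8)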
Image averaging
Image averaging
Spatial Filtering
A common application of spatial filtering is image smoothing
using an averaging filter, or averaging mask, or kernel.

[Diagram: input image f → average over a neighborhood around (x,y) → output image g(x,y)]

Each point in the smoothed image, g(x,y) is obtained from the


average pixel value in a neighborhood of f(x,y) in the input
image.
Spatial filtering

• a filter is used (it can also be called a mask,
kernel, template, or window)
• the values in a filter sub-image are referred
to as coefficients, rather than pixels.
• our focus will be on masks of odd sizes, e.g.
3x3, 5x5,…
Filter response
Spatial Filtering Process
• simply move the filter mask from point to
point in an image.
• at each point (x,y), the response of the
filter at that point is calculated using a
predefined relationship.
Linear Filtering
• Linear filtering of an image f of size M x N with a filter
mask of size m x n is given by the expression below.
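For reference, a standard form of this expression, with m = 2a+1, n = 2b+1 and filter coefficients w(s,t), is

g(x,y) = Σ_{s=-a..a} Σ_{t=-b..b} w(s,t) · f(x+s, y+t)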
Convolutions
Computationally, spatial filtering is implemented in a computer
program via a mathematical process known as convolution.
If we assume a 3x3 neighborhood the smoothing by averaging would
correspond to convolving the image with the following filter, or mask.

1/9 1/9 1/9

1/9 1/9 1/9

1/9 1/9 1/9

This filter is convolved with the image by placing it over a 3x3


portion of the image, multiplying the overlaying pixel values and
adding them all up to find the value that replaces the original central
pixel value.
Worked example: convolving a 5x5 input image with the 3x3 averaging
filter whose coefficients are all 0.1. The mask is moved across the
image until every pixel has been covered; we convolve the image with
the mask. Border pixels, where the mask does not fit entirely inside
the image, are left unfilled (shown as -).

Filter:
.1 .1 .1
.1 .1 .1
.1 .1 .1

Input image:        Filtered image:
3 2 1 2 2           -    -    -    -    -
4 2 5 3 3           -   3.4  2.5  2.1   -
8 6 3 1 1           -   4.1  3.0  2.5   -
4 7 2 1 6           -   4.4  3.0  2.7   -
6 5 3 2 8           -    -    -    -    -
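A minimal sketch reproducing this worked example (Python/NumPy assumed; borders are left unfilled, matching the slide):

import numpy as np

def smooth_by_averaging(img, w):
    # Convolve img with the small mask w; border pixels are left unfilled (NaN).
    img = img.astype(np.float64)
    out = np.full(img.shape, np.nan)
    r = w.shape[0] // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            out[i, j] = np.sum(w * img[i-r:i+r+1, j-r:j+r+1])
    return out

A = np.array([[3,2,1,2,2],
              [4,2,5,3,3],
              [8,6,3,1,1],
              [4,7,2,1,6],
              [6,5,3,2,8]], dtype=np.float64)

w = np.full((3, 3), 0.1)   # the slide's mask uses 0.1 per coefficient
                           # (a true box average would use 1/9)
print(smooth_by_averaging(A, w))   # interior values: 3.4, 2.5, 2.1, 4.1, ...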


Smoothing Spatial Filters
• used for blurring and for noise reduction
• output is simply the average of the pixels
contained in the neighborhood of the filter
mask.
• called averaging filters or lowpass filters.
• replacing the value of every pixel in an
image by the average of the gray levels in
the neighborhood will reduce the “sharp”
transitions in gray levels.
Smoothing Spatial Filters
• sharp transitions
– random noise in the image
– edges of objects in the image
• thus, smoothing can reduce noises
(desirable) and blur the edges
(undesirable).
3x3 Smoothing Linear Filters
Weighted average filter
• the basic strategy behind weighting the
center point the highest and then reducing
the value of the coefficients as a function
of increasing distance from the origin is
simply an attempt to reduce blurring in the
smoothing process.
General form : smoothing mask
• filter of size mxn (m and n odd)
Example
[Figure panels a–f]
• a) original image, 500x500 pixels
• b)–f) results of smoothing with square averaging
filter masks of size n = 3, 5, 9, 15 and 35,
respectively.
– A big mask is used to eliminate small objects
from an image.
Example
Smoothing is useful when we want to eliminate fine detail,
particularly if that detail is an artifact of the image and not part of
the inherent information. The larger the smoothing neighborhood,
the more blurred the result becomes.

Smoothing can be useful in eliminating "salt and pepper" noise.


White and black dots on images
Gaussian smoothing
Sometimes we want to emphasize the fact that the closest pixels are
the ones most influencing the result in an area transform process.
Thus we use a smoothing filter that places more weight on the
central pixel, and which lessens the weight the further one moves
from the centre.
When the distribution of weights follows the shape of a bell curve,
the filter is called a Gaussian filter.

A 5x5 mask approximating a Gaussian:

0.00 0.01 0.02 0.01 0.00
0.01 0.06 0.10 0.06 0.01
0.02 0.10 0.16 0.10 0.02
0.01 0.06 0.10 0.06 0.01
0.00 0.01 0.02 0.01 0.00
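A minimal sketch that generates such a mask (Python/NumPy assumed; sigma = 1.0 is illustrative and gives values close to the table above):

import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # Build a size x size Gaussian mask, normalized so the weights sum to 1.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

print(np.round(gaussian_kernel(5, 1.0), 2))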
Order-Statistics Filters (Nonlinear
Filters)
• the response is based on ordering
(ranking) the pixels contained in the image
area encompassed by the filter.
• example (zk are the pixel values under the mask)
– median filter: R = median{zk | k = 1, 2, …, n x n}
– max filter : R = max{zk | k = 1, 2, …, n x n}
– min filter : R = min{zk | k = 1, 2, …, n x n}
note: n x n is the size of the mask
Median Filtering
Median filtering is an alternative form of area process
transformation where the central pixel value in a neighborhood is
replaced by the median value of those pixels.

[Diagram: input image f → median over a neighborhood around (x,y) → output image g(x,y)]


The median of a list of numbers is the value such that half the
numbers are less than this value and the other half are greater.
For example, the list 1 1 1 1 3 3 3 3 5 has median 3.
Median Filters
• replaces the value of a pixel by the median of
the gray levels in the neighborhood of that pixel
(the original value of the pixel is included in the
computation of the median).
• quite popular because for certain types of
random noise (impulse noise) they provide
excellent noise-reduction capabilities, with
considerably less blurring than linear smoothing
filters of similar size.
• forces the points with distinct gray levels to be
more like their neighbors.
Median Filters
• isolated clusters of pixels that are light or dark
with respect to their neighbors, and whose area
is less than n2/2 (one-half the filter area), are
eliminated by an n x n median filter.
• forced to have the value equal the median
intensity of the neighbors.
• larger clusters are affected considerably less
• Median filtering is particularly good at removing
"salt and pepper" noise while preserving edge
structure.
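A minimal sketch of an n x n median filter (Python/NumPy assumed; borders are left unchanged for simplicity):

import numpy as np

def median_filter(img, k=3):
    # Replace each interior pixel by the median of its k x k neighborhood.
    out = img.copy()
    r = k // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            out[i, j] = np.median(img[i-r:i+r+1, j-r:j+r+1])
    return out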
Example : Median Filters

[Figure: original image, output of a 3x3 average filter, and output of a 3x3 median filter.]


Sharpening Spatial Filters
• to highlight fine detail in an image or
• to enhance detail that has been blurred,
either in error or as a natural effect of a
particular method of image acquisition.
Sharpening
The opposite of smoothing is sharpening.
The basic idea is to identify those pixels that are substantially
different from their neighbors, so that the edges and fine detail
stand out.
The usual way to sharpen is to multiply the value of the centre
pixel by the number of pixels in the neighborhood (4 in the
smallest neighborhood), and then to subtract the sum of the values
of the other pixels.
Thus the filter looks like
 0  -1   0
-1   4  -1
 0  -1   0

This is just one of a number of sharpening filters one can use.
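A minimal sketch applying this mask and adding the detail back to the original image (Python/NumPy assumed; clipping to [0, 255] is a common convention, not shown on the slide):

import numpy as np

SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  4, -1],
                    [ 0, -1,  0]], dtype=np.float64)

def sharpen(img, mask=SHARPEN):
    # Compute the mask response (edge detail) and add it back to the original.
    f = img.astype(np.float64)
    detail = np.zeros_like(f)
    r = mask.shape[0] // 2
    for i in range(r, f.shape[0] - r):
        for j in range(r, f.shape[1] - r):
            detail[i, j] = np.sum(mask * f[i-r:i+r+1, j-r:j+r+1])
    return np.clip(f + detail, 0, 255).astype(np.uint8)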
Blurring vs. Sharpening
• as we know, blurring can be done in the
spatial domain by pixel averaging in a
neighborhood.
• since averaging is analogous to
integration.
• thus, we can guess that the sharpening
must be accomplished by spatial
differentiation.
Derivative operator
• the strength of the response of a
derivative operator is proportional to the
degree of discontinuity of the image at the
point at which the operator is applied.
• thus, image differentiation
– enhances edges and other discontinuities
(noise)
First-order derivative
• a basic definition of the first-order derivative of
a one-dimensional function f(x) is the difference
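The difference referred to here is commonly written as

∂f/∂x = f(x+1) − f(x)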
Properties of first order derivatives
• Must be zero in flat segments (areas of
constant gray-level values)
• Must be nonzero at the onset of a gray-
level step or ramp
• Must be nonzero along ramp
Second-order derivative
• similarly, we define the second-order derivative
of a one-dimensional function f(x) is the
difference.
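This second difference is commonly written as

∂²f/∂x² = f(x+1) + f(x−1) − 2·f(x)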
Properties of second order
derivatives
• Must be zero in flat segments (areas of
constant gray-level values)
• Must be nonzero at the onset and end of a
gray-level step or ramp
• Must be zero along ramp of constant slope
Example
First- and second-order derivatives
• The response of the 2nd order derivative is much
stronger than that of the 1st order at an isolated noise
point.
• The second order derivative is much more
aggressive than the first order derivative in
enhancing sharp changes.
First and Second-order derivative
of f(x,y)
• when we consider an image function of two
variables, f(x,y), we will be dealing
with partial derivatives along the two spatial
axes.
Discrete Form of Laplacian
Resulting Laplacian mask
Laplacian mask implemented as an extension to include the diagonal neighbors
Other implementations of Laplacian masks
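For reference, the standard discrete Laplacian underlying these masks is

∇²f = f(x+1,y) + f(x−1,y) + f(x,y+1) + f(x,y−1) − 4·f(x,y)

which corresponds to the mask

0  1  0
1 -4  1
0  1  0

and, with the diagonal neighbors included,

1  1  1
1 -8  1
1  1  1

(The signs are often flipped, i.e. a positive center coefficient, depending on the convention used when the result is added back to the image.)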
Image sharpening: Laplacian
• Sharpening is done easily by adding the original
and the Laplacian image.
• Be careful with the sign convention of the Laplacian filter used.
Image sharpening with
Laplacian
a) image of the North pole of the moon
b) Laplacian-filtered image
c) Laplacian image scaled for display purposes
d) image enhanced by addition with the original image
Mask of Laplacian + addition
• To simplify the computation, we can create a single
mask which does both operations: Laplacian
filtering and addition of the original image.
Example of Laplacian

[Figure: composite Laplacian mask, original image, and result of filtering.]
Unsharp masking
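Unsharp masking subtracts a blurred (unsharp) copy of the image from the original and adds the scaled difference back. A minimal sketch, assuming a simple 3x3 box blur and an illustrative gain k (Python/NumPy, not from the slides):

import numpy as np

def unsharp_mask(img, k=1.0):
    # Sharpen by adding back k times the difference between the image and a blurred copy.
    f = img.astype(np.float64)
    blurred = f.copy()
    for i in range(1, f.shape[0] - 1):
        for j in range(1, f.shape[1] - 1):
            blurred[i, j] = f[i-1:i+2, j-1:j+2].mean()   # 3x3 box blur
    mask = f - blurred                                   # the "unsharp mask"
    return np.clip(f + k * mask, 0, 255).astype(np.uint8)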
Gradient Operator
• first derivatives are implemented
using the magnitude of the
gradient.
Gradient Operator
Gradient Mask
• simplest approximation, 2x2
Gradient Mask
• Roberts cross-gradient operators, 2x2
Gradient Mask
• Sobel operators, 3x3
Note
• the summation of coefficients in all masks
equals 0, indicating that they would give a
response of 0 in an area of constant gray
level.
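A minimal sketch of the 3x3 Sobel operators and the gradient magnitude (Python/NumPy assumed; the magnitude is approximated by |gx| + |gy|, a common simplification):

import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T     # each mask's coefficients sum to 0, as noted above

def sobel_gradient(img):
    f = img.astype(np.float64)
    grad = np.zeros_like(f)
    for i in range(1, f.shape[0] - 1):
        for j in range(1, f.shape[1] - 1):
            window = f[i-1:i+2, j-1:j+2]
            gx = np.sum(SOBEL_X * window)
            gy = np.sum(SOBEL_Y * window)
            grad[i, j] = abs(gx) + abs(gy)   # |gx| + |gy| approximation
    return np.clip(grad, 0, 255).astype(np.uint8)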

[Figure: original image and its Sobel gradient.]
