
Lecture 1

Introduction

Digital Image
Processing
Image
• An image is a visual representation of an object or scene, created
by capturing light, color, and intensity information.
Digital Image
• A digital image is a two-dimensional array of pixels, where each pixel holds
data representing the color or intensity at that particular point. These pixels
are arranged in rows and columns to form the complete image. (In other words,
a digital image is a specific type of image represented by numerical data,
usually stored as a matrix of pixels.)
Origins of Digital Image Processing
• One of the first applications of digital images was in the newspaper
industry. Pictures were first sent by submarine cable between London and
New York in the early 1920s, greatly reducing transmission time. Specialized
printing equipment coded pictures for cable transmission and then
reconstructed them at the receiving end.
• This technique was abandoned toward the end of 1921 in favor of a technique
based on photographic reproduction made from tapes perforated at the
telegraph receiving terminal. The improvements are evident, both in tonal
quality and in resolution.
• The early Bartlane systems were capable of coding images in five distinct
levels of gray. This capability was increased to 15 levels in 1929.
Digital Image Processing
• The term digital image processing generally refers to processing of
a two-dimensional picture by a digital computer. In a broader
context, it implies digital processing of any two-dimensional data.
Application of DIP
• Digital image processing has a broad spectrum of applications,
such as
• remote sensing via satellites and other spacecrafts,
• image transmission and storage for business applications,
• medical processing,
• radar,
• sonar, and acoustic image processing,
• robotics, and
• automated inspection of industrial parts.
Remote sensing via satellites and other
spacecrafts
• Images acquired by satellites are useful in tracking of earth
resources; geographical mapping; prediction of agricultural
crops, urban growth, and weather; flood and fire control; and
many other environmental applications. Space image
applications include recognition and analysis of objects
contained in images obtained from deep space-probe missions.
Image transmission and storage application
• broadcast television, teleconferencing, transmission of facsimile
images (printed documents and graphics) for office automation,
communication over computer networks, closed-circuit television
based security monitoring systems, and military communications.
Medical applications
• In medical applications one is concerned with processing of chest
X-rays, projection images of tomography, and other medical
images that occur in radiology, nuclear magnetic resonance
(NMR), and ultrasonic scanning. These images may be used for
patient screening and monitoring or for detection of tumors or
other diseases in patients.
Radar and sonar
• Radar and sonar images are used for detection and recognition of
various types of targets or in guidance of aircraft or missile
systems.
Fundamental Steps in Image Processing

• 1. Image Acquisition . This step involves acquiring an image in digital form and preprocessing such as
scaling.
• 2. Image Enhancement – the process of manipulating an image so that the result is more suitable
than the original for a specific application, e.g., filtering, sharpening, smoothing, etc.
• 3. Image Restoration – these techniques also deal with improving the appearance of an image. But, as
opposed to image enhancement, which is based on human subjective preferences regarding what
constitutes a good enhancement result, image restoration is based on mathematical or probabilistic
models of image degradation.
• 4. Color Image Processing – Color processing in digital domain is used as the basis for extracting
features of interest in an image.
• 5. Wavelets – they are used to represent images in various degrees of resolution.
• 6. Image compression – they deal with techniques for reducing the storage required to save an image,
or the bandwidth required to transmit it.
• 7. Image Segmentation – these procedures partition an image into its constituent parts or objects. This
gives raw pixel data, constituting either the boundary of a region or all the points in the region itself.
Types of image
• Analog image
• Digital Image
Analog image: if the values of f(x, y) are continuous in nature, the image is
called an analog image.

Digital image: if the values of f(x, y) are discrete in nature, the image is
called a digital image.
Digital Image
Why we use digital image??

• Ease of Storage and Transmission


• Noise Resistance and Quality Retention
• Flexibility in Processing: Digital images can be easily manipulated using
algorithms for enhancement, segmentation, compression, and other
processing tasks.
• Better Compression Techniques.
• Reproducibility : Digital images can be copied and reproduced without
any loss in quality, unlike analog images that degrade over time.
When we capture an image, both analog and digital signals can be involved, depending on the stage of the process:

1. Initial Capture – Analog Signal


➢ The light from the scene is analog in nature — it has continuous variations in intensity and color.
➢ When this light hits an image sensor (like a CCD or CMOS sensor in a camera), the sensor converts the
light (analog) into electrical signals (still analog) — one for each pixel.

2. Conversion – Analog to Digital


➢ These electrical signals are then passed through an Analog-to-Digital Converter (ADC).
➢ The ADC samples the analog signals at regular intervals and quantizes them into digital values
(numbers).
➢ After this step, the image is now represented as a digital signal — a matrix of pixel values.

3. Final Form – Digital Image


➢ Once digitized, the image can be stored, processed, displayed, or transmitted as a digital signal.
Image Sampling and Quantization

The most basic requirement for computer processing of an image is that the
image be available in digital form (as an array of finite length).
In digital image processing, signals captured from the physical world
need to be translated into digital form by a “Digitization” process.
Digitization

In order to become suitable for digital processing, an image function


f(x,y) must be digitized both spatially and in amplitude. This
digitization process involves two main processes called

1.Sampling: Digitizing the co-ordinate value is called sampling.


2.Quantization: Digitizing the amplitude(intensity) value is called
quantization

f(x, y) is an image, where (x, y) are the spatial coordinates and f is the
amplitude (intensity level) at the point (x, y).
Process (figure walkthrough):
• Convert the image into an equivalent one-dimensional function (along line AB).
• Plot the amplitude of this one-dimensional function.
• Quantization: plot the samples along the line and map each to a discrete level.
• Another image example follows in the slides.
Sampling
• Sampling has a relationship with image pixels. The total number of
pixels in an image can be calculated as Pixels = total number of rows ×
total number of columns. For example, if we have a total of 36 pixels, that
means we have a square image of 6×6. As we know from sampling,
more samples eventually result in more pixels, so it means that we have
taken 36 samples of our continuous signal along the x axis.

• Sampling proceeds row by row from 1 to n; each sample (pixel) has a
quantity (amplitude).
Quantization
• Now let’s see how quantization is done. Here we
assign levels to the values generated by the sampling
process. In the image shown in the sampling
explanation, although the samples have been taken,
they still span a continuous range of gray-level
values vertically. In the image shown below, these
vertically ranging values have been quantized into 5
different levels or partitions, ranging from 0 (black)
to 4 (white). The number of levels can vary according
to the type of image you want.
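A rough Python/NumPy sketch of the two digitization steps (the function name and the test profile are hypothetical, not from the slides): it samples a continuous 1-D intensity profile at 36 points and quantizes each sample to one of 5 levels, matching the numbers used above.

```python
import numpy as np

def sample_and_quantize(f, x_max, n_samples, n_levels):
    """Sample a continuous 1-D intensity profile f(x) at n_samples points
    and quantize each sample to one of n_levels gray levels."""
    # Sampling: digitize the coordinate values
    xs = np.linspace(0.0, x_max, n_samples)
    samples = f(xs)                              # still continuous in amplitude
    # Quantization: digitize the amplitude values into 0 .. n_levels-1
    lo, hi = samples.min(), samples.max()
    levels = np.round((samples - lo) / (hi - lo) * (n_levels - 1))
    return xs, levels.astype(int)

# Example: a smooth profile sampled at 36 points and quantized to 5 levels (0..4)
profile = lambda x: 128 + 100 * np.sin(x)
xs, q = sample_and_quantize(profile, 2 * np.pi, 36, 5)
print(q)
```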
Basic Relationships Between Pixels
• Neighborhood
• Adjacency
• Paths
• Connectivity
• Regions
• Boundaries
Why we use
In digital image processing, we use neighboring pixels to leverage spatial relationships and perform various operations like identifying
features, exploiting correlation, and implementing algorithms like blurring or edge detection.

Here's a more detailed explanation:


• Identifying Groups of Similar Pixels:
Neighboring pixels are used to find regions or objects with similar characteristics (e.g., color, texture).
• Exploiting Spatial Correlation:
Pixels near each other often have similar values, and this spatial correlation is used in various algorithms.
• Image Processing Algorithms:
Blurring: Averaging the values of neighboring pixels smooths the image, reducing noise (at the cost of softening edges).
Edge Detection: Identifying differences between a pixel's value and its neighbors helps find edges and boundaries in an image.
Median Filter: Replaces a pixel's value with the median of its neighbors, reducing noise while preserving edges.
Anisotropic Diffusion: Smooths the image while preserving edges by considering the direction of the gradients between pixels.
Neighbors of a pixel – N4(p)
• Any pixel p(x, y) has two vertical and two horizontal neighbors, given by
(x+1, y),
(x-1, y),
(x, y+1),
(x, y-1)
•This set of pixels is called the 4-neighbors of p,
and is denoted by N4(p).

Finding neighboring pixels is essential for various image


processing and computer vision tasks. Since an image is a
grid of pixels, understanding the relationship between
adjacent pixels helps in edge detection, segmentation,
object recognition, and more.
Neighbors of a pixel – ND(p)
• Any pixel p(x, y) has four diagonal
neighbors, given by
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1 ,y-1)

This set is denoted by ND(p).
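A small illustrative helper (hypothetical, not part of the slides) that lists the 4-neighbors and the diagonal neighbors of a pixel, dropping coordinates that fall outside the image bounds:

```python
def n4(x, y, width, height):
    """4-neighbors of pixel (x, y): two horizontal and two vertical neighbors."""
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(i, j) for i, j in candidates if 0 <= i < width and 0 <= j < height]

def nd(x, y, width, height):
    """Diagonal neighbors of pixel (x, y)."""
    candidates = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    return [(i, j) for i, j in candidates if 0 <= i < width and 0 <= j < height]

# The 8-neighbors N8(p) are simply the union of N4(p) and ND(p)
print(n4(0, 0, 6, 6))   # a corner pixel has only two 4-neighbors
print(nd(3, 3, 6, 6))   # an interior pixel has all four diagonal neighbors
```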


Adjacency
• Let V be the set of intensity values used to define adjacency. In a binary image, V = {1} if we
are referring to adjacency of pixels with value 1.
• In a gray-scale image, the idea is the same, but set V typically contains more elements. For
example, for pixels with possible intensity values in the range 0 to 255, set V
could be any subset of these 256 values. We consider three types of adjacency:
1. 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
2. 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
3. m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if:
q is in N4(p), or
q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V. (Mixed
adjacency is a modification of 8-adjacency.)
Example

M-connectivity is used to resolve the ambiguity that arises with 8-connectivity: under
8-connectivity a pixel may be reachable through two direct connections, one via a 4-neighbor and
one via a diagonal neighbor, so it is unclear which connection is the suitable one.
(m-connectivity says that if there are two paths, one through a 4-neighbor and one through a
diagonal neighbor, choose the 4-neighbor connection.)

Adjacency of pixels in a grayscale image with set V:


Distance Measures
• Measuring the distance between pixels is essential in various
computer vision and image processing applications. It helps in
object detection, segmentation, clustering, and spatial
analysis.
• Euclidean Distance: De(p, q) = [(x − s)² + (y − t)²]^(1/2)
• City Block Distance: D4(p, q) = |x − s| + |y − t|
• Chess Board Distance: D8(p, q) = max(|x − s|, |y − t|)
where p = (x, y) and q = (s, t).
Example:
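A minimal Python sketch (not from the original slides) of the three distance measures, evaluated for two hypothetical pixel coordinates p and q:

```python
import math

def euclidean(p, q):
    """D_e(p, q) = sqrt((x - s)^2 + (y - t)^2)"""
    (x, y), (s, t) = p, q
    return math.sqrt((x - s) ** 2 + (y - t) ** 2)

def city_block(p, q):
    """D_4(p, q) = |x - s| + |y - t|"""
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def chessboard(p, q):
    """D_8(p, q) = max(|x - s|, |y - t|)"""
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

p, q = (2, 3), (5, 7)
print(euclidean(p, q))   # 5.0
print(city_block(p, q))  # 7
print(chessboard(p, q))  # 4
```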
Linear and Non Linear Operations.

• Linear operation : Linear operations satisfy superposition (i.e., they


follow addition and scalar multiplication). These operations preserve
the relationship:
T(aA+bB)=aT(A)+bT(B)
where A and B are images, and a, b are scalars.
1. Spatial Domain Operations (Applied directly on pixels)
1. Averaging (Blurring / Low-pass Filtering) : Replaces a pixel with the average of
its neighborhood.
2. Sharpening : Enhances edges by emphasizing high-frequency components.
3. Scaling : Increases or decreases intensity linearly.
2. Frequency Domain Operations (Applied on transformed images)
1. Fourier Transform
2. Wiener Filtering
4×4 Sample Image (Matrix Representation)
We define two example images A and B:

We will check linearity using the blurring (linear) and


median filtering (non-linear) operations.
Step 1: Apply Linear Operation (Blurring/Averaging)

Formula: each pixel is replaced by the average of its 3×3 neighborhood (except at the borders).

Neighborhood for pixel (1, 1): (shown in the slide figure)

To understand linearity, we use additivity and homogeneity (scaling): applying the operation to
the sum of two images should give the same result as applying the operation to each image and
then adding the results.

Process 1 (filter, then add):
• Compute T(A) (blurring on A): each pixel (except borders) is replaced by the average of its
3×3 neighborhood.
• Compute T(B) (blurring on B).
• Compute T(A) + T(B).

Process 2 (add, then filter):
• Compute A + B.
• Compute T(A + B) (blurring on A + B).

If T(A + B) equals T(A) + T(B), the operation is linear; a sketch of this check is given after
the non-linear operations below.
Non Linear operations:
• Non-linear operations do not satisfy superposition, meaning:
T(aA+bB) ≠ aT(A)+bT(B)
1. Spatial Domain Non-Linear Operations
1. Median Filtering: replaces each pixel with the median of its neighborhood.
2. Morphological Operations: erosion shrinks objects in an image; dilation expands
objects in an image.
2. Frequency Domain Non-Linear Operations
1. Wavelet Transform
2. Bilateral Filtering
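To make the linearity check concrete, here is a rough NumPy sketch comparing a 3×3 averaging filter (linear) with a 3×3 median filter (non-linear) under superposition. The 4×4 matrices A and B are hypothetical stand-ins, since the slides' original matrices are shown only as images.

```python
import numpy as np

def mean_filter(img):
    """Replace each interior pixel by the average of its 3x3 neighborhood (borders unchanged)."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = img[i-1:i+2, j-1:j+2].mean()
    return out

def median_filter(img):
    """Replace each interior pixel by the median of its 3x3 neighborhood (borders unchanged)."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i-1:i+2, j-1:j+2])
    return out

# Hypothetical 4x4 test images A and B
A = np.array([[10, 20, 30, 40], [50, 60, 70, 80],
              [90, 100, 110, 120], [130, 140, 150, 160]], float)
B = np.array([[5, 5, 5, 5], [5, 200, 5, 5],
              [5, 5, 5, 5], [5, 5, 5, 5]], float)

# Superposition holds for the averaging filter ...
print(np.allclose(mean_filter(A + B), mean_filter(A) + mean_filter(B)))        # True
# ... but fails for the median filter on these matrices
print(np.allclose(median_filter(A + B), median_filter(A) + median_filter(B)))  # False
```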
Module 2
Image Enhancement in the Spatial Domain
Spatial Domain

• In image processing, the spatial domain refers to the representation of an
image by its pixels, where each pixel's intensity or color value is treated
as a point in a 2D or 3D space.
For color image
Some Gray Level Transformation
• Gray Level Transformation?
Gray-level transformation can be used as an image enhancement technique.
An 8-bit gray-level image has 256 levels of gray; in its histogram, the
horizontal axis ranges from 0 to 255 and the vertical axis gives the number
of pixels at each level.
There are three types of transformation:
1.Linear
2.Logarithmic
3.Power — law
1. Linear transformation

There are two types:


1.1 Identity transformation : Here each value of the input image is
directly mapped to the output image values.
1.2 Negative transformation
• This is the opposite of identity transformation. Each value of the input
image is subtracted from the L-1 and mapped onto the output image.

Purpose:
•Enhances details in dark regions of an image
•Useful for medical imaging, satellite imaging, etc.
Example
In a negative transformation, each pixel value in the input image is subtracted from the maximum possible
pixel value (L-1) to produce the corresponding pixel in the output image.
s=(L−1)−r
r = input pixel value
s = output pixel value
L = number of possible intensity levels (e.g., 256 for 8-bit images, so L − 1 = 255)

For an 8-bit grayscale image:
Original pixel value = 50, negative = 255 − 50 = 205.
a) input pixel value = 0, then output pixel value will be 255.
b) input pixel value = 100, then output pixel value will be 155.
c) input pixel value = 255, then output pixel value will be 0.
d) input pixel value = x, then output pixel value will be 255 − x.
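A minimal NumPy sketch of the negative transformation (illustrative only; the sample matrix is made up):

```python
import numpy as np

def negative(img, L=256):
    """Negative transformation s = (L - 1) - r, applied element-wise."""
    return (L - 1) - img

img = np.array([[0, 50, 100], [150, 200, 255]], dtype=np.uint8)
print(negative(img))
# [[255 205 155]
#  [105  55   0]]
```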
2. Logarithmic transformations
A logarithmic transformation is used to expand the values of dark
pixels in an image while compressing the higher-level values,
which is useful for images with large dynamic ranges of pixel
values.
It contains two types of transformation:

1. Log transformation
2. Inverse log transformation
Log Transformation
The logarithmic transformation maps a narrow range of low input
pixel values to a wider range of output values. It’s especially
useful for enhancing details in darker regions of an image.
s=c⋅log(1+r)

s = output pixel value


r = input pixel value (0 to L−1)
c = a constant scaling factor (depends on max value of r)
log = natural log or base-10 log
The +1 ensures the log of 0 is not undefined

The value 1 is added to each of the pixel values of the input image because if there is a pixel intensity of 0 in the
image, then log (0) is equal to infinity. So 1 is added, to make the minimum value at least 1.
Calculate the value of c (scaling factor)
• c = scaling constant, usually chosen to normalize the results so that the
maximum input maps to the maximum output: c = (L − 1) / log10(1 + rmax).

• Calculate the constant c: suppose you have an 8-bit grayscale image.
2^8 = 256 gray levels (0-255), so the maximum intensity value is L − 1 = 255.

Calculate the constant c:
c = 255 / log10(1 + 255) = 255 / 2.4082 ≈ 105.886

Apply the transformation: let's do it for a few pixels (rounded values):

• Original Image matrix :

• s = 105.886 · log10(1 + 10) = 105.886 · log10(11) ≈ 105.886 · 1.0414 ≈ 110.3
• s = 105.886 · log10(1 + 50) = 105.886 · 1.7076 ≈ 180.9
• s = 105.886 · log10(1 + 255) = 105.886 · 2.4082 ≈ 255

Transformed Image
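A small NumPy sketch of the log transformation with c computed as above (illustrative; the input row here is hypothetical):

```python
import numpy as np

def log_transform(img, L=256):
    """Log transformation s = c * log10(1 + r), with c chosen so that the
    maximum input (L - 1) maps to the maximum output (L - 1)."""
    c = (L - 1) / np.log10(1 + (L - 1))      # ≈ 105.886 for 8-bit images
    s = c * np.log10(1.0 + img)
    return np.round(s).astype(np.uint8)

img = np.array([[10, 50, 255]], dtype=np.uint8)
print(log_transform(img))    # approximately [[110 181 255]]
```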
Inverse Log Transformation
The inverse log transformation is used to reverse the effect of log
transformation. While log compresses high pixel values and
expands low ones, the inverse log transformation does the
opposite — it expands high values and compresses low values.
Power — law
• Power-law transformations, also known as gamma correction, are a
class of mathematical transformations used to adjust the tonal and
brightness characteristics of an image.

• These transformations are particularly useful in image processing and


computer vision to enhance the visual quality of images or correct for
issues related to illumination and contrast.

s = c · r^γ, where:
r: input pixel value (normalized to range [0, 1])
s: output pixel value
γ: gamma value (the exponent)
c: constant (often 1 for normalized images)
Behavior based on γ (gamma): γ < 1 brightens the image, γ = 1 leaves it unchanged (identity), and γ > 1 darkens it (see the cases below).
Working
• Why Normalize Pixel Values?
In an 8-bit image, pixel values range from 0 to 255.
To perform power-law transformation correctly, we normalize them to the
range [0, 1].

Normalization Formula: r/ 255

r = 150 (example pixel value)

Normalize : 150/255 = 0.588


• It is particularly useful for correcting brightness in images that
are either too dark or too bright.
• The transformation changes the brightness and contrast of the
image depending on the value of gamma γ.
• Adjusting images for better visualization in medical imaging and
satellite imaging.
Original Image Matrix (r):
[[ 50, 100, 150],
 [200, 220, 240],
 [ 30,  80, 255]]

Step 1: Normalize (r)
Divide all values by 255 to bring them into the [0, 1] range.
Normalized Matrix:
[[0.196, 0.392, 0.588],
 [0.784, 0.863, 0.941],
 [0.118, 0.314, 1.000]]

Step 2: Apply Power-Law Transformation (s = c * r^γ)
Let's assume c = 1. We'll try with:
γ = 1.0 (identity)
γ = 0.5 (brighten)
γ = 2.0 (darken)

Case 1: γ = 1.0
s = r^1 = r
Output:
[[0.196, 0.392, 0.588],
 [0.784, 0.863, 0.941],
 [0.118, 0.314, 1.000]]

Case 2: γ = 0.5 (Brighten)
s = sqrt(r)
Output:
[[0.443, 0.626, 0.767],
 [0.885, 0.929, 0.970],
 [0.343, 0.560, 1.000]]

Case 3: γ = 2.0 (Darken)
s = r^2
Output:
[[0.038, 0.154, 0.346],
 [0.615, 0.745, 0.886],
 [0.014, 0.099, 1.000]]
Step 3: Convert Back to 0–255 Scale
• Multiply each output by 255 and round:

γ = 1.0:
[[ 50, 100, 150],
 [200, 220, 240],
 [ 30,  80, 255]]

γ = 0.5:
[[113, 160, 196],
 [226, 237, 247],
 [ 87, 143, 255]]

γ = 2.0:
[[ 10,  39,  88],
 [157, 190, 226],
 [  4,  25, 255]]
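The same worked example can be reproduced with a short NumPy sketch (illustrative, not the slides' own code):

```python
import numpy as np

def gamma_correct(img, gamma, c=1.0):
    """Power-law (gamma) transformation s = c * r^gamma on a normalized image."""
    r = img / 255.0                              # Step 1: normalize to [0, 1]
    s = c * np.power(r, gamma)                   # Step 2: apply s = c * r^gamma
    return np.round(s * 255).astype(np.uint8)    # Step 3: rescale to 0-255

img = np.array([[ 50, 100, 150],
                [200, 220, 240],
                [ 30,  80, 255]], dtype=np.uint8)

print(gamma_correct(img, 1.0))   # identity: returns the original values
print(gamma_correct(img, 0.5))   # brightens dark regions
print(gamma_correct(img, 2.0))   # darkens the image
```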
Example
Enhance Images with Arithmetic and
Logical Operations
Arithmetic operations in image processing involve manipulating
pixel values by performing mathematical operations on
corresponding pixels of two images.
The four basic arithmetic operations are
addition,
subtraction,
multiplication, and
division.
These operations can be used to enhance images, reduce noise,
and perform other transformation techniques.
Addition
• These operations are applied on a pixel-by-pixel basis. So, to add two images
together, we add the value at pixel (0, 0) in image 1 to the value at pixel (0, 0)
in image 2 and store the result in a new image at pixel (0, 0).

o(x, y)= f(x, y) + g(x, y)

Clearly, this can work properly only if the two images have identical
dimensions. If they do not, then combination is still possible, but a meaningful
result can be obtained only in the area of overlap. If our images have
dimensions of w1*h1, and w2*h2 and we assume that their origins are aligned,
then the new image will have dimensions w*h, where:
w = min (w1, w2)
h = min (h1, h2)
Addition Effect on image
Subtraction
• Subtracting two 8-bit grayscale images can produce values
between −255 and +255.
g(x,y) = |f1 (x,y) - f2 (x,y)|
The main application for image subtraction is in change detection (or motion detection)
If we make two observations of a scene and compute their difference using the above equation, then
changes will be indicated by pixels in the difference image which have non-zero values.
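A rough NumPy sketch of pixel-wise addition (clipped to the 8-bit range) and absolute-difference change detection (illustrative; the two small "frames" are made up):

```python
import numpy as np

def add_images(f, g):
    """Pixel-wise addition o(x, y) = f(x, y) + g(x, y), clipped to the 8-bit range."""
    return np.clip(f.astype(int) + g.astype(int), 0, 255).astype(np.uint8)

def diff_images(f1, f2):
    """Absolute difference g(x, y) = |f1(x, y) - f2(x, y)|, used for change detection."""
    return np.abs(f1.astype(int) - f2.astype(int)).astype(np.uint8)

frame1 = np.array([[100, 100], [100, 100]], dtype=np.uint8)
frame2 = np.array([[100, 100], [100, 180]], dtype=np.uint8)
print(diff_images(frame1, frame2))   # non-zero only where the scene changed
# [[ 0  0]
#  [ 0 80]]
```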
Assignment… (Arithmetic and Logical operation on images)
Spatial Filtering Smoothening and Sharpening Spatial Filters, Combining Spatial
Enhancement Methods.

• Spatial filtering is a technique used to enhance the image based on the


spatial characteristics of the image.
(The word filtering is borrowed from the frequency domain, where certain frequency
components may be rejected or accepted according to some defined criterion.)

It involves manipulating the pixels of an image based on the values of neighboring


pixels, often to enhance certain features or reduce noise.
• It can be used for
image sharpening,
edge detection,
blurring, and
noise reduction.
What is Spatial Filtering?
Spatial filtering involves moving a filter mask (kernel) over an
image and modifying the value of the central pixel based on a
mathematical operation involving its neighbors. (Operates in the
spatial domain (i.e., directly on pixel values))

Commonly used for image


smoothing, sharpening, edge
detection, and noise reduction.
Basics Terminologies
• Filter/Kernel/Mask :- A small matrix used to perform operations like
smoothing or edge detection.

• Neighbourhood :-The group of pixels around a central pixel used in the


operation.

• Convolution:- A mathematical operation that applies a kernel to an


image.

• Padding: When applying filters near borders, image is padded with


extra pixels (zeros or replicated values).
Process
Types of Spatial Filters
a) Smoothing (Low-pass filtering)
b) Sharpening (High-pass filtering)
Types
Spatial filtering modifies an image by replacing the value of each
pixel by a function of the values of the pixel and its neighbors.
Linear: If the operation performed on the image pixels is linear, then
the filter is called a linear spatial filter.
Non Linear: If the operation performed on the pixel and its neighbors is not linear
(for example, it involves ranking or other non-linear logic), the filter is a nonlinear spatial filter.
We will focus attention first on linear filters and then introduce
some basic nonlinear filters.
Linear filters
A linear spatial filter performs a sum-of-products operation
between an image 𝑓 and a filter kernel, 𝑤.

The output is a weighted sum of input values (pixels), in linear


filters, there's no sorting, taking max/min, or non-linear logic —
only multiplying and summing.
Example
• The Gaussian filter uses a kernel (e.g., 3x3 or 5x5 matrix) where
values come from the Gaussian distribution, like:

1 2 1
2 4 2
1 2 1
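A minimal sketch of a linear spatial filter as a sum-of-products, using the Gaussian-like kernel above. The division by 16 (so the weights sum to 1) and the zero padding at the borders are assumptions for the sketch, not stated in the slides:

```python
import numpy as np

def linear_filter(img, kernel):
    """Linear spatial filtering: sum-of-products of the kernel with each
    pixel neighborhood (zero padding at the borders)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(float), ((ph, ph), (pw, pw)), mode='constant')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# 3x3 Gaussian-like kernel from above, normalized so the weights sum to 1
gauss = np.array([[1, 2, 1],
                  [2, 4, 2],
                  [1, 2, 1]], dtype=float) / 16.0

img = np.random.randint(0, 256, (5, 5))
print(linear_filter(img, gauss))   # a slightly smoothed version of img
```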
Non-Linear Filtering
In non-linear filtering, the output pixel value is not a weighted
sum of the input values.
Instead, it uses non-linear operations such as:
• Median
• Max/Min
• Conditional logic
• Morphological operations (like dilation, erosion)
Example: median filter
Neighborhood:
[12, 150, 18]
[20, 255, 22]
[30,  40, 35]
Sorted: [12, 18, 20, 22, 30, 35, 40, 150, 255]
→ Median is 30 → replace center pixel (255) with 30

Salt-and-pepper noise removed


Feature                 | Linear Filter          | Non-Linear Filter
Operation               | Weighted sum           | Sorting/Ranking/Logic
Examples                | Mean, Gaussian, Sobel  | Median, Max, Adaptive
Edge preservation       | Poor                   | Good
Removes Gaussian noise  | Yes                    | Not best
Removes Salt & Pepper   | No                     | Yes
Complexity              | Low                    | Higher
SMOOTHING (LOWPASS) SPATIAL FILTERS
• retains low-frequency components (when changes in pixels are
low)
• In order to remove high spatial frequency noise from a digital image,
low pass filtering (also known as smoothing) is used.
• Low-pass filters usually use a moving window operator that affects
one pixel of the image at a time, modifying its value with some local
pixel area (window) functionality. To impact all the pixels in the
image, the operator moves over the image.
Mean filtering
Mean filtering, also known as average filtering or averaging, is a
common image processing technique used for noise reduction and
smoothing. It works by replacing each pixel's value with the average of
its neighboring pixels within a defined area, creating a more uniform
and less noisy image.
• Compared to the original input (left), we can see the filtered image (right)
has been blurred a bit.
For Denoising (Salt and pepper)
• The low pass filter can be used for denoising, as stated earlier. Let’s
test that. First, we spray some pepper and salt on the image to make
the input a bit dirty, and then apply the average filter:
Example of Low pass filters
1. Mean Filter (Box Filter)
2. Gaussian Filter
3. Moving Average Filter
4. Bilateral Filter
High Pass filter
• A High-Pass Filter allows high-frequency components (edges,
sharp changes) to pass and blocks or reduces low-frequency
components (smooth areas).
• A high-pass filter retains the contours (also called edges) of an
image (high frequency)
Edges are sharp intensity changes
Examples of High-Pass Filters
1. Laplacian Filter : Detects edges in all directions.

0 -1 0
-1 4 -1
0 -1 0
2. Sobel Filter
• Detects edges in a specific direction (horizontal or vertical).
Sobel X (horizontal gradient, responds to vertical edges):
-1 0 1
-2 0 2
-1 0 1
Sobel Y (vertical gradient, responds to horizontal edges):
-1 -2 -1
0 0 0
1 2 1
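A small NumPy sketch (illustrative) applying the Laplacian and Sobel X kernels above to a synthetic image containing a vertical step edge:

```python
import numpy as np

# High-pass kernels from above
laplacian = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def filter2d(img, kernel):
    """Apply a 3x3 kernel as a sum-of-products over each interior pixel."""
    out = np.zeros_like(img, dtype=float)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.sum(img[i-1:i+2, j-1:j+2] * kernel)
    return out

# A synthetic image with a vertical step edge down the middle
img = np.zeros((5, 6), dtype=float)
img[:, 3:] = 200

print(np.abs(filter2d(img, sobel_x)))    # strong response along the intensity jump
print(np.abs(filter2d(img, laplacian)))  # also responds only near the edge
```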
Assignments 1
1. Image Transformation in Frequency Domain
Assignment 2
• Smoothening and Sharpening Frequency Domain Filters
• Homomorphic Filters
Noise
• Noise is unwanted and random variation in brightness or color
information that produces different pixel value instead of true pixel
value.
• Noise introduces unwanted information into digital images. Noise
produces undesirable effects such as artifacts, unrealistic edges,
unseen lines, corners, blurred objects and disturbed background
scenes.
Why Noise are introduced in images…
• There are several reasons for this:
Sensor limitations:
• Digital cameras (like CMOS or CCD sensors) can't
perfectly capture light.
Environmental conditions:
• Low light → sensors need to "amplify" signals →
amplification also amplifies noise.
Transmission errors:
• If the image is sent over a network (Wi-Fi, satellite, etc.),
interference can corrupt some pixel data.
Compression and digitization:
• When analog signals (real world) are converted to digital
(pixels), rounding and quantization introduce tiny errors.
A(x,y) = H(x,y) + B(x,y)
Where, A(x,y)= function of noisy image, H(x,y)= function of image noise , B(x,y)= function of original image.
Why we learn Noise Model…
• To reduce undesirable effects, prior learning of noise models is
essential for further processing.

• Noise models are mathematical ways to describe how noise


behaves in images. Each type of noise has different
characteristics, so we use different models to simulate or study it.
Types of Noise Models
• There are multiple types of noise models. Some of basic are
1. Gaussian Noise:
Gaussian Noise is a statistical noise having a probability density
function equal to normal distribution, also known as Gaussian
Distribution.
Effect of Standard Deviation(sigma) on
Gaussian noise:
Standard deviation (σ, sigma) controls how strong or how spread
out the Gaussian noise is.
• Here’s the simple idea:

σ (Standard Deviation)    | Effect on Noise
Small σ (like 5)          | Very light noise; image looks almost clean, with tiny grain.
Medium σ (like 15)        | Noticeable graininess; image looks noisy but still recognizable.
Large σ (like 50 or 100)  | Heavy noise; image becomes very messy and details are lost.
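A quick NumPy sketch (illustrative) that adds zero-mean Gaussian noise with a chosen σ to an 8-bit image:

```python
import numpy as np

def add_gaussian_noise(img, sigma, mean=0.0):
    """Add zero-mean Gaussian noise with standard deviation sigma to an 8-bit image."""
    noise = np.random.normal(mean, sigma, img.shape)
    noisy = img.astype(float) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)

img = np.full((4, 4), 128, dtype=np.uint8)
print(add_gaussian_noise(img, sigma=5))    # small sigma: values stay close to 128
print(add_gaussian_noise(img, sigma=50))   # large sigma: values scatter widely
```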
2. Impulse Noise
• Impulse Noise is a type of noise where random pixels in the
image are abruptly changed, while most of the image remains
unaffected.
Types of Impulse Noise:
Salt Noise: Salt noise is added to an image by the addition of random
bright pixels (with pixel value 255) all over the image.
Pepper Noise: Pepper noise is added to an image by the addition of random
dark pixels (with pixel value 0) all over the image.
• Salt and Pepper Noise: Salt and pepper noise is added to an image by the
addition of both random bright (pixel value 255) and random
dark (pixel value 0) pixels all over the image. This model is also known
as data drop noise because statistically it drops the original data
values.
Salt and Pepper noise
Speckle Noise
• A fundamental problem in optical and digital holography is the
presence of speckle noise in the image reconstruction process.
Speckle is a granular noise that inherently exists in an image and
degrades its quality. Speckle noise can be generated by multiplying
random pixel values with different pixels of an image.
Step 1: Start with the original image
Suppose you have an image f(x,y)
Each pixel has some brightness — say, pixel at (10, 20) has a
value 120.

Step 2: Generate random noise


Create a random noise matrix n(x,y) of the same size as the
image.
•Each n(x,y) is a random number.
•Example:
• Could be small values like 0.02, -0.05, 0.03, etc.
•The noise has a mean value of zero (positive and negative
values).

Step 3: Multiply noise with the image


Now multiply the original pixel f(x,y) with its corresponding
noise value n(x,y)
•Example:
• Original pixel: 120
• Random noise: -0.05
• Noise effect = 120×(−0.05)=−6
Step 4: Add the noise to the original image
Finally, add the noise effect to the original pixel:
g(x, y) = f(x, y) + f(x, y) × n(x, y)
Continuing the example:
g(10,20)=120+(−6)=114
So the pixel becomes 114 (a little darker).

Pixel (x, y) | f(x, y) (Original) | n(x, y) (Random Noise) | New g(x, y)
(10, 20)     | 120                | -0.05                  | 114
(10, 21)     | 140                | +0.03                  | 144.2
(10, 22)     | 100                | -0.02                  | 98
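These steps translate directly into a short NumPy sketch (illustrative; the standard deviation of the noise is an assumed parameter):

```python
import numpy as np

def add_speckle_noise(img, sigma=0.05):
    """Multiplicative speckle noise: g(x, y) = f(x, y) + f(x, y) * n(x, y),
    where n has zero mean and standard deviation sigma."""
    n = np.random.normal(0.0, sigma, img.shape)
    g = img.astype(float) + img.astype(float) * n
    return np.clip(g, 0, 255).astype(np.uint8)

img = np.array([[120, 140, 100]], dtype=np.uint8)
print(add_speckle_noise(img))   # e.g. 120 with n = -0.05 becomes about 114
```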
Image segmentation
• Image segmentation involves partitioning a digital image into multiple
regions (sets of pixels) so that the representation of the image becomes
simpler, more meaningful, and easier to analyze. Segmentation is typically
used to locate objects and boundaries (lines, curves, etc.) in images.
Image Segmentation
Applications of Image Segmentation
1. Medical Imaging:
• Tumor detection, organ segmentation, and other diagnostic purposes.
2. Autonomous Vehicles:
• Object detection, road segmentation, and obstacle avoidance.
3. Satellite and Aerial Imaging:
• Land cover classification, urban planning, and environmental monitoring.
4. Face Recognition:
• Facial feature extraction, emotion detection, and identity verification.
5. Agriculture:
• Crop monitoring, disease detection, and yield estimation.
Types of Image Segmentation
1. Semantic Segmentation
2. Instance Segmentation
Methods of Image Segmentation
1. Thresholding:
• Threshold image segmentation is a fundamental technique in image
processing that is used to separate objects of interest (foreground)
from the background in an image. The basic idea is to convert a
grayscale image into a binary image, where pixels are classified as
either belonging to the object (foreground) or the background based
on a certain threshold value.
Steps in Threshold Image Segmentation
1. Image Acquisition
2. Convert to Grayscale
3. Choose a Threshold Value (T)
Decide a threshold value T that will separate foreground from background.
Can be manual (fixed value)
Or automatic (e.g., Otsu’s method)
4. Apply the Threshold
For binary thresholding (a small sketch follows below):
If pixel value > T, set the pixel to 1
Otherwise set it to 0
5. Generate Binary Image
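A minimal sketch of binary thresholding on a small hypothetical grayscale matrix:

```python
import numpy as np

def threshold(img, T):
    """Binary thresholding: 1 where the pixel value exceeds T, 0 otherwise."""
    return (img > T).astype(np.uint8)

gray = np.array([[ 12,  40, 200],
                 [180,  90, 220],
                 [ 30, 210,  60]], dtype=np.uint8)
binary = threshold(gray, T=128)
print(binary)
# [[0 0 1]
#  [1 0 1]
#  [0 1 0]]
```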
Clustering
• Clustering-based image thresholding is a technique that
segments an image by grouping pixels into clusters based on
their intensity values (or other features), and then assigning
thresholds based on the cluster boundaries.
• Clustering-based thresholding treats thresholding as a
classification problem, where pixels are classified into groups
(clusters), typically foreground and background (or more classes),
using clustering algorithms.
Common Clustering Methods for
Thresholding
K-Means Clustering:
Groups pixel intensities into k clusters.
Each cluster gets a centroid (mean intensity).
Thresholds are set between the centroids of clusters.
For binary segmentation, k = 2 is common.
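A rough sketch (plain NumPy rather than a library k-means implementation; the image and tolerance are made up) of two-cluster intensity clustering, with the threshold placed midway between the two centroids:

```python
import numpy as np

def kmeans_threshold(img, iterations=20):
    """Two-cluster (k = 2) intensity clustering; the threshold is placed
    midway between the two cluster centroids. Assumes both clusters
    stay non-empty for the given image."""
    values = img.astype(float).ravel()
    c1, c2 = values.min(), values.max()          # initial centroids
    for _ in range(iterations):
        labels = np.abs(values - c1) <= np.abs(values - c2)
        c1, c2 = values[labels].mean(), values[~labels].mean()
    return (c1 + c2) / 2.0

img = np.array([[ 10,  20, 200],
                [210,  30, 220],
                [ 15, 205,  25]], dtype=np.uint8)
T = kmeans_threshold(img)
print(T)                             # threshold falls between the dark and bright clusters
print((img > T).astype(np.uint8))    # resulting binary segmentation
```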
3. Edge-based Segmentation:
Edge-based segmentation is an image segmentation technique
that detects object boundaries by finding edges — places where
the image intensity changes sharply. These edges often correspond
to boundaries of objects within the image.
Main Steps in Edge-Based Segmentation
•Image Preprocessing (Optional but helpful)
•Apply smoothing (like Gaussian blur) to reduce noise.
•Edge Detection
•Use edge detectors like:
•Sobel Operator (gradient-based)
•Prewitt Operator
•Roberts Operator
•Canny Edge Detector (most popular, multi-stage)
•These operators calculate where the gradient (change) in intensity is strong.
•Edge Linking and Boundary Detection
•Connect the detected edge points to form meaningful object boundaries.
•Methods:
•Hough Transform
•Morphological processing
•Graph-based approaches
•Contour following
•Post-processing
•Remove false edges (caused by noise).
•Fill gaps between edges to form closed regions.
4. Region-based Segmentation:
• Region-based segmentation is a technique in image processing that
divides an image into regions based on the similarity of pixels within a
region. Unlike edge-based segmentation, which focuses on detecting
boundaries, region-based segmentation emphasizes grouping
together pixels that share common properties, such as intensity, color,
or texture.
Key Concepts of Region based segmentation
1. Region:

• A region is a contiguous group of pixels in an image that are similar according to some criteria (e.g., color, intensity, or texture).

2. Homogeneity:

• The core idea of region-based segmentation is to identify regions that are homogeneous, meaning all pixels within a region share
similar properties.

3. Region Growing:

• This is the process of starting with a seed point and expanding the region by adding neighboring pixels that meet the homogeneity
criteria (a small sketch is given at the end of this section).

4. Region Merging and Splitting:

• Regions may be merged if they are similar enough, or split if they are too heterogeneous, to refine the segmentation.
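As a concrete illustration of region growing, here is a small sketch (the function, the intensity-difference tolerance, and the test image are hypothetical; the slides do not prescribe a specific homogeneity test) that grows a region from a seed pixel using 4-connectivity:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from a seed pixel, adding 4-connected neighbors whose
    intensity differs from the seed by at most tol (a simple homogeneity criterion)."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(img[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):  # N4 neighbors
            if 0 <= nx < h and 0 <= ny < w and not mask[nx, ny] \
               and abs(float(img[nx, ny]) - seed_val) <= tol:
                mask[nx, ny] = True
                queue.append((nx, ny))
    return mask

img = np.array([[100, 102, 180],
                [101,  99, 182],
                [ 98, 103, 179]], dtype=np.uint8)
print(region_grow(img, seed=(0, 0), tol=10).astype(int))
# the 3x2 block of similar intensities on the left grows into one region
```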
