Updated Digital Image Processing
Introduction
Digital Image Processing
Image
• An image is a visual representation of an object or scene, created
by capturing light, color, and intensity information.
Digital Image
• Digital Image: a two-dimensional array of pixels, where each pixel holds data representing the color or intensity at that particular point. These pixels are arranged in rows and columns to form the complete image. (A digital image is a specific type of image that is represented by numerical data, usually stored as a matrix of pixels.)
Origins of Digital Image Processing
• One of the first applications of digital images was in the newspaper industry. Pictures were first sent by submarine cable between London and New York in the early 1920s, enabling fast transmission. Specialized printing equipment coded pictures for cable transmission and then reconstructed them at the receiving end.
• This technique was abandoned toward the end of 1921 in favor of one based on photographic reproduction made from tapes perforated at the telegraph receiving terminal. The improvements are evident, both in tonal quality and in resolution.
• The early Bartlane systems were capable of coding images in five
distinct levels of gray. This capability was increased to 15 levels in
1929.
Digital Image Processing
• The term digital image processing generally refers to processing of a two-dimensional picture by a digital computer. In a broader context, it implies digital processing of any two-dimensional data.
Application of DIP
• Digital image processing has a broad spectrum of applications,
such as
• remote sensing via satellites and other spacecraft,
• image transmission and storage for business applications,
• medical processing,
• radar,
• sonar, and acoustic image processing,
• robotics, and
• automated inspection of industrial parts.
Remote sensing via satellites and other
spacecraft
• Images acquired by satellites are useful in tracking of earth resources; geographical mapping; prediction of agricultural crops, urban growth, and weather; flood and fire control; and many other environmental applications. Space image applications include recognition and analysis of objects contained in images obtained from deep space-probe missions.
Image transmission and storage application
• broadcast television, teleconferencing, transmission of facsimile images (printed documents and graphics) for office automation, communication over computer networks, closed-circuit television based security monitoring systems, and military communications.
Medical applications
• In medical applications one is concerned with processing of chest X-rays, projection images of tomography, and other medical images that occur in radiology, nuclear magnetic resonance (NMR), and ultrasonic scanning. These images may be used for patient screening and monitoring or for detection of tumors or other diseases in patients.
Radar and sonar
• Radar and sonar images are used for detection and recognition of
various types of targets or in guidance of aircraft or missile
systems.
Fundamental Steps in Image Processing
• 1. Image Acquisition – this step involves acquiring an image in digital form and preprocessing it, such as scaling.
• 2. Image Enhancement – the process of manipulating an image so that the result is more suitable than the original for a specific application, e.g., filtering, sharpening, smoothing, etc.
• 3. Image Restoration – these techniques also deal with improving the appearance of an image. But, as opposed to image enhancement, which is based on human subjective preferences regarding what constitutes a good enhancement result, image restoration is based on mathematical or probabilistic models of image degradation.
• 4. Color Image Processing – color processing in the digital domain is used as the basis for extracting features of interest in an image.
• 5. Wavelets – used to represent images in various degrees of resolution.
• 6. Image Compression – techniques for reducing the storage required to save an image, or the bandwidth required to transmit it.
• 7. Image Segmentation – these procedures partition an image into its constituent parts or objects. This gives raw pixel data, constituting either the boundary of a region or all the points in the region itself.
Types of image
• Analog image
• Digital Image
Analog image: f(x, y) is an analog image if the values of the coordinates (x, y) and of f are continuous in nature.
f(x, y) is an image,
where (x, y) are the spatial coordinates and
f is the amplitude (intensity level) at that point.
Process (digitizing an analog image):
1. Convert the image into an equivalent one-dimensional function (along line AB).
2. Plot the samples on a timeline (sampling).
3. Quantize the sampled values (quantization).
Sampling
• Sampling has a direct relationship with image pixels. The total number of pixels in an image can be calculated as pixels = total number of rows × total number of columns. For example, if we have a total of 36 pixels, that means we have a square image of 6 × 6. As we know, in sampling, more samples eventually result in more pixels. So it means that from our continuous signal, we have taken 36 samples along the x axis.
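A minimal sketch of this pixel-count relationship and of coarser sampling, assuming NumPy and a made-up 6 × 6 array:

```python
import numpy as np

# Hypothetical 6x6 grayscale image (values 0..255), just for illustration.
img = np.arange(36, dtype=np.uint8).reshape(6, 6)

rows, cols = img.shape
print(rows * cols)       # pixels = rows * columns = 36

# Coarser sampling: keep every 2nd row and column.
sampled = img[::2, ::2]  # 3x3 = 9 pixels
print(sampled.shape)     # (3, 3)
```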
M-connectivity (mixed connectivity) of pixels is used to resolve the ambiguity in pixel connectivity: with 8-connectivity, two pixels can have two direct connections, one as 4-neighbours and one as diagonal neighbours. Which connection or adjacency is then best or suitable? M-connectivity says that if you have two paths, one through a 4-neighbour and one through a diagonal neighbour, choose the 4-neighbour for the connection.
Formula: each pixel is replaced by the average of the pixels in its 3×3 neighborhood (except at borders).
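A minimal sketch of this 3×3 averaging, assuming NumPy and a grayscale array; the function name mean_filter_3x3 is hypothetical:

```python
import numpy as np

def mean_filter_3x3(img):
    """Replace each interior pixel with the mean of its 3x3 neighborhood."""
    img = img.astype(np.float64)
    out = img.copy()                       # border pixels are left unchanged
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = img[i-1:i+2, j-1:j+2].mean()
    return out.astype(np.uint8)
```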
Example: compute A + B (pixel-wise addition of two images A and B).
Non-Linear Operations:
• Non-linear operations do not satisfy superposition (see the numeric check after the lists below), meaning:
T(aA + bB) ≠ aT(A) + bT(B)
1. Spatial Domain Non-Linear Operations
1. Median Filtering: replaces each pixel with the median of its neighborhood.
2. Morphological Operations: Erosion shrinks objects in an image; Dilation expands objects in an image.
2. Frequency Domain Non-Linear Operations
1. Wavelet Transform
2. Bilateral Filtering
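A tiny numeric check of the non-superposition property (taking a = b = 1), using the median as T on made-up 1-D arrays:

```python
import numpy as np

A = np.array([0, 10, 0])
B = np.array([10, 0, 0])

# T = median over the (toy) neighborhood
print(np.median(A) + np.median(B))   # 0.0 + 0.0 = 0.0
print(np.median(A + B))              # median([10, 10, 0]) = 10.0, so T(A+B) != T(A)+T(B)
```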
Module 2
Image Enhancement in the Spatial Domain
Spatial Domain
Purpose:
•Enhances details in dark regions of an image
•Useful for medical imaging, satellite imaging, etc.
Example
In a negative transformation, each pixel value in the input image is subtracted from the maximum possible
pixel value (L-1) to produce the corresponding pixel in the output image.
s=(L−1)−r
r = input pixel value
s = output pixel value
L = number of possible intensity levels (e.g., 256 for 8-bit images, so L − 1 = 255)
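A minimal sketch of the negative transformation s = (L−1) − r, assuming NumPy and a made-up 8-bit array:

```python
import numpy as np

L = 256                          # 8-bit image: intensity levels 0..255
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)

negative = (L - 1) - img         # s = (L-1) - r, applied pixel-wise
print(negative)                  # [[255 191] [127   0]]
```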
1. Log transformation
2. Inverse log transformation
Log Transformation
The logarithmic transformation maps a narrow range of low input
pixel values to a wider range of output values. It’s especially
useful for enhancing details in darker regions of an image.
s=c⋅log(1+r)
The value 1 is added to each of the pixel values of the input image because if there is a pixel intensity of 0 in the image, then log(0) is undefined (it tends to negative infinity). So 1 is added, to make the minimum value at least 1.
Calculate the value of c (scaling factor)
• c = scaling constant, usually c = (L − 1) / log(1 + r_max) (e.g., 255 / log(256) for an 8-bit image) so the results are normalized to the full output range.
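A minimal sketch of the log transformation with c normalized as above, assuming NumPy and a made-up array:

```python
import numpy as np

img = np.array([[0, 10], [100, 255]], dtype=np.float64)

# Scaling constant so the output spans 0..255 for an 8-bit image.
c = 255.0 / np.log(1.0 + img.max())

log_img = c * np.log(1.0 + img)   # s = c * log(1 + r)
print(np.round(log_img))          # dark values are spread out, bright values compressed
```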
Transformed Image
Inverse Log Transformation
The inverse log transformation is used to reverse the effect of log
transformation. While log compresses high pixel values and
expands low ones, the inverse log transformation does the
opposite — it expands high values and compresses low values.
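Assuming the forward mapping s = c·log(1 + r), one minimal way to write the inverse log transformation applied to an input value r is:
s = e^(r/c) − 1
which expands high values and compresses low values, as described above.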
Power-Law
• Power-law transformations, also known as gamma correction, are a
class of mathematical transformations used to adjust the tonal and
brightness characteristics of an image.
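For reference, the standard form of the power-law (gamma) transformation, in the same notation as the transformations above, is:
s = c·r^γ
where c and γ are positive constants; γ < 1 expands dark (low-intensity) values, while γ > 1 expands bright (high-intensity) values.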
Clearly, this can work properly only if the two images have identical
dimensions. If they do not, then combination is still possible, but a meaningful
result can be obtained only in the area of overlap. If our images have
dimensions w1 × h1 and w2 × h2, and we assume that their origins are aligned, then the new image will have dimensions w × h, where:
w = min(w1, w2)
h = min(h1, h2)
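A minimal sketch of combining two images over their area of overlap under these assumptions (8-bit grayscale NumPy arrays, origins aligned at the top-left; the helper name add_overlap is hypothetical):

```python
import numpy as np

def add_overlap(img1, img2):
    """Add two grayscale images of possibly different sizes over their overlap."""
    h = min(img1.shape[0], img2.shape[0])   # h = min(h1, h2)
    w = min(img1.shape[1], img2.shape[1])   # w = min(w1, w2)
    # Add in the overlapping region, clipping to the valid 8-bit range.
    total = img1[:h, :w].astype(np.int32) + img2[:h, :w].astype(np.int32)
    return np.clip(total, 0, 255).astype(np.uint8)
```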
Effect of Addition on an Image
Subtraction
• Subtracting two 8-bit grayscale images can produce values between −255 and +255.
g(x,y) = |f1 (x,y) - f2 (x,y)|
The main application for image subtraction is in change detection (or motion detection)
If we make two observations of a scene and compute their difference using the above equation, then
changes will be indicated by pixels in the difference image which have non-zero values.
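A minimal sketch of change detection via the absolute difference above, assuming NumPy 8-bit inputs; the threshold value is an illustrative assumption:

```python
import numpy as np

def change_mask(f1, f2, threshold=30):
    """g = |f1 - f2|; pixels above the threshold are flagged as changes."""
    diff = np.abs(f1.astype(np.int32) - f2.astype(np.int32))  # values 0..255
    return diff > threshold                                   # boolean change mask
```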
Assignment… (Arithmetic and Logical operation on images)
Spatial Filtering: Smoothening and Sharpening Spatial Filters, Combining Spatial Enhancement Methods
Example: a 3×3 weighted-average smoothing kernel (weights sum to 16, so the result is divided by 16):
1 2 1
2 4 2
1 2 1
Non-Linear Filtering
In non-linear filtering, the output pixel value is not a weighted
sum of the input values.
Instead, it uses non-linear operations such as:
• Median
• Max/Min
• Conditional logic
• Morphological operations (like dilation, erosion)
Example: median filter
3×3 neighborhood:
[12, 150, 18]
[20, 255, 22]
[30, 40, 35]
Sorted: [12, 18, 20, 22, 30, 35, 40, 150, 255] → median is 30 → replace center pixel (255) with 30
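A minimal sketch of a 3×3 median filter reproducing the worked example above, assuming NumPy; the function name is hypothetical:

```python
import numpy as np

def median_filter_3x3(img):
    """Replace each interior pixel with the median of its 3x3 neighborhood."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i-1:i+2, j-1:j+2])
    return out

# The worked example above: the noisy center pixel 255 becomes 30.
patch = np.array([[12, 150, 18],
                  [20, 255, 22],
                  [30,  40, 35]])
print(median_filter_3x3(patch)[1, 1])   # 30
```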
1. Laplacian Filter
• Detects edges in all directions (isotropic).
0 -1 0
-1 4 -1
0 -1 0
2. Sobel Filter
• Detects edges in a specific direction (horizontal or vertical).
Sobel X (horizontal gradient; responds to vertical edges):
-1 0 1
-2 0 2
-1 0 1
Sobel Y (vertical gradient; responds to horizontal edges):
-1 -2 -1
0 0 0
1 2 1
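A minimal sketch applying these kernels by straightforward spatial filtering (correlation), assuming NumPy; the helper name filter3x3 is hypothetical:

```python
import numpy as np

# Sobel kernels as given above.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
SOBEL_Y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]])

def filter3x3(img, kernel):
    """Naive 3x3 spatial filtering (correlation); borders are skipped."""
    img = img.astype(np.float64)
    out = np.zeros_like(img)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.sum(img[i-1:i+2, j-1:j+2] * kernel)
    return out

# Usage: gx = filter3x3(img, SOBEL_X); gy = filter3x3(img, SOBEL_Y)
# Combined edge strength: np.hypot(gx, gy)
```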
Assignment 1
1. Image Transformation in Frequency Domain
Assignment 2
• Smoothening and Sharpening Frequency Domain Filters
• Homomorphic Filters
Noise
• Noise is an unwanted and random variation in brightness or color information that produces a different pixel value instead of the true pixel value.
• Noise adds unwanted information to digital images. Noise produces undesirable effects such as artifacts, unrealistic edges, unseen lines, corners, blurred objects, and disturbed background scenes.
Why is noise introduced in images?
• There are several reasons for this:
Sensor limitations:
• Digital cameras (like CMOS or CCD sensors) can't
perfectly capture light.
Environmental conditions:
• Low light → sensors need to "amplify" signals →
amplification also amplifies noise.
Transmission errors:
• If the image is sent over a network (Wi-Fi, satellite, etc.),
interference can corrupt some pixel data.
Compression and digitization:
• When analog signals (real world) are converted to digital
(pixels), rounding and quantization introduce tiny errors.
A(x, y) = H(x, y) + B(x, y)
where A(x, y) is the noisy image, H(x, y) is the image noise, and B(x, y) is the original image.
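A minimal sketch of this additive model, assuming NumPy and Gaussian-distributed noise H (the sigma value is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(original, sigma=20.0):
    """Additive model A = H + B: noise H drawn from N(0, sigma^2)."""
    noise = rng.normal(0.0, sigma, original.shape)    # H(x, y)
    noisy = original.astype(np.float64) + noise       # A = H + B
    return np.clip(noisy, 0, 255).astype(np.uint8)
```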
Why we learn Noise Model…
• To reduce undesirable effects, prior learning of noise models is
essential for further processing.
1. Region:
• A region is a contiguous group of pixels in an image that are similar according to some criteria (e.g., color, intensity, or texture).
2. Homogeneity:
• The core idea of region-based segmentation is to identify regions that are homogeneous, meaning all pixels within a region share
similar properties.
3. Region Growing:
• This is the process of starting with a seed point and expanding the region by adding neighboring pixels that meet the homogeneity criteria (see the sketch after this list).
4. Region Splitting and Merging:
• Regions may be merged if they are similar enough, or split if they are too heterogeneous, to refine the segmentation.
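A minimal sketch of region growing under stated assumptions (grayscale NumPy image, 4-connectivity, and a fixed intensity tolerance as the homogeneity criterion):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from a seed pixel, adding 4-neighbors whose intensity
    is within `tol` of the seed value (a simple homogeneity criterion)."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    seed_val = int(img[seed])
    while queue:
        i, j = queue.popleft()
        if not (0 <= i < h and 0 <= j < w) or mask[i, j]:
            continue
        if abs(int(img[i, j]) - seed_val) > tol:
            continue                     # pixel fails the homogeneity criterion
        mask[i, j] = True                # accept pixel into the region
        queue.extend([(i-1, j), (i+1, j), (i, j-1), (i, j+1)])
    return mask
```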