This document provides an overview of digital image processing, defining digital images and their components, such as pixels, and the processes involved in image analysis. It outlines the stages of digital image processing, including image acquisition, enhancement, restoration, and recognition, along with applications in fields such as medical imaging and autonomous vehicles. It also discusses the components of an image processing system and key points related to image sensing and acquisition.

Department of CE/IT
Image Processing, Unit 1

Introduction to Digital Image Processing (01CE0507)

By: Prof. Ankita Chavda
Prepared by: Ankita Chavda
Introduction
• Digital Image:
• An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.
• When x, y and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.
• The field of digital image processing refers to processing digital images by means of a digital computer.
• Pixel is the term most widely used to denote the elements of a digital image.
• Image processing
• Image analysis
• Computer vision
Digital Image Processing
• The continuum from image processing to computer vision can be broken up into low-, mid- and high-level processes.

Low-level process:  Input: image; Output: image. Examples: noise removal, image sharpening.
Mid-level process:  Input: image; Output: attributes. Examples: object recognition, segmentation.
High-level process: Input: attributes; Output: understanding. Examples: scene understanding, autonomous navigation.

In this course we will stop here.


Examples of Fields that Use DIP

Gamma-ray imaging:
• Used in nuclear medicine and astronomical observation.
• In nuclear medicine, a patient is injected with a radioactive isotope that emits gamma rays.
• Images are produced from the emissions collected by gamma-ray detectors.
• Used to detect infections or tumors.


Examples of Fields that Use DIP
X-ray imaging:
• The oldest source of EM radiation used for imaging.
• The best-known use of X-rays is medical diagnostics.
• Also used in industry and in other areas such as astronomy.
• In industry, it is used to examine circuit boards, to detect missing components or broken traces.

Ultraviolet-band imaging:
• Applications of ultraviolet light include lithography, industrial inspection, microscopy, lasers, biological imaging and astronomical observation.
• Ultraviolet light is itself not visible.
• But when a photon of ultraviolet radiation collides with an electron in an atom of fluorescent material, the material emits visible light.
Examples of Fields that Use DIP
Visible and infrared band imaging:
• Applications include light microscopy, astronomy, remote sensing, industry and law enforcement.
• Weather observation and prediction are major applications of multispectral imaging from satellites, which carry sensors in the visible and infrared bands.
• A major area of application in the visual spectrum is automated visual inspection of manufactured goods:
• A machine looks for missing pills.
• Looks for bottles that are not filled up to an acceptable level.
• License-plate reading for traffic monitoring and surveillance.
• In short, the objective is to find damaged or incorrectly manufactured implants automatically, prior to packaging.
Examples of Fields that Use DIP
Microwave-band imaging:
• The dominant application of the microwave band is radar.
• The unique feature of imaging radar is its ability to collect data over virtually any region, at any time, regardless of weather or ambient lighting conditions.
• In many cases, radar is the only way to explore inaccessible regions of the Earth's surface.
• Instead of a camera lens, radar uses an antenna and digital computer processing to record its images.

Radio-band imaging:
• Major applications in the radio band are in medicine and astronomy.
• Example: MRI (Magnetic Resonance Imaging).


Applications & Research Topics

• Document handling
• Signature verification
• Biometrics
• Fingerprint verification / identification (research at UNR: minutiae matching, Delaunay triangulation)
• Object recognition (recognizing a novel view from reference views)
• Indexing into databases (by shape content; by color and texture)
• Target recognition (Department of Defense: Army, Air Force, Navy)
• Interpretation of aerial photography (a problem domain in both computer vision and registration)
• Autonomous vehicles (land, underwater, space)
• Traffic monitoring
• Face detection and recognition (research at UNR)
• Facial expression recognition
• Face tracking
• Hand gesture recognition (smart human-computer user interfaces, sign language recognition)
• Human activity recognition
• Medical applications (skin cancer, breast cancer)
• Morphing
• Inserting artificial objects into a scene

Companies in this field in India:
• Sarnoff Corporation
• Kritikal Solutions
• National Instruments
• GE Laboratories
• Ittiam, Bangalore
• Interra Systems, Noida
• Yahoo India (multimedia searching)
• nVidia Graphics, Pune (have high requirements)
• Microsoft Research
• DRDO labs
• ISRO labs
• …
Key Stages in Digital Image Processing

[Flowchart, from Gonzalez & Woods, Digital Image Processing (2002): starting from the problem domain, the stages are Image Acquisition → Image Enhancement → Image Restoration → Colour Image Processing → Image Compression → Morphological Processing → Segmentation → Representation & Description → Object Recognition, with a knowledge base interacting with every stage. The following slides repeated this diagram, highlighting each stage in turn.]


Key Stages in Digital Image Processing

• Image acquisition is the very first stage. Generally this stage involves preprocessing, such as scaling.
• Image enhancement is the process of manipulating an image so that the result is more suitable than the original for a specific application.
• Enhancement techniques are problem-oriented (subjective).
• Image restoration is an area that deals with improving the appearance of an image.
• It is objective, and based on mathematical or probabilistic models of image degradation.
• Color image processing is gaining importance due to the increasing use of digital images over the internet.
• Wavelets are the foundation for representing images in various degrees of resolution, in which images are subdivided into smaller regions.


Key Stages in Digital Image Processing

• Compression deals with techniques for reducing the storage required to save an image.
• The .jpg file extension is widely used over the internet.
• Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape.
• Segmentation partitions an image into its constituent parts or objects. In general, the more accurate the segmentation, the more likely recognition is to succeed.
• Representation and description almost always follows the output of a segmentation stage.
• Description, also called feature selection, extracts attributes and results in some quantitative information.
• Recognition is the process that assigns a label to an object based on its descriptors.


Key Stages in Digital Image Processing

• Knowledge base:
• Knowledge about the problem domain is coded into an image processing system in the form of a knowledge base.
• The knowledge base can be simple or complex.
• The knowledge base also controls the interaction between processing modules.


Components of an Image Processing System

[Figure: components of a general-purpose image processing system]


Components of an Image Processing System

• The figure shows the basic components of a typical general-purpose system used for digital image processing.

1. Sensing: two elements are required to acquire digital images.
• Physical device: sensitive to the energy radiated by the object we wish to image.
• Digitizer: a device for converting the output of the physical sensing device into digital form.
2. Specialized image processing hardware:
• Consists of the digitizer plus hardware (e.g., an ALU) that performs operations in parallel on entire images.
• This type of hardware is also called a front-end subsystem.
• Its distinguishing characteristic is speed.


Components of an Image Processing System

3. The computer in an image processing system can range from a PC to a supercomputer. Almost any well-equipped PC-type machine is suitable for offline image processing tasks.

4. Software for image processing consists of specialized modules that perform specific tasks. A well-designed package also includes, at minimum, the capability for the user to write code.

5. Mass storage capability is a must in image processing applications. An image of 1024 × 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires 1 MB of storage space if the image is not compressed. Storing millions of images is therefore a challenging task.


Components of an Image Processing System

Three categories of digital storage:
1. Short-term storage for use during processing
• Computer memory
2. Online storage for relatively fast recall
• Magnetic disks or optical-media storage
3. Archival storage for infrequent access
• Magnetic tapes or optical disks

6. Image displays are generally TV monitors, driven by graphics display cards that are an integral part of the computer system. It is sometimes necessary to have stereo displays (e.g., the goggles used to watch 3D movies).


Components of an Image Processing System

7. Hardcopy devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units such as CDs.

8. Networking is used to transmit large amounts of data, and the key consideration is the bandwidth available for that transmission. On a dedicated network this is not a problem, but communication with remote sites is not always as efficient.


Key Points to Remember
• Photon: a stream of massless particles, travelling in a wavelike pattern at the speed of light; each bundle carrying a certain amount of energy is called a photon.
• Energy is proportional to frequency.
• The visible band of the electromagnetic spectrum is divided into six broad regions: violet, blue, green, yellow, orange and red.
• Light that is void of color is known as monochromatic light; light with color is known as chromatic light.
• Gray level is the term used to describe monochromatic intensity, ranging from black through grays to white.
• Radiance is the total amount of energy that flows from the light source, measured in watts (W).
• Brightness is a subjective descriptor of light perception that is practically impossible to measure.


Image Sensing and Acquisition

• Images are generated by the combination of an "illumination" source and the reflection or absorption of energy from that source by the elements of the "scene" being imaged.
• The figures show the sensor arrangements used to transform illumination energy into digital images.


Fig. 1 – Single imaging sensor
Fig. 2 – Line sensor
Fig. 3 – Array sensor


Image Acquisition Using a Sensor Array

[Figure: individual sensors arranged as a 2-D array, with the acquisition process]


Image Acquisition Using a Sensor Array

• The figure shows individual sensors arranged in the form of a 2-D array.
• This is the predominant arrangement found in digital cameras.
• CCD sensors are widely used in digital cameras and other light-sensing instruments.
• An image is obtained by focusing the energy pattern onto the surface of the array.
• The sensor array produces an output proportional to the integral of the light received at each sensor.
• This output is then digitized by another section of the image processing system.


A Simple Image Formation Model

f(x, y) = i(x, y) · r(x, y)

f(x, y): intensity at the point (x, y)
i(x, y): illumination at the point (x, y)
  (the amount of source illumination incident on the scene)
r(x, y): reflectance/transmissivity at the point (x, y)
  (the amount of illumination reflected/transmitted by the object)
where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1
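As a small illustration of the model, the product f = i · r can be evaluated element-wise with NumPy. The illumination gradient and reflectance pattern below are made up purely for demonstration:

```python
import numpy as np

# Illumination i(x, y): bright on the left, dimmer on the right (values in (0, inf)).
i = np.linspace(100.0, 20.0, 8).reshape(1, 8).repeat(8, axis=0)

# Reflectance r(x, y): a dark square on a light background (values in (0, 1)).
r = np.full((8, 8), 0.8)
r[2:6, 2:6] = 0.1

# Image formation: f(x, y) = i(x, y) * r(x, y), element-wise.
f = i * r

print(f.min(), f.max())
```

Because 0 < i < ∞ and 0 < r < 1, every intensity in f is positive and strictly smaller than the local illumination.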


Image Sampling and Quantization

• To create a digital image, we need to convert the continuous sensed data into digital form.
• This involves two processes:
1. Sampling
2. Quantization
• An image may be continuous with respect to the x and y coordinates, and also in amplitude.
• To convert it into digital form, we have to sample the function in both coordinates and in amplitude.
• Digitizing the coordinate values is called sampling.
• Digitizing the amplitude values is called quantization.
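The two steps can be sketched in NumPy. Here `cont(x, y)` is a hypothetical stand-in for a continuous scene; sampling evaluates it on a discrete M × N grid, and quantization rounds each amplitude to one of L = 2^k gray levels:

```python
import numpy as np

def cont(x, y):
    # A stand-in for a continuous scene: a smooth intensity surface in [0, 1].
    return 0.5 * (1 + np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y))

# Sampling: evaluate the continuous function on an M x N grid of coordinates.
M, N = 64, 64
xs = np.linspace(0, 1, N)
ys = np.linspace(0, 1, M)
samples = cont(xs[None, :], ys[:, None])

# Quantization: map each amplitude to one of L = 2**k discrete gray levels.
k = 3                 # bits per pixel
L = 2 ** k            # 8 gray levels, in [0, L-1]
digital = np.clip(np.round(samples * (L - 1)), 0, L - 1).astype(np.uint8)

print(digital.shape, digital.min(), digital.max())
```

The result is a finite grid of finite integer values, i.e. a digital image in the sense defined earlier.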


Image Sampling and Quantization

[Figures: a continuous image, an intensity scan line with sample points, and the resulting sampled and quantized image]


Image Sampling and Quantization

• The random variations along the scan line are due to image noise.
• The set of these discrete locations gives the sampled function.
• In order to form a digital image, the gray-level values must also be converted (quantized) into discrete quantities.
• In the figure, the gray-level scale is divided into 8 discrete levels, ranging from black to white.
• The digital image results from applying both sampling and quantization.


Representing Digital Image

[Figure: three basic ways of representing f(x, y)]


Representing Digital Image

• Suppose we sample a continuous image into a 2-D array, f(x, y), containing M rows and N columns, where (x, y) are discrete coordinates.
• In general, the value of the image at any coordinates (x, y) is denoted f(x, y), where x and y are integers.
• The section of the real plane spanned by the coordinates of an image is called the spatial domain, with x and y being referred to as spatial variables or spatial coordinates.
• The figure shows three basic ways to represent f(x, y).
• In the displayed image, the intensity of each point is proportional to the value of f at that point.
• In the figure, there are only 3 equally spaced intensity values, e.g. [0, 0.5, 1] for black, gray and white respectively.
• The third representation is simply to display the numeric values of f(x, y) as an array (matrix).


Representing Digital Image

          | f(0, 0)      f(0, 1)      ...   f(0, N-1)    |
f(x, y) = | f(1, 0)      f(1, 1)      ...   f(1, N-1)    |
          | ...          ...          ...   ...          |
          | f(M-1, 0)    f(M-1, 1)    ...   f(M-1, N-1)  |

• Each element of this matrix is called an image element, picture element, pixel or pel.
• The digitization process requires that decisions be made regarding the values of M and N, and the number L of discrete intensity levels.
• Due to storage and quantizing-hardware considerations, the number of intensity levels is typically an integer power of 2.


Representing Digital Image

• We assume that the discrete levels are equally spaced integers in the interval [0, L-1].
• The range of values spanned by the gray scale is referred to as the dynamic range.
• The dynamic range of an imaging system is defined as the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system.
• The upper limit is determined by saturation and the lower limit by noise.
• The dynamic range establishes the highest and lowest intensity levels that a system can represent.
• The difference in intensity between the highest and lowest intensity levels in an image is known as the image contrast.


Representing Digital Image
The number b of bits required to store an M × N digitized image with k bits per pixel is
b = M × N × k
When M = N, then b = N²k.
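The formula can be checked directly; for the 1024 × 1024, 8-bit example given earlier it yields exactly 1 MB uncompressed:

```python
def storage_bits(M, N, k):
    # Bits needed to store an M x N image with k bits per pixel: b = M * N * k.
    return M * N * k

b = storage_bits(1024, 1024, 8)
print(b, "bits =", b // 8, "bytes")  # 8388608 bits = 1048576 bytes = 1 MB
```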


Spatial & Intensity Resolution

• Spatial resolution is a measure of the smallest discernible (visible/noticeable) detail in an image.
• It can be measured in line pairs per unit distance or in dots (pixels) per unit distance.
• We generally use dots per unit distance, e.g. dots per inch (dpi).
• Examples: newspapers (75 dpi), magazines (133 dpi), glossy brochures (175 dpi), book pages (2400 dpi).
• Intensity resolution refers to the smallest discernible change in intensity level.
• The number of intensity levels is usually an integer power of 2.
• 8 bits of intensity resolution is most common.


Image Interpolation
• Interpolation is a basic tool used extensively in tasks such as zooming, shrinking, rotating and geometric corrections.
• The objective is to introduce interpolation and apply it to image resizing, which is basically an image resampling method.
• Interpolation is the process of using known data to estimate values at unknown locations.
• Example: a 500 × 500 image enlarged 1.5 times becomes 750 × 750 pixels.
• To assign an intensity level to any new point, look for its closest pixel in the original image and assign that pixel's intensity to the new pixel.
• This method is called nearest-neighbor interpolation.
• Bilinear interpolation → uses the 4 nearest neighbors.
• Bicubic interpolation → uses the 16 nearest neighbors.
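Nearest-neighbor interpolation can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not a production resizer; it maps each output pixel back to a source pixel by flooring the scaled coordinate, which is one common convention (exact rounding rules vary between libraries):

```python
import numpy as np

def nearest_neighbor_resize(img, new_h, new_w):
    # Assign each output pixel the value of its closest input pixel.
    h, w = img.shape
    rows = np.minimum((np.arange(new_h) * h / new_h).astype(int), h - 1)
    cols = np.minimum((np.arange(new_w) * w / new_w).astype(int), w - 1)
    return img[rows[:, None], cols[None, :]]

# Enlarge a tiny 2 x 2 image 1.5 times, to 3 x 3.
img = np.array([[0, 100],
                [200, 255]], dtype=np.uint8)
big = nearest_neighbor_resize(img, 3, 3)
print(big)
```

Applied to a 500 × 500 image with `nearest_neighbor_resize(img, 750, 750)`, this reproduces the 1.5× enlargement example from the slide.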


Basic Relationships between Pixels
As we know, an image is denoted by f(x, y). We will look at several relationships between pixels.
1. Neighbors of a pixel:
• A pixel p at coordinates (x, y) has four horizontal and vertical neighbors, whose coordinates are
(x+1, y), (x-1, y), (x, y+1), (x, y-1)
• This set of pixels, called the 4-neighbors of p, is denoted N4(p).
• Some of the neighbor locations of p lie outside the digital image if (x, y) is on the border of the image.
• The 4 diagonal neighbors of p have the coordinates below and are denoted ND(p):
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
• These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted N8(p).
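The three neighborhoods above translate directly into code. A small sketch:

```python
def n4(x, y):
    # 4-neighbors of p: the horizontal and vertical neighbors.
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    # Diagonal neighbors of p.
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    # 8-neighbors of p: the union of N4(p) and ND(p).
    return n4(x, y) | nd(x, y)

print(sorted(n8(1, 1)))
```

As the slide notes, for a pixel on the image border some of these coordinates fall outside the image; a real implementation would filter them against the image dimensions.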


Basic Relationships between Pixels

[Figure: the N4, ND and N8 neighborhoods of a pixel p]


Basic Relationships between Pixels
2. Adjacency:
• Let V be the set of intensity values used to define adjacency.
• For example, in a binary image V = {1}; in a gray-scale image V could be any subset of the possible intensity values 0 to 255.
• We consider 3 types of adjacency:
• 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
• 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
• m-adjacency: two pixels p and q with values from V are m-adjacent if
  (i) q is in the set N4(p), or
  (ii) q is in the set ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
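The m-adjacency definition can be checked directly. A sketch for a small binary image with V = {1} (the array and helper names here are illustrative, not from the slides):

```python
import numpy as np

def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def in_v(img, p, V):
    # True if p lies inside the image and its value belongs to V.
    x, y = p
    return 0 <= x < img.shape[0] and 0 <= y < img.shape[1] and img[x, y] in V

def m_adjacent(img, p, q, V=frozenset({1})):
    # (i) q in N4(p), or (ii) q in ND(p) and N4(p) ∩ N4(q) has no pixel in V.
    if not (in_v(img, p, V) and in_v(img, q, V)):
        return False
    if q in n4(p):
        return True
    if q in nd(p):
        return not any(in_v(img, r, V) for r in n4(p) & n4(q))
    return False

img = np.array([[0, 1, 1],
                [0, 1, 0],
                [0, 0, 1]])
print(m_adjacent(img, (0, 1), (1, 1)))  # True: q is a 4-neighbor of p
print(m_adjacent(img, (1, 1), (2, 2)))  # True: diagonal, no shared 4-neighbor is in V
print(m_adjacent(img, (0, 2), (1, 1)))  # False: the shared 4-neighbor (0, 1) is in V
```

The last case shows the point of m-adjacency: the diagonal link is suppressed because the two pixels are already connected by a 4-path through (0, 1), removing the ambiguity of multiple 8-paths.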


Basic Relationships between Pixels

Connectivity and regions:
• A (digital) path (or curve) from pixel p with coordinates (x0, y0) to pixel q with coordinates (xn, yn) is a sequence of distinct pixels with coordinates
(x0, y0), (x1, y1), …, (xn, yn)
• where (x0, y0) = p and (xn, yn) = q,
• and where (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n. Here n is the length of the path.
• If (x0, y0) = (xn, yn), the path is a closed path.
• We can define 4-, 8-, and m-paths based on the type of adjacency used.


Basic Relationships between Pixels
Connected in S:
• Let S represent a subset of pixels in an image. Two pixels p with coordinates (x0, y0) and q with coordinates (xn, yn) are said to be connected in S if there exists a path
(x0, y0), (x1, y1), …, (xn, yn)
• where for every i, 0 ≤ i ≤ n, (xi, yi) ∈ S.
• For every pixel p in S, the set of pixels in S that are connected to p is called a connected component of S.
• If S has only one connected component, then S is called a connected set.


Basic Relationships between Pixels
• We call R a region of the image if R is a connected set.
• Two regions Ri and Rj are said to be adjacent if their union forms a connected set.
• Regions that are not adjacent are said to be disjoint.
• The boundary of a region R is the set of pixels in the region that have one or more neighbors that are not in R.
• If R happens to be an entire image, then its boundary is defined as the set of pixels in the first and last rows and columns of the image.
• Foreground and background:
• Suppose an image contains K disjoint regions, Rk, k = 1, 2, …, K. Let Ru denote the union of all K regions, and let (Ru)^c denote its complement.
• All the points in Ru are called the foreground;
• All the points in (Ru)^c are called the background.
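The definitions of connectivity and regions lead naturally to connected-component labeling. A minimal sketch using breadth-first flood fill under 4-adjacency (function and variable names are illustrative):

```python
import numpy as np
from collections import deque

def label_components(img):
    # Label the 4-connected regions of a binary image; returns (labels, count).
    labels = np.zeros_like(img, dtype=int)
    current = 0
    for sx in range(img.shape[0]):
        for sy in range(img.shape[1]):
            if img[sx, sy] and labels[sx, sy] == 0:
                current += 1                      # start a new connected component
                queue = deque([(sx, sy)])
                labels[sx, sy] = current
                while queue:
                    x, y = queue.popleft()
                    for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                        if (0 <= nx < img.shape[0] and 0 <= ny < img.shape[1]
                                and img[nx, ny] and labels[nx, ny] == 0):
                            labels[nx, ny] = current
                            queue.append((nx, ny))
    return labels, current

img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1]])
labels, n = label_components(img)
print(n)  # 2 regions under 4-adjacency
```

Swapping the four-tuple of neighbor offsets for all eight offsets would give 8-connected regions instead.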


Distance Measures
Given pixels p, q and z, with coordinates (x, y), (s, t) and (u, v) respectively, a distance function D has the following properties:

a. D(p, q) ≥ 0 [D(p, q) = 0 if and only if p = q]

b. D(p, q) = D(q, p)

c. D(p, z) ≤ D(p, q) + D(q, z)


Distance Measures
The following are different distance measures:

a. Euclidean distance:
De(p, q) = [(x - s)² + (y - t)²]^(1/2)

b. City-block distance:
D4(p, q) = |x - s| + |y - t|

c. Chessboard distance:
D8(p, q) = max(|x - s|, |y - t|)
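The three measures translate directly into code. A quick sketch, evaluated for the pixel pair (0, 0) and (3, 4):

```python
import math

def d_euclidean(p, q):
    # De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_city_block(p, q):
    # D4(p, q) = |x - s| + |y - t|
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_chessboard(p, q):
    # D8(p, q) = max(|x - s|, |y - t|)
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_euclidean(p, q), d_city_block(p, q), d_chessboard(p, q))  # 5.0 7 4
```

Note that for any pair of pixels D8 ≤ De ≤ D4, since max(|a|, |b|) ≤ sqrt(a² + b²) ≤ |a| + |b|.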


Thank you
