
Register No.

SRM Institute of Science and Technology


College of Engineering and Technology
SET - B
School of Computing
(Common to all branches)
Academic Year: 2023-24 (ODD)

Test: CLA-T1 Date: 20-2-2024


Course Code & Title: 21CSE251T DIGITAL IMAGE PROCESSING Duration: 100 minutes
Year & Sem: II Year / IV Sem Max. Marks: 50

Course Articulation Matrix: (to be placed)

S.No.  Course Outcome  PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2 PSO3
1      CO1              3   2   -   -   -   -   -   -   -   -    -    -    -    2    -
2      CO2              3   2   -   1   -   -   -   -   -   -    -    -    -    2    -
3      CO3              3   -   2   -   2   -   -   -   -   1    -    -    -    2    -
4      CO4              3   2   -   1   -   -   -   -   -   -    -    -    -    2    -
5      CO5              3   -   2   1   2   -   -   -   -   1    -    -    -    2    -

Part – B
(4 x 5 = 20 Marks)
Answer All 4 Questions
21 a You are an agricultural researcher trying to diagnose a rare disease based on a plant's symptoms. How would you use the digital image processing steps to predict the disease? (5 marks, BL: L3, CO: 1, PO: 1, PI: 1.3.1)

Image Sensors:
An image sensor or imager is a sensor that detects and conveys the information used to make an image.
Image sensors sense the intensity, amplitude, coordinates, and other features of the image and pass the result to the image processing hardware. The sensor interfaces with the problem domain.
Image Processing Software:
The software comprises all the mechanisms and algorithms used in the image processing system, for example:
 Processing tools: DIY filters, standard filters, GPU (Graphics Processing Unit, the main IC on a graphics adapter) filters, OpenCV filters, ImageJ filters
 Python tools: PIL, scikit-image, SimpleCV
 Dataflow tools: Filter Forge
Image Processing Hardware:
It is used to process the instructions obtained from the
image sensors. It passes the result to general purpose
computer. The three most common choices for image
processing platforms in machine vision applications are
the
 Central processing unit (CPU)
 Graphics processing unit (GPU),
 Field programmable gate array (FPGA)
Once the image is processed, it is stored on a storage device, which can be a pen drive or any external ROM device. The monitor or display screen displays the processed images. The network is the connection between all the above elements of the image processing system.
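A minimal Python sketch of how these elements chain together for the plant-disease task (an illustration only, assuming OpenCV and NumPy; the file name, lesion color range, and CLAHE settings below are hypothetical choices, not prescribed by the answer key):

# Illustrative plant-disease pipeline: acquire -> enhance -> segment -> describe.
import cv2
import numpy as np

# Image acquisition: read the leaf image captured by the sensor.
img = cv2.imread("leaf.jpg")          # hypothetical file name
assert img is not None, "image not found"

# Enhancement: contrast-limited equalization on the L channel so lesions stand out.
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
enhanced = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

# Segmentation: isolate suspect regions by color thresholding in HSV.
hsv = cv2.cvtColor(enhanced, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (10, 40, 40), (30, 255, 255))  # assumed lesion hue range

# Description: fraction of leaf area flagged, a simple feature a
# classifier could then use to predict the disease.
lesion_ratio = np.count_nonzero(mask) / mask.size
print(f"Fraction of area flagged as lesion: {lesion_ratio:.2%}")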

OR
21 b Illustrate the key stages of image processing with a clear and structured block diagram. (5 marks, BL: L2, CO: 1, PO: 1, PI: 1.3.1)

Step 1: Image Acquisition
The first action required is to acquire or capture the image.
Step 2: Image Enhancement
Enhancement techniques bring out detail that is obscured, or simply highlight certain features of interest in an image.
Step 3: Image Restoration
Restoration deals with improving the appearance of an image.
Step 4: Color image processing
Color image processing involves the analysis,
manipulation, and enhancement of images that contain
color information
Step 5: Wavelets and Multiresolution Processing
Wavelets and multiresolution processing are the foundation for representing images in various degrees of resolution, and are used for image data compression and pyramidal representation.
Step 6: Compression
Compression deals with techniques for reducing the
storage required to save an image or the bandwidth to
transmit it
Step 7: Morphological processing
Tools for extracting image components that are useful in
the representation and description of shape, including
morphological operations like erosion and dilation are in
the morphological processing steps
Step 8: Segmentation
Segmentation divides or partitions the image into various parts called segments.
Step 9: Representation and Description
Representation and description almost always follow the
output of a segmentation stage, which usually is raw
pixel data, constituting either the boundary of a region
or all the points in the region itself
Step 10: Object Recognition / Pattern Recognition
Pattern recognition (object recognition) in machine learning, and feature extraction in image processing, start from an initial set of measured data and build derived values (features). The stages are summarized in the block diagram below.
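The ten stages above, rendered as a simple text block diagram (knowledge-base interactions omitted for brevity):

Problem domain
      |
      v
[1 Acquisition] -> [2 Enhancement] -> [3 Restoration] -> [4 Color Processing] -> [5 Wavelets/Multiresolution]
                                                                                          |
                                                                                          v
[10 Recognition] <- [9 Representation & Description] <- [8 Segmentation] <- [7 Morphological] <- [6 Compression]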

22 a Assume that you're leading a workshop on image processing for a group of graphic designers who are keen on understanding the principles of visual perception to improve their designs. To facilitate their learning, you decide to explain the concept using diagrams. How would you visually illustrate the process of visual perception in image processing, and how would you engage the participants to ensure they grasp the concepts effectively? (5 marks, BL: L2, CO: 1, PO: 1, PI: 1.3.1)

The basic elements of visual perception are:
• Structure of the Eye
• Image Formation in the Eye
• Brightness Adaptation and Discrimination

In an ordinary photographic camera, the lens has a fixed focal length, and focusing at various distances is achieved by varying the distance between the lens and the imaging plane, where the film/sensor is located.
In the human eye, the converse is true: the distance between the lens and the imaging region (the retina) is fixed, and the focal length needed to achieve proper focus is obtained by varying the shape of the lens using the ciliary muscles.

Lens -> thick and more rounded -> when focusing on nearby objects
Lens -> flat, thin, and relaxed -> when focusing on distant objects

The distance between the center of the lens and the retina along the visual axis is approximately 17 mm. The range of focal lengths is approximately 14 mm to 17 mm, the latter taking place when the eye is relaxed and focused at distances greater than about 3 m.
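As a worked check on these numbers (an added illustration, not part of the original key), the thin-lens relation gives the focal length needed for an object 3 m away with the lens-to-retina distance fixed at 17 mm:

\frac{1}{f} = \frac{1}{z_o} + \frac{1}{z_i} = \frac{1}{3000\,\mathrm{mm}} + \frac{1}{17\,\mathrm{mm}} \;\Rightarrow\; f \approx 16.9\,\mathrm{mm},

which is consistent with the relaxed-eye value of about 17 mm; focusing on nearer objects forces the focal length down toward the 14 mm end of the range.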
OR
22 b Provide an in-depth analysis of the following: (5 marks, BL: L3, CO: 1, PO: 1, PI: 1.3.1)
i) Brightness Adaptation and Discrimination
Brightness adaptation, also known as light adaptation, is the process of adjusting the sensitivity of our eyes to changes in light levels. It allows us to see effectively in different lighting conditions, from dim to bright environments.
Brightness discrimination, also known as lightness discrimination, is the ability of our visual system to perceive and differentiate between different levels of brightness in a scene.
ii) Pixel Path
A path from pixel p at (x, y) to pixel q at (s, t) is a sequence of distinct pixels
(x0, y0), (x1, y1), (x2, y2), ..., (xn, yn)
such that (x0, y0) = (x, y), (xn, yn) = (s, t), and (xi, yi) is adjacent to (xi-1, yi-1) for i = 1, ..., n.
iii) Types of Pixel Adjacency

Let V be a set of intensity values used to define adjacency and connectivity. In a binary image, V = {1} if we are referring to adjacency of pixels with value 1. In a grayscale image the idea is the same, but V typically contains more elements, for example V = {180, 181, 182, ..., 200}. If the possible intensity values are 0 to 255, V can be any subset of these 256 values. Given V, three types of adjacency are commonly defined: 4-adjacency (q is in the 4-neighborhood N4(p)), 8-adjacency (q is in the 8-neighborhood N8(p)), and m-adjacency (mixed adjacency, which eliminates the ambiguity of multiple 8-adjacent paths).
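A short Python sketch of these definitions (the helper names are my own, purely illustrative):

# Check 4- and 8-adjacency between pixels p and q, given an intensity set V.
def four_adjacent(p, q):
    # q in N4(p): shares an edge (horizontal or vertical neighbor).
    return abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1

def eight_adjacent(p, q):
    # q in N8(p): shares an edge or a corner.
    return p != q and max(abs(p[0] - q[0]), abs(p[1] - q[1])) == 1

img = [[1, 1, 0],
       [0, 1, 0],
       [0, 0, 1]]
V = {1}                      # binary image: adjacency of pixels with value 1
p, q = (1, 1), (2, 2)
in_V = img[p[0]][p[1]] in V and img[q[0]][q[1]] in V
print(four_adjacent(p, q))            # False: p and q touch only diagonally
print(eight_adjacent(p, q) and in_V)  # True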
23 a Discuss various color models used in digital imaging. Explore their unique properties, applications, and how they represent colors differently. (5 marks, BL: L2, CO: 1, PO: 1, PI: 1.3.1)
Color fundamentals and models refers to the basic
principles and various systems used to understand and
represent colors.
It also includes different color models, such as RGB
(Red, Green, Blue), CMY (Cyan, Magenta, Yellow),
HSL (Hue, Saturation, Lightness), and HSV (Hue,
Saturation, Value), among others, which provide
different ways to represent and manipulate colors in
digital and print media.
RGB color model
 Pixel depth: the number of bits used to represent
each pixel in RGB space
 Full-color image: 24-bit RGB color image
 (R, G, B) = (8 bits, 8 bits, 8 bits)
HSI color model
 Hue: the attribute that describes a pure color
 Saturation: the intensity or purity of the color (white -> 0, pure primary color -> 1)
 Intensity: the brightness (also called value or lightness) of the color
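A small Python sketch of the standard RGB-to-HSI conversion equations (textbook formulas; inputs assumed normalized to [0, 1]):

import math

def rgb_to_hsi(r, g, b):
    i = (r + g + b) / 3.0                        # intensity: mean of components
    s = 0.0 if i == 0 else 1 - min(r, g, b) / i  # saturation: 0 for grays/white
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) or 1e-12  # avoid /0; hue undefined for grays
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    h = theta if b <= g else 360 - theta         # hue: angle on the color circle
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))  # pure red -> (0.0, 1.0, 0.333...)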

OR
23 b Assume you are a graphic designer; discuss how you would explain the relationship between pixels in an image and its representation for better understanding. (5 marks, BL: L2, CO: 1, PO: 1, PI: 1.3.1)

Representation of Digital Images:
The image f(x, y) is digitized, resulting in an M×N image, where M represents the number of rows and N represents the number of columns.
Discrete Coordinates: The values of the coordinates (x, y) are discrete quantities, typically represented by integer values.
The origin of the image is commonly defined at (0, 0).
Subsequent coordinate values increase along rows and columns. For example, the coordinate (0, 1) represents the second sample along the first row.
Coordinate Ranges: It's important to note that x ranges
from 0 to M−1, and y ranges from 0 to N−1.
This means that for an image of size M×N, the valid
range of x coordinates is from 0 to M−1, and the valid
range of y coordinates is from 0 to N−1.
The basic relationship between pixels helps define how they are connected or adjacent to each other.
It is used for establishing boundaries of objects and components of regions in an image. Two pixels are considered connected if they share a common boundary or are neighbors, and if they are adjacent in some sense (such as neighboring pixels, or 4-, 8-, or m-adjacency).
 Neighbour Pixels: Two pixels are connected if
they are next to each other horizontally,
vertically, or diagonally.
 4-Adjacency: Two pixels are connected if they
share a common edge, meaning they are adjacent
horizontally or vertically but not diagonally.
 8-Adjacency: Two pixels are connected if they
share a common edge or corner, meaning they
are adjacent horizontally, vertically, or
diagonally
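A tiny NumPy sketch of this M×N representation (NumPy is my choice of tool here, not part of the original answer):

import numpy as np

M, N = 4, 6                       # 4 rows, 6 columns
f = np.zeros((M, N), dtype=np.uint8)

f[0, 0] = 255                     # the origin (0, 0)
f[0, 1] = 128                     # second sample along the first row

# Valid coordinates: x in 0..M-1, y in 0..N-1.
print(f.shape)                    # (4, 6)
print(f[M - 1, N - 1])            # last pixel, still 0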

24 a Identify and explain the fundamental intensity transformations used in image enhancement. (5 marks, BL: L2, CO: 2, PO: 1, PI: 1.3.1)

Image enhancement techniques aim to improve the visual quality of an image by emphasizing certain features, reducing unwanted elements, and making the image more suitable for specific purposes.
Two fundamental approaches for image enhancement:
 Spatial domain
 Frequency domain
Frequency-domain methods involve transforming the image from the spatial domain to the frequency (transform) domain using a transform such as the Fourier Transform. The term frequency in an image refers to the rate of change of pixel values.
This transformation allows the image's frequency
components (high and low frequencies) to be analyzed
and manipulated. Once in the frequency domain,
enhancements can be applied, such as noise reduction
through frequency filtering or sharpening using high-
frequency emphasis. After applying the desired
enhancements, the image can be transformed back to the
spatial domain using the inverse Fourier Transform.
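A compact sketch of this round trip with NumPy's FFT (the synthetic image and the ideal low-pass filter are chosen only for illustration):

import numpy as np

img = np.random.rand(64, 64)               # stand-in for a grayscale image

F = np.fft.fftshift(np.fft.fft2(img))      # to the frequency domain, DC centered

# Ideal low-pass filter: keep frequencies within radius r (noise reduction).
r = 10
u, v = np.ogrid[-32:32, -32:32]
F_filtered = F * ((u ** 2 + v ** 2) <= r ** 2)

# Back to the spatial domain via the inverse transform.
smoothed = np.real(np.fft.ifft2(np.fft.ifftshift(F_filtered)))
print(smoothed.shape)                      # (64, 64)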

OR
24 b Describe a step-by-step process, along with the rationale behind each step, for how you would use local histogram processing and adaptive filters to enhance the quality of an old photograph that has degraded over time and contains significant historical information. (5 marks, BL: L2, CO: 2, PO: 1, PI: 1.3.1)
 Histogram is a graphical representation of the
intensity distribution of an image. i.e., it
represents the number of pixels for each intensity
value considered.
 It provides information about distribution of
pixel intensities, helping to understand the
overall characteristics of an image's brightness
and contrast.
 In a histogram, the x-axis represents the range
of possible intensity values, usually spanning
from 0 (black) to 255 (white) in grayscale
images. The y-axis represents the frequency or
count of pixels that have a specific intensity
value.
It helps guide decisions on how to adjust or manipulate an image's intensity values to achieve desired visual effects or prepare it for further processing.

 A histogram for a grayscale image with intensity values I(x, y) in the range [0, L-1] contains exactly L entries.
 E.g., for an 8-bit grayscale image, L = 2^8 = 256.
 Each histogram entry is defined as h(i) = number of pixels with intensity i, for all 0 <= i < L.
 E.g., h(255) = number of pixels with intensity value 255; h(250) = 5 means five pixels have intensity 250.
A narrow histogram (low contrast) indicates that the pixel intensity values in the image are centered around a specific range.
A wide histogram (high contrast) suggests a broader range of intensity values and potentially higher contrast.
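A NumPy sketch of h(i) and global histogram equalization (local equalization applies the same mapping per neighborhood or tile; the synthetic low-contrast image here is illustrative):

import numpy as np

img = np.random.randint(80, 120, size=(64, 64), dtype=np.uint8)  # narrow range

# h(i) = number of pixels with intensity i, for 0 <= i < L (here L = 256).
h = np.bincount(img.ravel(), minlength=256)

# Equalization: map each level through the normalized cumulative histogram.
cdf = h.cumsum() / img.size
equalized = (cdf[img] * 255).astype(np.uint8)

print(img.min(), img.max())               # e.g. 80 119 (low contrast)
print(equalized.min(), equalized.max())   # spread toward the full 0..255 range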

Part – C
(1 x 10 = 10 Marks)
25 a A computer graphics designer is creating a 2D animation with rotating objects. To optimize the animation, the designer decides to use the Discrete Fourier Transform (DFT) algorithm. Illustrate how the DFT works for the same. (10 marks, BL: L2, CO: 1, PO: 2, PI: 2.4.1)
The Discrete Fourier Transform (DFT) of an M×N image f(x, y) is defined by

F(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, e^{-j 2\pi (ux/M + vy/N)}

for u = 0, 1, ..., M-1 and v = 0, 1, ..., N-1, with the inverse transform

f(x, y) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u, v)\, e^{j 2\pi (ux/M + vy/N)}.

A property useful for the rotating-object animation: rotating f(x, y) by an angle rotates its spectrum F(u, v) by the same angle.
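A direct implementation of the 1-D version of this definition, checked against NumPy's FFT (illustrative only; production code would use np.fft throughout):

import numpy as np

def dft(f):
    n = len(f)
    x = np.arange(n)
    # F(u) = sum over x of f(x) * exp(-j 2 pi u x / n), one output per u.
    return np.array([np.sum(f * np.exp(-2j * np.pi * u * x / n)) for u in range(n)])

f = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(dft(f), np.fft.fft(f)))  # True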

Or
25 b You are a photo restoration specialist tasked with reviving an old, faded photograph that holds sentimental value to a client. The image lacks contrast, making details hard to discern. How would you employ spatial domain methods to enhance the contrast and revive the old photograph? Also describe the steps necessary to implement local histogram equalization, employed to improve contrast and enhance details in an image characterized by varying lighting conditions across different regions. (10 marks, BL: L3, CO: 2, PO: 1, PI: 1.3.1)
Spatial filtering is an image operation where each pixel value I(x, y) is changed by a function of the values of the pixel and its neighbors.
The process consists of simply moving the filter mask from point to point in an image.
At each point (x, y) the response of the filter at that point is calculated using a predefined relationship.
 Types of spatial filtering
 Smoothing Filter
 Sharpening filtering
 2 types of smoothing spatial filter
 Linear Filter / Mean Filter
 Order Statistics / Non-linear Filter
 Smoothing (also called averaging) spatial filters
are used to reduce sharp transitions in
intensity.
 Because random noise typically consists of sharp
transitions in intensity, an obvious application of
smoothing is noise reduction.
 Smoothing is used to reduce irrelevant detail in an image, where "irrelevant" refers to pixel regions that are small with respect to the size of the filter kernel.
Linear spatial filtering consists of convolving an image
with a filter kernel. Convolving a smoothing kernel with
an image blurs the image, with the degree of blurring
being determined by the size of the kernel and the values
of its coefficients
 Smoothing filters are used for
 Blurring
 Noise Reduction
 Blurring is used in preprocessing steps for the removal of small details from an image prior to object extraction, and for bridging small gaps in lines or curves.
 Noise reduction can be accomplished by
blurring.
 If the operation performed on the image pixels is linear, the filter is called a linear spatial filter; otherwise, it is a nonlinear spatial filter. This distinction gives the two kinds of smoothing filters already listed: linear (mean) filters and order-statistics (non-linear) filters.
 A linear (mean) spatial filter simply replaces the value of every pixel in an image by the average of the gray levels in the neighborhood defined by the filter mask.
 This process results in an image with reduced sharp transitions in intensities.
 Two common masks:
 Averaging filter
 Weighted averaging filter / Gaussian smoothing
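A plain-NumPy sketch of the 3×3 averaging (mean) filter described above, moving the mask from point to point (zero padding at the borders is my own choice, for brevity):

import numpy as np

def mean_filter(img, k=3):
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="constant")
    out = np.zeros(img.shape)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            # Response at (x, y) = average of the k x k neighborhood.
            out[x, y] = padded[x:x + k, y:y + k].mean()
    return out

noisy = np.random.rand(32, 32)
smoothed = mean_filter(noisy)
print(noisy.std(), smoothed.std())  # smoothing reduces intensity variation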

*Program Indicators are available separately for Computer Science and Engineering in AICTE
examination reforms policy.

Course Outcome (CO) and Bloom’s level (BL) Coverage in Questions

Approved by the Audit Professor/Course Coordinator
