Dec23 Compressed

The document discusses various imaging systems, comparing direct and indirect imaging systems, and explains image digitization. It also covers transforms such as orthogonal and unitary transforms, wavelet transforms, and the importance of frequency domain transformations in image processing. Additionally, it addresses supervised and unsupervised learning techniques, sampling in image digitization, and color models like CMY and CMYK.

1. (a) What is an imaging system? Compare direct imaging system with the indirect imaging system. Also, explain the term ‘image digitization’.
1(a) Imaging System and Image Digitization

📘 What is an Imaging System?

An imaging system is a combination of hardware and software that captures, processes, and displays visual information in the form of images. It includes lenses, sensors (such as CCD or CMOS), digitizers, and image processors.

🔁 Direct vs Indirect Imaging System

| Feature | Direct Imaging System | Indirect Imaging System |
|---|---|---|
| Image Formation | Direct capture via sensors | Requires conversion (e.g., light to voltage) |
| Speed | Faster | Slower (due to additional steps) |
| Examples | Digital cameras, microscopes | Medical CT scanners, MRI |
| Data Acquisition | Real-time | Often delayed/post-processed |

📘 Image Digitization

It refers to the conversion of an analog (continuous-tone) image into a digital image consisting of discrete values.

 Steps:
1. Sampling: divides the image into a grid of pixels.
2. Quantization: assigns a discrete intensity value to each pixel.
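The two steps can be sketched in Python (a minimal illustration; the synthetic `scene` function, grid size, and bit depth are assumptions chosen for demonstration):

```python
import math

def scene(x, y):
    """A synthetic continuous-tone scene with intensity in [0, 1]."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * x) * math.cos(2 * math.pi * y)

def digitize(rows, cols, bits):
    """Sample the scene on a rows x cols grid, then quantize to 2**bits levels."""
    levels = 2 ** bits
    image = []
    for i in range(rows):                                      # sampling: discrete grid
        row = []
        for j in range(cols):
            sample = scene(i / rows, j / cols)
            row.append(min(int(sample * levels), levels - 1))  # quantization
        image.append(row)
    return image

img = digitize(rows=4, cols=4, bits=8)   # 4 x 4 pixels, 256 gray levels
```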

(b) Compare orthogonal transform and unitary transform. Also, discuss the
properties of both the transforms.

Orthogonal vs Unitary Transforms

| Feature | Orthogonal Transform | Unitary Transform |
|---|---|---|
| Definition | Transform whose matrix $T$ satisfies $T^T = T^{-1}$ | Satisfies $T^H = T^{-1}$, where $T^H$ is the conjugate transpose |
| Values | Real-valued matrices | Complex-valued matrices |
| Energy Preservation | Yes | Yes |
| Examples | DCT, Walsh–Hadamard | DFT |

📘 Properties:

 Energy compaction
 Invertibility
 Preserves Euclidean distances
 Basis functions are orthonormal
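These properties can be checked numerically; the sketch below uses the 2 × 2 Walsh–Hadamard matrix as an assumed example and verifies $T T^T = I$ and energy preservation:

```python
import math

s = 1 / math.sqrt(2)
T = [[s, s], [s, -s]]          # orthonormal 2x2 Walsh-Hadamard matrix

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

I = matmul(T, transpose(T))            # should equal the identity matrix

x = [[3.0], [4.0]]                     # a sample vector
y = matmul(T, x)                       # transformed vector
energy_x = sum(v[0] ** 2 for v in x)   # 25
energy_y = sum(v[0] ** 2 for v in y)   # also 25 (energy preservation)
```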

(c) Explain the term ‘Wavelet’ with suitable example. Give properties of wavelets.
Differentiate between continuous wavelet transform and discrete wavelet transform.

(c) Wavelet Transform

📘 Wavelet:

A wavelet is a small wave that decays quickly and is localized in both time and
frequency.
 Example: Haar wavelet, Daubechies wavelet
✅ Properties of Wavelets:

1. Localization in time and frequency


2. Multiresolution analysis
3. Orthonormal basis
4. Zero mean
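A one-level Haar transform illustrates these properties; the sketch below (assuming an even-length 1-D signal) computes normalised pairwise averages (approximation) and differences (detail), then reconstructs the signal exactly:

```python
import math

def haar_level(signal):
    """One level of the Haar DWT: approximation and detail coefficients."""
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert one Haar level (perfect reconstruction)."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out

a, d = haar_level([4.0, 6.0, 10.0, 12.0])
recon = haar_inverse(a, d)     # recovers [4.0, 6.0, 10.0, 12.0]
```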

🔁 CWT vs DWT

| Feature | Continuous Wavelet Transform (CWT) | Discrete Wavelet Transform (DWT) |
|---|---|---|
| Scale & Translation | Continuous | Discrete |
| Redundancy | High | Low (non-redundant) |
| Application | Theoretical analysis, high resolution | Practical, compression (JPEG 2000) |

(d) Why do we need to transform an image from spatial domain to frequency domain? Discuss the various categories of frequency domain filters. Also, list the filters under each category.

(d) Frequency Domain Transformation

📘 Why Transform?

 Simplifies image processing tasks such as filtering, enhancement, and compression.
 Highlights periodic patterns and edges.
 Makes it easier to isolate noise and fine detail in the frequency domain.

📘 Categories of Frequency Domain Filters:

1. Low-Pass Filters (Smoothing)
o Remove high frequencies (noise, edges)
o Filters: Ideal LPF, Gaussian LPF, Butterworth LPF
2. High-Pass Filters (Sharpening)
o Remove low frequencies, keep high frequencies
o Filters: Ideal HPF, Gaussian HPF, Butterworth HPF
3. Band-Pass Filters
o Keep mid-frequency components
o Filters: Butterworth band-pass, Gaussian band-pass

(e) Compare unsupervised learning techniques with supervised learning techniques. Explain the different categories of supervised machine learning algorithms. Also draw a block diagram for the classical taxonomy of clustering methods.
(e) Supervised vs Unsupervised Learning
| Feature | Supervised Learning | Unsupervised Learning |
|---|---|---|
| Labeled Data | Yes | No |
| Objective | Learn from known inputs/outputs | Discover hidden patterns |
| Examples | Classification, Regression | Clustering, Dimensionality Reduction |

📘 Categories of Supervised Algorithms:

1. Classification: SVM, Decision Trees, Naive Bayes, KNN
2. Regression: Linear Regression, Polynomial Regression
3. Ensemble Methods: Random Forest, Gradient Boosting

2. (a) What is sampling ? Explain the role of sampling in image digitization, with
suitable example.
a) What is Sampling?
Sampling is the process of measuring the intensity values of an image at discrete
intervals in the spatial domain. In image digitization, it refers to the process of
dividing a continuous image into a grid of pixels.

📘 Role in Image Digitization

 It converts the continuous spatial domain into a discrete pixel array.
 Determines the spatial resolution of the image.

✅ Example:
Consider a photograph scanned at 300 dpi (dots per inch). This means:
 Each inch is divided into 300 samples (pixels).
 For an image of 4 × 6 inches:

$$\text{Pixels} = (4 \times 300) \times (6 \times 300) = 1200 \times 1800$$

(b) Given a gray scale image with aspect ratio of 6 : 2 and pixel resolution of 480000
pixels. Calculate the following :
b) Grayscale Image – Aspect Ratio & Resolution
Given:
 Aspect Ratio = 6 : 2 → Simplifies to 3 : 1
 Total pixels = 480,000
Let the dimensions be $3x \times x$:

$$3x \cdot x = 480000 \Rightarrow 3x^2 = 480000 \Rightarrow x^2 = 160000 \Rightarrow x = 400$$

(i) Dimensions of image

✅ (i) Dimensions:
 Height = x = 400 pixels
 Width = 3 × 400 = 1200 pixels
 Answer: 1200 × 400 pixels

(ii) Size of image

(ii) Size of Image:
For a grayscale image at 8 bits/pixel:

$$\text{Size} = 480000 \times 8 = 3840000 \text{ bits} = 480000 \text{ bytes} = 468.75 \text{ KB}$$
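The same arithmetic as a quick script (taking 1 KB = 1024 bytes, as in the answer above):

```python
# Dimensions from aspect ratio and pixel count, then uncompressed size
# at 8 bits/pixel.
total_pixels = 480_000
ratio_w, ratio_h = 3, 1                                # 6:2 simplified to 3:1

unit = (total_pixels // (ratio_w * ratio_h)) ** 0.5    # x = 400
width = int(ratio_w * unit)                            # 1200
height = int(ratio_h * unit)                           # 400

bits = total_pixels * 8                                # 3,840,000 bits
size_bytes = bits // 8                                 # 480,000 bytes
size_kb = size_bytes / 1024                            # 468.75 KB
```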
(c) Write expressions for the forward and inverse orthogonal transform of an (N × N) image $f(x, y)$. Given the orthogonal matrix

$$A = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$

and the image matrix

$$f = \begin{bmatrix} 1 & 3 \\ 5 & 7 \end{bmatrix},$$

determine the orthogonal transform and its inverse.
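A worked sketch of this computation, assuming the usual separable convention for orthogonal transforms: forward $F = A f A^T$, inverse $f = A^T F A$:

```python
import math

s = 1 / math.sqrt(2)
A = [[s, s], [s, -s]]              # orthonormal 2x2 matrix from the question
f = [[1.0, 3.0], [5.0, 7.0]]       # image matrix

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(P):
    return [list(col) for col in zip(*P)]

F = matmul(matmul(A, f), transpose(A))        # forward: [[8, -2], [-4, 0]]
f_back = matmul(matmul(transpose(A), F), A)   # inverse recovers f
```

Note that the energy is preserved: $1^2 + 3^2 + 5^2 + 7^2 = 84 = 8^2 + (-2)^2 + (-4)^2 + 0^2$.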
4. (a) Explain the Discrete Fourier Transform (DFT). Discuss the properties of DFT. Compute the 2-D DFT of the 2 × 2 image

$$f(x, y) = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}.$$
(a) Discrete Fourier Transform (DFT)
📘 Definition:
The Discrete Fourier Transform (DFT) converts a spatial-domain image into its frequency-domain representation. For a 2-D image $f(x, y)$ of size $M \times N$, the 2-D DFT is defined as:

$$F(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, e^{-j 2\pi \left( \frac{ux}{M} + \frac{vy}{N} \right)}$$
📗 Properties of DFT:
1. Linearity: DFT is linear.
2. Translation/Shifting: A shift in the spatial domain causes a phase change in
frequency domain.
3. Conjugate Symmetry: For a real-valued image, $F(u, v) = F^*(-u, -v)$, so the magnitude spectrum is symmetric.
4. Periodicity: The DFT and its inverse are both periodic.
5. Parseval’s Theorem: Energy is conserved between domains.

✅ Compute 2-D DFT of a 2 × 2 image

Given $f(x, y) = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$, with $M = N = 2$:

$$F(u, v) = \sum_{x=0}^{1} \sum_{y=0}^{1} f(x, y)\, e^{-j \pi (ux + vy)}$$

All four pixels equal 1, so the DC coefficient is $F(0, 0) = 4$; for every other $(u, v)$ the exponentials cancel in pairs, giving

$$F(u, v) = \begin{bmatrix} 4 & 0 \\ 0 & 0 \end{bmatrix}$$
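The result can be checked numerically by evaluating the DFT definition directly (an unoptimised sketch; real code would use an FFT):

```python
import cmath

def dft2(f):
    """Direct evaluation of the 2-D DFT definition (slow, for illustration)."""
    M, N = len(f), len(f[0])
    return [[sum(f[x][y] * cmath.exp(-2j * cmath.pi * (u * x / M + v * y / N))
                 for x in range(M) for y in range(N))
             for v in range(N)] for u in range(M)]

F = dft2([[1, 1], [1, 1]])
# Only the DC term survives: F[0][0] == 4, all other coefficients are 0
```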

(b) What do you understand by shifting the centre of the spectrum ? Why is it
required ? Write the steps to carry out filtering in frequency domain.
(b) Shifting the Centre of the Spectrum

📘 What is It?

By default, the low-frequency components of a DFT are located at the corners of the
spectrum. Shifting moves the zero-frequency (DC component) to the center of the
image.
✅ Why is it Required?

 Makes visualization and interpretation easier.
 Enhances filtering operations by aligning low frequencies to the center.

🔄 How to Shift the Spectrum:

Multiply the image by $(-1)^{x+y}$ before applying the DFT.


📘 Steps for Filtering in Frequency Domain:

1. Multiply the input image by $(-1)^{x+y}$ to centre the Fourier spectrum.
2. Compute the DFT of the image.
3. Multiply the DFT result by a filter transfer function (e.g., Gaussian, Butterworth).
4. Apply the inverse DFT to transform back to the spatial domain.
5. Multiply again by $(-1)^{x+y}$ to undo the initial shift.
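The five steps can be sketched on a tiny 4 × 4 checkerboard image. The naive DFT, image, and low-pass mask are all assumptions chosen for illustration; smoothing should flatten the checkerboard to its mean value of 0.5:

```python
import cmath

def dft2(f, sign):
    """Direct 2-D DFT (sign=-1) or un-normalised inverse (sign=+1)."""
    M, N = len(f), len(f[0])
    return [[sum(f[x][y] * cmath.exp(sign * 2j * cmath.pi * (u * x / M + v * y / N))
                 for x in range(M) for y in range(N))
             for v in range(N)] for u in range(M)]

img = [[float((x + y) % 2) for y in range(4)] for x in range(4)]  # checkerboard

# Step 1: multiply by (-1)^(x+y) to centre the spectrum
centred = [[img[x][y] * (-1) ** (x + y) for y in range(4)] for x in range(4)]
# Step 2: forward DFT
F = dft2(centred, -1)
# Step 3: multiply by a low-pass mask keeping coefficients near the centre (2, 2)
H = [[1.0 if abs(u - 2) + abs(v - 2) <= 1 else 0.0 for v in range(4)] for u in range(4)]
G = [[F[u][v] * H[u][v] for v in range(4)] for u in range(4)]
# Step 4: inverse DFT (with 1/MN normalisation)
g = dft2(G, +1)
# Step 5: undo the centring shift and keep the real part
result = [[(g[x][y] / 16 * (-1) ** (x + y)).real for y in range(4)] for x in range(4)]
# result is (approximately) 0.5 everywhere: the high-frequency pattern is removed
```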

(c) Explain CMY and CMYK colour models.

(c) CMY and CMYK Colour Models

📘 CMY Model (Cyan, Magenta, Yellow):

 Subtractive color model used in printing.
 Derived from RGB as:

$$C = 1 - R, \quad M = 1 - G, \quad Y = 1 - B$$

 Used where inks or dyes are layered to absorb light.

📘 CMYK Model (Cyan, Magenta, Yellow, Key/Black):

 Extension of CMY with Black (K) added for:
o Better depth and detail
o Improved shadow contrast
o Cost reduction (using black ink instead of combined CMY)
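The conversions can be sketched as follows (all channel values normalised to [0, 1]; the divide-by-zero guard for pure black is a common convention, assumed here):

```python
def rgb_to_cmy(r, g, b):
    """CMY is the subtractive complement of RGB."""
    return 1 - r, 1 - g, 1 - b

def cmy_to_cmyk(c, m, y):
    """Extract the common ink amount as K (black)."""
    k = min(c, m, y)
    if k == 1:                             # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

c, m, y = rgb_to_cmy(1.0, 0.0, 0.0)        # pure red -> C=0, M=1, Y=1
cmyk = cmy_to_cmyk(c, m, y)                # -> (0, 1, 1, 0): no black needed
```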
✅ Comparison:

| Feature | CMY | CMYK |
|---|---|---|
| Model Type | Subtractive | Subtractive with Black |
| Components | Cyan, Magenta, Yellow | Cyan, Magenta, Yellow, Black |
| Use Case | Theoretical color model | Practical model used in printers |
| Output Control | Less control over darkness | Better black and dark-tone output |

4. (a) Draw a block diagram to show the learning phases of a supervised learning
algorithm. Also, write steps to explain the process of applying supervised machine
learning to a real world problem.
Supervised Learning – Block Diagram & Process

📘 Block Diagram: Learning Phases in Supervised Learning

+-------------------+
| Labeled Dataset |
+-------------------+
|
v
+-------------------+
| Data Preprocessing|
+-------------------+
|
v
+-------------------+
| Train-Test Split |
+-------------------+
|
v
+-------------------+
| Model Training |
+-------------------+
|
v
+-------------------+
| Model Evaluation |
+-------------------+
|
v
+-------------------------+
| Deployment & Prediction |
+-------------------------+
✅ Steps to Apply Supervised Learning to a Real-World Problem:

1. Define the problem (e.g., classification, regression).
2. Collect and label data (e.g., images labeled as ‘cat’ or ‘dog’).
3. Preprocess data (normalize, remove noise, handle missing values).
4. Split the dataset into training and testing sets.
5. Choose a model (e.g., Decision Tree, SVM, Neural Network).
6. Train the model using the labeled training data.
7. Evaluate the model on unseen test data using metrics such as accuracy and precision.
8. Deploy the model for use on real-time inputs.
9. Monitor and retrain as needed based on performance drift.

(b) Explain Agglomerative Hierarchical Clustering. Write the steps of the general agglomerative clustering algorithm. Also discuss single-link and complete-link types of agglomerative clustering.
(b) Agglomerative Hierarchical Clustering

📘 Definition:

Agglomerative Hierarchical Clustering is a bottom-up clustering method where:
 Each data point starts as its own cluster.
 Clusters are merged based on similarity until a single cluster remains or a stopping criterion is met.
✅ Steps of General Agglomerative Clustering Algorithm:

1. Start with N clusters, each containing one data point.
2. Compute the proximity matrix (distances between clusters).
3. Merge the two closest clusters.
4. Update the proximity matrix.
5. Repeat steps 3–4 until:
o the desired number of clusters is reached, or
o all data points belong to one cluster (for a dendrogram).
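The algorithm can be sketched in pure Python for 1-D points with single-link distance (the sample points and target cluster count are assumptions for illustration):

```python
def single_link_distance(c1, c2):
    """Single-link: minimum distance between any pair across the two clusters."""
    return min(abs(a - b) for a in c1 for b in c2)

def agglomerative(points, n_clusters):
    clusters = [[p] for p in points]               # step 1: N singleton clusters
    while len(clusters) > n_clusters:              # step 5: stopping criterion
        best = None
        for i in range(len(clusters)):             # step 2: proximity of all pairs
            for j in range(i + 1, len(clusters)):
                d = single_link_distance(clusters[i], clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]    # step 3: merge closest pair
        del clusters[j]                            # step 4: update cluster list
    return clusters

groups = agglomerative([1.0, 1.2, 5.0, 5.1, 9.0], n_clusters=3)
# -> clusters {1.0, 1.2}, {5.0, 5.1} and {9.0}
```

Swapping `min` for `max` in the distance function turns this into complete-link clustering.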

📘 Types of Linkages in Agglomerative Clustering:

| Type | Description | Behavior |
|---|---|---|
| Single-link | Distance = minimum distance between any two points (one from each cluster) | Can form long, "chain-like" clusters |
| Complete-link | Distance = maximum distance between any two points (one from each cluster) | Tends to form compact, spherical clusters |

(c) Describe the following quantities, used to represent any colour:

(i) Brightness

✅ (i) Brightness
 Definition: Brightness is the perceived luminance or lightness of a color.
 Influence: Related to the amplitude of the light wave and controlled by the
intensity component.
 Range: Low brightness = dark color, high brightness = lighter or white.
(ii) Contrast

✅ (ii) Contrast

 Definition: Contrast is the difference in intensity or color between an object and its background, or between different regions of an image.
 High contrast: sharp difference (e.g., black on white).
 Low contrast: subtle difference (e.g., light gray on white).
 Role: key to image clarity and visibility; widely used in image enhancement.

5. Write short notes on any five of the following:

(a) Classification of images on the basis of attributes

(a) Classification of Images on the Basis of Attributes
Images can be classified based on various attributes such as:
1. Based on Dimensionality:
o 1D: Signals like ECG, audio waveforms.
o 2D: Grayscale, RGB images.
o 3D: Volumetric data (e.g., CT, MRI scans).
2. Based on Color:
o Binary images: Only two values (black and white).
o Grayscale images: Shades of gray (0–255).
o Color images: RGB, CMY, etc.
3. Based on Source:
o Medical images: CT, MRI, X-ray.
o Remote sensing: Satellite images.
o Natural images: Photographs or real-world captures.

(b) Pseudo colour images


(b) Pseudo Color Images
 Definition: Pseudo-coloring is the process of assigning artificial colors to
grayscale images to enhance visual interpretation.
 How it works:
o Pixel intensity values are mapped to specific color values using a
colormap (e.g., red for high intensity, blue for low).
 Applications:
o Medical imaging (e.g., thermography)
o Satellite imagery
o Heat maps in data visualization
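A minimal sketch of the mapping (the three-band colormap and its thresholds are assumptions; real systems use smoother colormaps):

```python
def pseudo_color(gray):
    """Map a gray level (0-255) to an RGB triple via a simple 3-band colormap."""
    if gray < 85:
        return (0, 0, 255)        # low intensity  -> blue
    elif gray < 170:
        return (0, 255, 0)        # mid intensity  -> green
    return (255, 0, 0)            # high intensity -> red

row = [10, 120, 250]
colored = [pseudo_color(g) for g in row]   # [(0,0,255), (0,255,0), (255,0,0)]
```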

(c) Pixel resolution


(c) Pixel Resolution
 Definition: Pixel resolution refers to the number of pixels per unit area
(usually inches or mm), determining the fineness of detail in an image.
 Measured in: Pixels per inch (PPI) or dots per inch (DPI).
 Example:
o An image with resolution 300 DPI has 300 pixels in every inch.
 Higher resolution = more detail, better quality.
 Lower resolution = blocky or blurry images.

(d) Gaussian low pass filter


(d) Gaussian Low-Pass Filter
 Purpose: Used for image smoothing by reducing high-frequency
components (noise and edges).
 Function: Based on a Gaussian function, it gives more weight to nearby
pixels, smoothly blurring the image.
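A spatial-domain sketch of the filter: build a normalised Gaussian kernel (kernel size and sigma are assumed); convolving an image with this kernel produces the smoothing described above:

```python
import math

def gaussian_kernel(size, sigma):
    """Build a size x size Gaussian kernel whose weights sum to 1."""
    half = size // 2
    kernel = [[math.exp(-(s * s + t * t) / (2 * sigma * sigma))
               for t in range(-half, half + 1)]
              for s in range(-half, half + 1)]
    total = sum(sum(row) for row in kernel)
    return [[v / total for v in row] for row in kernel]

k = gaussian_kernel(size=3, sigma=1.0)
# centre weight is the largest; weights fall off smoothly with distance
```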

(e) Rayleigh noise


(e) Rayleigh Noise
 Definition: A type of noise whose amplitude follows the Rayleigh
distribution.
 Characteristics:
o Appears in imaging systems like radar and ultrasound.
o One-sided distribution (only positive values).
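Rayleigh-distributed samples can be generated with inverse-transform sampling, $X = \sigma\sqrt{-2\ln U}$ with $U$ uniform on (0, 1] (a stdlib-only sketch; sigma and sample count are arbitrary):

```python
import math
import random

def rayleigh_sample(sigma, n, seed=0):
    """Draw n Rayleigh samples via the inverse CDF; all values are >= 0."""
    rng = random.Random(seed)
    return [sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
            for _ in range(n)]

noise = rayleigh_sample(sigma=1.0, n=1000)
mean = sum(noise) / len(noise)   # theoretical mean: sigma * sqrt(pi/2) ~ 1.2533
```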

(f) Median filters


 Definition: A non-linear filter used for noise removal, especially effective
against salt-and-pepper noise.
 Working:
 Replaces each pixel’s value with the median of the surrounding
neighborhood.
 Advantages:
 Preserves edges better than linear filters.
 Effective for impulsive noise without blurring sharp boundaries.
 Example: 3×3 window: [12, 5, 200, 7, 8, 6, 9, 10, 11] → Median = 9
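The worked example can be verified directly; `statistics.median` sorts the window and returns the middle value, so the outlier 200 (salt noise) is discarded rather than averaged in:

```python
import statistics

def median_filter_pixel(neighborhood):
    """Replace the centre pixel with the median of its neighbourhood."""
    return statistics.median(neighborhood)

window = [12, 5, 200, 7, 8, 6, 9, 10, 11]   # the 3x3 window from the example
result = median_filter_pixel(window)         # sorted middle value -> 9
```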
