
Digital Image Processing HW: 01

Topic: Fundamentals of Digital Image Processing


Submitted By: Nisar Ahmad
Student ID: F2023439003
DIP Section: V2

Submitted to: Dr. Jameel Ahmad


Digital Image Processing Tasks
Question No 1
% Read the image
original_image = imread('einstein.jpg');
% Invert the image
inverted_image = 255 - original_image;
% Display the original and inverted images
figure;
subplot(1, 2, 1);
imshow(original_image);
title('Original Image');
subplot(1, 2, 2);
imshow(inverted_image);
title('Inverted Image');
Output

Transformation Formula
In digital image processing (DIP), transformation formulas are mathematical functions applied to the pixel values of an image to achieve a specific effect, such as contrast adjustment, brightness adjustment, or enhancement. The negative (inversion) transformation used above is one of the simplest of these.
Intensity Range
In an 8-bit grayscale image, intensity values range from 0 to 255: the minimum value (0) represents black and the maximum value (255) represents white.
Inversion Process
The inversion process reverses the intensity values of an image to create a negative effect. This is particularly useful for enhancing certain image features, improving visualization, or preparing images for specific analyses.
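For an image with L gray levels, the negative transformation maps each input intensity r to an output intensity s = (L - 1) - r; for an 8-bit image this becomes s = 255 - r, which is exactly the operation performed by 255 - original_image in the code above.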
Question No 2
Brightness adjustment is a basic image enhancement technique that modifies the intensity of all
pixels in an image by adding or subtracting a constant value. It controls the overall lightness or
darkness of an image without altering its structural details.
Code
% Load a grayscale image
img = imread('einstein.jpg'); % Make sure 'einstein.jpg' is in the working directory
% Convert to double for accurate addition, then add the brightness factor
B = 50;
brightened_img = double(img) + B;
% Clip values above 255 to 255 and values below 0 to 0
brightened_img(brightened_img > 255) = 255;
brightened_img(brightened_img < 0) = 0;
% Convert back to uint8
brightened_img = uint8(brightened_img);
% Display the original and brightened images
subplot(1, 2, 1), imshow(img), title('Original Image');
subplot(1, 2, 2), imshow(brightened_img), title(['Brightened Image by ', num2str(B)]);
Output

Question No 3
Contrast Stretching
Contrast stretching is a technique used to enhance the contrast of an image by expanding the
range of intensity levels. It redistributes pixel intensities to cover a broader or specific intensity
range, making details in the image more visible.
Types of Stretching:
• Linear Stretching: Adjusts intensities linearly across the range.
• Piecewise Linear Stretching: Maps specific intensity ranges differently, allowing finer control (see the sketch after the output below).
Code
% Load a grayscale image or use an example matrix
I = imread('rescaled_image_50_200.jpg'); % Read your image here
I = double(I); % Convert to double for computation
% Define minimum and maximum intensity values for input and output ranges
I_min = 50;
I_max = 200;
O_min = 0;
O_max = 255;
% Apply contrast stretching formula
I_stretched = ( (I - I_min) / (I_max - I_min) ) * (O_max - O_min) + O_min;
% Clip values to [O_min, O_max] in case any input intensities fall outside [I_min, I_max]
I_stretched = max(min(I_stretched, O_max), O_min);
% Convert back to uint8 for displaying as image
I_stretched = uint8(I_stretched);
% Display the original and contrast-stretched images
figure;
subplot(1, 2, 1), imshow(uint8(I)), title('Original Image');
subplot(1, 2, 2), imshow(I_stretched), title('Contrast-Stretched Image');
Output
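In addition to the linear stretching shown above, a piecewise linear stretch maps different intensity ranges with different slopes. A minimal sketch, assuming illustrative breakpoints (r1, s1) = (70, 30) and (r2, s2) = (180, 230) and 'einstein.jpg' as input:
% Minimal piecewise linear stretching sketch (breakpoints are illustrative)
I = imread('einstein.jpg');
if size(I, 3) == 3
    I = rgb2gray(I); % Convert to grayscale if needed
end
I = double(I);
r1 = 70;  s1 = 30;   % compress the dark range [0, r1] into [0, s1]
r2 = 180; s2 = 230;  % expand the mid range [r1, r2] into [s1, s2]
J = zeros(size(I));
low  = I < r1;
mid  = I >= r1 & I <= r2;
high = I > r2;
J(low)  = (s1 / r1) .* I(low);
J(mid)  = s1 + ((s2 - s1) / (r2 - r1)) .* (I(mid) - r1);
J(high) = s2 + ((255 - s2) / (255 - r2)) .* (I(high) - r2);
figure;
subplot(1, 2, 1), imshow(uint8(I)), title('Original Image');
subplot(1, 2, 2), imshow(uint8(J)), title('Piecewise Stretched Image');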

Question No 4
Histogram Equalization in Digital Image Processing
Histogram equalization is a technique used to enhance the contrast of an image by redistributing
its intensity values. It adjusts the pixel intensity distribution such that the histogram becomes
approximately uniform, making the image more balanced in terms of brightness and contrast.
Key Concepts of Histogram Equalization
1. Purpose:
• Improves image contrast, especially in images with uneven lighting or limited
dynamic range.
• Enhances visibility of details by spreading out frequently occurring intensity
levels.
2. Histogram:
• A histogram represents the frequency distribution of pixel intensities in an
image.
• Histogram equalization modifies this distribution to make it more uniform.
3. Global vs. Local Equalization:
• Global Histogram Equalization: Applies the transformation based on the
histogram of the entire image.
• Local Histogram Equalization: Equalizes smaller regions of the image,
preserving local details (e.g., adaptive histogram equalization).
Final Mapped Pixel Values
• Input intensity values (r) for an 8-bit grayscale image: r = [0, 1, 2, 3, 4] (intensity levels present in the image)
• Frequency of intensities: f(r) = [5, 10, 15, 30, 40] (number of pixels at each intensity level)
• Total pixels: 5 + 10 + 15 + 30 + 40 = 100
• Maximum intensity value: L - 1 = 255 (for an 8-bit image, intensities range from 0 to 255)
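Using the standard equalization mapping s_k = round((L - 1) * CDF(r_k)), a minimal sketch of the calculation for the values listed above:
% Minimal sketch of the equalization mapping for the values above
f = [5 10 15 30 40];    % pixel counts at intensities r = 0..4
p = f / sum(f);         % normalized histogram: [0.05 0.10 0.15 0.30 0.40]
cdf = cumsum(p);        % cumulative distribution: [0.05 0.15 0.30 0.60 1.00]
s = round(255 * cdf);   % final mapped pixel values: [13 38 77 153 255]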

Question No 5
Histogram Matching
Histogram matching, also known as histogram specification, is a digital image processing technique used to transform the intensity values of an image so that its histogram matches a specified histogram. Unlike histogram equalization, which automatically redistributes pixel intensities to achieve a uniform histogram, histogram matching targets a specific histogram shape.
Key Concepts of Histogram Matching
1. Purpose:
• Adjust an image's intensity distribution to resemble another image or a desired histogram shape.
• Useful in applications requiring standardized intensity distributions, such as medical imaging or remote sensing.
2. Process Overview:
• Match the cumulative distribution function (CDF) of the input image to the desired (reference) CDF.
3. Global vs. Local Matching:
• Global Histogram Matching: Matches the histogram of the entire image.
• Local Histogram Matching: Matches the histogram of specific regions for finer control.
Code
% Load an equalized grayscale image
equalized_image = imread('equalized_image.jpg'); % Load your grayscale image
% Target histogram as per the given values
target_histogram = [10, 20, 40, 25, 5, 0];
target_histogram = [target_histogram, zeros(1, 256 - length(target_histogram))]; % Pad to 256 levels
% Step 1: Calculate the histogram of the equalized image
[equalized_histogram, ~] = imhist(equalized_image); % Get histogram of equalized image
equalized_histogram = equalized_histogram / numel(equalized_image); % Normalize histogram
% Step 2: Normalize the target histogram
target_histogram = target_histogram / sum(target_histogram); % Normalize histogram to probabilities
% Step 3: Compute the CDFs of the equalized image and the target histogram
equalized_cdf = cumsum(equalized_histogram); % CDF of the equalized image
target_cdf = cumsum(target_histogram); % CDF of the target histogram
% Step 4: Create the mapping function
map_values = zeros(256, 1, 'uint8'); % Initialize the mapping array for each intensity level
% Map each intensity level in the equalized image to the closest intensity in the target histogram
for i = 1:256
    % Find the closest match in the target CDF for each source CDF value
    [~, idx] = min(abs(equalized_cdf(i) - target_cdf));
    map_values(i) = idx - 1; % Map intensity i-1 in equalized_image to idx-1 in target
end
% Step 5: Apply the mapping to transform the equalized image
matched_image = map_values(double(equalized_image) + 1); % Apply mapping to image pixels
% Display results
figure;
subplot(1, 3, 1); imshow(equalized_image); title('Equalized Image');
subplot(1, 3, 2); imshow(matched_image); title('Matched Image');
subplot(1, 3, 3); imhist(matched_image); title('Histogram of Matched Image');

Output
Question No 6
The Sobel filter is an edge-detection operator used in digital image processing and computer
vision to detect edges by calculating the gradient of the image intensity. It emphasizes regions of
high spatial frequency where intensity changes rapidly, such as edges or boundaries within an
image.
Code
I = imread('buildingedges.jpg'); % Replace 'buildingedges.jpg' with the path to your image
if size(I, 3) == 3
I = rgb2gray(I); % Convert to grayscale if the image is RGB
end
% Sobel Kernels
Sx = [-1 0 1; -2 0 2; -1 0 1]; % Horizontal gradient kernel (highlights vertical edges)
Sy = [-1 -2 -1; 0 0 0; 1 2 1]; % Vertical gradient kernel (highlights horizontal edges)
% Apply kernels to get gradients
Gx = imfilter(double(I), Sx, 'replicate');
Gy = imfilter(double(I), Sy, 'replicate');
% Calculate gradient magnitude
G = sqrt(Gx.^2 + Gy.^2);
G = uint8(G * (255 / max(G(:)))); % Normalize to display as an image
% Display results
figure;
subplot(1,3,1), imshow(I), title('Original Image');
subplot(1,3,2), imshow(Gx, []), title('Horizontal Gradient Gx (Vertical Edges)');
subplot(1,3,3), imshow(Gy, []), title('Vertical Gradient Gy (Horizontal Edges)');
figure;
imshow(G, []); title('Edge Detected Image (Magnitude of Gradient)');

Output
Question No 7
Median Filter
The median filter is a nonlinear digital filtering technique used to reduce noise in images.
Unlike linear filters (e.g., Gaussian or mean filters), the median filter preserves the edges of an
image while effectively removing noise, making it a popular choice for preprocessing tasks.
Key Concepts of the Median Filter
1. Purpose:
• Primarily used to remove salt-and-pepper noise (impulsive noise) from an
image.
• Smooths the image while retaining edge details.
2. How It Works:
• A sliding window (kernel) moves across the image.
• For each window position:
▪ Extract all pixel values within the window.
▪ Compute the median of these values.
▪ Replace the center pixel with the computed median.
3. Kernel Size:
• Typically square-shaped (e.g., 3×3, 5×5).
• Larger kernel sizes smooth the image more strongly but may blur fine details.
4. Median Calculation:
• The median is the middle value in an ordered list of intensity values.
▪ For an odd-sized kernel, it’s the middle value.
▪ For an even-sized kernel, the average of the two middle values may be
used.
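MATLAB's medfilt2 (Image Processing Toolbox) implements this operation directly. A minimal sketch, assuming 'einstein.jpg' is available and adding salt-and-pepper noise with imnoise purely for demonstration:
% Minimal median-filtering sketch (assumes 'einstein.jpg' is available)
img = imread('einstein.jpg');
if size(img, 3) == 3
    img = rgb2gray(img); % Convert to grayscale if needed
end
noisy_img = imnoise(img, 'salt & pepper', 0.05); % Add 5% salt-and-pepper noise for demonstration
denoised_img = medfilt2(noisy_img, [3, 3]); % Replace each pixel with its 3x3 neighborhood median
figure;
subplot(1, 3, 1), imshow(img), title('Original Image');
subplot(1, 3, 2), imshow(noisy_img), title('Noisy Image');
subplot(1, 3, 3), imshow(denoised_img), title('Median Filtered Image');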
Question No 8
Gaussian Filter
The Gaussian filter is a widely used smoothing filter in digital image processing. It reduces
noise and detail in images by applying a Gaussian kernel, which gives more weight to pixels
closer to the center of the kernel. This ensures a smooth transition between pixel intensities,
making it ideal for reducing high-frequency noise while preserving overall image structure.
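The weights come from the 2-D Gaussian function: a neighbor at offset (x, y) from the kernel center receives weight G(x, y) = (1 / (2*pi*sigma^2)) * exp(-(x^2 + y^2) / (2*sigma^2)), where sigma (the standard deviation) controls how much smoothing is applied. The call to fspecial('gaussian', ...) in the code below generates such a kernel, normalized so that its weights sum to 1.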
Code
% Read the grayscale image
img = imread('einstein.jpg');
if size(img, 3) == 3
img = rgb2gray(img); % Convert to grayscale if needed
end
% Define the Gaussian kernel
sigma = 1;
kernel = fspecial('gaussian', [3, 3], sigma);
% Apply the Gaussian filter to the image
filtered_img = imfilter(img, kernel, 'same');
% Display the original and filtered images
figure;
subplot(1, 2, 1), imshow(img), title('Original Image');
subplot(1, 2, 2), imshow(filtered_img), title('Filtered Image');
Output

Question No 9
Image Arithmetic in Digital Image Processing
Image arithmetic refers to the application of basic mathematical operations (e.g., addition, subtraction, multiplication, and division) on one or more images. These operations are typically performed pixel by pixel, and they play a crucial role in enhancing, blending, or analyzing images.
Key Image Arithmetic Operations
1. Addition
• Description: Adds the intensity values of corresponding pixels in two images.
• Applications:
o Blending images.
o Increasing brightness (adding a constant value).
• Consideration: Values exceeding the maximum intensity (e.g., 255 for 8-bit images) may need clipping.
2. Subtraction
• Description: Subtracts the intensity values of corresponding pixels in two images.
• Applications:
o Detecting differences between images.
o Highlighting changes or motion.
• Consideration: Resulting negative values are clipped to zero in unsigned images.
3. Multiplication
• Description: Multiplies the intensity values of corresponding pixels.
• Applications:
o Masking images (multiplying with binary masks).
o Enhancing specific features.
• Consideration: Values may need normalization or scaling.
4. Division
• Description: Divides the intensity values of corresponding pixels.
• Applications:
o Normalizing images.
o Shadow correction.
• Consideration: Results may require scaling to fit the intensity range.
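The sketch below illustrates these operations in MATLAB; it assumes 'einstein.jpg' is available, derives a binary mask from the image itself so the example stays self-contained, and relies on the fact that uint8 arithmetic in MATLAB automatically saturates (clips) at 0 and 255.
% Minimal image-arithmetic sketch (assumes 'einstein.jpg' is available)
A = imread('einstein.jpg');
if size(A, 3) == 3
    A = rgb2gray(A); % Convert to grayscale if needed
end
added = A + 50;                            % Addition: uint8 arithmetic saturates at 255
subtracted = A - 50;                       % Subtraction: negative results are clipped to 0
mask = uint8(A > 128);                     % Binary mask derived from the image itself
masked = A .* mask;                        % Multiplication: keeps only pixels where the mask is 1
normalized = double(A) ./ (double(A) + 1); % Division: +1 in the denominator avoids division by zero
figure;
subplot(2, 2, 1), imshow(added), title('Addition (A + 50)');
subplot(2, 2, 2), imshow(subtracted), title('Subtraction (A - 50)');
subplot(2, 2, 3), imshow(masked), title('Multiplication (Masking)');
subplot(2, 2, 4), imshow(normalized, []), title('Division (Normalized)');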

Question No 10
Spatial Filters in Digital Image Processing
Spatial filtering involves directly manipulating the pixel values of an image using a kernel or
mask. These filters operate in the spatial domain, performing operations like smoothing,
sharpening, and edge detection to enhance or modify an image.
Key Concepts of Spatial Filtering
1. Kernel or Mask:
• A small matrix (e.g., 3×3, 5×5) used to define the operation applied to the
pixels.
• Slides over the image, performing a mathematical operation (e.g., convolution
or correlation) on each local region.
2. Operation:
• Replace the value of a pixel with the result of a computation involving the kernel
and its neighboring pixels.
3. Linear vs. Nonlinear Filters:
• Linear Filters: Weighted average of pixel values (e.g., Gaussian filter, Mean
filter).
• Nonlinear Filters: Use functions like median or mode instead of averaging (e.g.,
Median filter).
Code
image = imread('einstein.jpg'); % Replace 'einstein.jpg' with the path to your image
if size(image, 3) == 3
image = rgb2gray(image);
end
% Display the original grayscale image
figure, imshow(image), title('Original Grayscale Image');
% Apply Median Filter
median_filtered_image = medfilt2(image, [3, 3]); % 3x3 neighborhood for filtering
figure, imshow(median_filtered_image), title('Median Filtered Image');
% Apply Gaussian Filter
gaussian_filtered_image = imgaussfilt(image, 2); % 2 is the standard deviation of the Gaussian kernel
figure, imshow(gaussian_filtered_image), title('Gaussian Filtered Image');
Output
Question No 11
Histogram Interpretation in Digital Image Processing
A histogram is a graphical representation of the pixel intensity distribution in an image. It
provides insights into the tonal range, contrast, brightness, and overall characteristics of an
image, which are crucial for enhancing and analyzing images.
Components of an Image Histogram
1. X-Axis:
• Represents pixel intensity levels.
• For an 8-bit grayscale image, intensity ranges from 0 (black) to 255 (white).
• For a color image, separate histograms are generated for Red, Green, and Blue
channels.
2. Y-Axis:
• Represents the number of pixels at each intensity level.
Types of Image Histograms
1. Grayscale Histogram:
• Displays the intensity distribution for grayscale images.
2. Color Histogram:
• Displays separate intensity distributions for the Red, Green, and Blue channels.
Interpreting Image Histograms
1. Dark Images (Underexposed):
• Most of the pixel intensities are concentrated on the left side of the histogram (low
intensity values).
• Indicates a lack of light in the image.
2. Bright Images (Overexposed):
• Most of the pixel intensities are concentrated on the right side of the histogram (high
intensity values).
• Indicates excessive light or loss of detail in bright areas.
3. High Contrast Images:
• Histogram spans a wide range of intensity values.
• Peaks may appear at both low and high intensities.
4. Low Contrast Images:
• Histogram is narrow and concentrated in the middle of the intensity range.
• Indicates minimal variation in brightness levels, resulting in a flat-looking image.
5. Uniform Images:
• Histogram has a single narrow peak, representing a uniform intensity level throughout the
image.
Code
% MATLAB script for Linear Contrast Stretching
image = imread('einstein.jpg'); % Replace 'einstein.jpg' with the actual image filename
% Convert to double for calculation
image = double(image);
% Define minimum and maximum pixel values
min_val = 30;
max_val = 220;
% Apply linear contrast stretching
stretched_image = (image - min_val) * (255 / (max_val - min_val));
stretched_image = uint8(stretched_image); % Convert back to uint8 for display (values outside [0, 255] are clipped)
% Display the original and transformed images
subplot(1, 2, 1), imshow(uint8(image)), title('Original Image');
subplot(1, 2, 2), imshow(stretched_image), title('Contrast Stretched Image');
Output

Question No 12
Edge Detection
Code
% Read the grayscale image
img = imread('buildingedges.jpg');
if size(img, 3) == 3
gray_img = rgb2gray(img); % Convert to grayscale if it's a color image
else
gray_img = img;
end
% Define the vertical Sobel kernel
sobel_vertical = [-1 0 1; -2 0 2; -1 0 1];
% Apply the Sobel filter for vertical edges
vertical_edges = imfilter(double(gray_img), sobel_vertical);
% Display the result
figure;
imshow(vertical_edges, []);
title('Vertical Edges Detected Using Sobel Filter');
Output
Question No 13
