Divp Manual-22-23
YAVATMAL
Lab Manual
VIIth Semester of
Bachelor of Engineering
Mission of Institute
“Provide highest quality resources, learning processes and research to create
technically qualified professionals capable of making significant contribution to
individual and social empowerment”
Vision of Department
“To be a centre of excellence in Electronics & Telecommunication Engineering
within the region”
Mission of Department
“To be a leading centre to impart quality education through effective teaching-
learning practices, reinforcing resources & research to create competent and
responsible technocrats capable of contributing to individual and social
empowerment.”
PROGRAM EDUCATIONAL OBJECTIVES [PEO]
2. To make the aspiring students acquainted with the conceptual as well as practical
knowledge of Industrial Automation & Technologies.
3. To provide hands-on experience and exposure to the aspiring students in the area of
embedded and real-time systems.
JAWAHARLAL DARDA INSTITUTE OF ENGINEERING & TECHNOLOGY, YAVATMAL
CERTIFICATE
This is to certify that the practical report entitled
By
Date : Date :
INDEX
Date:
EXPERIMENT NO.1
AIM: To understand the analogy between an image and a matrix, and to study different types of images.
THEORY:
Image: An 'IMAGE' may be defined as a two-dimensional function f(x, y), where x and y
are the spatial coordinates and the amplitude f(x, y) at any pair of coordinates (x, y) is
known as the INTENSITY or GRAY LEVEL of the image at that point. An image represents a
measure of some characteristic, such as brightness or color, of a viewed scene; it is a projection
of a 3D scene onto a 2D projection plane.
Digital Image: A digital image is composed of picture elements called PIXELS, the
smallest samples of an image. A PIXEL represents the brightness at one point. Conversion of
an analog image into a digital image involves two important operations:
1) Sampling.
2) Quantization.
Image as a Matrix:
Digital Image Representation:
The result of sampling and quantization is a matrix of real numbers. Assume
that an image f(x, y) is sampled so that the resulting digital image has M rows and N columns;
the complete resulting M x N matrix, in compact form, is a digital image.
The digitization process requires decisions about the values of M, N and L, where L is the number
of discrete gray levels allowed for each pixel and M, N are positive integers. The number of gray
levels is L = 2^k, where k is the number of bits used to represent every pixel (generally k = 8).
The range of values spanned by the gray scale is called the dynamic range of an image; images
whose gray levels span a significant portion of the gray scale are said to have a high dynamic
range. The number of bits b required to store a digitized image is b = M*N*k, and if M = N then
b = N^2 * k. An image having 2^k gray levels is referred to as a 'k-bit image'.
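The storage formulas above can be checked with a short calculation, sketched here in Python for illustration (the manual's programs are in MATLAB):

```python
def image_storage_bits(M, N, k):
    """Gray levels and bits needed to store an M x N image with k bits per pixel."""
    L = 2 ** k          # number of discrete gray levels, L = 2^k
    b = M * N * k       # total storage in bits, b = M*N*k
    return L, b

# A 512 x 512, 8-bit image:
L, b = image_storage_bits(512, 512, 8)
# L = 256 gray levels, b = 2097152 bits = 262144 bytes = 256 KB
```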
Different types of images: binary, grayscale and color:
Binary Image: A binary image takes only two values, 0 and 1, so brightness gradations cannot be
represented. A grayscale image can be converted to a binary image by a
thresholding operation. Geometric properties of an object, such as the location of its centroid or
its orientation, can be easily extracted from a binary image. In a binary image, the foreground
color gives the objects in the image and the background color gives the rest of the image.
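The thresholding operation mentioned above can be sketched in a few lines, shown here in Python for illustration (the manual's programs use MATLAB functions such as im2bw for the same purpose):

```python
def threshold(gray, T):
    """Convert a grayscale image (list of rows of 0-255 values) to a binary image."""
    return [[1 if p >= T else 0 for p in row] for row in gray]

gray = [[10, 200],
        [130, 60]]
binary = threshold(gray, 128)   # -> [[0, 1], [1, 0]]
```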
Grayscale Image: A grayscale image contains only brightness information; each pixel value
corresponds to the amount of light, so brightness gradations can be differentiated. Each pixel is
represented as a byte or word whose value represents the light intensity at that point in the
image. An 8-bit image has a brightness variation from 0 to 255, where 0 represents black and
255 represents white. Each pixel is a scalar proportional to brightness.
Color Image: A color image has three values per pixel, measuring the intensity and chrominance of
light; each pixel is a vector of color components. A color image can be modeled as 3-band
monochrome image data, where each band corresponds to a different color; the actual
information stored in the digital image data is the brightness information in each spectral band.
Common color spaces are RGB (Red, Green, Blue), HSV (Hue, Saturation, Value) and
CMYK (Cyan, Magenta, Yellow, Black). Color images usually consist of three separate image
representations called color planes, with pixel values in each plane corresponding to the
intensity of a certain color at a specific point in the image. The most popular system is RGB,
where three color planes represent the intensities of red, green and blue in the scene.
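Combining the three color planes into a single brightness value can be sketched as follows (Python, for illustration; the 0.299/0.587/0.114 luminance weights are the common ITU-R BT.601 choice, an assumption here, not something the manual specifies):

```python
def rgb_to_gray(r, g, b):
    """Weighted combination of the three color planes into one brightness value."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# A pure-white pixel maps to full brightness, pure black to zero:
rgb_to_gray(255, 255, 255)   # -> 255
rgb_to_gray(0, 0, 0)         # -> 0
```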
colormap(gray)
image(a)
Fig. 3: Output for matrix 'a' displayed as an image
FUNCTIONS:
CONCLUSION:
MATLAB allows us to process an image as a matrix, so various operations can be
carried out at the pixel level. In a grayscale image the pixel intensity varies from 0 to
255: dark pixels have intensities near 0 and light pixels have intensities near 255.
In a binary image the pixel intensity is 0 for dark and 1 for light. In a color image the
intensity of a pixel depends on the combination of the red, green and blue components. The edges
of the image are also observed.
Date:
EXPERIMENT NO.2
AIM: To develop a code for resizing and rotating an image
PROGRAM:
clc
clear all
close all
img = imread('pout.tif');
subplot(221)
imshow(img)
title('Original image');
img1 = flipdim(img,1);   % flip up-down; use flip(img,1) in newer MATLAB
subplot(222)
imshow(img1)
title('Up-Down image');
img2 = flipdim(img,2);   % flip left-right; use flip(img,2) in newer MATLAB
subplot(223)
imshow(img2)
title('Left-Right image');
subplot(224)
img3 = imrotate(img,45);
imshow(img3);
title('Rotation of image');
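Since the AIM also mentions resizing, the ideas behind MATLAB's imresize and imrotate can be sketched with nearest-neighbour resizing and a 90-degree rotation (Python, for illustration only; the MATLAB functions use proper interpolation and arbitrary angles):

```python
def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize of an image stored as a list of rows."""
    h, w = len(img), len(img[0])
    return [[img[r * h // new_h][c * w // new_w] for c in range(new_w)]
            for r in range(new_h)]

def rotate90(img):
    """Rotate an image 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

img = [[1, 2],
       [3, 4]]
big = resize_nearest(img, 4, 4)   # each pixel replicated into a 2x2 block
rot = rotate90(img)               # -> [[2, 4], [1, 3]]
```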
FUNCTIONS:
RESULT:
CONCLUSION:
Date:
EXPERIMENT NO.3
AIM: To develop a code to perform arithmetic and logical operations on image.
PROGRAM:
Logical operators:
clc
clear
close all
S=zeros(100,100);
S(30:60,40:60)=1;
subplot(221)
imshow(S)
title('image1');
P=zeros(100,100);
P(20:40,40:70)=1;
subplot(222)
imshow(P)
title('image2');
Q=and(S,P);
subplot(223)
imshow(Q)
title('AND operator');
R=not(S);
subplot(224)
imshow(R)
title('NOT operator');
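The AIM also asks for arithmetic operations. MATLAB's imadd and imsubtract saturate at the uint8 limits rather than wrapping around; that behaviour can be sketched as follows (Python, for illustration):

```python
def imadd(a, b):
    """Pixel-wise addition with saturation at 255 (uint8 behaviour)."""
    return [[min(x + y, 255) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def imsubtract(a, b):
    """Pixel-wise subtraction clipped at 0."""
    return [[max(x - y, 0) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

A = [[200, 100]]
B = [[100, 150]]
imadd(A, B)        # -> [[255, 250]]  (200+100 saturates at 255)
imsubtract(A, B)   # -> [[100, 0]]    (100-150 clips to 0)
```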
FUNCTIONS:
Result:
Arithmetic operation:
Logical operation:
CONCLUSION:
Date:
EXPERIMENT NO.4
AIM: To develop a code to add and remove noise from the image .
OBJECTIVES : (1) Add different types of noise to image.
(2) Design filter to remove the noise.
THEORY:
Digital images are prone to a variety of types of noise. There are several ways that noise can
be introduced into an image, depending on how the image is created. For example:
If the image is scanned from a photograph made on film, the film grain is a source of noise.
Noise can also be the result of damage to the film, or be introduced by the scanner itself.
If the image is acquired directly in a digital format, the mechanism for gathering the data (such
as a CCD detector) can introduce noise.
Electronic transmission of image data can introduce noise.
Value Description
'gaussian' Gaussian white noise
'localvar' Zero-mean Gaussian white noise with an intensity-dependent variance
'poisson' Poisson noise
'salt & pepper' On and off pixels
'speckle' Multiplicative noise
There are a number of different ways to remove or reduce noise in an image; different methods are
better for different kinds of noise. The methods available include:
Using Linear Filtering: You can use linear filtering to remove certain types of noise. Certain
filters, such as averaging or Gaussian filters, are appropriate for this purpose. For example, an
averaging filter is useful for removing grain noise from a photograph. Because each pixel gets
set to the average of the pixels in its neighborhood, local variations caused by grain are reduced.
Using Median Filtering: Median filtering is similar to using an averaging filter, in that each
output pixel is set to an average of the pixel values in the neighborhood of the corresponding
input pixel. However, with median filtering, the value of an output pixel is determined by the
median of the neighborhood pixels, rather than the mean. The median is much less sensitive
than the mean to extreme values (called outliers). Median filtering is therefore better able to
remove these outliers without reducing the sharpness of the image. The medfilt2 function
implements median filtering.
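The difference between the mean and the median described above is easy to see on a single 1-D neighbourhood containing a salt-noise outlier (Python, for illustration; medfilt2 applies the same idea over a 2-D window):

```python
def median(neigh):
    """Median of a neighbourhood: sort and take the middle value."""
    s = sorted(neigh)
    return s[len(s) // 2]

# A salt-noise outlier (255) in an otherwise flat neighbourhood:
neigh = [10, 12, 255, 11, 10]
sum(neigh) / len(neigh)   # mean -> 59.6, dragged up by the outlier
median(neigh)             # median -> 11, the outlier is ignored
```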
Using Adaptive Filtering: The wiener2 function applies a Wiener filter (a type of linear filter)
to an image adaptively, tailoring itself to the local image variance. Where the variance is large,
wiener2 performs little smoothing. Where the variance is small, wiener2 performs more
smoothing. This approach often produces better results than linear filtering. The adaptive filter
is more selective than a comparable linear filter, preserving edges and other high-frequency
parts of an image. In addition, there are no design tasks; the wiener2 function handles all
preliminary computations and implements the filter for an input image. wiener2, however, does
require more computation time than linear filtering. wiener2 works best when the noise is
constant-power ("white") additive noise, such as Gaussian noise.
FUNCTIONS:
J = imnoise(I, type) adds noise of the given type to the intensity image I; type is a string that
can take one of the values listed in the table above.
PROGRAM:
clc
clear all
close all
I = imread('eight.tif');
subplot(331)
imshow(I)
C = imnoise(I,'gaussian',0.01);
subplot(332)
imshow(C)
J = imnoise(I,'salt & pepper',0.02);
subplot(333)
imshow(J)
H = imnoise(I,'poisson');
subplot(334)
imshow(H)
T = imnoise(I,'speckle',0.05);
subplot(335)
imshow(T)
RESULT:
CONCLUSION:
From the above experiment we can conclude that we can add different noise patterns to
images and remove the noise using different filters. For salt-and-pepper noise, the
median filter gives better noise removal than other filters. Adaptive filtering works
best with Gaussian noise.
Date:
EXPERIMENT NO. 5
AIM: To develop a code for plotting the histogram of an image and performing histogram equalization.
THEORY:
Histogram –
A histogram is a bar graph that shows the distribution of data. In image processing, histograms are
used to show how many pixels of each value are present in an image, and can be very useful
in determining which pixel values are important. From this data you can manipulate an
image to meet your specifications; histogram data can aid in contrast enhancement and
thresholding.
Histogram equalization:
Histogram equalization is a method of contrast adjustment using the
image's histogram. It often produces unrealistic effects in photographs;
however, it is very useful for scientific images such as thermal or X-ray images, often the same
class of images to which one would apply false color. Histogram equalization can also produce
undesirable effects when applied to images with low color depth: for example, applied to an
8-bit image displayed in 8-bit gray scale, it will further reduce the color depth (number of
unique shades of gray) of the image. Histogram equalization works best when applied to images
with a much higher bit depth than the palette size, such as continuous data or 16-bit grayscale images.
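Histogram equalization maps each gray level through the normalized cumulative histogram (CDF). A minimal sketch of this mapping (Python, for illustration; it assumes at least two distinct gray levels, and MATLAB's histeq additionally supports matching a target histogram):

```python
def equalize(pixels, L=256):
    """Histogram equalization: map each level through the scaled cumulative histogram."""
    n = len(pixels)
    hist = [0] * L
    for p in pixels:
        hist[p] += 1
    cdf, total = [0] * L, 0
    for g in range(L):
        total += hist[g]
        cdf[g] = total
    cdf_min = min(c for c in cdf if c > 0)   # smallest non-zero CDF value
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (L - 1)) for p in pixels]

# Two low-contrast levels get stretched over the full 0..3 range:
equalize([0, 0, 1, 1], L=4)   # -> [0, 0, 3, 3]
```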
FUNCTIONS:
imhist - In order to create a histogram from an image, use the imhist function.
PROGRAM:
clc
clear all
close all
img=imread('pout.tif');
subplot(221)
imshow(img)
title('Original image');
subplot(222)
imhist(img)
axis tight
title('Histogram of original image');
img1=histeq(img);
subplot(223)
imshow(img1)
title('Enhanced image');
subplot(224)
imhist(img1)
axis tight
title('Histogram of enhanced image');
RESULT:
CONCLUSION: Thus we have obtained the Equalized Histogram from the original Histogram.
Date:
EXPERIMENT NO.6
AIM: To develop a code for applying Fourier transform and inverse Fourier transform on the image
OBJECTIVES :
1) Find fourier transform of an image.
2) Reconstruct the image by taking inverse fourier transform.
THEORY:
The Fourier transform F(u) of a single-variable, continuous function f(x) is defined by the equation

F(u) = ∫ f(x) e^(-j2πux) dx

Conversely, given F(u), we can obtain f(x) by means of the inverse Fourier transform

f(x) = ∫ F(u) e^(j2πux) du

These two equations comprise the Fourier transform pair. They indicate the important fact that a
function can be recovered from its transform.
These equations are easily extended to two variables, u and v:

F(u,v) = ∫∫ f(x,y) e^(-j2π(ux+vy)) dx dy

and the Fourier spectrum (magnitude) is

|F(u,v)| = [R(u,v)^2 + I(u,v)^2]^(1/2)

where R(u,v) and I(u,v) are the real and imaginary parts of F(u,v), respectively.
FUNCTIONS:
FFT2 (2-D discrete Fourier transform)
Syntax:
Y = fft2(X)
Y = fft2(X,m,n)
Description:
Y = fft2(X) returns the two-dimensional discrete Fourier transform (DFT) of X, computed with a
fast Fourier transform (FFT) algorithm. The result Y is the same size as X.
Y = fft2(X,m,n) truncates X, or pads X with zeros, to create an m-by-n array before doing the
transform; the result is m-by-n. fft2 computes the one-dimensional DFT of each column of X, then
of each row of the result. The execution time for the FFT depends on the length of the transform:
it is fastest for powers of two, almost as fast for lengths that have only small prime factors,
and typically several times slower for lengths that are prime or have large prime factors.
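The transform pair given in the theory can be checked numerically in 1-D with a direct DFT and inverse DFT (Python with cmath, for illustration; fft2 applies the same transform along columns and then rows, as described above):

```python
import cmath

def dft(x):
    """Direct 1-D discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * u * n / N) for n in range(N))
            for u in range(N)]

def idft(X):
    """Inverse 1-D discrete Fourier transform."""
    N = len(X)
    return [sum(X[u] * cmath.exp(2j * cmath.pi * u * n / N) for u in range(N)) / N
            for n in range(N)]

x = [1, 0, 0, 0]   # impulse: its spectrum is flat
X = dft(x)         # every X[u] equals 1
y = idft(X)        # recovers x (up to floating-point round-off)
```

This demonstrates the "recovered from its transform" property: applying the inverse transform to the spectrum reproduces the original signal.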
PROGRAM:
clc
clear all
close all
A=zeros(256,256);
A(78:178,78:178)=1;
subplot(221)
imshow(A)
title('Original Image');
Af=fft2(A);
subplot(222)
imshow(log(1+abs(Af)),[])   % Af is complex; display the log magnitude spectrum
title('FFT of Image');
Aff=fftshift(Af);
subplot(223)
imshow(log(1+abs(Aff)),[])
title('Shifted FFT of Image');
Ai=ifft2(Af);
subplot(224)
imshow(real(Ai))   % ifft2 of fft2 is real up to round-off
title('Inverse FFT of Image');
RESULT:
CONCLUSION:
A transformation is basically a tool that allows us to move from one domain to another;
it does not change the information content of the signal.
The Fourier transform is represented by its magnitude and phase: the magnitude tells
how much of a certain frequency component is present, and the phase tells where that
frequency component is in the image.
If the phase changes, the edge information is lost.
Date:
EXPERIMENT NO.7
FUNCTION:
PROGRAM:
clc;
close all;
clear all;
CONCLUSION:
Date:
EXPERIMENT NO.8
AIM: To develop a code for performing morphological operations on an image.
OBJECTIVES : (1) Image dilation and erosion (binary and gray image).
(2) Image Closing and Opening (binary and gray image).
THEORY:
Morphology is a broad set of image processing operations that process images based on
shapes. Morphological operations apply a structuring element to an input image, creating an output
image of the same size. In a morphological operation, the value of each pixel in the output image is
based on a comparison of the corresponding pixel in the input image with its neighbors. By
choosing the size and shape of the neighborhood, you can construct a morphological operation that
is sensitive to specific shapes in the input image. The existing dilation and erosion operators have
been extended to work with grayscale images. New functions range from additional basic operators
(opening, closing) to advanced tools useful for segmentation (distance transforms, reconstruction-
based operators, and the watershed transform). The functions use advanced techniques for high
performance, including automatic structuring-element decomposition, 32-bit binary image packing,
and queue-based algorithms.
The most basic morphological operations are dilation and erosion. Dilation adds pixels to the
boundaries of objects in an image, while erosion removes pixels on object boundaries. The number
of pixels added or removed from the objects in an image depends on the size and shape of the
structuring element used to process the image. In the morphological dilation and erosion operations,
the state of any given pixel in the output image is determined by applying a rule to the
corresponding pixel and its neighbors in the input image. The rule used to process the pixels defines
the operation as a dilation or an erosion.
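The pixel rule described above can be sketched for a binary image with a 3x3 square structuring element (Python, for illustration; imerode and imdilate generalize this to arbitrary structuring elements and grayscale images):

```python
def erode(img):
    """Binary erosion, 3x3 square: a pixel stays 1 only if all 9 neighbours are 1."""
    h, w = len(img), len(img[0])
    def at(r, c):
        return img[r][c] if 0 <= r < h and 0 <= c < w else 0
    return [[int(all(at(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)))
             for c in range(w)] for r in range(h)]

def dilate(img):
    """Binary dilation, 3x3 square: a pixel becomes 1 if any neighbour is 1."""
    h, w = len(img), len(img[0])
    def at(r, c):
        return img[r][c] if 0 <= r < h and 0 <= c < w else 0
    return [[int(any(at(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)))
             for c in range(w)] for r in range(h)]

# A 3x3 block of ones inside a 5x5 image:
img = [[0]*5, [0,1,1,1,0], [0,1,1,1,0], [0,1,1,1,0], [0]*5]
# Erosion shrinks the block to its single centre pixel;
# dilation grows it until it fills the whole 5x5 image.
```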
FUNCTIONS:
EROSION: To erode an image, use the imerode function. The imerode function accepts two
primary arguments:
1. The input image to be processed (grayscale, binary, or packed binary image).
2. A structuring element object, returned by the strel function, or a binary matrix defining the
neighborhood of a structuring element.
imerode
Syntax IM2 = imerode(IM,SE)
Description:-
IM2 = imerode(IM,SE) erodes the grayscale, binary, or packed binary image IM, returning
the eroded image IM2. The argument SE is a structuring element object or array of structuring
element objects returned by the strel function. If IM is logical and the structuring element is flat,
imerode performs binary erosion; otherwise it performs grayscale erosion. If SE is an array of
structuring element objects, imerode performs multiple erosions of the input image, using each
structuring element in SE in succession.
Dilation:
To dilate an image, use the imdilate function. The imdilate function accepts two
primary arguments:
1. The input image to be processed (grayscale, binary, or packed binary image).
2. A structuring element object, returned by the strel function, or a binary matrix defining the
neighborhood of a structuring element.
imdilate
Syntax : IM2 = imdilate(IM,SE)
Description
IM2 = imdilate(IM,SE) dilates the grayscale, binary, or packed binary image IM, returning the
dilated image, IM2. The argument SE is a structuring element object, or array of structuring element
objects, returned by the strel function. If IM is logical and the structuring element is flat, imdilate
performs binary dilation; otherwise, it performs grayscale dilation. If SE is an array of structuring
element objects, imdilate performs multiple dilations of the input image, using each structuring
element in SE in succession.
Open an Image
imopen
Syntax
IM2 = imopen(IM,SE)
IM2 = imopen(IM,NHOOD)
Description
IM2 = imopen(IM,SE) performs morphological opening on the grayscale or binary image IM with
the structuring element SE. The argument SE must be a single structuring element object, as
opposed to an array of objects. IM2 = imopen(IM,NHOOD) performs opening with the structuring
element strel(NHOOD), where NHOOD is an array of 0's and 1's that specifies the structuring
element neighborhood.
Close an Image
Imclose
Syntax
IM2 = imclose(IM,SE)
IM2 = imclose(IM,NHOOD)
Description
IM2 = imclose(IM,SE) performs morphological closing on the grayscale or binary image IM,
returning the closed image, IM2. The structuring element, SE, must be a single structuring element
object, as opposed to an array of objects. IM2 = imclose(IM,NHOOD) performs closing with the
structuring element strel(NHOOD), where NHOOD is an array of 0's and 1's that specifies the
structuring element neighborhood.
PROGRAM:
clc
clear all
close all
I=imread('cameraman.tif');
figure,imshow(I)
title('Original Image');
Th=graythresh(I);
I1=im2bw(I,Th);
figure,imshow(I1)
title('BinaryImage');
I2=bwmorph(I1,'erode',5);
figure,imshow(I2)
title('Eroded Image');
I3=bwmorph(I1,'dilate',5);
figure,imshow(I3)
title('Dilated Image');
I4=bwmorph(I1,'open',5);
figure,imshow(I4)
title('Opening of Image');
I5=bwmorph(I1,'close',5);
figure,imshow(I5)
title('Closing of Image');
I6=bwmorph(I1,'thin',5);
figure,imshow(I6)
title('Thinning of Image');
I7=bwmorph(I1,'thicken',5);
figure,imshow(I7)
title('Thickening of Image');
RESULT:
CONCLUSION:
Morphological image processing is based on probing an image with a small shape or
template known as a structuring element.
In the dilation and erosion operations, the image is probed with the structuring element by
successively placing the origin of the structuring element at all possible pixel locations.
The opening operation is defined as erosion followed by dilation; it eliminates small and
thin objects and smooths the boundaries of large objects.
The closing operation is defined as dilation followed by erosion; it fills small and thin holes
in objects and smooths the boundaries of large objects.
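The opening and closing definitions in the conclusion can be checked on a 1-D binary signal with a 3-sample window (Python, for illustration; imopen and imclose do the same with 2-D structuring elements, and note that zero padding at the borders shrinks regions touching the boundary):

```python
def erode1d(s):
    """1-D binary erosion with a 3-sample window (zero-padded borders)."""
    pad = [0] + s + [0]
    return [int(pad[i] and pad[i + 1] and pad[i + 2]) for i in range(len(s))]

def dilate1d(s):
    """1-D binary dilation with a 3-sample window (zero-padded borders)."""
    pad = [0] + s + [0]
    return [int(pad[i] or pad[i + 1] or pad[i + 2]) for i in range(len(s))]

def opening(s):   # erosion followed by dilation: removes thin spikes
    return dilate1d(erode1d(s))

def closing(s):   # dilation followed by erosion: fills thin holes
    return erode1d(dilate1d(s))

opening([0, 1, 0, 1, 1, 1, 0])   # lone spike removed   -> [0, 0, 0, 1, 1, 1, 0]
closing([0, 1, 1, 0, 1, 1, 0])   # one-sample hole filled -> [0, 1, 1, 1, 1, 1, 0]
```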