
CHRISTIAN COLLEGE OF ENGINEERING AND TECHNOLOGY,

ODDANCHATRAM-624619.

AP4011 – ADVANCED DIGITAL IMAGE PROCESSING

RECORD NOTE BOOK

REGISTER NO:

Certified that it is a bonafide record of practical lab work done by …………………………………………….. during the year ……………………

STAFF IN-CHARGE                                  HEAD OF THE DEPARTMENT

Submitted for the practical exam held on …………………

Internal Examiner External Examiner


INDEX

EX.NO   NAME OF THE EXPERIMENT                                      PAGE NO.   MARKS   SIGN

1       WAVELET AND DCT BASED IMAGE COMPRESSION
2       GEOMETRICAL TRANSFORMATIONS AND INTERPOLATION OF IMAGES
3       EDGE DETECTION USING CANNY EDGE DETECTOR
4       REGION BASED, THRESHOLD BASED, AND WATERSHED SEGMENTATION
5       IMAGE FILTERING USING DFT
6       TEXTURE, GABOR, AND WAVELET FEATURE EXTRACTION
7       IMAGE FUSION USING WAVELETS
8       SEGMENTING 3D IMAGE VOLUME USING K-MEANS CLUSTERING
9       SEGMENTATION OF LUNGS FROM 3D CHEST SCAN


EXP NO: 1 WAVELET AND DCT BASED IMAGE COMPRESSION

DATE:

Aim:
To perform wavelet and DCT based image compression using MATLAB.

THEORY:
WAVELET:
Since we are dealing with compression of images, manipulating the raw image directly is impractical; if the image is represented in some mathematical form, the manipulation becomes simpler and easier. Hence the raw image needs to be transformed. To what extent a particular transform supports data compression depends on both the transform and the nature of the images being compressed. The practicality of an image coding scheme depends on the computational workload of the encoding and decoding steps, as well as the degree of compression obtained. The availability of a fast implementation algorithm can greatly enhance the appeal of a particular transform. Some of these transforms are the sine transform, cosine transform, Haar transform, slant transform, etc.
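
For instance, a single-level Haar decomposition already shows how an image splits into one approximation and three detail subbands. A minimal sketch (assuming the same peppers.png test image used in the main program below) illustrating this, with a perfect-reconstruction check:

X = im2double(rgb2gray(imread('peppers.png')));   % grayscale test image
[cA,cH,cV,cD] = dwt2(X,'haar');                   % approximation + detail subbands
Xrec = idwt2(cA,cH,cV,cD,'haar');                 % inverse transform
figure, imshowpair(X, Xrec, 'montage');           % original vs. reconstruction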

MATLAB CODE:
clear all;
close all;
input_image1=imread('peppers.png');
input_image=imnoise(input_image1,'speckle',.01);
figure;
imshow(input_image);
n=input('enter the decomposition level=');
[Lo_D,Hi_D,Lo_R,Hi_R] = wfilters('haar');
[c,s]=wavedec2(input_image,n,Lo_D,Hi_D);
disp(' the decomposition vector Output is');
disp(c);
[thr,nkeep] = wdcbm2(c,s,1.5,3*prod(s(1,:)));
[compressed_image,TREED,comp_ratio,PERFL2]=wpdencmp(thr,'s',n,'haar','threshold',5,1);
disp('compression ratio in percentage');
disp(comp_ratio);
re_ima1 = waverec2(c,s,'haar');
re_ima=uint8(re_ima1);
subplot(1,3,1);
imshow(input_image);
title('i/p image');
subplot(1,3,2);
imshow(compressed_image);
title('compressed image');
subplot(1,3,3);
imshow(re_ima);
title('reconstructed image');

INPUT IMAGE

OUTPUT IMAGE:

IMAGE COMPRESSION USING DCT


clear all;
close all;
clc;
name = input('Write the image name ( image.jpg ): ','s');
x = input('Compressed Image Quality % (1<x<100): ');
original=imread(name);
original = double(original)/255;
for i=1:3
im=original(:,:,i);
img_dct=dct2(im);
img_pow=(img_dct).^2;
img_pow=img_pow(:);
[B,index]=sort(img_pow);%no zig-zag
B=flipud(B);
index=flipud(index);
compressed_dct=zeros(size(im));
rate=size(index,1)*x/100;
for k=1:rate
compressed_dct(index(k))=img_dct(index(k));
end
img_dct=idct2(compressed_dct);
RGB(:,:,i)=img_dct;
end
imshow(original)
title('Original image');
figure;
imshow(RGB);
title('DCT Compressed Image');
imwrite(RGB, 'compressed.jpg');

OUTPUT:

RESULT: Thus image compression using DCT and wavelets in MATLAB was successfully verified.

EXP NO: 2 GEOMETRICAL TRANSFORMATIONS AND INTERPOLATION OF IMAGES

DATE:

AIM: To perform geometrical transformations and interpolation of images using MATLAB.

THEORY:

GEOMETRIC TRANSFORMATIONS:

Geometric transformations are needed to give an entity the needed position, orientation, or shape starting from an existing position, orientation, or shape. The basic transformations are scaling, rotation, translation, and shear. Other important types of transformations are projections and mappings.
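
A minimal MATLAB sketch of these basic transformations (assuming the built-in peppers.png test image, since the program below uses OpenCV in Python):

I = imread('peppers.png');
rotated = imrotate(I, 30, 'bilinear', 'crop');    % rotation by 30 degrees
scaled  = imresize(I, 1.5);                       % uniform scaling
tform   = affine2d([1 0 0; 0.3 1 0; 0 0 1]);      % shear along the x-axis
sheared = imwarp(I, tform);
montage({I, rotated, scaled, sheared});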

INTERPOLATION:

Image interpolation occurs when you resize or distort your image from one pixel grid to another. Image resizing is necessary when you need to increase or decrease the total number of pixels, whereas remapping can occur when you are correcting for lens distortion or rotating an image.
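
The choice of interpolation kernel matters when resampling. A minimal sketch (again assuming peppers.png) comparing nearest-neighbour, bilinear, and bicubic interpolation:

I = imread('peppers.png');
small      = imresize(I, 0.25);                   % downsample to a coarse grid
nearestUp  = imresize(small, 4, 'nearest');       % blocky
bilinearUp = imresize(small, 4, 'bilinear');      % smoother
bicubicUp  = imresize(small, 4, 'bicubic');       % sharper edges than bilinear
montage({nearestUp, bilinearUp, bicubicUp});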

PROGRAM:
import numpy as np
import cv2 as cv

img = cv.imread('messi5.jpg', cv.IMREAD_GRAYSCALE)
assert img is not None, "file could not be read, check with os.path.exists()"
rows, cols = img.shape
M = np.float32([[1, 0, 100], [0, 1, 50]])   # translate 100 px right, 50 px down
dst = cv.warpAffine(img, M, (cols, rows))
cv.imshow('img', dst)
cv.waitKey(0)
cv.destroyAllWindows()

OUTPUT:

INTERPOLATION

clc;
clear all;
fs = input('enter sampling frequency');
f = input('enter frequency of signal');
L = input('enter interpolation factor');
t = 0:1/fs:1;
x = sin(2*pi*f*t);
N = length(x);
n = 0:N-1;
m = 0:(N*L)-1;
x1 = zeros(1, L*N);
j = 1:L:N*L;
x1(j) = x;
f1 = fir1(34, 0.48, 'low');
output = 2*filtfilt(f1, 1, x1);
y = interp(x, L);
subplot(3,1,1);
stem(n, x);
xlabel('samples');
ylabel('amplitude');
title('Input signal');
subplot(3,1,2);
stem(m, output);
axis([0 200 -1 1]);
xlabel('samples');
ylabel('amplitude');
title('Interpolated signal');
subplot(3,1,3);
stem(m, y);
axis([0 200 -1 1]);
xlabel('samples');
ylabel('amplitude');
title('Interpolated signal using inbuilt command');

OUTPUT:

RESULT:

Thus the geometrical transformation and interpolation of images using MATLAB was successfully verified.

EXP NO 3 EDGE DETECTION USING CANNY EDGE DETECTOR

DATE:

AIM: To perform edge detection using the Canny edge detector.

Theory:

Canny edge detection is a technique to extract useful structural information from different vision
objects and dramatically reduce the amount of data to be processed. It has been widely applied
in various computer vision systems. Canny has found that the requirements for the application
of edge detection on diverse vision systems are relatively similar. Thus, an edge detection
solution to address these requirements can be implemented in a wide range of situations.
The general criteria for edge detection include:

1. Detection of edges with a low error rate, which means that the detection should accurately catch as many of the edges shown in the image as possible.
2. The edge point detected by the operator should accurately localize on the center of the edge.
3. A given edge in the image should only be marked once, and, where possible, image noise should not create false edges.
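
These criteria are traded off through the detector's two hysteresis thresholds and the Gaussian smoothing sigma. A minimal sketch of tuning them (assuming the built-in cameraman.tif image; the threshold and sigma values are illustrative only):

g = imread('cameraman.tif');
bwDefault = edge(g, 'canny');                     % automatic thresholds
bwTuned   = edge(g, 'canny', [0.04 0.10], 2);     % [low high] thresholds, sigma = 2
imshowpair(bwDefault, bwTuned, 'montage');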

PROGRAM:

i= imread('cancercell.jpg');

g=rgb2gray(i);

subplot(2,2,1);

imshow(i);

title('Original Image');

subplot(2,2,2);

imshow(g);

title('Gray Image');

c=edge(g,'canny');

subplot(2,2,3);

imshow(c);

title('Canny output');

INPUT IMAGE

OUTPUT IMAGE

Result:

Thus the edge detection using canny edge detector was verified successfully.

EXP NO: 4 REGION BASED, THRESHOLD BASED, AND WATERSHED SEGMENTATION
DATE:

Aim: To segment an image using region based, threshold based and watershed techniques.

Region based segmentation

Classification of region based segmentation:

• Region growing
• Watershed technique

1. Region growing

• Region growing consists of very fine segmentation merging together similar adjacent regions.
• Region adjacency graphs are used to represent segmentation data. Each node represents a region. An edge exists between two nodes if the corresponding regions are adjacent.
• A homogeneity predicate H(R) is a function that takes a region R and returns true or false according to the pixel properties.
• The basic formulation for region-based segmentation is a partition {R1, R2, ..., Rn} such that:
      ∀k, Rk is connected
      ∀k, H(Rk) is true
      Ri ∩ Rj = ∅ for all i ≠ j (regions must be disjoint)
      P(Ri) = true if all pixels in Ri have the same gray level
      P(Ri ∪ Rj) = false (regions Ri and Rj differ in the sense of the predicate)
• Absolute intensity homogeneity: ∀p, q ∈ S, |I(p) − I(q)| ≤ s
• Differential intensity homogeneity: ∀p, q ∈ S with p a neighbour of q, |I(p) − I(q)| ≤ s
• The first step in region growing is to select a set of seed points. Seed point selection is based on some user criterion (e.g. pixels in a certain grayscale range, pixels evenly spaced on a grid).
• The segmentation result is described by a logical predicate of the form P(R, X, t), where X is the feature vector associated with an image pixel and t is a parameter (threshold).
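
A minimal sketch of seeded region growing using the toolbox function grayconnected (assuming the built-in coins.png image; the seed location and tolerance are illustrative only):

I = imread('coins.png');
seedRow = 100; seedCol = 120;                     % hypothetical seed inside one coin
BW = grayconnected(I, seedRow, seedCol, 32);      % grow while pixel value stays within 32 of the seed
imshowpair(I, BW, 'montage');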

2. Segmentation by Watershed

In geography, a watershed is the ridge that divides areas drained by different river systems. A catchment basin is the geographical area draining into a river or reservoir. The watershed transform applies these ideas to grayscale image processing in a way that can be used to solve a variety of image segmentation problems.

Watershed principle for image segmentation:

• Local minima of the gradient of the image may be chosen as markers.
• Marker-based watershed transformation makes use of specific marker positions which have been either explicitly defined by the user or determined automatically with morphological operators.

Segmentation procedure:

• Compute a segmentation function. (This is an image whose dark regions are the objects we are trying to segment.)
• Compute foreground markers. (These are connected blobs of pixels within each of the objects.)
• Compute background markers. (These are pixels that are not part of any object.)
• Modify the segmentation function so that it only has minima at the foreground and background marker locations.
• Compute the watershed transform of the modified segmentation function.

MATLAB CODE:

REGION BASED:

I = im2double(imread('Image_To_Read.tiff'));
figure, imshow(I)
imtool(I);
Isizes = size(I);
threshI = multithresh(I, 3);
[m, n] = ginput(1);                        % pick the seed point interactively
c = impixel(I, m, n);
currPix = c(1);
surr = [-1 0; 1 0; 0 -1; 0 1];             % 4-connected neighbourhood offsets
mem = zeros(Isizes(1)*Isizes(2), 3);       % list of region members [row, col, value]
mem(1, :) = [m, n, currPix];
regSize = 1;
J = zeros(Isizes(1), Isizes(2));           % output region mask
init = 1;
posInList = 1;
k = 1;
while (k == 1)
    for l = init:posInList
        for j = 1:4
            m1 = m + surr(j, 1);
            n1 = n + surr(j, 2);
            check = (m1 >= 1) && (n1 >= 1) && (m1 <= Isizes(1)) && (n1 <= Isizes(2));
            current = impixel(I, m1, n1);
            currPix = current(1);
            if (check && currPix <= threshI(2) && (J(m1, n1) == 0))
                posInList = posInList + 1;
                mem(posInList, :) = [m1, n1, currPix];
                J(m1, n1) = 1;
            end
        end
    end
    if (posInList == init)
        k = 0;
    else
        init = init + 1;
        m = mem(init, 1, :);
        n = mem(init, 2, :);
        k = 1;
    end
end

imshow(J);
OUTPUT:

WATERSHED
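
A minimal marker-controlled watershed sketch following the procedure listed in the theory above (assuming the built-in coins.png image; the marker-extraction choices are illustrative only):

I    = imread('coins.png');
gmag = imgradient(I);                                  % segmentation function (gradient magnitude)
fgm  = imextendedmax(imgaussfilt(double(I), 2), 20);   % foreground markers (blobs inside objects)
bw   = imbinarize(I);
D    = bwdist(bw);
L0   = watershed(D);
bgm  = (L0 == 0);                                      % background markers (ridge lines)
gmag2 = imimposemin(gmag, bgm | fgm);                  % minima only at marker locations
L = watershed(gmag2);                                  % watershed transform of the modified function
imshow(label2rgb(L, 'jet', 'w', 'shuffle'));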

THRESHOLD BASED SEGMENTATION


THEORY
Thresholding is the simplest method of image segmentation. From a grayscale image, thresholding can be used to create binary images. The simplest thresholding methods replace each pixel in an image with a black pixel if the image intensity I(i, j) is less than some fixed constant T, or with a white pixel if the intensity is greater than that constant.

Color images can also be thresholded. One approach is to designate a separate threshold for each of the RGB components of the image and then combine them with an AND operation. This reflects the way the camera works and how the data is stored in the computer, but it does not correspond to the way people perceive color. Therefore, the HSL and HSV color models are more often used; note that since hue is a circular quantity, it requires circular thresholding. It is also possible to use the CMYK color model.
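
A minimal sketch using Otsu's method to pick the threshold automatically (assuming the built-in coins.png image), instead of the fixed constant T = 120 used in the program below:

I  = imread('coins.png');
T  = graythresh(I);                 % Otsu threshold, returned in the range [0,1]
BW = imbinarize(I, T);
imshowpair(I, BW, 'montage');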

PROGRAM:

clc;
close all;
clear all;
im = imread('C:\Documents and Settings\user\Desktop\Training - Images\Images\winter.jpg');
figure;
subplot(1,2,1);
imshow(im);
title('Original Image');
[ht, wd] = size(im);
opim = zeros(ht, wd, 'uint8');
for i = 1:ht
    for j = 1:wd
        int = im(i, j);
        if int <= 120
            opim(i, j) = 0;
        else
            opim(i, j) = 255;
        end
    end
end
imwrite(opim, 'C:\Documents and Settings\user\Desktop\Training - Images\Images\winter1.tif.jpg');
subplot(1,2,2);
imshow(opim);
title('Image after Threshold');
OUTPUT:

RESULT

Thus the region based, threshold based and watershed segmentation of the given input image were performed using MATLAB, and the output images were plotted and analyzed successfully.

EXP NO: 5 IMAGE FILTERING USING DFT

DATE:

AIM:

To perform image filtering using the DFT by computing and displaying the Fourier spectrum of a rectangular image in MATLAB.

THEORY:

Image processing is one of the most emerging and widely growing techniques, making it a lively research field. Image processing is the method of converting an image into digital form and performing various operations on it, such as enhancing the image or extracting useful information. One of the most interesting applications of image processing is image filtering. Image filtering is a technique used to tweak images in terms of size, shape, colour, depth, smoothness, etc. Basically, it alters the pixels of the image to transform it into the desired form using different types of graphical editing methods through graphic design and editing software. This experiment introduces various image filtering techniques and their wide applications.

The functions fft() and fft2() calculate the spectrum whose center is not in the middle of the resulting matrix but in the upper left corner. If we want to display the results so that the origin is in the middle of the image, we need to apply the function fftshift(). Besides the shift, it is necessary to scale the obtained DFT coefficient values. Scaling in this case corresponds to dividing the coefficients by the number of elements of the input matrix, which in MATLAB equals prod(size(imgFT)). It is important to notice that the inputs of the functions fft() and fft2() should not be scaled or shifted!

Since the differences between the coefficient values are commonly very large (up to several orders of
magnitude), amplitude is usually displayed in decibels (dB).
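
A minimal sketch of this shift-scale-decibel pipeline (assuming the built-in cameraman.tif image):

img   = im2double(imread('cameraman.tif'));
imgFT = fftshift(fft2(img)) / prod(size(img));    % shift the origin, then scale
ampdB = 20*log10(abs(imgFT) + eps);               % amplitude in decibels (eps avoids log of 0)
imshow(ampdB, []); colormap(jet); colorbar;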

PROGRAM:

clc;
clear all;
close all;
f = zeros(30, 30);
f(5:24, 13:17) = 1;
imshow(f, 'InitialMagnification', 'fit')   % replaces the old 'notruesize' option
F = fft2(f, 256, 256);
F2 = fftshift(F);
figure
imshow(log(abs(F2)), [-1 5], 'InitialMagnification', 'fit');
colormap(jet);
colorbar;

OUTPUT:

RESULT :

Thus filtering the image using DFT was successfully verified.


EXP NO 6: TEXTURE, GABOR, AND WAVELET FEATURE EXTRACTION

DATE:

AIM: To perform texture, Gabor, and wavelet feature extraction using MATLAB.

THEORY:

TEXTURE

Feature extraction is a method of capturing the visual content of images for indexing and retrieval. Primitive or low-level image features can be either general features, such as color, texture, and shape, or domain-specific features.
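
One common low-level texture descriptor is the gray-level co-occurrence matrix (GLCM); a minimal sketch (assuming the built-in cameraman.tif image):

I     = imread('cameraman.tif');
glcm  = graycomatrix(I, 'Offset', [0 1]);          % horizontal neighbour pairs
stats = graycoprops(glcm, {'Contrast','Correlation','Energy','Homogeneity'});
disp(stats);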

GABOR

Features are extracted directly from gray-scale character images by Gabor filters which
are specially designed from statistical information of character structures. An adaptive
sigmoid function is applied to the outputs of Gabor filters to achieve better performance
on low-quality images.
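
A minimal sketch of Gabor filtering with the toolbox function imgaborfilt (assuming cameraman.tif; the wavelength and orientation are illustrative), as an alternative to the hand-built filter bank shown later in this experiment:

I = imread('cameraman.tif');
[mag, phase] = imgaborfilt(I, 4, 90);   % wavelength 4 pixels, orientation 90 degrees
imshowpair(mag, phase, 'montage');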

WAVELET
These wavelet coefficients are used in extracting features from hyperspectral data. The wavelet transform is used to decompose the signal or pixel vector of hyperspectral data into different frequency components, and depending upon the frequency components they are used in further processing.
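
A minimal sketch of wavelet-based features, using the energies of the four first-level subbands as a simple feature vector (assuming cameraman.tif):

I = im2double(imread('cameraman.tif'));
[cA, cH, cV, cD] = dwt2(I, 'db2');                                        % one decomposition level
features = [sum(cA(:).^2), sum(cH(:).^2), sum(cV(:).^2), sum(cD(:).^2)];  % subband energies
disp(features);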

MATLAB CODE:

TEXTURE

clc
clear all;
close all;
warning off
I = imread('GFG.png');                        % Read the image
imshow(I);                                    % plotting the image in figure
title('Original Image');
E = rescale(entropyfilt(I));
figure;
imshow(E);
title('Result of rescale(Entropy Filtering)');
figure;
imhist(E);                                    % plotting the histogram of the rescaled image
binary_img = imbinarize(E, 0.7);
figure;
imshow(binary_img);                           % present the binary image after thresholding
figure;
area_opened = bwareaopen(binary_img, 1000);
imshow(area_opened)

OUTPUT:

WAVELET EXTRACTION:

clear all;
k = input('Enter the file name', 's');
im = imread(k);
im1 = rgb2gray(im);
im1 = medfilt2(im1, [3 3]);
BW = edge(im1, 'sobel');
[imx, imy] = size(BW);
msk = [0 0 0 0 0;
       0 1 1 1 0;
       0 1 1 1 0;
       0 1 1 1 0;
       0 0 0 0 0];
B = conv2(double(BW), double(msk));
L = bwlabel(B, 8);
mx = max(max(L))
[r, c] = find(L == 17);
rc = [r c];
[sx, sy] = size(rc);
n1 = zeros(imx, imy);
for i = 1:sx
    x1 = rc(i, 1);
    y1 = rc(i, 2);
    n1(x1, y1) = 255;
end
figure, imshow(im);

OUTPUT:

GABOR EXTRACTION:

u = 5;                 % number of scales (value assumed; not given in the original record)
v = 8;                 % number of orientations (value assumed)
m = 39; n = 39;        % filter size in pixels (values assumed)
gaborArray = cell(u, v);
fmax = 0.25;
gama = sqrt(2);
eta = sqrt(2);
for i = 1:u
    fu = fmax/((sqrt(2))^(i-1));
    alpha = fu/gama;
    beta = fu/eta;
    for j = 1:v
        tetav = ((j-1)/v)*pi;
        gFilter = zeros(m, n);
        for x = 1:m
            for y = 1:n
                xprime = (x-((m+1)/2))*cos(tetav) + (y-((n+1)/2))*sin(tetav);
                yprime = -(x-((m+1)/2))*sin(tetav) + (y-((n+1)/2))*cos(tetav);
                gFilter(x, y) = (fu^2/(pi*gama*eta)) * exp(-((alpha^2)*(xprime^2) + (beta^2)*(yprime^2))) * exp(1i*2*pi*fu*xprime);
            end
        end
        gaborArray{i, j} = gFilter;
    end
end

OUTPUT:

RESULT:

Thus the texture, Gabor, and wavelet feature extraction was completed successfully.

EXP NO 7: IMAGE FUSION USING WAVELETS

DATE:

AIM: To perform image fusion using wavelet transform.

THEORY:

The goal of image fusion is to integrate complementary information from multisensor data such that the new images are more suitable for the purpose of human visual perception and for computer-processing tasks such as segmentation, feature extraction, and object recognition. This experiment uses an image fusion scheme which is based on the wavelet transform. The wavelet transforms of the input images are appropriately combined, and the new image is obtained by taking the inverse wavelet transform of the fused wavelet coefficients. An area-based maximum selection rule and a consistency verification step are used for feature selection.
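
A minimal sketch of wavelet-domain fusion with the toolbox function wfusimg, using the maximum-selection rule described above (assumption: the two source images are registered, grayscale, and of equal size; the file names are those used in the program below):

A = im2double(rgb2gray(imread('E:\ms.jpg')));     % multispectral image (converted to gray)
B = im2double(imread('E:\pan.jpg'));              % panchromatic image (assumed grayscale)
fused = wfusimg(A, B, 'db2', 2, 'max', 'max');    % max rule for approximation and details
imshow(fused, []);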

MATLAB CODE:

clc
clear all;
img = rand(4, 4, 3)
img = imread('E:\ms.jpg');
imshow(img);
img1 = img(:,:,1);
[m, n] = size(img1);
imshow(img1);
img2 = img(:,:,2);
imshow(img2);
img3 = img(:,:,3);
imshow(img3);
temp1 = reshape(img1', m*n, 1);
imshow(temp1);
temp2 = reshape(img2', m*n, 1);
imshow(temp2);
temp3 = reshape(img3', m*n, 1);
imshow(temp3);
I = [temp1 temp2 temp3];
imshow(I);
m1 = mean(I, 2);
imshow(m1);
temp = double(I);
for i = 1:3
    I1(:,i) = (temp(:,i) - m1);
end
a1 = double(I1);
imshow(I1);
a = a1';
covv = a*a';
[eigenvec, eigenvalue] = eig(covv);
abhi = eigenvalue;
eigenvalue = diag(eigenvalue);
[egn, index] = sort(-1*eigenvalue);
eigenvalue = eigenvalue(index);
eigenvec = eigenvec(:, index);
imshow(eigenvec);
pcaoutput = a1*eigenvalue;
vt = transpose(eigenvalue);
for i = 1:size(pcaoutput, 2)
    ima = reshape(pcaoutput(:,i)', n, m);
    ima = ima';
    imshow(ima, []);
end
hp = imread('E:\pan.jpg');
original = inv(abhi)*pcaoutput;
origin = transpose(original) + m1;
imshow(original);

OUTPUT:

INPUT IMAGE

FUSED IMAGE

RESULT: Thus the Image fusion using wavelet was verified successfully.

EXP NO 8 : SEGMENTING 3D IMAGE VOLUME USING K- MEANS CLUSTERING

DATE:

AIM: To segment a 3D image using K-means clustering in MATLAB.

THEORY:

Image segmentation is an important step in image processing, and it is needed whenever we want to analyze what is inside an image. For example, if we want to find whether there is a chair or a person inside an indoor image, we may need image segmentation to separate objects and analyze each object individually to check what it is. Image segmentation usually serves as the pre-processing step before pattern recognition, feature extraction, and compression of the image. Image segmentation is the classification of an image into different groups. Much research has been done in the area of image segmentation using clustering. There are different methods, and one of the most popular is the K-means clustering algorithm.
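
For a true 3D volume the toolbox function imsegkmeans3 can be used directly; a minimal sketch (assuming the mristack sample MRI volume shipped with the Image Processing Toolbox), as an alternative to the hand-written iteration in the program below:

load mristack;                                       % sample uint8 MRI volume 'mristack'
L = imsegkmeans3(mristack, 4);                       % k-means with 4 clusters over the volume
imshow(labeloverlay(mristack(:,:,10), L(:,:,10)));   % one slice with its cluster labels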

MATLAB CODE:

clc

close all

clear all

[im,map]=imread('pp1.bmp');

im=im2double(im);

[row,col]=size(im);

nc=4;

cs=rand(nc,1);

pcs=cs;

T=50;

t=0;

D=zeros(row,col,nc);

tsmld=[];

eps=1.e-5;

cmx=1;

while (t<T && cmx>eps)

for c=1:nc

D(:,:,c)=(im-cs(c)).^2;

end

[mv,ML]=min(D,[],3);

for c=1:nc

I=(ML==c);

cs(c)=mean(mean(im(I)));

end

cmx=max(abs(cs-pcs));

pcs=cs;

t=t+1;

tsmld=[tsmld; sum(mv(:))];

end

colors = hsv(nc);   % one display colour per cluster

sim=colors(ML,:);

sim=reshape(sim,row,col,3);

figure,subplot(1,2,1),imshow(im,map);

title('Input Image: pp1');

subplot(1,2,2);imshow(sim,map);

title('segmented Image:pp1')

figure;plot(tsmld,'*-b')

xlabel('Iteration'); ylabel('Energy');
title('K-means energy minimization - pp1');

OUTPUT:

RESULT:

Thus the segmentation of a 3D image volume using K-means clustering was verified successfully.

EXP NO 9: SEGMENTATION OF LUNGS FROM 3D CHEST SCAN

DATE:

AIM: To perform segmentation of lungs from 3D chest scan.

THEORY: 3D scanning is the process of analyzing a real-world object or environment to collect


data on its shape and possibly its appearance (e.g. color). The collected data can then be used to
construct digital 3D models.

A 3D scanner can be based on many different technologies, each with its own limitations,
advantages and costs. Many limitations in the kind of objects that can be digitised are still present.
For example, optical technology may encounter many difficulties with dark, shiny, reflective or
transparent objects. For example, industrial computed tomography scanning, structured-light 3D
scanners, LiDAR and Time Of Flight 3D Scanners can be used to construct digital 3D models,
without destructive testing.
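
The code below assumes a CT volume V and two orthogonal slices XY and XZ have already been loaded into the workspace; a minimal sketch of that set-up (the file name chestCT.mat is hypothetical, and the slice indices are taken from the code itself):

S  = load('chestCT.mat');          % hypothetical MAT-file containing the CT volume in variable V
V  = im2single(S.V);               % grayscale volume, e.g. 512x512x318 as implied below
XY = V(:,:,160);                   % axial slice used for the XY mask
XZ = squeeze(V(256,:,:));          % orthogonal slice used for the XZ mask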

MATLAB CODE:

imshow(XY, [], "Border", "tight")
imshow(XZ, [], "Border", "tight")
imageSegmenter(XY)
% BW is assumed to be exported from the Image Segmenter app session opened above
BW = imcomplement(BW);
BW = imclearborder(BW);
BW = imfill(BW, "holes");
radius = 3;
decomposition = 0;
se = strel("disk", radius, decomposition);
BW = imerode(BW, se);
maskedImageXY = XY;
maskedImageXY(~BW) = 0;
imshow(maskedImageXY)
BW = imbinarize(XZ);
BW = imcomplement(BW);
BW = imclearborder(BW);
BW = imfill(BW, "holes");
radius = 13;
decomposition = 0;
se = strel("disk", radius, decomposition);
BW = imerode(BW, se);
maskedImageXZ = XZ;
maskedImageXZ(~BW) = 0;
imshow(maskedImageXZ)
mask = false(size(V));
mask(:,:,160) = maskedImageXY;
mask(256,:,:) = mask(256,:,:) | reshape(maskedImageXZ, [1, 512, 318]);
V = histeq(V);
BW = activecontour(V, mask, 100, "Chan-Vese");
segmentedImage = V.*single(BW);
OUTPUT IMAGE:

RESULT: Thus the segmentation of lungs from 3D-chest scan was verified successfully.
