DIP Journal

This journal documents digital image processing practicals implemented in Scilab: 2D linear and circular convolution, cross and auto correlation, the DFT using the twiddle matrix, point-processing enhancement (brightness, contrast, negation, thresholding and gray-level slicing), run-length coding for reducing interpixel redundancy, histogram analysis, and edge detection.


Practical 1-

Title: 2D Linear Convolution

Aim: Accept two 2D matrices from the user and perform linear convolution between them.

Background:

Linear convolution is a mathematical operation that combines two functions, typically signals, to produce a third function that expresses how one of the original functions is modified by the other. It is an important operation in digital image processing used for a wide range of applications such as image filtering, feature extraction, image restoration, and image compression. It is the fundamental operation for applying a filter to an image, where the filter is a small matrix that can be used to enhance certain features or suppress noise in the image. Linear convolution is also used in image compression techniques like JPEG and MPEG and in pattern recognition algorithms to match images to templates.

In this code, we first prompt the user to enter the dimensions and elements of two matrices, x and h, using the input function. We then use the conv2 function to perform full linear convolution between the two matrices, so for input sizes m x n and a x b the output matrix has size (m + a - 1) x (n + b - 1). Finally, we display the resulting matrix con using the disp function.
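As a quick check of the full convolution size and values, the following minimal sketch uses illustrative hard-coded matrices instead of the user input used in the journal's code:

x = [1 2; 3 4];
h = [1 1; 1 1];
con = conv2(x, h)    // expected result: [1 3 2; 4 10 6; 3 7 4]

Each output element is the sum of products of the overlapping entries as the (flipped) kernel slides over x, so two 2x2 inputs produce a 3x3 result.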

Source Code:

clc;
disp('First Matrix')
m = input('Enter the number of rows of the first matrix: ')
n = input('Enter the number of columns of the first matrix: ')
for i = 1:m
    for j = 1:n
        x(i,j) = input('Enter the values: ')
    end
end
disp('Second Matrix')
a = input('Enter the number of rows of the second matrix: ')
b = input('Enter the number of columns of the second matrix: ')
for i = 1:a
    for j = 1:b
        h(i,j) = input('Enter the values: ')
    end
end
con = conv2(x, h)                        // full 2D linear convolution
disp('First Signal'); disp(x)
disp('Second Signal'); disp(h)
disp('Linear Convolution'); disp(con)

Output:
Practical 2-
Title: Circular Convolution

Aim: Accept two 2D signals (matrices) from the user and perform circular convolution between them.
Background:
Circular convolution is a convolution between two finite-length sequences in which they are treated as if they were periodic, so that no boundary effects arise. This code accepts two 2x2 matrices from the user and performs linear and then circular convolution between them. The for loops iterate through each element of the matrices and use the input function to accept user input. The conv2 function performs linear convolution between the two matrices; the result is stored in the lcon variable and displayed together with the input matrices using the disp function. The circular convolution is then obtained by wrapping the linear-convolution result: ccon1 = [lcon(:,1)+lcon(:,$), lcon(:,2)] folds the last column back onto the first, and ccon2 = [ccon1(1,:)+ccon1($,:); ccon1(2,:)] folds the last row back onto the first ($ is Scilab's index of the last element). Finally, the disp function is used to display the resulting matrix.
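A short worked example (with illustrative matrices, not the journal's input) shows the wrapping step:

x = [1 2; 3 4]; h = [1 1; 1 1];
lcon  = conv2(x, h)                             // [1 3 2; 4 10 6; 3 7 4]
ccon1 = [lcon(:,1) + lcon(:,$), lcon(:,2)]      // [3 3; 10 10; 7 7]
ccon2 = [ccon1(1,:) + ccon1($,:); ccon1(2,:)]   // [10 10; 10 10]

Every entry of the 2x2 circular convolution equals 10, the sum of all the products obtained when the all-ones kernel wraps around the periodic extension of x.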

Source Code:
clc; clear;
disp('First Matrix')
m = input('Enter the number of rows of the first matrix: ')
n = input('Enter the number of columns of the first matrix: ')
for i = 1:m
    for j = 1:n
        x(i,j) = input('Enter the values: ')
    end
end
disp('Second Matrix')
a = input('Enter the number of rows of the second matrix: ')
b = input('Enter the number of columns of the second matrix: ')
for i = 1:a
    for j = 1:b
        h(i,j) = input('Enter the values: ')
    end
end
lcon = conv2(x, h)                              // full linear convolution
disp('First Signal'); disp(x)
disp('Second Signal'); disp(h)
disp('Linear Convolution'); disp(lcon)
ccon1 = [lcon(:,1) + lcon(:,$), lcon(:,2)]      // wrap the last column onto the first
ccon2 = [ccon1(1,:) + ccon1($,:); ccon1(2,:)]   // wrap the last row onto the first
disp('Addition of first and third column'); disp(ccon1)
disp('Circular Convolution'); disp(ccon2)

Output:
Practical 3 -
Title: Cross Correlation

Aim: 3A) Perform cross correlation of two signals.

Background:
Cross correlation is a technique used to measure the similarity between an image and a kernel or template by sliding the kernel over the image, computing the sum of products at each position, and storing the results in a new matrix, which is the cross-correlation output.

This code accepts two 2D matrices from the user and computes their cross correlation by means of linear convolution. The first for loop accepts the values of the first matrix and stores them in the variable 'x', while the second for loop accepts the values of the second matrix and stores them in the variable 'h'. The code then creates a new matrix 'h1' by flipping the rows of 'h' in reverse order using the Scilab end-index notation '$:-1:1', and another matrix 'h2' by flipping the columns of 'h1' in the same way, so that 'h2' is 'h' rotated by 180 degrees. Convolving with this flipped kernel undoes the flip that convolution itself performs, so the result is the cross correlation of 'x' with 'h'. Finally, the code applies the 'conv2' function to 'x' and 'h2', stores the result in the variable 'y', and displays it using the 'disp' function.

Overall, this code performs cross correlation between two 2D matrices in Scilab and can be used for various signal processing applications.
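As a small illustration of the flipping step (the matrices here are illustrative, not taken from the journal's run):

x  = [1 2; 3 4];
h  = [1 2; 3 4];
h1 = h($:-1:1, :)      // rows reversed:    [3 4; 1 2]
h2 = h1(:, $:-1:1)     // columns reversed: [4 3; 2 1]
y  = conv2(x, h2)      // cross correlation of x with h

Reversing the rows and then the columns rotates the kernel by 180 degrees, which is exactly what is needed to turn convolution into correlation.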

Source Code:
clc; clear;
disp('Enter a 2*2 first matrix')
for i = 1:2
    for j = 1:2
        x(i,j) = input('Enter a value: ')
    end
end
disp(x)
disp('Enter a 2*2 second matrix')
for i = 1:2
    for j = 1:2
        h(i,j) = input('Enter a value: ')
    end
end
disp(h)
h1 = h($:-1:1, :)       // reverse the rows of h
h2 = h1(:, $:-1:1);     // reverse the columns of h1
y = conv2(x, h2)        // convolution with the flipped kernel = cross correlation
disp('Cross Correlation'); disp(y)

Output:
Practical 3 B-
Title: Auto Correlation

Aim: Perform autocorrelation of a 2D signal.

Background:
Autocorrelation is the correlation of an image with itself as a function of an offset or lag.
In two and three dimensions, the lag has both distance and direction.
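For instance (using an illustrative matrix, not the journal's input), the autocorrelation of a 2x2 matrix can be obtained by convolving it with its 180-degree rotation, which is what the code below does in two flipping steps:

x  = [1 2; 3 4];
x2 = x($:-1:1, $:-1:1);    // x rotated by 180 degrees: [4 3; 2 1]
y  = conv2(x, x2)          // [4 11 6; 14 30 14; 6 11 4]

The result is symmetric about its centre, and the central value 30 = 1^2 + 2^2 + 3^2 + 4^2 is the zero-lag autocorrelation (the energy of the signal).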

Source Code:

clc; clear;
disp("Enter the matrix")
m = input("Number of rows: ")
n = input("Number of columns: ")
for i = 1:m
    for j = 1:n
        x(i,j) = input("Enter element: ")
    end
end
disp(x)
x1 = x($:-1:1, :)     // reverse the rows (swap first and last row)
disp(x1)
x2 = x1(:, $:-1:1)    // reverse the columns (swap first and last column)
disp(x2)
y = conv2(x, x2)      // convolution with the flipped matrix = autocorrelation
disp(y)

Output:
Practical 4-
Title: DFT using Twiddle Matrix
Aim: Compute the DFT of a 2D signal using the twiddle matrix.
Background:
The DFT twiddle matrix is needed to efficiently compute the frequency-domain representation of a discrete-time signal, which is important in various applications such as signal processing and image analysis. The source code generates a 4 x 4 Discrete Fourier Transform (DFT) twiddle matrix using the defining formula of the DFT.

The twiddle matrix w is generated using nested loops: the outer loop iterates over the rows and the inner loop over the columns, with both indices running from 0 to N-1. At each element the DFT formula w(i,j) = cos(2*%pi*i*j/N) - %i*sin(2*%pi*i*j/N), i.e. e^(-%i*2*%pi*i*j/N), is evaluated. The computed values are stored in the matrix w, which is displayed using the disp() function.

A 4 x 4 input matrix f is then defined, the transpose of w is computed using the ' (transpose) operator, and the 2D DFT is obtained by multiplying w, f and this transpose: F = w*f*wt. The resulting matrix F and the twiddle matrix are both displayed using the disp() function.
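For N = 4, this formula gives the following twiddle matrix (writing i for the imaginary unit %i):

W = [ 1    1    1    1
      1   -i   -1    i
      1   -1    1   -1
      1    i   -1   -i ]

Each entry is a power of e^(-i*2*%pi/4) = -i, the primitive fourth root of unity, so multiplying by W along the rows and columns of f transforms it into the frequency domain.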

Source Code:
clc; clear;

N = 4
for i = 0:N-1
    for j = 0:N-1
        w(i+1,j+1) = int(cos(2*%pi*i*j/N) - %i*sin(2*%pi*i*j/N))
    end
end
disp('Twiddle matrix'); disp(w)

f = [1,2,3,4; 5,6,7,8; 1,2,3,4; 5,6,7,8]
wt = w'              // ' is the conjugate transpose in Scilab
F = w*f*wt
disp('DFT using Twiddle matrix'); disp(F)

Output:
Practical 5-
Title: Image Enhancement – Point Processing
Practical 5 A- Brightness Enhancement
Aim: Perform Brightness Enhancement on images.
Background:
The brightness of an image depends on the values associated with its pixels. When changing the brightness of an image, a constant is added to or subtracted from the luminance of all sample values. The brightness of the image can be increased by adding a constant value to each and every pixel of the image; similarly, the brightness can be decreased by subtracting a constant value from each and every pixel of the image.

(a) Increasing the Brightness of an Image: A simple method to increase the brightness of an image is to add a constant value to each and every pixel of the image. If f[m, n] represents the original image, then a new image g[m, n] is obtained by adding a constant k to each pixel of f[m, n]. This is represented by g[m, n] = f[m, n] + k.

Brightness Enhancement

(b) Decreasing the Brightness of an Image: The brightness of an image can be decreased by subtracting a constant k from all the pixels of the input image f[m, n]. This is represented by g[m, n] = f[m, n] - k.

Brightness Suppression
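Since an 8-bit image can only hold values from 0 to 255, a practical implementation usually clips the result of the addition or subtraction. A minimal sketch (assuming d is the grayscale image from the code below and using an illustrative offset k = 50):

k = 50;
b = uint8(min(max(double(d) + k, 0), 255));   // brighten and clip to [0, 255]
c = uint8(min(max(double(d) - k, 0), 255));   // darken and clip to [0, 255]

Without the clipping, a pixel such as 230 + 50 would fall outside the 8-bit range.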

Source Code:
clc; clear;
close();
a = imread("C:\Users\PC-0016\Desktop\DIP\duck.jpeg")
d = rgb2gray(a)
b = d + 50        // brightness enhancement
c = d - 50        // brightness suppression
imshow(b)
imwrite(d, "C:\Users\PC-0016\Desktop\DIP\duckbrightened.jpeg")
imwrite(b, "C:\Users\PC-0016\Desktop\DIP\duck1.jpeg")
imwrite(c, "C:\Users\PC-0016\Desktop\DIP\duck2.jpeg")

Input:

Output:
d=
b=

c=
Practical 5 B-
Title: Contrast Enhancement
Aim: Perform contrast enhancement on images.
Background:
Contrast stretching is used to increase the dynamic range of the gray levels in an image. It is required when an image has poor contrast, which normally occurs due to poor or non-uniform illumination, a non-linear dynamic range in the image sensor, or a wrong setting of the lens aperture. The technique increases the contrast of an image by making dark regions darker and bright regions brighter; the idea behind contrast stretching is to increase the dynamic range of the gray levels in the image being processed.

A typical transformation used for contrast stretching is piecewise linear; the locations of the points (r1, s1) and (r2, s2) control the shape of the transformation.
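A minimal sketch of such a piecewise-linear stretch is given below; the control points r1, s1, r2, s2, the variable names and the assumption that a is a grayscale image are illustrative choices, not part of the journal's code:

r1 = 70;  s1 = 20;       // compress the dark range below r1
r2 = 180; s2 = 235;      // compress the bright range above r2
r = double(a);
g = zeros(r);
g(r < r1) = r(r < r1) * s1 / r1;
g(r >= r1 & r <= r2) = s1 + (r(r >= r1 & r <= r2) - r1) * (s2 - s1) / (r2 - r1);
g(r > r2) = s2 + (r(r > r2) - r2) * (255 - s2) / (255 - r2);
g = uint8(g);            // stretched image

Pixels between r1 and r2 are mapped onto the wider range [s1, s2], which is what stretches the mid-gray contrast.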

Source Code:
clc; clear;
close();
a = imread("C:\Users\PC-0016\Desktop\DIP\abc.jpeg")
b = double(a)*0.5; b = uint8(b);    // scale down: lower contrast
c = double(a)*1.4; c = uint8(c);    // scale up: higher contrast
imwrite(b, "C:\Users\PC-0016\Desktop\DIP\ab2.jpeg")
imwrite(c, "C:\Users\PC-0016\Desktop\DIP\ab3.jpeg")

Input:

Output:
ab2 =
ab3 =
Practical 5 C-
Title: Image Negative
Aim: Perform image negation on images, mapping dark values to light and light values to dark.
Background:
The simplest operation in image processing is to compute the negative of an image. It is done by reversing the pixel values, from black to white and from white to black; the intensity of the output image decreases as the intensity of the input image increases. The transformation is b = 255 - a, where a is the original image and b is the processed negative image. Each pixel of the image is subtracted from 255, so the resulting negative depends directly on the original pixel values.
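For example, with this mapping a dark pixel of value 40 becomes 255 - 40 = 215, while a bright pixel of value 200 becomes 255 - 200 = 55, so dark and bright regions exchange roles.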

Graphical representation of Negation

Original image Negative image


Code:
clc; clear;
close();
a = imread("C:\Users\PC-0016\Desktop\DIP\bird.jpeg")
b = 255 - a;       // image negative
imwrite(b, "C:\Users\PC-0016\Desktop\DIP\bird2.jpeg")

Input:

Output:
Practical 6-
Practical 6 A- Title: Image Thresholding
Aim: Perform image thresholding.
Background:
It is a process of extracting the part of an image which contains information. In this transformation, one threshold level t is set; pixel values below the threshold are taken as 0 and values at or above it are taken as 255. Consider an input image a with pixel locations (i, j):

If a(i, j) < t, then b(i, j) = 0; else b(i, j) = 255.

Here a and b are the original and processed images respectively, (i, j) are pixel locations in the image, and t is the thresholding parameter.
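The same rule can also be written without explicit loops. A minimal sketch (an alternative formulation, assuming a holds the grayscale image and t the chosen threshold) is:

b = uint8(255 * bool2s(double(a) >= t));   // 255 where a(i,j) >= t, 0 elsewhere

bool2s converts the boolean comparison matrix into 0/1 values, which are then scaled to 0 or 255.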

Original image Processed image

Source Code:
clc; clear;
a = imread("C:\Users\PC-0016\Desktop\DIP\man.jpeg")
[m,n] = size(a);
t = input('Enter threshold: ');
for i = 1:m
    for j = 1:n
        if (a(i,j) < t)
            b(i,j) = 0;
        else
            b(i,j) = 255;
        end
    end
end
imwrite(b, "C:\Users\PC-0016\Desktop\DIP\man2.jpeg")
Input:
Output:
When threshold=9
Practical 6 B-
Title: Gray Level Slicing - without background
Aim: Perform gray-level slicing without background on images.
Background:
In gray-level slicing without background, the objective is to display a high value for the range of gray levels of interest and to suppress all other gray levels to 0; in the variant with background, the other areas would instead retain their original gray-level values.

Gray-level slicing without background

Results of gray-level slicing without preserving background
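For example, if the largest gray level in the image is L = 240, the code below selects the slice [a, b] = [round(240/3), round(240/2)] = [80, 120]; pixels whose value lies in that band are set to 255 and all others to 0.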

Source Code:
clc; clear;
x = imread("C:\Users\PC-0016\Desktop\DIP\bw.jpeg")
[m,n] = size(x)
L = max(x);          // maximum gray level in the image
a = round(L/3);      // lower limit of the slice
b = round(L/2)       // upper limit of the slice
for i = 1:m
    for j = 1:n
        if (x(i,j) >= a & x(i,j) <= b)
            z(i,j) = 255;
        else
            z(i,j) = 0;
        end
    end
end
imwrite(z, "C:\Users\PC-0016\Desktop\DIP\bw2.jpeg")

Input:
Output:
Practical 7-
Title: Image Compression - Reducing Interpixel Redundancy using Run-Length Coding
Aim: Reduce interpixel redundancy using run-length coding.
Background:
Image compression is the process of reducing the size of an image file without significantly affecting its visual quality, while preserving important features. It is important because it helps in reducing storage space requirements and transmission time, making it easier and faster to store and transmit images.

Run-Length Encoding (RLE) is a lossless data compression technique used in digital image processing.

The program takes a string input and loops through each character in the string, counting the number of consecutive characters that are the same. When the character changes, it appends the count and the character to a new string s, then resets the count to 1.

The resulting string s represents the compressed data using fewer characters than the original string (whenever long runs are present), resulting in reduced interpixel redundancy and improved compression efficiency.

The output of the program is the compressed string s.
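For example, encoding the string aaabccdd with this scheme produces 3a 1b 2c 2d: three a's, one b, two c's, and two d's. Runs of repeated symbols shrink, while isolated symbols grow to two characters, which is why RLE pays off mainly for data with long runs (such as binary document images).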

Source Code:
clc; clear;
close();
s = " ";
str = input("Enter the data: ", "string")
count = 1
for i = 1:length(str)
    if (part(str,i) == part(str,i+1))    // same as the next character: extend the run
        count = count + 1
    else                                 // run ended: append "<count><character>"
        s = s + " " + string(count) + part(str,i)
        count = 1
    end
end
disp(s)

Output:

Practical 8-
Title: Histogram
Aim: Perform histogram equalisation on images for finding the most frequent intensity values.
Background:
Histogram manipulation basically modifies the histogram of an input image so as to improve the visual quality of the image. In order to understand histogram manipulation, it is necessary to have some basic knowledge about the histogram of an image. The following section gives a basic idea about histograms of an image and the histogram-equalisation technique used to improve the visual quality of an image.

(a) Histogram: The histogram of an image is a plot of the number of occurrences of gray levels in the image against the gray-level values. The histogram provides a convenient summary of the intensities in an image, but it is unable to convey any information regarding spatial relationships between pixels. The histogram provides insight into image contrast and brightness:

1. The histogram of a dark image will be clustered towards the lower gray levels.

2. The histogram of a bright image will be clustered towards the higher gray levels.

3. For a low-contrast image, the histogram will not be spread equally, that is, the histogram will be narrow.

4. For a high-contrast image, the histogram will have an equal spread across the gray levels. Image brightness may be improved by modifying the histogram of the image.

(b) Histogram Equalisation: Equalisation is a process that attempts to spread out the gray levels in an image so that they are evenly distributed across their range. Histogram equalisation reassigns the brightness values of pixels based on the image histogram; it is a technique where the histogram of the resultant image is as flat as possible. Histogram equalisation provides more visually pleasing results across a wider range of images.
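As a hedged illustration of the equalisation mapping described in (b) above (this is the standard textbook formula; it is not implemented in the source code below, which only plots histograms): each gray level r_k is reassigned to

s_k = round( (L - 1) * (n_0 + n_1 + ... + n_k) / n )

where n_j is the number of pixels with gray level j, n is the total number of pixels, and L is the number of gray levels. In other words, the new level is the scaled cumulative histogram evaluated at the old level, which spreads frequently occurring levels apart and flattens the histogram.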
Source Code:
clc; clear;
close();
image = imread("C:\Users\PC-0016\Desktop\DIP\duck.jpeg")
red   = image(:,:,1)
green = image(:,:,2)
blue  = image(:,:,3)
[yRed,   x] = imhist(red)
[yGreen, x] = imhist(green)
[yBlue,  x] = imhist(blue)
plot(x, yRed, 'Red', x, yGreen, 'Green', x, yBlue, 'Blue')

Input:
Output:

Practical 9-
Title: Edge Detection
Aim: Perform Edge detection to find edges of objects
within images.
Background:
Edge detection is the process of finding meaningful transitions in an image. It is one of the central tasks of the lower levels of image processing. The points where sharp changes in brightness occur typically form the border between different objects. These points can be detected by computing intensity differences in local image regions; that is, the edge-detection algorithm should look for a neighbourhood with strong signs of change. Most edge detectors work by measuring the intensity gradient at a point in the image.

Importance of Edge Detection - Edge detection is a problem of fundamental importance in image analysis. The purpose of edge detection is to identify areas of an image where a large change in intensity occurs. These changes are often associated with some physical boundary in the scene from which the image is derived. In typical images, edges characterise object boundaries and are useful for segmentation, registration and identification of objects in a scene.
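As an illustration of the gradient measurement mentioned above, the Sobel operator (one of the detectors applied in the code below) estimates the gradient by convolving the image with two 3x3 kernels, one sensitive to horizontal and one to vertical intensity changes:

Gx = [-1  0  1; -2  0  2; -1  0  1]
Gy = [-1 -2 -1;  0  0  0;  1  2  1]

The gradient magnitude at each pixel is sqrt(gx^2 + gy^2), where gx and gy are the responses to the two kernels; pixels where this magnitude exceeds a threshold are marked as edge points. The Prewitt operator has the same structure with all non-zero weights equal to 1.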

Source Code:
clc; clear;
close();
a = imread("C:\Users\PC-0016\Desktop\DIP\abc.jpeg")
b = rgb2gray(a);
c = edge(b, 'sobel');
d = edge(b, 'prewitt');
e = edge(b, 'log');
f = edge(b, 'canny');
imwrite(c, "C:\Users\PC-0016\Desktop\DIP\abcc.jpeg")
imwrite(d, "C:\Users\PC-0016\Desktop\DIP\abcd.jpeg")
imwrite(e, "C:\Users\PC-0016\Desktop\DIP\abce.jpeg")
imwrite(f, "C:\Users\PC-0016\Desktop\DIP\abcf.jpeg")
Input:

Output:
Sobel -
Prewitt –

Log –
Canny –
