NNDL File 1-6

This lab file documents six neural-network experiments: Experiment 1 implements a perceptron for the AND function; Experiment 2 implements AND and OR gates using perceptrons; Experiment 3 implements a full counterpropagation network; Experiment 4 is a MATLAB script with four functions for addition, subtraction, multiplication and division; Experiment 5 covers heteroassociative, autoassociative and bidirectional associative memory (BAM) nets; Experiment 6 implements a discrete Hopfield network.


TABLE OF CONTENTS

S. No.   Experiment                                                                                     Date   Remarks
1        Write a Program to implement Perceptron.
2        Write a Program to implement AND & OR Gates using Perceptron.
3        Write a Program to implement a full counterpropagation network.
4        Write a MATLAB Script containing four functions: Addition, Subtraction, Multiplication and Division.
5        Associative memory networks: heteroassociative net, autoassociative nets, and BAM (parts a-d).
6        Discrete Hopfield network.

Experiment-1

Objective: Write a Program to implement Perceptron.


Code:
% Perceptron learning rule for the bipolar AND function
clear;
clc;
x = [1 1 -1 -1; 1 -1 1 -1];   % input patterns (one per column), bipolar
t = [1 -1 -1 -1];             % target outputs for AND
w = [0 0];                    % initial weights
b = 0;                        % initial bias
alpha = input('enter learning rate=');
theta = input('enter threshold value=');
con = 1;
epoch = 0;
while con
    con = 0;
    for i = 1:4
        % net input for pattern i
        yin = b + x(1,i)*w(1) + x(2,i)*w(2);
        % bipolar step activation with dead zone [-theta, theta]
        if yin > theta
            y = 1;
        elseif yin >= -theta
            y = 0;
        else
            y = -1;
        end
        % update weights and bias whenever output differs from target
        if y ~= t(i)
            con = 1;
            for j = 1:2
                w(j) = w(j) + alpha*t(i)*x(j,i);
            end
            b = b + alpha*t(i);
        end
    end
    epoch = epoch + 1;   % count training epochs
end
disp('Perceptron For AND function');
disp('Final weight matrix');
disp(w);
disp('Final Bias');
disp(b);
OUTPUT:
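As a quick sanity check (not part of the original listing), the trained weights can be applied to all four bipolar patterns. This sketch assumes the script above has just run, so w, b, theta, x and t are still in the workspace:

% Hypothetical verification pass over the four bipolar AND patterns
for i = 1:4
    yin = b + x(:,i)'*w';                 % net input for pattern i
    y = (yin > theta) - (yin < -theta);   % bipolar step with dead zone
    fprintf('x = (%2d, %2d) -> y = %2d (target %2d)\n', x(1,i), x(2,i), y, t(i));
end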
Experiment-2

Objective: Write a Program to implement AND & OR Gates using Perceptron.


Code (AND ):

import numpy as np

class Perceptron:
    def __init__(self, num_inputs, learning_rate=0.01, epochs=100):
        self.learning_rate = learning_rate
        self.epochs = epochs
        self.weights = np.zeros(num_inputs + 1)  # +1 for the bias
        self.errors = []

    def predict(self, inputs):
        summation = np.dot(inputs, self.weights[1:]) + self.weights[0]
        return 1 if summation > 0 else 0

    def train(self, training_data, labels):
        for _ in range(self.epochs):
            total_error = 0
            for inputs, label in zip(training_data, labels):
                prediction = self.predict(inputs)
                error = label - prediction
                # accumulate the error magnitude so misclassifications do not cancel out
                total_error += abs(error)
                self.weights[1:] += self.learning_rate * error * inputs
                self.weights[0] += self.learning_rate * error
            self.errors.append(total_error)

if __name__ == "__main__":
    # Truth table for the AND gate
    training_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    labels = np.array([0, 0, 0, 1])

    perceptron = Perceptron(num_inputs=2, learning_rate=0.1, epochs=100)
    perceptron.train(training_data, labels)

    test_inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    for inputs in test_inputs:
        prediction = perceptron.predict(inputs)
        print(f"Input: {inputs}, Prediction: {prediction}")
OUTPUT:
Code (OR ):
import numpy as np

class Perceptron:
    def __init__(self, num_inputs, learning_rate=0.01, epochs=100):
        self.learning_rate = learning_rate
        self.epochs = epochs
        self.weights = np.zeros(num_inputs + 1)  # +1 for the bias
        self.errors = []

    def predict(self, inputs):
        summation = np.dot(inputs, self.weights[1:]) + self.weights[0]
        return 1 if summation > 0 else 0

    def train(self, training_data, labels):
        for _ in range(self.epochs):
            total_error = 0
            for inputs, label in zip(training_data, labels):
                prediction = self.predict(inputs)
                error = label - prediction
                # accumulate the error magnitude so misclassifications do not cancel out
                total_error += abs(error)
                self.weights[1:] += self.learning_rate * error * inputs
                self.weights[0] += self.learning_rate * error
            self.errors.append(total_error)

if __name__ == "__main__":
    # Truth table for the OR gate (only the labels differ from the AND listing)
    training_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    labels = np.array([0, 1, 1, 1])

    perceptron = Perceptron(num_inputs=2, learning_rate=0.1, epochs=100)
    perceptron.train(training_data, labels)

    test_inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    for inputs in test_inputs:
        prediction = perceptron.predict(inputs)
        print(f"Input: {inputs}, Prediction: {prediction}")
OUTPUT:
Experiment-3

Objective: Write a Program to implement a full counterpropagation network.
Code:
% One training step of a full counterpropagation network (instar phase)
v = [0.6 0.2; 0.6 0.2; 0.2 0.6; 0.2 0.6];   % weights from the x-layer to the cluster units
w = [0.4 0.3; 0.4 0.3];                     % weights from the y-layer to the cluster units
x = [0 1 1 0];                              % input vector
y = [1 0];                                  % target vector
alpha = 0.3;                                % learning rate
% squared Euclidean distance of the pair (x, y) from each cluster unit
for j = 1:2
    D(j) = 0;
    for i = 1:4
        D(j) = D(j) + (x(i) - v(i,j))^2;
    end
    for k = 1:2
        D(j) = D(j) + (y(k) - w(k,j))^2;
    end
end
% winner-take-all: pick the closest cluster unit J
for j = 1:2
    if D(j) == min(D)
        J = j;
    end
end
disp('After one step the weight matrices are');
v(:,J) = v(:,J) + alpha*(x' - v(:,J))
w(:,J) = w(:,J) + alpha*(y' - w(:,J))
OUTPUT:
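The listing above performs a single instar update for the winning unit J. As an illustrative continuation (not in the original program; it assumes v, w, x, y, alpha and J are still in the workspace), repeating the same update drives the winning columns of v and w toward the training pair:

% Hypothetical continuation: iterate the instar update for winner J
for step = 1:20
    v(:,J) = v(:,J) + alpha*(x' - v(:,J));
    w(:,J) = w(:,J) + alpha*(y' - w(:,J));
end
disp('Winning columns after repeated updates (close to x and y)');
disp(v(:,J)');
disp(w(:,J)');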
Experiment-4

Objective: Write a MATLAB Script containing four functions: Addition, Subtraction, Multiplication and Division.

% Four arithmetic operations as local functions in a script.
% Note: MATLAB requires local functions in a script to appear after
% all other script code (supported from R2016b onward).

p = input('Enter the value of first number ');
q = input('Enter the value of second number ');

fprintf("Addition of two numbers is %g \n", ADD(p,q));
fprintf("Subtraction of two numbers is %g \n", SUB(p,q));
fprintf("Multiplication of two numbers is %g \n", MULTIPLY(p,q));
fprintf("Division of two numbers is %g \n", DIVIDE(p,q));   % %g so non-integer quotients print correctly

function add = ADD(a, b)
    add = a + b;
end

function sub = SUB(a, b)
    sub = a - b;
end

function multiply = MULTIPLY(a, b)
    multiply = a * b;
end

function divide = DIVIDE(a, b)
    divide = a / b;
end

OUTPUT:
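Assuming the script above is saved as arith_ops.m (a hypothetical file name), a command-window session would look like the following; the printed values follow directly from the fprintf calls:

>> arith_ops
Enter the value of first number 12
Enter the value of second number 4
Addition of two numbers is 16
Subtraction of two numbers is 8
Multiplication of two numbers is 48
Division of two numbers is 3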
Experiment-5(a)

Objective: Write an M-file to calculate the weights for the following patterns using a heteroassociative neural net for mapping four input vectors to two output vectors.

S1 S2 S3 S4 T1 T2
1 1 0 0 1 0
1 0 1 0 1 0
1 1 1 0 0 1
0 1 1 0 0 1

%Heteroassociative neural net for mapping input vectors to output vectors
clc;
clear;
x = [1 1 0 0; 1 0 1 0; 1 1 1 0; 0 1 1 0];   % input patterns S (one per row)
t = [1 0; 1 0; 0 1; 0 1];                   % target patterns T (one per row)
w = zeros(4,2);
% Hebbian outer-product learning: w is the sum of s'*t over all pairs
for i = 1:4
    w = w + x(i,:)'*t(i,:);
end
disp('weight matrix');
disp(w);
OUTPUT:
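A recall check (not in the original listing; it assumes w, x and t are still in the workspace) presents each input row and thresholds the net input. With binary Hebbian weights, recall of correlated patterns need not be perfect:

% Hypothetical recall test for the heteroassociative net
for i = 1:4
    tin = x(i,:)*w;           % net input to the two output units
    trec = double(tin > 0);   % binary step activation
    fprintf('input %d: recalled [%d %d], target [%d %d]\n', i, trec(1), trec(2), t(i,1), t(i,2));
end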
Experiment-5(b)

Objective: Write a MATLAB program to find the weight matrix of an autoassociative net to store the vector (1 1 -1 -1). Test the response of the network by presenting the same pattern, and report whether it is a known or unknown vector.

%Autoassociative net to store the vector
clc;
clear;
x = [1 1 -1 -1];   % stored bipolar vector
w = x'*x;          % Hebbian outer-product weight matrix
yin = x*w;         % net input when the stored vector is presented
for i = 1:4
    if yin(i) > 0
        y(i) = 1;
    else
        y(i) = -1;
    end
end
disp('Weight matrix');
disp(w);
if isequal(x, y)
    disp('The vector is a known vector');
else
    disp('The vector is an unknown vector');
end
OUTPUT:
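As an extra test (not in the original listing; it assumes w and x remain in the workspace), flipping one component of the stored vector still recalls it, which is the point of an autoassociative memory:

% Hypothetical test with the first component flipped
xt = [-1 1 -1 -1];
yin = xt*w;
yr = (yin > 0) - (yin < 0);   % bipolar threshold
if isequal(yr, x)
    disp('Noisy input is mapped back to the stored vector');
end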
Experiment-5(c)

Objective: Write an M-file to store the vectors (-1 -1 -1 -1) and (-1 -1 1 1) in an autoassociative net. Find the weight matrix. Test the net with (1 1 1 1) as input.

clc;
clear;
x = [-1 -1 -1 -1; -1 -1 1 1];   % two stored bipolar vectors (one per row)
t = [1 1 1 1];                  % test input
w = zeros(4,4);
% Hebbian outer-product learning over both stored vectors
for i = 1:2
    w = w + x(i,:)'*x(i,:);
end
yin = t*w;   % net input for the test vector
for i = 1:4
    if yin(i) > 0
        y(i) = 1;
    else
        y(i) = -1;
    end
end
disp('The calculated weight matrix');
disp(w);
% the response is known only if it matches one of the stored vectors exactly
if isequal(y, x(1,:)) || isequal(y, x(2,:))
    disp('The vector is a known vector');
else
    disp('The vector is an unknown vector');
end
OUTPUT:
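The test vector (1 1 1 1) is reported as unknown. Presenting one of the stored vectors instead (a hypothetical extra check, assuming w and x are still in the workspace) shows that it is recognized:

% Hypothetical check with the second stored vector as input
yin = x(2,:)*w;
yr = (yin > 0) - (yin < 0);   % bipolar threshold
if isequal(yr, x(2,:))
    disp('Stored vector is recognized as known');
end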
Experiment-5(d)

Objective: Write a MATLAB program to find the weight matrix in bipolar form for the Bidirectional Associative Memory (BAM) network based on the following binary input-output pairs.

s(1) = (1 1 0)   t(1) = (1 0)
s(2) = (1 0 1)   t(2) = (0 1)

%Bidirectional Associative Memory neural net
clc;
clear;
s = [1 1 0; 1 0 1];   % binary input patterns (one per row)
t = [1 0; 0 1];       % binary target patterns (one per row)
x = 2*s - 1           % inputs converted to bipolar form (echoed)
y = 2*t - 1           % targets converted to bipolar form (echoed)
w = zeros(3,2);
% Hebbian outer-product learning over the bipolar pairs
for i = 1:2
    w = w + x(i,:)'*y(i,:);
end
disp('The calculated weight matrix');
disp(w);

OUTPUT:
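To see the bidirectional recall, a sketch (not in the original listing; it assumes w, x and y are still in the workspace) applies the weight matrix in both directions. By the usual BAM convention, a zero net input leaves a unit unchanged:

% Hypothetical recall check in both directions
for i = 1:2
    yrec = (x(i,:)*w > 0) - (x(i,:)*w < 0);   % forward pass: x-layer -> y-layer
    xin  = y(i,:)*w';                         % backward pass net input
    xrec = (xin > 0) - (xin < 0);
    xrec(xin == 0) = x(i, xin == 0);          % zero net input keeps the old activation
    fprintf('pair %d: y recalled ok = %d, x recalled ok = %d\n', i, isequal(yrec, y(i,:)), isequal(xrec, x(i,:)));
end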
Experiment-6

Objective: Write a MATLAB program to store the vector (1 1 1 -1). Find the weight matrix with no self-connections. Test it using a discrete Hopfield net with mistakes in the first and second components of the stored vector, i.e. (0 0 1 0) [since the discrete Hopfield net uses binary vectors]. The given pattern in binary form is (1 1 1 0).

clc;
clear;
x = [1 1 1 0];               % stored pattern in binary form
tx = [0 0 1 0];              % test pattern with mistakes in components 1 and 2
w = (2*x' - 1)*(2*x - 1);    % outer product of the bipolar form of x
for i = 1:4
    w(i,i) = 0;              % no self-connections
end
con = 1;
y = [0 0 1 0];               % initial activations = test pattern
while con
    up = [4 2 1 3];          % fixed asynchronous update order
    for i = 1:4
        yin = tx(up(i)) + y*w(:,up(i));   % external input plus recurrent input
        if yin > 0
            y(up(i)) = 1;
        elseif yin < 0
            y(up(i)) = 0;    % binary Hopfield activation; yin == 0 keeps the old value
        end
    end
    if y == x
        disp('convergence has been obtained');
        disp('converged output');
        disp(y);
        con = 0;
    end
end
OUTPUT:
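The asynchronous updates settle because each one can only lower (or preserve) the network energy. As a hypothetical addition (assuming y, w and tx from the script are still in the workspace), the energy of the converged state can be evaluated:

% Hypothetical energy check: E = -0.5*sum_ij(w_ij*y_i*y_j) - sum_i(tx_i*y_i)
E = -0.5*y*w*y' - tx*y';
disp('Energy of the converged state');
disp(E);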
