Prac 02 ANN

This document presents a program that implements a backpropagation neural network to solve the XOR problem with binary inputs and outputs. The network has 2 input neurons, 2 hidden neurons, and 1 output neuron, and is trained for 10,000 epochs, adjusting the weights and biases via backpropagation.


Assignment No. : 02

Problem statement : Write a program to demonstrate a backpropagation network for the XOR function with binary input and output.
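
For reference, these are the update rules the code below implements (a standard delta-rule sketch, assuming squared error; here $t$ is the target, $y$ the predicted output, $h$ the hidden-layer output, $X$ the input matrix, and $\eta$ the learning rate):

$\sigma(x) = 1/(1+e^{-x})$, with derivative $\sigma'(x) = \sigma(x)(1-\sigma(x))$
$\delta_{out} = (t - y)\,y(1-y)$
$\delta_{hid} = (\delta_{out} W_{out}^{T}) \odot h(1-h)$
$W_{out} \leftarrow W_{out} + \eta\, h^{T}\delta_{out}, \quad W_{hid} \leftarrow W_{hid} + \eta\, X^{T}\delta_{hid}$

Note that $\sigma'$ is written in terms of the sigmoid's output, which is why sigmoid_derivative(x) in the code returns x * (1 - x).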

Code :
import numpy as np

# Sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of the sigmoid, expressed in terms of the sigmoid's
# output: if y = sigmoid(x), then dy/dx = y * (1 - y)
def sigmoid_derivative(x):
    return x * (1 - x)

# XOR truth table: four binary input pairs and their targets
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
expected_output = np.array([[0], [1], [1], [0]])

epochs = 10000
lr = 0.1  # learning rate
inputLayerNeurons, hiddenLayerNeurons, outputLayerNeurons = 2, 2, 1

# Random initial weights and biases, drawn uniformly from [0, 1)
hidden_weights = np.random.uniform(size=(inputLayerNeurons, hiddenLayerNeurons))
hidden_bias = np.random.uniform(size=(1, hiddenLayerNeurons))
output_weights = np.random.uniform(size=(hiddenLayerNeurons, outputLayerNeurons))
output_bias = np.random.uniform(size=(1, outputLayerNeurons))

print("Initial hidden weights: ", end='')
print(*hidden_weights)
print("Initial hidden biases: ", end='')
print(*hidden_bias)
print("Initial output weights: ", end='')
print(*output_weights)
print("Initial output biases: ", end='')
print(*output_bias)

# Training loop
for _ in range(epochs):
    # Forward pass: input -> hidden layer -> output layer
    hidden_layer_activation = np.dot(inputs, hidden_weights)
    hidden_layer_activation += hidden_bias
    hidden_layer_output = sigmoid(hidden_layer_activation)

    output_layer_activation = np.dot(hidden_layer_output, output_weights)
    output_layer_activation += output_bias
    predicted_output = sigmoid(output_layer_activation)

    # Backpropagation: compute the error deltas for each layer
    error = expected_output - predicted_output
    d_predicted_output = error * sigmoid_derivative(predicted_output)

    error_hidden_layer = d_predicted_output.dot(output_weights.T)
    d_hidden_layer = error_hidden_layer * sigmoid_derivative(hidden_layer_output)

    # Update weights and biases using the deltas, scaled by the learning rate
    output_weights += hidden_layer_output.T.dot(d_predicted_output) * lr
    output_bias += np.sum(d_predicted_output, axis=0, keepdims=True) * lr
    hidden_weights += inputs.T.dot(d_hidden_layer) * lr
    hidden_bias += np.sum(d_hidden_layer, axis=0, keepdims=True) * lr

print("Final hidden weights: ", end='')
print(*hidden_weights)
print("Final hidden bias: ", end='')
print(*hidden_bias)
print("Final output weights: ", end='')
print(*output_weights)
print("Final output bias: ", end='')
print(*output_bias)

print("\nOutput from neural network after 10,000 epochs: ", end='')
print(*predicted_output)
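
Since the targets are binary, the real-valued predictions can be thresholded at 0.5 to read them as 0/1 values. A small optional addition (not part of the original listing):

# Hypothetical follow-up: convert the real-valued predictions to binary
binary_output = (predicted_output > 0.5).astype(int)
print("Thresholded output:", *binary_output)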
Output (initial and final steps) :
Initial hidden weights: [0.31571098 0.26619232] [0.58857105 0.43421485]
Initial hidden biases: [0.90187987 0.37322037]
Initial output weights: [0.18711375] [0.61907452]
Initial output biases: [0.34937156]
Final hidden weights: [0.31557938 0.26532654] [0.5884045 0.4332745]
Final hidden bias: [0.9013447 0.37091509]
Final output weights: [0.17352057] [0.60746259]
Final output bias: [0.33190996]

Output from neural network after 10,000 epochs: [0.70038063] [0.7171184] [0.71074702] [0.72541255]
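
Because the weights and biases are initialized randomly, the exact numbers above will differ from run to run; after successful training the four predictions approach 0, 1, 1, 0. If a reproducible run is desired, the NumPy random generator can be seeded before the weights are created (an optional addition, not in the original listing):

np.random.seed(0)  # any fixed integer gives a repeatable initialization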
