DA 3 Lab DL 21BCE2687
REG NO-21BCE2687
Course Name- Deep Learning Lab
LAB-4 MULTI LAYER NN WITH GRADIENT DESCENT
Q-1)
Consider the weight, target and input as 0.0, 0.8 and 1.1 respectively. Repeat 4 iterations of the forward pass to determine the predicted output. For each forward pass, calculate and print the following:
(a) Predicted output (Y_pred)
(b) Error or Delta = (Y - Y_pred)
(c) Squared Error = (Y - Y_pred)^2
(d) Weight_Delta = Delta * input
(e) New Weight = old weight - Weight_Delta
Ans:
# Initialize variables
weight = 0.0
target = 0.8
input_val = 1.1

# Repeat the forward pass and weight update for 4 iterations
for i in range(4):
    Y_pred = weight * input_val         # (a) predicted output
    Delta = Y_pred - target             # (b) error, taken as (pred - target) so the update below reduces it
    Squared_Error = Delta ** 2          # (c) squared error
    Weight_Delta = Delta * input_val    # (d) gradient with respect to the weight
    new_weight = weight - Weight_Delta  # (e) weight update
    print(f"Iteration {i + 1}: Y_pred={Y_pred:.4f}, Delta={Delta:.4f}, Squared_Error={Squared_Error:.4f}, Weight_Delta={Weight_Delta:.4f}, New_Weight={new_weight:.4f}")
    weight = new_weight
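As a check on the first pass: Y_pred = 0.0 * 1.1 = 0.0, so Delta = 0.0 - 0.8 = -0.8, Squared_Error = 0.64, Weight_Delta = -0.88, and the updated weight is 0.0 - (-0.88) = 0.88. Note that the code computes Delta as (Y_pred - Y): with the (Y - Y_pred) order written in the question, the update weight - Weight_Delta would push the weight away from the target. Each iteration multiplies the error by (1 - input^2) = -0.21, so its magnitude shrinks by roughly a factor of five per pass.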
Output:
Q-2)
Assume that the neurons use the sigmoid activation function to perform the forward and backward pass on the network. Also assume that the actual output y is 0.5 and the learning rate is 1. Now perform backpropagation using the backpropagation algorithm.
Ans:
import numpy as np

class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        # Initialize weights with random values
        self.weights_input_hidden = np.random.randn(self.input_size, self.hidden_size)
        self.weights_hidden_output = np.random.randn(self.hidden_size, self.output_size)

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # x is already a sigmoid output, so the derivative is x * (1 - x)
        return x * (1 - x)

    def feedforward(self, X):
        self.hidden_activation = np.dot(X, self.weights_input_hidden)
        self.hidden_output = self.sigmoid(self.hidden_activation)
        self.output_activation = np.dot(self.hidden_output, self.weights_hidden_output)
        self.predicted_output = self.sigmoid(self.output_activation)
        return self.predicted_output

    def backward(self, X, y, learning_rate):
        # Compute the output layer error
        output_error = y - self.predicted_output
        output_delta = output_error * self.sigmoid_derivative(self.predicted_output)
        # Compute the hidden layer error
        hidden_error = np.dot(output_delta, self.weights_hidden_output.T)
        hidden_delta = hidden_error * self.sigmoid_derivative(self.hidden_output)
        # Update weights
        self.weights_hidden_output += learning_rate * np.dot(self.hidden_output.T, output_delta)
        self.weights_input_hidden += learning_rate * np.dot(X.T, hidden_delta)

    def train(self, X, y, epochs, learning_rate):
        for epoch in range(epochs):
            output = self.feedforward(X)
            # Perform backpropagation
            self.backward(X, y, learning_rate)
            if epoch % 4000 == 0:
                print(f"Epoch {epoch}, Loss: {np.mean((y - output) ** 2):.6f}")

    def predict(self, X):
        return self.feedforward(X)

# The example input X and hidden_size=2 are assumed (the question does not
# specify them); y = 0.5 and learning rate = 1 are taken from the question.
X = np.array([[0.35, 0.9]])
y = np.array([[0.5]])

nn = NeuralNetwork(input_size=2, hidden_size=2, output_size=1)
nn.train(X, y, epochs=10000, learning_rate=1)
predictions = nn.predict(X)
print("Predictions after training:")
print(predictions)
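One subtlety worth noting: sigmoid_derivative(x) above expects x to already be a sigmoid output, because sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)). A quick standalone numerical check of that identity (an illustrative sketch, not part of the original submission):

import numpy as np

z = 0.7
s = 1 / (1 + np.exp(-z))   # sigmoid(z)
analytic = s * (1 - s)     # the identity used by sigmoid_derivative
eps = 1e-6
numeric = ((1 / (1 + np.exp(-(z + eps)))) - (1 / (1 + np.exp(-(z - eps))))) / (2 * eps)
print(analytic, numeric)   # both are approximately 0.2217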
Output:
Q-3)
Construct a Feedback Network with backpropagation for the input X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float) and target y = np.array(([92], [86], [89]), dtype=float). Construct a neural network with inputSize = 2, outputSize = 1 and hiddenSize = 3. Train the network using backpropagation and test it.
Ans:
import numpy as np

class FeedbackNeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        # Initialize weights with random values
        self.weights_input_hidden = np.random.randn(self.input_size, self.hidden_size)
        self.weights_hidden_output = np.random.randn(self.hidden_size, self.output_size)
        self.bias_hidden = np.zeros((1, self.hidden_size))
        self.bias_output = np.zeros((1, self.output_size))

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        return x * (1 - x)

    def feedforward(self, X):
        self.hidden_activation = np.dot(X, self.weights_input_hidden) + self.bias_hidden
        self.hidden_output = self.sigmoid(self.hidden_activation)
        self.output_activation = np.dot(self.hidden_output, self.weights_hidden_output) + self.bias_output
        self.predicted_output = self.sigmoid(self.output_activation)
        return self.predicted_output

    def backward(self, X, y, learning_rate):
        # Compute the output layer error
        output_error = y - self.predicted_output
        output_delta = output_error * self.sigmoid_derivative(self.predicted_output)
        # Compute the hidden layer error
        hidden_error = np.dot(output_delta, self.weights_hidden_output.T)
        hidden_delta = hidden_error * self.sigmoid_derivative(self.hidden_output)
        # Update weights and biases
        self.weights_hidden_output += learning_rate * np.dot(self.hidden_output.T, output_delta)
        self.bias_output += learning_rate * np.sum(output_delta, axis=0, keepdims=True)
        self.weights_input_hidden += learning_rate * np.dot(X.T, hidden_delta)
        self.bias_hidden += learning_rate * np.sum(hidden_delta, axis=0, keepdims=True)

    def train(self, X, y, epochs, learning_rate):
        for epoch in range(epochs):
            output = self.feedforward(X)
            # Perform backpropagation
            self.backward(X, y, learning_rate)
            if epoch % 1000 == 0:
                print(f"Epoch {epoch}, Loss: {np.mean((y - output) ** 2):.6f}")

    def predict(self, X):
        return self.feedforward(X)

# Input and target from the question
X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)
y = np.array(([92], [86], [89]), dtype=float)
X = X / np.amax(X, axis=0)  # scale inputs to [0, 1] (scaling step assumed)
y = y / 100                 # scale target into (0, 1); rescaled back below

nn = FeedbackNeuralNetwork(input_size=2, hidden_size=3, output_size=1)
nn.train(X, y, epochs=10000, learning_rate=0.1)
output = nn.predict(X)
print("Predictions after training:")
print(output * 100)  # Rescale the output back to the original range
Output:
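Note: the sigmoid output layer can only produce values in (0, 1), so y is divided by 100 before training and the prediction is multiplied by 100 afterwards. Scaling X by its column-wise maximum (an assumed preprocessing step, in the same spirit) likewise keeps the hidden units out of saturation.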
Q-4)
Consider a housing price data CSV file consisting of 13 input features and 1 output feature, 'price'. Construct a neural network and perform the following tasks:
a. Load and pre-process the dataset
b. Visualize the distribution of the price values of the dataset using a frequency plot
c. Split the dataset into training and testing sets
d. Define a neural network using NumPy (do not use any other framework such as Keras or PyTorch)
e. Train the data and build the neural network model
f. Evaluate on the test set
g. Calculate the mean squared error and visualize the result
Ans:
a. Load and pre-process the dataset
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("C://Users//Hp//Downloads//housing (2).csv")
# Separate input features (X) and output feature (y)
X = data.iloc[:, :-1].values  # All columns except the last one as input features
y = data.iloc[:, -1].values   # Last column ('price') as the output feature

# Standardize the input features
scaler = StandardScaler()
X = scaler.fit_transform(X)
y = y.reshape(-1, 1)
y_max = y.max()
y = y / y_max  # Scale the target into (0, 1) for the sigmoid output (assumed step, mirroring Q-3)
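StandardScaler rescales each input feature to zero mean and unit variance, which keeps the weighted sums feeding the sigmoid hidden units in a range where the gradients do not vanish.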
b. Visualize the distribution of the price using a frequency plot
import matplotlib.pyplot as plt

plt.hist((y * y_max).ravel(), bins=30, color='blue', edgecolor='black')  # prices rescaled for display; bin count assumed
plt.xlabel('Price')
plt.ylabel('Frequency')
plt.show()
c. Split the dataset into training and testing
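A minimal sketch of the split using the train_test_split imported in step (a); the 80/20 ratio and random_state=42 are assumed values:

# test_size and random_state are assumed values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)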
d. Define a neural network using NumPy
class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        # Initialize weights with random values and biases with zeros
        self.weights_input_hidden = np.random.randn(self.input_size, self.hidden_size)
        self.weights_hidden_output = np.random.randn(self.hidden_size, self.output_size)
        self.bias_hidden = np.zeros((1, self.hidden_size))
        self.bias_output = np.zeros((1, self.output_size))

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        return x * (1 - x)

    def feedforward(self, X):
        self.hidden_activation = np.dot(X, self.weights_input_hidden) + self.bias_hidden
        self.hidden_output = self.sigmoid(self.hidden_activation)
        self.output_activation = np.dot(self.hidden_output, self.weights_hidden_output) + self.bias_output
        self.predicted_output = self.sigmoid(self.output_activation)
        return self.predicted_output

    def backward(self, X, y, learning_rate):
        # Compute the output layer error
        output_error = y - self.predicted_output
        output_delta = output_error * self.sigmoid_derivative(self.predicted_output)
        # Compute the hidden layer error
        hidden_error = np.dot(output_delta, self.weights_hidden_output.T)
        hidden_delta = hidden_error * self.sigmoid_derivative(self.hidden_output)
        # Update weights and biases
        self.weights_hidden_output += learning_rate * np.dot(self.hidden_output.T, output_delta)
        self.bias_output += learning_rate * np.sum(output_delta, axis=0, keepdims=True)
        self.weights_input_hidden += learning_rate * np.dot(X.T, hidden_delta)
        self.bias_hidden += learning_rate * np.sum(hidden_delta, axis=0, keepdims=True)

    def train(self, X, y, epochs, learning_rate):
        for epoch in range(epochs):
            output = self.feedforward(X)
            # Perform backpropagation
            self.backward(X, y, learning_rate)
            if epoch % 1000 == 0:
                print(f"Epoch {epoch}, Loss: {np.mean((y - output) ** 2):.6f}")

    def predict(self, X):
        return self.feedforward(X)

e. Train the data and build the neural network model
# hidden_size, epochs and learning_rate below are assumed values
nn = NeuralNetwork(input_size=X_train.shape[1], hidden_size=10, output_size=1)
nn.train(X_train, y_train, epochs=10000, learning_rate=0.1)
f. Evaluate on the test set
predictions = nn.predict(X_test)
g. Calculate the Mean Squared Error and Visualize the Result
from sklearn.metrics import mean_squared_error

# Rescale predictions and targets back to original price units
y_test_orig = y_test * y_max
pred_orig = predictions * y_max

mse = mean_squared_error(y_test_orig, pred_orig)
print("Mean Squared Error:", mse)

plt.plot(y_test_orig, label='Actual Prices')
plt.plot(pred_orig, label='Predicted Prices')
plt.legend()
plt.show()
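As an optional complementary view (beyond tasks a-g), a scatter of actual versus predicted prices makes systematic over- or under-prediction easier to spot; this sketch reuses y_test_orig and pred_orig from step (g):

plt.scatter(y_test_orig, pred_orig, alpha=0.6)
ideal = [y_test_orig.min(), y_test_orig.max()]
plt.plot(ideal, ideal, 'r--')  # perfect predictions would fall on this line
plt.xlabel('Actual Price')
plt.ylabel('Predicted Price')
plt.show()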