Lab Manual DL (New)
Ex-1 Understanding the Perceptron using the Iris Dataset
Aim: To implement a Python program for understanding the perceptron using the Iris plants dataset.
Algorithm:
1. Import the required libraries and load the Iris dataset.
2. The Iris dataset has three classes; we consider only two of them, Versicolor and Setosa.
3. Plot the data for two of the four features, assigning distinct colors to the data points so the classes can be differentiated.
4. Split the data into training and test sets so the results can be validated.
5. Initialize the weights randomly and set the bias to 1.
6. Define the hyperparameters: the learning rate and the number of epochs (one epoch is one full pass over the training set).
7. Train the perceptron with a for loop over the epochs.
8. Inside the loop, use a simple threshold function: if the output is greater than 0.5, predict 1, otherwise 0.
9. Compute the mean squared error (MSE), update the weights and bias, and record the validation accuracy.
10. Finally, plot the training loss and validation accuracy.
Program:
SEED = 2017
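# --- The original listing omits the data preparation and training loop; the
# --- sketch below is an assumed reconstruction whose names (hist_loss,
# --- hist_accuracy) match the plotting code that follows.
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split

iris = datasets.load_iris()
X = iris.data[:100, :2]          # Setosa and Versicolor only, first two features
y = iris.target[:100]
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=SEED)

np.random.seed(SEED)
weights = np.random.normal(size=X_train.shape[1])   # random initial weights
bias = 1                                            # bias initialised to 1
learning_rate = 0.1
n_epochs = 15

hist_loss, hist_accuracy = [], []
for epoch in range(n_epochs):
    # Threshold output: predict 1 if above 0.5, else 0
    output = np.where(X_train.dot(weights) + bias > 0.5, 1, 0)
    error = np.mean((y_train - output) ** 2)        # MSE
    # Update weights and bias from the prediction error
    weights -= learning_rate * X_train.T.dot(output - y_train) / len(y_train)
    bias -= learning_rate * np.mean(output - y_train)
    # Validation accuracy
    output_val = np.where(X_val.dot(weights) + bias > 0.5, 1, 0)
    hist_loss.append(error)
    hist_accuracy.append(np.mean(output_val == y_val))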
# We've saved the training loss and validation accuracy so that we can plot them
fig = plt.figure(figsize=(8, 4))
a = fig.add_subplot(1, 2, 1)
imgplot = plt.plot(hist_loss)
plt.xlabel('epochs')
a.set_title('Training loss')
a = fig.add_subplot(1, 2, 2)
imgplot = plt.plot(hist_accuracy)
plt.xlabel('epochs')
a.set_title('Validation Accuracy')
plt.show()
Output:
Result:
Ex-2 Understanding the Perceptron using Diabetes Dataset
Aim: To implement a Python program for understanding the perceptron using the Diabetes dataset.
Algorithm:
1. Import the required libraries and load the Diabetes dataset.
2. The Diabetes dataset has a binary outcome; we consider its two classes, 'yes' (diabetic) and 'no' (non-diabetic).
3. Plot the data for two of the features, assigning distinct colors to the data points so the classes can be differentiated.
4. Split the data into training and test sets so the results can be validated.
5. Initialize the weights randomly and set the bias to 1.
6. Define the hyperparameters: the learning rate and the number of epochs (one epoch is one full pass over the training set).
7. Train the perceptron with a for loop over the epochs.
8. Inside the loop, use a simple threshold function: if the output is greater than 0.5, predict 1, otherwise 0.
9. Compute the mean squared error (MSE), update the weights and bias, and record the validation accuracy.
10. Finally, plot the training loss and validation accuracy.
Program:
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
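# --- Assumed data loading and training steps (omitted in the original listing).
# --- The Pima Indians Diabetes CSV is assumed here ('diabetes.csv' is a
# --- hypothetical path); sklearn's load_diabetes is a regression dataset and
# --- does not fit this binary task.
import pandas as pd

data = pd.read_csv('diabetes.csv')
X = data.drop('Outcome', axis=1).values
y = data['Outcome'].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

clf = Perceptron(random_state=42)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

accuracy = accuracy_score(y_test, y_pred)
precision = precision_score(y_test, y_pred)
recall = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)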
print("Accuracy:", accuracy)
print("Precision:", precision)
print("Recall:", recall)
print("F1 Score:", f1)
OUTPUT:
Accuracy: 0.6629213483146067
Precision: 0.6470588235294118
Recall: 0.55
F1 Score: 0.5945945945945946
Result:
Ex-3 Building a single-layer neural network
Aim: To implement a single-layer neural network using a Python program.
Algorithm:
1. Import the necessary libraries, such as NumPy and Scikit-Learn.
2. Prepare the dataset for training and testing. This can include loading the data, splitting it into
training and testing sets, and preprocessing the data as needed.
3. Define the architecture of the neural network. This includes the number of input and output
neurons and the activation function to be used.
4. Initialize the weights and biases of the network randomly.
5. Define the forward propagation. This includes calculating the dot product of the input data and the weights, adding the biases, and passing the result through the activation function.
6. Define the backward propagation. This includes calculating the error, adjusting the weights and biases, and repeating this process for a number of epochs.
7. Use the trained network to make predictions on new data.
8. Finally, evaluate the performance of the network using metrics such as accuracy or mean squared error.
Program:
# Import libraries and dataset
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_circles
import matplotlib.pyplot as plt

SEED = 2017
# Assumed dataset (the original listing omits it): two concentric circles
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=SEED)
X = X+1
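# Assumed split and weight initialisation (omitted in the original listing);
# the hidden-layer size is illustrative
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=SEED)
n_hidden = 50
weights_hidden = np.random.normal(0.0, size=(X_train.shape[1], n_hidden))
weights_output = np.random.normal(0.0, size=(n_hidden,))
learning_rate = 1.0
n_epochs = 50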
for e in range(n_epochs):
    del_w_hidden = np.zeros(weights_hidden.shape)
    del_w_output = np.zeros(weights_output.shape)
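    # --- Remainder of the training loop (the original listing breaks off here;
    # --- this continuation and the sigmoid activations are assumptions)
    for x_, y_ in zip(X_train, y_train):
        # Forward pass
        hidden_input = np.dot(x_, weights_hidden)
        hidden_output = 1 / (1 + np.exp(-hidden_input))                     # sigmoid
        output = 1 / (1 + np.exp(-np.dot(hidden_output, weights_output)))
        # Backward pass: propagate the error and accumulate the updates
        error = y_ - output
        output_error = error * output * (1 - output)
        hidden_error = output_error * weights_output * hidden_output * (1 - hidden_output)
        del_w_output += output_error * hidden_output
        del_w_hidden += hidden_error * x_[:, None]
    # Apply the accumulated updates, averaged over the training set
    weights_output += learning_rate * del_w_output / X_train.shape[0]
    weights_hidden += learning_rate * del_w_hidden / X_train.shape[0]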
Output:
Result:
Ex-4 Building a Multi-Layer Neural Network
Aim: To implement a multi-layer neural network using a Python program.
Algorithm:
1. Import the necessary libraries, such as NumPy and Scikit-Learn.
2. Prepare the dataset for training and testing. This can include loading the data, splitting it into training and testing sets, and preprocessing the data as needed.
3. Define the architecture of the neural network. This includes the number of input and output neurons, the number of hidden layers, and the number of neurons in each hidden layer.
4. Initialize the weights and biases of the network randomly.
5. Define the forward propagation. This includes calculating the dot product of the input data and the weights, adding the biases, and passing the result through the activation function.
6. Define the backward propagation. This includes calculating the error, adjusting the weights and biases, and repeating this process for a number of epochs.
7. Use the trained network to make predictions on new data.
8. Finally, evaluate the performance of the network using metrics such as accuracy or mean squared error.
Program:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
SEED = 2017
# Assumed data source (not shown in the original listing): the UCI wine-quality CSV
data = pd.read_csv('winequality-red.csv', sep=';')
X = data.drop(['quality'], axis=1)
y = data['quality']
# Assumed split (omitted in the original listing)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=SEED)
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# Now, let's build our neural network by defining the network architecture
model = Sequential()
# Hidden layers (assumed; the original listing omits them and sizes are illustrative)
model.add(Dense(64, input_dim=X_train.shape[1], activation='relu'))
model.add(Dense(32, activation='relu'))
# Output layer
model.add(Dense(1, activation='linear'))

callbacks = [
    ModelCheckpoint('checkpoints/multi_layer_best_model.h5', monitor='val_accuracy',
                    save_best_only=True, verbose=0)
]
model.summary()
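# Assumed training step (the original listing omits it): compile and fit with
# the checkpoint callback so the best weights exist on disk
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, validation_split=0.2, epochs=100, batch_size=32,
          callbacks=callbacks, verbose=0)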
# We can now print the performance on the test set after loading the optimal weights:
best_model = model
best_model.load_weights('checkpoints/multi_layer_best_model.h5')
best_model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
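# Assumed evaluation step (omitted in the original listing)
score = best_model.evaluate(X_test, y_test, verbose=0)
print('Test loss (MSE):', score[0])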
Output:
Result:
Ex-5 Experiment with Activation Functions
Aim: To implement a Python program for experimenting with activation functions.
Algorithm:
1. Import the necessary libraries for your algorithms, such as NumPy and Matplotlib.
2. Load or create the dataset you want to apply the activation function to.
3. Define the function for the chosen activation function using NumPy.
4. Apply the activation function to your dataset using the function you just defined.
5. Visualize the activated data using Matplotlib to observe the effect of the activation function.
6. Repeat steps 2-5 for any other activation functions you wish to test on your dataset.
7. Depending on the context, you may want to use the activation function as part of a neural network. For that you can use a pre-built library such as TensorFlow or PyTorch.
Program:
# Import the libraries as follows
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.utils import to_categorical
from keras.callbacks import Callback
SEED = 2022
# Assumed data load (omitted in the original listing): MNIST via Keras
from keras.datasets import mnist
(X_train, y_train), (X_val, y_val) = mnist.load_data()
image = X_train[0]

# Show an example of each label and print the count per label
_ = plt.imshow(image, cmap='gray')
plt.show()
print(X_val)
print(y_val)
# Normalize data
X_train = X_train.astype('float32')/255.
X_val = X_val.astype('float32')/255.
# One-Hot-Encode labels
n_classes = 10
y_train = to_categorical(y_train, n_classes)
y_val = to_categorical(y_val, n_classes)
print(y_train)
class history_loss(Callback):
    # Keras callback that records the loss after every batch
    # (class header and on_train_begin reconstructed; the fragment shows only the tail)
    def on_train_begin(self, logs={}):
        self.losses = []
    def on_batch_end(self, batch, logs={}):
        batch_loss = logs.get('loss')
        self.losses.append(batch_loss)

n_epochs = 10
batch_size = 256
validation_split = 0.2
history_sigmoid = history_loss()
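# --- Assumed remainder (the original listing stops after creating the callback):
# --- build a small MNIST classifier twice, once with sigmoid and once with ReLU,
# --- and compare the per-batch losses recorded by history_loss. Layer sizes are
# --- illustrative.
from keras.models import Sequential
from keras.layers import Dense

X_train_flat = X_train.reshape(len(X_train), -1)   # flatten images to vectors
X_val_flat = X_val.reshape(len(X_val), -1)

def build_model(activation):
    model = Sequential()
    model.add(Dense(350, input_dim=X_train_flat.shape[1], activation=activation))
    model.add(Dense(350, activation=activation))
    model.add(Dense(n_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
    return model

model_sigmoid = build_model('sigmoid')
model_sigmoid.fit(X_train_flat, y_train, epochs=n_epochs, batch_size=batch_size,
                  validation_split=validation_split, callbacks=[history_sigmoid], verbose=0)

history_relu = history_loss()
model_relu = build_model('relu')
model_relu.fit(X_train_flat, y_train, epochs=n_epochs, batch_size=batch_size,
               validation_split=validation_split, callbacks=[history_relu], verbose=0)

plt.plot(np.arange(len(history_sigmoid.losses)), history_sigmoid.losses, label='sigmoid')
plt.plot(np.arange(len(history_relu.losses)), history_relu.losses, label='relu')
plt.title('Losses per batch')
plt.legend(loc='best')
plt.show()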
Output:
Ex-6 Experiment with Vehicle Type Recognition
Aim: To implement vehicle type recognition using a convolutional neural network in Python.
Algorithm:
1. Gather a dataset of vehicle images, labeled with their corresponding vehicle types (e.g., car, truck, motorcycle). Split the dataset into training and testing sets.
2. Import required libraries, including TensorFlow or PyTorch for deep learning and OpenCV
for image processing.
3. Resize all images to a common size (e.g., 224x224 pixels) to ensure consistent input
dimensions for the CNN.
a. Normalize pixel values to a common range (e.g., [0, 1] or [0, 255]).
b. Optionally, apply data augmentation techniques (e.g., random rotation, flipping) to
increase model robustness.
4. Create a CNN model consisting of convolutional layers (Conv2D), pooling layers
(MaxPooling2D), and fully connected layers (Dense).
a. Adjust the number of layers and filters based on the complexity of your task.
b. Use activation functions like ReLU and appropriate kernel sizes.
c. Add a softmax output layer with as many neurons as there are vehicle classes, and use
categorical cross-entropy as the loss function.
5. Compile the CNN model by specifying the optimizer (e.g., Adam), loss function (categorical
cross-entropy), and evaluation metric (accuracy).
6. Train the model using the training dataset.
a. Specify the number of epochs and batch size.
b. Monitor training progress and loss convergence.
7. Assess the model's performance using the test dataset. Calculate accuracy and other relevant metrics to evaluate its effectiveness.
8. Use the trained CNN model to make predictions on new vehicle images.
9. Visualize predictions, confusion matrices, and model performance metrics for better
understanding.
10. Experiment with different architectures, hyperparameters, and data augmentation techniques
to improve model performance.
11. If needed, deploy the trained model in a real-world application for vehicle type recognition.
Program:
import os
import numpy as np
import matplotlib.pyplot as plt
from keras.preprocessing.image import ImageDataGenerator
from keras.applications import VGG16
from keras import models, layers, optimizers
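# Assumed base model and data pipeline (omitted in the original listing);
# the directory path is hypothetical
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base_model.trainable = False

train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    'data/vehicles/train',               # hypothetical directory with 4 class subfolders
    target_size=(224, 224), batch_size=32, class_mode='categorical')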
# Create a new model by adding custom layers on top of the base model
model = models.Sequential()
model.add(base_model)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(4, activation='softmax'))
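# Assumed compile/fit step (omitted in the original listing); the metric name
# 'acc' matches the history keys used by the plotting code below
model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
              loss='categorical_crossentropy', metrics=['acc'])
history = model.fit(train_generator, epochs=10)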
acc = history.history['acc']
loss = history.history['loss']
epochs_range = range(len(acc))
plt.figure(figsize=(15, 15))
plt.subplot(2, 1, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.legend(loc='lower right')
plt.title('Training Accuracy')
plt.subplot(2, 1, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.legend(loc='upper right')
plt.title('Training Loss')
plt.show()
OUTPUT:
Result:
Ex-7 Diabetic Retinopathy
Aim: To implement a convolutional neural network for diabetic retinopathy detection using Python.
Algorithm:
1. Gather a dataset of retinal images, ideally labeled with diabetic retinopathy severity levels. Split the dataset into training and testing sets.
2. Import the necessary libraries, including deep learning frameworks like TensorFlow or PyTorch,
and image processing libraries like OpenCV.
3. Preprocess the retinal images to enhance their quality and prepare them for analysis. Resize images to a common size (e.g., 224x224 pixels) for consistent input dimensions.
a. Normalize pixel values to a common range (e.g., [0, 1] or [0, 255]).
b. Augment the training data with techniques like rotation, flipping, and brightness
adjustments to increase model robustness (optional).
4. Create a CNN model tailored for image classification.
a. Use convolutional layers (Conv2D), pooling layers (MaxPooling2D), and fully
connected layers (Dense).
b. Adjust the number of layers and filters based on the complexity of the task. Employ activation functions like ReLU and kernel sizes suitable for image analysis.
c. Add a softmax output layer with as many neurons as there are diabetic retinopathy
severity levels (e.g., 0 to 4), and use categorical cross-entropy as the loss function.
5. Compile the CNN model by specifying the optimizer (e.g., Adam), loss function (categorical
cross-entropy), and evaluation metric (e.g., accuracy).
6. Train the model using the training dataset. Specify the number of epochs and batch size. Monitor training progress, including loss convergence and validation performance.
7. Assess the model's performance using the test dataset.
a. Calculate evaluation metrics such as accuracy, precision, recall, and F1-score.
b. Examine the confusion matrix to understand the model's strengths and weaknesses.
8. Utilize the trained CNN model to make predictions on new retinal images. Interpret the
predictions to determine the severity level of diabetic retinopathy.
9. Visualize model predictions, confusion matrices, and performance metrics for better insights and
communication.
Program:
import os
import random
import sys
import cv2
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
# Additional imports required by the code below (not shown in the original listing)
from keras.preprocessing.image import load_img, img_to_array, ImageDataGenerator
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from sklearn.model_selection import train_test_split
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
def classes_to_int(label):
    # label = classes.index(dir)
    label = label.strip()
    return int(label)        # severity level 0-4 (the original fragment returned a constant)

def int_to_classes(i):
    return str(i)            # inverse mapping (body reconstructed)
NUM_CLASSES = 5
WIDTH = 128
HEIGHT = 128
DEPTH = 3
# initialize number of epochs to train for, initial learning rate and batch size
EPOCHS = 15
INIT_LR = 1e-3
BS = 32
# global variables
ImageNameDataHash = {}
uniquePatientIDList = []

def readTrainData(trainDir):
    global ImageNameDataHash
    images = os.listdir(trainDir)
    for imageFileName in images:                 # loop header reconstructed
        if imageFileName == "trainLabels.csv":
            continue
        # load the image, pre-process it, and store it in the data list
        imageFullPath = os.path.join(trainDir, imageFileName)   # path join reconstructed
        #print(imageFullPath)
        img = load_img(imageFullPath, target_size=(HEIGHT, WIDTH))  # resize on load (assumed)
        arr = img_to_array(img)                  # NumPy array with shape (128,128,3)
        dim1 = arr.shape[0]
        dim2 = arr.shape[1]
        dim3 = arr.shape[2]
        #print(arr.shape) # 128,128,3
        if dim1 != HEIGHT or dim2 != WIDTH or dim3 != DEPTH:   # check reconstructed
            print("Error after resize, image dimensions are not equal to expected "+str(arr.shape))
        #print(type(arr))
        # scale the raw pixel intensities to the range [0, 1] - TBD TEST
        imageFileName = imageFileName.replace('.jpeg','')
        ImageNameDataHash[str(imageFileName)] = np.array(arr)
    return

sys.stdout.flush()
readTrainData("/kaggle/working/../input/")
import csv
def readTrainCsv():
    raw_df = pd.read_csv('../input/trainLabels.csv')   # file read reconstructed
    raw_df["PatientID"] = ''
    header_list = list(raw_df.columns)
    ImageLevelHash = {}
    patientIDList = []
    # per-row loop reconstructed; the original listing shows only its body
    for index, row in raw_df.iterrows():
        patientID = str(row['image'])
        patientID = patientID.replace('_right','')
        patientID = patientID.replace('_left','')
        patientIDList.append(patientID)
        raw_df.loc[index, 'PatientID'] = patientID
        ImageLevelHash[str(row['image'])] = str(row['level'])
    global uniquePatientIDList
    uniquePatientIDList = sorted(set(patientIDList))
    count = 0
    for patientID in uniquePatientIDList:
        left_level = ImageLevelHash.get(patientID + '_left')
        right_level = ImageLevelHash.get(patientID + '_right')
        if (left_level != right_level):
            count = count + 1
    print("count of images with both left and right eye level not matching="+str(count)) # 2240
    return raw_df
random.seed(10)
print("Reading trainLabels.csv...")
df = readTrainCsv()
for i in range(0, 10):
    # loop body reconstructed from the printed output below
    print(str(i) + " patient's patientID=" + str(df['PatientID'][i]))
keepImages = list(ImageNameDataHash.keys())
df = df[df['image'].isin(keepImages)]
print(len(df)) # 1000
imageNameArr = []
dataArr = []
for index, row in df.iterrows():            # loop header reconstructed
    key = str(row[0])
    if key in ImageNameDataHash:
        imageNameArr.append(key)
        dataArr.append(np.array(ImageNameDataHash[key])) # np.array
df2 = pd.DataFrame({'image': imageNameArr, 'data': dataArr})   # frame construction reconstructed
df2_header_list = list(df2.columns)
print(len(df2))
if len(df) != len(df2):
    print("Warning: df and df2 lengths differ")                # check body reconstructed
print(df.dtypes)
df_header_list = list(df.columns)
df = pd.merge(df2, df, on='image')   # assumed merge so df carries both 'data' and 'level'
print(len(df)) # 1000
print(df.sample())
sample0 = df['data'][0]              # first image array (printed below)
print(sample0)
print(sample0.shape) # 128,128,3
plt.imshow(sample0, interpolation='nearest')
plt.show()
print("Sample Image")
X = df['data']
Y = df['level']
#print(type(X)) # 'pandas.core.series.Series'
Y = np.array(Y)
Y = to_categorical(Y, num_classes=NUM_CLASSES)
# partition the data into training and testing splits using 75% training and 25% for validation
sys.stdout.flush()
print("Partition data into 75:25...")
unique_ids = df.PatientID.unique()
print("unique_ids shape=" + str(len(unique_ids)))      # print reconstructed from the output below
train_ids, val_ids = train_test_split(unique_ids, test_size=0.25, random_state=10)   # assumed patient-level split
trainid_list = train_ids.tolist()
traindf = df[df.PatientID.isin(trainid_list)]
valSet = df[~df.PatientID.isin(trainid_list)]
print(traindf.head())
print(valSet.head())
traindf = traindf.reset_index(drop=True)
valSet = valSet.reset_index(drop=True)
print(traindf.head())
print(valSet.head())
trainX = traindf['data']
trainY = traindf['level']
valX = valSet['data']
valY = valSet['level']
print("Generating images...")
sys.stdout.flush()
# Image augmentation generator (leading arguments reconstructed; only the last line survives)
aug = ImageDataGenerator(rotation_range=20, zoom_range=0.15,
    width_shift_range=0.2, height_shift_range=0.2, shear_range=0.15,
    horizontal_flip=True, fill_mode="nearest")
def createModel():
    model = Sequential()
    # Convolutional blocks (the Conv2D lines are reconstructed; filter counts are assumptions)
    model.add(Conv2D(32, (3, 3), padding='same', activation='relu',
                     input_shape=(HEIGHT, WIDTH, DEPTH)))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Conv2D(128, (3, 3), padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(512, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(units=NUM_CLASSES, activation='softmax'))   # 'output_dim' is deprecated; use 'units'
    return model
print(trainX.shape) # (750,)
Xtrain = np.empty((len(trainX), HEIGHT, WIDTH, DEPTH), dtype='float32')   # allocation reconstructed
for i in range(len(trainX)):
    Xtrain[i] = trainX[i]
print(Xtrain.shape) # (750,128,128,3)
print(valX.shape) # (250,)
Xval = np.empty((len(valX), HEIGHT, WIDTH, DEPTH), dtype='float32')
for i in range(len(valX)):
    Xval[i] = valX[i]
print(Xval.shape) # (250,128,128,3)
print("compiling model...")
sys.stdout.flush()
model = createModel()
SVG(model_to_dot(model).create(prog='dot', format='svg'))
sys.stdout.flush()
validation_data=(Xval, valY), \
s_per_epoch=len(trainX) // BS, \
epochs=EPOCHS, verbose=1)
sys.stdout.flush()
model.save("/tmp/mymodel")
print("Generating plots...")
sys.stdout.flush()
matplotlib.use("Agg")
matplotlib.pyplot.style.use("ggplot")
matplotlib.pyplot.figure()
N = EPOCHS
matplotlib.pyplot.xlabel("Epoch #")
matplotlib.pyplot.ylabel("Loss/Accuracy")
matplotlib.pyplot.legend(loc="lower left")
matplotlib.pyplot.savefig("plot.png")
OUTPUT:
0 patient's patientID=10
1 patient's patientID=10
2 patient's patientID=13
3 patient's patientID=13
4 patient's patientID=15
5 patient's patientID=15
6 patient's patientID=16
7 patient's patientID=16
8 patient's patientID=17
9 patient's patientID=17
[[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]
...
[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]
...
[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]
...
[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
...
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]
...
[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]
...
[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]
...
[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]]
<class 'numpy.ndarray'>
(128, 128, 3)
Partition data into 75:25...
unique_ids shape=500
0 10_left ... 10
1 10_right ... 10
4 15_left ... 15
5 15_right ... 15
6 16_left ... 16
[5 rows x 4 columns]
2 13_left ... 13
3 13_right ... 13
12 20_left ... 20
13 20_right ... 20
14 21_left ... 21
[5 rows x 4 columns]
0 10_left ... 10
1 10_right ... 10
2 15_left ... 15
3 15_right ... 15
4 16_left ... 16
(model.summary() table: the layer-by-layer listing did not survive extraction)
Non-trainable params: 0
Result:
Ex-8 Experimenting with different optimizers
Aim: To implement a Python program for experimenting with different optimizers, compare the training results for each optimizer, and determine which one performed best.
Algorithm:
1. Import the necessary libraries, such as NumPy, Pandas, and Keras.
2. Load and preprocess the dataset, splitting it into training and test sets.
3. Define a function that creates the neural network model.
4. Compile the model by specifying the loss function, metrics, and optimizer.
5. Create a list (or dictionary) of the optimizers to be compared.
6. Create a loop that iterates over the list of optimizers. For each iteration: (i) set the optimizer for the model using the model.compile method, (ii) fit the model on the dataset using the fit method, (iii) store the results of the training, such as accuracy or loss.
7. Plot the results of the training for each optimizer, such as accuracy or loss, over the number of epochs.
8. Compare the results of the training for each optimizer and determine which optimizer performed best.
Program:
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.optimizers import SGD, Adadelta, RMSprop, Nadam, Adamax
SEED = 2022
print(np.any(np.isnan(X_test)))
print(np.any(np.isinf(X_test)))
print(np.any(np.isnan(X_train)))
print(np.any(np.isinf(X_train)))
print(np.any(np.isnan(y_test)))
print(np.any(np.isinf(y_test)))
print(np.any(np.isnan(y_train)))
print(np.any(np.isinf(y_train)))
def create_model(opt):
    # function header reconstructed; the fragment begins inside the body
    model = Sequential()
    model.add(Dense(100, input_dim=X_train.shape[1], activation='relu'))
    model.add(Dense(50, activation='relu'))
    model.add(Dense(25, activation='relu'))
    model.add(Dense(10, activation='relu'))
    model.add(Dense(1, activation='linear'))   # output activation assumed (regression with MSE below)
    return model
def create_callbacks(opt):
    # callback list contents reconstructed; the original shows only an empty frame
    callbacks = [
        EarlyStopping(monitor='val_accuracy', patience=50, verbose=2),
        ModelCheckpoint('checkpoints/optimizers_best_' + opt + '.h5',
                        monitor='val_accuracy', save_best_only=True, verbose=0)
    ]
    return callbacks
opts = dict({
    'sgd': SGD(),
    'adadelta': Adadelta(),
    'rmsprop': RMSprop(),
    'rmsprop-0001': RMSprop(learning_rate=0.0001),
    'nadam': Nadam(),
    'adamax': Adamax()
})
results = []
for opt in opts:                       # loop header reconstructed
    model = create_model(opt)
    callbacks = create_callbacks(opt)
    model.compile(loss='mse', optimizer=opts[opt], metrics=['accuracy'])
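    # Assumed continuation (omitted in the original listing): fit and record the best epoch
    hist = model.fit(X_train, y_train, batch_size=128, epochs=200,
                     validation_data=(X_test, y_test), verbose=0, callbacks=callbacks)
    best_epoch = np.argmax(hist.history['val_accuracy'])
    results.append([opt, best_epoch, hist.history['val_accuracy'][best_epoch]])

# Compare the optimizers
res = pd.DataFrame(results, columns=['optimizer', 'epochs', 'val_accuracy'])
print(res)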
Output:
Result:
Ex-9 Improving Generalization with Regularization
Aim: To implement improving generalisation with regularisation.
Algorithm:
1. Define a neural network architecture with a set of parameters that need to be learned through training.
2. Load and preprocess the dataset, splitting it into training, validation, and test sets.
3. Add a regularization term (for example, an L2 penalty on the weights) so that large weights are discouraged.
4. Compile the model with a suitable loss function and optimizer.
5. Set the number of epochs and the learning rate for training the model.
6. For each epoch:
   a. Feed the training data through the network and compute the loss using a suitable loss function.
   b. Add the regularization penalty to the loss.
   c. Use backpropagation to calculate the gradients of the loss with respect to the weights.
   d. Update the weights using the gradients and the learning rate.
7. If the performance on the validation set does not improve for a certain number of epochs, stop training early.
8. Compare the regularized model with the unregularized baseline on the validation set.
9. Once the training is complete, use the trained model to make predictions on new data.
Program:
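# Assumed imports and data source (the original listing omits them); the
# UCI bike-sharing 'hour.csv' file is an assumption consistent with the
# 'cnt' target and the 31*24 hourly rows used below
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
from keras import regularizers

data = pd.read_csv('hour.csv')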
# Feature engineering
# Hold out the last 31 days (31*24 = 744 rows) as a test set
test_data = data[-31*24:]
data = data[:-31*24]
# Extract the target field
target_fields = ['cnt']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
# Assumed training arrays (the original split is not shown)
features = features.select_dtypes(include=[np.number])   # keep numeric columns
X_train, y_train = features.values, targets['cnt'].values
model = Sequential()
model.add(Dense(250, input_dim=X_train.shape[1], activation='relu'))
model.add(Dense(150, activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(25, activation='relu'))
model.add(Dense(1, activation='linear'))
n_epochs = 4000
batch_size = 1024
model_reg = Sequential()
model_reg.add(Dense(250, input_dim=X_train.shape[1],
activation='relu',
kernel_regularizer=regularizers.l2(0.005)))
model_reg.add(Dense(150, activation='relu'))
model_reg.add(Dense(50, activation='relu'))
model_reg.add(Dense(25, activation='relu',
kernel_regularizer=regularizers.l2(0.005)))
model_reg.add(Dense(1, activation='linear'))
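# Assumed compile/fit step (omitted in the original listing); history_reg
# feeds the validation-loss plot below
model_reg.compile(loss='mse', optimizer='sgd', metrics=['mse'])
history_reg = model_reg.fit(X_train, y_train, validation_split=0.2,
                            batch_size=batch_size, epochs=n_epochs, verbose=0)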
plt.plot(np.arange(len(history_reg.history['val_loss'])), history_reg.history['val_loss'], label='validation')
Output:
Result:
Ex-10 Adding dropout to prevent overfitting
Aim: To implement dropout in a neural network to prevent overfitting.
Algorithm:
1. Define the network architecture and load the training data.
2. Choose a dropout rate (e.g., 0.2 to 0.5), which represents the fraction of neurons to deactivate during each training step.
3. Decide which layers dropout will be applied to (usually the hidden layers).
4. During each training forward pass, generate a random binary mask for the chosen layer.
5. Multiply the input to that layer by this mask to deactivate some neurons.
6. Continue with forward and backward propagation as usual, taking into account the deactivated neurons.
7. During evaluation (not training), don't use dropout. Instead, scale the neuron outputs so the expected activations match those seen in training.
8. Repeat the training process for multiple epochs while adjusting other training parameters as needed.
Program:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, Dropout

# Feature engineering (assumed context: the same bike-sharing data as the previous exercise)
data = pd.read_csv('hour.csv')
drop_features = ['instant', 'dteday', 'casual', 'registered']   # assumed list; not shown in the original
data = data.drop(drop_features, axis=1)
X_train = data.drop(['cnt'], axis=1).values                     # assumed training arrays
y_train = data['cnt'].values
model = Sequential()
model.add(Dense(250, input_dim=X_train.shape[1], activation='relu'))
model.add(Dense(150, activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(25, activation='relu'))
model.add(Dense(1, activation='linear'))
# Compile model
model.compile(loss='mse', optimizer='sgd', metrics=['mse'])   # assumed; the original shows only the comment

# Model with dropout after every hidden layer
model_drop = Sequential()                                     # constructor reconstructed
model_drop.add(Dense(250, input_dim=X_train.shape[1], activation='relu'))
model_drop.add(Dropout(0.20))
model_drop.add(Dense(150, activation='relu'))
model_drop.add(Dropout(0.20))
model_drop.add(Dense(50, activation='relu'))
model_drop.add(Dropout(0.20))
model_drop.add(Dense(25, activation='relu'))
model_drop.add(Dropout(0.20))
model_drop.add(Dense(1, activation='linear'))
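# Assumed compile/fit for the dropout model (omitted in the original listing);
# epoch and batch values are illustrative; history_drop feeds the plot below
model_drop.compile(loss='mse', optimizer='sgd', metrics=['mse'])
history_drop = model_drop.fit(X_train, y_train, validation_split=0.2,
                              batch_size=1024, epochs=1000, verbose=0)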
plt.plot(np.arange(len(history_drop.history['val_loss'])), history_drop.history['val_loss'], label='validation')
Output:
Result:
EX-11 Image Augmentation
Aim: To implement image augmentation techniques using Python.
Algorithm:
1. Import the necessary libraries, including OpenCV (cv2), NumPy, and Matplotlib for visualization (optional).
2. Load the original image that you want to augment using OpenCV's cv2.imread().
3. To create augmented versions of the image, apply various transformations such as rotation, flipping, scaling, and brightness adjustments.
4. Use Matplotlib or any other suitable library to display the augmented images for visual inspection.
5. If you want to generate multiple augmented images, repeat steps 3 and 4 within a loop, varying the transformation parameters.
6. If you want to save the augmented images to disk for later use, use OpenCV's cv2.imwrite() function.
7. Use the augmented images along with the original images in your deep learning model's training set.
8. If you have multiple images to augment, repeat the above steps for each image.
9. Experiment with different augmentation techniques and parameters to find the most effective combination for your task.
Program:
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import keras
from keras import layers
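# Assumed dataset load (omitted in the original listing): the tf_flowers TFDS
# dataset, which matches the metadata/label usage below
(train_ds, val_ds, test_ds), metadata = tfds.load(
    'tf_flowers', split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
    with_info=True, as_supervised=True)
image, label = next(iter(train_ds))   # one sample used by the cells below
IMG_SIZE = 180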
get_label_name = metadata.features['label'].int2str

fig = plt.figure(figsize=(12, 2))        # figure creation and loop reconstructed
for x, (image, label) in enumerate(train_ds.take(6)):
    fig.add_subplot(1, 6, x+1)
    plt.imshow(image)
    plt.axis('off')
    plt.title(get_label_name(label));
# resize
resize_and_rescale = keras.Sequential([
    layers.Resizing(IMG_SIZE, IMG_SIZE, interpolation="lanczos3"),
    layers.Rescaling(1./255)
])
result = resize_and_rescale(image)
plt.axis('off')
plt.imshow(result);
# rotate / flip
data_augmentation = keras.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.4),
])
plt.figure(figsize=(8, 7))               # figure size reconstructed from the fragment
for i in range(6):
    augmented_image = data_augmentation(image)
    ax = plt.subplot(2, 3, i + 1)
    plt.imshow(augmented_image.numpy()/255)
    plt.axis("off")
def random_invert_img(x, p=0.5):
    if tf.random.uniform([]) < p:
        x = (255 - x)
    else:
        x
    return x

def random_invert(factor=0.5):
    return layers.Lambda(lambda x: random_invert_img(x, factor))

random_invert = random_invert()
plt.figure(figsize=(8, 7))
for i in range(9):
    augmented_image = random_invert(image)
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(augmented_image.numpy().astype("uint8"))
    plt.axis("off")
def visualize(original, augmented):
    # side-by-side comparison helper (function header reconstructed)
    plt.subplot(1, 2, 1)
    plt.title('Original image')
    plt.imshow(original)
    plt.subplot(1, 2, 2)
    plt.title('Augmented image')
    plt.imshow(augmented)

flipped = tf.image.flip_left_right(image)
visualize(image, flipped)

grayscaled = tf.image.rgb_to_grayscale(image)
visualize(image, tf.squeeze(grayscaled))
_ = plt.colorbar()
saturated = tf.image.adjust_saturation(image, 4)
visualize(image, saturated)

bright = tf.image.adjust_brightness(image, 0.4)
visualize(image, bright)

cropped = tf.image.central_crop(image, central_fraction=0.5)
visualize(image, cropped)

rotated = tf.image.rot90(image)
visualize(image, rotated)

# Stateless random transforms (seed handling reconstructed; parameter values are assumptions)
for i in range(3):
    seed = (i, 0)
    stateless_random_brightness = tf.image.stateless_random_brightness(image, max_delta=0.95, seed=seed)
    visualize(image, stateless_random_brightness)

for i in range(3):
    seed = (i, 0)
    stateless_random_crop = tf.image.stateless_random_crop(image, size=[210, 300, 3], seed=seed)
    visualize(image, stateless_random_crop)

for i in range(3):
    seed = (i, 0)
    stateless_random_contrast = tf.image.stateless_random_contrast(image, lower=0.1, upper=0.9, seed=seed)
    visualize(image, stateless_random_contrast)
OUTPUT:
EX-12 Imagenet-LeNet
Aim: To implement Imagenet-LeNet using Python programming.
Algorithm:
1. Import deep learning libraries like TensorFlow or PyTorch and other necessary libraries.
2. Download and preprocess the ImageNet dataset, which includes resizing images to a manageable size and normalizing pixel values.
3. Create a modified version of the LeNet architecture suitable for ImageNet by increasing the number of filters and layers.
4. Consider using convolutional layers (Conv2D), pooling layers (MaxPooling2D), and fully connected layers (Dense). Use activation functions like ReLU, and consider adding batch normalization layers.
5. Compile the LeNet-based model by specifying the optimizer (e.g., Adam, SGD), loss function (e.g., categorical cross-entropy), and evaluation metric.
6. Apply data augmentation techniques to increase the diversity of training examples, such as random cropping and flipping.
7. Train the model on the ImageNet training dataset. Specify the number of epochs, batch size, and other training parameters.
8. Evaluate the trained model on the test set.
9. Use the trained model to make predictions on new images.
10. If needed, deploy the trained model in real-world applications for image classification tasks.
11. Experiment with various architectures beyond LeNet, such as deeper convolutional networks.
Program:
import tensorflow as tf
from tensorflow import keras
import numpy as np
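# Assumed data load (omitted in the original listing): MNIST, as suggested by
# the expand_dims and the 5000-sample validation split below
(train_x, train_y), (test_x, test_y) = keras.datasets.mnist.load_data()
train_x = train_x / 255.0
test_x = test_x / 255.0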
train_x = tf.expand_dims(train_x, 3)
test_x = tf.expand_dims(test_x, 3)
val_x = train_x[:5000]
val_y = train_y[:5000]
# Standard LeNet-5 layer stack for 28x28 inputs (the original listing leaves this list empty)
lenet_5_model = keras.models.Sequential([
    keras.layers.Conv2D(6, kernel_size=5, strides=1, activation='tanh',
                        input_shape=(28, 28, 1), padding='same'),
    keras.layers.AveragePooling2D(),
    keras.layers.Conv2D(16, kernel_size=5, strides=1, activation='tanh', padding='valid'),
    keras.layers.AveragePooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(120, activation='tanh'),
    keras.layers.Dense(84, activation='tanh'),
    keras.layers.Dense(10, activation='softmax')
])
lenet_5_model.compile(optimizer='adam', loss=keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
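# Assumed training step (omitted in the original listing)
lenet_5_model.fit(train_x, train_y, epochs=5, validation_data=(val_x, val_y))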
lenet_5_model.evaluate(test_x, test_y)
OUTPUT
Result:
EX-12 Imagenet-AlexNet
Aim: To implement ImageNet-AlexNet using Python programming.
Algorithm
1. Import the necessary deep learning libraries (e.g., TensorFlow or PyTorch) and other supporting libraries.
2. Download and preprocess the ImageNet dataset or a subset of it. Normalize the images. Split the data into training, validation, and test sets.
3. Create a neural network model with the following layers: convolutional layers with appropriate filter sizes, strides, and padding; max-pooling layers; fully connected (dense) layers; and dropout layers to prevent overfitting. Define appropriate activation functions (e.g., ReLU).
4. Specify the loss function (e.g., categorical cross-entropy) and optimizer (e.g., SGD or Adam), then compile the model.
5. Apply data augmentation techniques such as random cropping, flipping, and rotation to increase training diversity.
6. Train the model on the training dataset using the compiled model, specifying the number of epochs, batch size, and other training parameters. Monitor training and validation performance to detect overfitting.
7. Adjust hyperparameters and retrain if necessary.
8. Evaluate the trained model on the test dataset to measure its performance in terms of accuracy.
Program:
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
import os
import time

CLASS_NAMES = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
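# Assumed data load (omitted in the original listing): CIFAR-10, as the class
# names above suggest
(train_images, train_labels), (test_images, test_labels) = keras.datasets.cifar10.load_data()
validation_images, validation_labels = train_images[:5000], train_labels[:5000]
train_images, train_labels = train_images[5000:], train_labels[5000:]
train_ds = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
test_ds = tf.data.Dataset.from_tensor_slices((test_images, test_labels))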
validation_ds = tf.data.Dataset.from_tensor_slices((validation_images, validation_labels))
plt.figure(figsize=(20,20))
for i, (image, label) in enumerate(train_ds.take(5)):
    ax = plt.subplot(5,5,i+1)
    plt.imshow(image)
    plt.title(CLASS_NAMES[label.numpy()[0]])
    plt.axis('off')
def process_images(image, label):
    # function header reconstructed from the pipeline calls below
    # Normalize images to have mean 0 and standard deviation 1
    image = tf.image.per_image_standardization(image)
    # Resize images from 32x32 to 227x227
    image = tf.image.resize(image, (227,227))
    return image, label
train_ds_size = tf.data.experimental.cardinality(train_ds).numpy()
test_ds_size = tf.data.experimental.cardinality(test_ds).numpy()
validation_ds_size = tf.data.experimental.cardinality(validation_ds).numpy()
train_ds = (train_ds
.map(process_images)
.shuffle(buffer_size=train_ds_size)
.batch(batch_size=32, drop_remainder=True))
test_ds = (test_ds
.map(process_images)
.shuffle(buffer_size=train_ds_size)
.batch(batch_size=32, drop_remainder=True))
validation_ds = (validation_ds
.map(process_images)
.shuffle(buffer_size=train_ds_size)
.batch(batch_size=32, drop_remainder=True))
model = keras.models.Sequential([
    # Convolutional layers reconstructed following the standard AlexNet design;
    # the original listing shows only the normalization/pooling lines
    keras.layers.Conv2D(filters=96, kernel_size=(11,11), strides=(4,4), activation='relu',
                        input_shape=(227,227,3)),
    keras.layers.BatchNormalization(),
    keras.layers.MaxPool2D(pool_size=(3,3), strides=(2,2)),
    keras.layers.Conv2D(filters=256, kernel_size=(5,5), strides=(1,1), activation='relu', padding="same"),
    keras.layers.BatchNormalization(),
    keras.layers.MaxPool2D(pool_size=(3,3), strides=(2,2)),
    keras.layers.Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), activation='relu', padding="same"),
    keras.layers.BatchNormalization(),
    keras.layers.Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), activation='relu', padding="same"),
    keras.layers.BatchNormalization(),
    keras.layers.Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), activation='relu', padding="same"),
    keras.layers.BatchNormalization(),
    keras.layers.MaxPool2D(pool_size=(3,3), strides=(2,2)),
    keras.layers.Flatten(),
    keras.layers.Dense(4096, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(4096, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(10, activation='softmax')
])
run_id = time.strftime("run_%Y_%m_%d-%H_%M_%S")
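# Assumed helper (omitted in the original listing): per-run TensorBoard log directory
root_logdir = os.path.join(os.curdir, "logs", "fit")
def get_run_logdir():
    return os.path.join(root_logdir, run_id)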
run_logdir = get_run_logdir()
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=tf.optimizers.SGD(learning_rate=0.001), metrics=['accuracy'])
model.summary()
model.fit(train_ds,
epochs=5,
validation_data=validation_ds,
validation_freq=1,
callbacks=[tensorboard_cb])
model.evaluate(test_ds)
OUTPUT
Result:
EX-13 RNN
Aim: To implement a recurrent neural network (RNN) using Python.
Algorithm:
1: Import the deep learning framework you plan to use (e.g., TensorFlow or PyTorch) and other necessary libraries.
2: Load and preprocess your sequential data. RNNs are commonly used for tasks like sequence prediction or sequence classification. Preprocess the data, which may include tokenization, one-hot encoding, or embedding, depending on your task.
3: Define the RNN architecture, choosing an appropriate activation function (e.g., tanh or ReLU) for the RNN cells.
4: Compile the RNN model by specifying the loss function and optimizer. Choose appropriate metrics for evaluation (e.g., accuracy or mean squared error).
5: Train the RNN model using your preprocessed data, specifying the number of epochs and batch size.
6: After training, evaluate the RNN model's performance on a validation or test dataset using relevant metrics.
7: Use the trained RNN model to make predictions on new sequences or data points.
PROGRAM
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM#, CuDNNLSTM
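# Assumed data load (omitted in the original listing): MNIST rows treated as a
# sequence of 28 time steps with 28 features each
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train / 255.0
x_test = x_test / 255.0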
print(x_train.shape)
print(x_train[0].shape)
model = Sequential()
# If you are running with a GPU, try out the CuDNNLSTM layer type instead
# (don't pass an activation; tanh is required)
model.add(LSTM(128, input_shape=(x_train.shape[1:]), activation='relu', return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(128, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
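# Assumed optimizer (the compile call below references 'opt', but the original
# listing omits its definition)
opt = tf.keras.optimizers.Adam(learning_rate=1e-3)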
# Compile model
model.compile(
loss='sparse_categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'],
)
model.fit(
x_train,
y_train,
epochs=3,
validation_data=(x_test,y_test))
OUTPUT
Result:
EX-13 LSTM
Aim: To implement a long short-term memory (LSTM) network using Python.
Algorithm:
1. Import the deep learning framework you plan to use (e.g., TensorFlow or PyTorch) and other necessary libraries.
2. Load and preprocess your sequential data. LSTMs are commonly used for tasks like sequence prediction, text generation, or sentiment analysis. Preprocess the data, which may include scaling or windowing, depending on your task.
3. Define the LSTM architecture:
   a. Specify the number of LSTM units and layers.
   b. Choose an appropriate activation function (usually 'tanh') for the LSTM cells.
4. Compile the LSTM model by specifying the loss function and optimizer. Choose appropriate metrics for evaluation (e.g., accuracy or mean squared error).
5. Train the LSTM model using your preprocessed data. Specify the number of epochs and batch size.
6. After training, evaluate the LSTM model's performance on a validation or test dataset using relevant metrics.
7. Use the trained LSTM model to make predictions on new sequences or data points.
PROGRAM
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

# Assumed dataset load (omitted in the original listing): the airline-passengers series
dataframe = pd.read_csv('airline-passengers.csv', usecols=[1], engine='python')
dataset = dataframe.values.astype('float32')
# fix random seed for reproducibility (from a fragment later in the original listing)
tf.random.set_seed(7)

scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
# Convert an array of values into a dataset of [t-look_back..t-1] vs t pairs
# (function header and loop reconstructed; the original shows only the tail)
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        dataX.append(dataset[i:(i + look_back), 0])
        dataY.append(dataset[i + look_back, 0])
    return np.array(dataX), np.array(dataY)
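# Assumed preparation and model definition (omitted in the original listing),
# following the airline-passengers example the fragments suggest
look_back = 1
train_size = int(len(dataset) * 0.67)
train, test = dataset[:train_size, :], dataset[train_size:, :]
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
# LSTM expects input of shape [samples, time steps, features]
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
model = Sequential()
model.add(LSTM(4, input_shape=(1, look_back)))
model.add(Dense(1))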
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=100, batch_size=1, verbose=2)
# make predictions
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
# invert predictions
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
# calculate root mean squared error
trainScore = np.sqrt(mean_squared_error(trainY[0], trainPredict[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = np.sqrt(mean_squared_error(testY[0], testPredict[:,0]))
print('Test Score: %.2f RMSE' % (testScore))
# shift train predictions for plotting
trainPredictPlot = np.empty_like(dataset)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict
# shift test predictions for plotting
testPredictPlot = np.empty_like(dataset)
testPredictPlot[:, :] = np.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict
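# Assumed final plot (omitted in the original listing): the baseline series with
# both prediction windows overlaid
plt.plot(scaler.inverse_transform(dataset))
plt.plot(trainPredictPlot)
plt.plot(testPredictPlot)
plt.show()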
OUTPUT
Result:
EX-14 GRU
Aim: To implement a gated recurrent unit (GRU) network using Python.
Algorithm:
1: Import the deep learning framework you plan to use (e.g., TensorFlow or PyTorch) and other necessary libraries.
2: Load and preprocess your sequential data. GRUs are commonly used for tasks like sequence prediction and time-series forecasting. Preprocess the data, which may include tokenization, one-hot encoding, or embedding, depending on your task.
3: Define the GRU architecture, choosing an appropriate activation function (usually 'tanh') for the GRU cells.
4: Compile the GRU model by specifying the loss function and optimizer. Choose appropriate metrics for evaluation (e.g., accuracy or mean squared error).
5: Train the GRU model using your preprocessed data. Specify the number of epochs and batch size.
6: After training, evaluate the GRU model's performance on a validation or test dataset using relevant metrics.
7: Use the trained GRU model to make predictions on new sequences or data points.
PROGRAM
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout, GRU, Bidirectional
from keras.optimizers import SGD
import math
from sklearn.metrics import mean_squared_error
# Some functions to help out with
def plot_predictions(test, predicted):
    plt.plot(test, color='red', label='Real IBM Stock Price')
    plt.plot(predicted, color='blue', label='Predicted IBM Stock Price')
    plt.title('IBM Stock Price Prediction')
    plt.xlabel('Time')
    plt.ylabel('IBM Stock Price')
    plt.legend()
    plt.show()

def return_rmse(test, predicted):
    rmse = math.sqrt(mean_squared_error(test, predicted))
    print("The root mean squared error is {}.".format(rmse))
# First, we get the data
dataset = pd.read_csv('../input/IBM_2006-01-01_to_2018-01-01.csv',
                      index_col='Date', parse_dates=['Date'])
dataset.head()
# Checking for missing values
training_set = dataset[:'2016'].iloc[:,1:2].values
test_set = dataset['2017':].iloc[:,1:2].values
# We have chosen the 'High' attribute for prices. Let's see what it looks like
dataset["High"][:'2016'].plot(figsize=(16,4), legend=True)
dataset["High"]['2017':].plot(figsize=(16,4), legend=True)
plt.legend(['Training set (Before 2017)', 'Test set (2017 and beyond)'])
plt.title('IBM stock price')
plt.show()
# Scaling the training set
sc = MinMaxScaler(feature_range=(0,1))
training_set_scaled = sc.fit_transform(training_set)
# Since LSTMs store long term memory state, we create a data structure with 60 time steps and 1 output
# So for each element of training set, we have 60 previous training set elements
X_train = []
y_train = []
for i in range(60, 2769):
    X_train.append(training_set_scaled[i-60:i, 0])
    y_train.append(training_set_scaled[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
# Reshaping X_train for efficient modelling
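# Assumed continuation (omitted in the original listing): reshape to
# [samples, time steps, features] and build/train a GRU regressor;
# layer sizes and training parameters are illustrative
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))

regressorGRU = Sequential()
regressorGRU.add(GRU(50, return_sequences=True, input_shape=(X_train.shape[1], 1), activation='tanh'))
regressorGRU.add(Dropout(0.2))
regressorGRU.add(GRU(50, activation='tanh'))
regressorGRU.add(Dropout(0.2))
regressorGRU.add(Dense(1))
regressorGRU.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='mean_squared_error')
regressorGRU.fit(X_train, y_train, epochs=20, batch_size=150)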
OUTPUT
Result: