

ZEAL EDUCATION SOCIETY’s


ZEAL COLLEGE OF ENGINEERING AND RESEARCH,
NARHE, PUNE

DEPARTMENT OF INFORMATION TECHNOLOGY


SEMESTER-I
[A.Y.: 2022-23]

Lab Practice-IV (414447)

LABORATORY MANUAL


Institute and Department Vision and Mission

INSTITUTE VISION: To impart value added technological education through pursuit of academic excellence, research, and entrepreneurial attitude.

INSTITUTE MISSION:
M1: To achieve academic excellence through innovative teaching and learning processes.
M2: To imbibe the research culture for addressing industry and societal needs.
M3: To provide a conducive environment for building entrepreneurial skills.
M4: To produce competent and socially responsible professionals with core human values.

DEPARTMENT VISION: To nurture the wisdom of young minds through modern, qualitative, and interdisciplinary research-oriented education to become successful IT professionals.

DEPARTMENT MISSION:
M1: Advancing knowledge through fundamental and applied research.
M2: To encourage students towards innovative development and higher studies.
M3: To motivate learners towards real-time solutions.
M4: To prepare skillful engineers who cater best to industry in IT Enabled Services.
M5: To nourish students' leadership skills by inculcating personal touch and respect in professional relationships.


Department Program Educational Objectives (PEOs)

PEO1: To impart fundamentals in science, mathematics, and engineering to cater to the needs of society and industry.

PEO2: To encourage graduates to involve themselves in research, higher studies, and/or to become entrepreneurs.

PEO3: To work effectively as individuals and as team members in a multidisciplinary environment with high ethical values for the benefit of society.


Savitribai Phule Pune University, Pune


Third Year of Information Technology (2019 Course)
414447: Laboratory Practice-IV

Teaching Scheme: PR: 02 Hours/Week          Credit: 01
Examination Scheme: PR: 25 Marks, TW: 25 Marks

Course Objectives:
1. To be able to formulate deep learning problems corresponding to different applications.
2. To be able to apply deep learning algorithms to solve problems of moderate complexity.
3. To apply the algorithms to a real-world problem, optimize the models learned, and report on the expected accuracy that can be achieved by applying the models.

Course Outcomes:
On completion of the course, the student will be able to:

CO1. Learn and use various deep learning tools and packages.
CO2. Build and train deep neural network models for use in various applications.
CO3. Apply deep learning techniques like CNNs, RNNs, and autoencoders to solve real-world problems.
CO4. Evaluate the performance of models built using deep learning.


List of Assignments

Sr. No.  Statement of Assignment

1. Study of deep learning packages: TensorFlow, Keras, Theano, and PyTorch. Document the distinct features and functionality of the packages.

Note: Use a suitable dataset for the implementation of the following assignments.

2. Implementing feedforward neural networks with Keras and TensorFlow
   a. Import the necessary packages.
   b. Load the training and testing data (MNIST/CIFAR10).
   c. Define the network architecture using Keras.
   d. Train the model using SGD.
   e. Evaluate the network.
   f. Plot the training loss and accuracy.

3. Build the image classification model by dividing the model into the following 4 stages:
   a. Loading and preprocessing the image data.
   b. Defining the model's architecture.
   c. Training the model.
   d. Estimating the model's performance.

4. Use an autoencoder to implement anomaly detection. Build the model by using:
   a. Import required libraries.
   b. Upload / access the dataset.
   c. The encoder converts the input into a latent representation.
   d. The decoder network converts it back to the original input.
   e. Compile the model with an optimizer, loss, and evaluation metrics.

5. Implement the Continuous Bag of Words (CBOW) model. Stages can be:
   a. Data preparation.
   b. Generate training data.
   c. Train the model.
   d. Output.

6. Object detection using transfer learning of CNN architectures.
   a. Load in a pre-trained CNN model trained on a large dataset.
   b. Freeze parameters (weights) in the model's lower convolutional layers.
   c. Add a custom classifier with several layers of trainable parameters to the model.
   d. Train the classifier layers on the training data available for the task.
   e. Fine-tune hyperparameters and unfreeze more layers as needed.


Assignment No.1
Title: Study of Deep learning Packages: Tensorflow, Keras, Theano and PyTorch. Document the
distinct features and functionality of the packages.
Objective: Study and installation of the following deep learning packages:
i. TensorFlow
ii. Keras
iii. Theano
iv. PyTorch
Theory:
Python Libraries and functions required
1. TensorFlow, Keras
numpy: NumPy is a Python library used for working with arrays. It also has functions for working in the domains of linear algebra, Fourier transforms, and matrices. NumPy stands for Numerical Python. To import numpy use
import numpy as np
pandas: pandas is a fast, powerful, flexible and easy to use open source data analysis and manipulation tool, built on top of the Python programming language. To import pandas use
import pandas as pd
sklearn: Scikit-learn (sklearn) is the most useful and robust library for machine learning in Python. It provides a selection of efficient tools for machine learning and statistical modeling including classification, regression, clustering, and dimensionality reduction via a consistent interface in Python. This library, which is largely written in Python, is built upon NumPy, SciPy, and Matplotlib. For importing train_test_split use
from sklearn.model_selection import train_test_split
2. For Theano, requirements:
• Python3
• Python3-pip
• NumPy
• SciPy
• BLAS
Sample Code with comments


1. Tensorflow Test program:
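The original manual reproduces this program as a screenshot; a minimal sketch that verifies the installation (assuming TensorFlow 2.x):

import tensorflow as tf

# print the installed version and run a trivial computation to confirm the setup
print(tf.__version__)
a = tf.constant(2)
b = tf.constant(3)
print(tf.add(a, b))   # tf.Tensor(5, shape=(), dtype=int32)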

2. Keras Test Program:


from tensorflow import keras
from tensorflow.keras import datasets

# Load MNIST data
(train_images, train_labels), (test_images, test_labels) = datasets.mnist.load_data()

# Check the dataset loaded
print(train_images.shape, test_images.shape)
3. Theano test program
# Python program showing addition of two scalars
import numpy
import theano.tensor as T
from theano import function

# Declaring two variables
x = T.dscalar('x')
y = T.dscalar('y')

# Summing up the two numbers
z = x + y

# Converting the expression into a callable object so that it takes scalars as parameters
f = function([x, y], z)
print(f(2.5, 3.5))   # prints 6.0
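4. PyTorch test program
The objective also lists PyTorch, but no test program appears in the manual; a minimal sketch along the same lines:

import torch

# verify the PyTorch install with a small tensor computation
print(torch.__version__)
x = torch.rand(5, 3)
print(x)
print(torch.cuda.is_available())   # True if a CUDA-capable GPU is usable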
Conclusion:
TensorFlow, PyTorch, Keras, and Theano are all installed and ready for deep learning applications. Depending on the application domain and dataset, we can choose the appropriate package and build the required type of neural network.


Assignment No.2
Title : Implementing Feedforward neural networks
Objective: Implementing Feedforward neural networks with Keras and TensorFlow
a. Import the necessary packages.
b. Load the training and testing data (MNIST/CIFAR10)
c. Define the network architecture using Keras
d. Train the model using SGD.
e. Evaluate the network.
f. Plot the training loss and accuracy.
Theory :
Steps/ Algorithm
1. Dataset link and libraries:
Dataset: MNIST or CIFAR-10, available from kaggle.com. You can download the dataset from the above-mentioned website.
Libraries required:
Pandas and NumPy for data manipulation; TensorFlow/Keras for neural networks; the scikit-learn library for splitting the data into train-test samples and for some basic model evaluation.
Ref: https://pyimagesearch.com/2021/05/06/implementing-feedforward-neural-networks-with-keras-and-tensorflow/
a) Import the following from sklearn: i) LabelBinarizer (sklearn.preprocessing) ii) classification_report (sklearn.metrics).
b) Flatten the dataset.
c) If required, normalize the data.
d) Convert the labels from integers to vectors (especially for one-hot encoding).
e) Decide the neural network architecture: i) select the model (Sequential recommended) ii) activation function (sigmoid recommended) iii) select the input shape iv) set the weights in the output layer.
f) Train the model: i) select an optimizer (SGD recommended) ii) call model.fit to start training iii) set epochs and batch size.
g) Call model.predict for class prediction.
h) Plot the training loss and accuracy.
i) Calculate precision, recall, F1-score, and support.
j) Repeat for the CIFAR dataset. (A minimal MNIST sketch following these steps is shown below.)
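The sample listing that follows uses the SAheart tabular dataset rather than MNIST; a minimal sketch of the MNIST variant described in steps a)-j), assuming TensorFlow 2.x (layer sizes are illustrative):

from tensorflow import keras
from tensorflow.keras import layers

# b) load and flatten MNIST, scaling pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0
x_test = x_test.reshape(-1, 784).astype('float32') / 255.0

# e) define the network architecture (Sequential model, sigmoid activations)
model = keras.Sequential([
    layers.Dense(128, activation='sigmoid', input_shape=(784,)),
    layers.Dense(64, activation='sigmoid'),
    layers.Dense(10, activation='softmax'),
])

# f) train the model using SGD
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=10, batch_size=128,
                    validation_data=(x_test, y_test))

# e) evaluate the network
print(model.evaluate(x_test, y_test))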


import math
import pandas as pd
from keras import models, layers, optimizers, regularizers
import numpy as np
import random
from sklearn import model_selection, preprocessing
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
file_name = '../input/SAheart.data'
data = pd.read_csv(file_name, sep=',', index_col=0)
data['famhist'] = data['famhist'] == 'Present'
data.head()
n_test = int(math.ceil(len(data) * 0.3))
random.seed(42)
test_ixs = random.sample(list(range(len(data))), n_test)
train_ixs = [ix for ix in range(len(data)) if ix not in test_ixs]
train = data.iloc[train_ixs, :]
test = data.iloc[test_ixs, :]
print(len(train))
print(len(test))
#features = ['sbp', 'tobacco', 'ldl', 'adiposity', 'famhist', 'typea', 'obesity', 'alcohol', 'age']
features = ['adiposity', 'age']
response = 'chd'
x_train = train[features]
y_train = train[response]
x_test = test[features]
y_test = test[response]
x_train = preprocessing.normalize(x_train)
x_test = preprocessing.normalize(x_test)
hidden_units = 10 # how many neurons in the hidden layer


activation = 'relu' # activation function for hidden layer


l2 = 0.01 # regularization - how much we penalize large parameter values
learning_rate = 0.01 # how big our steps are in gradient descent
epochs = 5 # how many epochs to train for
batch_size = 16 # how many samples to use for each gradient descent update
# create a sequential model
model = models.Sequential()

# add the hidden layer


model.add(layers.Dense(input_dim=len(features),
units=hidden_units,
activation=activation))

# add the output layer


model.add(layers.Dense(input_dim=hidden_units,
units=1,
activation='sigmoid'))

# define our loss function and optimizer


model.compile(loss='binary_crossentropy',
# Adam is a kind of gradient descent
optimizer=optimizers.Adam(learning_rate=learning_rate),
metrics=['accuracy'])
# train the parameters
history = model.fit(x_train, y_train, epochs=10, batch_size=batch_size)

# evaluate accuracy
train_acc = model.evaluate(x_train, y_train, batch_size=32)[1]
test_acc = model.evaluate(x_test, y_test, batch_size=32)[1]
print('Training accuracy: %s' % train_acc)
print('Testing accuracy: %s' % test_acc)

losses = history.history['loss']
plt.plot(range(len(losses)), losses, 'r')
plt.show()
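The assignment also asks for the accuracy curve; a short sketch, assuming the 'accuracy' metric was recorded in history as configured in model.compile above (the key is 'acc' in some older Keras versions):

accs = history.history['accuracy']
plt.plot(range(len(accs)), accs, 'b')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.show()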


### RUN IT AGAIN! ###


def train_and_evaluate(model, x_train, y_train, x_test, y_test, n=20):
train_accs = []
test_accs = []
with tqdm(total=n) as progress_bar:
for _ in range(n):
model.fit(
x_train,
y_train,
epochs=epochs,
batch_size=batch_size,
verbose=False)
train_accs.append(model.evaluate(x_train, y_train, batch_size=32, verbose=False)[1])
test_accs.append(model.evaluate(x_test, y_test, batch_size=32, verbose=False)[1])
progress_bar.update()
print('Average Training Accuracy: %s' % np.average(train_accs))
print('Average Testing Accuracy: %s' % np.average(test_accs))
return train_accs, test_accs
_, test_accs = train_and_evaluate(model, x_train, y_train, x_test, y_test)
plt.hist(test_accs)
plt.show()
print('Min: %s' % np.min(test_accs))
print('Max: %s' % np.max(test_accs))
hidden_units = 10 # how many neurons in the hidden layer
activation = 'relu' # activation function for hidden layer
l2 = 0.01 # regularization - how much we penalize large parameter values
learning_rate = 0.01 # how big our steps are in gradient descent
epochs = 5 # how many epochs to train for
batch_size = 16 # how many samples to use for each gradient descent update
# create a sequential model
model = models.Sequential()

# add the hidden layer


model.add(layers.Dense(input_dim=len(features),
                       units=hidden_units,
                       activation=activation))

# add the output layer


model.add(layers.Dense(input_dim=hidden_units,
units=1,
activation='sigmoid'))

# define our loss function and optimizer


model.compile(loss='binary_crossentropy',
# Adam is a kind of gradient descent
optimizer=optimizers.Adam(learning_rate=learning_rate),
metrics=['accuracy'])


Assignment No.3
Title: Build the Image classification model

Objective: Build the Image classification model by dividing the model into following 4 stages:
a. Loading and pre-processing the image data

b. Defining the model’s architecture

c. Training the model

d. Estimating the model’s performance


Theory :
Steps/ Algorithm
1. Choose a dataset of your interest, or you can also create your own image dataset (Ref: https://www.kaggle.com/datasets/). Import all necessary files.
(Ref: https://www.analyticsvidhya.com/blog/2021/01/image-classification-using-convolutional-neural-networks-a-step-by-step-guide/)
Libraries and functions required
1. TensorFlow, Keras
numpy: NumPy is a Python library used for working with arrays. It also has functions for working in the domains of linear algebra, Fourier transforms, and matrices. NumPy stands for Numerical Python. To import numpy use
import numpy as np
pandas: pandas is a fast, powerful, flexible and easy to use open source data analysis and manipulation tool, built on top of the Python programming language. To import pandas use
import pandas as pd
sklearn: Scikit-learn (sklearn) is the most useful and robust library for machine learning in Python. It provides a selection of efficient tools for machine learning and statistical modeling including classification, regression, clustering, and dimensionality reduction via a consistent interface in Python. This library, which is largely written in Python, is built upon NumPy, SciPy, and Matplotlib. For importing train_test_split use
from sklearn.model_selection import train_test_split


2. Prepare the dataset for training: preparing our dataset for training involves assigning paths, creating categories (labels), and resizing our images.
3. Create the training data: training data is an array that contains image pixel values and the index of the image's category in the CATEGORIES list.
4. Shuffle the dataset.
5. Assign labels and features.
6. Normalise X and convert the labels to categorical data.
7. Split X and Y for use in the CNN.
8. Define, compile, and train the CNN model.
9. Compute the accuracy and score of the model. (A sketch of this step follows the listing below.)

import matplotlib.pyplot as plt # for plotting


import numpy as np # for transformation

import torch # PyTorch package


import torchvision # load datasets
import torchvision.transforms as transforms # transform data
import torch.nn as nn # basic building block for neural networks
import torch.nn.functional as F # import convolution functions like Relu
import torch.optim as optim # optimizer


# python image library of range [0, 1]
# transform them to tensors of normalized range [-1, 1]

transform = transforms.Compose( # composing several transforms together


[transforms.ToTensor(), # to tensor object
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) # mean = 0.5, std = 0.5

# set batch_size
batch_size = 4

# set number of workers


num_workers = 2

# load train data


trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
shuffle=True, num_workers=num_workers)

# load test data


testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
shuffle=False, num_workers=num_workers)

# put 10 classes into a set


classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

def imshow(img):
''' function to show image '''
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy() # convert to numpy objects


plt.imshow(np.transpose(npimg, (1, 2, 0)))


plt.show()

# get random training images with iter function


dataiter = iter(trainloader)
images, labels = next(dataiter)

# call function on our images


imshow(torchvision.utils.make_grid(images))

# print the class of the image


print(' '.join('%s' % classes[labels[j]] for j in range(batch_size)))

class Net(nn.Module):
''' Models a simple Convolutional Neural Network'''

def __init__(self):
''' initialize the network '''
super(Net, self).__init__()
# 3 input image channel, 6 output channels,
# 5x5 square convolution kernel
self.conv1 = nn.Conv2d(3, 6, 5)
# Max pooling over a (2, 2) window
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)# 5x5 from image dimension
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)

def forward(self, x):


''' the forward propagation algorithm '''
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))


x = self.fc3(x)
return x

net = Net()
print(net)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()

for epoch in range(2):  # loop over the dataset multiple times

running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data

# zero the parameter gradients


optimizer.zero_grad()

# forward + backward + optimize


outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()

# print statistics
running_loss += loss.item()
if i % 2000 == 1999:    # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0

# whatever you are timing goes here


end.record()


# Waits for everything to finish running


torch.cuda.synchronize()

print('Finished Training')
print(start.elapsed_time(end)) # milliseconds

dataiter = iter(testloader)
images, labels = next(dataiter)

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%s' % classes[labels[j]] for j in range(4)))
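The listing ends with a qualitative check of ground-truth labels; a minimal sketch of step 9 (overall accuracy on the test set), reusing the net and testloader defined above:

# evaluate without tracking gradients
correct, total = 0, 0
with torch.no_grad():
    for images, labels in testloader:
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)   # class with the highest score
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print('Accuracy on the 10000 test images: %.2f %%' % (100 * correct / total))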


Assignment No.4
Title : ECG Anomaly detection using Autoencoders
Objective: Use Autoencoder to implement anomaly detection. Build the model by using:
a. Import required libraries
b. Upload / access the dataset
c. Encoder converts it into latent representation
d. Decoder networks convert it back to the original input
e. Compile the models with Optimizer, Loss, and Evaluation Metrics
Theory :
Steps/ Algorithm
1. Dataset link and libraries:
Dataset: http://storage.googleapis.com/download.tensorflow.org/data/ecg.csv
Libraries required:
Pandas and NumPy for data manipulation; TensorFlow/Keras for neural networks; the scikit-learn library for splitting the data into train-test samples and for some basic model evaluation.
For model building and evaluation, the following imports:
from sklearn.metrics import accuracy_score
from tensorflow.keras.optimizers import Adam
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.losses import MeanSquaredLogarithmicError

Ref: https://www.analyticsvidhya.com/blog/2021/05/anomaly-detection-using-autoencoders-a-walk-through-in-python/
a) Import the following from sklearn: i) MinMaxScaler (sklearn.preprocessing) ii) accuracy_score (sklearn.metrics) iii) train_test_split (sklearn.model_selection).
b) Import the following from tensorflow.keras: models, layers, optimizers, datasets, and set them to their respective values.
c) Download the required ecg.csv dataset.
d) Find the shape of the dataset.
e) Use train_test_split from sklearn to build the model (e.g. train_test_split(features, target, test_size=0.2, stratify=target)).
f) The use case is novelty detection, so build the training set from records whose target class is 1, i.e. the normal class.

g) Scale the data using MinMaxScaler.
h) Create an AutoEncoder subclass by extending the Model class from Keras (a sketch follows this list).
i) Select parameters as: i) encoder: 4 layers ii) decoder: 4 layers iii) activation function: ReLU iv) model: sequential.
j) Configure the model with the following parameters: epochs = 20, batch size = 512, and compile with mean squared logarithmic loss and the Adam optimizer, e.g.:
model = AutoEncoder(output_units=x_train_scaled.shape[1])
# configurations of model
model.compile(loss='msle', metrics=['mse'], optimizer='adam')
history = model.fit(x_train_scaled, x_train_scaled, epochs=20, batch_size=512,
                    validation_data=(x_test_scaled, x_test_scaled))
k) Plot loss and val_loss against epochs (MSLE loss).
l) Find the threshold for an anomaly and make predictions, e.g.:
def find_threshold(model, x_train_scaled):
    reconstructions = model.predict(x_train_scaled)
    # provides losses of individual instances
    reconstruction_errors = tf.keras.losses.msle(reconstructions, x_train_scaled)
    # threshold for anomaly scores
    threshold = np.mean(reconstruction_errors.numpy()) + np.std(reconstruction_errors.numpy())
    return threshold
m) Get the accuracy score.
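Step h) refers to an AutoEncoder subclass that the later sample code does not show; a minimal sketch matching the parameters in step i) (4 encoder layers, 4 decoder layers, ReLU activations, Sequential sub-models; the layer widths are illustrative):

from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Dense

class AutoEncoder(Model):
    def __init__(self, output_units, code_size=8):
        super().__init__()
        # encoder compresses the input down to the latent representation
        self.encoder = Sequential([
            Dense(64, activation='relu'),
            Dense(32, activation='relu'),
            Dense(16, activation='relu'),
            Dense(code_size, activation='relu'),
        ])
        # decoder reconstructs the original input from the latent code
        self.decoder = Sequential([
            Dense(16, activation='relu'),
            Dense(32, activation='relu'),
            Dense(64, activation='relu'),
            Dense(output_units, activation='sigmoid'),
        ])

    def call(self, inputs):
        encoded = self.encoder(inputs)
        return self.decoder(encoded)

This class plugs directly into the compile/fit configuration shown in step j).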

# read & manipulate data


import pandas as pd
import numpy as np
import tensorflow as tf

# visualisations
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='whitegrid', context='notebook')
%matplotlib notebook

# misc


import random as rn

# load the dataset


df = pd.read_csv('../input/creditcard.csv')

# manual parameters
RANDOM_SEED = 42
TRAINING_SAMPLE = 200000
VALIDATE_SIZE = 0.2

# setting random seeds for libraries to ensure reproducibility


np.random.seed(RANDOM_SEED)
rn.seed(RANDOM_SEED)
tf.random.set_seed(RANDOM_SEED)  # tf.set_random_seed() in TensorFlow 1.x

# let's quickly convert the columns to lower case and rename the Class column
# so as to not cause syntax errors
df.columns = map(str.lower, df.columns)
df.rename(columns={'class': 'label'}, inplace=True)

# print first 5 rows to get an initial impression of the data we're dealing with
df.head()
# add a negligible amount to avoid taking the log of 0
df['log10_amount'] = np.log10(df.amount + 0.00001)
# manual parameter
RATIO_TO_FRAUD = 15

# dropping redundant columns


df = df.drop(['time', 'amount'], axis=1)

# splitting by class
fraud = df[df.label == 1]
clean = df[df.label == 0]
# undersample clean transactions
clean_undersampled = clean.sample(
    int(len(fraud) * RATIO_TO_FRAUD),
    random_state=RANDOM_SEED
)
# concatenate with fraud transactions into a single dataframe
visualisation_initial = pd.concat([fraud, clean_undersampled])
column_names = list(visualisation_initial.drop('label', axis=1).columns)
# isolate features from labels
features, labels = visualisation_initial.drop('label', axis=1).values, \
                   visualisation_initial.label.values
# keep the label field at the back
df = df[
[col for col in df if col not in ['label', 'log10_amount']] +
['log10_amount', 'label']
]
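The sample code above covers data preparation only; a minimal sketch of steps l) and m) from the procedure, assuming the trained model, the find_threshold function from step l), and the scaled train/test splits with labels y_test:

import numpy as np
import tensorflow as tf
from sklearn.metrics import accuracy_score

threshold = find_threshold(model, x_train_scaled)
reconstructions = model.predict(x_test_scaled)
errors = tf.keras.losses.msle(reconstructions, x_test_scaled)
# an instance is predicted normal (class 1) when its error stays below the threshold
preds = (errors.numpy() < threshold).astype(int)
print('Accuracy:', accuracy_score(y_test, preds))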


Assignment No.5
Title : Implement the Continuous Bag of Words (CBOW) Model.

Objective: Implement the Continuous Bag of Words (CBOW) Model. Stages can be:
a. Data preparation
b. Generate training data
c. Train model
d. Output
Theory :
Steps/ Algorithm
1. Dataset and libraries:
Create any English paragraph of 5 to 10 sentences as input. Import the following from Keras:
from keras.models import Sequential
from keras.layers import Dense, Embedding, Lambda
from keras.utils import np_utils
from keras.preprocessing import sequence
from keras.preprocessing.text import Tokenizer
Import Gensim for NLP operations. Requirements: Gensim runs on Linux, Windows and Mac OS X, and should run on any other platform that supports Python 3.6+ and NumPy. Gensim depends on the following software: Python, tested with versions 3.6, 3.7 and 3.8, and NumPy for number crunching.
Ref: https://analyticsindiamag.com/the-continuous-bag-of-words-cbow-model-in-nlp-hands-on-implementation-with-codes/
a) Import gensim and numpy, and load the dataset, i.e. the text file created. It should be preprocessed.
b) Tokenize every word from the paragraph. You can call the built-in tokenizer present in Gensim.
c) Fit the data to the tokenizer.
d) Find the total number of words and the total number of sentences.
e) Generate the pairs of context words and target words, e.g.:
def cbow_model(data, window_size, total_vocab):
    total_length = window_size * 2
    for text in data:
        text_len = len(text)
        for idx, word in enumerate(text):
            context_word = []
            target = []
            begin = idx - window_size
            end = idx + window_size + 1
            context_word.append([text[i] for i in range(begin, end)
                                 if 0 <= i < text_len and i != idx])
            target.append(word)
            contextual = sequence.pad_sequences(context_word, maxlen=total_length)
            final_target = np_utils.to_categorical(target, total_vocab)
            yield (contextual, final_target)
f) Create a neural network model with the following parameters. Model type: sequential. Layers: Dense, Lambda, Embedding. Compile options: (loss='categorical_crossentropy', optimizer='adam').
g) Create a vector file of some words for testing, e.g. with dimensions = 100:
vect_file = open('/content/gdrive/My Drive/vectors.txt', 'w')
vect_file.write('{} {}\n'.format(total_vocab, dimensions))
h) Assign weights to your trained model, e.g.:
weights = model.get_weights()[0]
for text, i in vectorize.word_index.items():
    final_vec = ' '.join(map(str, list(weights[i, :])))
    vect_file.write('{} {}\n'.format(text, final_vec))
vect_file.close()
i) Use the vectors created in Gensim, e.g.:
cbow_output = gensim.models.KeyedVectors.load_word2vec_format(
    '/content/gdrive/My Drive/vectors.txt', binary=False)
j) Choose a word to get similar types of words:
cbow_output.most_similar(positive=['Your word'])
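The notebook transcript below begins at In [2]; the first cell is not reproduced, but the later cells rely on these imports. A minimal sketch of the missing cell:

In [1]:
import re                        # used in the Clean Data cell
import numpy as np               # used for the embeddings and training
import matplotlib.pyplot as plt  # used to plot the losses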

Dataset
In [2]:
sentences = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells."""
Clean Data
In [3]:
# remove special characters
sentences = re.sub('[^A-Za-z0-9]+', ' ', sentences)

# remove 1 letter words
sentences = re.sub(r'(?:^| )\w(?:$| )', ' ', sentences).strip()

# lower all characters


sentences = sentences.lower()
Vocabulary
In [4]:
words = sentences.split()
vocab = set(words)
In [5]:
vocab_size = len(vocab)
embed_dim = 10
context_size = 2
Implementation
Dictionaries
In [6]:
word_to_ix = {word: i for i, word in enumerate(vocab)}
ix_to_word = {i: word for i, word in enumerate(vocab)}
Data bags
In [7]:
# data - [(context), target]

data = []
for i in range(2, len(words) - 2):
context = [words[i - 2], words[i - 1], words[i + 1], words[i + 2]]
target = words[i]
data.append((context, target))
print(data[:5])
[(['we', 'are', 'to', 'study'], 'about'), (['are', 'about', 'study', 'the'], 'to'), (['about', 'to', 'the', 'idea'], 'study'), (['to', 'study', 'idea', 'of'], 'the'), (['study', 'the', 'of', 'computational'], 'idea')]
Embeddings
In [8]:
embeddings = np.random.random_sample((vocab_size, embed_dim))
Linear Model
In [9]:
def linear(m, theta):
w = theta
return m.dot(w)
Log softmax + NLLloss = Cross Entropy
In [10]:
def log_softmax(x):
e_x = np.exp(x - np.max(x))
return np.log(e_x / e_x.sum())
In [11]:
def NLLLoss(logs, targets):
out = logs[range(len(targets)), targets]
return -out.sum()/ len(out)
In [12]:
def log_softmax_crossentropy_with_logits(logits,target):


out = np.zeros_like(logits)
out[np.arange(len(logits)),target] = 1

softmax = np.exp(logits) / np.exp(logits).sum(axis=-1,keepdims=True)

return (- out + softmax) / logits.shape[0]


Forward function
In [13]:
def forward(context_idxs, theta):
m = embeddings[context_idxs].reshape(1, -1)
n = linear(m, theta)
o = log_softmax(n)

return m, n, o
Backward function
In [14]:
def backward(preds, theta, target_idxs):
m, n, o = preds

dlog = log_softmax_crossentropy_with_logits(n, target_idxs)


dw = m.T.dot(dlog)

return dw
Optimize function
In [15]:
def optimize(theta, grad, lr=0.03):
theta -= grad * lr
return theta
Training
In [16]:
theta = np.random.uniform(-1, 1, (2 * context_size * embed_dim, vocab_size))
In [17]:
epoch_losses = {}

for epoch in range(80):

losses = []

for context, target in data:


context_idxs = np.array([word_to_ix[w] for w in context])
preds = forward(context_idxs, theta)

target_idxs = np.array([word_to_ix[target]])
loss = NLLLoss(preds[-1], target_idxs)

losses.append(loss)

grad = backward(preds, theta, target_idxs)


theta = optimize(theta, grad, lr=0.03)

epoch_losses[epoch] = losses


Analyze
Plot loss/epoch
In [18]:
ix = np.arange(0,80)

fig = plt.figure()
fig.suptitle('Epoch/ Losses', fontsize=20)
plt.plot(ix,[epoch_losses[i][0] for i in ix])
plt.xlabel('Epochs', fontsize=12)
plt.ylabel('Losses', fontsize=12)
Out[18]:
Text(0, 0.5, 'Losses')

Predict function
In [19]:
def predict(words):
context_idxs = np.array([word_to_ix[w] for w in words])
preds = forward(context_idxs, theta)
word = ix_to_word[np.argmax(preds[-1])]

return word
In [20]:
# (['we', 'are', 'to', 'study'], 'about')
predict(['we', 'are', 'to', 'study'])
Out[20]:
'about'
Accuracy
In [21]:
def accuracy():
wrong = 0
for context, target in data:
if(predict(context) != target):
wrong += 1

return (1 - (wrong / len(data)))


In [22]:
accuracy()


Out[22]:
1.0


Assignment No.6
Title: Object detection using Transfer Learning of CNN architectures
Objective: Object detection using transfer learning of CNN architectures:
a. Load in a pre-trained CNN model trained on a large dataset
b. Freeze parameters (weights) in model’s lower convolutional layers
c. Add custom classifier with several layers of trainable parameters to model
d. Train classifier layers on training data available for task
e. Fine-tune hyper parameters and unfreeze more layers as needed
Theory :
Steps/ Algorithm
1. Dataset link and libraries:
Dataset: https://data.caltech.edu/records/mzrjq-6wc02
Separate the data into training, validation, and test directories, each containing one sub-directory per class, e.g.:
/test
    /class1
    /class2
Libraries required:
PyTorch
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
from torchvision import models
import torch.nn as nn
from torch import optim
Ref: https://towardsdatascience.com/transfer-learning-with-convolutional-neural-networks-in-pytorch-dd09190245ce
m) Prepare the dataset by splitting it into three directories (train, validation, and test) in a 50/25/25 ratio.
n) Pre-process the data with transforms from PyTorch. Training and validation dataset transformations as follows:
image_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(size=256, scale=(0.8, 1.0)),
        transforms.RandomRotation(degrees=15),
        transforms.ColorJitter(),
        transforms.RandomHorizontalFlip(),
        transforms.CenterCrop(size=224),  # ImageNet standards
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406],
                             [0.229, 0.224, 0.225])  # ImageNet standards
    ]),
    'valid': transforms.Compose([
        transforms.Resize(size=256),
        transforms.CenterCrop(size=224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406],
                             [0.229, 0.224, 0.225])
    ])
}
o) Create datasets and loaders (traindir and validdir are the train/validation directory paths we created):
data = {
    'train': datasets.ImageFolder(root=traindir, transform=image_transforms['train']),
    'valid': datasets.ImageFolder(root=validdir, transform=image_transforms['valid']),
}
dataloaders = {
    'train': DataLoader(data['train'], batch_size=batch_size, shuffle=True),
    'val': DataLoader(data['valid'], batch_size=batch_size, shuffle=True)
}

p) Load a pretrained model:
from torchvision import models
model = models.vgg16(pretrained=True)
q) Freeze all the model's weights:
for param in model.parameters():
    param.requires_grad = False
r) Add our own custom classifier with the following parameters: fully connected with ReLU activation, shape = (n_inputs, 256); dropout with 40% chance of dropping; fully connected with log softmax output, shape = (256, n_classes).
import torch.nn as nn

# Add on classifier
model.classifier[6] = nn.Sequential(
    nn.Linear(n_inputs, 256),
    nn.ReLU(),
    nn.Dropout(0.4),
    nn.Linear(256, n_classes),
    nn.LogSoftmax(dim=1))
s) Only train the sixth layer of the classifier; keep the remaining layers frozen:
Sequential(
  (0): Linear(in_features=25088, out_features=4096, bias=True)
  (1): ReLU(inplace)
  (2): Dropout(p=0.5)
  (3): Linear(in_features=4096, out_features=4096, bias=True)
  (4): ReLU(inplace)
  (5): Dropout(p=0.5)
  (6): Sequential(
    (0): Linear(in_features=4096, out_features=256, bias=True)
    (1): ReLU()
    (2): Dropout(p=0.4)
    (3): Linear(in_features=256, out_features=100, bias=True)
    (4): LogSoftmax()
  )
)

t) Initialize the loss and optimizer:
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters())

u) Train the model using PyTorch:
for epoch in range(n_epochs):
    for data, targets in trainloader:
        # Reset gradients accumulated in the previous step
        optimizer.zero_grad()
        # Generate predictions
        out = model(data)
        # Calculate loss
        loss = criterion(out, targets)
        # Backpropagation
        loss.backward()
        # Update model parameters
        optimizer.step()

v) Perform early stopping (a sketch follows this list).
w) Draw the performance curve.
x) Calculate accuracy:
_, pred = torch.max(ps, dim=1)
equals = pred == targets
# Calculate accuracy (cast the boolean tensor to float before averaging)
accuracy = torch.mean(equals.float())
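Step v) gives no code; a minimal sketch of validation-based early stopping, assuming the trainloader, model, criterion, and optimizer from the steps above and a hypothetical valid_loader built like dataloaders['val']:

def validate(model, valid_loader, criterion):
    # average validation loss over one pass, without tracking gradients
    model.eval()
    with torch.no_grad():
        losses = [criterion(model(x), y).item() for x, y in valid_loader]
    model.train()
    return sum(losses) / len(losses)

best_val_loss = float('inf')
patience, bad_epochs = 3, 0
for epoch in range(n_epochs):
    for data, targets in trainloader:
        optimizer.zero_grad()
        loss = criterion(model(data), targets)
        loss.backward()
        optimizer.step()
    val_loss = validate(model, valid_loader, criterion)
    if val_loss < best_val_loss:
        best_val_loss, bad_epochs = val_loss, 0
        torch.save(model.state_dict(), 'best_model.pt')   # keep the best weights
    else:
        bad_epochs += 1
        if bad_epochs >= patience:   # stop once validation loss stops improving
            break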
import tensorflow
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.layers import Flatten, Dense, Dropout

base_model = VGG16(input_shape=(224, 224, 3), include_top=False, weights='imagenet')
# we don't have to train all the layers, so we make them non-trainable
for layer in base_model.layers:
    layer.trainable = False
# Flatten the output layer to 1 dimension
x = Flatten()(base_model.output)
# Add a fully connected layer with 512 hidden units and ReLU activation
x = Dense(512, activation='relu')(x)
# Add a dropout rate of 0.5 or any acceptable value as per your need/complexity
x = Dropout(0.5)(x)
# Add a final sigmoid layer for classification
x = Dense(1, activation='sigmoid')(x)  # use softmax if there are more than 2 classes
model = tensorflow.keras.models.Model(base_model.input, x)


Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
58892288/58889256 [==============================] - 0s 0us/step
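The VGG16 snippet above only assembles the model; a short sketch of compiling and training it, where train_generator and validation_generator are hypothetical data iterators (e.g. built with ImageDataGenerator.flow_from_directory):

# binary crossentropy matches the single sigmoid output above
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(train_generator, validation_data=validation_generator, epochs=5)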
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.models import Model

tailModel = base_model.output
tailModel = AveragePooling2D(pool_size=(7, 7))(tailModel)
tailModel = Flatten(name="flatten")(tailModel)
tailModel = Dense(256, activation="relu")(tailModel)
tailModel = Dropout(0.2)(tailModel)
tailModel = Dense(1, activation="sigmoid")(tailModel)  # sigmoid: softmax over a single unit would always output 1
# place the head FC model on top of the base model (this will become
# the actual model we will train)
resmodel = Model(inputs=base_model.input, outputs=tailModel)
# loop over all layers in the base model and freeze them;
# layers not frozen will be updated during the training process
for layer in base_model.layers:
    layer.trainable = False
mobile = tensorflow.keras.applications.mobilenet.MobileNet()
x = mobile.layers[-6].output
x = Dropout(0.20)(x)
predictions = Dense(1, activation='sigmoid')(x)
# Combine the pre-trained MobileNet model with the predictions layer, connecting input and output layers for our purpose
mob_model = Model(inputs=mobile.input, outputs=predictions)
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/mobilenet/mobilenet_1_0_224_tf.h5
17227776/17225924 [==============================] - 0s 0us/step
# Number of layers in pre-trained model and tuned model
print('Number of layers in MobileNet_v1 is ' + str(len(mobile.layers)))
# We have removed 5 layers from the original ImageNet model and appended 2 layers on top of it. Let's check: 92 - 5 + 2 = 89
print('Number of layers in Tuned MobileNet model is ' + str(len(mob_model.layers)))
Number of layers in MobileNet_v1 is 92
Number of layers in Tuned MobileNet model is 89
from tensorflow.keras.applications.inception_v3 import InceptionV3
base_model = InceptionV3(input_shape=(150, 150, 3), include_top=False, weights='imagenet')
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/inception_v3/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
87916544/87910968 [==============================] - 1s 0us/step
for layer in base_model.layers:
layer.trainable = False
x = Flatten()(base_model.output)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.2)(x)
x = Dense(1, activation='sigmoid')(x)
inception_model = tensorflow.keras.models.Model(base_model.input, x)
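Step e) of the objective asks to unfreeze more layers as needed; a minimal sketch of that fine-tuning stage for the InceptionV3 model above (the choice of the top 30 layers and the 1e-5 learning rate are illustrative, not from the manual):

# unfreeze the top of the base model and re-compile with a low learning rate
for layer in base_model.layers[-30:]:
    layer.trainable = True
inception_model.compile(
    optimizer=tensorflow.keras.optimizers.Adam(learning_rate=1e-5),
    loss='binary_crossentropy',
    metrics=['accuracy'])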


************************

