AI Lab 12 Lab Tasks - 39

The document discusses experiments with convolutional neural network architectures for image classification. It has students try different numbers of convolutional layers and filters to see the impact on accuracy and training time. Removing layers generally reduces accuracy, while adding layers often improves results at the cost of longer training.


Lab Session -12

Artificial Intelligence (Comp – 00634)



Objective: Understanding Convolutional Neural Networks

A- Outcomes:
After completion of the lab session students will be able to:
a. Understand shallow neural networks
b. Understand convolutional neural networks
c. Understand the visualization of convolutions and pooling
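Before working through the Keras tasks below, it can help to see what a convolution and a pooling step actually compute. The following sketch is purely illustrative (it is not part of the lab code and uses no TensorFlow): a single-channel 3x3 "valid" convolution with a hand-written vertical-edge kernel, followed by 2x2 max-pooling, on a tiny 6x6 image.

```python
# Illustrative sketch (not part of the lab code): a 3x3 "valid"
# convolution and a 2x2 max-pool on a single-channel image, in plain Python.

def conv2d_valid(image, kernel):
    """Slide a 3x3 kernel over the image with stride 1 and no padding."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(3) for dj in range(3))
            row.append(s)
        out.append(row)
    return out

def maxpool2x2(fmap):
    """Take the max of each non-overlapping 2x2 window."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A 6x6 image with a vertical edge between columns 2 and 3.
image = [[0, 0, 0, 9, 9, 9]] * 6

# A vertical-edge detector: responds where intensity rises left to right.
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

fmap = conv2d_valid(image, kernel)   # 6x6 -> 4x4 feature map
pooled = maxpool2x2(fmap)            # 4x4 -> 2x2 after pooling
print(pooled)                        # -> [[27, 27], [27, 27]]
```

The same shape arithmetic (a 3x3 valid convolution shrinks each dimension by 2, a 2x2 pool halves it) governs the 28x28 Fashion MNIST images used in the tasks below.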


Maaz Bin Fazal

F21BETEN1M01039
B- Lab Tasks:
1- Try editing the convolutions. Change the 32s to either 16 or 64. What impact will this
have on accuracy and/or training time?
Write/copy your code here:
Code:
import tensorflow as tf

# Load the Fashion MNIST dataset
fmnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fmnist.load_data()

# Normalize the pixel values to [0, 1]
training_images = training_images / 255.0
test_images = test_images / 255.0

# Reshape the images to include the channel dimension
training_images = training_images.reshape(-1, 28, 28, 1)
test_images = test_images.reshape(-1, 28, 28, 1)

# Build the model (baseline: 32 filters in each convolution)
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
print('\nModel Training')
model.fit(training_images, training_labels, epochs=5)

# Evaluate the model
print('\nModel Evaluation')
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f'Test accuracy: {test_acc}')

Output:

Code:
import tensorflow as tf

# Load the Fashion MNIST dataset
fmnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fmnist.load_data()

# Normalize the pixel values to [0, 1]
training_images = training_images / 255.0
test_images = test_images / 255.0

# Reshape the images to include the channel dimension
training_images = training_images.reshape(-1, 28, 28, 1)
test_images = test_images.reshape(-1, 28, 28, 1)

# Build the model (16 filters in each convolution)
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
print('\nModel Training')
model.fit(training_images, training_labels, epochs=5)

# Evaluate the model
print('\nModel Evaluation')
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f'Test accuracy: {test_acc}')

Output:

Code:
import tensorflow as tf

# Load the Fashion MNIST dataset
fmnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fmnist.load_data()

# Normalize the pixel values to [0, 1]
training_images = training_images / 255.0
test_images = test_images / 255.0

# Reshape the images to include the channel dimension
training_images = training_images.reshape(-1, 28, 28, 1)
test_images = test_images.reshape(-1, 28, 28, 1)

# Build the model (64 filters in each convolution)
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
print('\nModel Training')
model.fit(training_images, training_labels, epochs=5)

# Evaluate the model
print('\nModel Evaluation')
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f'Test accuracy: {test_acc}')

Output:
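One way to see why the filter count changes training time is to count the trainable weights in each convolution. The sketch below is an illustration, independent of the lab code and of TensorFlow; it applies the standard Conv2D parameter formula, (kernel_h * kernel_w * in_channels + 1 bias) * filters, to the three variants tried above.

```python
# Trainable parameters of a Conv2D layer:
# (kernel_h * kernel_w * in_channels + 1 bias) * filters
def conv2d_params(kernel, in_channels, filters):
    return (kernel * kernel * in_channels + 1) * filters

for f in (16, 32, 64):
    first = conv2d_params(3, 1, f)    # first conv: 1 input channel (grayscale)
    second = conv2d_params(3, f, f)   # second conv: f input channels
    print(f"{f} filters: conv1={first}, conv2={second}, total={first + second}")
```

The second convolution grows roughly quadratically with the filter count (2,320 weights at 16 filters vs 36,928 at 64), which is why the 64-filter model trains noticeably slower, while the accuracy gain on Fashion MNIST is often modest.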


2- Remove the final convolution. What impact will this have on accuracy or training
time?
Write/copy your code here:
Code:
import tensorflow as tf

# Load the Fashion MNIST dataset
fmnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fmnist.load_data()

# Normalize the pixel values to [0, 1]
training_images = training_images / 255.0
test_images = test_images / 255.0

# Reshape the images to include the channel dimension
training_images = training_images.reshape(-1, 28, 28, 1)
test_images = test_images.reshape(-1, 28, 28, 1)

# Build the model with the final convolution removed
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    # Uncomment the following lines to restore the second convolutional
    # layer and pooling layer:
    # tf.keras.layers.Conv2D(16, (3, 3), activation='relu'),
    # tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
print('\nModel Training')
model.fit(training_images, training_labels, epochs=5)

# Evaluate the model
print('\nModel Evaluation')
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f'Test accuracy: {test_acc}')

Output:
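Removing the second convolution does not necessarily make the model smaller: the feature map reaching Flatten is larger, so the first Dense layer grows. A quick back-of-the-envelope check (an illustrative plain-Python sketch, using the shape rules for 'valid' 3x3 convolutions and 2x2 pooling on the 28x28 inputs):

```python
def conv_valid(size):
    """A 3x3 'valid' convolution shrinks each spatial dimension by 2."""
    return size - 2

def pool(size):
    """A 2x2 max-pool halves each spatial dimension (floor division)."""
    return size // 2

# One conv + pool (this task): 28 -> 26 -> 13
one = pool(conv_valid(28))
flat_one = one * one * 32              # 13*13*32 = 5408 values into Flatten

# Two conv + pool (task 1 baseline): 28 -> 26 -> 13 -> 11 -> 5
two = pool(conv_valid(pool(conv_valid(28))))
flat_two = two * two * 32              # 5*5*32 = 800 values into Flatten

print("Flatten size:", flat_one, "vs", flat_two)
print("Dense(128) weights:", (flat_one + 1) * 128, "vs", (flat_two + 1) * 128)
```

So each epoch may actually be slower without the second convolution, since the Dense layer has roughly 692k weights instead of about 103k.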

3- How about adding more convolutions? What impact do you think this will have?
Experiment with it.
Write/copy your code here:
Code:
import tensorflow as tf

# Load the Fashion MNIST dataset
fmnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fmnist.load_data()

# Normalize the pixel values to [0, 1]
training_images = training_images / 255.0
test_images = test_images / 255.0

# Reshape the images to include the channel dimension
training_images = training_images.reshape(-1, 28, 28, 1)
test_images = test_images.reshape(-1, 28, 28, 1)

# Build the model with a third convolutional layer added
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
print('\nModel Training')
model.fit(training_images, training_labels, epochs=5)

# Evaluate the model
print('\nModel Evaluation')
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f'Test accuracy: {test_acc}')

Output:
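With three conv/pool pairs the spatial dimensions shrink quickly, and a fourth pair would not even fit, since a 'valid' 3x3 convolution needs at least a 3x3 input. Tracing the shapes (an illustrative plain-Python sketch, matching the layer sizes used above):

```python
# Trace the spatial size through three 3x3 'valid' conv + 2x2 pool pairs,
# starting from a 28x28 Fashion MNIST image.
size = 28
for name, filters in (("conv1", 16), ("conv2", 32), ("conv3", 64)):
    size -= 2          # 3x3 'valid' convolution: -2 per dimension
    conv_size = size
    size //= 2         # 2x2 max-pooling: halve (floor)
    print(f"{name}: {conv_size}x{conv_size} -> pool -> {size}x{size}x{filters}")

# The final feature map is 1x1x64, so Flatten sees only 64 values.
print("flatten size:", size * size * 64)
```

A 1x1 final feature map throws away almost all spatial information, which is one reason stacking ever more conv/pool pairs on small images tends to hurt accuracy rather than help.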


4- Remove all convolutions but the first. What impact do you think this will have?
Experiment with it.
Write/copy your code here:
Code:
import tensorflow as tf

# Load and prepare the Fashion MNIST dataset as in the earlier tasks
fmnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fmnist.load_data()
training_images = training_images.reshape(-1, 28, 28, 1) / 255.0
test_images = test_images.reshape(-1, 28, 28, 1) / 255.0

# Build the model with only the first convolution kept
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

print('\nModel Training')
model.fit(training_images, training_labels, epochs=5)

# evaluate() returns [loss, accuracy] because metrics=['accuracy']
print('\nModel Evaluation')
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f'Test loss: {test_loss}, test accuracy: {test_acc}')

Output:


5- In the previous lab you implemented a callback to check on the loss function and to
cancel training once the loss dropped below a certain threshold. Implement that here.
Write/copy your code here:
Code:
import tensorflow as tf

class MyCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        # Stop training once the loss drops below 0.4
        if logs is not None and logs.get('loss') < 0.4:
            print("\nLoss is low so cancelling training!")
            self.model.stop_training = True

callbacks = MyCallback()  # Instantiate the callback

# Train the model from the earlier tasks with the callback attached
model.fit(training_images, training_labels, epochs=5, callbacks=[callbacks])
Output:


6- Write a conclusion of this lab in your own words.

Write your answer here by hand:

