Deep Learning Lab Manual


1. Build a Convolutional Neural Network for image recognition

Convolutional Neural Networks:

A Convolutional Neural Network (CNN) is a type of deep learning algorithm that is particularly well-suited for image recognition and processing tasks. It is made up of multiple layers, including convolutional layers, pooling layers, and fully connected layers.

The convolutional layers are the key components of a CNN: filters are applied to the input image to extract features such as edges, textures, and shapes. The output of the convolutional layers is passed through pooling layers, which down-sample the feature maps, reducing the spatial dimensions while retaining the most important information. The pooled feature maps are then passed through one or more fully connected layers, which produce the final prediction or classification.
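
To make the convolution and pooling operations concrete, the short sketch below (an illustrative addition, not part of the prescribed program) applies an assumed 3x3 vertical-edge filter to a tiny grayscale image using NumPy, then down-samples the resulting feature map with 2x2 max pooling. The image values, the kernel, and the helper functions convolve2d and max_pool2d are examples chosen only for illustration.

import numpy as np

# A tiny 6x6 grayscale "image" (illustrative values only)
image = np.array([
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
], dtype=float)

# An assumed 3x3 vertical-edge filter
kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

def convolve2d(img, k):
    """Valid (no padding) 2-D convolution with stride 1."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

def max_pool2d(fmap, size=2):
    """Non-overlapping max pooling with a size x size window."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = fmap[i*size:(i+1)*size, j*size:(j+1)*size].max()
    return out

feature_map = convolve2d(image, kernel)   # responds strongly along the vertical edge
pooled = max_pool2d(feature_map)          # spatial dimensions halved
print(feature_map.shape, pooled.shape)    # (4, 4) (2, 2)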

Program:

import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt

# Load and normalise the CIFAR-10 dataset
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0

# Verify the data
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i])
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()

# Build the convolutional base
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.summary()

# Add the classifier on top
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
model.summary()

# Compile and train the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))

# Evaluate the model
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')

test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)

Output:

Model: "sequential_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_12 (Conv2D) (None, 30, 30, 32) 896

max_pooling2d_8 (MaxPoolin (None, 15, 15, 32) 0


g2D)

conv2d_13 (Conv2D) (None, 13, 13, 64) 18496

max_pooling2d_9 (MaxPoolin (None, 6, 6, 64) 0


g2D)

conv2d_14 (Conv2D) (None, 4, 4, 64) 36928

=================================================================
Total params: 56320 (220.00 KB)
Trainable params: 56320 (220.00 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Model: "sequential_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_12 (Conv2D) (None, 30, 30, 32) 896

max_pooling2d_8 (MaxPoolin (None, 15, 15, 32) 0


g2D)

conv2d_13 (Conv2D) (None, 13, 13, 64) 18496

max_pooling2d_9 (MaxPoolin (None, 6, 6, 64) 0


g2D)

conv2d_14 (Conv2D) (None, 4, 4, 64) 36928

flatten_2 (Flatten) (None, 1024) 0

dense_4 (Dense) (None, 64) 65600

dense_5 (Dense) (None, 10) 650

=================================================================
Total params: 122570 (478.79 KB)
Trainable params: 122570 (478.79 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Epoch 1/10
1563/1563 [==============================] - 44s 28ms/step - loss: 1.4816 - accuracy: 0.4609 - val_loss: 1.1872 - val_accuracy: 0.5808
Epoch 2/10
1563/1563 [==============================] - 41s 26ms/step - loss: 1.0984 - accuracy: 0.6108 - val_loss: 1.0154 - val_accuracy: 0.6423
Epoch 3/10
1563/1563 [==============================] - 43s 28ms/step - loss: 0.9516 - accuracy: 0.6650 - val_loss: 0.9414 - val_accuracy: 0.6725
Epoch 4/10
1563/1563 [==============================] - 44s 28ms/step - loss: 0.8539 - accuracy: 0.7020 - val_loss: 0.9008 - val_accuracy: 0.6860
Epoch 5/10
1563/1563 [==============================] - 40s 26ms/step - loss: 0.7811 - accuracy: 0.7267 - val_loss: 0.8647 - val_accuracy: 0.6992
Epoch 6/10
1563/1563 [==============================] - 39s 25ms/step - loss: 0.7182 - accuracy: 0.7490 - val_loss: 0.8541 - val_accuracy: 0.7063
Epoch 7/10
1563/1563 [==============================] - 40s 26ms/step - loss: 0.6676 - accuracy: 0.7650 - val_loss: 0.8932 - val_accuracy: 0.6989
Epoch 8/10
1563/1563 [==============================] - 45s 29ms/step - loss: 0.6230 - accuracy: 0.7812 - val_loss: 0.8419 - val_accuracy: 0.7144
Epoch 9/10
1563/1563 [==============================] - 46s 30ms/step - loss: 0.5801 - accuracy: 0.7961 - val_loss: 0.8482 - val_accuracy: 0.7213
Epoch 10/10
1563/1563 [==============================] - 43s 28ms/step - loss: 0.5340 - accuracy: 0.8109 - val_loss: 0.9031 - val_accuracy: 0.7210
313/313 - 2s - loss: 0.9031 - accuracy: 0.7210 - 2s/epoch - 7ms/step
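
As an optional follow-up (not part of the original program), the trained network's raw logits can be converted to class probabilities with a softmax layer in order to classify individual test images. The sketch below assumes the model, test_images, test_labels, and class_names variables defined in the program above.

import numpy as np

# Attach a softmax so the model outputs probabilities instead of logits
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])

# Classify the first test image and compare with its true label
predictions = probability_model.predict(test_images[:1])
print("Predicted:", class_names[int(np.argmax(predictions[0]))])
print("Actual:", class_names[int(test_labels[0][0])])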
2. Understanding and Using ANN: Identifying the age group of an actor
