mnist-CNN.ipynb - Colaboratory

This notebook loads and preprocesses the MNIST dataset for classifying handwritten digits. It defines a convolutional neural network with two convolution and max-pooling blocks, followed by flattening, dropout, and a dense softmax layer. The model is compiled, trained for 15 epochs, and reaches over 99% accuracy on the held-out test set.



import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

mnist_dataset = tf.keras.datasets.mnist.load_data(path="mnist.npz")
 
(x_train, y_train), (x_test, y_test) = mnist_dataset
 
print(x_train.shape)
print(y_train.shape)
 
print(x_test.shape)
print(y_test.shape)
num_classes = 10
input_shape = (28, 28, 1)

x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255

# Make sure images have shape (28, 28, 1)
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)

print("x_train shape:", x_train.shape)
print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print(y_train)


(60000, 28, 28)
(60000,)
(10000, 28, 28)
(10000,)
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
[[0. 0. 0. ... 0. 0. 0.]
 [1. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 1. 0.]]
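
Each row printed above is a one-hot vector: to_categorical maps an integer label to a length-10 indicator row. As a minimal illustration (toy labels, not taken from the dataset):

# keras.utils.to_categorical turns integer labels into one-hot rows.
print(keras.utils.to_categorical([0, 3], num_classes=10))
# [[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]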

model = keras.Sequential(
    [
        keras.Input(shape=input_shape),
        layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ]
)

model.summary()

Model: "sequential"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 conv2d (Conv2D)                 (None, 26, 26, 32)        320
 max_pooling2d (MaxPooling2D)    (None, 13, 13, 32)        0
 conv2d_1 (Conv2D)               (None, 11, 11, 64)        18496
 max_pooling2d_1 (MaxPooling2D)  (None, 5, 5, 64)          0
 flatten (Flatten)               (None, 1600)              0
 dropout (Dropout)               (None, 1600)              0
 dense (Dense)                   (None, 10)                16010
=================================================================
Total params: 34,826
Trainable params: 34,826
Non-trainable params: 0
_________________________________________________________________
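
The parameter counts in the summary can be checked by hand; this short sketch (not part of the original notebook) reproduces them from the layer shapes:

# Conv2D params = kernel_h * kernel_w * in_channels * filters + filters (biases)
conv1 = 3 * 3 * 1 * 32 + 32      # 320
conv2 = 3 * 3 * 32 * 64 + 64     # 18496
# Dense params = inputs * units + units, with 5 * 5 * 64 = 1600 flattened inputs
dense = 1600 * 10 + 10           # 16010
print(conv1 + conv2 + dense)     # 34826, matching "Total params"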

batch_size = 128
epochs = 15

model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)

Epoch 1/15
422/422 [==============================] - 43s 100ms/step - loss: 0.3684 - accuracy: 0.8887 - val_loss: 0.0835 - val_accuracy:
Epoch 2/15
422/422 [==============================] - 41s 98ms/step - loss: 0.1142 - accuracy: 0.9654 - val_loss: 0.0617 - val_accuracy: 0
Epoch 3/15
422/422 [==============================] - 41s 97ms/step - loss: 0.0863 - accuracy: 0.9733 - val_loss: 0.0518 - val_accuracy: 0
Epoch 4/15
422/422 [==============================] - 41s 97ms/step - loss: 0.0747 - accuracy: 0.9773 - val_loss: 0.0431 - val_accuracy: 0
Epoch 5/15
422/422 [==============================] - 41s 97ms/step - loss: 0.0637 - accuracy: 0.9804 - val_loss: 0.0398 - val_accuracy: 0
Epoch 6/15
422/422 [==============================] - 41s 98ms/step - loss: 0.0579 - accuracy: 0.9815 - val_loss: 0.0394 - val_accuracy: 0
Epoch 7/15
422/422 [==============================] - 41s 98ms/step - loss: 0.0535 - accuracy: 0.9830 - val_loss: 0.0348 - val_accuracy: 0
Epoch 8/15
422/422 [==============================] - 42s 99ms/step - loss: 0.0490 - accuracy: 0.9850 - val_loss: 0.0352 - val_accuracy: 0
Epoch 9/15
422/422 [==============================] - 42s 99ms/step - loss: 0.0460 - accuracy: 0.9856 - val_loss: 0.0319 - val_accuracy: 0
Epoch 10/15
422/422 [==============================] - 42s 100ms/step - loss: 0.0416 - accuracy: 0.9871 - val_loss: 0.0334 - val_accuracy:
Epoch 11/15
422/422 [==============================] - 42s 100ms/step - loss: 0.0403 - accuracy: 0.9865 - val_loss: 0.0341 - val_accuracy:
Epoch 12/15
422/422 [==============================] - 42s 100ms/step - loss: 0.0400 - accuracy: 0.9872 - val_loss: 0.0341 - val_accuracy:
Epoch 13/15
422/422 [==============================] - 42s 99ms/step - loss: 0.0381 - accuracy: 0.9874 - val_loss: 0.0323 - val_accuracy: 0
Epoch 14/15
422/422 [==============================] - 42s 100ms/step - loss: 0.0344 - accuracy: 0.9888 - val_loss: 0.0317 - val_accuracy:
Epoch 15/15
422/422 [==============================] - 42s 99ms/step - loss: 0.0337 - accuracy: 0.9892 - val_loss: 0.0302 - val_accuracy: 0
<keras.callbacks.History at 0x7fc06ca2da10>
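
The final line is the History object that fit returns. The notebook discards it, but assigning it would expose the per-epoch metrics logged above; a minimal sketch (hypothetical, not in the original):

# Hypothetical: capture fit's return value to inspect per-epoch metrics.
history = model.fit(x_train, y_train, batch_size=batch_size,
                    epochs=epochs, validation_split=0.1)
print(history.history.keys())               # includes 'loss', 'accuracy', 'val_loss', 'val_accuracy'
print(history.history["val_accuracy"][-1])  # final-epoch validation accuracy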

score = model.evaluate(x_test, y_test, verbose=0)
print("Test loss:", score[0])
print("Test accuracy:", score[1])

Test loss: 0.025434585288167

Test accuracy: 0.9914000034332275
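
For inference with the trained model, predictions come from model.predict; a minimal sketch (not in the original notebook) that assumes x_test and y_test as prepared above:

probs = model.predict(x_test[:5])        # shape (5, 10): softmax probabilities per digit
preds = np.argmax(probs, axis=-1)        # predicted class per image
labels = np.argmax(y_test[:5], axis=-1)  # recover integer labels from one-hot rows
print(preds, labels)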

