CNN Slides KD

Convolutional neural networks (CNNs) are a type of neural network that can learn image features at multiple levels of abstraction. CNNs use convolutional layers that apply filters in a sliding-window fashion across the input to detect patterns, followed by pooling layers that reduce the spatial size of representations. This allows CNNs to take advantage of the spatial structure of images and learn translation-invariant features. CNNs have been very successful in computer vision tasks like image classification, object detection, and segmentation.


Convolutional Neural Network (CNN)

Khushal Das – F2020313013

Sohaima Inam – F2020313020


Why use a CNN for images?
Consider learning from an image:

• Some patterns are much smaller than the whole image. A detector for such a pattern (e.g. a "beak" detector) only needs to look at a small region, so it can be represented with fewer parameters.

• The same pattern appears in different places. An "upper-left beak" detector and a "middle beak" detector do the same job, so rather than training a lot of such "small" detectors separately, they can be compressed to the same parameters: one shared detector "moves around" the image.
A convolutional layer
A CNN is a neural network with some convolutional layers (and some other layers). A convolutional layer has a number of filters that perform the convolution operation.

A filter acts as a pattern detector (e.g. the beak detector above).
Convolution vs. fully connected

The 6x6 input image from the slide:

1 0 0 0 0 1
0 1 0 0 1 0
0 0 1 1 0 0
1 0 0 0 1 0
0 1 0 0 1 0
0 0 1 0 1 0

Two 3x3 filters convolved with the image:

Filter 1          Filter 2
 1 -1 -1          -1  1 -1
-1  1 -1          -1  1 -1
-1 -1  1          -1  1 -1

In a fully connected layer, the same image would be flattened into a 36-dimensional vector x1, x2, ..., x36, with every input connected to every neuron. Convolution is a constrained version of this: each output looks only at a small patch, and the same filter weights are reused at every position, so far fewer parameters are needed.
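To make the sliding-window operation concrete, here is a minimal NumPy sketch (NumPy is an assumption; the slides' own code is in the Colab notebook referenced later) that convolves the 6x6 image above with Filter 1 at stride 1 and contrasts the parameter counts:

import numpy as np

# The 6x6 image and the diagonal "Filter 1" from the slide above.
image = np.array([[1, 0, 0, 0, 0, 1],
                  [0, 1, 0, 0, 1, 0],
                  [0, 0, 1, 1, 0, 0],
                  [1, 0, 0, 0, 1, 0],
                  [0, 1, 0, 0, 1, 0],
                  [0, 0, 1, 0, 1, 0]])
filter1 = np.array([[ 1, -1, -1],
                    [-1,  1, -1],
                    [-1, -1,  1]])

def conv2d(img, kernel, stride=1):
    # Slide the kernel over the image (no padding) and sum elementwise products.
    kh, kw = kernel.shape
    out_h = (img.shape[0] - kh) // stride + 1
    out_w = (img.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = img[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

print(conv2d(image, filter1))   # 4x4 feature map; the largest values mark diagonal patterns
# The filter has only 3*3 = 9 shared weights. A fully connected layer on the
# flattened 36-pixel input (x1..x36) needs 36 weights per output neuron.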
Convolutional neural network for image recognition
Dense neural network and convolutional neural network
A simple CNN structure

CONV: convolutional kernel layer
RELU: activation function
POOL: dimension-reduction (pooling) layer
FC: fully connected layer
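As a minimal sketch of the CONV -> RELU -> POOL -> FC pipeline above (Keras is an assumption; the layer sizes are illustrative, not taken from the slides):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, kernel_size=3, activation="relu"),  # CONV + RELU
    tf.keras.layers.MaxPooling2D(pool_size=2),                    # POOL
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),              # FC
])
model.summary()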
Convolutional kernel
(Animated illustration: a kernel sliding over the input.)

Convolutional kernel
Padding: the input volume is padded with zeros in such a way that the conv layer does not alter the spatial dimensions of the input.
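A small sketch of the output-size arithmetic behind zero padding (the formula is standard; the specific numbers are illustrative):

def conv_output_size(n, kernel, pad, stride):
    # Spatial output size of a conv layer: (n - kernel + 2*pad) // stride + 1.
    return (n - kernel + 2 * pad) // stride + 1

print(conv_output_size(28, 3, 0, 1))  # 26: without padding the feature map shrinks
print(conv_output_size(28, 3, 1, 1))  # 28: zero padding of 1 keeps the spatial size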
Rectified linear unit (ReLU)

Pooling layer
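A minimal NumPy sketch of both layers: ReLU zeroes out negative responses, and 2x2 max pooling keeps the strongest response in each block, halving the spatial size (the example feature map is made up for illustration):

import numpy as np

def relu(x):
    # Rectified linear unit: negative responses are set to zero.
    return np.maximum(0, x)

def max_pool_2x2(fmap):
    # 2x2 max pooling with stride 2: keep the strongest response in each block.
    h, w = fmap.shape
    return fmap[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[ 3., -1.,  0.,  2.],
                 [ 1.,  0., -2.,  1.],
                 [-1.,  2.,  3.,  0.],
                 [ 0.,  1.,  1., -3.]])
print(max_pool_2x2(relu(fmap)))   # 2x2 output: half the height and width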
MNIST dataset
• A database of handwritten digits.
• It has a training set of 60,000 examples and a test set of 10,000 examples.
• The digits have been size-normalized and centered in a fixed-size image.
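Assuming Keras (not stated in the slides), the dataset can be loaded directly:

import tensorflow as tf

# 60,000 training and 10,000 test images of 28x28 handwritten digits.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape, y_train.shape)   # (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)     # (10000, 28, 28) (10000,)
x_train = x_train[..., None] / 255.0  # add a channel axis and scale to [0, 1]
x_test = x_test[..., None] / 255.0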
LeNet-5 for MNIST
Case studies

1. LeNet-5
• One of the first successful applications of convolutional networks, developed by Yann LeCun in the 1990s.
• It was proposed in the research paper "Gradient-Based Learning Applied to Document Recognition".
• The architecture was used for recognizing handwritten and machine-printed characters.
• The network has 5 layers with learnable parameters and is hence named LeNet-5 (a rough Keras sketch follows below).
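A rough LeNet-5-style sketch in Keras (an assumption; the original 1998 network used 32x32 inputs, subsampling layers and an RBF output layer, so this only approximates the 5 learnable layers: two convolutional and three fully connected):

import tensorflow as tf

# conv(6) -> pool -> conv(16) -> pool -> fc(120) -> fc(84) -> fc(10)
lenet5 = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(6, kernel_size=5, padding="same", activation="tanh"),
    tf.keras.layers.AveragePooling2D(pool_size=2),
    tf.keras.layers.Conv2D(16, kernel_size=5, activation="tanh"),
    tf.keras.layers.AveragePooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(120, activation="tanh"),
    tf.keras.layers.Dense(84, activation="tanh"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
lenet5.summary()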
2. AlexNet
• The first work that popularized convolutional networks in computer vision, developed by Alex Krizhevsky, Ilya Sutskever and Geoff Hinton.
• It was submitted to the ImageNet ILSVRC challenge in 2012 and significantly outperformed the runner-up (top-5 error of 16%, compared with 26% for the runner-up).
• The network had an architecture very similar to LeNet, but was deeper and bigger, and featured convolutional layers stacked on top of each other (previously it was common to have only a single CONV layer, always immediately followed by a POOL layer).
3. GoogLeNet
• The ILSVRC 2014 winner was a convolutional network from Szegedy et al. at Google.
• Its main contribution was the development of an Inception module that dramatically reduced the number of parameters in the network (4M, compared to AlexNet's 60M); a sketch of such a module follows below.
• Additionally, the paper uses average pooling instead of fully connected layers at the top of the ConvNet, eliminating a large number of parameters that do not seem to matter much.
• There are also several follow-up versions of GoogLeNet, most recently Inception-v4.
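A hedged sketch of an Inception-style module (Keras assumed; names and filter counts are illustrative, not the exact GoogLeNet configuration): parallel 1x1, 3x3 and 5x5 convolutions plus pooling, with 1x1 "bottleneck" convolutions providing the parameter reduction, concatenated along the channel axis.

import tensorflow as tf

def inception_module(x, f1, f3_in, f3, f5_in, f5, fpool):
    # Parallel branches over the same input, concatenated along channels.
    b1 = tf.keras.layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    b3 = tf.keras.layers.Conv2D(f3_in, 1, padding="same", activation="relu")(x)  # 1x1 bottleneck
    b3 = tf.keras.layers.Conv2D(f3, 3, padding="same", activation="relu")(b3)
    b5 = tf.keras.layers.Conv2D(f5_in, 1, padding="same", activation="relu")(x)  # 1x1 bottleneck
    b5 = tf.keras.layers.Conv2D(f5, 5, padding="same", activation="relu")(b5)
    bp = tf.keras.layers.MaxPooling2D(3, strides=1, padding="same")(x)
    bp = tf.keras.layers.Conv2D(fpool, 1, padding="same", activation="relu")(bp)
    return tf.keras.layers.Concatenate()([b1, b3, b5, bp])

inputs = tf.keras.Input(shape=(28, 28, 192))
outputs = inception_module(inputs, f1=64, f3_in=96, f3=128, f5_in=16, f5=32, fpool=32)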
4. VGGNet
• The runner-up in ILSVRC 2014, proposed by Karen Simonyan and Andrew Zisserman.
• Its main contribution was showing that the depth of the network is a critical component of good performance. Their final best network contains 16 CONV/FC layers and, appealingly, features an extremely homogeneous architecture that performs only 3x3 convolutions and 2x2 pooling from beginning to end (a sketch of this pattern follows below).
• A downside of VGGNet is that it is more expensive to evaluate and uses a lot more memory and parameters (140M). Most of these parameters are in the first fully connected layer, and it has since been found that these FC layers can be removed with no performance downgrade, significantly reducing the number of necessary parameters.
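A sketch of the homogeneous VGG pattern (Keras assumed): blocks of 3x3 convolutions followed by 2x2 max pooling, with the filter count doubling block by block as in VGG-16.

import tensorflow as tf

def vgg_block(x, filters, n_convs):
    # A VGG-style block: n_convs 3x3 convolutions, then 2x2 max pooling.
    for _ in range(n_convs):
        x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return tf.keras.layers.MaxPooling2D(pool_size=2)(x)

inputs = tf.keras.Input(shape=(224, 224, 3))
x = vgg_block(inputs, 64, 2)    # repeating with 128, 256, 512, 512 filters and
x = vgg_block(x, 128, 2)        # three FC layers on top yields the VGG-16 layout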
5. ResNet
• Residual Network, developed by Kaiming He et al., was the winner of ILSVRC 2015.
• It features special skip connections and heavy use of batch normalization (a basic residual block is sketched below). The architecture also omits fully connected layers at the end of the network.
• ResNets are currently by far the state-of-the-art convolutional neural network models and are the default choice for using ConvNets in practice (as of May 10, 2016).
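A sketch of a basic residual block (Keras assumed; this is the identity variant, which requires the input and output channel counts to match): two 3x3 convolutions with batch normalization, with the skip connection added before the final ReLU.

import tensorflow as tf

def residual_block(x, filters):
    # Basic residual block: conv-BN-ReLU-conv-BN, plus the identity skip connection.
    shortcut = x
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(x)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.ReLU()(y)
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(y)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.Add()([shortcut, y])    # the skip connection
    return tf.keras.layers.ReLU()(y)

inputs = tf.keras.Input(shape=(56, 56, 64))
outputs = residual_block(inputs, 64)            # channels must match for the identity skip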
VGG-16, GoogLeNet and ResNet
CNN Implementation
(Code at Google-Colab)
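The actual Colab notebook is not included in these slides; as an assumed stand-in, a minimal end-to-end run that trains a small CNN on MNIST might look like this:

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train[..., None] / 255.0, x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_split=0.1)
model.evaluate(x_test, y_test)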
Any Questions?
