Deep Learning CNN

The document discusses Convolutional Neural Networks (CNNs) and the mathematical operation of convolution, which is used to extract features from input data like images. It highlights the importance of reducing input size for computational efficiency, preventing overfitting, focusing on important features, and enabling hierarchical learning. Key concepts include sparse interaction, parameter sharing, and equivariance, which enhance the performance of CNNs in image processing tasks.


Deep Learning

Mani Shankar
Convolution Neural Network(CNN)

What is Convolution?

Will an ANN work on image data or not?

If an ANN works on image data, why do we need a CNN?


Convolution Neural Network(CNN)
Convolution is a mathematical operation used extensively in various fields, including
signal processing, image processing, and deep learning. It involves combining two
functions to produce a third function that expresses how the shape of one is modified
by the other.
In the context of deep learning, particularly Convolutional Neural Networks (CNNs),
convolution is used to extract features from input data, such as images. Here's how it
works:
Kernel/Filter: A small matrix (e.g., 3x3 or 5x5) is moved over the input data.
Sliding/Striding: The filter slides over the input matrix, performing element-wise
multiplication and summing the results to produce a single output value.
Feature Map: The collection of these output values forms the feature map, which
highlights specific features such as edges, textures, or patterns.

Convolution helps reduce the size of the input while preserving essential spatial
features, making it efficient for tasks like image recognition and object detection.
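
To make the kernel/striding/feature-map description above concrete, here is a minimal NumPy sketch (an illustration, not code from the slides; the function name, input values, and the vertical-edge kernel are assumptions). Like most deep learning libraries, it slides the kernel without flipping it.

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Slide `kernel` over `image`, multiply element-wise, and sum each
    window to produce one value of the feature map (no padding)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    feature_map = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = image[i * stride:i * stride + kh,
                           j * stride:j * stride + kw]
            feature_map[i, j] = np.sum(window * kernel)
    return feature_map

# Illustrative 5x5 "image" with a bright left half, and a 3x3 vertical-edge kernel.
image = np.array([[10, 10, 10, 0, 0]] * 5, dtype=float)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

print(conv2d(image, kernel))
# 3x3 feature map; the columns with large values mark the vertical edge.
```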
Convolution Neural Network(CNN)
Why do we want to reduce the size of the input?
Reducing the size of the input in deep learning, particularly in Convolutional Neural Networks
(CNNs), is important for several reasons:
1. Computational Efficiency
Less data to process: Smaller input size reduces the computational load and memory
requirements, speeding up the training and inference processes.
Feasible model size: It allows for smaller, more manageable network architectures.
2. Prevent Overfitting
Less parameter complexity: A reduced input size often leads to fewer trainable parameters,
minimizing the risk of overfitting to the training data.
3. Focus on Important Features
Feature extraction: Downsampling (via pooling or strided convolutions) helps the network
retain the most important features (edges, shapes, patterns) while discarding unnecessary
details, like noise or irrelevant background information (see the pooling sketch after this list).
4. Hierarchical Learning
Capture higher-level features: As the input size reduces, deeper layers in the network can
focus on learning complex, high-level features (like objects) rather than low-level details (like
pixels).
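
As a companion to point 3, a minimal max-pooling sketch (illustrative only; the function and example values are assumptions, not from the slides): each 2x2 window keeps only its strongest response, halving both spatial dimensions.

```python
import numpy as np

def max_pool2d(feature_map, size=2, stride=2):
    """Downsample by keeping only the maximum value in each
    size x size window -- fewer values, strongest responses kept."""
    h, w = feature_map.shape
    oh = (h - size) // stride + 1
    ow = (w - size) // stride + 1
    pooled = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = feature_map[i * stride:i * stride + size,
                                 j * stride:j * stride + size]
            pooled[i, j] = window.max()
    return pooled

fm = np.array([[1, 3, 2, 1],
               [4, 6, 5, 2],
               [7, 8, 3, 0],
               [2, 1, 0, 4]], dtype=float)
print(max_pool2d(fm))   # 2x2 output: [[6. 5.] [8. 4.]]
```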
Convolution Neural Network(CNN)

s(t) = (x * w)(t) = Σ_a x(a) · w(t − a)

where:
s(t) = output function / feature map
x = input
w = kernel/filter
* = convolution operation
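
A short numeric sketch (my own illustration, not from the slides) that evaluates the discrete form of the formula above, s(t) = Σ_a x(a)·w(t − a), at the positions where the kernel fully overlaps the input:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)    # input x(a)
w = np.array([0.5, 0.3, 0.2], dtype=float)    # kernel w

def conv1d_valid(x, w):
    """s(t) = sum_a x(a) * w(t - a), computed with the flipped kernel
    at every position where it fully overlaps the input."""
    m = len(w)
    return np.array([np.sum(x[t:t + m] * w[::-1])
                     for t in range(len(x) - m + 1)])

print(conv1d_valid(x, w))           # [2.3 3.3 4.3]
print(np.convolve(x, w, "valid"))   # same result from NumPy's built-in
```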
Convolution Neural Network(CNN)
Motivation
To improve a machine learning system, convolution leverages three important ideas: sparse interactions, parameter sharing, and equivariant representations.
Convolution Neural Network(CNN)
Sparse Interaction
Because the kernel is much smaller than the input, each output unit interacts with only a small local region of the input rather than with every input unit. This sparsity reduces both the number of computations and the memory needed to store parameters.
Convolution Neural Network(CNN)

Parameter Sharing
Definition: The same set of filter weights (or kernel) is used
across all spatial locations of the input. This means the filters
are "shared" rather than unique for each position.
Benefit:
• Greatly reduces the number of parameters in the network,
especially for large inputs like images or videos.
• Helps the network generalize better since the same patterns
(e.g., edges or textures) can appear in different parts of an
image, and the same filter can detect them regardless of their
position.
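
A back-of-the-envelope sketch (the layer sizes are illustrative assumptions, not from the slides) of how much parameter sharing saves relative to a fully connected layer producing an output of the same spatial size:

```python
# One layer mapping a 224x224 grayscale image to 32 output channels/units.
in_h, in_w = 224, 224
out_channels = 32
kernel_size = 3

# Fully connected: every output unit has its own weight for every input pixel.
fc_params = (in_h * in_w) * (in_h * in_w * out_channels)      # weights only

# Convolutional with parameter sharing: one 3x3 kernel per output channel,
# reused at every spatial location.
conv_params = kernel_size * kernel_size * 1 * out_channels    # weights only

print(f"fully connected: {fc_params:,} weights")   # ~80 billion
print(f"convolutional:   {conv_params:,} weights") # 288
```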
Equivariance
• If an object moves in the image, its features in the
convolutional output also "move" correspondingly.
Translation Invariance
• If an object shifts slightly in the image, the pooled output stays
(approximately) the same, so the network's response does not depend
on the object's exact position.
Convolution Neural Network(CNN)
• The equivariance property comes from the convolution operation.
• Translation invariance comes from pooling.
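
A small sketch (illustrative, not from the slides) of both statements: shifting the object shifts the peak of the convolutional feature map (equivariance), while taking a global max over the feature map, used here as a simple stand-in for pooling, gives the same value either way (invariance). The conv2d helper mirrors the earlier sketch.

```python
import numpy as np

def conv2d(image, kernel):
    """Same sliding-window convolution as sketched earlier (no padding)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.ones((2, 2))

# A bright 2x2 blob, then the same blob shifted one pixel to the right.
img = np.zeros((6, 6));         img[2:4, 1:3] = 1.0
img_shifted = np.zeros((6, 6)); img_shifted[2:4, 2:4] = 1.0

fm, fm_shifted = conv2d(img, kernel), conv2d(img_shifted, kernel)

# Equivariance: the peak of the feature map moves with the object.
print(np.unravel_index(fm.argmax(), fm.shape))                  # row 2, col 1
print(np.unravel_index(fm_shifted.argmax(), fm_shifted.shape))  # row 2, col 2

# Invariance: a global max pool gives the same response either way.
print(fm.max(), fm_shifted.max())   # 4.0 4.0
```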
