
ImageNet Classification with Deep Convolutional Neural Networks

Presenter: Weicong Chen


Deep Convolutional Neural Networks
• By Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, University of Toronto

• Published at NIPS 2012

• Evaluated on the dataset from the ImageNet LSVRC-2010 contest

• Uses graphics cards (GPUs) to train the neural network


ImageNet LSVRC-2010 Contest
• 1.2 million high-resolution training images

• 1,000 different classes

• 50,000 validation images, 150,000 testing images

• Top-1 error: 47.1% best in the contest, 45.7% best previously published

• Top-5 error: 28.2% best in the contest, 25.7% best previously published


Convolutional Neural Network (CNN) Architecture
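
A minimal sketch of the eight-layer network described in the paper: five convolutional layers followed by three fully-connected layers, roughly 60 million parameters. The paper splits the filters across two GPUs; this merged, single-GPU PyTorch version is an assumption for illustration, not the authors' original code.

```python
import torch
import torch.nn as nn

class AlexNetSketch(nn.Module):
    """Single-GPU approximation of the paper's 5-conv + 3-FC network."""

    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2),      # conv1
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2),                # conv2
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1),               # conv3
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),               # conv4
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),               # conv5
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(256 * 6 * 6, 4096),                                # fc6
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(4096, 4096),                                       # fc7
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),                                # fc8
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)            # (N, 256, 6, 6) for 224x224 input
        x = torch.flatten(x, 1)
        return self.classifier(x)

# Usage: logits = AlexNetSketch()(torch.randn(1, 3, 224, 224))  # -> (1, 1000)
```
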
Novel/Unusual features in architecture
• ReLU (Rectified Linear Units) nonlinearity
Standard choices: f(x) = tanh(x) or the logistic function f(x) = 1 / (1 + e^(-x))
ReLU: f(x) = max(0, x)
Trains roughly six times faster than the tanh equivalent (see the sketch after this list)

• Multiple GPUs
More memory available for a larger network
Parallel computation, with half the kernels on each GPU
Cross-GPU communication restricted to certain layers
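
A small NumPy comparison of the saturating nonlinearities above with the ReLU the paper adopts. Illustrative only: the roughly six-fold speed-up is the paper's training-time observation, not something this snippet measures.

```python
import numpy as np

def logistic(x):
    # f(x) = 1 / (1 + e^(-x)), saturates near 0 and 1
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # f(x) = max(0, x), non-saturating for positive inputs
    return np.maximum(0.0, x)

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(np.tanh(x))    # saturates near -1 and +1 for large |x|
print(logistic(x))   # saturates near 0 and 1
print(relu(x))       # [0. 0. 0. 1. 5.] -- gradient stays 1 for x > 0
```
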
Overfitting
• Occurs when a statistical model describes random error or noise instead of the underlying relationship

• Exaggerates minor fluctuations in the data

• An overfitted model generally has poor predictive performance on unseen data


Reducing Overfitting
• Data Augmentation
1. Image translation and horizontal reflection
Randomly extract 224x224 patches (and their horizontal reflections) from the 256x256 training images
At test time, use the four corner patches and the center patch plus their reflections, and average the predictions

2. Altering the intensities of the RGB channels in training images
Add per-image multiples of the principal components of the RGB pixel values
Approximately captures the invariance of object identity to changes in the intensity and color of the illumination
Reduces the top-1 error rate by over 1% (see the sketch after this list)
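
A hedged NumPy sketch of the two schemes above: random 224x224 crops with horizontal reflection, and the PCA-based RGB jitter (per-image multiples of the pixel-colour principal components, with coefficients drawn from a Gaussian with standard deviation 0.1, as in the paper). Function names, shapes, and the NumPy framing are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng()

def random_crop_and_flip(img, crop=224):
    """img: HxWx3 array (e.g. a 256x256 training image); returns a random patch, possibly mirrored."""
    h, w, _ = img.shape
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    patch = img[top:top + crop, left:left + crop]
    if rng.random() < 0.5:
        patch = patch[:, ::-1]                    # horizontal reflection
    return patch

def pca_color_jitter(img, eigvecs, eigvals):
    """Add alpha_i * lambda_i * p_i to every pixel's RGB value.
    eigvecs (3x3) and eigvals (3,) come from a PCA over all training-set RGB pixels."""
    alpha = rng.normal(0.0, 0.1, size=3)          # drawn once per training image
    shift = eigvecs @ (alpha * eigvals)           # a single 3-vector added to each pixel
    return img + shift
```
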
Reducing Overfitting
• Dropout
Zero the output of each hidden neuron with probability 0.5

Dropped neurons no longer contribute to the forward pass or to back-propagation

For every input, the network samples a different architecture, but all architectures share weights

Reduces complex co-adaptations of neurons

Used in the first two fully-connected layers; at test time, outputs are multiplied by 0.5 (see the sketch below)
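
A minimal NumPy sketch of the dropout scheme above: hidden units are zeroed with probability 0.5 during training, and outputs are halved at test time to compensate. The function name and NumPy framing are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng()

def dropout(activations, p=0.5, train=True):
    """Apply dropout to a layer's activations (any shape)."""
    if train:
        keep = rng.random(activations.shape) >= p   # each unit kept with probability 1 - p
        return activations * keep                   # dropped units contribute nothing forward or backward
    return activations * (1.0 - p)                  # test time: scale outputs by 0.5
```
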


Result
• Two NVIDIA GTX 580 3GB GPUs

• Five to six days of training

• Roughly 90 cycles (epochs) through the training set

• 60 million parameters

• 37.5% top-1 error (previous best published: 45.7%)

• 17.0% top-5 error (previous best published: 25.7%)


Questions?
