History of Deep Learning

Early Beginnings (1940s-1950s)
● 1943: Warren McCulloch and Walter Pitts
publish a seminal paper on artificial neurons,
laying the groundwork for neural network
research.
● They describe a simple neural network model
capable of performing logical operations.
Early Beginnings (1940s-1950s)
● 1957: “the perceptron may eventually be able
to learn, make decisions, and translate
languages” - Frank Rosenblatt.
● Frank Rosenblatt develops the Perceptron,
an early type of artificial neural network
designed for image recognition tasks.
● The perceptron can learn and adapt through
a process called "training."
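The training process can be illustrated with a minimal NumPy sketch
(an idealized textbook version of the perceptron rule, not Rosenblatt's
Mark I hardware; the function name and hyperparameters are ours):

import numpy as np

def train_perceptron(X, y, epochs=10, lr=1.0):
    # Classic perceptron rule: weights change only on misclassified examples.
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a constant bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (target - pred) * xi  # no-op when the prediction is right
    return w

# Example: AND is linearly separable, so the rule converges
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w = train_perceptron(X, y)

On linearly separable data the perceptron convergence theorem guarantees
this loop stops making mistakes after finitely many updates.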
Perceptron
“The embryo of an electronic computer that the
Navy expects will be able to walk, talk, see,
write, reproduce itself and be conscious of its
existence.” - New York Times (1958)
First Generation Multi-layer Perceptrons
● The first generation of multi-layer perceptrons (MLPs)
saw significant contributions from various
researchers, including Alexey Grigorevich Ivakhnenko.
● In 1965, Ivakhnenko, along with his colleague Valentin
Lapa, developed the Group Method of Data Handling
(GMDH), which is considered one of the earliest
practical implementations of multi-layer perceptrons
and deep learning.
Perceptron Limitations
In their 1969 book "Perceptrons", Minsky and Papert identified
several limitations of the perceptron model, one of the earliest
neural network architectures.
AI Winter
1960s: The field experiences a period of excitement and rapid development, but early neural networks
face significant limitations, such as the inability of single-layer networks to solve problems that are
not linearly separable.

1969: Marvin Minsky and Seymour Papert publish "Perceptrons," highlighting the limitations of single-layer
perceptrons, particularly their inability to solve the XOR problem. This leads to a decline in interest and
funding for neural network research.
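A short argument makes the XOR limitation precise. A single threshold
unit outputs 1 exactly when w1·x1 + w2·x2 + b > 0, so representing XOR
would require all four of:

    (0,0) → 0:  b ≤ 0
    (1,0) → 1:  w1 + b > 0
    (0,1) → 1:  w2 + b > 0
    (1,1) → 0:  w1 + w2 + b ≤ 0

Adding the two strict inequalities gives w1 + w2 + 2b > 0, and since
b ≤ 0 this forces w1 + w2 + b > 0, contradicting the last constraint.
No choice of weights works, which is why a hidden layer is needed.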
The Backpropagation Breakthrough (1980s)
● Discovered and rediscovered several times throughout
the 1960s and 1970s.
● 1974: Paul Werbos describes the backpropagation
algorithm in his Ph.D. thesis, providing an efficient
method for training multi-layer neural networks.
● 1986: Geoffrey Hinton, David Rumelhart, and Ronald
Williams popularize backpropagation with their paper,
"Learning representations by back-propagating errors."
This reignites interest in neural networks.
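A minimal sketch makes the algorithm concrete (illustrative NumPy, not
the exact 1986 formulation: squared-error loss, sigmoid units, full-batch
updates, and hyperparameters chosen for the demo). It trains a
one-hidden-layer network on XOR, the very function that stalled
single-layer perceptrons:

import numpy as np

rng = np.random.default_rng(0)

# XOR: solvable once there is a hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one hidden layer of 4 sigmoid units
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

lr = 1.0
for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: chain rule on squared error, layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # typically approaches [[0], [1], [1], [0]]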
Gradient Descent
In 1847, Augustin-Louis Cauchy proposed gradient descent, motivated
by the need to compute the orbits of heavenly bodies.
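In modern notation, the method repeatedly steps against the gradient:

    x_{t+1} = x_t − η ∇f(x_t)

where η > 0 is the step size (today's "learning rate"). Backpropagation,
above, is this same update applied to a network's weights, with ∇f
computed efficiently by the chain rule.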
Universal Approximation Theorem
A feed-forward network with a single hidden layer of
non-linear units can approximate any continuous function
on a compact domain to any desired precision.
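One standard formal statement (Cybenko's 1989 version, for sigmoidal
activations σ): for every continuous f on a compact set K ⊂ R^n and
every ε > 0, there exist N, vectors w_i ∈ R^n, and scalars v_i, b_i
such that

    | f(x) − Σ_{i=1..N} v_i σ(w_i · x + b_i) | < ε   for all x ∈ K.

A single hidden layer of N units therefore suffices in principle,
although N may need to be very large.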
Unsupervised Pre-Training
● The idea of unsupervised pre-training dates back to
1991-1993, when J. Schmidhuber used it to train a "Very Deep
Learner".
● In particular, Schmidhuber and his collaborators developed methods
for training deep learning models without requiring labeled data,
which paved the way for modern unsupervised learning techniques.
More Insights (2007-2009)
● Further investigations into the effectiveness of Unsupervised Pre-training.
● Greedy Layer-Wise Training of Deep Networks.
● Why does Unsupervised Pre-training Help Deep Learning?
● Exploring Strategies for Training Deep Neural Networks
Neural Networks in Practice: 1990s
● 1990s: Neural networks begin to show practical success in
various applications, such as speech recognition,
handwriting recognition, and simple pattern recognition
tasks.
● 1997: Sepp Hochreiter and Jürgen Schmidhuber propose
the Long Short-Term Memory (LSTM) network,
addressing the issue of vanishing gradients in training
recurrent neural networks (RNNs). LSTMs become a key
component for sequential data tasks, such as language
modeling and time-series prediction.
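In its modern form (the forget gate was added by Gers, Schmidhuber,
and Cummins after the 1997 paper), the LSTM cell computes:

    f_t = σ(W_f x_t + U_f h_{t-1} + b_f)     (forget gate)
    i_t = σ(W_i x_t + U_i h_{t-1} + b_i)     (input gate)
    o_t = σ(W_o x_t + U_o h_{t-1} + b_o)     (output gate)
    c̃_t = tanh(W_c x_t + U_c h_{t-1} + b_c)  (candidate state)
    c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t
    h_t = o_t ⊙ tanh(c_t)

The additive update of the cell state c_t is the key design choice:
gradients can flow through it across many time steps without being
repeatedly squashed, which is what mitigates vanishing gradients.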
2000s: Foundations for Modern Deep Learning
2006: Geoffrey Hinton, Simon Osindero, and Yee-Whye
Teh introduce the concept of Deep Belief Networks
(DBNs), demonstrating that deep neural networks can
be pre-trained layer by layer using unsupervised
learning. This helps to initialize deep networks
effectively.
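The layer-by-layer recipe can be sketched as follows (a simplified
contrastive-divergence (CD-1) version of RBM training; function names
and hyperparameters here are illustrative, not Hinton, Osindero, and
Teh's exact procedure):

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(W, bv, bh, v0, lr=0.1):
    # One contrastive-divergence (CD-1) update for a binary RBM.
    ph0 = sigmoid(v0 @ W + bh)                       # positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + bv)                     # one Gibbs step down
    ph1 = sigmoid(pv1 @ W + bh)                      # and back up
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    bv += lr * (v0 - pv1).mean(axis=0)
    bh += lr * (ph0 - ph1).mean(axis=0)

def pretrain_stack(data, layer_sizes, epochs=20):
    # Greedily train one RBM per layer on the previous layer's activations.
    reps, layers = data, []
    for nh in layer_sizes:
        nv = reps.shape[1]
        W = rng.normal(0.0, 0.01, (nv, nh))
        bv, bh = np.zeros(nv), np.zeros(nh)
        for _ in range(epochs):
            cd1_step(W, bv, bh, reps)
        layers.append((W, bh))         # these weights initialize the deep net
        reps = sigmoid(reps @ W + bh)  # feed representations upward
    return layers

Each layer's weights then initialize the corresponding layer of a deep
network, which is subsequently fine-tuned with backpropagation.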

2007: Fei-Fei Li launches the ImageNet project, creating
a large-scale dataset of labeled images that becomes
instrumental for training and evaluating deep learning
models.
