
Artificial Neural Networks (ANNs)

Perceptron
Contents
 Introduction

 Perceptron

 Perceptron Convergence Theorem

 Error-correction learning algorithm


Introduction
In the formative years of neural networks (1943–1958), several researchers stand out for their pioneering
contributions:

• McCulloch and Pitts (1943) for introducing the idea of neural networks as computing machines. They presented the McCulloch-Pitts model of the neuron, a neuron with a threshold activation function (the signum function).

• Hebb (1949) for postulating the first rule for self-organized learning.

• Rosenblatt (1958) for proposing the perceptron as the first model for learning with a teacher (i.e.,
supervised learning).
Perceptron
 The perceptron is the simplest form of a neural network used for the classification of patterns said to be
linearly separable (i.e., patterns that lie on opposite sides of a hyperplane).

 Basically, it consists of a single nonlinear neuron (the McCulloch-Pitts model of the neuron) with adjustable synaptic weights and bias: a linear combiner followed by a hard limiter that performs the signum function. Accordingly, the neuron produces an output equal to +1 if the hard limiter input is positive, and -1 if it is negative.

 The perceptron built around a single neuron is limited to performing pattern classification with only two classes (hypotheses). By expanding the output (computation) layer of the perceptron to include more than one neuron, we may correspondingly perform classification with more than two classes. However, the classes have to be linearly separable for the perceptron to work properly.
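
As a rough illustration (not from the slides), here is a minimal Python sketch of this single-neuron model: a linear combiner followed by a signum hard limiter. The function names are ours, and the case v = 0 is mapped to -1 by convention here.

def linear_combiner(inputs, weights, bias):
    """Weighted sum of the input signals plus the bias term."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def hard_limiter(v):
    """Signum activation: +1 if the net input is positive, -1 otherwise."""
    return 1 if v > 0 else -1

def perceptron_output(inputs, weights, bias):
    """Single-neuron perceptron: linear combiner followed by the hard limiter."""
    return hard_limiter(linear_combiner(inputs, weights, bias))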
Perceptron Convergence Theorem
 Theorem

If the patterns (vectors) used to train the perceptron are drawn from two linearly separable classes, then the perceptron algorithm converges and positions the decision surface in the form of a hyperplane between the two classes.

Perceptron forward computation:

1. Compute the net input v, which is the linear combination of the input signals x1, x2, ..., xm multiplied by their corresponding weights w1, w2, ..., wm, plus the bias b:

v = w1x1 + w2x2 + ... + wmxm + b

2. Apply the activation function (the signum hard limiter) to the net input v to get the final output y:

y = φ(v)

The goal of the perceptron is to correctly classify the set of externally applied stimuli x1, x2, ..., xm into one of two classes, L1 or L2. The decision rule for the classification is to assign the point represented by the inputs x1, x2, ..., xm to class L1 if the perceptron output y is +1 and to class L2 if it is -1.
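
A small worked example of these two steps; the weights, bias, and inputs are assumed values, not from the slides:

x1, x2 = 1.0, 1.0           # input signals (assumed)
w1, w2 = 2.0, -1.0          # synaptic weights (assumed)
b = -0.5                    # bias (assumed)

v = w1 * x1 + w2 * x2 + b   # step 1: net input, 2.0 - 1.0 - 0.5 = 0.5
y = 1 if v > 0 else -1      # step 2: signum hard limiter
print(v, y)                 # 0.5 1  -> assign the point to class L1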

The perceptron needs to position the decision surface in the form of a hyperplane between the two classes L1 and L2. The equation of the decision surface is given by:

w1x1 + w2x2 + ... + wmxm + b = 0

For example, if we only have 2 input signals x1 and x2, the decision surface is a straight line whose equation is given by:

w1x1 + w2x2 + b = 0

The learning process of the perceptron adapts the synaptic weights iteration by iteration until it finds the weights that define the decision surface separating the two classes L1 and L2.
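
To make the two-input case concrete, here is a short sketch that classifies points by which side of the line w1x1 + w2x2 + b = 0 they fall on; the line and the test points are assumed for illustration:

w1, w2, b = 1.0, 1.0, -1.0       # assumed decision line: x1 + x2 - 1 = 0

def classify(x1, x2):
    """Assign a point to class L1 or L2 by the sign of the net input."""
    v = w1 * x1 + w2 * x2 + b
    return "L1" if v > 0 else "L2"

print(classify(2.0, 2.0))        # L1 (v = 3.0, one side of the line)
print(classify(0.0, 0.0))        # L2 (v = -1.0, the other side)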
Error-correction Learning Algorithm
The derivation of the error-correction learning algorithm (the weight-update equation):

 The input fed into the perceptron at time step n is the (m+1)-by-1 vector

x(n) = [+1, x1(n), x2(n), ..., xm(n)]^T

 The weight is also an (m+1)-by-1 vector, whose first element is the bias b(n):

w(n) = [b(n), w1(n), w2(n), ..., wm(n)]^T

 The linear combiner (net input):

v(n) = w^T(n) x(n)

 The final output:

y(n) = φ(v(n))
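
The same forward pass in vector form, as a NumPy sketch; following the (m+1)-by-1 convention above, the first entry of x(n) is fixed at +1 so that the bias b(n) acts as the first weight (the numeric values are assumed):

import numpy as np

def forward(w, signals):
    """Forward pass at step n: v(n) = w(n)^T x(n), y(n) = phi(v(n))."""
    x = np.concatenate(([1.0], signals))   # (m+1)-by-1 input vector, x0(n) = +1
    v = w @ x                              # linear combiner (net input)
    return 1 if v > 0 else -1              # signum hard limiter

w = np.array([-0.5, 2.0, -1.0])            # w = [b, w1, w2], assumed values
print(forward(w, np.array([1.0, 1.0])))    # 1 -> class L1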
 As we said, the perceptron classifies two linearly separable classes L1 and L2. Let’s draw a subset of patterns
H1 from class L1 and a subset H2 from class L2 to train the perceptron. The union of H1 and H2 is the
complete training set H.

 The classification rule is to assign the pattern (vector) drawn from the training set H to class L1 if y = +1 and to class L2 if y = -1.

 The training process involves the adjustment of the weight vector w in such a way that the two classes are linearly separable. That is, there exists a weight vector w such that we may state:

w^T x > 0 for every input vector x belonging to class L1
w^T x ≤ 0 for every input vector x belonging to class L2

 The training problem for the perceptron is then to find a weight vector w such that these two inequalities are satisfied.
The resulting weight-update equation (the perceptron error-correction rule) adjusts the weights only when a pattern is misclassified:

w(n+1) = w(n) + η [d(n) - y(n)] x(n)

where d(n) is the desired response (+1 if x(n) belongs to class L1, -1 if it belongs to class L2) and η is a positive learning-rate parameter. When a pattern is correctly classified, d(n) = y(n) and the weight vector is left unchanged.
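
Putting the pieces together, here is a hedged sketch of the full error-correction training loop built on the update equation above; the data set, learning rate, and epoch limit are assumed for illustration:

import numpy as np

def sgn(v):
    """Signum hard limiter: +1 if positive, -1 otherwise."""
    return 1 if v > 0 else -1

def train_perceptron(X, d, eta=0.1, max_epochs=100):
    """Error-correction learning: w(n+1) = w(n) + eta * [d(n) - y(n)] * x(n)."""
    X = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend x0 = +1 for the bias
    w = np.zeros(X.shape[1])                       # w = [b, w1, ..., wm]
    for _ in range(max_epochs):
        errors = 0
        for x_n, d_n in zip(X, d):
            y_n = sgn(w @ x_n)
            if y_n != d_n:                         # misclassified: correct weights
                w = w + eta * (d_n - y_n) * x_n
                errors += 1
        if errors == 0:                            # all patterns classified correctly
            break                                  # (convergence theorem guarantees this
    return w                                       #  for linearly separable classes)

# Two linearly separable classes: d = +1 for L1, -1 for L2 (assumed data)
X = np.array([[2.0, 2.0], [3.0, 1.0], [0.0, 0.0], [-1.0, 1.0]])
d = np.array([1, 1, -1, -1])
print(train_perceptron(X, d))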
