
Neural Networks - Learning Rule to Classify Data

A perceptron is a single-layer neural network; a multi-layer perceptron is called a neural network. The perceptron is a linear (binary) classifier and is used in supervised learning: it classifies the given input data. But how does it work?
A typical neural network looks like this:

The perceptron's name reflects its intended use as a perceiving and recognizing automaton (single layer).

The net input is

y_in = b + x1·w1 + x2·w2 + x3·w3 + x4·w4

where b is the bias; x1, x2, x3, and x4 are the inputs; and w1, w2, w3, and w4 are the associated weights.

After that, we use the activation function to get the calculated output Y:
Y = f(y_in) =   1   if y_in > 0
                0   if y_in = 0
               -1   if y_in < 0
If the target t and the output Y do not match, we calculate the weight changes (Δs):

Δw1 = α·t·x1;  Δw2 = α·t·x2;  Δw3 = α·t·x3;  Δw4 = α·t·x4;  Δb = α·t

where α is the learning rate and t is the target value.
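As a quick illustration, here is a minimal Python sketch of a single perceptron step using the rules above; the function name and structure are illustrative, not from the handout.

```python
def perceptron_step(x, w, b, t, alpha=1.0):
    """One perceptron learning step: net input, activation, conditional update."""
    y_in = b + sum(xi * wi for xi, wi in zip(x, w))        # y_in = b + sum(xi * wi)
    y = 1 if y_in > 0 else (0 if y_in == 0 else -1)        # activation Y = f(y_in)
    if y != t:                                             # update only on a mismatch
        w = [wi + alpha * t * xi for xi, wi in zip(x, w)]  # w_i += alpha * t * x_i
        b = b + alpha * t                                  # b += alpha * t
    return y, w, b
```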

Perceptron Example:
Find the weights required to perform the following classification using the perceptron network.
The vectors (1, 1, 1, 1) and (-1, 1, -1, -1) belong to class 1, and the vectors (1, 1, 1, -1) and (1, -1, -1, 1) belong to class -1.

Assume that the learning rate α is set to 1 and that all initial weights and the bias are 0.

Perceptron Example Solution:


The given vectors can be listed in a table like the one below, where t indicates the target (class).

x1   x2   x3   x4 |  t
 1    1    1    1 |  1
-1    1   -1   -1 |  1
 1    1    1   -1 | -1
 1   -1   -1    1 | -1
Then we start applying the rules on these vectors as follows:
y_in = b + x1·w1 + x2·w2 + x3·w3 + x4·w4

Y = f(y_in) =   1   if y_in > 0
                0   if y_in = 0
               -1   if y_in < 0

Δw1 = α·t·x1;  Δw2 = α·t·x2;  Δw3 = α·t·x3;  Δw4 = α·t·x4;  Δb = α·t
Epoch |  x1  x2  x3  x4 |  t | y_in |  Y | Δw1 Δw2 Δw3 Δw4  Δb |  w1  w2  w3  w4   b
      |     (initial weights)                                  |   0   0   0   0   0
  1   |   1   1   1   1 |  1 |   0  |  0 |   1   1   1   1   1 |   1   1   1   1   1
  1   |  -1   1  -1  -1 |  1 |  -1  | -1 |  -1   1  -1  -1   1 |   0   2   0   0   2
  1   |   1   1   1  -1 | -1 |   4  |  1 |  -1  -1  -1   1  -1 |  -1   1  -1   1   1
  1   |   1  -1  -1   1 | -1 |   1  |  1 |  -1   1   1  -1  -1 |  -2   2   0   0   0
  2   |   1   1   1   1 |  1 |   0  |  0 |   1   1   1   1   1 |  -1   3   1   1   1
  2   |  -1   1  -1  -1 |  1 |   3  |  1 |   0   0   0   0   0 |  -1   3   1   1   1
  2   |   1   1   1  -1 | -1 |   3  |  1 |  -1  -1  -1   1  -1 |  -2   2   0   2   0
  2   |   1  -1  -1   1 | -1 |  -2  | -1 |   0   0   0   0   0 |  -2   2   0   2   0
  3   |   1   1   1   1 |  1 |   2  |  1 |   0   0   0   0   0 |  -2   2   0   2   0
  3   |  -1   1  -1  -1 |  1 |   2  |  1 |   0   0   0   0   0 |  -2   2   0   2   0
  3   |   1   1   1  -1 | -1 |  -2  | -1 |   0   0   0   0   0 |  -2   2   0   2   0
  3   |   1  -1  -1   1 | -1 |  -2  | -1 |   0   0   0   0   0 |  -2   2   0   2   0

The last row of the table contains the final weights and bias: w1 = -2, w2 = 2, w3 = 0, w4 = 2, b = 0. In epoch 3 every output matches its target, so training stops, and the trained network computes y_in = -2·x1 + 2·x2 + 0·x3 + 2·x4.
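The whole example can be reproduced with a short Python script. This is a sketch of the procedure above (variable names are illustrative); the weights it prints at the end of each epoch match the table.

```python
# Training vectors and targets from the example.
samples = [
    ((1, 1, 1, 1), 1),
    ((-1, 1, -1, -1), 1),
    ((1, 1, 1, -1), -1),
    ((1, -1, -1, 1), -1),
]

w, b, alpha = [0, 0, 0, 0], 0, 1.0   # zero initial weights and bias, alpha = 1

for epoch in range(1, 4):            # three epochs, as in the table
    for x, t in samples:
        y_in = b + sum(xi * wi for xi, wi in zip(x, w))
        y = 1 if y_in > 0 else (0 if y_in == 0 else -1)
        if y != t:                   # apply the learning rule on a mismatch
            w = [wi + alpha * t * xi for xi, wi in zip(x, w)]
            b += alpha * t
    print(f"epoch {epoch}: w = {w}, b = {b}")

# Expected final values, matching the table: w = [-2, 2, 0, 2], b = 0
```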

Multi-Layer Perceptron (MLP) Network Learning Example
 The given example consists of an Input layer, one Hidden layer, and an Output layer.
 The input layer has 4 neurons (x1, x2, x3, x4), the hidden layer has 2 neurons (x5, x6), and the
output layer has a single neuron (x7).
 x0 = 1 is the bias input feeding the hidden and output layer units (x5, x6, x7).
 Task: train the MLP by updating the weights and biases in the network.
 Learning Rate = 0.8
x1   x2   x3   x4 | O_Desired
 1    1    0    1 |     1

w15   w16   w25   w26   w35   w36   w45   w46   w57   w67   θ5    θ6    θ7
0.3   0.1  -0.2  -0.4   0.2  -0.3   0.1   0.4  -0.3   0.2   0.2   0.1  -0.3

Multi-Layer Perceptron Network Learning Example Solution


Step 1: Epoch 1
1. Calculate the input and output in the input layer (for input units, the output equals the input):

Input unit   Ij   Oj
x1            1    1
x2            1    1
x3            0    0
x4            1    1

2. Calculate the net input and output in the hidden layer and the output layer, using the sigmoid activation O_j = 1 / (1 + e^(-I_j)):

Unit x5:  I5 = x1·w15 + x2·w25 + x3·w35 + x4·w45 + x0·θ5
             = 1(0.3) + 1(-0.2) + 0(0.2) + 1(0.1) + 1(0.2) = 0.4
          O5 = 1 / (1 + e^(-0.4)) = 0.599

Unit x6:  I6 = x1·w16 + x2·w26 + x3·w36 + x4·w46 + x0·θ6
             = 1(0.1) + 1(-0.4) + 0(-0.3) + 1(0.4) + 1(0.1) = 0.2
          O6 = 1 / (1 + e^(-0.2)) = 0.550

Unit x7:  I7 = O5·w57 + O6·w67 + x0·θ7
             = 0.599(-0.3) + 0.550(0.2) + 1(-0.3) = -0.370
          O7 = 1 / (1 + e^(0.370)) = 0.409

3. Calculate Error = O_Desired − O_Estimated. The error for this network is:

Error = O_Desired − O_Estimated = 1 − 0.409 = 0.591
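The forward pass can be checked with a few lines of Python. This sketch hard-codes the handout's weights and reproduces I5, I6, I7, the outputs, and the error (variable names are illustrative).

```python
import math

def sigmoid(z):
    """Logistic activation: O = 1 / (1 + e^(-I))."""
    return 1.0 / (1.0 + math.exp(-z))

x1, x2, x3, x4 = 1, 1, 0, 1   # inputs; the bias input x0 = 1 is folded into the constants

I5 = x1*0.3 + x2*(-0.2) + x3*0.2 + x4*0.1 + 0.2      # net input of hidden unit 5 -> 0.4
O5 = sigmoid(I5)                                     # -> 0.599
I6 = x1*0.1 + x2*(-0.4) + x3*(-0.3) + x4*0.4 + 0.1   # net input of hidden unit 6 -> 0.2
O6 = sigmoid(I6)                                     # -> 0.550
I7 = O5*(-0.3) + O6*0.2 + (-0.3)                     # net input of output unit 7 -> -0.370
O7 = sigmoid(I7)                                     # -> 0.409
error = 1 - O7                                       # O_Desired - O_Estimated -> 0.591
```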
Since the error is nonzero, we need to back-propagate to reduce it.
Step 2: Backward Propagation
1. Calculate the error at each node.
For each unit k in the output layer:

Error_k = O_k · (1 − O_k) · (O_Desired − O_k)

For each unit j in the hidden layer:

Error_j = O_j · (1 − O_j) · Σ_k (Error_k · w_jk)
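Continuing in Python, this sketch evaluates the two error formulas for this network and shows one weight and one bias update with α = 0.8. The handout stops at the error formulas, so the update rule Δw = α·Error_k·O_j used below is the standard backpropagation rule, included here as an assumption.

```python
O5, O6, O7 = 0.599, 0.550, 0.409      # outputs from the forward pass above

# Output unit: Error_k = O_k(1 - O_k)(O_Desired - O_k)
Err7 = O7 * (1 - O7) * (1 - O7)

# Hidden units: Error_j = O_j(1 - O_j) * sum_k Error_k * w_jk
Err5 = O5 * (1 - O5) * (Err7 * (-0.3))   # unit 7 is the only unit downstream of unit 5
Err6 = O6 * (1 - O6) * (Err7 * 0.2)      # unit 7 is the only unit downstream of unit 6

# Standard updates (assumed, not from the handout): w += alpha * Err_k * O_j
alpha = 0.8
w57 = -0.3 + alpha * Err7 * O5           # updated weight from unit 5 to unit 7
theta7 = -0.3 + alpha * Err7             # updated bias of unit 7
```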
