
Back-propagation Neural Network

• Backpropagation was described by Arthur E. Bryson and Yu-Chi Ho in 1969.
• The term is an abbreviation of "backward propagation of errors": the errors (and therefore the learning) propagate backwards from the output nodes to the inner nodes.
• Backpropagation calculates the gradient of the network's error with respect to the network's modifiable weights.
• This gradient is almost always then used by a simple stochastic gradient descent algorithm to find weights that minimize the error.
• In the kinds of networks to which it is suited, backpropagation usually converges quickly to a satisfactory local minimum of the error.
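The worked example that follows uses the logistic (sigmoid) activation throughout. A minimal Python sketch (function names are my own) shows the activation and the derivative identity that every delta formula in the example relies on:

```python
import math

def sigmoid(net):
    # Logistic activation: squashes any net input into (0, 1)
    return 1.0 / (1.0 + math.exp(-net))

def sigmoid_deriv(y):
    # Derivative written in terms of the output y = sigmoid(net):
    # dy/dnet = y * (1 - y). This is the y(1 - y) factor that appears
    # in every delta computed in the example below.
    return y * (1.0 - y)
```
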
The example network has two inputs (x1, x2), a first hidden layer (nodes 1, 2, 3), a second hidden layer (nodes 4, 5) and a single output (node 6); weight Wij connects node (or input) i to node j. The training set is the XNOR truth table:

x1  x2  t
0   0   1
0   1   0
1   0   0
1   1   1

Initial weights:

W11 0.3   W14 0.6   W46 0.3
W12 0.4   W15 0.6   W56 0.5
W13 0.5   W24 0.6
W21 0.3   W25 0.6
W22 0.4   W34 0.6
W23 0.5   W35 0.6
Net 1, Net 2, Net 3 (first hidden layer, for the training pattern x1 = 0, x2 = 0):

Net1 = w11*x1 + w21*x2 = 0.3*0 + 0.3*0 = 0
y1 = 1/(1 + e^-Net1) = 1/(1 + e^0) = 0.5

Net2 = w12*x1 + w22*x2 = 0.4*0 + 0.4*0 = 0
y2 = 1/(1 + e^0) = 0.5

Net3 = w13*x1 + w23*x2 = 0.5*0 + 0.5*0 = 0
y3 = 1/(1 + e^0) = 0.5
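This first-layer pass can be verified with a few lines of Python (a sketch; variable names mirror the slides' Wij notation, where Wij links input xi to hidden node j):

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

# First training pattern and input-to-hidden weights from the example
x1, x2 = 0, 0
w11, w12, w13 = 0.3, 0.4, 0.5   # from input x1 to nodes 1, 2, 3
w21, w22, w23 = 0.3, 0.4, 0.5   # from input x2 to nodes 1, 2, 3

net1 = w11 * x1 + w21 * x2      # = 0, so y1 = sigmoid(0) = 0.5
net2 = w12 * x1 + w22 * x2
net3 = w13 * x1 + w23 * x2
y1, y2, y3 = sigmoid(net1), sigmoid(net2), sigmoid(net3)
```
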
Net 4

Net4 = w14*y1 + w24*y2 + w34*y3
     = 0.6*0.5 + 0.6*0.5 + 0.6*0.5
     = 0.3 + 0.3 + 0.3
     = 0.9

y4 = 1/(1 + e^-Net4) = 1/(1 + e^-0.9) = 1/(1 + 0.41) = 1/1.41 ≈ 0.71

Net 5

Net5 = w15*y1 + w25*y2 + w35*y3
     = 0.6*0.5 + 0.6*0.5 + 0.6*0.5
     = 0.9

y5 = 1/(1 + e^-0.9) = 1/(1 + 0.41) ≈ 0.71
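Nodes 4 and 5 receive identical inputs and identical weights, so one short sketch checks both (values carried over from the previous step; variable names are my own):

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

y1 = y2 = y3 = 0.5              # hidden activations from the previous step
w14 = w24 = w34 = 0.6           # weights into node 4
w15 = w25 = w35 = 0.6           # weights into node 5

net4 = w14 * y1 + w24 * y2 + w34 * y3   # 0.9
net5 = w15 * y1 + w25 * y2 + w35 * y3   # 0.9
y4, y5 = sigmoid(net4), sigmoid(net5)   # both ~0.71
```
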

Net 6

Net6 = w46*y4 + w56*y5
     = 0.3*0.71 + 0.5*0.71
     ≈ 0.21 + 0.35
     = 0.56

y6 = 1/(1 + e^-0.56) = 1/(1 + 0.57) = 1/1.57 ≈ 0.63
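The output node can be checked the same way; note that carrying full precision gives y6 ≈ 0.64, while the slides truncate intermediate values and arrive at 0.63:

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

y4 = y5 = 0.71                  # rounded values carried over from above
w46, w56 = 0.3, 0.5

net6 = w46 * y4 + w56 * y5      # ~0.57 at full precision
y6 = sigmoid(net6)              # ~0.64; the slides truncate this to 0.63
```
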
Output error (target t = 1 for the input pattern x1 = 0, x2 = 0):

δ6 = y6 (1 - y6)(t - y6)
   = 0.63 * (1 - 0.63) * (1 - 0.63)
   = 0.63 * 0.37 * 0.37
   ≈ 0.086
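As a quick check of the output delta (a sketch using the slides' rounded values; variable names are my own):

```python
# Output delta: y * (1 - y) is the sigmoid derivative, (t - y) the error.
# t = 1 is the target for input pattern (0, 0).
y6, t = 0.63, 1
delta6 = y6 * (1 - y6) * (t - y6)   # ~0.086
```
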
δ5 = y5 (1 - y5) ∑ δ6 W56   (node 5 feeds only node 6, so the sum has one term)
   = 0.71 * (1 - 0.71) * (0.086 * 0.5)
   = 0.71 * 0.29 * 0.043
   ≈ 0.0088
δ4 = y4 (1 - y4) ∑ δ6 W46
   = 0.71 * (1 - 0.71) * (0.086 * 0.3)
   = 0.71 * 0.29 * 0.0258
   ≈ 0.0053
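The two second-layer deltas can be checked together (a sketch; rounded values carried over from the slides):

```python
# Second-hidden-layer deltas; each node feeds only output node 6,
# so the sum over downstream nodes has a single term
y4 = y5 = 0.71
delta6 = 0.086
w46, w56 = 0.3, 0.5

delta5 = y5 * (1 - y5) * (delta6 * w56)   # ~0.0088
delta4 = y4 * (1 - y4) * (delta6 * w46)   # ~0.0053
```
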
δ3 = y3 (1 - y3)(δ4 W34 + δ5 W35)
   = 0.5 * 0.5 * (0.0053*0.6 + 0.0088*0.6)
   = 0.5 * 0.5 * (0.0032 + 0.0053)
   = 0.5 * 0.5 * 0.0085
   ≈ 0.0021
δ2 = y2 (1 - y2)(δ4 W24 + δ5 W25)
   = 0.5 * 0.5 * (0.0053*0.6 + 0.0088*0.6)
   = 0.5 * 0.5 * 0.0085
   ≈ 0.0021
δ1 = y1 (1 - y1)(δ4 W14 + δ5 W15)
   = 0.5 * 0.5 * (0.0053*0.6 + 0.0088*0.6)
   = 0.5 * 0.5 * 0.0085
   ≈ 0.0021
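The three first-layer deltas come out identical because the weights into nodes 4 and 5 are all 0.6; a sketch confirms it (rounded values carried over):

```python
# First-layer deltas; each node feeds both node 4 and node 5, so two
# downstream terms are summed. Symmetric weights make all three equal.
y1 = y2 = y3 = 0.5
delta4, delta5 = 0.0053, 0.0088
w14 = w24 = w34 = 0.6
w15 = w25 = w35 = 0.6

delta1 = y1 * (1 - y1) * (delta4 * w14 + delta5 * w15)   # ~0.0021
delta2 = y2 * (1 - y2) * (delta4 * w24 + delta5 * w25)
delta3 = y3 * (1 - y3) * (delta4 * w34 + delta5 * w35)
```
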

Weight updates follow the rule W' = W + α * δ * (source activation), with learning rate α = 0.1. The input-layer weights are scaled by the inputs x1 = x2 = 0 for this pattern, so they do not change:

W'11 = W11 + α * δ1 * x1 = 0.3 + 0.1 * 0.0021 * 0 = 0.3
W'21 = W21 + α * δ1 * x2 = 0.3 + 0.1 * 0.0021 * 0 = 0.3

W'12 = W12 + α * δ2 * x1 = 0.4 + 0.1 * 0.0021 * 0 = 0.4
W'22 = W22 + α * δ2 * x2 = 0.4 + 0.1 * 0.0021 * 0 = 0.4

W'13 = W13 + α * δ3 * x1 = 0.5 + 0.1 * 0.0021 * 0 = 0.5
W'23 = W23 + α * δ3 * x2 = 0.5 + 0.1 * 0.0021 * 0 = 0.5

W'14 = W14 + α * δ4 * y1 = 0.6 + 0.1 * 0.0053 * 0.5 = 0.600265 ≈ 0.6
W'24 = W24 + α * δ4 * y2 = 0.6 + 0.1 * 0.0053 * 0.5 ≈ 0.6
W'34 = W34 + α * δ4 * y3 = 0.6 + 0.1 * 0.0053 * 0.5 ≈ 0.6

W'15 = W15 + α * δ5 * y1 = 0.6 + 0.1 * 0.0088 * 0.5 = 0.60044 ≈ 0.6
W'25 = W25 + α * δ5 * y2 = 0.6 + 0.1 * 0.0088 * 0.5 ≈ 0.6
W'35 = W35 + α * δ5 * y3 = 0.6 + 0.1 * 0.0088 * 0.5 ≈ 0.6

W'46 = W46 + α * δ6 * y4 = 0.3 + 0.1 * 0.086 * 0.71 ≈ 0.31
W'56 = W56 + α * δ6 * y5 = 0.5 + 0.1 * 0.086 * 0.71 ≈ 0.51
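All of the updates above apply the single rule w' = w + α * δ(target node) * (source activation). A sketch checking the representative cases (variable names are my own; rounded values carried over from the slides):

```python
alpha = 0.1                     # learning rate used throughout the example

# Deltas and activations from the backward and forward passes
delta6, delta5, delta4 = 0.086, 0.0088, 0.0053
delta1 = delta2 = delta3 = 0.0021
x1, x2 = 0, 0
y1 = y2 = y3 = 0.5
y4 = y5 = 0.71

w11_new = 0.3 + alpha * delta1 * x1     # inputs are 0, so unchanged
w14_new = 0.6 + alpha * delta4 * y1     # changes only in the 4th decimal
w46_new = 0.3 + alpha * delta6 * y4     # ~0.31
w56_new = 0.5 + alpha * delta6 * y5     # ~0.51
```

Only the weights closest to the output move noticeably on this first pass, because the deltas shrink as they propagate backwards through the sigmoid derivatives.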
