Learning Rules
Presentation By:
Dr. Akash Saxena
Senior Professor, School of Computing Science and Engineering, VIT, Bhopal

Learning Rules
• Hebbian learning rule –
It identifies how to modify the weights of the nodes of a network (a brief sketch follows this list).
• Perceptron learning rule –
The network starts its learning by assigning a random value to each weight.
• Delta learning rule –
The modification in the synaptic weight of a node is equal to the product of the error and the input.
• Correlation learning rule –
The correlation rule is a supervised learning rule.
• Outstar learning rule –
We can use it when we assume that the nodes or neurons in a network are arranged in a layer.
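As a rough illustration of the Hebbian rule listed above, the sketch below applies the textbook unsupervised Hebbian update Δw = α·x·y; the update formula, the names, and the numbers are assumptions of this example rather than material from the slides.

```python
import numpy as np

def hebbian_update(w, x, alpha=0.1):
    """Standard Hebbian update: delta_w = alpha * x * y, where the output y
    is the node's response to the input (no target is used: unsupervised)."""
    y = np.dot(w, x)          # node output for this input
    return w + alpha * x * y  # weights grow when input and output agree in sign

# Example usage with arbitrary numbers
w = np.array([0.1, 0.4])
x = np.array([1.0, 0.5])
w = hebbian_update(w, x, alpha=0.2)
```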
Delta Learning Rule
It was developed by Bernard Widrow and Marcian Hoff, and it depends on supervised learning and a continuous activation function. It is also known as the Least Mean Square (LMS) method, and it minimizes the error over all the training patterns. It is based on an iterative gradient descent approach. It states that the modification in the weight of a node is equal to the product of the error and the input, where the error is the difference between the desired and the actual output.
• Assume (x1, x2, x3, …, xn) –> set of input vectors
• and (w1, w2, w3, …, wn) –> set of weights
• y = actual output
• wo = initial weight
• wnew = new weight
• δw = change in weight
• Error = ti − y
• Learning signal (ej) = (ti − y)y′
• y = f(net input) = f(∑ wi xi)
• δw = α xi ej = α xi (ti − y)y′
• wnew = wo + δw
• The weights are updated only if there is a difference between the target and the actual output (i.e., an error):
• Case I: when t = y, there is no change in weight
• Case II: else, wnew = wo + δw
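A minimal NumPy sketch of one delta (Widrow-Hoff / LMS) update following the equations above; the sigmoid activation, the variable names, and the numbers in the usage example are assumptions made for illustration.

```python
import numpy as np

def sigmoid(net):
    # A continuous activation function, assumed for this example
    return 1.0 / (1.0 + np.exp(-net))

def delta_rule_update(w, x, t, alpha=0.1):
    """One delta rule update: delta_w = alpha * x * (t - y) * y',
    with y = f(sum of w_i * x_i)."""
    net = np.dot(w, x)            # net input = sum of w_i * x_i
    y = sigmoid(net)              # actual output
    y_prime = y * (1.0 - y)       # derivative of the sigmoid
    error = t - y                 # difference between desired and actual output
    if error == 0:                # Case I: target equals output -> no change
        return w
    delta_w = alpha * x * error * y_prime   # Case II: weight change
    return w + delta_w            # wnew = wo + delta_w

# Example usage with arbitrary numbers
w = np.array([0.2, -0.5, 0.1])
x = np.array([1.0, 0.3, 0.7])
w = delta_rule_update(w, x, t=1.0, alpha=0.5)
```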
Correlation Rule
• The correlation learning rule follows the same principle as the Hebbian learning rule, i.e., if two neighbouring neurons operate in the same phase at the same period of time, then the weight between these neurons should become more positive.
• For neurons operating in the opposite phase, the weight between them should become more negative. Unlike the Hebbian rule, however, the correlation rule is supervised in nature: the targeted response is used for the calculation of the change in weight (a brief sketch follows this list).
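A minimal sketch of the correlation update, assuming the common formulation Δw = α·x·t, i.e., the Hebbian update with the targeted response t in place of the actual output; that exact formula, the names, and the numbers are assumptions of this example rather than material from the slides.

```python
import numpy as np

def correlation_rule_update(w, x, t, alpha=0.1):
    """Correlation rule (supervised, Hebbian-style) update:
    delta_w = alpha * x * t, using the targeted response t
    instead of the actual output."""
    delta_w = alpha * x * t   # weight grows when input and target share a sign
    return w + delta_w

# Example usage with arbitrary numbers
w = np.array([0.0, 0.0])
x = np.array([1.0, -1.0])
w = correlation_rule_update(w, x, t=1.0)   # first weight becomes more positive,
                                           # second weight more negative
```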
Out Star Learning Rule
The Out Star learning rule is implemented when the nodes in a network are arranged in a layer. Here the weights linked to a particular node should be equal to the targeted outputs for the nodes connected through those weights. The weight change is thus calculated as δw = α(t − y), where α = learning rate, y = actual output, and t = desired output for the n nodes of the layer.
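A minimal sketch of the Out Star update δw = α(t − y) for the weights fanning out from one node to the layer it feeds; the array shapes and the example numbers are assumptions made for illustration.

```python
import numpy as np

def out_star_update(w, t, y, alpha=0.1):
    """Out Star update: delta_w = alpha * (t - y), pulling the weights
    linked to a node toward the targeted outputs of the layer it feeds."""
    return w + alpha * (t - y)

# Example: weights from one node to a 3-node output layer
w = np.array([0.2, 0.5, 0.9])      # current weights
t = np.array([1.0, 0.0, 1.0])      # desired outputs of the layer nodes
y = np.array([0.6, 0.4, 0.7])      # actual outputs
w = out_star_update(w, t, y, alpha=0.5)
```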
Competitive Learning Rule
It is also known as the Winner-takes-All rule and is unsupervised in nature. Here all the output nodes compete with each other to represent the input pattern; the winner is declared as the node with the highest output and is given the output 1, while the rest are given 0. There is a set of neurons with arbitrarily distributed weights, and the activation function is applied to a subset of neurons. Only one neuron is active at a time. Only the winner's weights are updated; the rest remain unchanged.
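A minimal Winner-takes-All sketch: each output node's response is computed, the winner is given output 1 and the rest 0, and only the winner's weights are updated. The specific update Δw = α(x − w) used for the winner is a common competitive-learning choice assumed here, not given on the slides.

```python
import numpy as np

def winner_takes_all_step(W, x, alpha=0.1):
    """One competitive (Winner-takes-All) step: the node with the highest
    response wins (output 1, others 0), and only the winner's weights are
    updated, here with the assumed update delta_w = alpha * (x - w)."""
    responses = W @ x                      # each row of W holds one node's weights
    winner = int(np.argmax(responses))     # node with the highest output wins
    outputs = np.zeros(W.shape[0])
    outputs[winner] = 1.0                  # winner is given 1, the rest 0
    W = W.copy()
    W[winner] += alpha * (x - W[winner])   # only the winner's weights change
    return W, outputs

# Example usage with arbitrary numbers
W = np.array([[0.2, 0.8], [0.7, 0.3], [0.5, 0.5]])
x = np.array([1.0, 0.0])
W, outputs = winner_takes_all_step(W, x, alpha=0.5)
```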