ML Lec12
◼ Using Hebb’s Law we can express the adjustment applied to the weight wij at iteration p in the following form:

  Δwij(p) = F[ yj(p), xi(p) ]

◼ As a special case, we can represent Hebb’s Law as follows:

  Δwij(p) = α yj(p) xi(p)

  where α is the learning rate parameter. This equation is referred to as the activity product rule.

◼ Hebbian learning implies that weights can only increase. To resolve this problem, we might impose a limit on the growth of synaptic weights. It can be done by introducing a non-linear forgetting factor into Hebb’s Law:

  Δwij(p) = α yj(p) xi(p) − φ yj(p) wij(p)

  where φ is the forgetting factor. The forgetting factor usually falls in the interval between 0 and 1, typically between 0.01 and 0.1, to allow only a little “forgetting” while limiting the weight growth.
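As a quick illustration of the rule above, here is a minimal sketch in NumPy; the function name, array shapes, and the default values of α and φ are assumptions for illustration, not part of the lecture:

```python
import numpy as np

def hebbian_update(w, x, y, alpha=0.1, phi=0.05):
    """One Hebbian adjustment of w_ij with a non-linear forgetting factor.

    w     : (n_inputs, n_outputs) weight matrix, entry [i, j] = w_ij
    x     : (n_inputs,)  presynaptic activities x_i
    y     : (n_outputs,) postsynaptic activities y_j
    alpha : learning rate
    phi   : forgetting factor (typically 0.01 .. 0.1)
    """
    hebb = alpha * np.outer(x, y)          # activity product rule: alpha * x_i * y_j
    forget = phi * y[np.newaxis, :] * w    # forgetting term: phi * y_j * w_ij
    return w + hebb - forget
```

With phi = 0 this reduces to the plain activity product rule; a non-zero phi pulls active weights back toward zero and keeps them from growing without bound.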
Hebbian learning algorithm

Step 1: Initialization.
Set initial synaptic weights and thresholds to small random values, say in an interval [0, 1].

Step 2: Activation.
Compute the neuron output at iteration p:

  yj(p) = step[ Σ(i=1..n) xi(p) wij(p) − θj ]

where n is the number of neuron inputs, and θj is the threshold value of neuron j.

Step 3: Learning.
Update the weights in the network:

  wij(p + 1) = wij(p) + Δwij(p)

where Δwij(p) is the weight correction at iteration p.
The weight correction is determined by the generalized activity product rule:

  Δwij(p) = φ yj(p) [ λ xi(p) − wij(p) ],  where λ = α/φ.

Step 4: Iteration.
Increase iteration p by one, go back to Step 2.
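The four steps can be combined into a short training loop. The sketch below assumes NumPy, a hard-limiting step activation, and illustrative defaults for φ and λ; these specifics are assumptions, not taken from the slides:

```python
import numpy as np

def train_hebbian(X, n_outputs, phi=0.05, lam=2.0, epochs=100, rng=None):
    """Hebbian learning: initialization, activation, learning, iteration."""
    rng = np.random.default_rng(rng)
    n_inputs = X.shape[1]
    # Step 1: Initialization - small random weights and thresholds in [0, 1]
    w = rng.uniform(0.0, 1.0, size=(n_inputs, n_outputs))
    theta = rng.uniform(0.0, 1.0, size=n_outputs)

    for p in range(epochs):                     # Step 4: Iteration
        for x in X:
            # Step 2: Activation - hard-limiting step function
            y = np.where(x @ w - theta >= 0.0, 1.0, 0.0)
            # Step 3: Learning - generalized activity product rule
            #   delta_w_ij = phi * y_j * (lam * x_i - w_ij)
            w += phi * y[np.newaxis, :] * (lam * x[:, np.newaxis] - w)
    return w, theta
```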
Competitive learning

◼ In competitive learning, neurons compete among themselves to be activated.
◼ While in Hebbian learning several output neurons can be activated simultaneously, in competitive learning only a single output neuron is active at any time.
◼ The output neuron that wins the “competition” is called the winner-takes-all neuron.
◼ The basic idea of competitive learning was introduced in the early 1970s.
◼ In the late 1980s, Teuvo Kohonen introduced a special class of artificial neural networks called self-organizing feature maps. These maps are based on competitive learning.
Feature-mapping Kohonen model

[Figure: two Kohonen model architectures, (a) and (b), each with an input layer feeding a Kohonen layer]

What is a self-organizing feature map?
Our brain is dominated by the cerebral cortex, a very complex structure of billions of neurons and hundreds of billions of synapses. The cortex includes areas that are responsible for different human activities (motor, visual, auditory, somatosensory, etc.), and associated with different sensory inputs. We can say that each sensory input is mapped into a corresponding area of the cerebral cortex. The cortex is a self-organizing computational map in the human brain.
◼ To identify the winning neuron, jX, that best matches the input vector X, we may apply the following condition:

  jX = min_j ‖X − Wj‖,  j = 1, 2, . . ., m

◼ For example, the 2-dimensional input vector X might be

  X = [0.52  0.12]ᵀ

◼ The initial weight vectors, Wj, are given by
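A minimal sketch of this winner selection, assuming NumPy; the initial weight vectors below are placeholder values chosen for illustration, not necessarily those from the lecture’s example:

```python
import numpy as np

X = np.array([0.52, 0.12])            # example input vector from the slide

W = np.array([[0.27, 0.81],           # W_1  (placeholder initial weights)
              [0.42, 0.70],           # W_2
              [0.43, 0.21]])          # W_3

# Winner-takes-all neuron: minimum Euclidean distance between X and each W_j
distances = np.linalg.norm(X - W, axis=1)
j_X = int(np.argmin(distances))
print("winning neuron j_X =", j_X + 1)   # 1-based index to match j = 1, ..., m
print("distances:", distances)
```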
Step 3: Learning.
Update the synaptic weights:

  wij(p + 1) = wij(p) + Δwij(p)

where Δwij(p) is the weight correction at iteration p.
The weight correction is determined by the competitive learning rule:

  Δwij(p) = α [ xi − wij(p) ],  if j ∈ Λj(p)
  Δwij(p) = 0,                  if j ∉ Λj(p)

where α is the learning rate parameter, and Λj(p) is the neighbourhood function centred around the winner-takes-all neuron jX at iteration p.

Step 4: Iteration.
Increase iteration p by one, go back to Step 2 and continue until the minimum-distance Euclidean criterion is satisfied, or no noticeable changes occur in the feature map.
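Putting the steps together, here is a minimal sketch of competitive learning for a Kohonen layer, assuming NumPy; for simplicity the neighbourhood Λj(p) is taken to be just the winning neuron, a common simplification rather than the lecture’s exact neighbourhood function:

```python
import numpy as np

def train_kohonen(X, n_neurons, alpha=0.1, epochs=10000, tol=1e-5, rng=None):
    """Winner-takes-all competitive learning for a Kohonen layer."""
    rng = np.random.default_rng(rng)
    # Step 1: Initialization - random weight vectors W_j
    W = rng.uniform(-1.0, 1.0, size=(n_neurons, X.shape[1]))

    for p in range(epochs):                                   # Step 4: Iteration
        max_change = 0.0
        for x in X:
            # Step 2: Activation - winner by minimum Euclidean distance
            j_X = np.argmin(np.linalg.norm(x - W, axis=1))
            # Step 3: Learning - competitive rule with neighbourhood = {winner}
            delta = alpha * (x - W[j_X])
            W[j_X] += delta
            max_change = max(max_change, float(np.abs(delta).max()))
        # Stop when no noticeable changes occur in the feature map
        if max_change < tol:
            break
    return W
```

A fuller implementation would also update the neurons inside a shrinking neighbourhood around the winner and decay α over time, which is what produces the ordered feature map after many iterations.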
Competitive learning in the Kohonen network

[Figure: weight vectors plotted in the W(1,j)–W(2,j) plane over [−1, 1] × [−1, 1]: initial random weights, and the network after 10,000 iterations]