Kohonen Self-Organizing Maps
The weight update follows the competitive learning rule:

$$\Delta w_{ij} = \begin{cases} \alpha\,(x_i - w_{ij}), & \text{if neuron } j \text{ wins the competition} \\ 0, & \text{if neuron } j \text{ loses the competition} \end{cases}$$
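A minimal sketch of this rule in code (the helper name competitive_update and the use of NumPy are my choices, not from the original slides):

```python
import numpy as np

def competitive_update(W, x, winner, alpha=0.1):
    """Competitive learning rule: only the winning neuron's weight
    vector moves towards the input x; all other rows are unchanged."""
    W = W.copy()
    W[winner] += alpha * (x - W[winner])  # delta_w = alpha * (x - w) for the winner
    return W                              # losing neurons: delta_w = 0
```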
The overall effect of the competitive learning rule is to move the synaptic weight vector $W_j$ of the winning neuron j towards the input pattern X. The matching criterion is equivalent to the minimum Euclidean distance between vectors.
The Euclidean distance between a pair of n-by-1 vectors X and $W_j$ is defined by

$$d_j = \lVert X - W_j \rVert = \left[ \sum_{i=1}^{n} (x_i - w_{ij})^2 \right]^{1/2}$$

where $x_i$ and $w_{ij}$ are the ith elements of the vectors X and $W_j$, respectively.
To identify the winning neuron, $j_X$, that best matches the input vector X, we may apply the following condition:

$$\lVert X - W_{j_X} \rVert = \min_j \lVert X - W_j \rVert, \qquad j = 1, 2, \ldots, m$$

where m is the number of neurons in the Kohonen layer.
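In code, this criterion is a one-liner with NumPy (a sketch; find_winner is my name for it, and the rows of W are assumed to hold the vectors $W_j$):

```python
import numpy as np

def find_winner(W, x):
    """Return the index j_X of the neuron whose weight vector
    has the minimum Euclidean distance to the input vector x."""
    distances = np.linalg.norm(W - x, axis=1)  # d_j = ||x - W_j|| for each row j
    return int(np.argmin(distances))
```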
Suppose, for instance, that the 2-dimensional input vector X is presented to the three-neuron Kohonen network, where

$$X = \begin{bmatrix} 0.52 \\ 0.12 \end{bmatrix}$$

The initial weight vectors, $W_j$, are given by

$$W_1 = \begin{bmatrix} 0.27 \\ 0.81 \end{bmatrix}, \qquad W_2 = \begin{bmatrix} 0.42 \\ 0.70 \end{bmatrix}, \qquad W_3 = \begin{bmatrix} 0.43 \\ 0.21 \end{bmatrix}$$
We find the winning (best-matching) neuron $j_X$ using the minimum-distance Euclidean criterion:

$$d_1 = \sqrt{(x_1 - w_{11})^2 + (x_2 - w_{21})^2} = \sqrt{(0.52 - 0.27)^2 + (0.12 - 0.81)^2} = 0.73$$

$$d_2 = \sqrt{(x_1 - w_{12})^2 + (x_2 - w_{22})^2} = \sqrt{(0.52 - 0.42)^2 + (0.12 - 0.70)^2} = 0.59$$

$$d_3 = \sqrt{(x_1 - w_{13})^2 + (x_2 - w_{23})^2} = \sqrt{(0.52 - 0.43)^2 + (0.12 - 0.21)^2} = 0.13$$
Neuron 3 is the winner, and its weight vector $W_3$ is updated according to the competitive learning rule (with $\alpha = 0.1$):

$$\Delta w_{13} = \alpha\,(x_1 - w_{13}) = 0.1\,(0.52 - 0.43) = 0.01$$

$$\Delta w_{23} = \alpha\,(x_2 - w_{23}) = 0.1\,(0.12 - 0.21) = -0.01$$
The updated weight vector $W_3$ at iteration (p + 1) is determined as:

$$W_3(p+1) = W_3(p) + \Delta W_3(p) = \begin{bmatrix} 0.43 \\ 0.21 \end{bmatrix} + \begin{bmatrix} 0.01 \\ -0.01 \end{bmatrix} = \begin{bmatrix} 0.44 \\ 0.20 \end{bmatrix}$$

The weight vector $W_3$ of the winning neuron 3 becomes closer to the input vector X with each iteration.
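These numbers can be checked with a short Python sketch (my own illustration; the variable names are arbitrary, the values come from the example above):

```python
import numpy as np

x = np.array([0.52, 0.12])
W = np.array([[0.27, 0.81],   # W_1
              [0.42, 0.70],   # W_2
              [0.43, 0.21]])  # W_3

d = np.linalg.norm(W - x, axis=1)
print(d.round(2))             # [0.73 0.59 0.13] -> neuron 3 wins

alpha = 0.1
winner = int(np.argmin(d))
W[winner] += alpha * (x - W[winner])
print(W[winner].round(2))     # W_3 becomes [0.44, 0.20], closer to x
```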
Competitive Learning Algorithm

Step 1: Initialisation.
Set initial synaptic weights to small random values, say in an interval [0, 1], and assign a small positive value to the learning rate parameter $\alpha$.
Step 2: Activation and Similarity Matching.
Activate the Kohonen network by applying the input vector X, and find the winner-takes-all (best matching) neuron $j_X$ at iteration p, using the minimum-distance Euclidean criterion

$$j_X(p) = \min_j \lVert X - W_j(p) \rVert = \min_j \left[ \sum_{i=1}^{n} \big(x_i - w_{ij}(p)\big)^2 \right]^{1/2}, \qquad j = 1, 2, \ldots, m$$

where n is the number of neurons in the input layer, and m is the number of neurons in the Kohonen layer.
Step 3: Learning.
Update the synaptic weights

$$w_{ij}(p+1) = w_{ij}(p) + \Delta w_{ij}(p)$$

where $\Delta w_{ij}(p)$ is the weight correction at iteration p.
The weight correction is determined by the competitive learning rule:

$$\Delta w_{ij}(p) = \begin{cases} \alpha\,\big[x_i - w_{ij}(p)\big], & j \in \Lambda_j(p) \\ 0, & j \notin \Lambda_j(p) \end{cases}$$

where $\alpha$ is the learning rate parameter, and $\Lambda_j(p)$ is the neighbourhood function centred around the winner-takes-all neuron $j_X$ at iteration p.
Step 4: Iteration.
Increase iteration p by one, go back to Step 2, and continue until the minimum-distance Euclidean criterion is satisfied, or no noticeable changes occur in the feature map.
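Putting Steps 1–4 together, here is a minimal Python sketch of the algorithm, assuming the neighbourhood $\Lambda_j(p)$ contains only the winner itself (pure winner-takes-all); the function name train_som and the fixed iteration count are my own simplifications:

```python
import numpy as np

def train_som(inputs, m, alpha=0.1, iterations=1000, rng=None):
    """Competitive learning, Steps 1-4, with the neighbourhood
    reduced to the winner itself (hard winner-takes-all)."""
    rng = np.random.default_rng() if rng is None else rng
    n = inputs.shape[1]
    W = rng.random((m, n))                            # Step 1: random weights in [0, 1]
    for p in range(iterations):
        x = inputs[p % len(inputs)]                   # Step 2: apply an input vector...
        j = np.argmin(np.linalg.norm(W - x, axis=1))  # ...and find the winner j_X
        W[j] += alpha * (x - W[j])                    # Step 3: update only the winner
    return W                                          # Step 4: iterate until done
```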
To illustrate competitive learning, consider the Kohonen network with 100 neurons arranged in the form of a two-dimensional lattice with 10 rows and 10 columns. The network is required to classify two-dimensional input vectors: each neuron in the network should respond only to the input vectors occurring in its region.
The network is trained with 1000 two-dimensional input vectors generated randomly in a square region in the interval between −1 and +1. The learning rate parameter $\alpha$ is equal to 0.1.
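Reusing the hypothetical train_som from the sketch above, this experiment might be reproduced roughly as follows (the seed and array shapes are my assumptions, not from the original):

```python
import numpy as np

rng = np.random.default_rng(42)
inputs = rng.uniform(-1.0, 1.0, size=(1000, 2))  # 1000 random 2-D vectors in [-1, +1]
W = train_som(inputs, m=100, alpha=0.1, iterations=10_000, rng=rng)
print(W.shape)  # (100, 2): one weight vector per neuron in the 10 x 10 lattice
```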
Competitive learning in the Kohonen network

[Figure: the weight vectors plotted as W(1,j) versus W(2,j), both axes from −1 to 1, in four panels: initial random weights, the network after 100 iterations, after 1000 iterations, and after 10,000 iterations.]
Kohonen SOMs: The Basic Idea
Make a two-dimensional array, or map, and randomize it.
Present training data to the map and let the cells on the map compete to win in some way. Euclidean distance is usually used.
Stimulate the winner and some friends in the neighborhood.
Do this a bunch of times.
The result is a 2-dimensional weight map (see the sketch after this list).
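A minimal sketch of the "winner and friends" step, assuming a Gaussian neighbourhood over lattice distance; the helper name som_step and the Gaussian choice are mine (other neighbourhood functions are common too):

```python
import numpy as np

def som_step(W, grid, x, alpha=0.1, sigma=1.5):
    """One soft update: the winner moves most, and nearby cells
    on the 2-D lattice move a little ('winner and friends')."""
    j = np.argmin(np.linalg.norm(W - x, axis=1))        # winner on the map
    grid_dist = np.linalg.norm(grid - grid[j], axis=1)  # lattice distance to winner
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))    # Gaussian neighbourhood weights
    W += alpha * h[:, None] * (x - W)                   # scaled pull towards x
    return W

# grid coordinates for a 10 x 10 map: row i, column k -> (i, k)
grid = np.array([(i, k) for i in range(10) for k in range(10)], dtype=float)
```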
Competitive Learning
There are two types of competitive
learning: hard and soft.
Hard competitive learning is
essentially a winner-take-all
scenario. The winning neuron is the
only one which receives any
training response.
(Closer to most supervised learning models?)
Soft competitive learning is essentially a share-with-your-neighbors scenario.
This is actually closer to the real cortical sheet model.
This is what the KSOM and other unsupervised connectionist methods use.
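The contrast can be stated compactly in code (a sketch under the same assumptions as above; both helpers are hypothetical, and h is a vector of neighbourhood weights with h = 1 at the winner):

```python
import numpy as np

def hard_update(W, x, j, alpha=0.1):
    """Hard competitive learning: only the winner j is trained."""
    W[j] += alpha * (x - W[j])
    return W

def soft_update(W, x, h, alpha=0.1):
    """Soft competitive learning: every neuron is trained in
    proportion to its neighbourhood weight h."""
    W += alpha * h[:, None] * (x - W)
    return W
```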
[Figure slides: Feature Vector Presentation; Update Strategy; A Trained KSOM.]
Applications: General
Oil and gas exploration.
Satellite image analysis.
Data mining.
Document organization.
Stock price prediction (Zorin, 2003).
Technique analysis in football and other sports (Barlett, 2004).
Spatial reasoning in GIS applications.
As a pre-clustering engine for other ANN architectures.
Over 3000 journal-documented applications, at last count.
Next time: Satellite Image Analysis.
(Questions?)
Conclusion
Kohonen SOMs are competitive networks
SOMs learn via an unsupervised algorithm
SOM training is based on forming clusters
SOMs perform vector quantisation