
Artificial neural networks

Kohonen Self-Organising Maps (KSOM)
Introduction

The main property of a neural network is its ability to learn from its environment, and to improve its performance through learning. So far we have considered supervised or active learning: learning with an external teacher or a supervisor who presents a training set to the network. But another type of learning also exists: unsupervised learning.
In contrast to supervised learning, unsupervised or
self-organised learning does not require an
external teacher. During the training session, the
neural network receives a number of different
input patterns, discovers significant features in
these patterns and learns how to classify input data
into appropriate categories. Unsupervised
learning tends to follow the neuro-biological
organisation of the brain.
Unsupervised learning algorithms aim to learn
rapidly and can be used in real-time.
Competitive learning

In competitive learning, neurons compete among themselves to be activated. While in Hebbian learning several output neurons can be activated simultaneously, in competitive learning only a single output neuron is active at any time. The output neuron that wins the competition is called the winner-takes-all neuron.
The basic idea of competitive learning was
introduced in the early 1970s.
In the late 1980s, Teuvo Kohonen introduced a
special class of artificial neural networks called
self-organising feature maps. These maps are
based on competitive learning.
What is a self-organising feature map?

Our brain is dominated by the cerebral cortex, a very complex structure of billions of neurons and hundreds of billions of synapses. The cortex includes areas that are responsible for different human activities (motor, visual, auditory, somatosensory, etc.), each associated with different sensory inputs. We can say that each sensory input is mapped into a corresponding area of the cerebral cortex. The cortex is a self-organising computational map in the human brain.
[Figure: The feature-mapping Kohonen model — panels (a) and (b) show an input layer mapped onto a Kohonen layer.]
The Kohonen network

The Kohonen model provides a topological mapping. It places a fixed number of input patterns from the input layer into a higher-dimensional output or Kohonen layer. Training in the Kohonen network begins with the winner's neighbourhood of a fairly large size. Then, as training proceeds, the neighbourhood size gradually decreases.
[Figure: Architecture of the Kohonen network — input signals x1 and x2 enter the input layer, which is fully connected to the output layer producing output signals y1, y2 and y3.]
SOM Architecture

Two layers of neurons:
  o Input layer
  o Output map layer
Each output neuron is connected to each input neuron
  o Fully connected network
Output map usually has two dimensions
  o One and three dimensions are also used
Neurons in the output map can be laid out in different patterns
  o Rectangular
  o Hexagonal
SOM Architecture

SOMs are competitive networks
Neurons in the network compete with each other
Other kinds of competitive network exist
  o e.g. ART
SOM Algorithm

Each output neuron is connected to each neuron in the input layer
Therefore, each output neuron has an incoming connection weight vector
Dimensionality of this vector is the same as the dimensionality of the input vector
Since the dimensionality of these vectors is the same, we can measure the Euclidean distance between them:

$$D_j = \left[\sum_{i=1}^{n}(x_i - w_{ij})^2\right]^{1/2}$$

Winning node is the one with the least distance
  o i.e. the lowest value of D
Outputs from a SOM are binary
A node is either the winner, or it is not
Only one node can win
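As a minimal sketch, the distance computation and winner selection above can be written in a few lines of Python with NumPy; the map size, input dimensionality, and values here are illustrative assumptions, not values from the slides:

```python
import numpy as np

# Hypothetical setup: 3 output neurons, 4-dimensional inputs.
x = np.array([0.2, 0.7, 0.1, 0.9])   # input vector
W = np.random.rand(3, 4)             # one weight row per output neuron

# Euclidean distance D between the input and each weight vector.
D = np.linalg.norm(W - x, axis=1)

# The winning node is the one with the least distance (lowest D).
winner = np.argmin(D)

# Binary outputs: 1 for the single winner, 0 for every other node.
outputs = np.zeros(len(W), dtype=int)
outputs[winner] = 1
```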



SOM Training

Based on rewarding the winning node
This is a form of competitive learning
Winner's weights are adjusted to be closer to the input vector
Why not equal?
  o We want the output map to learn regions, not examples
SOM Training

An important concept in SOM training is that of the neighbourhood
The output map neurons that adjoin the winner
Neighbourhood size describes how far out from the winner the neighbours can be
Neighbours' weights are also modified

[Figure: The neighbourhood around a winning neuron]
SOM Training

Number of neighbours is affected by the shape of the map
  o Rectangular grids: 4 neighbours
  o Hexagonal grids: 6 neighbours
Neighbourhood size and learning rate are reduced gradually during training
SOM Training

Overall effect of training:
  o Groups, or clusters, form in the output map
  o Clusters represent spatially nearby regions in input space
  o Since the dimensionality of the output map is less than the dimensionality of the input space, this amounts to vector quantisation
The lateral connections are used to create a
competition between neurons. The neuron with the
largest activation level among all neurons in the
output layer becomes the winner. This neuron is
the only neuron that produces an output signal.
The activity of all other neurons is suppressed in
the competition.
The lateral feedback connections produce
excitatory or inhibitory effects, depending on the
distance from the winning neuron. This is
achieved by the use of a Mexican hat function
which describes synaptic weights between neurons
in the Kohonen layer.
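The slides do not give a formula for the Mexican hat function; one common functional form with this shape is the Ricker wavelet, sketched below under that assumption:

```python
import numpy as np

def mexican_hat(d, sigma=1.0):
    """Lateral connection strength versus distance d from the winner:
    excitatory (positive) near zero, inhibitory (negative) further out.
    Ricker-wavelet form -- an assumed choice, not given in the slides."""
    r = (d / sigma) ** 2
    return (1.0 - r) * np.exp(-r / 2.0)

# Positive close to the winner, negative beyond d = sigma.
for d in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"d = {d:.1f}  strength = {mexican_hat(d):+.3f}")
```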
[Figure: The Mexican hat function of lateral connection — connection strength plotted against distance from the winner, with an excitatory effect near the centre and inhibitory effects further out.]
In the Kohonen network, a neuron learns by
shifting its weights from inactive connections to
active ones. Only the winning neuron and its
neighbourhood are allowed to learn. If a neuron
does not respond to a given input pattern, then
learning cannot occur in that particular neuron.
The competitive learning rule defines the change Δw_ij applied to synaptic weight w_ij as

$$\Delta w_{ij} = \begin{cases} \alpha\,(x_i - w_{ij}), & \text{if neuron } j \text{ wins the competition} \\ 0, & \text{if neuron } j \text{ loses the competition} \end{cases}$$

where x_i is the input signal and α is the learning rate parameter.
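In code, this rule is a one-line update applied only to the winner; a minimal sketch, in which the function name and the default α = 0.1 are illustrative assumptions:

```python
import numpy as np

def competitive_update(W, x, winner, alpha=0.1):
    """Competitive learning rule: move only the winning neuron's weight
    vector toward the input x; losing neurons are left unchanged."""
    W = W.copy()
    W[winner] += alpha * (x - W[winner])   # delta_w = alpha * (x - w)
    return W
```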
The overall effect of the competitive learning rule resides in moving the synaptic weight vector W_j of the winning neuron j towards the input pattern X. The matching criterion is equivalent to the minimum Euclidean distance between vectors.

The Euclidean distance between a pair of n-by-1 vectors X and W_j is defined by

$$d = \|X - W_j\| = \left[\sum_{i=1}^{n}(x_i - w_{ij})^2\right]^{1/2}$$

where x_i and w_ij are the ith elements of the vectors X and W_j, respectively.
To identify the winning neuron, j_X, that best matches the input vector X, we may apply the following condition:

$$\|X - W_{j_X}\| = \min_j \|X - W_j\|, \qquad j = 1, 2, \ldots, m$$

where m is the number of neurons in the Kohonen layer.
Suppose, for instance, that the 2-dimensional input vector X is presented to the three-neuron Kohonen network, where

$$X = \begin{bmatrix} 0.52 \\ 0.12 \end{bmatrix}$$

The initial weight vectors, W_j, are given by

$$W_1 = \begin{bmatrix} 0.27 \\ 0.81 \end{bmatrix}, \quad W_2 = \begin{bmatrix} 0.42 \\ 0.70 \end{bmatrix}, \quad W_3 = \begin{bmatrix} 0.43 \\ 0.21 \end{bmatrix}$$
We find the winning (best-matching) neuron j_X using the minimum-distance Euclidean criterion:

$$d_1 = \sqrt{(x_1 - w_{11})^2 + (x_2 - w_{21})^2} = \sqrt{(0.52 - 0.27)^2 + (0.12 - 0.81)^2} = 0.73$$

$$d_2 = \sqrt{(x_1 - w_{12})^2 + (x_2 - w_{22})^2} = \sqrt{(0.52 - 0.42)^2 + (0.12 - 0.70)^2} = 0.59$$

$$d_3 = \sqrt{(x_1 - w_{13})^2 + (x_2 - w_{23})^2} = \sqrt{(0.52 - 0.43)^2 + (0.12 - 0.21)^2} = 0.13$$
Neuron 3 is the winner and its weight vector W_3 is updated according to the competitive learning rule (with α = 0.1):

$$\Delta w_{13} = \alpha\,(x_1 - w_{13}) = 0.1\,(0.52 - 0.43) = 0.01$$

$$\Delta w_{23} = \alpha\,(x_2 - w_{23}) = 0.1\,(0.12 - 0.21) = -0.01$$

The updated weight vector W_3 at iteration (p + 1) is determined as:

$$W_3(p+1) = W_3(p) + \Delta W_3(p) = \begin{bmatrix} 0.43 \\ 0.21 \end{bmatrix} + \begin{bmatrix} 0.01 \\ -0.01 \end{bmatrix} = \begin{bmatrix} 0.44 \\ 0.20 \end{bmatrix}$$

The weight vector W_3 of the winning neuron 3 becomes closer to the input vector X with each iteration.
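The arithmetic of this worked example can be checked with a few lines of NumPy; rounding to two decimals reproduces the numbers above:

```python
import numpy as np

x = np.array([0.52, 0.12])
W = np.array([[0.27, 0.81],    # W1
              [0.42, 0.70],    # W2
              [0.43, 0.21]])   # W3

d = np.linalg.norm(W - x, axis=1)
print(np.round(d, 2))                    # [0.73 0.59 0.13] -> neuron 3 wins

winner = np.argmin(d)                    # index 2, i.e. neuron 3
delta = 0.1 * (x - W[winner])            # alpha = 0.1
print(np.round(delta, 2))                # [ 0.01 -0.01]
print(np.round(W[winner] + delta, 2))    # [0.44 0.2 ]
```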
Competitive Learning Algorithm

Step 1: Initialisation.
Set initial synaptic weights to small random values, say in an interval [0, 1], and assign a small positive value to the learning rate parameter α.
Step 2: Activation and Similarity Matching.
Activate the Kohonen network by applying the input vector X, and find the winner-takes-all (best-matching) neuron j_X at iteration p, using the minimum-distance Euclidean criterion

$$j_X(p) = \min_j \|X - W_j(p)\| = \min_j \left[\sum_{i=1}^{n}\bigl(x_i - w_{ij}(p)\bigr)^2\right]^{1/2}, \qquad j = 1, 2, \ldots, m$$

where n is the number of neurons in the input layer, and m is the number of neurons in the Kohonen layer.
Step 3: Learning.
Update the synaptic weights

$$w_{ij}(p+1) = w_{ij}(p) + \Delta w_{ij}(p)$$

where Δw_ij(p) is the weight correction at iteration p. The weight correction is determined by the competitive learning rule:

$$\Delta w_{ij}(p) = \begin{cases} \alpha\,\bigl(x_i - w_{ij}(p)\bigr), & j \in \Lambda_j(p) \\ 0, & j \notin \Lambda_j(p) \end{cases}$$

where α is the learning rate parameter, and Λ_j(p) is the neighbourhood function centred around the winner-takes-all neuron j_X at iteration p.
Step 4: Iteration.
Increase iteration p by one, go back to Step 2 and continue until the minimum-distance Euclidean criterion is satisfied, or no noticeable changes occur in the feature map.
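Putting Steps 1 to 4 together, a compact Python sketch of the algorithm; as simplifying assumptions, the neighbourhood Λ_j(p) here contains only the winner, and training stops after a fixed number of passes rather than on the convergence test of Step 4:

```python
import numpy as np

def competitive_learning(X, m, alpha=0.1, epochs=100, seed=0):
    """X: (samples, n) array of input vectors; m: neurons in the Kohonen
    layer. Returns the (m, n) weight matrix after training."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.random((m, n))                       # Step 1: small random weights
    for p in range(epochs):                      # Step 4: iterate
        for x in X:
            d = np.linalg.norm(W - x, axis=1)    # Step 2: similarity matching
            j = np.argmin(d)                     # winner-takes-all neuron j_X
            W[j] += alpha * (x - W[j])           # Step 3: learning (winner only)
    return W
```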
Competitive learning in the Kohonen network

To illustrate competitive learning, consider the Kohonen network with 100 neurons arranged in the form of a two-dimensional lattice with 10 rows and 10 columns. The network is required to classify two-dimensional input vectors: each neuron in the network should respond only to the input vectors occurring in its region.

The network is trained with 1000 two-dimensional input vectors generated randomly in a square region in the interval between −1 and +1. The learning rate parameter α is equal to 0.1.
[Figure: Initial random weights — W(2,j) plotted against W(1,j)]

[Figure: Network after 100 iterations]

[Figure: Network after 1000 iterations]

[Figure: Network after 10,000 iterations]
Kohonen's: The Basic Idea

Make a two-dimensional array, or map, and randomise it.
Present training data to the map and let the cells on the map compete to win in some way; Euclidean distance is usually used.
Stimulate the winner and some friends in the neighbourhood.
Do this a bunch of times.
The result is a two-dimensional weight map (see the sketch below).
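A sketch of this basic idea as a rows × cols map with a Gaussian neighbourhood; the decay schedules and every parameter value below are illustrative assumptions, not anything prescribed in the slides:

```python
import numpy as np

def train_som(X, rows=10, cols=10, epochs=1000, alpha0=0.1, sigma0=3.0, seed=0):
    """Train a 2-D Kohonen map on data X of shape (samples, dim)."""
    rng = np.random.default_rng(seed)
    W = rng.random((rows, cols, X.shape[1]))            # randomised 2-D map
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(epochs):
        alpha = alpha0 * np.exp(-t / epochs)            # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)            # shrinking neighbourhood
        x = X[rng.integers(len(X))]                     # present a training vector
        d = np.linalg.norm(W - x, axis=-1)              # distance to every cell
        win = np.unravel_index(np.argmin(d), d.shape)   # winning cell (row, col)
        # Stimulate the winner and its friends in the neighbourhood.
        g = np.linalg.norm(grid - np.array(win), axis=-1)
        h = np.exp(-(g ** 2) / (2 * sigma ** 2))[..., None]
        W += alpha * h * (x - W)
    return W
```

Trained on, say, train_som(np.random.uniform(-1, 1, (1000, 2))), the weight map unfolds over the input square in the manner of the lattice figures shown earlier.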
Competitive Learning

There are two types of competitive learning: hard and soft.

Hard competitive learning is essentially a winner-takes-all scenario. The winning neuron is the only one which receives any training response. (Closer to most supervised learning models?)
Competitive Learning

Soft competitive learning is essentially a "share with your neighbours" scenario. This is actually closer to the real cortical sheet model. This is what the KSOM and other unsupervised connectionist methods use.
[Figure: Feature vector presentation]

[Figure: Update strategy]

[Figure: A trained KSOM]
Applications: General

Oil and gas exploration
Satellite image analysis
Data mining
Document organisation
Stock price prediction (Zorin, 2003)
Technique analysis in football and other sports (Barlett, 2004)
Spatial reasoning in GIS applications
As a pre-clustering engine for other ANN architectures
Over 3000 journal-documented applications, at last count
[Figure: Satellite image analysis]
Conclusion

Kohonen SOMs are competitive networks
SOMs learn via an unsupervised algorithm
SOM training is based on forming clusters
SOMs perform vector quantisation