
Self-Organizing Maps

Unit-5
Self-Organizing Maps
This unit studies self-organizing systems by considering a special class of artificial neural networks known as self-organizing maps.
These networks are based on competitive learning: the output neurons of the network compete among themselves to be activated or fired, with the result that only one output neuron, or one neuron per group, is on at any one time. An output neuron that wins the competition is called a winner-takes-all neuron, or simply a winning neuron.
Self-Organizing Maps
A self-organizing map is therefore characterized
by the formation of a topographic map of the
input patterns, in which the spatial locations
(i.e., coordinates) of the neurons in the lattice
are indicative of intrinsic statistical features
contained in the input patterns—hence, the
name “self-organizing map.”
As a neural model, the self-organizing map provides
a bridge between two levels of adaptation:
• Adaptation rules formulated at the microscopic
level of a single neuron;
• formation of experientially better and physically
accessible patterns of feature selectivity at the
macroscopic level of neural layers.
The self-organizing map is inherently nonlinear.
TWO BASIC FEATURE-MAPPING MODELS
1. In each map, neurons act in parallel and process pieces of
information that are similar in nature, but originate from
different regions in the sensory input space.
2. At each stage of representation, each incoming piece of information is kept in its proper context.
3. Neurons dealing with closely related pieces of information
are close together so that they can interact via short synaptic
connections.
4. Contextual maps can be understood in terms of decision-
reducing mappings from higher dimensional parameter
spaces onto the cortical surface.
TWO BASIC FEATURE-MAPPING MODELS

The spatial location of an output neuron in a topographic map corresponds to a particular domain or feature of data drawn from the input space.
This principle has provided the neurobiological motivation for two different feature-mapping models, described herein.
TWO BASIC FEATURE-MAPPING MODELS

Figure 9.1 displays the layout of the two models. In both cases, the output neurons are arranged in a two-dimensional lattice. This kind of topology ensures that each neuron has a set of neighbors. The models differ from each other in the manner in which the input patterns are specified.
TWO BASIC FEATURE-MAPPING MODELS
The model in Fig. 9.1a was originally proposed by Willshaw and von der
Malsburg (1976) on biological grounds to explain the problem of retinotopic
mapping from the retina to the visual cortex (in higher vertebrates).
Specifically, there are two separate two-dimensional lattices of neurons connected together, one projecting onto the other.
One lattice represents presynaptic (input) neurons, and the other lattice
represents postsynaptic (output) neurons.
The postsynaptic lattice uses a short-range excitatory mechanism as well as a
long-range inhibitory mechanism. These two mechanisms are local in nature
and critically important for self-organization.
The two lattices are interconnected by modifiable synapses of a Hebbian
type.
The basic idea of the Willshaw–von der
Malsburg model is for the geometric proximity
of presynaptic neurons to be coded in the form
of correlations in their electrical activity, and to
use these correlations in the postsynaptic lattice
so as to connect neighboring presynaptic
neurons to neighboring postsynaptic neurons.
A topologically ordered mapping is thereby
produced through a process of self-organization.
The second model of Fig. 9.1b,
introduced by Kohonen (1982), is not
meant to explain neurobiological details.
Rather, the model captures the essential
features of computational maps in the
brain and yet remains computationally
tractable.
It appears that the Kohonen
model is more general than the
Willshaw–von der Malsburg
model in the sense that it is
capable of performing data
compression (i.e., dimensionality
reduction on the input).
In reality, the Kohonen model belongs to the class of
vector-coding algorithms. The model provides a
topological mapping that optimally places a fixed
number of vectors (i.e., code words) into a higher-
dimensional input space, thereby facilitating data
compression.
The Kohonen model may therefore be derived in two ways. First, we may use basic ideas of self-organization, motivated by neurobiological considerations, to derive the model; this is the traditional approach. Second, we may view the model as a vector quantizer, in keeping with its membership in the class of vector-coding algorithms noted above.
The principal goal of the self-organizing map
(SOM) is to transform an incoming signal pattern
of arbitrary dimension into a one- or two-
dimensional discrete map, and to perform this
transformation adaptively in a topologically
ordered fashion. Figure 9.2 shows the schematic
diagram of a two-dimensional lattice of neurons
commonly used as a discrete map.
Each neuron in the lattice is fully connected to
all the source nodes in the input layer. This
network represents a feedforward structure
with a single computational layer consisting of
neurons arranged in rows and columns. A one-
dimensional lattice is a special case of the
configuration depicted in Fig. 9.2: in this special
case, the computational layer consists simply of
a single column or row of neurons.
Each input pattern presented to the network typically consists of a localized region or “spot” of activity against a quiet background.
The location and nature of such a spot usually varies from one realization of the input pattern to another.
All the neurons in the network should therefore be exposed to a sufficient number of different realizations of the input pattern in order to ensure that the self-organization process has a chance to develop properly.
The algorithm responsible for the formation of the self-organizing map proceeds first by initializing the synaptic weights in the network.
This can be done by assigning them small values picked from a random-number generator; in so doing, no prior order is imposed on the feature map.
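As an illustration only, a minimal sketch of this initialization step, assuming a NumPy-based implementation and a hypothetical 10 x 10 lattice with three-dimensional inputs:

```python
import numpy as np

# Minimal sketch (assumed sizes): a 10 x 10 lattice of output neurons,
# each fully connected to a 3-dimensional input. Weights start as small
# random values, so no prior order is imposed on the feature map.
rng = np.random.default_rng(seed=0)
rows, cols, dim = 10, 10, 3
weights = rng.uniform(-0.1, 0.1, size=(rows, cols, dim))
```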
Once the network has been properly initialized, there are three essential processes involved in the formation of the self-organizing map:

1. Competition. For each input pattern, the neurons in the network compute their respective values of a discriminant function. This discriminant function provides the basis for competition among the neurons. The particular neuron with the largest value of the discriminant function is declared winner of the competition (a code sketch of this step follows the list below).
2. Cooperation. The winning neuron determines the
spatial location of a topological neighborhood of excited
neurons, thereby providing the basis for cooperation
among such neighboring neurons.
3. Synaptic Adaptation. This last mechanism enables the
excited neurons to increase their individual values of the
discriminant function in relation to the input pattern
through suitable adjustments applied to their synaptic
weights.
The adjustments made are such that the response of the
winning neuron to the subsequent application of a similar
input pattern is enhanced.
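A hedged sketch of the competition and adaptation steps, assuming the common Euclidean-distance discriminant (the winner is the neuron whose weight vector is closest to the input, which for normalized vectors is equivalent to maximizing an inner-product discriminant) and the standard update w_j <- w_j + eta * h_j * (x - w_j):

```python
import numpy as np

def find_winner(x, weights):
    """Competition: return the lattice indices of the best-matching neuron.
    The discriminant used here is the Euclidean distance to x, so the winner
    is the neuron whose weight vector is closest to the input."""
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)

def adapt(weights, x, h, eta):
    """Adaptation: w_j <- w_j + eta * h_j * (x - w_j).
    Excited neurons (large h_j) are pulled toward x, so the response of this
    region of the lattice to a similar input presented later is enhanced."""
    return weights + eta * h[..., np.newaxis] * (x - weights)
```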
Cooperative Process
The winning neuron locates the center of a topological
neighborhood of cooperating neurons. The key question
is: How do we define a topological neighborhood that is
neurobiologically correct?
To answer this basic question, remember that there is
neurobiological evidence for lateral interaction among a
set of excited neurons in the human brain. In particular, a
neuron that is firing tends to excite the neurons in its
immediate neighborhood more than those farther away
from it, which is intuitively satisfying.
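A commonly used (though not the only possible) choice that captures this behavior is a Gaussian neighborhood centered on the winning neuron, h_j = exp(-d_j^2 / (2 * sigma^2)), where d_j is the lattice distance between neuron j and the winner and sigma is the effective width. A minimal sketch, continuing the NumPy notation used above:

```python
import numpy as np

def neighborhood(winner, rows, cols, sigma):
    """Cooperation: Gaussian neighborhood h_j = exp(-d_j^2 / (2*sigma^2)),
    where d_j is the distance on the lattice between neuron j and the winner.
    Neurons near the winner are excited strongly; distant ones hardly at all."""
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    d2 = (ii - winner[0]) ** 2 + (jj - winner[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))
```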
Two Phases of the Adaptive Process: Ordering and Convergence

1. Self-organizing or ordering phase. It is during this first phase of the adaptive process that the topological ordering of the weight vectors takes place.
2. Convergence phase. This second phase is needed to fine-tune the feature map and thereby provide an accurate statistical quantification of the input space.
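During the ordering phase, the neighborhood width and the learning-rate parameter are typically allowed to shrink with time; exponential decay schedules are one common choice. A hedged sketch, with sigma0, eta0, tau1, and tau2 as hypothetical starting values and time constants:

```python
import numpy as np

def schedules(n, sigma0=5.0, eta0=0.1, tau1=1000.0, tau2=1000.0):
    """Example decay schedules (assumed, not prescribed):
    sigma(n) = sigma0 * exp(-n / tau1)   # shrinking neighborhood width
    eta(n)   = eta0   * exp(-n / tau2)   # decreasing learning-rate parameter
    n is the discrete time step (iteration number)."""
    return sigma0 * np.exp(-n / tau1), eta0 * np.exp(-n / tau2)
```

Combined with the earlier sketches, one iteration for an input x would then read: i = find_winner(x, weights); h = neighborhood(i, rows, cols, sigma); weights = adapt(weights, x, h, eta), with sigma and eta taken from schedules(n).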
The SOM algorithm embodies two ingredients that define the algorithm:

• A projection from the continuous input data space onto the discrete output neural space. In this way, an input vector is mapped onto a “winning neuron” in the lattice structure in accordance with the similarity-matching step.

• A pointer from the output space back to the input space. In effect, the pointer defined by the weight vector of the winning neuron identifies a particular point in the input data space as the “image” of the winning neuron.
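To make the two ingredients concrete, a minimal sketch assuming the same NumPy weight array as before: the encoder realizes the projection from the input space to a lattice index, and the decoder realizes the pointer from that index back to a point in the input space.

```python
import numpy as np

def encode(x, weights):
    """Projection: continuous input x -> discrete index of the winning neuron."""
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)

def decode(index, weights):
    """Pointer: winning-neuron index -> its weight vector, the 'image' of
    that neuron back in the input data space."""
    return weights[index]
```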
Property 1. Approximation of the Input Space
Property 2. Topological Ordering
Property 3. Density Matching
Two different results are reported in the literature, depending on the encoding method.
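In the commonly cited one-dimensional analysis, with m(x) denoting the magnification factor and f(x) the input density, the two results are roughly: m(x) proportional to f(x)^(1/3) for minimum-distortion encoding, and m(x) proportional to f(x)^(2/3) for nearest-neighbor (winner-takes-all) encoding as used in the standard SOM.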
Property 4. Feature Selection
Given data from an input space, the self-organizing
map is able to select a set of best features for
approximating the underlying distribution.
This property is a natural culmination of Properties 1
through 3. In a loose sense, Property 4 brings to mind
the idea of principal-components analysis that was
discussed in the previous chapter, but with an
important difference, as illustrated in Fig. 9.7. Figure
9.7a shows a two-dimensional distribution of zero-
mean data points resulting from a linear
