Self Organizing Map (SOM)

Self Organizing Map (or Kohonen Map or SOM) is a type of Artificial Neural Network inspired by biological models of neural systems from the 1970s. It follows an unsupervised learning approach and trains its network through a competitive learning algorithm. SOM is used for clustering and mapping (or dimensionality reduction): it maps multidimensional data onto a lower-dimensional space, which reduces complex problems to a form that is easy to interpret. SOM has two layers: the Input layer and the Output layer.

(Figure: architecture of a Self Organizing Map with two clusters and n input features per sample.)

How does SOM work?

Let’s say we have input data of size (m, n), where m is the number of training examples and n is the number of features in each example. First, SOM initializes a weight matrix of size (n, C), where C is the number of clusters. Then, iterating over the input data, for each training example it updates the winning vector (the weight vector with the shortest distance, e.g. Euclidean distance, from the training example). The weight update rule is given by:

wij(new) = wij(old) + α(t) * (xik − wij(old))

where α(t) is the learning rate at time t, j denotes the winning vector, i denotes the ith feature of the training example, and k denotes the kth training example in the input data. After the SOM network is trained, the trained weights are used for clustering new examples: a new example falls in the cluster of its winning vector.
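A minimal sketch of this train-then-cluster procedure in NumPy (the function names train_som and predict, the toy data, and the halving learning-rate decay are illustrative assumptions, not taken from the text):

```python
import numpy as np

def train_som(X, C, alpha=0.5, epochs=10, seed=0):
    """Train a minimal SOM: X has shape (m, n); weights have shape (n, C)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((n, C))                           # Step 1: random initial weights
    for t in range(epochs):
        for x in X:
            d = ((W - x[:, None]) ** 2).sum(axis=0)  # squared Euclidean distance to each unit
            J = int(np.argmin(d))                    # winning vector
            W[:, J] += alpha * (x - W[:, J])         # pull the winner toward the input
        alpha *= 0.5                                 # decay the learning rate each epoch
    return W

def predict(W, x):
    """A new example falls in the cluster of its winning vector."""
    d = ((W - x[:, None]) ** 2).sum(axis=0)
    return int(np.argmin(d))

# toy usage: m = 4 examples, n = 4 features, C = 2 clusters
X = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
W = train_som(X, C=2)
print([predict(W, x) for x in X])   # cluster index for each training example
```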
Algorithm

Training:

Step 1: Initialize the weights wij; random values may be assumed. Initialize the learning rate α.

Step 2: Calculate the squared Euclidean distance from the input to each output unit:

D(j) = Σ (wij − xi)^2, where the sum runs over i = 1 to n, for each j = 1 to C

Step 3: Find the index J for which D(j) is minimum; J is the winning index.

Step 4: For each unit j within a specified neighborhood of J, and for all i, calculate the new weight:

wij(new)=wij(old) + α[xi – wij(old)]

Step 5: Update the learning rate using:

α(t+1) = 0.5 * α(t)

Step 6: Test the Stopping Condition.
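For a concrete feel for Steps 2 through 5, here is one illustrative training step (all numbers are made up). Take n = 2 features, C = 2 units, weight vectors w·1 = (0.2, 0.6) and w·2 = (0.9, 0.4), input x = (1, 0), and α = 0.5:

D(1) = (0.2 − 1)^2 + (0.6 − 0)^2 = 0.64 + 0.36 = 1.00
D(2) = (0.9 − 1)^2 + (0.4 − 0)^2 = 0.01 + 0.16 = 0.17

D(2) is smaller, so the winning index is J = 2, and its weights move halfway toward x:

w12(new) = 0.9 + 0.5(1 − 0.9) = 0.95
w22(new) = 0.4 + 0.5(0 − 0.4) = 0.20

Finally α(t+1) = 0.5 * 0.5 = 0.25, and the loop repeats until the stopping condition is met.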

Self-Organizing Maps (SOMs) are a type of neural network used for clustering and visualization
of high-dimensional data. They are unsupervised learning models that map input data onto a
lower-dimensional space (usually 2D), preserving the topological relationships in the data.

Two Basic Feature Mapping Models in SOM:

1. Kohonen’s SOM
   - Description: The Kohonen Self-Organizing Map is the most common SOM model. It arranges neurons on a regular grid (like a 2D lattice) and uses competitive learning to organize the neurons based on the input data. Each neuron is associated with a weight vector that represents a position in the input space. A sketch of this grid-based update follows this list.
   - Process:
     1. Input data is fed to the network.
     2. The neuron with the weight vector closest to the input (the Best Matching Unit, BMU) is identified.
     3. The weights of the BMU and its neighboring neurons are updated to move them closer to the input.
     4. This process repeats for multiple iterations, gradually organizing the map.
   - Key Feature: Preserves topological relationships, meaning similar inputs are mapped to nearby neurons.
2. Adaptive Resonance Theory (ART) Map
   - Description: ART-based models are designed for pattern recognition and clustering with a focus on learning stability. Unlike Kohonen’s SOM, they adaptively learn without disrupting previously learned patterns.
   - Process:
     1. The input is compared with the existing clusters in the network.
     2. If it fits within an acceptable "vigilance" threshold, it is assigned to an existing cluster.
     3. If it does not fit, a new cluster is created to represent the input.
   - Key Feature: Balances stability (preservation of learned patterns) with plasticity (the ability to learn new patterns).
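A minimal NumPy sketch of Kohonen’s grid-based update (the 10x10 grid, the exponential decay schedules, and the Gaussian neighborhood function are illustrative assumptions, not prescribed above):

```python
import numpy as np

def train_kohonen(X, rows=10, cols=10, epochs=50, alpha0=0.5, sigma0=3.0, seed=0):
    """Kohonen SOM: neurons sit on a rows x cols lattice, one weight vector each."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.random((rows, cols, n))          # weight vector per grid cell
    # lattice coordinates of every neuron, for measuring grid distance to the BMU
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(epochs):
        alpha = alpha0 * np.exp(-t / epochs)   # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)   # shrinking neighborhood radius
        for x in X:
            # steps 1-2: find the Best Matching Unit (BMU)
            d = ((W - x) ** 2).sum(axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # step 3: update the BMU and its lattice neighbors
            lat = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
            h = np.exp(-lat / (2 * sigma ** 2))  # Gaussian neighborhood weighting
            W += alpha * h[..., None] * (x - W)
    return W
```

Because every neuron near the BMU on the lattice is pulled toward the same input, neighboring grid cells end up with similar weight vectors, which is the topology preservation described above.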

Comparison:

- Learning Type: Both use unsupervised learning, but ART focuses more on stability and plasticity, while Kohonen’s SOM emphasizes topological mapping.
- Output: Kohonen’s SOM produces a structured map (grid), whereas ART creates clusters based on similarity and vigilance; a sketch of this vigilance test follows this list.
- Application: Kohonen’s SOM is used for visualization and dimensionality reduction, while ART is used for dynamic clustering where the data evolves over time.
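A minimal sketch of ART-style vigilance clustering (the cosine similarity measure, the vigilance value, and the prototype averaging are illustrative simplifications; a full ART network involves considerably more machinery):

```python
import numpy as np

def art_cluster(X, vigilance=0.8):
    """Assign each input to an existing prototype if it is similar enough,
    otherwise create a new cluster (stability vs. plasticity)."""
    prototypes, labels = [], []
    for x in X:
        if prototypes:
            P = np.array(prototypes)
            # cosine similarity to every existing prototype (illustrative choice)
            sims = P @ x / (np.linalg.norm(P, axis=1) * np.linalg.norm(x) + 1e-12)
            j = int(np.argmax(sims))
            if sims[j] >= vigilance:                       # passes the vigilance test
                prototypes[j] = 0.5 * (prototypes[j] + x)  # refine, don't overwrite
                labels.append(j)
                continue
        prototypes.append(np.asarray(x, dtype=float))      # no match: new cluster
        labels.append(len(prototypes) - 1)
    return labels, prototypes
```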
Properties of the Feature Map

The Feature Map in neural networks, particularly in Self-Organizing Maps (SOMs), has distinct
properties that make it effective for tasks like clustering, dimensionality reduction, and data
visualization. These properties include:

1. Topology Preservation

- The feature map maintains the topological relationships of the input data.
- Inputs that are close in the input space are mapped to nearby neurons in the feature map, preserving the neighborhood structure.

2. Dimensionality Reduction

- High-dimensional input data is projected onto a lower-dimensional grid (often 2D), making it easier to visualize and analyze.

3. Competitive Learning

- Neurons in the feature map compete to respond to the input data, with only the "winner" (Best Matching Unit, BMU) and its neighbors being updated.

4. Neighborhood Adaptation

- During training, not only the BMU but also its neighboring neurons are adjusted. The size of the neighborhood decreases over time, fine-tuning the map to the input data; a small decay sketch follows below.
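A small sketch of such a schedule (the exponential form and the time constant lam are a common choice, assumed here rather than taken from the text):

```python
import numpy as np

def neighborhood_radius(t, sigma0=3.0, lam=20.0):
    """Neighborhood radius shrinks exponentially as training iteration t grows."""
    return sigma0 * np.exp(-t / lam)

print([round(neighborhood_radius(t), 2) for t in (0, 20, 40)])  # [3.0, 1.1, 0.41]
```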

5. Unsupervised Learning

- Feature maps learn patterns in the data without requiring labeled input, making them suitable for clustering and exploratory data analysis.

6. Self-Organization

- The weights of the neurons in the map automatically adjust to organize themselves based on the input data patterns.

7. Nonlinear Mapping

- The feature map can capture nonlinear relationships in the data, making it more flexible than linear methods like PCA.

8. Density Matching

- Regions of the map with a higher density of neurons correspond to regions of the input space with more data points, effectively reflecting the input data distribution.

9. Smoothing and Generalization

- By involving neighboring neurons during weight updates, the feature map achieves a smooth representation of the input data, avoiding overfitting.

10. Scalability

- The size of the feature map (number of neurons) can be adjusted to handle varying data sizes and complexities, providing scalability for different applications.

Applications Based on These Properties:

- Clustering: Grouping similar data points together.
- Data Visualization: Mapping high-dimensional data onto a 2D grid for easy interpretation.
- Dimensionality Reduction: Reducing the complexity of the data while preserving its structure.
- Anomaly Detection: Identifying data points that deviate significantly from learned patterns; a sketch follows at the end of this section.

These properties make feature maps versatile tools in machine learning and data analysis.
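As one concrete example of the anomaly-detection use, a hedged sketch based on quantization error (the threshold and the (n, C) weight layout from the earlier training sketch are illustrative assumptions):

```python
import numpy as np

def is_anomaly(W, x, threshold=1.0):
    """Flag x as anomalous if even its Best Matching Unit is far away.
    W has shape (n, C), as in the training sketch earlier in this document."""
    d = ((W - x[:, None]) ** 2).sum(axis=0)  # squared distance to every unit
    return float(d.min()) > threshold        # large quantization error => anomaly
```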
