
Self-Organizing Maps

Self-Organizing Maps (SOM)


• The concept of a self-organizing map, or SOM, was first put forth by Kohonen.
• It is an unsupervised neural network, trained with unsupervised learning techniques, that reduces data dimensions by building a low-dimensional, discretized representation of the input space of the training samples.
What Are Self-Organizing Maps?
• A self-organizing map, also known as a Kohonen map or SOM, is a type of artificial neural network inspired by biological models of neural systems from the 1970s.
• It follows an unsupervised learning methodology and trains its network with a competitive learning algorithm.
• To reduce complex problems to straightforward interpretations, SOM is used for mapping and clustering (or dimensionality reduction) procedures that map multidimensional data onto a lower-dimensional space.
• The SOM is made up of two layers: the input layer and the output layer. It is also known as the Kohonen Map.
How Do SOMs Work?
•Consider an input set with the dimensions (m, n), where m is the number of training examples and n is the number of features present in each example.
•The weights of size (n, C), where C is the number of clusters, are first initialized.
•Iterating over the input data, the winning vector (the weight vector with the shortest distance, e.g. the Euclidean distance, from the training example) is updated for each training example by:

w_ij = w_ij(old) + alpha(t) * (x_ik - w_ij(old))

•Here, i stands for the ith feature of the training example, j indexes the winning vector, alpha(t) is the learning rate at time t, and k indexes the kth training example of the input data.
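Below is a minimal NumPy sketch of this update rule (competitive learning only, no neighborhood yet). The linearly decaying alpha(t) schedule, the fixed random seed, and the name train_som are illustrative assumptions, not part of the notes.

import numpy as np

def train_som(X, C, n_iters=100, alpha0=0.5):
    # X: (m, n) array -- m training examples, n features; C: number of clusters.
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, C))                     # weights of size (n, C)
    for t in range(n_iters):
        alpha = alpha0 * (1 - t / n_iters)     # alpha(t): decaying rate (assumed schedule)
        for k in range(m):
            x = X[k]
            # winning vector j: the column of W closest to x (Euclidean distance)
            j = np.argmin(np.linalg.norm(W - x[:, None], axis=0))
            # w_ij = w_ij(old) + alpha(t) * (x_ik - w_ij(old))
            W[:, j] += alpha * (x - W[:, j])
    return W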
Algorithm
•Step 1: Initialize each node's weights w_ij to a random value.
•Step 2: Select an input vector x_k at random.
•Step 3: Repeat steps 4 and 5 for every node on the map.
•Step 4: Compute the Euclidean distance between the input vector x(t) and the node's weight vector w_ij (initially t, i, and j are all 0).
•Step 5: Track the node that produces the smallest distance so far.
•Step 6: Determine the overall Best Matching Unit (BMU), i.e. the node with the smallest distance among all those calculated.
•Step 7: Find the BMU's topological neighborhood β_ij(t) and its radius σ(t) in the Kohonen Map.
•Step 8: Update the weight vectors of the BMU and of the nodes in its neighborhood: w_ij(t+1) = w_ij(t) + α(t) β_ij(t) (x(t) − w_ij(t)).
•Step 9: Repeat from step 2 until the chosen iteration limit t = n is reached.
•Note: Step 1 represents the initiation phase, whereas steps 2 through 9 represent the training phase.
•Here,
•t → current iteration
•X → input vector
•X(t) → the input vector instance at iteration t
•w → weight vector
•w_ij → the association weight of the grid node at row i, column j
•i → the node's row coordinate in the grid
•j → the node's column coordinate in the grid
•σ(t) → the radius of the neighborhood function, which determines how far out neighbor nodes in the 2D grid are examined when updating vectors; it gradually shrinks over time
•β_ij → the neighborhood function, representing the distance between node (i, j) and the BMU
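The sketch below extends the earlier update into a full Kohonen Map following steps 1 through 9: a 2D grid of nodes, a Gaussian neighborhood β_ij, and shrinking α(t) and σ(t). The exponential decay schedules and the Gaussian form of β_ij are common choices assumed here, not mandated by the notes.

import numpy as np

def train_kohonen_map(X, rows, cols, n_iters=500, alpha0=0.5):
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((rows, cols, n))            # step 1: random weights w_ij
    sigma0 = max(rows, cols) / 2               # initial radius (assumes a grid larger than 2x2)
    tau = n_iters / np.log(sigma0)             # time constant for the sigma(t) decay
    # grid coordinates (i, j) of every node, used for neighborhood distances
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    for t in range(n_iters):
        x = X[rng.integers(m)]                 # step 2: pick x(t) at random
        alpha = alpha0 * np.exp(-t / n_iters)  # learning rate alpha(t)
        sigma = sigma0 * np.exp(-t / tau)      # shrinking radius sigma(t)
        # steps 3-6: distance from x to every node's weights; the BMU is the argmin
        d = np.linalg.norm(W - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(d), d.shape)
        # step 7: Gaussian topological neighborhood beta_ij around the BMU
        beta = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
        # step 8: w_ij(t+1) = w_ij(t) + alpha(t) * beta_ij(t) * (x(t) - w_ij(t))
        W += alpha * beta[:, :, None] * (x - W)
    return W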
Uses of Self-Organizing Maps
• Self-Organizing Maps, which are not restricted to linear mappings, have the advantage of keeping the structural information of the training data intact.
• Principal Component Analysis, by contrast, can simply result in data loss when applied to high-dimensional data whose dimension is reduced to two.
• Self-Organizing Maps can be a great alternative to PCA for dimensionality reduction if the data has many dimensions and each given dimension is relevant.
• Using this technique, organized relational clusters are created by identifying feature organizations in the dataset.
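To make the PCA comparison concrete, the snippet below reduces the same synthetic data to two dimensions both ways. It reuses train_kohonen_map from the sketch above, assumes scikit-learn is installed, and the dataset and grid size are arbitrary illustrative choices.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.random((200, 10))                      # 200 samples, 10 features (synthetic)

X_pca = PCA(n_components=2).fit_transform(X)   # linear projection onto 2 components

W = train_kohonen_map(X, rows=8, cols=8)       # nonlinear, discretized 2D map
def bmu_coords(x, W):
    d = np.linalg.norm(W - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)
X_som = np.array([bmu_coords(x, W) for x in X])  # each sample -> its BMU's (i, j)

print(X_pca.shape, X_som.shape)                # (200, 2) (200, 2)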
Self-Organizing Maps Architecture
•Two crucial layers make up self-organizing maps: the input layer and the output layer,
commonly referred to as a feature map.
•The input layer is the first layer in a self-organizing map.
•Every data point in the dataset competes for a representation on the map in order to recognize itself.
•To put it simply, learning takes place in the following ways (a short runnable example follows the list):

• Each node is examined to determine whose weights are most similar to the input vector. The winning node is called the Best Matching Unit (BMU).
• The BMU's neighborhood is then determined. Over time, the number of neighbors tends to decline.
• The winning weight is adjusted to more closely resemble the sample vector.
• The neighbors also change to become more like the sample vector.
• The closer a node is to the BMU, the more its weights change; the farther it is, the less they change.
• Repeat from step two for N iterations.
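In practice this loop is available off the shelf. One possible choice is the third-party MiniSom library (assuming it is installed, e.g. via pip install minisom); the grid size and iteration count below are arbitrary.

import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(2)
data = rng.random((100, 4))               # 100 samples, 4 features (synthetic)

som = MiniSom(6, 6, 4, sigma=1.0, learning_rate=0.5)  # 6x6 output grid, 4 inputs
som.random_weights_init(data)             # initialize weights from random samples
som.train_random(data, 1000)              # N = 1000 training iterations

print(som.winner(data[0]))                # (i, j) grid coordinates of the BMU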
Pros Of Self-Organizing Maps
•Techniques like dimensionality reduction and grid clustering make the data simple to interpret and understand.
•Self-organizing maps can handle a variety of classification problems while simultaneously producing an insightful, practical summary of the data.
Cons Of Self-Organizing Maps
•Because it does not build a generative model of the data, the model cannot capture how the data is generated.
•Self-Organizing Maps perform poorly on categorical data, and even worse on mixed types of data.
•Model preparation is comparatively slow, making it challenging to train on slowly evolving data.