CHAPTER 5
UNSUPERVISED LEARNING NETWORKS
UNSUPERVISED LEARNING
• These competitive nets are those where the weights remain fixed, even during the training process. The idea of competition is used among neurons to enhance the contrast in their activation functions.
• Three such nets are: Maxnet, Mexican hat, and Hamming net.
Original Slides: (Sivanandam & Deepa). Edited by: (Chiranji Lal), 2/6/2019
COMPETITIVE LEARNING
Output units compete, so that eventually only one neuron (the one with the most input) is active in response to each input pattern. The total weight from the input layer to each output neuron is limited: if some connections are strengthened, others must be weakened.
• where x is the input vector, wj the weight vector for unit j, and alpha the learning rate, whose value decreases monotonically as training continues.
• There exist two methods to determine the winner of the network
during competition: (a) Euclidean distance and (b) dot product.
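As a minimal sketch of the two winner-selection rules (function names are illustrative, not from the text): with Euclidean distance the winner is the unit whose weight vector is closest to the input, while with the dot product the winner is the unit with the largest net input.

```python
import numpy as np

def winner_euclidean(x, W):
    """Winner = unit whose weight vector is closest to x (smallest distance)."""
    dists = np.linalg.norm(W - x, axis=1)  # D(j) = ||x - w_j|| for each unit j
    return int(np.argmin(dists))

def winner_dot(x, W):
    """Winner = unit with the largest net input w_j . x."""
    return int(np.argmax(W @ x))

x = np.array([1.0, 0.0])
W = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(winner_euclidean(x, W))  # unit 0 is closest to x
print(winner_dot(x, W))        # unit 0 also has the largest dot product
```

Note that the two rules agree only when the weight vectors are normalized; with unnormalized weights the dot-product rule favors units with long weight vectors.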
Maxnet
• This is also a fixed-weight network, which serves as a subnet for selecting the node having the highest input. All the nodes are fully interconnected, and the weights on these interconnections are symmetric.
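The Maxnet iteration can be sketched as follows (a minimal illustration, not the textbook's exact listing): each node has a self-excitatory weight of 1 and inhibits every other node with a small weight eps, and the activations are updated until only one node remains positive.

```python
import numpy as np

def maxnet(a, eps=0.15, max_iter=100):
    """Iterate Maxnet mutual inhibition until only one node stays positive.
    eps is the inhibitory weight, chosen with 0 < eps < 1/m (m = number of nodes)."""
    a = np.asarray(a, dtype=float)
    for _ in range(max_iter):
        # each node excites itself (weight 1) and inhibits the others (weight -eps),
        # passed through the ramp activation f(x) = max(0, x)
        a = np.maximum(0.0, a - eps * (a.sum() - a))
        if np.count_nonzero(a) <= 1:
            break
    return a

a = maxnet([0.2, 0.4, 0.6, 0.8])
print(a)  # only the node that started largest (index 3) remains positive
```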
Hamming Network
• In most neural networks using unsupervised learning, it is essential to compute distances and perform comparisons. The Hamming network is such a network: every given input vector is clustered into one of several groups.
• Following are some important features of Hamming networks:
• Lippmann started working on Hamming networks in 1987.
• It is a single layer network.
• The inputs can be either binary {0, 1} or bipolar {-1, 1}.
• The weights of the net are calculated by the exemplar vectors.
• (The weight vector for an output unit in a clustering net is the exemplar vector or code-book vector for the pattern of inputs which the net has placed on that output unit.)
• It is a fixed weight network which means the weights would
remain the same even during training.
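The similarity computation of the first layer can be sketched as follows (a minimal illustration under the bipolar convention; names are not from the text): for bipolar vectors of length m, (m + x · e_j) / 2 equals the number of components of x that agree with exemplar e_j, i.e. m minus the Hamming distance. A Maxnet over these scores would then select the best-matching cluster.

```python
import numpy as np

def hamming_similarity(x, exemplars):
    """For bipolar vectors, (m + x . e_j) / 2 equals the number of
    components of x that agree with exemplar e_j (m minus the Hamming distance)."""
    m = len(x)
    return (m + exemplars @ x) / 2.0

exemplars = np.array([[ 1, -1,  1, -1],
                      [-1, -1, -1,  1]])
x = np.array([1, -1, -1, -1])
sims = hamming_similarity(x, exemplars)
print(sims)                  # number of agreements with each exemplar
print(int(np.argmax(sims)))  # cluster whose exemplar x matches best
```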
UNSUPERVISED LEARNING
No help from the outside.
Learning by doing.
Introduction
The network by itself should discover any relationship of interest, such as features, patterns, contours, correlations, or categories in the input data, and thereby translate the discovered relationship into output.
Such networks are also called self-organizing networks.
SELF-ORGANIZATION
Network Organization is fundamental to the brain
• Functional structure.
• Layered structure.
• Both parallel processing and serial processing require organization
of the brain.
SELF-ORGANIZING NETWORKS
Discover significant patterns or features in the input data.
Architecture
ARCHITECTURE OF KSOFM
COMPETITION OF KSOFM
Each neuron in an SOM is
assigned a weight vector with the
same dimensionality N as the
input space.
CO-OPERATION OF KSOFM
The activation of the winning neuron is spread to neurons in its
immediate neighborhood.
• This allows topologically close neurons to become
sensitive to similar patterns.
ADAPTATION OF KSOFM
During training, the winner neuron
and its topological neighbors are
adapted to make their weight
vectors more similar to the input
pattern that caused the activation.
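The three stages above (competition, co-operation, adaptation) can be sketched in one training step of a 1-D Kohonen map (a minimal illustration with a Gaussian neighborhood; the function name and parameters are assumptions, not from the text):

```python
import numpy as np

def som_step(W, grid, x, alpha=0.5, sigma=1.0):
    """One training step of a 1-D Kohonen map:
    competition (closest unit wins), co-operation (Gaussian neighborhood
    around the winner), adaptation (weights move toward the input)."""
    winner = int(np.argmin(np.linalg.norm(W - x, axis=1)))        # competition
    h = np.exp(-((grid - grid[winner]) ** 2) / (2 * sigma ** 2))  # co-operation
    W += alpha * h[:, None] * (x - W)                             # adaptation
    return W, winner

grid = np.arange(5)  # positions of 5 units on a 1-D map
W = np.random.default_rng(0).random((5, 2))
x = np.array([0.9, 0.1])
W, winner = som_step(W, grid, x)
```

In full training, alpha and sigma both decrease over time, so the map first orders itself globally and then fine-tunes locally.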
KSOFM ALGORITHM
EXAMPLE OF KSOFM
LEARNING VECTOR QUANTIZATION (LVQ)
Supervised or Unsupervised?
• The Learning Vector Quantization algorithm is a
supervised neural network that uses a competitive
(winner-take-all) learning strategy.
• It is related to other supervised neural networks
such as the Perceptron and the Back-propagation
algorithm.
• It is related to other competitive-learning neural networks such as the Self-Organizing Map, a similar algorithm for unsupervised learning with the addition of connections between the neurons.
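The winner-take-all strategy with supervision can be sketched as one LVQ1 training step (a minimal illustration; the function name is an assumption, not from the text): the winning codebook vector moves toward the input if its class label matches the target, and away from it otherwise.

```python
import numpy as np

def lvq1_step(W, labels, x, t, alpha=0.1):
    """One LVQ1 step: the winning codebook vector moves toward x if its
    class label matches the target t, and away from x otherwise."""
    c = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # winner-take-all
    if labels[c] == t:
        W[c] += alpha * (x - W[c])   # reinforce correct classification
    else:
        W[c] -= alpha * (x - W[c])   # repel a misclassifying codebook vector
    return c

W = np.array([[0.0, 0.0], [1.0, 1.0]])  # two codebook (reference) vectors
labels = [0, 1]                          # their known class labels
c = lvq1_step(W, labels, np.array([0.2, 0.1]), t=0)
```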
LVQ: Flowchart
• In case of training, a set of training input vectors with a known classification is provided, along with some initial distribution of reference vectors.
• Assign initial weights and classifications randomly.
• Alternatively, initialize using the K-means clustering method.
VARIANTS OF LVQ
• LVQ 1
• LVQ 2
• LVQ 2.1
• LVQ 3
ADAPTIVE RESONANCE THEORY (ART) NETWORK
• Adaptive Resonance Theory (ART) networks are always open to new learning (adaptive) without losing the old patterns (resonance).
• Basically, an ART network is a vector classifier which accepts an input vector and classifies it into one of the categories depending upon which of the stored patterns it resembles the most.
Fundamental Architecture
• Three groups of neurons are used to build an ART
network.
• Input processing unit (F1 layer)
• Input portion
• Interface portion
• Clustering units (F2)
• Control mechanism (controls the degree of similarity of patterns placed on the same cluster)
ART1
• In ART1 network, it is not necessary to present an input
pattern in a particular order; it can be presented in any
order.
• The ART 1 network can be practically implemented by analog circuits governed by differential equations, i.e., the bottom-up and top-down weights are controlled by differential equations.
• The ART 1 network runs autonomously throughout.
• It doesn't require any external control signals and can run
stably with infinite patterns of input data.
• ART1 net is trained using fast learning method, in which the
weights reach equilibrium during each learning trial.
• During the resonance phase, the activations of F1 units do
not change; hence the equilibrium weights can be
determined exactly.
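The reset mechanism that decides whether resonance occurs can be sketched as the ART1 vigilance test (a minimal illustration; the function name and example vectors are assumptions, not from the text): a candidate cluster is accepted only if the fraction of active input bits matched by its stored top-down pattern reaches the vigilance parameter rho.

```python
import numpy as np

def vigilance_test(x, t_J, rho=0.7):
    """ART1 reset check: resonance occurs when the fraction of active input
    bits matched by the candidate cluster's top-down weights reaches the
    vigilance parameter rho; otherwise the cluster is reset (rejected)."""
    match = np.logical_and(x, t_J).sum() / x.sum()
    return bool(match >= rho)

x   = np.array([1, 1, 0, 1])  # binary input pattern (3 active bits)
t_J = np.array([1, 1, 0, 0])  # stored top-down pattern of candidate cluster J
print(vigilance_test(x, t_J, rho=0.7))  # match = 2/3 < 0.7 -> reset
print(vigilance_test(x, t_J, rho=0.6))  # match = 2/3 >= 0.6 -> resonance
```

A high rho yields many fine-grained clusters; a low rho yields fewer, coarser clusters.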
ART1 UNITS
ART1 Network is made up of two units
Computational units
• Input unit (F1 unit – input and interface).
• Cluster unit (F2 unit – output).
• Reset control unit (controls degree of similarity).
Supplemental units
• One reset control unit.
• Two gain control units.
ART2 NETWORK
ART2 ALGORITHM
SUMMARY
This chapter discussed the various unsupervised learning networks, including the Maxnet, Mexican hat, and Hamming nets, the Kohonen self-organizing feature map (KSOFM), learning vector quantization (LVQ), and adaptive resonance theory (ART) networks.