Adaptive Resonance Theory (ART) Network
Adaptive Resonance Theory (ART) is a family of algorithms for unsupervised learning developed by Carpenter and Grossberg. ART is similar to many iterative clustering algorithms, where each pattern is processed by:
- finding the "nearest" cluster (a prototype or template) to that exemplar
- updating that cluster to be "closer" to the exemplar
ART - Principle
Incorporate new data by checking for similarity between the new data and memories already learned. If there is a close enough match, the new data is learned (the matching memory is refined). Otherwise, the new data is stored as a new memory. In ART, the changes in the activations of units and in the weights are governed by differential equations. When a match occurs, weight changes take place only during the resonance period.
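As a concrete illustration, below is a minimal sketch of this principle for binary inputs, in the style of ART1. The AND-based similarity, the intersection update, and the sequential scan (a full ART network selects candidates competitively) are simplifying assumptions, and the names art_learn and vigilance are illustrative.

```python
import numpy as np

def art_learn(x, memories, vigilance=0.8):
    """Match input x against stored memories; refine one or create a new one."""
    x = np.asarray(x, dtype=bool)
    for j, w in enumerate(memories):
        overlap = x & w                       # features shared with memory j
        if overlap.sum() / x.sum() >= vigilance:
            memories[j] = overlap             # close enough: refine this memory
            return j
    memories.append(x)                        # no close match: form a new memory
    return len(memories) - 1

memories = []
print(art_learn([1, 1, 0, 1], memories))      # -> 0 (new memory created)
print(art_learn([1, 1, 0, 0], memories))      # -> 0 (matches memory 0, refines it)
```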
Types
- ART1: designed for discrete (binary) input.
- ART2: designed for continuous input.
- ARTMAP: combines two ART models to form a supervised learning model.
Example
Identification and recognition of food varieties, textures, and images.
Classification Process
The ART classification process consists of three phases:

Recognition
A set of patterns is stored in the weights associated with the recognition-layer neurons, one for each classification category. The output of the best-matching neuron becomes 1; all other outputs are 0.

Comparison
Looks for similarity between the stored pattern and the input.

Search
If the match is not close enough, the winning unit is inhibited and the search continues among the remaining cluster units (see the Reset Mechanism below).
Architecture
- Input Processing Field (F1 layer)
- Cluster Units (F2 layer)
- Reset Mechanism: controls the degree of similarity required of patterns placed on the same cluster.

F1(a) simply presents the input vector, and some preprocessing may be performed. F1(b), the interface portion, combines the input signal from F1(a) with the signal from the F2 layer for comparison of similarity.
Cluster Units
This is a competitive layer. The cluster unit with the largest net input is selected to learn the input pattern; the activations of all other F2 units are set to zero. The interface units then combine information from the input and the winning cluster unit.
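A sketch of this winner-take-all competition is below. The bottom-up weight matrix and the set of inhibited units are assumed to be maintained elsewhere, and the function name is illustrative.

```python
import numpy as np

def f2_compete(x, bottom_up_w, inhibited=()):
    """Winner-take-all over the F2 layer.

    bottom_up_w: (n_clusters, n_features) weight matrix (values assumed given).
    inhibited:   indices of F2 units already reset for this input.
    """
    net = (bottom_up_w @ x).astype(float)     # net input to each cluster unit
    net[list(inhibited)] = -np.inf            # reset units cannot compete
    winner = int(np.argmax(net))
    y = np.zeros(len(net))
    y[winner] = 1.0                           # winner's activation is 1, rest 0
    return winner, y
```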
Reset Mechanism
A cluster may or may not be allowed to learn the pattern, depending on the similarity between its top-down weight vector and the input vector. If the cluster unit is not allowed to learn, it is inhibited and a new cluster unit is selected as the candidate.
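The resulting search-with-reset loop might look like the following sketch. The parameter names and the default vigilance value are assumptions; candidate order follows the bottom-up net input, as described above.

```python
import numpy as np

def search_with_reset(x, bottom_up_w, top_down_w, rho=0.75):
    """Inhibit units that fail the vigilance test and re-run the competition."""
    x = np.asarray(x, dtype=bool)
    net = (bottom_up_w @ x).astype(float)     # largest net input competes first
    for _ in range(len(net)):
        winner = int(np.argmax(net))
        match = (x & top_down_w[winner]).sum() / x.sum()
        if match >= rho:
            return winner                     # similar enough: allowed to learn
        net[winner] = -np.inf                 # reset: inhibit, pick a new candidate
    return None                               # every unit was reset
```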
[Figure: the F2 layer (nodes y) above the F1 layer, which receives the input I]
Step 1: F1 (short-term memory) contains a vector of size M, and there are N nodes within F2; each node within F2 stores a vector of size M. The set of nodes within F2 is referred to as y. When the input I is presented, one F2 node is selected as a candidate hypothesis.
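In code, this state can be held directly as arrays. The dimensions below are arbitrary examples; the all-ones template initialization is the usual ART1 convention, so an uncommitted node can match any input.

```python
import numpy as np

M = 6                                  # size of the input vector held in F1
N = 4                                  # number of nodes in F2

I = np.zeros(M, dtype=bool)            # F1: short-term memory for the current input
y = np.ones((N, M), dtype=bool)        # F2: one M-sized template per node
```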
Step 2: Once the hypothesis has been formed, it is sent back to the F1 layer for matching. Let I* denote the overlap between the input I and the hypothesis (node j's top-down template), and let T_j(I*) represent the level of matching between I and I* for node j. Then:

    T_j(I*) = |I*| / |I|

Resonance occurs when T_j(I*) ≥ ρ, where the vigilance parameter ρ is the minimum fraction of the input that must remain in the matched pattern for resonance to occur.
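This matching test transcribes directly into code; binary vectors are assumed, as in ART1, and the function name is illustrative.

```python
import numpy as np

def match_level(I, template):
    """T_j(I*) = |I*| / |I|, where I* = I AND w_j (node j's top-down template)."""
    I = np.asarray(I, dtype=bool)
    I_star = I & np.asarray(template, dtype=bool)
    return I_star.sum() / I.sum()

# Resonance for node j occurs when match_level(I, w_j) >= rho (the vigilance).
```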
Step 3: If the hypothesis is rejected, a reset command is sent back to the F2 layer. In this situation, the jth node within F2 is no longer a candidate, so the process repeats with the next candidate node (node j+1).
Step 4:
1. If the hypothesis was accepted, the winning node updates its weight vector to incorporate the input's values.
2. If none of the nodes accepted the hypothesis, a new node is created within F2. As a result, the system forms a new memory.
In either case, the vigilance parameter ensures that the new information does not cause older knowledge to be forgotten.
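Putting steps 1 through 4 together, a single input presentation might look like the following simplified sketch. It drives the search by template overlap instead of separate bottom-up weights, so treat it as an illustration of the flow rather than a full ART1 implementation.

```python
import numpy as np

def art1_step(I, templates, rho=0.75):
    """One presentation of input I: search, then resonance or a new node.

    templates: list of boolean top-down vectors, one per committed F2 node.
    """
    I = np.asarray(I, dtype=bool)
    net = np.array([float((I & w).sum()) for w in templates])
    for _ in range(len(templates)):           # Steps 1-3: hypothesize, match, reset
        j = int(np.argmax(net))
        if (I & templates[j]).sum() / I.sum() >= rho:
            templates[j] = I & templates[j]   # Step 4.1: resonance, refine node j
            return j
        net[j] = -np.inf                      # reset node j and try the next one
    templates.append(I.copy())                # Step 4.2: commit a new F2 node
    return len(templates) - 1
```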
Neural Selector
Basically, most applications of neural networks fall into the following five categories:
- Prediction
- Classification
- Data association
- Data conceptualization
- Data filtering