Unit 4

Associative memory: Auto-associative and Hetero-associative memory and their architecture

Associative memory is a type of memory that allows for the retrieval of information based on a
partial or complete input. It can be categorized into two main types: auto-associative memory and
hetero-associative memory. Each has distinct architectures and functions.

1. Auto-Associative Memory
Definition: Auto-associative memory, also known as auto-associative networks, is designed to
retrieve the entire stored pattern when provided with a partial or noisy version of that pattern. It
effectively maps input patterns to themselves.
Architecture:
• Neurons: The network consists of a set of neurons, usually arranged in a single layer.
• Weights: The weights are typically symmetric (i.e., wij = wji). This symmetry allows the
network to recall the stored pattern even if part of it is missing.
• Activation Function: Commonly, a binary activation function (such as a step function) is
used, but continuous activation functions can also be applied.
Learning Rule: One common learning rule for auto-associative memory is Hebbian learning,
which updates the weights based on the outer product of the input vector with itself:
W = Σp xp ⋅ (xp)T   (equivalently, wij = Σp xip ⋅ xjp)
where xp represents the p-th pattern.


Operation:
1. When an input pattern is presented, the neurons calculate their activations based on the
weights and the input.
2. The network iterates until it converges to one of the stored patterns.
Applications:
• Pattern completion (e.g., recalling a complete word from a few letters).
• Noise reduction (e.g., recognizing a corrupted image).

2. Hetero-Associative Memory
Definition: Hetero-associative memory retrieves a different output pattern based on a given input
pattern. It maps input patterns to different output patterns, which may not resemble the input.
Architecture:
• Neurons: The network typically consists of two layers: an input layer and an output layer.
• Weights: The weights are generally not symmetric and are learned to connect specific input
patterns to specific output patterns.
Learning Rule: A common approach is supervised learning, where the weights are adjusted
based on the difference between the desired output and the actual output. This can be achieved
through methods like the delta rule:
Δwij = η ⋅ (yj − ŷj) ⋅ xi
where yj is the desired output, ŷj is the predicted output for output neuron j, xi is the i-th input, and η is the learning rate.
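As a rough illustration, the following Python sketch applies this delta-rule update repeatedly for a single input-output pair. The weight layout W[i, j] (input neuron i, output neuron j) matches the wij convention above; the learning-rate value, example vectors, and helper name are illustrative choices, not part of the original text.

```python
import numpy as np

# Minimal sketch of the delta-rule update for a hetero-associative layer.
# W[i, j] connects input neuron i to output neuron j; eta is the learning rate.

def delta_rule_step(W, x, y_desired, eta=0.1):
    """W <- W + eta * x (y_desired - y_hat)^T, with y_hat = W^T x."""
    y_hat = W.T @ x                      # predicted output for input x
    error = y_desired - y_hat            # desired minus predicted output
    return W + eta * np.outer(x, error)  # adjust weights to reduce the error

W = np.zeros((3, 2))                     # 3 input neurons, 2 output neurons
x = np.array([1.0, 0.0, 1.0])
y = np.array([0.0, 1.0])
for _ in range(50):                      # repeat until the prediction approaches y
    W = delta_rule_step(W, x, y)
print(np.round(W.T @ x, 2))              # approaches [0. 1.]
```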


Operation:
1. When an input pattern is presented, it is processed through the network to produce an output
pattern.
2. The network learns to adjust its weights based on the input-output pairs.
Applications:
• Translation tasks (e.g., translating phrases from one language to another).
• Classifying inputs into different categories.

Summary of Differences
Feature                 Auto-Associative Memory               Hetero-Associative Memory
Input/Output Relation   Maps patterns to themselves           Maps input patterns to different output patterns
Architecture            Single-layer (often symmetric)        Two-layer (input and output)
Learning Rule           Hebbian learning                      Supervised learning (e.g., delta rule)
Applications            Pattern completion, noise reduction   Classification, translation

Conclusion
Associative memory, through its auto-associative and hetero-associative types, plays a crucial role
in various applications that require pattern recognition and retrieval. Understanding their
architectures and functions helps in designing effective systems for tasks like data reconstruction,
translation, and classification.
Training (Insertion) and Testing (Retrieval) Algorithms Using the Hebb Rule and the Outer Product Rule

1. Hebbian Learning Rule


Hebbian Learning Rule: Hebbian learning is based on the principle that connections between
neurons strengthen when they are activated simultaneously. This is often summarized by the phrase
"cells that fire together, wire together."

Training (Insertion) Algorithm


1. Initialize Weights:
• Create a weight matrix W initialized to zero. If there are n neurons, W will be an
n × n matrix.
2. Input Patterns:
• Let Pp be the p-th pattern, where each pattern is represented as a binary vector
[x1p, x2p, … , xnp].
3. Weight Update:
• For each pattern Pp, update the weight matrix using the outer product:
W = W + Pp ⋅ (Pp)T
This means for each neuron i and j:
wij = wij + xip ⋅ xjp
(Note: Diagonal elements wii can be adjusted based on application, often set to zero.)
4. Repeat for all Patterns:
• Continue the weight update for each input pattern until all desired patterns are stored.

Testing (Retrieval) Algorithm


1. Present Input:
• Provide a partial or noisy version of a stored pattern X.
2. Activation Calculation:
• Compute the output activations using the weight matrix:
Y = W ⋅ X
where Y is the output vector and X is the presented (possibly partial) input.


3. Thresholding (or Activation Function):
• Apply a threshold or an activation function (such as a step function) to determine the
final output:
yi = 1 if (W ⋅ X)i ≥ θ, otherwise 0 (use ±1 values for bipolar patterns)
where θ is the threshold, commonly 0.
4. Iteration (if necessary):


• If using a recurrent architecture, iterate the activation calculation until the outputs
stabilize.
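To make the insertion and retrieval steps above concrete, here is a minimal Python sketch of an auto-associative memory trained with the Hebb rule, assuming bipolar (+1/−1) patterns; the example patterns and helper names are illustrative.

```python
import numpy as np

# Sketch of Hebbian storage and recall for an auto-associative memory.
# Bipolar (+1/-1) patterns are assumed; names and patterns are illustrative.

def train_hebb(patterns):
    """Build W as the sum of outer products of each pattern with itself."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)                 # diagonal elements often set to zero
    return W

def recall(W, x, steps=5):
    """Iteratively apply the weights and a sign threshold until stable."""
    y = x.copy()
    for _ in range(steps):
        y_new = np.where(W @ y >= 0, 1, -1)
        if np.array_equal(y_new, y):
            break
        y = y_new
    return y

patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1]])
W = train_hebb(patterns)
noisy = np.array([ 1, -1,  1, -1, -1, -1])  # pattern 0 with one flipped bit
print(recall(W, noisy))                     # expected: [ 1 -1  1 -1  1 -1]
```

Setting the diagonal of W to zero and iterating the recall step correspond directly to the diagonal-adjustment and iteration points in the algorithms above.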

2. Outer Product Rule


Outer Product Rule: This rule is commonly used for hetero-associative memory networks where
input patterns map to different output patterns.

Training (Insertion) Algorithm


1. Initialize Weights:
• Similar to the Hebbian rule, create a weight matrix W initialized to zero.
2. Input and Output Patterns:
• Let Xp be the input pattern and Yp be the desired output pattern.
3. Weight Update:
• For each input-output pair (Xp, Yp), update the weight matrix as follows:
W = W + Xp ⋅ (Yp)T
This means for each neuron i in the input layer and j in the output layer:
wij = wij + xip ⋅ yjp
4. Repeat for all Patterns:
• Continue updating the weight matrix for all input-output pairs.

Testing (Retrieval) Algorithm


1. Present Input:
• Provide an input pattern X to the network.
2. Output Calculation:
• Calculate the output using the weight matrix:
Y = WT ⋅ X
(the transpose appears because W was built as Xp ⋅ (Yp)T, so its rows are indexed by input neurons and its columns by output neurons)
3. Activation Function (if needed):
• If outputs are continuous, you may apply an activation function (like softmax or a
threshold) to convert them to a desired format.
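A corresponding minimal sketch of outer-product insertion and retrieval for a hetero-associative memory, again assuming bipolar patterns and the W = Xp ⋅ (Yp)T storage convention (so retrieval uses WT); the example pairs are illustrative.

```python
import numpy as np

# Sketch of outer-product storage and retrieval for a hetero-associative memory.
# W accumulates x_p y_p^T, so retrieval uses W^T; patterns are bipolar (+1/-1).

def train_outer_product(X, Y):
    """W = sum_p x_p y_p^T, with x_p the p-th input and y_p the p-th output."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    for x, y in zip(X, Y):
        W += np.outer(x, y)
    return W

def retrieve(W, x):
    """Y = sign(W^T x): map an input pattern to its associated output."""
    return np.where(W.T @ x >= 0, 1, -1)

X = np.array([[ 1, -1,  1, -1],
              [ 1,  1, -1, -1]])              # two stored input patterns
Y = np.array([[ 1, -1],
              [-1,  1]])                      # their associated output patterns
W = train_outer_product(X, Y)
print(retrieve(W, X[0]))                      # expected: [ 1 -1]
print(retrieve(W, np.array([1, -1, 1, 1])))   # noisy X[0], still expected: [ 1 -1]
```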

Summary
Both Hebbian learning and the Outer Product rule provide effective methods for training associative
memory networks.
• Hebbian learning emphasizes pattern retrieval based on similarity, allowing for the
completion of partial patterns.
• The Outer Product rule facilitates the mapping of distinct input patterns to specific output
patterns, suitable for tasks like classification.
Storage capacity
The storage capacity of associative memory networks refers to the maximum number of distinct
patterns that the network can reliably store and retrieve. This capacity is influenced by the
architecture of the network and the learning rules used. Let’s explore the storage capacity for both
auto-associative and hetero-associative memory, focusing on the Hebbian learning rule and the
Outer Product rule.

1. Auto-Associative Memory
Hebbian Learning Rule
Storage Capacity: The theoretical storage capacity of an auto-associative memory network, such
as a Hopfield network, is approximately:
Capacity ≈ 0.15 × N
where N is the number of neurons in the network. This implies that the number of distinct patterns a
network can reliably store is roughly 15% of its neuron count.
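For example, under this approximation a Hopfield network with 100 neurons can reliably store roughly 0.15 × 100 ≈ 15 random patterns before retrieval errors become common.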
Factors Influencing Capacity:
• Noise: The presence of noise in the input patterns can reduce the effective capacity.
• Correlation: Patterns that are highly correlated may interfere with each other, leading to a
decrease in retrieval accuracy.
• Network Size: Larger networks can typically store more patterns, but the ratio remains
roughly the same.

2. Hetero-Associative Memory
Outer Product Rule
Storage Capacity: For hetero-associative memory, the capacity can be significantly higher than
that of auto-associative networks, depending on the specific architecture. Theoretical limits suggest:
Capacity ≈ N
where N is the number of input neurons (the dimensionality of the input patterns). This means that,
ideally (for example, with mutually orthogonal inputs), the network can store about as many input-output associations as there are input neurons.
Factors Influencing Capacity:
• Input-Output Correlation: If the input patterns are highly correlated, it can lead to
confusion in output retrieval.
• Dimensionality of Patterns: The complexity and dimensionality of patterns affect how well
they can be distinguished during retrieval.

Practical Considerations
1. Overfitting:
• While networks can theoretically store a certain number of patterns, practical
applications often require fewer patterns to avoid overfitting and ensure
generalization.
2. Interference:
• When patterns are stored close to one another in the input space, the likelihood of
interference increases, which can reduce the effective storage capacity.
3. Training Methods:
• The method of training (e.g., batch vs. online learning) can also influence the
effective capacity of the memory network.
4. Testing Performance:
• Ultimately, the effectiveness of storage capacity is assessed through testing, which
evaluates how well the network retrieves stored patterns under various conditions.
Summary
• Auto-Associative Memory: Capacity is approximately 0.15N, constrained by interference
and noise.
• Hetero-Associative Memory: Capacity can be up to N, limited by input-output correlation
and the network design.
Understanding storage capacity is crucial for designing effective associative memory systems and
ensuring they can handle the intended tasks.
Testing of associative memory for missing and mistaken data
Testing associative memory systems for missing and mistaken data is critical for evaluating their
robustness and effectiveness in real-world scenarios. Here’s how testing can be approached for both
auto-associative and hetero-associative memory systems.

1. Auto-Associative Memory
Objective: Auto-associative memory is particularly suited for tasks where input patterns may be
incomplete or corrupted. The goal is to assess the network's ability to recover the original pattern
from a partial or noisy version.

Testing Procedure
1. Input Presentation:
• Present a partial or corrupted version of a stored pattern. For instance, if the original
pattern is P = [1,0,1,1], a test input might be P′ = [1,0,∗,1], where ∗ denotes a
missing or unknown value.
2. Activation Calculation:
• Calculate the output activations based on the weight matrix:
Y = W ⋅ P′
3. Output Recovery:
• Apply an activation function (like thresholding) to determine the final output. The
output should ideally resemble the original pattern, even with missing or mistaken
data.
4. Evaluation Metrics:
• Use metrics such as:
• Accuracy: Proportion of correctly retrieved elements.
• Error Rate: Number of incorrect recoveries versus the total number of
elements.
5. Noise Testing:
• Introduce different levels of noise (random bit flips in binary patterns) to evaluate how
well the network recovers the original patterns across varying degrees of corruption (a code sketch of this procedure follows the list).
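A minimal sketch of such a noise test is given below; it repeats the Hebbian store/recall helpers so the snippet runs on its own, and the pattern sizes, flip levels, and accuracy metric are illustrative choices.

```python
import numpy as np

# Sketch: measure how well an auto-associative memory recovers stored patterns
# as the fraction of corrupted bits grows. Bipolar (+1/-1) patterns assumed.

rng = np.random.default_rng(0)

def train_hebb(patterns):
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)
    return W

def recall(W, x, steps=10):
    y = x.copy()
    for _ in range(steps):
        y_new = np.where(W @ y >= 0, 1, -1)
        if np.array_equal(y_new, y):
            break
        y = y_new
    return y

def corrupt(p, flip_fraction):
    """Flip a random subset of bits to simulate missing or mistaken data."""
    noisy = p.copy()
    idx = rng.choice(len(p), size=int(round(flip_fraction * len(p))), replace=False)
    noisy[idx] *= -1
    return noisy

patterns = rng.choice([-1, 1], size=(3, 64))   # three random 64-bit patterns
W = train_hebb(patterns)
for flip in (0.1, 0.2, 0.3):
    accs = [np.mean(recall(W, corrupt(p, flip)) == p) for p in patterns]
    print(f"flip fraction {flip:.1f}: mean bitwise accuracy {np.mean(accs):.2f}")
```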
2. Hetero-Associative Memory
Objective: Hetero-associative memory systems are tested to ensure that they can retrieve the
correct output for a given input, even when the input is noisy or mistaken.

Testing Procedure
1. Input Presentation:
• Present an input pattern that is either a complete but slightly altered pattern or a
completely corrupted version. For instance, if the original input-output pair is
(X = [1,0,1], Y = [0,1]), an altered input might be X′ = [1,1,1].
2. Output Calculation:
• Calculate the output using the weight matrix:
Y′ = WT ⋅ X′
3. Comparison with Expected Output:


• Compare the retrieved output Y′ with the correct output Y. Determine how well the
network performed despite the input distortion.
4. Evaluation Metrics:
• Use metrics such as:
• Accuracy of Retrieval: How many outputs match the expected outputs.
• Confusion Matrix: For multi-class outputs, a confusion matrix can help
visualize where the network fails.
5. Robustness Testing:
• Introduce various types of distortions (random noise, shifted bits, etc.) to assess how
resilient the system is to input errors (a short code sketch follows the list).
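A minimal sketch of such a robustness check for a hetero-associative memory, assuming outer-product storage and WT retrieval as above; the pair counts and distortion level are illustrative.

```python
import numpy as np

# Sketch: check whether a hetero-associative memory still retrieves the correct
# output pattern when the input is distorted. Bipolar patterns assumed.

rng = np.random.default_rng(1)
n_in, n_out, n_pairs = 32, 8, 4

X = rng.choice([-1, 1], size=(n_pairs, n_in))     # stored input patterns
Y = rng.choice([-1, 1], size=(n_pairs, n_out))    # associated output patterns
W = sum(np.outer(x, y) for x, y in zip(X, Y))     # outer-product storage

correct = 0
for x, y in zip(X, Y):
    noisy = x.copy()
    idx = rng.choice(n_in, size=4, replace=False)  # flip 4 of the 32 input bits
    noisy[idx] *= -1
    y_hat = np.where(W.T @ noisy >= 0, 1, -1)      # retrieval with W^T
    correct += int(np.array_equal(y_hat, y))
print(f"retrieval accuracy: {correct}/{n_pairs}")
```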

Summary of Key Testing Strategies


• Input Variability: Test the network with both partial patterns and completely erroneous
inputs to evaluate robustness.
• Activation Thresholding: Use appropriate activation functions to ensure effective retrieval
from noisy inputs.
• Performance Metrics: Employ metrics such as accuracy, error rate, and confusion matrices
to quantify performance under different conditions.

Conclusion
Testing associative memory for missing and mistaken data is essential to ensure that the systems
can handle real-world scenarios where data is often incomplete or erroneous. By systematically
presenting distorted inputs and evaluating the recovery or output quality, we can assess the
reliability and robustness of both auto-associative and hetero-associative memory systems.
Bidirectional memory
Bidirectional memory refers to a type of associative memory architecture that can store and
retrieve information in both directions—meaning it can map input patterns to output patterns and
vice versa. This contrasts with traditional unidirectional networks, where information flows in one
direction only (from input to output).

Key Features of Bidirectional Memory


1. Two-way Associations:
• The memory can associate an input pattern with a corresponding output pattern, and
it can also retrieve the input pattern when given the output pattern.
2. Architecture:
• Typically consists of two layers of neurons: an input layer and an output layer. Each
layer is fully connected to the other, allowing for the bidirectional flow of
information.
3. Symmetric Weights:
• In many implementations, a single weight matrix connects the input and output layers
and is used in both directions (W one way and its transpose WT the other), facilitating
the retrieval process in both directions.

Types of Bidirectional Memory


1. Bidirectional Associative Memory (BAM):
• BAM is a specific architecture that consists of two layers (input and output) where:
• The input layer can be activated by an input pattern.
• The output layer can be activated in response to a corresponding output
pattern.
• The network is trained to establish associations between pairs of patterns.
2. Hebbian Bidirectional Memory:
• Uses Hebbian learning principles to adjust weights based on the co-activation of
input and output patterns, reinforcing connections that are frequently activated
together.

Training Algorithm
1. Weight Initialization:
• Initialize the weight matrix W between the input and output layers.
2. Input-Output Pair Presentation:
• For each training pair (Xp, Yp), where Xp is an input pattern and Yp is the
corresponding output pattern, update the weights as follows:
W = W + Xp ⋅ (Yp)T
(A single weight matrix serves both directions: WT maps input-layer activity to the output layer, and W maps output-layer activity back to the input layer.)
3. Repeat for All Pairs:
• Iterate through the training dataset until the weights stabilize.
Testing Algorithm
1. Forward Retrieval:
• Present an input pattern X and compute the output using the weight matrix:
Y′ = WT ⋅ X
2. Backward Retrieval:
• Present an output pattern Y and compute the input:
X′ = W ⋅ Y
3. Evaluation:
• Compare the retrieved patterns Y′ and X′ with the expected outputs to assess the
accuracy of the retrieval.
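The sketch below illustrates BAM training and two-way retrieval under the conventions above (W = Σp Xp ⋅ (Yp)T, bipolar patterns, sign thresholding); the example pairs and helper names are illustrative.

```python
import numpy as np

# Sketch of a Bidirectional Associative Memory (BAM): one weight matrix
# W = sum_p x_p y_p^T supports retrieval in both directions.
# Bipolar (+1/-1) pattern pairs are assumed.

def train_bam(X, Y):
    """Accumulate outer products of associated (input, output) pairs."""
    return sum(np.outer(x, y) for x, y in zip(X, Y)).astype(float)

def forward(W, x):
    """Input -> output direction: Y' = sign(W^T x)."""
    return np.where(W.T @ x >= 0, 1, -1)

def backward(W, y):
    """Output -> input direction: X' = sign(W y)."""
    return np.where(W @ y >= 0, 1, -1)

X = np.array([[ 1, -1,  1, -1,  1, -1],
              [ 1,  1, -1, -1,  1,  1]])
Y = np.array([[ 1, -1,  1],
              [-1,  1,  1]])
W = train_bam(X, Y)
print(forward(W, X[0]))    # expected: [ 1 -1  1]        (Y[0])
print(backward(W, Y[1]))   # expected: [ 1  1 -1 -1  1  1] (X[1])
```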

Applications
• Pattern Recognition: Used in various fields such as image recognition and natural language
processing, where bidirectional associations are beneficial.
• Translation Systems: Helpful in translating phrases between languages, associating words
in one language with their equivalents in another.
• Recommender Systems: Can be used to relate users to items they like, and vice versa.

Advantages
• Robustness: The ability to retrieve patterns in both directions increases robustness against
missing or corrupted data.
• Flexibility: Supports various applications where input-output relationships are non-linear or
complex.

Limitations
• Complexity: The architecture can become complex with large datasets and many patterns,
leading to challenges in training and retrieval.
• Memory Capacity: Similar to other associative memory systems, the capacity is limited,
often affected by noise and correlation between patterns.

Summary
Bidirectional memory enhances traditional associative memory systems by allowing two-way
associations between input and output patterns. It is particularly useful in applications that require
flexible pattern recognition and retrieval. Understanding its architecture, training, and retrieval
processes can help in designing effective memory systems for various tasks.
