Pattern Recognition
(21BR551)
MODULE 05 NOTES
Prepared by: Mr Thanmay J S, Assistant Professor, Bio-Medical & Robotics Engineering, UoM, SoE, Mysore 570006
Mysore University School of Engineering
8J99+QC7, Manasa Gangothiri, Mysuru, Karnataka 570006
5.0 Introduction to Artificial Neural Networks (ANNs)
Artificial Neural Networks (ANNs) are computational models inspired by the human brain. They are
composed of nodes (also called neurons) that are interconnected by links, each with an associated weight.
ANNs are used to model complex patterns, data relationships, and to solve tasks such as classification,
regression, pattern recognition, and optimization.
Key Components of ANNs:
1. Neurons (Nodes): Basic units that process information. Each neuron receives inputs, processes them,
and produces an output.
2. Layers: ANNs consist of multiple layers:
o Input layer: Takes input data.
o Hidden layers: Process inputs from the input layer and pass the results to the output layer.
o Output layer: Produces the final result (prediction or classification).
3. Weights: Each connection between neurons has a weight that determines the influence of the input on
the output.
4. Bias: An additional parameter added to the weighted sum before applying the activation function,
allowing the network to fit data better.
5. Activation Function: Determines whether a neuron fires or not. Common functions include the sigmoid, tanh (hyperbolic tangent), and ReLU (rectified linear unit).
6. Training: The weights of the network are adjusted during training to minimize the error, typically
using backpropagation and optimization algorithms like gradient descent.
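The components above can be combined into a minimal sketch of a single neuron in Python. The weights, inputs, and bias values here are illustrative assumptions, not taken from the notes; the sigmoid is used as the activation function.

```python
import math

def sigmoid(z):
    # Activation function: squashes the weighted sum into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias, passed through the activation
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# Example: a neuron with two inputs
# z = 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1, so the output is sigmoid(0.1) ≈ 0.525
print(neuron_output([1.0, 2.0], [0.5, -0.25], 0.1))
```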
5.1 Nets Without Hidden Layers
Key Features of Nets Without Hidden Layers:
1. Linear Transformation: A neural network without hidden layers applies a linear transformation to
the input data. It can be seen as a linear model that tries to map input data directly to output using
weights and biases.
2. Simple Function: These networks represent linear functions of the form y = W·x + b, where:
o x is the input vector,
o W is the weight matrix,
o b is the bias term,
o y is the output vector.
3. No Non-Linearities: Without hidden layers, the network lacks the ability to model complex non-linear
relationships. Essentially, the output is a weighted sum of the input features.
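A hedged sketch of such a net in Python: the function applies y = W·x + b directly, with no hidden layer and no activation. The example matrix W and bias vector b are made up for illustration.

```python
def linear_net(W, x, b):
    # y = W·x + b : each output is just a weighted sum of inputs plus a bias
    return [sum(W[i][j] * x[j] for j in range(len(x))) + b[i]
            for i in range(len(W))]

# Illustrative net: 2 inputs -> 2 outputs
W = [[1.0, 2.0],
     [0.0, -1.0]]
b = [0.5, 1.0]
# [1*3 + 2*4 + 0.5, 0*3 - 1*4 + 1] = [11.5, -3.0]
print(linear_net(W, [3.0, 4.0], b))
```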
5.2 Sequential MSE Algorithm
The Sequential MSE (Mean Squared Error) Algorithm is used for adjusting a model’s parameters (like
weights in a neural network) to minimize the error in predictions. The process works step by step, taking one
training example at a time and adjusting the model based on the error. The goal is to reduce the error by
iterating over the data multiple times.
Steps:
1. Pick a training example.
2. Calculate the prediction error (difference between predicted and actual values).
3. Adjust the model parameters using the error.
4. Repeat for each training example until the model improves.
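The steps above can be sketched as a sequential least-mean-squares update on a simple linear model y = w·x + b. The learning rate, epoch count, and toy data set are illustrative assumptions, not from the notes.

```python
def sequential_mse(samples, eta=0.1, epochs=200):
    # Linear model y = w*x + b, updated one training example at a time
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:          # step 1: pick a training example
            pred = w * x + b
            error = target - pred          # step 2: prediction error
            w += eta * error * x           # step 3: adjust parameters
            b += eta * error
    return w, b                            # step 4: repeat over the data

# Noiseless data generated from y = 2x + 1; the fit should approach w=2, b=1
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = sequential_mse(data)
```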
5.5 The Backpropagation Algorithm
Backpropagation is a key algorithm for training neural networks. It works by calculating the error at the
output and then propagating this error backward through the network to update the weights and minimize the
error.
Steps:
1. Forward pass: Input data is passed through the network, and predictions are made.
2. Calculate error: The error (difference between predicted and actual output) is calculated at the output
layer.
3. Backward pass: The error is propagated backward from the output layer to the input layer, using the
chain rule of calculus to compute gradients.
4. Update weights: Adjust the weights using the gradient descent method to minimize the error.
5. Repeat: This process is repeated for multiple epochs (iterations) over the training data until the
network learns.
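The five steps can be sketched for a tiny 2-input network with one hidden layer, trained on the XOR problem. The hidden-layer size, learning rate, epoch count, and random seed are illustrative choices, not from the notes; squared error and sigmoid units are assumed.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_xor(n_hidden=4, eta=0.5, epochs=10000, seed=0):
    rnd = random.Random(seed)
    # Weights: input(2) -> hidden(n_hidden) -> output(1), small random start
    w_h = [[rnd.uniform(-1, 1) for _ in range(2)] for _ in range(n_hidden)]
    b_h = [rnd.uniform(-1, 1) for _ in range(n_hidden)]
    w_o = [rnd.uniform(-1, 1) for _ in range(n_hidden)]
    b_o = rnd.uniform(-1, 1)
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

    def forward(x):
        h = [sigmoid(w_h[i][0] * x[0] + w_h[i][1] * x[1] + b_h[i])
             for i in range(n_hidden)]
        y = sigmoid(sum(w_o[i] * h[i] for i in range(n_hidden)) + b_o)
        return h, y

    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)                      # 1. forward pass
            delta_o = (y - t) * y * (1 - y)        # 2. error at the output
            # 3. backward pass: chain rule gives the hidden-layer deltas
            delta_h = [delta_o * w_o[i] * h[i] * (1 - h[i])
                       for i in range(n_hidden)]
            # 4. update weights by gradient descent
            for i in range(n_hidden):
                w_o[i] -= eta * delta_o * h[i]
                b_h[i] -= eta * delta_h[i]
                for j in range(2):
                    w_h[i][j] -= eta * delta_h[i] * x[j]
            b_o -= eta * delta_o                   # 5. repeat for many epochs
    return lambda x: forward(x)[1]

predict = train_xor()
```

After training, `predict` should return values near 0 for [0,0] and [1,1] and near 1 for [0,1] and [1,0], which a net without a hidden layer cannot achieve.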
5.7 Storage and Retrieval (with algorithm steps)
Storage and retrieval algorithms are used to store data patterns and retrieve them later, typically using a model such as a Hopfield network or an associative memory.
Steps:
1. Storage:
o Present data patterns to the network.
o The network updates its internal states to store these patterns.
2. Retrieval:
o Provide a noisy or partial input.
o The network computes the best match to the stored patterns.
o Return the closest matching pattern.
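For a Hopfield-style associative memory, both phases can be sketched as follows: Hebbian weight updates store ±1 patterns, and repeated sign updates retrieve the closest stored pattern from a corrupted probe. The example pattern and the number of update sweeps are illustrative assumptions.

```python
def store(patterns):
    # Storage (Hebbian rule): W[i][j] = sum over patterns of p[i]*p[j],
    # with no self-connections (zero diagonal)
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def retrieve(W, probe, steps=10):
    # Retrieval: repeatedly set every unit to the sign of its weighted input,
    # so the state settles onto the closest stored pattern
    s = list(probe)
    n = len(s)
    for _ in range(steps):
        for i in range(n):
            total = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if total >= 0 else -1
    return s

# Store one ±1 pattern, then recall it from a copy with one flipped bit
pattern = [1, -1, 1, -1, 1, -1]
W = store([pattern])
noisy = [1, -1, -1, -1, 1, -1]
print(retrieve(W, noisy))   # recovers [1, -1, 1, -1, 1, -1]
```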
5.10 VC Dimension
The VC (Vapnik-Chervonenkis) Dimension measures the capacity of a model to learn different patterns. It
tells you the largest number of points a model can shatter (i.e., perfectly separate into different classes). A high
VC dimension means the model is complex and can handle more diverse patterns, but it also risks overfitting.
Example: The VC dimension (h) of a set of functions F is defined as the largest number of points that can be shattered by F; here, "shattered" means classified/partitioned into every possible labeling. The VC dimension thus describes the complexity of the model.
[Figures: shattering examples for 2-class and 3-class classification]
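As a concrete sketch of shattering: 1-D threshold classifiers h_t(x) = 1 if x ≥ t, else 0, can shatter any single point but no pair of points (the labeling "left point 1, right point 0" is unrealizable), so their VC dimension is 1. The check below enumerates all labelings; the sample points and candidate thresholds are illustrative.

```python
from itertools import product

def shatters(points, thresholds):
    # The set of points is shattered if every possible 0/1 labeling is
    # realized by some threshold classifier h_t(x) = 1 if x >= t else 0
    for labeling in product([0, 1], repeat=len(points)):
        if not any(all((1 if x >= t else 0) == lab
                       for x, lab in zip(points, labeling))
                   for t in thresholds):
            return False
    return True

# Candidate thresholds covering all distinct behaviors on these points
ts = [-1.0, 0.5, 2.0]
print(shatters([0.0], ts))        # True: one point can be shattered
print(shatters([0.0, 1.0], ts))   # False: labeling (1, 0) is unrealizable
```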
Model Questions
3 to 5 Marks Questions
1) Explain artificial neural networks and their main components.
2) Explain the sequential MSE algorithm.
3) Explain, with a neat sketch, nets with hidden layers.
4) Explain the backpropagation algorithm.
5) Describe Hopfield nets with a neat sketch.
6) Explain the loop in storage and retrieval algorithms.
7) What are the significant features of support vector machines?
8) Explain risk minimization principles.
9) Explain the concept of uniform convergence.
10) Describe the VC dimension for 2-class and 3-class classification.
8 to 10 Marks Questions
1) Describe nets without hidden layers with any two examples.
Solved problems on the steepest descent method:
2) Question: Solve the function with a minimum of 3 iterations using the steepest descent method.
----------------------------------------------------------------------------------------------------------------------------------