Unit I
Soft Computing
A8711
Syllabus Details
Introduction: Computing Methods, Fundamentals of Artificial Neural Network: Model of Biological Neuron, Mathematical Model of Neuron, ANN Architecture, Learning Rules, Learning Paradigms, Perceptron Network, Adaline and Madaline Networks, Applications of Neural Network, Associative Memory.
Text Books:
B. K. Tripathy and J. Anuradha, Soft Computing: Advances and Applications, Cengage Learning, 2015.
Reference Books:
J. S. R. Jang, C. T. Sun and E. Mizutani, Neuro-Fuzzy and Soft Computing, Pearson Education, 2004.
N. P. Padhy, Artificial Intelligence and Intelligent Systems, Oxford University Press, 2005.
, Himalaya Publications, 2006
Computing
Hard Computing
Hard computing solves problems using traditional mathematical methods, such as precise algorithms and mathematical models.
It is based on deterministic and precise calculations
and is ideal for solving problems that have well-
defined mathematical solutions.
The term, hard computing, was coined by Dr Lotfi Zadeh.
Here ‘hard’ has nothing to do with hardware.
The principles of hard computing are precision, certainty
and rigor.
Hard computing is achieved using sequential programs that
use binary logic. It is deterministic in nature. The input data
should be exact and the output will be precise and
verifiable.
Hard Computing
Advantages
Accurate solutions can be obtained
Faster
Disadvantages
Not suitable for real world problems
Cannot handle imprecision and partial truth
Soft Computing
Soft Computing refers to a consortium of computational methodologies, such as fuzzy logic, neural networks, and genetic algorithms, all having their roots in Artificial Intelligence.
Artificial Intelligence is an area of computer
science concerned with designing intelligent
computer systems.
Systems that exhibit the characteristics we
associate with intelligence in human behavior.
Soft Computing
Soft Computing was introduced by Lotfi A. Zadeh of the University of California, Berkeley, U.S.A.
Soft computing differs from hard computing in its tolerance to imprecision, uncertainty and partial truth.
Soft Computing is the fusion of methodologies
designed to model and enable solutions to real world
problems, which are not modeled or too difficult to
model mathematically.
The aim of Soft Computing is to exploit the tolerance
for imprecision, uncertainty, approximate reasoning,
and partial truth in order to achieve close
resemblance with human like decision making.
Soft Computing
The Soft Computing consists of several computing
paradigms mainly :
Fuzzy Systems, Neural Networks, and Genetic
Algorithms.
Fuzzy set : for knowledge representation via fuzzy
If – Then rules.
Neural Networks : for learning and adaptation
Genetic Algorithms : for evolutionary computation
Soft Computing
Its aim is to exploit the tolerance for approximation, uncertainty, imprecision, and partial truth in order to achieve close resemblance with human-like decision making.
Approximation : here the model features are similar to
the real ones, but not the same.
Uncertainty : here we are not sure that the features of
the model are the same as that of the entity (belief).
Imprecision : here the model features (quantities) are
not the same as that of the real ones, but close to them.
Soft Computing
Rough Sets: a mathematical approach to vagueness that approximates a set by its lower and upper approximations.
Introduction to Neural Network
As we have noted, a glimpse into the natural world reveals that
even a small child is able to do numerous tasks at once.
Take the example of a child walking: the first time the child
sees an obstacle, he or she may not know what to do.
But afterward, whenever the child meets an obstacle, he or she
simply takes another route.
It is natural for people to both appreciate the observations and
learn from them.
An intensive study by people coming from multidisciplinary fields
marked the evolution of what we call the artificial neural network
(ANN).
ANNs: How They Operate
[Figure: neurons joined by weighted connections (w), mapping inputs (Input 1–3) to outputs (Output 1–2); connection strengths correspond to weights.]
Biological neuron vs. artificial neuron:
Dendrites → Inputs
Synapse → Weights
Axon → Output
McCulloch-Pitts neuron model
The inputs are marked X1, X2, X3, … , Xn; the weights
associated with each connection are given by W1, W2, W3, … ,
Wn; b denotes the bias; and the output is denoted by O.
Because there is one weight for every input, the number of
inputs is equal to the number of weights in a neuron.
The Processing of the Neuron
The summation forms the input to the next block. This is the block
of the activation function, where the input is made to pass through
a function called the activation function.
The Activation Function
• Calculate the output for a neuron. The inputs are (0.10, 0.90, 0.05), and the
corresponding weights are (2, 5, 2). Bias is given to be 1. The activation
function is linear. Also draw the neuron architecture.
Solution:
Using Equation, we have
f(x)=x
Hence the neuron’s output is
f(W1 * I1 + W2 * I2 + W3 * I3+b)
= f(2 * 0.1 + 5 * 0.9 + 2 * 0.05 +1)
= f(0.2 + 4.5 + 0.1+1)
= f(5.8)
=5.8
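The worked example above can be sketched in a few lines of Python; the function name `neuron_output` is illustrative, not from the text:

```python
# A minimal sketch of the worked example above: one neuron with a
# linear activation f(x) = x, inputs (0.10, 0.90, 0.05), weights
# (2, 5, 2), and bias 1.

def neuron_output(inputs, weights, bias, activation):
    # Weighted sum of inputs plus bias, passed through the activation.
    s = sum(w * i for w, i in zip(weights, inputs)) + bias
    return activation(s)

out = neuron_output([0.10, 0.90, 0.05], [2, 5, 2], 1, lambda x: x)
print(out)  # ≈ 5.8
```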
Example
• Calculate the output for a neuron. The inputs are (0.10, 0.90, 0.05), and the
corresponding weights are (2, 5, 2). Bias is given to be 0. The activation
function is logistic. Also draw the neuron architecture.
Solution:
Using Equation, we have
f(x) = 1/(1 + e^(-x))
Hence the neuron's output is
f(W1 * I1 + W2 * I2 + W3 * I3)
= f(2 * 0.1 + 5 * 0.9 + 2 * 0.05)
= f(4.8)
= 1/(1 + e^(-4.8))
≈ 0.992
Example
• Calculate the output for a neuron. The inputs are (0.10, 0.90, 0.05), and the
corresponding weights are (2, 5, 2). Bias is given to be 0. The activation
function is Binary. Also draw the neuron architecture.
Solution:
Using Equation, we have
f(W1 * I1 + W2 * I2 + W3 * I3)
= f(2 * 0.1 + 5 * 0.9 + 2 * 0.05)
= f(0.2 + 4.5 + 0.1)
= f(4.8)
=1
Example
• Calculate the output for a neuron. The inputs are (0.10, 0.90, 0.05), and the
corresponding weights are (2, 5, 2). Bias is given to be 0. The activation
function is bipolar. Also draw the neuron architecture.
Solution:
Using Equation, we have
f(W1 * I1 + W2 * I2 + W3 * I3)
= f(2 * 0.1 + 5 * 0.9 + 2 * 0.05)
= f(0.2 + 4.5 + 0.1)
= f(4.8)
=1
Example
• Calculate the output for a neuron. The inputs are (0.10, 0.90, 0.05), and the
corresponding weights are (2, 5, 2). Bias is given to be 0. The activation
function is signum. Also draw the neuron architecture.
Solution:
Using Equation, we have
f(W1 * I1 + W2 * I2 + W3 * I3)
= f(2 * 0.1 + 5 * 0.9 + 2 * 0.05)
= f(0.2 + 4.5 + 0.1)
= f(4.8)
=1
Example
• Calculate the output for a neuron. The inputs are (0.10, 0.90, 0.05), and the
corresponding weights are (2, 5, 2). Bias is given to be 0. The activation
function is ReLU. Also draw the neuron architecture.
Solution:
Using Equation, we have
f(x)=max(0,x)
Hence the neuron’s output is
f(W1 * I1 + W2 * I2 + W3 * I3)
= f(2 * 0.1 + 5 * 0.9 + 2 * 0.05)
= f(0.2 + 4.5 + 0.1)
= f(4.8)
=4.8
Example
• Calculate the output for a neuron. The inputs are (0.10, 0.10, 0.2), and the
corresponding weights are (2, -6, 2). Bias is given to be 0. The activation
function is sigmoid. Also draw the neuron architecture.
Solution:
Using Equation, we have
f(x)=sigmoid
Hence the neuron’s output is
f(W1 * I1 + W2 * I2 + W3 * I3)
= f(0) = 1/(1 + e^(-0)) = 1/(1 + 1) = 0.5
Example
• Calculate the output for a neuron. The inputs are (0.10, 0.10, 0.2), and the
corresponding weights are (2, -6, 2). Bias is given to be 0. The activation
function is hyperbolic. Also draw the neuron architecture.
Solution:
Using Equation, we have
f(x)=Hyperbolic
Hence the neuron’s output is
f(W1 * I1 + W2 * I2 + W3 * I3)
= f(0) = (e^0 - e^(-0))/(e^0 + e^(-0)) = (1 - 1)/(1 + 1) = 0/2 = 0
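The activation functions used across these examples can be collected into one small Python sketch. The thresholding conventions (e.g. binary and bipolar switching at 0) are assumptions where the text does not state them explicitly:

```python
import math

def net(inputs, weights, bias=0.0):
    # Weighted sum plus bias -- the input to the activation function.
    return sum(w * i for w, i in zip(weights, inputs)) + bias

linear   = lambda x: x
logistic = lambda x: 1 / (1 + math.exp(-x))        # also called sigmoid
binary   = lambda x: 1 if x >= 0 else 0
bipolar  = lambda x: 1 if x >= 0 else -1
signum   = lambda x: 1 if x > 0 else (-1 if x < 0 else 0)
relu     = lambda x: max(0.0, x)
tanh_    = math.tanh                                # hyperbolic tangent

s1 = net([0.10, 0.90, 0.05], [2, 5, 2])     # ≈ 4.8
print(binary(s1), bipolar(s1), signum(s1))  # 1 1 1
print(relu(s1))                             # ≈ 4.8

s2 = net([0.10, 0.10, 0.2], [2, -6, 2])     # ≈ 0
print(logistic(s2), tanh_(s2))              # ≈ 0.5, ≈ 0.0
```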
Classification of ANNs
A network topology is the arrangement of a network along with
its nodes and connecting lines. According to the topology, ANN
can be classified as the following kinds −
Feedforward Network:
It is a non-recurrent network having processing units/nodes
arranged in layers, where all the nodes in a layer are connected
with the nodes of the previous layer.
The connections carry different weights.
There is no feedback loop, which means the signal can flow in
only one direction, from input to output.
It may be divided into the following two types −
1. Single layer feedforward network
2. Multilayer feedforward network
Classification of ANNs
Single layer feedforward network:
The concept is of feedforward ANN having only one
weighted layer.
In other words, we can say the input layer is fully
connected to the output layer.
Classification of ANNs
Multilayer feedforward network:
The concept is of feedforward ANN having more than one
weighted layer.
As this network has one or more layers between the input
and the output layer, these are called hidden layers.
Classification of ANNs
Feedback Network:
As the name suggests, a feedback network has feedback
paths, which means the signal can flow in both directions
using loops.
This makes it a non-linear dynamic system, which
changes continuously until it reaches a state of equilibrium.
x1*w1+x2*w2=sum
0*1+0*1=0
0*1+1*1=1
1*1+0*1=1
1*1+1*1=2
McCulloch-Pitts neuron model
x1*w1+x2*w2=sum
0*2+0*2=0
0*2+1*2=2
1*2+0*2=2
1*2+1*2=4
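The two truth tables above can be reproduced with a small McCulloch-Pitts neuron sketch. The thresholds are assumptions (the text lists only the weighted sums), chosen so the neuron realizes logical AND:

```python
# McCulloch-Pitts neuron: fires (outputs 1) when the weighted sum of
# its binary inputs reaches a fixed threshold. With w1 = w2 = 1 and
# threshold 2 (or w1 = w2 = 2 and threshold 4), only input (1, 1)
# fires -- the AND function.

def mp_neuron(x1, x2, w1, w2, threshold):
    s = x1 * w1 + x2 * w2
    return 1 if s >= threshold else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, mp_neuron(x1, x2, 1, 1, 2))  # 1 only for (1, 1)
```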
Q&A
Back propagation Algorithm
Example
Input values
X1=0.05 X2=0.10
Initial weight
W1=0.15 w5=0.40
W2=0.20 w6=0.45
W3=0.25 w7=0.50
W4=0.30 w8=0.55
Bias Values
b1=0.35 b2=0.60
Target Values
T1=0.01 T2=0.99
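With the values above, the forward pass of backpropagation can be sketched as follows. The layout is an assumption consistent with the usual presentation of this example: w1–w4 feed the two hidden units, w5–w8 feed the two output units, and all units use the logistic activation:

```python
import math

sigmoid = lambda x: 1 / (1 + math.exp(-x))

x1, x2 = 0.05, 0.10
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b1, b2 = 0.35, 0.60
t1, t2 = 0.01, 0.99

# Forward pass: hidden layer
h1 = sigmoid(w1 * x1 + w2 * x2 + b1)
h2 = sigmoid(w3 * x1 + w4 * x2 + b1)
# Forward pass: output layer
o1 = sigmoid(w5 * h1 + w6 * h2 + b2)
o2 = sigmoid(w7 * h1 + w8 * h2 + b2)
# Total squared error that the backward pass will reduce
E = 0.5 * (t1 - o1) ** 2 + 0.5 * (t2 - o2) ** 2

print(round(o1, 4), round(o2, 4), round(E, 4))  # ≈ 0.7514 0.7729 0.2984
```

The backward pass would then propagate the error derivatives through each layer to update every weight, one forward-backward cycle per training example.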
Assignment
ADALINE MODEL OF ANN
ADALINE (Adaptive Linear Neuron or later Adaptive Linear Element) is
an early single-layer artificial neural network and the name of the
physical device that implemented this network.
The network uses memistors.
It was developed by Professor Bernard Widrow and his graduate
student Ted Hoff at Stanford University in 1960.
It is based on the McCulloch–Pitts neuron. It consists of a weight, a
bias and a summation function.
The difference between Adaline and the standard (McCulloch–Pitts)
perceptron is that in the learning phase, the weights are adjusted
according to the weighted sum of the inputs (the net).
In the standard perceptron, the net is passed to the activation
(transfer) function and the function's output is used for adjusting the
weights.
ADALINE MODEL OF ANN
The basic structure of Adaline is similar to the perceptron, with an extra
feedback loop with the help of which the actual output is compared
with the desired/target output. After comparison, on the basis of the
training algorithm, the weights and bias are updated.
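The distinction the text draws, adjusting weights from the net (weighted sum) rather than from the activation output, is the delta (LMS) rule. A minimal training sketch follows; the learning rate, epoch count, and bipolar AND data are illustrative assumptions:

```python
# Adaline training with the delta (LMS) rule: the error is computed
# on the net (weighted sum), not on the thresholded output.
# Learning rate, epochs, and the bipolar AND data are illustrative.

def train_adaline(samples, lr=0.1, epochs=50):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            net = w[0] * x[0] + w[1] * x[1] + b
            err = target - net          # error on the net itself
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

data = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
w, b = train_adaline(data)

def predict(x):
    # The bipolar threshold is applied only after training.
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

print([predict(x) for x, _ in data])  # [-1, -1, -1, 1]
```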
ADALINE MODEL OF ANN
Architecture of MADALINE
MADALINE (Many Adalines) is a network of multiple Adaline units arranged in layers, whose outputs are combined by a further unit; unlike a single Adaline, it can solve linearly non-separable problems such as XOR.
Applications of ANN
1. Social Media: Artificial Neural Networks are used heavily in social media. For
example, let's take the 'People you may know' feature on Facebook that suggests
people you might know in real life so that you can send them friend requests.
Well, this magical effect is achieved by using Artificial Neural Networks that analyze
your profile, your interests, your current friends, and also their friends and various
other factors to calculate the people you might potentially know. Another common
application of Machine Learning in social media is facial recognition. This is done by
finding around 100 reference points on the person's face and then matching them
with those already available in the database using convolutional neural networks.
2. Marketing and Sales: When you log onto E-commerce sites like Amazon and
Flipkart, they will recommend products to buy based on your previous browsing
history. Similarly, suppose you love pasta; then Zomato, Swiggy, etc. will show you
restaurant recommendations based on your tastes and previous order history. This is
true across all new-age marketing segments like book sites, movie services,
hospitality sites, etc., and it is done by implementing personalized marketing. This
uses Artificial Neural Networks to identify the customer likes, dislikes, previous
shopping history, etc., and then tailor the marketing campaigns accordingly.
Applications of ANN
3. Healthcare: Artificial Neural Networks are used in Oncology to train algorithms
that can identify cancerous tissue at the microscopic level at the same accuracy as
trained physicians. Various rare diseases may manifest in physical characteristics
and can be identified in their premature stages by using Facial Analysis on the
patient photos. So the full-scale implementation of Artificial Neural Networks in the
healthcare environment can only enhance the diagnostic abilities of medical experts
and ultimately lead to the overall improvement in the quality of medical care all over
the world.
4. Personal Assistants: I am sure you have all heard of Siri, Alexa, Cortana, etc.,
and have probably used them on your phones! These are personal assistants
and an example of speech recognition that uses Natural Language Processing to
interact with the users and formulate a response accordingly. Natural Language
Processing uses artificial neural networks that are made to handle many tasks of
these personal assistants, such as managing the language syntax, semantics,
correct speech, and the conversation that is going on.
Applications of ANN
Associative Memory
Associative memory is also known as content addressable
memory (CAM) or associative storage or associative array. It is
a special type of memory that is optimized for performing
searches through data, as opposed to providing a simple direct
access to the data based on the address.
It can store a set of patterns as memories. When the associative
memory is presented with a key pattern, it responds by producing
the stored pattern that most closely resembles or relates to the
key pattern.
It can be viewed as data correlation: the input data is
correlated with the stored data in the CAM.
It is of two types:
1. Auto-associative memory network
2. Hetero-associative memory network
Auto Associative Memory
auto associative memory network :
An auto-associative memory network, also known as a
recurrent neural network, is a type of associative memory that is
used to recall a pattern from partial or degraded inputs.
In an auto-associative network, the output of the network is
fed back into the input, allowing the network to learn and
remember the patterns it has been trained on.
This type of memory network is commonly used in
applications such as speech and image recognition, where the
input data may be incomplete or noisy.
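Recall from a corrupted input can be illustrated with a Hebbian (outer-product) weight matrix; the stored pattern and the flipped bit are illustrative:

```python
# Auto-associative recall sketch: store one bipolar pattern in a
# Hebbian outer-product weight matrix (zero diagonal), then recover
# it from a copy with one flipped bit.

def sign(v):
    return 1 if v >= 0 else -1

stored = [1, -1, 1, -1, 1, -1]
n = len(stored)
W = [[0 if i == j else stored[i] * stored[j] for j in range(n)]
     for i in range(n)]

noisy = [1, -1, -1, -1, 1, -1]   # third bit flipped
recalled = [sign(sum(W[i][j] * noisy[j] for j in range(n)))
            for i in range(n)]
print(recalled == stored)  # True
```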
Auto Associative Memory
An AANN contains a five-layer feed-forward perceptron network,
which can be divided into two three-layer neural networks
connected in series (similar to an autoencoder architecture).
The network consists of an input layer followed by a hidden
layer and bottleneck layer.
This bottleneck layer is common to both networks and is a key
component of the architecture.
It provides data compression to the input and topology with
powerful feature extraction capabilities.
The bottleneck layer is followed by a second non-linear
hidden layer and the output layer of the second network.
The first network compresses the information of the n-dimensional
vector to smaller dimension vectors that contain a smaller number of
characteristic variables and represent the whole process.
The second network works opposite to the first and uses the
compressed information to regenerate the original n redundant
variables.
Auto Associative Memory
Hetero Associative Memory
A hetero-associative memory network is a type of
associative memory that is used to associate one set
of patterns with another.
In a hetero-associative network, the input pattern is
associated with a different output pattern, allowing the
network to learn and remember the associations
between the two sets of patterns.
This type of memory network is commonly used in
applications such as data compression and data
retrieval.
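The input-to-output association can be sketched with an outer-product weight matrix; the two bipolar pattern pairs (chosen orthogonal so recall is exact) are illustrative:

```python
# Hetero-associative memory sketch: a Hebbian weight matrix maps
# 4-bit bipolar input patterns to 2-bit output patterns.

def sign(v):
    return 1 if v >= 0 else -1

pairs = [
    ([1, -1, 1, -1], [1, -1]),
    ([-1, -1, 1, 1], [-1, 1]),
]
n, m = 4, 2
# W[i][j] accumulates x_i * y_j over all stored pairs.
W = [[sum(x[i] * y[j] for x, y in pairs) for j in range(m)]
     for i in range(n)]

def recall(x):
    return [sign(sum(x[i] * W[i][j] for i in range(n))) for j in range(m)]

print(recall([1, -1, 1, -1]))   # [1, -1]
print(recall([-1, -1, 1, 1]))   # [-1, 1]
```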
Hetero Associative Memory
Bidirectional Associative Network
Bidirectional Associative Memory (BAM) is a
supervised learning model in Artificial Neural Network.
This is a hetero-associative memory: for an input
pattern, it returns another pattern, which is potentially
of a different size. This phenomenon is very similar to
the human brain. Human memory is necessarily
associative.
It uses a chain of mental associations to recover a
lost memory like associations of faces with names, in
exam questions with answers, etc.
In such memory associations for one type of object
with another, a Recurrent Neural Network (RNN) is
needed to receive a pattern of one set of neurons as
an input and generate a related, but different, output
pattern of another set of neurons.
Bidirectional Associative Network
Why BAM is required? The main objective to
introduce such a network model is to store hetero-
associative pattern pairs. This is used to retrieve a
pattern given a noisy or incomplete pattern.
BAM Architecture: When BAM accepts an input of n-
dimensional vector X from set A then the model
recalls m-dimensional vector Y from set B. Similarly
when Y is treated as input, the BAM recalls X.
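The two-way recall described above can be sketched with a single weight matrix W: presenting X retrieves Y through W, and presenting Y retrieves X through its transpose. The patterns are illustrative:

```python
# BAM recall sketch for one stored bipolar pair (X, Y) of different
# sizes, using the outer-product weight matrix W[i][j] = X[i] * Y[j].

def sign(v):
    return 1 if v >= 0 else -1

X = [1, -1, 1, -1]   # n-dimensional pattern from set A
Y = [1, 1, -1]       # m-dimensional pattern from set B
W = [[X[i] * Y[j] for j in range(len(Y))] for i in range(len(X))]

# Forward recall: present X, retrieve Y.
y_out = [sign(sum(X[i] * W[i][j] for i in range(len(X))))
         for j in range(len(Y))]
# Backward recall: present Y, retrieve X via the transpose of W.
x_out = [sign(sum(Y[j] * W[i][j] for j in range(len(Y))))
         for i in range(len(X))]

print(y_out == Y, x_out == X)  # True True
```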
Bidirectional Associative Network
Organization of associative memory
Argument Register: It contains words to be searched.
It contains ‘n’ number of bits.
Match Register: It has m bits, one bit corresponding
to each word in the memory array. After the matching
process, the bits corresponding to matching words in the
match register are set to '1'.
Key Register: It provides a mask of choosing a
particular field/key in argument register. It specifies
which part of the argument word need to be
compared with words in memory.
Associative Memory Array: It contains the words that
are to be compared with the argument word in
parallel. It contains 'm' words with 'n' bits per word.
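The interaction of these registers can be simulated in a few lines; the word width, stored words, argument, and key mask are illustrative:

```python
# Content-addressable search sketch: the key register masks which
# bits of the argument participate in the comparison; the match
# register gets one bit per stored word.

memory = [0b1010, 0b1100, 0b1011, 0b0010]   # m = 4 words, n = 4 bits
argument = 0b1010                            # word to search for
key = 0b1110                                 # compare only the top 3 bits

match = [int((word & key) == (argument & key)) for word in memory]
print(match)  # [1, 0, 1, 0]
```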
Applications of Associative Memory
It is used in memory allocation.
It is widely used in the database management systems, etc.
Networking: Associative memory is used in network routing
tables to quickly find the path to a destination network based
on its address.
Image processing: Associative memory is used in image
processing applications to search for specific features or
patterns within an image.
Artificial intelligence: Associative memory is used in artificial
intelligence applications such as expert systems and pattern
recognition.
Database management: Associative memory can be used in
database management systems to quickly retrieve data based
on its content.
12/12/14 CSE510, Lovely Professional University, Phagwara