08 09 23 Soft Computing - ANN - PPT
Hard Computing vs Soft Computing
1. Hard computing is based on binary logic, crisp systems, numerical analysis and crisp software; soft computing is based on fuzzy logic, neural nets and probabilistic reasoning.
2. Hard computing requires exact input data; soft computing can deal with ambiguous and noisy data.
3. Hard computing is strictly sequential; soft computing allows parallel computations.
Brain vs Computer
1. The brain is analogue; a computer is digital.
2. The brain uses content-addressable memory; a computer uses byte-addressable memory.
3. The brain works through a massively parallel mechanism; a computer follows a serial and modular approach.
4. The brain's processing speed is not fixed (there is no system clock); a computer's processing speed is fixed.
5. Synapses are far more complex than logic gates; a computer's architecture is much simpler than the brain's.
6. The brain is much bigger than any computer; a computer is not so big.
1989: Yann LeCun published a paper illustrating how imposing constraints in backpropagation and integrating it into the neural network architecture can be used to train algorithms. This research successfully leveraged a neural network to recognize handwritten ZIP code digits provided by the U.S. Postal Service.
(ii) A summation junction sums the input signals, each weighted by its respective synaptic weight. Because it is a linear combiner (adder) of the weighted input signals, the output of the summation junction can be expressed as
$u_k = \sum_{j=1}^{n} w_{kj} x_j$
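A minimal numeric sketch of this linear combiner; the input and weight values below are assumed purely for illustration:

```python
import numpy as np

# Summation junction (linear combiner): u_k = sum_j w_kj * x_j
x = np.array([0.5, -1.0, 2.0])   # example inputs x1, x2, x3 (assumed)
w = np.array([0.2, 0.4, 0.1])    # synaptic weights w_k1, w_k2, w_k3 (assumed)
u_k = np.dot(w, x)               # weighted sum at the summation junction
print(u_k)                       # 0.2*0.5 + 0.4*(-1.0) + 0.1*2.0 = -0.1
```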
(a) Binary Sigmoid Function
• A binary sigmoid function is of the form
$f(x) = \frac{1}{1 + e^{-kx}}$
where k is the steepness (slope) parameter. By varying the value of k, sigmoid functions with different slopes can be obtained. Its range is (0, 1), and the slope at the origin is k/4. As the value of k becomes very large, the sigmoid function becomes a threshold function.
(b) Bipolar Sigmoid Function
• A bipolar sigmoid function is of the form
$f(x) = \frac{1 - e^{-kx}}{1 + e^{-kx}}$
• The range of values of sigmoid functions can be varied depending on the application. However, the range of (−1, +1) is most commonly adopted.
(v) Hyperbolic Tangent Function
• It is bipolar in nature. It is a widely adopted activation function for a special type of neural network known as the Backpropagation Network. The hyperbolic tangent function is of the form
$f(x) = \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$
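A quick numeric sketch of the three activation functions above; the sample points and default steepness k = 1 are illustrative assumptions:

```python
import numpy as np

def binary_sigmoid(x, k=1.0):
    # range (0, 1); slope at the origin is k/4
    return 1.0 / (1.0 + np.exp(-k * x))

def bipolar_sigmoid(x, k=1.0):
    # range (-1, +1)
    return (1.0 - np.exp(-k * x)) / (1.0 + np.exp(-k * x))

def hyperbolic_tangent(x):
    # range (-1, +1); equal to the bipolar sigmoid with k = 2
    return np.tanh(x)

x = np.linspace(-5, 5, 11)
print(binary_sigmoid(x, k=2.0))   # larger k gives a steeper sigmoid
print(bipolar_sigmoid(x))
print(hyperbolic_tangent(x))
```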
• Here, x1, x2, · · · , xn are the n inputs to the artificial neuron. w1, w2, · · · , wn are
weights attached to the input links.
• Note that, a biological neuron receives all inputs through the dendrites, sums
them and produces an output if the sum is greater than a threshold value.
• The input signals are passed on to the cell body through the synapse, which
may accelerate or retard an arriving signal.
• It is this acceleration or retardation of the input signals that is modeled by the
weights.
• An effective synapse, which transmits a stronger signal, will have a correspondingly larger weight, while a weak synapse will have a smaller weight.
• Thus, weights here are multiplicative factors of the inputs to account for the
strength of the synapse.
• So, the value of each of the two inputs X1 and X2 can be either 0 or 1. We can set both weights to 1 and the threshold to 1, so that the neuron outputs 1 whenever the weighted sum is at least 1. The neural network model then behaves as follows:
Case  X1  X2  Sum  Output
1     0   0   0    0
2     0   1   1    1
3     1   0   1    1
4     1   1   2    1
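A minimal sketch of this neuron, with weights w1 = w2 = 1 and threshold 1 as stated above (it realizes the OR function):

```python
def mp_neuron(x1, x2, w1=1, w2=1, threshold=1):
    s = w1 * x1 + w2 * x2                # weighted sum of the inputs
    return 1 if s >= threshold else 0    # threshold activation

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, mp_neuron(x1, x2))  # reproduces the table above
```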
Introduction to ANN
MCQ
1) An ANN learns quickly if η, the learning rate, assumes the following value(s).
(a) η = 1
(b) η < 1
(c) η > 1
(d) η = 0
Answer: (c) η > 1
2) Which of the following is true for neural networks?
i. The training time depends on the size of the network.
ii. Neural networks can be simulated on a conventional computer.
iii. Artificial neurons are identical in operation to biological ones.
(a) i and ii are true
(b) i and iii are true
(c) ii is true.
(d) all of them are true
Answer: (a) i and ii are true (artificial neurons are simplified models and are not identical in operation to biological ones)
A stochastic policy requires learning a probability distribution over actions, which can be challenging. On the other hand, a deterministic policy only requires selecting the best action for each state, which is relatively easy.
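A small sketch of the distinction; the state names, actions, and probabilities are hypothetical:

```python
import random

# A deterministic policy maps each state to exactly one action.
deterministic_policy = {"s1": "left", "s2": "right"}

# A stochastic policy maps each state to a distribution over actions.
stochastic_policy = {"s1": {"left": 0.8, "right": 0.2}}

def sample_action(state):
    probs = stochastic_policy[state]
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(deterministic_policy["s1"], sample_action("s1"))
```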
The XOR function is not linearly separable, which means we cannot draw a
single straight line to separate the inputs that yield different outputs.
The XOR problem can be solved by a Multi-Layer Perceptron: a neural network architecture with an input layer, a hidden layer, and an output layer. During training, forward propagation computes each layer's output, and backpropagation updates the weights of the corresponding layers until the network reproduces the XOR logic.
The neural network architecture to solve the XOR problem is shown below.
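As a minimal sketch, assuming a 2-2-1 topology (two inputs, two hidden neurons, one output), sigmoid activations, and an illustrative learning rate and random initialization not given in the slides:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # XOR inputs
T = np.array([[0], [1], [1], [0]])               # XOR targets

W1 = rng.normal(size=(2, 2))   # input -> hidden weights
b1 = np.zeros(2)
W2 = rng.normal(size=(2, 1))   # hidden -> output weights
b2 = np.zeros(1)
eta = 0.5                      # assumed learning rate

for epoch in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)   # hidden-layer activations
    y = sigmoid(h @ W2 + b2)   # network output
    # backward pass: gradients of the squared error via the chain rule
    delta_out = (y - T) * y * (1 - y)             # output-layer error term
    delta_hid = (delta_out @ W2.T) * h * (1 - h)  # hidden-layer error term
    # weight updates
    W2 -= eta * h.T @ delta_out
    b2 -= eta * delta_out.sum(axis=0)
    W1 -= eta * X.T @ delta_hid
    b1 -= eta * delta_hid.sum(axis=0)

print(np.round(y, 3))   # typically converges close to [0, 1, 1, 0]
```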
The single-layer networks suffer from the disadvantage that they are only able to
solve linearly separable classification problems.
However, the computational effort needed to find the correct combination of weights increases substantially when more parameters and more complicated topologies are considered.
The gradient of the error function is computed and used to correct the
initial weights.
The back propagation algorithm includes two passes through the network:
(i) forward pass (ii) backward pass.
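In the backward pass, each weight is corrected by gradient descent on the error E; in the standard formulation (η is the learning rate):

$$ w_{ij} \leftarrow w_{ij} - \eta \, \frac{\partial E}{\partial w_{ij}} $$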
In Adaline, every input neuron is directly connected to the output neuron through a weighted path. There is also a bias b, whose input activation is always 1.
x1   x2    t
 1    1    1
 1   -1    1
-1    1    1
-1   -1   -1
This is epoch 1, where the total squared error is 0.49 + 0.69 + 0.83 + 1.01 = 3.02, so more epochs will run until the total error becomes less than or equal to the specified least squared error, i.e., 2.
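A minimal sketch of this Adaline (delta rule / LMS) training loop on the bipolar OR data above; the assumed initial weights w1 = w2 = b = 0.1 and learning rate 0.1 reproduce the epoch-1 squared errors quoted above (0.49, 0.69, 0.83, 1.01):

```python
import numpy as np

X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])  # bipolar OR inputs
t = np.array([1, 1, 1, -1])                         # bipolar OR targets

w = np.array([0.1, 0.1])   # assumed initial weights
b = 0.1                    # assumed initial bias
eta = 0.1                  # assumed learning rate

for epoch in range(1, 11):
    total_error = 0.0
    for x, target in zip(X, t):
        y_in = x @ w + b           # net input (Adaline trains on the linear output)
        err = target - y_in
        w += eta * err * x         # delta rule weight update
        b += eta * err
        total_error += err ** 2    # accumulate squared error over the epoch
    print(f"epoch {epoch}: total squared error = {total_error:.2f}")
    if total_error <= 2.0:         # stopping criterion from the slide
        break
```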