Neural Network
MODEL OF NEURONS:
The neural model of Fig. 5 also includes an externally applied bias, denoted by bk.
In mathematical terms, we may describe the neuron k depicted there by writing the
pair of equations:
uk = Σj wkj xj   (j = 1, 2, ..., m)
yk = φ(vk)
where vk is the induced local field, or activation potential, of neuron k, obtained by
adding the bias bk to the linear combiner output uk:
vk = uk + bk
Equivalently, the bias may be absorbed into the summation by adding a new synapse
with fixed input x0 = +1 and weight wk0 = bk.
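As a minimal sketch, the pair of equations above can be evaluated directly. The logistic sigmoid used here as the activation function φ, and the weights, bias, and inputs, are illustrative assumptions:

```python
import math

def neuron_output(x, w, b):
    """Compute y_k = phi(v_k) for one neuron, where
    v_k = sum_j w_kj * x_j + b_k and phi is the logistic sigmoid."""
    u = sum(wj * xj for wj, xj in zip(w, x))  # linear combiner output u_k
    v = u + b                                 # induced local field v_k
    return 1.0 / (1.0 + math.exp(-v))         # activation phi(v_k)

y = neuron_output(x=[0.5, -1.0, 2.0], w=[0.4, 0.3, 0.1], b=0.2)
print(round(y, 4))
```

Folding the bias in as weight w[0] with fixed input +1 would give the same result.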
Rule 1.
A signal flows along a link only in the direction defined by the arrow on the link.
Rule 2.
A node signal equals the algebraic sum of all signals entering the pertinent node via
its incoming links; this second rule covers the case of synaptic convergence, or fan-in.
Rule 3.
The signal at a node is transmitted to each outgoing link originating from that node,
with the transmission being entirely independent of the transfer functions of the
outgoing links.
Signal-flow graph of a neuron
Architectural graph of a neuron.
FEEDBACK:
Feedback networks, also known as recurrent or interactive neural networks, are
models in which information can also flow in the backward direction; that is, the
network allows feedback loops. Feedback networks are dynamic in nature, giving
rise to one or more closed paths for the transmission of signals around the system.
yk(n) = A[x'j(n)]
x'j(n) = xj(n) + B[yk(n)]
where the input signal xj(n), internal signal x'j(n), and output signal yk(n) are
functions of the discrete-time variable n; the forward path and the feedback path are
characterized by the "operators" A and B, respectively. Eliminating x'j(n) between
the two equations yields
yk(n) = (A / (1 - AB)) [xj(n)]
We refer to A/(1 - AB) as the closed-loop operator of the system, and to AB as the
open-loop operator.
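The closed-loop relation can be checked numerically in a simple case. This sketch assumes A and B are plain scalar gains (not general operators) with |AB| < 1, and iterates the feedback loop to its fixed point:

```python
# Treat A and B as scalar gains; repeatedly apply the feedback loop
# y(n) = A * (x + B * y(n-1)) and compare the steady-state value with
# the closed-loop operator A / (1 - A*B) applied to x (|A*B| < 1).
A, B, x = 0.5, 0.8, 1.0
y = 0.0
for _ in range(100):
    y = A * (x + B * y)          # one pass around the loop

closed_loop = A / (1 - A * B) * x
print(y, closed_loop)
```

With |AB| < 1 the loop converges geometrically; with |AB| >= 1 the iteration would diverge, which is the scalar analogue of an unstable feedback system.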
NETWORK ARCHITECTURES:
Single-layer feed-forward network:
In this type of network, we have only two layers, the input layer and the output
layer, but the input layer does not count because no computation is performed in it.
The output layer is formed when different weights are applied to the input nodes and
the cumulative effect per node is taken; the neurons of the output layer then compute
the output signals.
Multilayer feed-forward network:
This network also has a hidden layer that is internal to the network and has no direct
contact with the external environment. The existence of one or more hidden layers
makes the network computationally stronger. It is called a feed-forward network
because information flows only in the forward direction, from the input layer through
the hidden layer(s) to the output layer.
Recurrent Networks:
A recurrent neural network distinguishes itself from a feedforward neural network in
that it has at least one feedback loop. For example, a recurrent network may consist
of a single layer of neurons with each neuron feeding its output signal back to the
inputs of all the other neurons.
KNOWLEDGE REPRESENTATION
Knowledge refers to stored information or models used by a person or machine to
interpret, predict, and appropriately respond to the outside world.
Neural networks are a type of knowledge representation, which are used to represent
knowledge in the form of a network of interconnected neurons. These neurons,
which are connected to each other, process information through the connections. The
information is then stored and used to make decisions or predictions.
LEARNING PROCESS:
Learning, in an artificial neural network, is the method of modifying the weights of
the connections between the neurons of a specified network. Learning in ANN can be
classified into three categories namely supervised learning, unsupervised learning,
and reinforcement learning.
Supervised Learning
This type of learning is done under the supervision of a teacher. This learning
process is dependent.
The input vector is presented to the network, which produces an output vector. This
output vector is compared with the desired output vector. An error signal is generated
if there is a difference between the actual output vector and the desired output
vector; on the basis of this error signal, the weights are adjusted until the actual
output matches the desired output.
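A minimal sketch of this error-correction idea for a single linear neuron. The training pairs, learning rate, and the delta-rule style update used here are illustrative assumptions, not a prescribed algorithm:

```python
# Supervised (error-correction) learning: compare the actual output
# with the desired output and adjust the weights in proportion to the
# error signal e = d - y.
eta = 0.1                       # learning rate (assumed)
w = [0.0, 0.0]                  # initial synaptic weights
samples = [([1.0, 0.0], 1.0),   # (input vector, desired output)
           ([0.0, 1.0], 0.0)]

for _ in range(50):             # repeated presentations of the sample
    for x, d in samples:
        y = sum(wi * xi for wi, xi in zip(w, x))       # actual output
        e = d - y                                      # error signal
        w = [wi + eta * e * xi for wi, xi in zip(w, x)]

print([round(wi, 3) for wi in w])
```

After training, the weights have moved so that each actual output is close to its desired value.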
Unsupervised Learning
This type of learning is done without the supervision of a teacher. This learning
process is independent.
The input vectors of similar type are combined to form clusters. When a new input
pattern is applied, then the neural network gives an output response indicating the
class to which the input pattern belongs.
There is no feedback from the environment as to what should be the desired output
and if it is correct or incorrect.
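One way to sketch this clustering behaviour is simple competitive learning: the prototype nearest to each input is moved toward it, with no desired outputs given. The prototypes, data, and learning rate below are illustrative assumptions:

```python
# Unsupervised (competitive) learning: similar input vectors are
# grouped by moving the nearest prototype toward each input.
protos = [[0.0, 0.0], [5.0, 5.0]]      # two cluster prototypes (assumed)
data = [[0.1, 0.2], [0.3, 0.1], [4.8, 5.1], [5.2, 4.9]]
eta = 0.5

for x in data:
    # winner: prototype closest to the input (squared Euclidean distance)
    dists = [sum((pi - xi) ** 2 for pi, xi in zip(p, x)) for p in protos]
    k = dists.index(min(dists))
    # move only the winning prototype toward the input
    protos[k] = [pi + eta * (xi - pi) for pi, xi in zip(protos[k], x)]

# a new pattern is assigned to the class of its nearest prototype
x_new = [0.2, 0.2]
dists = [sum((pi - xi) ** 2 for pi, xi in zip(p, x_new)) for p in protos]
print(dists.index(min(dists)))
```

The response to the new pattern is just an index indicating which cluster it belongs to, with no teacher involved.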
Reinforcement Learning
During the training of the network under reinforcement learning, the network
receives some feedback from the environment. This makes it somewhat similar to
supervised learning; however, the feedback here is evaluative (a reward or penalty)
rather than instructive (the desired output itself).
LEARNING TASKS:
Pattern Association
An associative memory is a brain-like distributed memory that learns by association. Association has been
known to be a prominent feature of human memory.
Pattern Recognition:
Pattern recognition is formally defined as the process whereby a received pattern/signal is assigned to one
of a prescribed number of classes.
Function Approximation
The third learning task of interest is that of function approximation. Consider a
nonlinear input–output mapping described by the functional relationship d = f(x),
where the vector x is the input and the vector d is the output.
1) System identification:
The task is to model the input–output relation of an unknown memoryless
multiple-input–multiple-output (MIMO) system.
2) Inverse modeling:
Suppose next we are given a known memoryless MIMO system whose input–output
relation is known; the task is to construct an inverse model that produces the input
in response to the output.
Control
The control of a plant is another learning task that is well suited for neural networks;
by a “plant” we mean a process or critical part of a system that is to be maintained in
a controlled condition.
Chp#1
PERCEPTRONS:
A perceptron is a simple model of a biological neuron in an artificial neural network.
Perceptron is also the name of an early algorithm for supervised learning of binary
classifiers.
There are two kinds of perceptrons:
Single-layer
Multilayer
Single-layer perceptron
A single-layer perceptron can learn only linearly separable patterns.
Multilayer perceptron
A multilayer perceptron has two or more layers of computing neurons and hence
greater processing power; it can also learn patterns that are not linearly separable.
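A short sketch of the classic perceptron learning rule on a linearly separable problem (logical AND). The learning rate, epoch count, and the convention of folding the bias in as weight w[0] with fixed input +1 are illustrative choices:

```python
# Perceptron learning rule: update the weights only when an example is
# misclassified, by eta * (d - y) * x.
def step(v):
    return 1 if v >= 0 else 0   # hard-limiter activation

# samples for logical AND; first input component is the fixed bias input +1
samples = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
w = [0.0, 0.0, 0.0]
eta = 0.1

for _ in range(20):                      # training epochs
    for x, d in samples:
        y = step(sum(wi * xi for wi, xi in zip(w, x)))
        if y != d:                       # update only on misclassification
            w = [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]

print([step(sum(wi * xi for wi, xi in zip(w, x))) for x, _ in samples])
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the rule settles on weights that classify all four patterns correctly; on XOR, which is not linearly separable, this single-layer rule would never converge.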
Chp#4
Batch Learning
In the batch method of supervised learning, adjustments to the synaptic weights of
the multilayer perceptron are performed after the presentation of all the N examples
in the training sample T that constitute one epoch of training. In other words, the cost
function for batch learning is defined by the average error energy Eav. Adjustments to
the synaptic weights of the multilayer perceptron are made on an epoch-by-epoch
basis.
The advantages of batch learning include the following:
• Accurate estimation of the gradient vector (i.e., the derivative of the cost function
Eav with respect to the weight vector w), thereby guaranteeing, under simple
conditions, convergence of the method of steepest descent to a local minimum;
• Parallelization of the learning process.
On-line Learning
In the on-line method of supervised learning, adjustments to the synaptic weights of
the multilayer perceptron are performed on an example-by-example basis. The cost
function to be minimized is therefore the total instantaneous error energy E(n).
To summarize, despite the disadvantages of on-line learning, it is highly popular for
solving pattern-classification problems for two important practical reasons:
• On-line learning is simple to implement.
• It provides effective solutions to large-scale and difficult pattern-classification
problems.
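The two update schedules can be contrasted on a toy least-squares problem for a single linear neuron; the data (drawn from d = 2x), learning rate, and epoch count below are illustrative assumptions:

```python
# Batch vs. on-line gradient descent for a single linear neuron with
# squared-error cost, fitted to data generated by d = 2x.
samples = [([1.0], 2.0), ([2.0], 4.0), ([3.0], 6.0)]
eta = 0.05

# Batch learning: average the gradient over the whole epoch, update once.
w_batch = 0.0
for _ in range(200):
    grad = sum(-(d - w_batch * x[0]) * x[0] for x, d in samples) / len(samples)
    w_batch -= eta * grad

# On-line learning: update after every single example presentation.
w_online = 0.0
for _ in range(200):
    for x, d in samples:
        e = d - w_online * x[0]          # instantaneous error
        w_online += eta * e * x[0]

print(round(w_batch, 3), round(w_online, 3))
```

Both schedules converge to the same weight here; the difference is that the batch version takes one accurate gradient step per epoch, while the on-line version takes many noisy steps, which is what makes it cheap and effective on large-scale problems.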