Nature and Scope of AI Techniques: Seminar Report Nov2011

The document discusses artificial neural networks and their applications. It begins with an introduction to the structure and functioning of biological neural networks in the human brain. It then describes how artificial neural networks are modeled after biological neural systems using simplified mathematical representations. The key components of artificial neural networks include artificial neurons, connection weights, and transfer functions. Various network architectures are discussed including feedforward and recurrent networks. Different learning methods in neural networks including supervised, unsupervised, and reinforcement learning are also summarized. Hebbian learning and its role in most neural network learning techniques is explained.


Seminar Report Nov 2011: Artificial Neural Networks
Dept. of Electronics & CS
1. INTRODUCTION
Nature and Scope of AI Techniques

The human brain provides proof of the existence of massive neural networks that can succeed at those cognitive, perceptual, and control tasks in which humans are successful. The brain is capable of computationally demanding perceptual acts (e.g. recognition of faces, speech) and control activities (e.g. body movements and body functions). The advantage of the brain is its effective use of massive parallelism, the highly parallel computing structure, and the imprecise information-processing capability.

The human brain is a collection of more than 10 billion interconnected neurons. Each neuron is a cell (Figure 1) that uses biochemical reactions to receive, process, and transmit information. Treelike networks of nerve fibers called dendrites are connected to the cell body or soma, where the cell nucleus is located. Extending from the cell body is a single long fiber called the axon, which eventually branches into strands and substrands that are connected to other neurons through synaptic terminals or synapses. The transmission of signals from one neuron to another at synapses is a complex chemical process in which specific transmitter substances are released from the sending end of the junction. The effect is to raise or lower the electrical potential inside the body of the receiving cell. If the potential reaches a threshold, a pulse is sent down the axon and the cell is "fired".

Artificial neural networks (ANN) have been developed as generalizations of mathematical models of biological nervous systems. A first wave of interest in neural networks (also known as connectionist models or parallel distributed processing) emerged after the introduction of simplified neurons by McCulloch and Pitts (1943). The basic processing elements of neural
networks are called artificial neurons, or simply neurons or nodes. In a simplified mathematical model of the neuron, the effects of the synapses are represented by connection weights that modulate the effect of the associated input signals, and the nonlinear characteristic exhibited by neurons is represented by a transfer function. The neuron impulse is then computed as the weighted sum of the input signals, transformed by the transfer function. The learning capability of an artificial neuron is achieved by adjusting the weights in accordance with the chosen learning algorithm.

A typical artificial neuron and the modeling of a multilayered neural network are illustrated in Figure 2. Referring to Figure 2, the signal flow from inputs x1, . . . , xn is considered to be unidirectional, as indicated by arrows, as is a neuron's output signal flow (O). The neuron output signal O is given by the following relationship:

O = f(net) = f( w1 x1 + w2 x2 + . . . + wn xn )
where wj is the weight vector, and the function f(net) is referred to as an activation (transfer) function. The variable net is defined as a scalar product of the weight and input vectors,

net = w^T x = w1 x1 + . . . + wn xn

where T is the transpose of a matrix, and, in the simplest case, the output value O is computed as

O = f(net) = 1 if w^T x ≥ θ, and 0 otherwise,

where θ is called the threshold level; and this type of node is called a linear threshold unit.
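The linear threshold unit described above can be sketched in a few lines of code. This is an illustrative NumPy implementation, not part of the original report; the AND example and all names are assumptions.

```python
import numpy as np

def linear_threshold_unit(x, w, theta):
    # net = w^T x, the scalar product of the weight and input vectors
    net = np.dot(w, x)
    # step activation: fire (output 1) only if net reaches the threshold
    return 1 if net >= theta else 0

# Example: with weights (1, 1) and threshold 1.5 the unit computes logical AND
w = np.array([1.0, 1.0])
print(linear_threshold_unit(np.array([1.0, 1.0]), w, theta=1.5))  # net = 2, fires
print(linear_threshold_unit(np.array([1.0, 0.0]), w, theta=1.5))  # net = 1, silent
```

The same unit realizes any linearly separable Boolean function by a suitable choice of w and θ.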
2. NEURAL NETWORK ARCHITECTURES
The basic architecture consists of three types of neuron layers: input, hidden, and output layers. In feed-forward networks, the signal flow is from input to output units, strictly in a feed-forward direction. The data processing can extend over multiple (layers of) units, but no feedback connections are present. Recurrent networks contain feedback connections. Contrary to feed-forward networks, the dynamical properties of the network are important. In some cases, the activation values of the units undergo a relaxation process such that the network will evolve to a stable state in which these activations do not change anymore. In other applications, the changes of the activation values of the output neurons are significant, such that the dynamical behavior constitutes the output of the network. There are several other neural network architectures (Elman network, adaptive resonance theory maps, competitive networks, etc.), depending on the properties and requirements of the application. The reader can refer to Bishop (1995) for an extensive overview of the different neural network architectures and learning algorithms.

A neural network has to be configured such that the application of a set of inputs produces the desired set of outputs. Various methods to set the strengths of the connections exist. One way is to set the weights explicitly, using a priori knowledge. Another way is to train the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule. The learning situations in neural networks may be classified into three distinct sorts: supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, an input vector is presented at the inputs together with a set of desired responses, one for each node, at the output
layer. A forward pass is done, and the errors or discrepancies between the desired and actual response for each node in the output layer are found. These are then used to determine weight changes in the net according to the prevailing learning rule. The term supervised originates from the fact that the desired signals on individual output nodes are provided by an external teacher. The best-known examples of this technique occur in the backpropagation algorithm, the delta rule, and the perceptron rule.

In unsupervised learning (or self-organization), an (output) unit is trained to respond to clusters of patterns within the input. In this paradigm, the system is supposed to discover statistically salient features of the input population. Unlike the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather, the system must develop its own representation of the input stimuli.

Reinforcement learning is learning what to do, that is, how to map situations to actions, so as to maximize a numerical reward signal. The learner is not told which actions to take, as in most forms of machine learning, but instead must discover which actions yield the most reward by trying them. In the most interesting and challenging cases, actions may affect not only the immediate reward but also the next situation and, through that, all subsequent rewards. These two characteristics, trial-and-error search and delayed reward, are the two most important distinguishing features of reinforcement learning.
3. NEURAL NETWORK LEARNING
3.1 Hebbian Learning
The learning paradigms discussed above result in an adjustment of the weights of the connections between units, according to some modification rule. Perhaps the most influential work in connectionism's history is the contribution of Hebb (1949), where he presented a theory of behavior based, as much as possible, on the physiology of the nervous system. The most important concept to emerge from Hebb's work was his formal statement (known as Hebb's postulate) of how learning could occur: learning was based on the modification of synaptic connections between neurons. Specifically, when an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased. The principles underlying this statement have become known as Hebbian learning. Virtually all neural network learning techniques can be considered a variant of the Hebbian learning rule. The basic idea is that if two neurons are active simultaneously, their interconnection must be strengthened. If we consider a single-layer net, one of the interconnected neurons will be an input unit and one an output unit. If the data are represented in bipolar form, it is easy to express the desired weight update as

Δwi = xi o
where o is the desired output, for i = 1 to n (inputs). Unfortunately, plain Hebbian learning continually strengthens its weights without bound (unless the input data are properly normalized).
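A minimal sketch of this update (illustrative, assuming bipolar ±1 data and a learning rate of 1; the function name is my own):

```python
import numpy as np

def hebbian_update(w, x, o, eta=1.0):
    # Hebb rule: delta w_i = eta * x_i * o -- strengthen a connection
    # whenever its input and the output are active together
    return w + eta * x * o

w = np.zeros(2)
for _ in range(3):  # present the same bipolar pattern repeatedly
    w = hebbian_update(w, np.array([1.0, -1.0]), 1.0)
print(w)  # the weights grow with every presentation
```

Repeated presentations keep enlarging the weights, which is exactly the unbounded growth noted above.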
3.2 Perceptron Learning Rule
The perceptron is a single-layer neural network whose weights and biases can be trained to produce a correct target vector when presented with the corresponding input vector. The training technique used is called the perceptron learning rule. Perceptrons are especially suited for simple problems in pattern classification. Suppose we have a set of learning samples consisting of an input vector x and a desired output d(k). For a classification task, d(k) is usually +1 or -1. The perceptron learning rule is very simple and can be stated as follows:
1. Start with random weights for the connections.
2. Select an input vector x from the set of training samples.
3. If the output yk ≠ d(k) (the perceptron gives an incorrect response), modify all connections wi according to: Δwi = η(d(k) - yk)xi (η = learning rate).
4. Go back to step 2.
Note that the procedure is very similar to the Hebb rule; the only difference is that when the network responds correctly, no connection weights are modified.
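The four steps above can be sketched as follows. This is an illustrative implementation; the logical-OR task, the bias term, and the random seed are assumptions, not from the report.

```python
import numpy as np

def train_perceptron(samples, n_inputs, eta=0.1, epochs=100):
    # 1. start with (small) random weights
    rng = np.random.default_rng(0)
    w = rng.uniform(-0.5, 0.5, n_inputs)
    b = 0.0
    for _ in range(epochs):
        # 2. select an input vector x from the training samples
        for x, d in samples:
            y = 1 if np.dot(w, x) + b >= 0 else -1
            # 3. only an incorrect response changes the weights
            if y != d:
                w += eta * (d - y) * x
                b += eta * (d - y)
        # 4. go back to step 2
    return w, b

# Linearly separable task: logical OR with bipolar targets
samples = [(np.array([0.0, 0.0]), -1), (np.array([0.0, 1.0]), 1),
           (np.array([1.0, 0.0]), 1), (np.array([1.0, 1.0]), 1)]
w, b = train_perceptron(samples, n_inputs=2)
```

Because the task is linearly separable, the perceptron convergence theorem guarantees that the loop stops making mistakes after finitely many updates.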
4. BACKPROPAGATION LEARNING
The simple perceptron is just able to handle linearly separable or linearly independent problems. By taking the partial derivative of the error of the network with respect to each weight, we learn a little about the direction in which the error of the network is moving. In fact, if we take the negative of this derivative (i.e. the rate of change of the error as the value of the weight increases) and then add it to the weight, the error will decrease until it reaches a local minimum. This makes sense because if the derivative is positive, the error is increasing as the weight increases, so the obvious thing to do is to add a negative value to the weight, and vice versa if the derivative is negative. Because these partial derivatives are taken and applied to the weights layer by layer, starting from the output-to-hidden-layer weights and then the hidden-to-input-layer weights (as it turns out, this is necessary, since changing these sets of weights requires that we know the partial derivatives calculated in the layer downstream), this algorithm has been called the backpropagation algorithm.

A neural network can be trained in two different modes: online and batch modes. The number of weight updates of the two methods for the same number of data presentations is very different. In the online method, weight updates are computed for each input data sample, and the weights are modified after each sample. An alternative solution is to compute the weight update for each input sample, but to store these values during one pass through the training set, which is called an epoch. At the end of the epoch, all the contributions are added, and only then are the weights updated with the composite value. This method adapts the weights with a cumulative weight update, so it will follow the gradient more closely. It is called the batch-training mode.

Training basically involves feeding training samples as input vectors through a neural network, calculating the error of the output layer, and then adjusting the weights of the network to minimize the error. The average of all the squared errors (E) for the outputs is computed to make the derivative easier. Once the error is computed, the weights can be updated one by one. In the batched mode variant, the descent is based on the gradient of E for the total training set:

Δwij(n) = -η ∂E/∂wij + α Δwij(n - 1)    (4)

where η and α are the learning rate and momentum, respectively. The momentum term determines the effect of past weight changes on the current direction of movement in the weight space. A good choice of both η and α is required for training success and the speed of neural-network learning. It has been proven that backpropagation learning with sufficient hidden layers can approximate any nonlinear function to arbitrary accuracy. This makes a backpropagation learning neural network a good candidate for signal prediction and system modeling.
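A compact sketch of batch backpropagation with the momentum update of equation (4). Everything concrete here is an assumption for illustration: the XOR task, the 2-4-1 topology, the sigmoid activation, and the parameter values.

```python
import numpy as np

rng = np.random.default_rng(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

# XOR training set: not linearly separable, so a hidden layer is needed
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
d = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.uniform(-1, 1, (2, 4))   # input -> hidden weights (2-4-1 net)
W2 = rng.uniform(-1, 1, (4, 1))   # hidden -> output weights
dW1 = np.zeros_like(W1)
dW2 = np.zeros_like(W2)
eta, alpha = 0.5, 0.9             # learning rate and momentum

mse = []
for epoch in range(5000):
    h = sig(X @ W1)                    # forward pass, hidden layer
    y = sig(h @ W2)                    # forward pass, output layer
    e = y - d
    mse.append(float(np.mean(e ** 2)))
    g2 = e * y * (1 - y)               # output-layer deltas
    g1 = (g2 @ W2.T) * h * (1 - h)     # hidden deltas use downstream deltas
    # batch update with momentum: dw(n) = -eta * dE/dw + alpha * dw(n-1)
    dW2 = -eta * (h.T @ g2) + alpha * dW2
    dW1 = -eta * (X.T @ g1) + alpha * dW1
    W2 += dW2
    W1 += dW1
```

Note the order of the backward pass: the hidden-layer deltas g1 cannot be formed until the output-layer deltas g2 exist, which is the layer-downstream dependency described above.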
5. TRAINING AND TESTING NEURAL NETWORKS
The best training procedure is to compile a wide range of examples (for more complex problems, more examples are required) which exhibit all the different characteristics of the problem. To create a robust and reliable network, in some cases some noise or other randomness is added to the training data to get the network familiarized with the noise and natural variability in real data. Poor training data inevitably leads to an unreliable and unpredictable network. Usually, the network is trained for a prefixed number of epochs or until the output error decreases below a particular error threshold. Special care must be taken not to overtrain the network. By overtraining, the network may become too adapted to learning the samples from the training set, and thus may be unable to accurately classify samples outside of the training set.

Figure 3 illustrates the classification results of an overtrained network. The task is to correctly classify two patterns X and Y. Training patterns and test patterns are shown by different markers. The test patterns were not shown during the training phase. As shown in Figure 3 (left side), each class of test data has been classified correctly, even though they were not seen during training. The trained network is said to have good generalization performance. Figure 3 (right side) illustrates some misclassification of the test data. The network initially learns to detect the global features of the input and, as a consequence, generalizes very well. But after prolonged training, the network starts to recognize individual input/output pairs rather than settling for weights that generally describe the mapping for the whole training set (Fausett, 1994).
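The overtraining effect can be demonstrated with any model whose capacity can be varied. The sketch below uses polynomial fits as a stand-in for network training; the data set, noise level, and degrees are illustrative assumptions, not from the report.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)  # noisy target

x_tr, y_tr = x[::2], y[::2]     # training half
x_va, y_va = x[1::2], y[1::2]   # held-out test half

def val_error(degree):
    # fit on the training half only, evaluate on the held-out half
    coeffs = np.polyfit(x_tr, y_tr, degree)
    return float(np.mean((np.polyval(coeffs, x_va) - y_va) ** 2))

errors = {deg: val_error(deg) for deg in (1, 3, 9, 15)}
best = min(errors, key=errors.get)
# degree 1 underfits; very high degrees chase the training noise and
# tend to generalize worse, mirroring the overtrained network of Figure 3
```

The same diagnostic, error on data not seen during training, is what the figure uses to expose overtraining in a network.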
5.1 Choosing the number of neurons
The number of hidden neurons affects how well the network is able to separate the data. A large number of hidden neurons will ensure correct learning, and the network will be able to correctly predict the data it has been trained on, but its performance on new data, its ability to generalize, is compromised. With too few hidden neurons, the network may be unable to learn the relationships amongst the data, and the error will fail to fall below an acceptable level. Thus, selection of the number of hidden neurons is a crucial decision.
5.2 Choosing the initial weights
The learning algorithm uses a steepest-descent technique, which rolls straight downhill in weight space until the first valley is reached. This makes the choice of the initial starting point in the multidimensional weight space critical. However, there are no recommended rules for this selection except trying several different starting weight values to see if the network results improve.
5.3 Choosing the learning rate
The learning rate effectively controls the size of the step that is taken in multidimensional weight space when each weight is modified. If the selected learning rate is too large, the local minimum may be overstepped constantly, resulting in oscillations and slow convergence to the lower error state. If the learning rate is too low, the number of iterations required may be too large, resulting in slow performance.
6. HIGHER ORDER LEARNING ALGORITHMS
Backpropagation (BP) often gets stuck at a local minimum, mainly because of the random initialization of weights. For some initial weight settings, BP may not be able to reach a global minimum of weight space, while for other initializations the same network is able to reach an optimal minimum. A long-recognized bane of analysis of the error surface and the performance of training algorithms is the presence of multiple stationary points, including multiple minima. Empirical experience with training algorithms shows that different initializations of the weights yield different resulting networks. Hence, multiple minima not only exist, but there may be huge numbers of them.

In practice, there are four types of optimization algorithms that are used to optimize the weights. The first three methods, gradient descent, conjugate gradients, and quasi-Newton, are general optimization methods whose operation can be understood in the context of minimization of a quadratic error function. Although the error surface is surely not quadratic, for differentiable node functions it will be so in a sufficiently small neighborhood of a local minimum, and such an analysis provides information about the behavior of the training algorithm over the span of a few iterations and also as it approaches its goal. The fourth method, that of Levenberg and Marquardt, is specifically adapted to the minimization of an error function that arises from a squared-error criterion of the form we are assuming. A common feature of these training algorithms is the requirement of repeated efficient calculation of gradients. The reader can refer to Bishop (1995) for an extensive coverage of higher-order learning algorithms.

Even though artificial neural networks are capable of performing a wide variety of tasks, in practice they sometimes deliver only marginal performance. Inappropriate topology selection and learning algorithms are frequently blamed. There is little reason to expect that one can find a uniformly best algorithm for selecting the weights in a feedforward artificial neural network. This is in accordance with the no free lunch theorem, which explains that any elevated performance of an algorithm over one class of problems is exactly paid for in performance over another class (Macready and Wolpert, 1997).

The design of artificial neural networks using evolutionary algorithms has been widely explored. Evolutionary algorithms are used to adapt the connection weights, network architecture, and so on, according to the problem environment. A distinct feature of evolutionary neural networks is their adaptability to a dynamic environment. In other words, such neural networks can adapt to an environment as well as to changes in the environment. The two forms of adaptation, evolution and learning, in evolutionary artificial neural networks make their adaptation to a dynamic environment much more effective and efficient than the conventional learning approach. Refer to Abraham (2004) for more technical information related to the evolutionary design of neural networks.
7. DESIGNING ARTIFICIAL NEURAL NETWORKS
To illustrate the design of artificial neural networks, the Mackey-Glass chaotic time series (Box and Jenkins, 1970) benchmark is used. The performance of the designed neural network is evaluated for different architectures and activation functions. The Mackey-Glass differential equation is a chaotic time series for some values of the parameters x(0) and τ.

We used the values x(t - 18), x(t - 12), x(t - 6), and x(t) to predict x(t + 6). A fourth-order Runge-Kutta method was used to generate 1000 data points. The time step used in the method is 0.1, and the initial conditions were x(0) = 1.2 and τ = 17.
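The data generation described above can be sketched as follows. The Mackey-Glass equation itself, dx/dt = 0.2 x(t - τ)/(1 + x(t - τ)^10) - 0.1 x(t), is the standard form and is an assumption here, since the report omits the formula; holding the delayed term constant across the Runge-Kutta substeps is a further simplification.

```python
import numpy as np

def mackey_glass(n_points=1000, tau=17.0, dt=0.1, x0=1.2):
    """Integrate dx/dt = 0.2*x(t-tau)/(1 + x(t-tau)**10) - 0.1*x(t)
    with fourth-order Runge-Kutta, taking x(t) = 0 for t < 0."""
    delay = int(tau / dt)                 # delay expressed in steps
    def f(xt, xd):
        return 0.2 * xd / (1.0 + xd ** 10) - 0.1 * xt
    x = [x0]
    for n in range(n_points - 1):
        xd = x[n - delay] if n >= delay else 0.0
        # the delayed term is held constant over the substeps (simplification)
        k1 = f(x[n], xd)
        k2 = f(x[n] + 0.5 * dt * k1, xd)
        k3 = f(x[n] + 0.5 * dt * k2, xd)
        k4 = f(x[n] + dt * k3, xd)
        x.append(x[n] + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0)
    return np.array(x)

series = mackey_glass()
# One training sample: inputs x(t-18), x(t-12), x(t-6), x(t); target x(t+6).
# With dt = 0.1, a lag of 6 time units is 60 steps.
t = 500
inputs = series[[t - 180, t - 120, t - 60, t]]
target = series[t + 60]
```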
Table 1. Training and test performance on the Mackey-Glass series for different architectures.
7.1 Network architecture
A feed-forward neural network with four input neurons, one hidden layer, and one output neuron is used. Weights were randomly initialized, and the learning rate and momentum are set at 0.05 and 0.1, respectively. The number of hidden neurons is varied (14, 16, 18, 20, 24), and the generalization performance is reported in Table 1. All networks were trained for an identical number of stochastic updates (2500 epochs).
7.2 Role of activation functions
The effect of two different node activation functions in the hidden layer, the log-sigmoidal activation function (LSAF) and the tanh-sigmoidal activation function (TSAF), keeping 24 hidden neurons for the backpropagation learning algorithm, is illustrated in Figure 4. Table 2 summarizes the empirical results for training and generalization for the
Figure 4. Convergence of training for different node transfer functions.
Table 2. Mackey-Glass time series: training and generalization performance for different activation functions.
Figure 5. Computational complexity for different architectures.
two node transfer functions. The generalization looks better with TSAF. Figure 5 illustrates the computational complexity in billions of flops for different numbers of hidden neurons. At present, neural network design relies heavily on human experts who have sufficient knowledge of the different aspects of the network and the problem domain. As the complexity of the problem domain increases, manual design becomes more difficult.
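The two transfer functions compared in Table 2 are standard; a sketch of both (the names LSAF and TSAF follow the report, the code itself is illustrative):

```python
import numpy as np

def logsig(z):
    # log-sigmoid (LSAF): squashes to (0, 1), with logsig(0) = 0.5
    return 1.0 / (1.0 + np.exp(-z))

def tansig(z):
    # tanh-sigmoid (TSAF): squashes to (-1, 1), zero-centered outputs
    return np.tanh(z)

z = np.array([-2.0, 0.0, 2.0])
print(logsig(z))   # values in (0, 1)
print(tansig(z))   # values in (-1, 1)
```

The zero-centered range of tanh is one common explanation for its faster convergence in hidden layers, though the report only states the empirical observation.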
8. SELF-ORGANIZING FEATURE MAP AND RADIAL BASIS FUNCTION NETWORK
8.1 Self-organizing feature map
The Self-organizing Feature Map (SOFM) is a data visualization technique proposed by Kohonen (1988), which reduces the dimensions of data through the use of self-organizing neural networks. A SOFM learns the categorization, topology, and distribution of input vectors. SOFMs allocate more neurons to recognize parts of the input space where many input vectors occur, and allocate fewer neurons to parts of the input space where few input vectors occur. Neurons next to each other in the network learn to respond to similar vectors. SOFMs can learn to detect regularities and correlations in their input and adapt their future responses to that input accordingly. An important feature of the SOFM learning algorithm is that it allows neurons that are neighbors of the winning neuron to output values. Thus, the transition of output vectors is much smoother than that obtained with competitive layers, where only one neuron has an output at a time.

The problem that data visualization attempts to solve is that humans simply cannot visualize high-dimensional data. The way a SOFM goes about reducing dimensions is by producing a map of usually 1 or 2 dimensions which plots the similarities of the data by grouping similar data items together (data clustering). In this process, SOFMs accomplish two things: they reduce dimensions and display similarities. It is important to note that while a self-organizing map does not take long to organize itself so that neighboring neurons recognize similar inputs, it can take a long time for the map to finally arrange itself according to the distribution of input vectors.
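A minimal one-dimensional SOFM sketch of the winner-plus-neighbors update described above. The grid size, neighborhood radius, and decay schedule are illustrative assumptions; Kohonen's full algorithm also shrinks the neighborhood over time.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dim = 10, 2
W = rng.uniform(0, 1, (n_neurons, dim))   # one weight vector per map neuron
data = rng.uniform(0, 1, (500, dim))      # 2-D inputs mapped onto a 1-D grid

eta, radius = 0.5, 2
for x in data:
    # competition: the neuron whose weights are closest to x wins
    winner = int(np.argmin(np.linalg.norm(W - x, axis=1)))
    # cooperation: the winner AND its grid neighbors move toward the input,
    # which is what makes neighboring neurons respond to similar vectors
    for j in range(n_neurons):
        if abs(j - winner) <= radius:
            W[j] += eta * (x - W[j])
    eta *= 0.995                           # slowly freeze the map
```

The neighborhood update is the ingredient that distinguishes a SOFM from a plain competitive layer, where only the winner would move.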
8.2 Radial basis function network
The Radial Basis Function (RBF) network is a three-layer feed-forward network that uses a linear transfer function for the output units and a nonlinear transfer function (normally the Gaussian) for the hidden-layer neurons (Chen, Cowan and Grant, 1991). Radial basis networks may require more neurons than standard feed-forward backpropagation networks, but often they can be designed in less time. They perform well when many training data are available. Much of the inspiration for RBF networks has come from traditional statistical pattern classification techniques.

The input layer is simply a fan-out layer and does no processing. The second or hidden layer performs a nonlinear mapping from the input space into a (usually) higher-dimensional space, whose activation function is selected from a class of functions called basis functions. The final layer performs a simple weighted sum with a linear output. Contrary to BP networks, the weights of the hidden-layer basis units (input to hidden layer) are set using some clustering technique. The idea is that the patterns in the input space form clusters. If the centers of these clusters are known, then the Euclidean distance from the cluster center can be measured. As the input data move away from the connection weights, the activation value falls off. This distance measure is made nonlinear in such a way that input data close to a cluster center get a value close to 1. Once the hidden-layer weights are set, a second phase of training (usually backpropagation) is used to adjust the output weights.
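A sketch of the three-layer RBF pipeline. Two stand-ins are assumed for brevity: evenly spaced centers replace the clustering phase, and a linear least-squares solve replaces the second (backpropagation) training phase mentioned above.

```python
import numpy as np

# Approximate y = sin(x): Gaussian hidden layer, linear output layer
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)

centers = np.linspace(0, 2 * np.pi, 10)   # stand-in for cluster centers
sigma = 0.7                               # width of each basis function

def hidden_layer(x):
    # Gaussian basis: activation falls off as the input moves from a center
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))

H = hidden_layer(x)                        # shape (100 samples, 10 hidden units)
w, *_ = np.linalg.lstsq(H, y, rcond=None)  # output layer: linear weighted sum
y_hat = H @ w
```

Because the output layer is linear, this second phase is a convex problem, which is one reason RBF networks can often be designed more quickly than BP networks.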
9. RECURRENT NEURAL NETWORKS AND ADAPTIVE RESONANCE THEORY
9.1 Recurrent neural networks
Recurrent networks are the state of the art in nonlinear time-series prediction, system identification, and temporal pattern classification. As the output of the network at time t is used along with a new input to compute the output of the network at time t + 1, the response of the network is dynamic (Mandic and Chambers, 2001).

Time Lag Recurrent Networks (TLRN) are multilayered perceptrons extended with short-term memory structures that have local recurrent connections. The recurrent neural network is a very appropriate model for processing temporal (time-varying) information. Examples of temporal problems include time-series prediction, system identification, and temporal pattern recognition. A simple recurrent neural network can be constructed by a modification of the multilayered feed-forward network with the addition of a "context layer". The context layer is added to the structure and retains information between observations. At each time step, new inputs are fed to the network. The previous contents of the hidden layer are passed into the context layer. These then feed back into the hidden layer in the next time step. Initially, the context layer contains nothing, so the output from the hidden layer after the first input to the network will be the same as if there were no context layer. Weights are calculated in the same way for the new connections between the context layer and the hidden layer.

The training algorithm used in TLRN (backpropagation through time) is more advanced than the standard backpropagation algorithm. Very often, TLRN requires a smaller network to learn temporal problems when compared to MLPs that use extra inputs to represent the past samples. TLRN is biologically more plausible and computationally more powerful than other adaptive models such as the hidden Markov model. Some popular recurrent network architectures are the Elman recurrent network, in which the hidden unit activation values are fed back to an extra set of input units, and the Jordan recurrent network, in which output values are fed back into hidden units.
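The context-layer mechanism can be sketched as a forward pass (the layer sizes and random weights are illustrative; training by backpropagation through time is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 1, 5, 1
W_in = rng.uniform(-0.5, 0.5, (n_hidden, n_in))       # input -> hidden
W_ctx = rng.uniform(-0.5, 0.5, (n_hidden, n_hidden))  # context -> hidden
W_out = rng.uniform(-0.5, 0.5, (n_out, n_hidden))     # hidden -> output

def run(sequence):
    context = np.zeros(n_hidden)   # initially the context layer is empty
    outputs = []
    for x in sequence:
        # the hidden layer sees the new input plus the previous hidden state
        h = np.tanh(W_in @ np.atleast_1d(x) + W_ctx @ context)
        outputs.append(W_out @ h)
        context = h.copy()         # hidden activations become the next context
    return np.array(outputs)

out = run([0.1, 0.5, 0.1])
# out[0] and out[2] differ even though the input value is the same,
# because the context layer carries the history of the sequence
```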
9.2 Adaptive resonance theory
Adaptive Resonance Theory (ART) was initially introduced by Grossberg (1976) as a theory of human information processing. ART neural networks are extensively used for supervised and unsupervised classification tasks and function approximation. There exist many different variations of ART networks today (Carpenter and Grossberg, 1998). For example, ART1 performs unsupervised learning for binary input patterns, ART2 is modified to handle both analog and binary input patterns, and ART3 performs parallel searches of distributed recognition codes in a multilevel network hierarchy. Fuzzy ARTMAP represents a synthesis of elements from neural networks, expert systems, and fuzzy logic.
CONCLUSION
This report presented the biological motivation and fundamental aspects of modeling artificial neural networks. The performance of feed-forward artificial neural networks on a function approximation problem was demonstrated. Advantages of some specific neural network architectures and learning algorithms were also discussed.
REFERENCES

Abraham, A. (2004) Meta-Learning Evolutionary Artificial Neural Networks, Neurocomputing Journal, Vol. 56c, Elsevier Science, Netherlands, pp. 1-38.

Bishop, C.M. (1995) Neural Networks for Pattern Recognition, Oxford University Press, Oxford, UK.

Box, G.E.P. and Jenkins, G.M. (1970) Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, CA.

Carpenter, G. and Grossberg, S. (1998) Adaptive Resonance Theory (ART), in The Handbook of Brain Theory and Neural Networks (ed. M.A. Arbib), MIT Press, Cambridge, MA, pp. 79-82.

Chen, S., Cowan, C.F.N. and Grant, P.M. (1991) Orthogonal Least Squares Learning Algorithm for Radial Basis Function Networks, IEEE Transactions on Neural Networks, 2(2), 302-309.
