Knowledge Based and Neural Network Learning
CS1545
The ability of a neural network (NN) to learn from its environment and to improve
its performance through learning.
Hebb: The Organization of Behavior, 1949 → long-term potentiation, LTP (Bliss &
Lømo, 1973), AMPA receptor; long-term depression, LTD, NMDA receptor.
The idea of competitive learning: von der Malsburg, 1973, the self-organization of
orientation-sensitive nerve cells in the striate cortex.
Statistical thermodynamics:
Statistical thermodynamics applied to the study of computing machinery: John von
Neumann, Theory and Organization of Complicated Automata, 1949.
2. Memory-Based Learning: In memory-based learning, all of the past experiences
are explicitly stored in a large memory of correctly classified input-output examples:
=> The training set {(xi, di)}, i = 1, ..., N.
=> Criterion used for defining the local neighbourhood of the test vector xtest.
=> Learning rule applied to the training examples in the local neighborhood of
xtest.
=> Nearest neighbor rule: the vector x'N ∈ {x1, x2, ..., xN} is the nearest neighbor
of xtest if min_i d(xi, xtest) = d(x'N, xtest).
=> If the classified examples (xi, di) are independently and identically distributed
according to the joint probability distribution of the example (x, d),
=> and if the sample size N is infinitely large,
=> then the classification error incurred by the nearest neighbor rule is bounded
above by twice the Bayes probability of error.
=> K-nearest neighbor classifier:
=> Identify the K classified patterns that lie nearest to the test vector xtest, for
some integer K.
=> Assign xtest to the class that is most frequently represented among the K nearest
neighbors of xtest.
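The two K-nearest-neighbor steps above can be sketched as follows (a minimal NumPy version; the function name and toy data are illustrative, not from the notes):

```python
import numpy as np

def knn_classify(X_train, d_train, x_test, k=3):
    """Assign x_test to the class most frequently represented
    among its k nearest stored examples (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x_test, axis=1)   # distance to every stored example
    nearest = np.argsort(dists)[:k]                    # indices of the k nearest neighbors
    labels, counts = np.unique(d_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                   # majority vote

# Toy memory of correctly classified examples {(x_i, d_i)}: two clusters.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
d = np.array([0, 0, 0, 1, 1, 1])
print(knn_classify(X, d, np.array([0.15, 0.15])))  # falls in cluster 0
```

With k = 1 this reduces to the plain nearest-neighbor rule discussed above.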
3. Hebbian Learning:
a) If two neurons on either side of a synapse are activated simultaneously
(synchronously), then the strength of that synapse is selectively increased.
b) If two neurons on either side of a synapse are activated asynchronously, then that
synapse is selectively weakened or eliminated.
Properties of a Hebbian synapse:
i) Time-dependent mechanism.
ii) Local mechanism (spatiotemporal contiguity).
iii) Interactive mechanism.
iv) Conjunctional or correlational mechanism.
Note that (covariance hypothesis; x̄ and ȳ denote the time-averaged presynaptic and
postsynaptic activities):
a) The synaptic weight wkj is enhanced if the conditions xj > x̄ and yk > ȳ are
both satisfied.
b) The synaptic weight wkj is depressed if xj > x̄ and yk < ȳ, or if yk > ȳ and
xj < x̄.
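The covariance form of Hebb's rule described in the note above can be sketched as follows (a hypothetical NumPy helper; the means x̄ and ȳ are passed in as given):

```python
import numpy as np

def covariance_hebb_update(w, x, y, x_bar, y_bar, eta=0.1):
    """Covariance form of Hebb's rule:
    dw_kj = eta * (y_k - y_bar) * (x_j - x_bar).
    The weight w_kj is enhanced when pre- and postsynaptic activity
    are both above (or both below) their averages, and depressed
    when one is above its average while the other is below."""
    return w + eta * np.outer(y - y_bar, x - x_bar)

w = np.zeros((1, 1))
# x_j > x_bar and y_k > y_bar  ->  weight enhanced
w_up = covariance_hebb_update(w, np.array([1.0]), np.array([1.0]), 0.5, 0.5)
# x_j > x_bar but y_k < y_bar  ->  weight depressed
w_down = covariance_hebb_update(w, np.array([1.0]), np.array([0.0]), 0.5, 0.5)
print(w_up[0, 0], w_down[0, 0])  # positive change, then negative change
```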
4. Competitive Learning:
→ The output neurons of a neural network compete among themselves to become
active.
→ A set of neurons that are all the same (except for the synaptic weights).
→ A limit imposed on the strength of each neuron.
→ A mechanism that permits the neurons to compete → a winner-takes-all neuron.
→ The standard competitive learning rule:
∆wkj = η(xj − wkj) if neuron k wins the competition,
∆wkj = 0 if neuron k loses the competition.
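A minimal sketch of this winner-takes-all update (the function name, toy weights, and learning rate are illustrative):

```python
import numpy as np

def competitive_update(W, x, eta=0.5):
    """Standard competitive learning rule: the neuron whose weight
    vector lies closest to the input x wins and shifts toward x;
    every losing neuron's weights are left unchanged."""
    winner = np.argmin(np.linalg.norm(W - x, axis=1))  # winner-takes-all
    W = W.copy()
    W[winner] += eta * (x - W[winner])                 # dW_kj = eta * (x_j - w_kj)
    return W, winner

W = np.array([[0.0, 0.0], [1.0, 1.0]])    # one weight row per output neuron
x = np.array([0.9, 0.8])
W_new, k = competitive_update(W, x)
print(k)  # neuron 1 is closest to x, so it wins and moves toward x
```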
5. Boltzmann Learning:
→ The neurons constitute a recurrent structure and they operate in a binary manner.
The machine is characterized by an energy function E:
E = −(1/2) Σj Σk wkj xk xj,  j ≠ k.
Clamped Conditions:
The visible neurons are all clamped onto specific states determined by the
environment.
Free -Running Conditions:
→ All the neurons (visible and hidden) are allowed to operate freely.
→ The Boltzmann learning rule:
∆wkj = η(ρ+kj − ρ−kj),  j ≠ k,
where ρ+kj and ρ−kj are the correlations between the states of neurons k and j in
the clamped and free-running conditions, respectively.
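The rule can be sketched as follows (a simplified sketch: estimating the two correlation matrices by sampling the network in the clamped and free-running phases is assumed to happen elsewhere):

```python
import numpy as np

def mean_correlations(samples):
    """Estimate the correlations <x_k x_j> from binary (+1/-1)
    network states, one state vector per row."""
    return samples.T @ samples / len(samples)

def boltzmann_update(W, rho_clamped, rho_free, eta=0.01):
    """Boltzmann learning rule: dw_kj = eta * (rho+_kj - rho-_kj),
    where rho+ and rho- are the correlations measured in the
    clamped and free-running conditions, respectively."""
    dW = eta * (rho_clamped - rho_free)
    np.fill_diagonal(dW, 0.0)   # no self-connections: j != k
    return W + dW
```

A weight grows when two neurons are more strongly correlated with the environment clamped on than when the network runs freely, and shrinks in the opposite case.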
[Figure: learning with a teacher. The environment supplies the state (input) vector
to both the teacher and the learning system; the teacher's desired response is
compared with the actual response to produce an error signal that adjusts the
learning system.]
[Figure: reinforcement learning. The environment supplies the state (input) vector
to a critic, which in turn drives the learning system.]
Delayed reinforcement:
→ Means that the system observes a temporal sequence of stimuli.
→ Difficult to perform for two reasons:
→ There is no teacher to provide a desired response at each step of the learning
process.
→ The delay incurred in the generation of the primary reinforcement signal implies
that the machine must solve a temporal credit assignment problem.
→ Reinforcement learning is closely related to dynamic programming.
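One standard way to attack the temporal credit assignment problem (temporal-difference learning, which is not spelled out in these notes) bootstraps, as dynamic programming does, the value of a state from the estimated value of its successor:

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """TD(0) update: move V[s] toward r + gamma * V[s_next], so a
    delayed reward gradually propagates back to earlier states."""
    V = dict(V)
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

# Two-state chain: the reward 1.0 arrives only on leaving state "b".
V = {"a": 0.0, "b": 0.0, "end": 0.0}
for _ in range(50):
    V = td0_update(V, "a", 0.0, "b")     # no immediate reward in "a"
    V = td0_update(V, "b", 1.0, "end")   # delayed reward observed here
print(V["a"] > 0.0)  # True: credit has propagated back to "a"
```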
Unsupervised learning:
Heteroassociation:
It differs from autoassociation in that an arbitrary set of input patterns is paired
with another arbitrary set of output patterns.
[Figure: a pattern associator mapping an input pattern X to an output pattern Y.]
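One classical sketch of such a pattern associator is the Hebbian outer-product construction (the bipolar toy patterns below are illustrative):

```python
import numpy as np

def build_associator(X, Y):
    """Heteroassociative memory: W = sum_p y_p x_p^T, the Hebbian
    outer-product construction over the paired patterns."""
    return sum(np.outer(y, x) for x, y in zip(X, Y))

def recall(W, x):
    """Recall the output pattern associated with bipolar input x."""
    return np.sign(W @ x)

# Two orthogonal bipolar input patterns paired with arbitrary outputs.
X = [np.array([1, 1, -1, -1]), np.array([1, -1, 1, -1])]
Y = [np.array([1, -1]), np.array([-1, 1])]
W = build_associator(X, Y)
print(recall(W, X[0]))  # recovers Y[0]: [ 1 -1]
```

Because the stored input patterns are mutually orthogonal here, each one recalls its paired output exactly; correlated inputs would introduce crosstalk.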