ML Unit 1
Machine Learning
TEXT BOOKS:
• Tom M. Mitchell, "Machine Learning", First Edition, McGraw Hill Education, 2013.
REFERENCES:
• Peter Flach, "Machine Learning: The Art and Science of Algorithms that Make Sense of Data", First Edition, Cambridge University Press, 2012.
❑ First, it interprets the raw data to find the hidden patterns in the data, and then applies suitable algorithms such as k-means clustering, decision trees, etc.
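As a concrete illustration of pattern discovery, here is a minimal 1-D k-means sketch in plain Python; the data points, k, and iteration count are made-up choices (real code would typically use a library such as scikit-learn):

```python
# Minimal 1-D k-means sketch: alternate between assigning points to their
# nearest centroid and moving each centroid to its cluster's mean.

def kmeans_1d(points, k=2, iters=10):
    # Initialize centroids with the first k points (a simple, naive choice).
    centroids = points[:k]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

print(sorted(kmeans_1d([1.0, 1.2, 0.8, 8.0, 8.2, 7.8])))  # → two cluster centers near 1.0 and 8.0
```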
Brain
Neuron and myelinated axon, with signal flow from inputs at dendrites to outputs
at axon terminals
Methods
a) Hebb's Rule:
• Introduced by Donald Hebb
• The change in the strength of a synaptic connection is proportional (∝) to the correlation in the firing of the two connecting neurons; connections that are not reinforced weaken and die. It is a learning rule that describes how neuronal activity influences the connections between neurons.
2. For each training pair s : t (input vector s, target output t), repeat steps 3–5.
3. Set activations for the input units: xi = si for i = 1 to n.
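The training procedure these steps come from can be sketched as follows; the bipolar AND-gate data is an illustrative choice, not from the slides:

```python
# Sketch of Hebb-net training: for each training pair (s, t), set x_i = s_i
# and y = t, then update w_i += x_i * y and b += y (Hebb's rule: the weight
# change is the product of the two connected units' activations).

def hebb_train(samples):
    n = len(samples[0][0])
    w = [0.0] * n      # weights start at zero
    b = 0.0            # bias
    for s, t in samples:
        for i in range(n):
            w[i] += s[i] * t   # Hebb's rule: delta w_i = x_i * y
        b += t                 # bias update: delta b = y
    return w, b

# Bipolar AND: output +1 only when both inputs are +1.
and_data = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
w, b = hebb_train(and_data)
print(w, b)   # → [2.0, 2.0] -2.0
```

After one pass the learned weights and bias correctly separate the bipolar AND patterns.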
Ex:
When a dog is presented with meat, it naturally salivates to the meat, especially when hungry.
c) McCulloch and Pitts Neurons (1943):
θ > nw − p
n → number of input neurons
w → positive (excitatory) weight
p → negative (inhibitory) weight
Hence, the McCulloch-Pitts neuron is a binary threshold device.
y_in = x1w1 + x2w2 + … + xmwm
Now we need to find the threshold value θ such that the neuron y fires exactly when the weighted sum reaches it:
θ ≥ nw − p
θ ≥ 2·1 − 0 = 2
θ ≥ 2
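A minimal sketch of this binary threshold device, using the θ = 2 AND example above (the function name and 0/1 encoding are my choices):

```python
# McCulloch-Pitts neuron: fires (outputs 1) when the weighted input sum
# y_in = x1*w1 + ... + xm*wm reaches the threshold θ. With n = 2 inputs of
# weight w = 1 and no inhibition (p = 0), θ >= n*w - p = 2 realizes AND.

def mp_neuron(inputs, weights, theta):
    y_in = sum(x * w for x, w in zip(inputs, weights))
    return 1 if y_in >= theta else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, mp_neuron((x1, x2), (1, 1), theta=2))
# → fires only for input (1, 1)
```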
1. Choosing the training experience
1.1) Whether it provides direct or indirect feedback on the choices made by the performance system
1.2) Degree to which the learner controls the sequence of training examples
1.3) How well it represents the distribution of examples
2. Choosing the target function
3. Choosing a representation for the target function
4. Choosing a Function Approximation Algorithm
4.1) Estimating training values
4.2) Adjusting the weights
5. The final design
Designing A Learning System
1) Choosing the training experience:
The type of training experience available has an impact on the success or failure of the learner.
Key attributes:
i) Whether the training experience provides direct or indirect feedback regarding the choices made by the performance system.
Eg: Checkers game: the system can learn from direct training examples (individual board states with the correct move for each) or from indirect feedback (move sequences and the final outcome of the game).
Credit assignment: the degree to which each move in the sequence deserves credit or blame for the final outcome.
Learning from direct training examples is easier than learning from indirect feedback.
ii) Degree to which the learner controls the sequence of training examples:
• The learner may rely on the teacher to select board states and provide the correct moves, or
• The learner may itself control the board states it trains on, using indirect training classifications.
Learning Problem
Task T: playing checkers
Performance measure P: percent of games won in the tournament
Training experience E: games played against itself
Choice :
1. Exact type of knowledge to be learned
2. Representation for target knowledge
2: Choosing the Target Function
• Determine exactly what type of knowledge will be learned
• How will it be used by the performance program?
Answer: choose the best legal move from a large search space.
Function ChooseMove maps any board state to the best legal move; the learned function is only an approximation of the ideal target.
successor(b) = the next board state following 'b'
(used to estimate whether a move will help or hurt against the opponent)
4.2 Adjusting the weights
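The weight adjustment can be sketched with a Mitchell-style LMS update for a linear value function; the feature values, training value, and learning rate below are made-up illustrative numbers:

```python
# LMS weight-update sketch for a linear value function:
# V_hat(b) = w0 + w1*x1 + ... + wn*xn, with each weight nudged by
# w_i += eta * (V_train(b) - V_hat(b)) * x_i.

def v_hat(w, x):
    # x carries a leading 1 so w[0] acts as the constant term w0.
    return sum(wi * xi for wi, xi in zip(w, x))

def lms_update(w, x, v_train, eta=0.1):
    error = v_train - v_hat(w, x)          # training value minus estimate
    return [wi + eta * error * xi for wi, xi in zip(w, x)]

w = [0.5, 0.5, 0.5]   # initial weights (w0, w1, w2)
x = [1, 3, 1]         # features of a board state b (leading 1 for the bias)
w = lms_update(w, x, v_train=4.0)
print(w)              # → weights move toward reducing the error of 1.5
```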
• When h is too specific for a positive example, place '?' in the mismatched position (the 3rd here).
• If the example is negative, no revision of h is required.
The 4th training example is a positive example.
The search moves from hypothesis to hypothesis, from the most specific toward the most general, along the partial ordering.
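The FIND-S search described above can be sketched as follows; the weather-style attributes and labels are an illustrative toy dataset, not the slides' example:

```python
# FIND-S sketch: start from the most specific hypothesis and, for each
# positive example, generalize every attribute that disagrees to '?'.
# Negative examples are ignored (no revision of h is required).

def find_s(examples):
    n = len(examples[0][0])
    h = ['0'] * n                      # most specific hypothesis
    for x, label in examples:
        if label != 'yes':
            continue                   # negative example: skip
        for i in range(n):
            if h[i] == '0':
                h[i] = x[i]            # first positive example: copy it
            elif h[i] != x[i]:
                h[i] = '?'             # disagreement: generalize
    return h

data = [
    (('sunny', 'warm', 'normal'), 'yes'),
    (('sunny', 'warm', 'high'),   'yes'),
    (('rainy', 'cold', 'high'),   'no'),
    (('sunny', 'warm', 'high'),   'yes'),
]
print(find_s(data))   # → ['sunny', 'warm', '?']
```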
• No explicit enumeration of members
• It uses the more_general_than partial ordering to maintain a representation of the set of hypotheses H.
• The subset of all hypotheses consistent with the training examples D is called the version space with respect to H and D.
Version space representations
1. List-Then-Eliminate
• Lists all of the members of H
• First, initialize the version space to contain all hypotheses in H
• Eliminate any hypothesis found inconsistent with a training example
• The version space shrinks as training examples are observed
• Ideally only one hypothesis remains that is consistent with all the examples, giving the target concept c
• If the data are insufficient to narrow the space to one hypothesis, output the entire set of hypotheses consistent with the observed data
• It can be applied only when H is finite
• It outputs all consistent hypotheses
Example:
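The following is a hypothetical toy instance of List-Then-Eliminate over a tiny finite H (two attributes, each a concrete value or '?'); the domain and examples are my own illustrative choices:

```python
# LIST-THEN-ELIMINATE sketch: enumerate every hypothesis in a finite H,
# then discard each one found inconsistent with some training example.
# Whatever remains is the version space.

from itertools import product

values = ('sunny', 'rainy')

# H: conjunctions over two attributes, each slot a value or '?'.
H = list(product(values + ('?',), repeat=2))

def consistent(h, x, label):
    # h matches x when every slot is '?' or equals the attribute value;
    # h is consistent when it matches exactly the positive examples.
    match = all(hi == '?' or hi == xi for hi, xi in zip(h, x))
    return match == (label == 'yes')

D = [(('sunny', 'sunny'), 'yes'), (('rainy', 'sunny'), 'no')]

version_space = [h for h in H if all(consistent(h, x, l) for x, l in D)]
print(version_space)   # → [('sunny', 'sunny'), ('sunny', '?')]
```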
2. More Compact Representation
• Represented by its most general and least general (most specific) members
• These form the general and specific boundary sets of the partially ordered hypothesis space.
Boundary sets G and S
Version space representation theorem
• Let X be an arbitrary set of instances
• Let H be a set of Boolean-valued hypotheses defined over X
• Let c : X → {0, 1} be an arbitrary target concept defined over X
• Let D be an arbitrary set of training examples {<x, c(x)>}. For all X, H, c, and D such that S and G are well defined:
VS(H,D) = { h ∈ H | (∃s ∈ S)(∃g ∈ G) g ≥g h ≥g s }
(where ≥g denotes more_general_than_or_equal_to)
Perceptron
• A type of ANN called the perceptron was the first neural network model, designed in 1958
• The perceptron is a linear binary classifier used for supervised learning
• The perceptron learning model is a combination of the McCulloch-Pitts model and the Hebbian learning rule for adjusting weights
• It takes real-valued inputs and calculates a linear combination of these inputs.
• Given <x1, x2, …, xn> as inputs, the output o(x1, x2, …, xn) computed by the perceptron is 1 if w0 + w1x1 + … + wnxn > 0, and −1 otherwise.
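A minimal perceptron training sketch under this definition, using the perceptron rule w_i ← w_i + η(t − o)x_i; the OR data and learning rate are illustrative choices:

```python
# Perceptron sketch: output is the thresholded sign of a linear
# combination of the inputs; misclassified examples adjust the weights
# by w_i += eta * (t - o) * x_i.

def predict(w, x):
    # x carries a leading 1 so w[0] plays the role of the threshold w0.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1

def train_perceptron(data, eta=0.1, epochs=20):
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for x, t in data:
            o = predict(w, x)
            if o != t:                 # misclassified: adjust weights
                w = [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]
    return w

# OR with bipolar targets; each input vector starts with the constant 1.
or_data = [((1, 0, 0), -1), ((1, 0, 1), 1), ((1, 1, 0), 1), ((1, 1, 1), 1)]
w = train_perceptron(or_data)
print([predict(w, x) for x, _ in or_data])   # → [-1, 1, 1, 1]
```

Because OR is linearly separable, the perceptron rule converges to weights that classify all four patterns correctly.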
y = c + b*x
where y = estimated dependent variable score,
c = constant,
b = regression coefficient,
x = score on the independent variable.
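A least-squares sketch of fitting y = c + b*x; the closed-form estimates b = Σ(x−x̄)(y−ȳ)/Σ(x−x̄)² and c = ȳ − b·x̄ are standard, and the data points below are made up:

```python
# Simple linear regression: estimate the constant c and regression
# coefficient b of y = c + b*x by ordinary least squares.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope: covariance of x and y divided by variance of x
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    c = my - b * mx          # intercept from the means
    return c, b

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]            # exactly y = 1 + 2*x
c, b = fit_line(xs, ys)
print(c, b)                  # → 1.0 2.0
```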
• Naming the Variables. There are many names for a
regression’s dependent variable. It may be called an outcome
variable, criterion variable, endogenous variable, or
regressand.