CS AI Lecture Notes 02
Search

8-puzzle as a search problem
• Percept = the current state
• Actions = moves of the blank tile
• Goal test: T/F (does the state match the goal configuration?)
• Example state: [7,2,4,5,B,6,8,3,1] (B = blank)

Complexity notes (b = branching factor, d = depth of the shallowest goal,
C* = optimal solution cost, ε = minimum action cost):
• Breadth-first search expands the shallowest node first; the fringe is a
FIFO queue; time and space O(b^(1+d)).
• Uniform-cost search expands the lowest-cost node from START, i.e. in
order of increasing path cost; complexity O(b^(1+⌊C*/ε⌋)).
Uninformed search strategies
• Uninformed search strategies use only the information available in the
problem definition
• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search
Breadth-first search
• Expand shallowest unexpanded node
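A minimal sketch of breadth-first search in Python, assuming hashable states and a hypothetical successors(state) function (neither is given in the notes); the FIFO fringe is what makes the shallowest node come out first:

from collections import deque

def breadth_first_search(start, is_goal, successors):
    """Expand the shallowest unexpanded node first (FIFO fringe)."""
    fringe = deque([(start, [start])])   # (state, path-to-state) pairs
    visited = {start}
    while fringe:
        state, path = fringe.popleft()   # FIFO: shallowest node first
        if is_goal(state):
            return path
        for nxt in successors(state):    # successors() is assumed, not from the notes
            if nxt not in visited:
                visited.add(nxt)
                fringe.append((nxt, path + [nxt]))
    return None                          # goal not reachable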
A search problem is defined by:
1. States
2. Actions
3. Goal test
4. Path cost
8-puzzle heuristics
Admissible examples: h1(n) = number of misplaced tiles;
h2(n) = total Manhattan distance of the tiles from their goal positions.
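A sketch of the two admissible heuristics, using the flat-list state representation from these notes ([7,2,4,5,B,6,8,3,1], B = blank); the goal layout below is an assumption:

GOAL = [1, 2, 3, 4, 5, 6, 7, 8, 'B']   # assumed goal configuration; 'B' = blank

def h1_misplaced(state, goal=GOAL):
    """h1: number of misplaced tiles (blank not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != g and s != 'B')

def h2_manhattan(state, goal=GOAL):
    """h2: total Manhattan distance of tiles from their goal squares."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 'B':
            continue
        j = goal.index(tile)                          # goal square of this tile
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

print(h1_misplaced([7, 2, 4, 5, 'B', 6, 8, 3, 1]))    # 6 misplaced tiles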
Soccer: shooting at goal
We fit a function h(x) to the observed data, where h() is drawn from the
hypothesis space, e.g. the space of radial basis functions, polynomials, etc.
Polynomial Curve Fitting
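A quick least-squares polynomial fit in Python; the noisy-sine data below is an assumed toy setup, not taken from the notes:

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 10)
t = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # assumed toy data

# Fit y(x, w) = w0 + w1*x + ... + wM*x^M by least squares (M = 3 here)
w = np.polyfit(x, t, deg=3)
print("fitted coefficients:", w)
print("fit at x=0.5:", np.polyval(w, 0.5))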
Rules of Probability
• Sum rule (marginal probability): p(X) = Σ_Y p(X,Y)
• Product rule: p(X,Y) = p(Y|X) p(X)
Example
A disease d occurs in 0.05% of the population. A test is 99% effective in
detecting the disease, but 5% of cases test positive in the absence of d.
10,000 people are tested. How many are expected to test positive?
p(d) = 0.0005 ; p(t|d) = 0.99 ; p(t|~d) = 0.05
p(t) = p(t,d) + p(t,~d) [Sum Rule]
     = p(t|d)p(d) + p(t|~d)p(~d) [Product Rule]
     = 0.99 × 0.0005 + 0.05 × 0.9995 ≈ 0.0505, i.e. about 505 positives
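The same computation in Python, plus the posterior p(d|t) from Bayes' theorem (previewing the next slide); the numbers are the ones given above:

p_d    = 0.0005   # prior: disease prevalence
p_t_d  = 0.99     # p(t|d): sensitivity
p_t_nd = 0.05     # p(t|~d): false-positive rate

# Sum rule + product rule: p(t) = p(t|d) p(d) + p(t|~d) p(~d)
p_t = p_t_d * p_d + p_t_nd * (1 - p_d)
print(f"p(t) = {p_t:.4f}; expected positives in 10,000: {10000 * p_t:.0f}")

# Bayes' theorem: p(d|t) = p(t|d) p(d) / p(t) -- only ~1% of positives are sick
print(f"p(d|t) = {p_t_d * p_d / p_t:.4f}")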
Bayes' Theorem
Posterior probability: p(Y|X) = p(X|Y) p(Y) / p(X)
Bayesian Inference
Example: the fruit picked is an orange (o). What is the probability that
it came from the blue box (B)?
P(B|o) = P(o|B) P(B) / P(o)
[Figure: probability distributions over a discrete x and a continuous x]
Observations are assumed to be independently drawn from the same
distribution (i.i.d.).
Likelihood function
For i.i.d. data X = {x_1, ..., x_N}: p(X|θ) = Π_n p(x_n|θ)

Maximum (Log) Likelihood
Choose θ to maximize ln p(X|θ) = Σ_n ln p(x_n|θ); the log turns the
product into a sum, which is easier to optimize.
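A minimal numerical check of maximum likelihood, assuming i.i.d. Gaussian samples; for a Gaussian the ML solutions are the sample mean and the (biased) sample variance:

import numpy as np

def gaussian_mle(x):
    """ML estimates: mu_ML = sample mean, sigma2_ML = mean squared deviation."""
    mu = x.mean()
    sigma2 = ((x - mu) ** 2).mean()     # divides by N, not N-1 (biased)
    return mu, sigma2

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=10_000)
print(gaussian_mle(x))                  # close to (2.0, 2.25)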
Distributions over multi-dimensional spaces
The Multivariate Gaussian:
N(x|μ,Σ) = (2π)^(-D/2) |Σ|^(-1/2) exp(-(1/2)(x-μ)^T Σ^(-1) (x-μ))
[Figure: contour lines of equal probability density]
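A direct transcription of the density formula above into numpy; mvn_pdf is my name for it:

import numpy as np

def mvn_pdf(x, mu, Sigma):
    """N(x | mu, Sigma) for a D-dimensional Gaussian."""
    D = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)          # (x-mu)^T Sigma^-1 (x-mu)
    norm = (2 * np.pi) ** (D / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm

print(mvn_pdf(np.zeros(2), np.zeros(2), np.eye(2)))     # 1/(2*pi) ~= 0.159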
Multivariate distributions: as dimensionality increases, the bulk of the
data moves away from the center.
Probability of k heads in 3 fair coin flips:

k      0     1     2     3
P(k)   1/8   3/8   3/8   1/8

With probability of success p and failure q = 1 - p, the probability of
k successes in n trials is P(k) = C(n,k) p^k q^(n-k).
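The general formula, checked against the table above (math.comb is in the Python standard library):

from math import comb

def binom_pmf(k, n, p):
    """P(k successes in n trials) = C(n,k) p^k (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Reproduces the 3-coin table: [0.125, 0.375, 0.375, 0.125]
print([binom_pmf(k, 3, 0.5) for k in range(4)])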
Model Selection
Cross-Validation
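A sketch of k-fold cross-validation for choosing a polynomial degree, assuming 1-D numpy arrays x and t; the function names and the use of np.polyfit are my choices, not from the notes:

import numpy as np

def cross_validate(x, t, degrees, k=5, seed=0):
    """Return (best degree, per-degree mean validation error) over k folds."""
    folds = np.array_split(np.random.default_rng(seed).permutation(len(x)), k)
    scores = {}
    for d in degrees:
        errs = []
        for val in folds:
            train = np.setdiff1d(np.arange(len(x)), val)
            w = np.polyfit(x[train], t[train], d)              # fit on k-1 folds
            errs.append(np.mean((np.polyval(w, x[val]) - t[val]) ** 2))
        scores[d] = np.mean(errs)                              # score on held-out fold
    return min(scores, key=scores.get), scores

Each candidate model is trained on k-1 folds and scored on the held-out fold; the degree with the lowest average validation error wins.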
Quantized-Cell Classification
[Figure: flow data; red = 'homogeneous', green = 'annular', blue = 'laminar']
Curse of Dimensionality
Gaussian densities in higher dimensions
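A small numerical check of the claim above, assuming standard Gaussians: in D dimensions most samples lie near radius sqrt(D), far from the mode at the origin:

import numpy as np

rng = np.random.default_rng(0)
for D in (1, 2, 10, 100):
    x = rng.standard_normal((10_000, D))
    r = np.linalg.norm(x, axis=1)       # distance of each sample from the origin
    print(f"D={D:3d}: mean radius {r.mean():6.2f}, sqrt(D) = {D**0.5:5.2f}")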
Regression with Polynomials
Curve Fitting Revisited
Bayesian Inference
Testing for hypothesis H given evidence E:
P(H|E) = P(E|H) P(H) / P(E)
where P(E|H) is the likelihood, P(H) the prior, and P(H|E) the posterior.
Maximum Likelihood
Evidence = t; Hypothesis = poly(x, w)
Under a Gaussian noise model, maximizing the likelihood of t given
poly(x, w) is equivalent to minimizing the sum-of-squares error of the fit.
Guessing game: the Knower thinks of a name; the Guesser asks yes/no
questions.
Stupid approach: guess one name at a time.
Guesser: Is it Valmiki?
Knower: No.
Interestingness ∝ unpredictability
A: 00010001000100010001. . . 0001000100010001000100010001
B: 01110100110100100110. . . 1010111010111011000101100010
C: 00011000001010100000. . . 0010001000010000001000110000
Used in
• coding theory
• statistical physics
• machine learning
Entropy
H[X] = - Σ_x p(x) log2 p(x)
In how many ways can N identical objects be allocated to M bins?
W = N! / (n_1! n_2! ... n_M!); by Stirling's approximation,
(1/N) ln W → -Σ_i p_i ln p_i as N → ∞, which is the entropy.
For a binary Y with P(Y) = 1/1028:
entropy = -(1/1028) log2(1/1028) - (1027/1028) log2(1027/1028)
        ≈ (1/1028) × 10 + ε ≈ 0.01 bits
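A helper that reproduces both entropy numbers from these notes (the 3-coin table and the rare-event example); entropy_bits is my name for it:

import numpy as np

def entropy_bits(p):
    """H[X] = -sum_x p(x) log2 p(x); zero-probability terms contribute 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

print(entropy_bits([1/8, 3/8, 3/8, 1/8]))     # 3 coin flips: ~1.81 bits
print(entropy_bits([1/1028, 1027/1028]))      # rare event: ~0.011 bits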