
Course : Artificial Intelligence (COMP6065)

Non-official Slides

Probabilistic Reasoning over Time

Session 15

Revised by Williem, S. Kom., Ph.D.


1
Learning Outcomes
At the end of this session, students will be able to:
• LO 5: Apply various techniques to an agent when acting under
uncertainty

2
Outline
1. Time and Uncertainty
2. Inference in Temporal Model
3. Hidden Markov Models
4. Dynamic Bayesian Networks
5. Summary

3
Time and Uncertainty
• How do we estimate the probability of a changing random
variable?
– A broken car remains broken throughout the diagnosis
process (static)
– A diabetic patient, on the other hand, has changing
evidence (blood sugar, insulin doses, etc.) (dynamic)
• We view the world as a series of snapshots (time slices)
– Xt denotes the set of state variables at time t
– Et denotes the observable evidence at time t

4
Time and Uncertainty
• Simple example

– We work in a secret underground installation

– We want to know whether it is raining outside today

– Our only contact with the outside world is seeing whether the
director comes in with or without an umbrella in the morning

– Xt = Raint (Rt) → True/False

– Et = Umbrellat (Ut) → True/False

5
Time and Uncertainty
• How do we construct the Bayesian network? What is the transition
model?

– The full model would be too complex, so we need an assumption
(the Markov assumption)

– The current state depends on only a finite fixed number of
previous states (Markov chains)

[Diagram: first-order vs. second-order Markov chain structure]

6
Time and Uncertainty
[Diagram: first-order and second-order Markov chains]

• First-order Markov chain: P(Xt | Xt-1)

• Second-order Markov chain: P(Xt | Xt-2,Xt-1)

• Sensor Markov assumption: P(Et | X0:t,E0:t-1) = P(Et | Xt)

• Stationary process: the transition model and sensor model are fixed for all t

7
Time and Uncertainty
• The complete joint distribution is the combination of the
transition model and sensor model

Bayesian network for umbrella world 8
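The joint distribution can be evaluated term by term. A minimal sketch for the umbrella world, assuming the textbook's standard parameters (P(Rt | Rt-1) = 0.7, P(Ut | Rt) = 0.9, P(Ut | ¬Rt) = 0.2, prior 0.5), since the slide's own numbers are only in the image:

```python
def trans(prev_rain, rain):
    """Transition model P(R_t | R_{t-1}); 0.7/0.3 are assumed textbook values."""
    p = 0.7 if prev_rain else 0.3
    return p if rain else 1 - p

def sensor(rain, umbrella):
    """Sensor model P(U_t | R_t); 0.9/0.2 are assumed textbook values."""
    p = 0.9 if rain else 0.2
    return p if umbrella else 1 - p

def joint(rains, umbrellas):
    """P(x0, x1..xt, e1..et) = P(x0) * prod of P(xi | xi-1) * P(ei | xi)."""
    p = 0.5  # prior over the initial state x0 (first element of rains)
    for i in range(1, len(rains)):
        p *= trans(rains[i - 1], rains[i]) * sensor(rains[i], umbrellas[i - 1])
    return p

# P(r0, r1, u1) = 0.5 * 0.7 * 0.9 = 0.315
print(joint([True, True], [True]))
```

Each state variable contributes one transition factor and each evidence variable one sensor factor, exactly mirroring the product formula above.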


Markov Chain
• Suppose that the possible states of the process are given by the
finite set S = {1, 2, . . . , n}.
• Let st ∈ S denote the state occupied by the process in period
t ∈ {0, 1, 2, . . .}. Further suppose that the process is Markov:

P(st+1 = j | st = i, st-1, . . . , s0) = P(st+1 = j | st = i) = pij

• The parameters of a Markov chain process can thus be summarized
by a transition matrix P = [pij], whose entry in row i, column j
is the probability of moving from state i to state j.

9
Markov Chain
• A Markov process has 3 states, with the transition matrix:

[Figure: 3 × 3 transition matrix and its transition diagram]

10
Markov Chain
• Example
– A child with a lower-class parent has a 60% chance of
remaining in the lower class, a 40% chance of rising to the
middle class, and no chance of reaching the upper class. A
child with a middle-class parent has a 30% chance of falling
to the lower class, a 40% chance of remaining middle class,
and a 30% chance of rising to the upper class. Finally, a child
with an upper-class parent has no chance of falling to the
lower class, a 70% chance of falling to the middle class,
and a 30% chance of remaining in the upper class.
– Assume that 20% of the population belongs to the lower
class, 30% to the middle class, and 50% to the upper class.

11
Markov Chain
• Solution
• Transition matrix (rows: parent's class; columns: child's class;
order: lower, middle, upper):

    | 0.6 0.4 0.0 |
P = | 0.3 0.4 0.3 |
    | 0.0 0.7 0.3 |

• [Figure: transition diagram]

• Initial condition π(0) = (0.2, 0.3, 0.5), reflecting the initial
class distribution

12
Markov Chain
• Solution

• To illustrate, consider the population dynamics over the next 4
generations, obtained by repeatedly applying π(t+1) = π(t)P

13
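The generation-by-generation dynamics can be computed by repeated matrix multiplication, using the transition matrix and initial distribution stated in the example:

```python
import numpy as np

# Rows: parent's class; columns: child's class (lower, middle, upper).
P = np.array([
    [0.6, 0.4, 0.0],   # lower-class parent
    [0.3, 0.4, 0.3],   # middle-class parent
    [0.0, 0.7, 0.3],   # upper-class parent
])
pi = np.array([0.2, 0.3, 0.5])  # initial population distribution pi(0)

for gen in range(1, 5):
    pi = pi @ P                  # pi(t+1) = pi(t) P
    print(f"generation {gen}: {pi.round(4)}")
```

After one generation the distribution is (0.21, 0.55, 0.24); iterating further shows the population settling toward the chain's stationary distribution.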
Inference in Temporal Model
• Inference tasks:

– Filtering: computing the belief state

• Current state estimation P(Xt | e1:t)

– Prediction: computing the posterior distribution of future state

• Future state prediction P(Xt+k | e1:t)

– Smoothing: computing the posterior distribution of past state

• Past state analysis P(Xk | e1:t)

– Most likely explanation: finding the most likely state sequence P(X1:t | e1:t)

14
Inference in Temporal Model
• Filtering and prediction

– Recursively update the distribution by combining a forward
message from the previous state with the new evidence:

P(Xt+1 | e1:t+1) = α P(et+1 | Xt+1) Σxt P(Xt+1 | xt) P(xt | e1:t)

– Prediction can be seen simply as filtering without the
addition of new evidence

15
Inference in Temporal Model
• Filtering

16
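The filtering derivation on this slide is an image; a minimal sketch of one forward update, again assuming the textbook's umbrella-world parameters (transition 0.7, sensor 0.9/0.2, prior 0.5):

```python
def forward(f, umbrella):
    """One filtering step: predict with the transition model, then
    weight by the sensor model and normalize (the alpha in the formula)."""
    # f = (P(rain | evidence so far), P(no rain | evidence so far))
    p_rain = f[0] * 0.7 + f[1] * 0.3          # predict: P(R_{t+1} | e_{1:t})
    p_dry = 1.0 - p_rain
    like_rain = 0.9 if umbrella else 0.1      # P(e_{t+1} | rain)
    like_dry = 0.2 if umbrella else 0.8       # P(e_{t+1} | no rain)
    unnorm = (like_rain * p_rain, like_dry * p_dry)
    z = sum(unnorm)                           # the alpha normalizer
    return (unnorm[0] / z, unnorm[1] / z)

f = (0.5, 0.5)                                # prior P(R0)
for u in (True, True):                        # umbrella seen on days 1 and 2
    f = forward(f, u)
print(round(f[0], 3))  # P(rain on day 2 | u1, u2) = 0.883
```

After day 1 the belief is about 0.818; after a second umbrella day it rises to about 0.883, matching the textbook's worked example.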
Inference in Temporal Model
• Smoothing

– Process of computing the distribution over past states
given evidence up to the present

– The computation can be split into two parts, a forward
message and a backward message:

P(Xk | e1:t) = α f1:k × bk+1:t

where f1:k is the forward message and bk+1:t the backward message
17
Inference in Temporal Model
• Smoothing

– The backward message is computed recursively, passing evidence
back from the end of the sequence:

bk+1:t = Σxk+1 P(ek+1 | xk+1) P(xk+1 | Xk) bk+2:t(xk+1)

18
Inference in Temporal Model
• Smoothing

19
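As a worked instance of the forward-backward split (textbook umbrella-world parameters assumed), smoothing the day-1 estimate given umbrellas on days 1 and 2:

```python
# Forward message for day 1 (from the filtering step): roughly <0.818, 0.182>.
f1 = (0.818, 0.182)

# Backward message b_{2:2}(r1) = sum over r2 of P(u2 | r2) * P(r2 | r1),
# with u2 = True (umbrella seen on day 2).
b_rain = 0.9 * 0.7 + 0.2 * 0.3   # r1 = rain
b_dry = 0.9 * 0.3 + 0.2 * 0.7    # r1 = no rain

# Smoothed estimate: P(R1 | u1, u2) = alpha * f_{1:1} x b_{2:2}
unnorm = (f1[0] * b_rain, f1[1] * b_dry)
z = sum(unnorm)
print(f"P(rain on day 1 | u1, u2) = {unnorm[0] / z:.3f}")  # about 0.883
```

The smoothed day-1 estimate (about 0.883) is higher than the filtered one (0.818): the second umbrella observation makes rain on day 1 more plausible in hindsight.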
Inference in Temporal Model
• Most likely explanation

– Suppose that [true, true, false, true, true] is the umbrella


sequence for the security guard’s first five days on the job

– What is the weather sequence most likely to explain this?

• Here, we want to find the state sequence with the highest
probability!

• How?

20
Inference in Temporal Model
• Most likely explanation

– There is a recursive relationship between the most likely
paths to each state xt+1 and the most likely paths to each state
xt (Markov property)

– Thus, we can write the relationship as

max over x1..xt of P(x1, . . . , xt, Xt+1 | e1:t+1) =
α P(et+1 | Xt+1) maxxt [ P(Xt+1 | xt) max over x1..xt-1 of
P(x1, . . . , xt-1, xt | e1:t) ]
21
Inference in Temporal Model
• Most likely explanation

22
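The recursion above is the Viterbi algorithm. A self-contained sketch for the umbrella world (textbook parameters assumed) that recovers the most likely weather sequence for the five-day umbrella observations:

```python
def viterbi(umbrellas):
    """Most likely rain sequence for the umbrella observations.
    States: True = rain, False = dry; textbook parameters assumed."""
    trans = {(True, True): 0.7, (True, False): 0.3,
             (False, True): 0.3, (False, False): 0.7}

    def sensor(rain, u):
        p = 0.9 if rain else 0.2
        return p if u else 1 - p

    # m[s] = probability of the best path ending in state s.
    m = {s: 0.5 * sensor(s, umbrellas[0]) for s in (True, False)}
    back = []                        # back[t][s] = best predecessor of s
    for u in umbrellas[1:]:
        prev = {s: max((True, False), key=lambda x: m[x] * trans[(x, s)])
                for s in (True, False)}
        m = {s: sensor(s, u) * m[prev[s]] * trans[(prev[s], s)]
             for s in (True, False)}
        back.append(prev)

    # Backtrack from the best final state.
    state = max((True, False), key=lambda s: m[s])
    path = [state]
    for prev in reversed(back):
        state = prev[state]
        path.append(state)
    return list(reversed(path))

print(viterbi([True, True, False, True, True]))
# most likely sequence: rain, rain, dry, rain, rain
```

Note that the algorithm keeps only the single best path into each state (a max), whereas filtering sums over all paths.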
Hidden Markov Models
• Simple Markov models → the observer knows the state directly

• Hidden Markov models → the observer knows the state only
indirectly (through an output state or observed data)

• The umbrella world is an HMM, since the security guard knows the
rain state only through whether the director carries an umbrella

23
Hidden Markov Models

[Figure: HMM structure. Hidden states H1, H2, . . . , HL form a
Markov chain; each Hi emits an observed datum Xi]
24
Hidden Markov Models
[Figure: coin-flipping HMM. Hidden states: Fair and Loaded; each
state persists with transition probability 0.9 and switches with
probability 0.1. Emission probabilities: the fair coin gives Head or
Tail with probability 1/2 each; the loaded coin gives Head with
probability 3/4 and Tail with probability 1/4. Observed data:
the Head/Tail sequence X1 . . . XL]
25
Hidden Markov Models

[Figure: robot localization example. We don't know the robot's
location, but we know the output of its sensors]

26
Dynamic Bayesian Network
• A dynamic Bayesian network or DBN is a Bayesian network
that represents a temporal probability model

– Example: The umbrella world

• Every HMM is a DBN with a single state variable and a single
evidence variable; conversely, every discrete-variable DBN can be
represented as an HMM

• The relation between HMMs and DBNs is analogous to the relation
between Bayesian networks and full tabulated joint distributions:
the DBN is usually the more compact representation

27
Dynamic Bayesian Network
• To construct a DBN, we must specify three kinds of
information:

– The prior distribution over the state variables P(X0)

– The transition model P(Xt+1 | Xt)

– The sensor model P(Et | Xt)

28
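These three pieces of information can be written down directly as conditional probability tables. A sketch for the umbrella world, assuming the textbook's parameters, with ancestral sampling from the unrolled network:

```python
import random

# The three pieces that specify the umbrella-world DBN (assumed values):
prior = {True: 0.5, False: 0.5}          # P(X0): rain on day 0?
transition = {True: 0.7, False: 0.3}     # P(Rain_{t+1} = True | Rain_t)
sensor = {True: 0.9, False: 0.2}         # P(Umbrella_t = True | Rain_t)

def sample_trajectory(steps, rng):
    """Ancestral sampling from the unrolled DBN: every slice reuses the
    same transition and sensor CPTs (the stationarity assumption)."""
    rain = rng.random() < prior[True]
    trajectory = []
    for _ in range(steps):
        rain = rng.random() < transition[rain]       # sample next state
        umbrella = rng.random() < sensor[rain]       # sample its evidence
        trajectory.append((rain, umbrella))
    return trajectory

print(sample_trajectory(3, random.Random(0)))
```

Unrolling the network for t steps simply replicates these two CPTs at every time slice, which is why the three tables above are a complete specification.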
Dynamic Bayesian Network
• Example

– Monitoring a battery-powered robot moving in the X-Y plane

– State variables:

• Position and velocity of the robot

• Measured position (sensor reading)

• Battery level

• Measured battery charge level (meter reading)

– Describe the relation between these variables!

29
Dynamic Bayesian Network

Inference in DBNs: Unrolling a dynamic Bayesian network

30
Summary
• The changing state of the world is handled by using a set of
random variables

• Representations can be designed to satisfy the Markov


property, so that the future is independent of the past given
the present

• The principal inference tasks in temporal models are filtering,


prediction, smoothing and computing the most likely
explanation

31
References
• Stuart Russell, Peter Norvig. 2010. Artificial Intelligence: A
Modern Approach. Pearson Education, New Jersey.
ISBN: 9780132071482

• http://aima.cs.berkeley.edu

32
