Lecture 2 Post
Emma Brunskill
Spring 2024
Question for today’s lecture (not for poll): Can we construct algorithms
for computing decision policies such that we can guarantee that, with
additional computation / iterations, we monotonically improve the decision
policy? Do all algorithms satisfy this property?
Yes, it is possible! We will see this today. But not all algorithms do.
Last Time:
Introduction
Components of an agent: model, value, policy
This Time:
Making good decisions given a Markov decision process
Next Time:
Policy evaluation when we don’t have a model of how the world works
For a finite-state MRP, we can express V(s) using a matrix equation. Writing
V = (V(s1), ..., V(sN))^T, R = (R(s1), ..., R(sN))^T, and

    P = [ P(s1|s1)  ···  P(sN|s1) ]
        [ P(s1|s2)  ···  P(sN|s2) ]
        [    ...            ...   ]
        [ P(s1|sN)  ···  P(sN|sN) ]

the value function satisfies

    V = R + γPV

which can be solved directly:

    V − γPV = R
    (I − γP)V = R
    V = (I − γP)^{-1} R
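As a quick illustration, here is a minimal NumPy sketch of this analytic solution; the two-state P, R, and γ below are made-up placeholders, not values from the lecture.

```python
import numpy as np

# Analytic MRP value: V = (I - gamma * P)^{-1} R.
# The 2-state P, R, and gamma here are illustrative placeholders only.
gamma = 0.9
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])   # P[i, j] = P(s_j | s_i); rows sum to 1
R = np.array([1.0, 0.0])     # R[i] = R(s_i)

# Solve (I - gamma * P) V = R instead of forming the inverse explicitly.
V = np.linalg.solve(np.eye(2) - gamma * P, R)
print(V)
```

Solving the linear system is roughly cubic in the number of states, which is one motivation for the iterative dynamic programming approach that follows.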
Dynamic programming
Initialize V_0(s) = 0 for all s
For k = 1 until convergence:
    For all s in S:
        V_k(s) = R(s) + γ Σ_{s′∈S} P(s′|s) V_{k−1}(s′)
Note: Reward is sometimes defined as a function of the current state, or as a function of
the (state, action, next state) tuple. Most frequently in this class, we will assume reward
is a function of state and action.
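A minimal sketch of this iterative evaluation for an MRP, using the same kind of placeholder dynamics as above; the tolerance and example numbers are arbitrary choices, not lecture values.

```python
import numpy as np

def mrp_value_dp(P, R, gamma, tol=1e-8):
    """Iterative MRP evaluation: V_k = R + gamma * P @ V_{k-1} until convergence."""
    V = np.zeros(len(R))                     # V_0(s) = 0 for all s
    while True:
        V_new = R + gamma * P @ V            # one synchronous backup over all states
        if np.max(np.abs(V_new - V)) < tol:  # stop once the update barely changes V
            return V_new
        V = V_new

# Placeholder two-state dynamics (not lecture values).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
R = np.array([1.0, 0.0])
print(mrp_value_dp(P, R, gamma=0.9))
```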
Example: Mars Rover MDP

P(s′|s, a1) =
    [ 1 0 0 0 0 0 0 ]
    [ 1 0 0 0 0 0 0 ]
    [ 0 1 0 0 0 0 0 ]
    [ 0 0 1 0 0 0 0 ]
    [ 0 0 0 1 0 0 0 ]
    [ 0 0 0 0 1 0 0 ]
    [ 0 0 0 0 0 1 0 ]

P(s′|s, a2) =
    [ 0 1 0 0 0 0 0 ]
    [ 0 0 1 0 0 0 0 ]
    [ 0 0 0 1 0 0 0 ]
    [ 0 0 0 0 1 0 0 ]
    [ 0 0 0 0 0 1 0 ]
    [ 0 0 0 0 0 0 1 ]
    [ 0 0 0 0 0 0 1 ]

2 deterministic actions
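For concreteness, these two deterministic transition matrices can be encoded directly in NumPy (state indices 0–6 standing in for s1–s7); the array layout P[a, s, s′] is my convention, not something fixed by the lecture.

```python
import numpy as np

# P[a, s, s'] = P(s' | s, a) for the 7-state Mars rover MDP above.
# a = 0 ("a1") deterministically moves toward s1; a = 1 ("a2") moves toward s7.
n_states = 7
P = np.zeros((2, n_states, n_states))
for s in range(n_states):
    P[0, s, max(s - 1, 0)] = 1.0              # a1: step left, stay if already at s1
    P[1, s, min(s + 1, n_states - 1)] = 1.0   # a2: step right, stay if already at s7
```

The same arrays can be fed to the policy iteration and value iteration sketches later in this post.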
MDP Policies
Policy iteration:
Set i = 0
Initialize π_0(s) randomly for all states s
While i == 0 or ∥π_i − π_{i−1}∥_1 > 0 (the L1-norm measures whether the policy
changed for any state):
    V^{π_i} ← policy evaluation of π_i (the MDP value function of policy π_i)
    π_{i+1} ← policy improvement on π_i
    i = i + 1
Policy improvement computes the state-action value of the current policy π_i,

    Q^{π_i}(s, a) = R(s, a) + γ Σ_{s′∈S} P(s′|s, a) V^{π_i}(s′)

and then takes the greedy action in every state: π_{i+1}(s) = argmax_a Q^{π_i}(s, a).
Since π_i(s) is always among the actions in the max,
    max_a Q^{π_i}(s, a) ≥ R(s, π_i(s)) + γ Σ_{s′∈S} P(s′|s, π_i(s)) V^{π_i}(s′) = V^{π_i}(s)
Suppose we take π_{i+1}(s) for one action and then follow π_i forever.
Our expected sum of rewards is then at least as good as if we had always
followed π_i.
But the new proposed policy is to always follow π_{i+1}...
Definition: V^{π_1} ≥ V^{π_2} means V^{π_1}(s) ≥ V^{π_2}(s) for all s ∈ S.

Proposition: V^{π_{i+1}} ≥ V^{π_i}, with strict inequality if π_i is suboptimal,
where π_{i+1} is the new policy we get from policy improvement on π_i.
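Putting the pieces together, here is a compact sketch of tabular policy iteration, assuming transition and reward arrays shaped P[a, s, s′] and R[s, a] as in the Mars rover encoding above; the function names and the analytic policy-evaluation step are my choices, not prescribed by the lecture.

```python
import numpy as np

def policy_evaluation(P, R, policy, gamma):
    """Exactly evaluate a deterministic tabular policy: V^pi = (I - gamma * P_pi)^{-1} R_pi."""
    n_states = R.shape[0]
    idx = np.arange(n_states)
    P_pi = P[policy, idx, :]          # P_pi[s, s'] = P(s' | s, pi(s))
    R_pi = R[idx, policy]             # R_pi[s]     = R(s, pi(s))
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)

def policy_iteration(P, R, gamma):
    """Alternate policy evaluation and greedy policy improvement until the policy is stable."""
    n_states = R.shape[0]
    policy = np.zeros(n_states, dtype=int)             # arbitrary initial policy
    while True:
        V = policy_evaluation(P, R, policy, gamma)
        # Q^pi(s, a) = R(s, a) + gamma * sum_s' P(s'|s, a) V^pi(s')
        Q = R + gamma * np.einsum("asn,n->sa", P, V)
        new_policy = Q.argmax(axis=1)                   # policy improvement (greedy)
        if np.array_equal(new_policy, policy):          # no state changed its action
            return policy, V
        policy = new_policy
```

The loop stops when the greedy policy no longer changes in any state, which is exactly the ∥π_i − π_{i−1}∥_1 > 0 test above; by the proposition, each iteration can only improve (or preserve) the value of the policy.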
Value iteration:
Set k = 1
Initialize V_0(s) = 0 for all states s
Loop until convergence (e.g., ∥V_{k+1} − V_k∥_∞ ≤ ϵ):
    For each state s:
        V_{k+1}(s) = max_a [ R(s, a) + γ Σ_{s′∈S} P(s′|s, a) V_k(s′) ]
In operator notation, V_{k+1} = BV_k, where B is the Bellman backup operator.
" #
X
′ ′
πk+1 (s) = arg max R(s, a) + γ P(s |s, a)Vk (s )
a
s ′ ∈S
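A matching sketch of tabular value iteration under the same assumed P[a, s, s′] / R[s, a] layout; the stopping threshold ϵ is an arbitrary choice.

```python
import numpy as np

def value_iteration(P, R, gamma, eps=1e-8):
    """Tabular value iteration: V_{k+1}(s) = max_a [R(s,a) + gamma * sum_s' P(s'|s,a) V_k(s')]."""
    V = np.zeros(R.shape[0])                             # V_0(s) = 0 for all s
    while True:
        Q = R + gamma * np.einsum("asn,n->sa", P, V)     # Bellman backup for every (s, a)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) <= eps:             # ||V_{k+1} - V_k||_inf <= eps
            return V_new, Q.argmax(axis=1)               # value and extracted greedy policy
        V = V_new
```

Each pass of the loop applies the Bellman backup B once; convergence follows from the contraction argument below.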
To do policy improvement (as in policy iteration), the backup instead uses V^{π_k},
the value of the current policy, rather than the current value-iteration estimate V_k:

    π_{k+1}(s) = argmax_a [ R(s, a) + γ Σ_{s′∈S} P(s′|s, a) V^{π_k}(s′) ]
The Bellman backup V_{k+1} = BV_k is a contraction when γ < 1: for any two value
functions V_k and V_j,

∥BV_k − BV_j∥ = max_a [ R(s, a) + γ Σ_{s′∈S} P(s′|s, a) V_k(s′) ] − max_{a′} [ R(s, a′) + γ Σ_{s′∈S} P(s′|s, a′) V_j(s′) ]
              ≤ max_a [ R(s, a) + γ Σ_{s′∈S} P(s′|s, a) V_k(s′) − R(s, a) − γ Σ_{s′∈S} P(s′|s, a) V_j(s′) ]
              = max_a γ Σ_{s′∈S} P(s′|s, a) (V_k(s′) − V_j(s′))
              ≤ max_a γ Σ_{s′∈S} P(s′|s, a) ∥V_k − V_j∥
              = max_a γ ∥V_k − V_j∥ Σ_{s′∈S} P(s′|s, a)
              = γ ∥V_k − V_j∥

(here ∥·∥ is the infinity norm, and the last equality uses Σ_{s′} P(s′|s, a) = 1)
Note: Even if all inequalities are equalities, this is still a contraction if γ < 1
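As a numerical sanity check (not a proof), one can draw a random MDP and random value functions and verify the bound ∥BV_k − BV_j∥_∞ ≤ γ∥V_k − V_j∥_∞; everything below is synthetic and illustrative.

```python
import numpy as np

def bellman_backup(P, R, V, gamma):
    """Apply B to V: (BV)(s) = max_a [R(s,a) + gamma * sum_s' P(s'|s,a) V(s')]."""
    return (R + gamma * np.einsum("asn,n->sa", P, V)).max(axis=1)

# Random MDP and random value functions (illustrative only).
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)                 # normalize rows into distributions
R = rng.random((n_states, n_actions))
Vk, Vj = rng.random(n_states), rng.random(n_states)

lhs = np.max(np.abs(bellman_backup(P, R, Vk, gamma) - bellman_backup(P, R, Vj, gamma)))
rhs = gamma * np.max(np.abs(Vk - Vj))
assert lhs <= rhs + 1e-12                         # ||BVk - BVj|| <= gamma * ||Vk - Vj||
print(lhs, rhs)
```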
Value iteration for a finite horizon H:
Set k = 1
Initialize V_0(s) = 0 for all states s
Loop until k == H:
    For each state s:
        V_{k+1}(s) = max_a [ R(s, a) + γ Σ_{s′∈S} P(s′|s, a) V_k(s′) ]
        π_{k+1}(s) = argmax_a [ R(s, a) + γ Σ_{s′∈S} P(s′|s, a) V_k(s′) ]
    k = k + 1
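A short sketch of this finite-horizon variant: it applies the same Bellman backup exactly H times and keeps a separate greedy policy for each horizon (the optimal policy is generally non-stationary when the horizon is finite). The array layout is the same assumed P[a, s, s′] / R[s, a] as before.

```python
import numpy as np

def finite_horizon_value_iteration(P, R, gamma, H):
    """Run exactly H Bellman backups; return the value and greedy policy for each horizon k."""
    V = np.zeros(R.shape[0])                         # V_0(s) = 0 for all s
    values, policies = [], []
    for k in range(1, H + 1):
        Q = R + gamma * np.einsum("asn,n->sa", P, V)
        V = Q.max(axis=1)                            # optimal value with k steps to go
        values.append(V)
        policies.append(Q.argmax(axis=1))            # optimal action with k steps to go
    return values, policies
```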
Value iteration:
    Compute the optimal value for horizon = k
    Note this can be used to compute the optimal policy for horizon = k
    Increment k
Policy iteration:
    Compute the infinite-horizon value of a policy
    Use it to select another (better) policy
    Closely related to a very popular method in RL: policy gradient
For reference, here is a stochastic version of the seven-state transition dynamics,
in which the agent moves to each neighboring state with probability 0.4 and otherwise
stays put:

P = [ 0.6  0.4  0    0    0    0    0   ]
    [ 0.4  0.2  0.4  0    0    0    0   ]
    [ 0    0.4  0.2  0.4  0    0    0   ]
    [ 0    0    0.4  0.2  0.4  0    0   ]
    [ 0    0    0    0.4  0.2  0.4  0   ]
    [ 0    0    0    0    0.4  0.2  0.4 ]
    [ 0    0    0    0    0    0.4  0.6 ]