
Introduction to reinforcement learning

Contributors:
• Damien Ernst (dernst@uliege.be),
• Arthur Louette (arthur.uliege@uliege.be)
February 6, 2024
Outline

Introduction to the reinforcement learning framework

Characterization of RL problems

Goal of reinforcement learning

RL problems with small state-action spaces

RL problems with large state-action spaces

Convergence of Q-learning

1/60
Introduction to the reinforcement learning
framework
Artificial autonomous intelligent agent: formal definition

Definition (Agent)
An agent is anything that is capable of acting upon information it perceives.

Definition (Intelligent agent)


An intelligent agent is an agent capable of making decisions about how it acts
based on experience, that is, of learning decisions from experience.

Definition (Autonomous intelligent agent)


An autonomous intelligent agent is an intelligent agent that is free to choose
between different actions.

2/60
Definition (Artificial autonomous intelligent agent)
An artificial autonomous intelligent agent is anything we create that is capable
of actions based on information it perceives, its own experience, and its own
decisions about which actions to perform.

Since “artificial autonomous intelligent agent” is quite a mouthful, we follow the
convention of using “intelligent agent” or “autonomous agent” for short.

3/60
Application of intelligent agents

Intelligent agents are applied in a variety of areas: project management,


electronic commerce, robotics, information retrieval, military, networking,
planning and scheduling, etc.
Examples:
• A predictive maintenance agent for industrial equipment that analyzes
sensor data to predict failures before they happen, scheduling maintenance
only when needed and reducing downtime and costs Leroy et al. [2023].
• An autonomous delivery drone system that optimizes delivery routes and
times based on traffic, weather conditions, and customer availability,
learning from each delivery to improve efficiency and customer satisfaction.
• An alignment agent fine-tunes LLMs like ChatGPT to better match user
intentions. It learns from feedback to improve question interpretation and
ensure accurate, relevant responses. See lecture 11 on RL and LLMs.
• A robotic harvesting assistant that navigates through orchards, using visual
recognition to identify ripe fruits and vegetables. It gently picks produce
with precision, minimizing damage and waste. By learning from each harvest
which conditions lead to the best yield and quality, it helps farmers optimize
picking schedules. See lecture 10 on robotic RL.
4/60
Machine learning and reinforcement learning: definitions

Definition (Machine learning)


Machine learning is a broad subfield of artificial intelligence that is concerned with
the development of algorithms and techniques that allow computers to “learn”.

Definition (Reinforcement Learning)


Reinforcement Learning (RL in short) refers to a class of problems in machine
learning which postulate an autonomous agent exploring an environment in
which the agent perceives information about its current state and takes actions.
The environment, in return, provides a reward signal (which can be positive or
negative). The agent's objective is to maximize the (expected) cumulative
reward signal over the course of the interaction.

5/60
Definition (The policy)
The policy of an agent determines the way the agent selects its action based on
the information it has. A policy can be either deterministic or stochastic and
either stationary or history-dependent.

Research in reinforcement learning aims at designing policies which lead to large


(expected) cumulative reward.

Where does the intelligence come from? The policies process the information in an
“intelligent way” to select “good actions”.

6/60
An RL agent interacting with its environment

[Figure] Source: Kaufmann et al. [2023]
7/60
Demo

https://www.youtube.com/watch?v=EtRXay2kqtc

8/60
Some generic difficulties with designing intelligent agents

• Inference problem. The environment dynamics and the mechanism behind the
reward signal are (partially) unknown. The policies need to be able to infer “good
control actions” from the information the agent has gathered through interaction
with the system.

• Computational complexity. The policy must be able to process the history of
observations within a limited amount of computing time and memory.

• Tradeoff between exploration and exploitation.2 To obtain a lot of reward, a


reinforcement learning agent must prefer actions that it has tried in the past and
found to be effective in producing reward. But to discover such actions, it has to
try actions that it has not selected before.

2
May be seen as a subproblem of the general inference problem. This problem is often
referred to in the “classical control theory” as the dual control problem.

9/60
The agent has to exploit what it already knows in order to obtain reward, but it
also has to explore in order to make better action selections in the future. The
dilemma is that neither exploration nor exploitation can be pursued exclusively
without failing at the task. The agent must try a variety of actions and
progressively favor those that appear to be best. On a stochastic task, each
action must be tried many times to gain a reliable estimate of its expected reward.

• Exploring safely the environment. During an exploration phase (more


generally, any phase of the agent’s interaction with its environment), the agent
must avoid reaching unacceptable states (e.g., states that may endanger its own
integrity). By associating rewards of −∞ with those states, exploring safely can
be assimilated to a problem of exploration-exploitation.

• Adversarial environment. The environment may be adversarial. In such a


context, one or several other players seek to adopt strategies that oppose the
interests of the RL agent.

10/60
Characterization of RL problems
Different characterizations of RL problems

• Stochastic (e.g., st+1 = f (st , at , wt ) where the random disturbance wt is


drawn according to the conditional probability distribution Pw (·|st , at ))
versus deterministic (e.g., st+1 = f (st , at ))

• Partial observability versus full observability. The environment is said to be


partially (fully) observable if the signal ot describes partially (fully) the
environment’s state st at time t.

• Time-invariant (e.g., st+1 = f (st , at , wt ) with wt ∼ Pw (·|st , at )) versus
time-variant (e.g., st+1 = f (st , at , wt , t)) dynamics.

• Continuous (e.g., ṡ = f (s, a, w)) versus discrete dynamics (e.g.,


st+1 = f (st , at , wt )).

• Finite time versus infinite time of interaction.

11/60
• Multi-agent framework versus single-agent framework. In a multi-agent
framework the environment may be itself composed of (intelligent) agents. A
multi-agent framework can often be assimilated to a single-agent framework
by considering that the internal states of the other agents are unobservable
variables. Game theory and, more particularly, the theory of learning in
games study situations where various intelligent agents interact with each
other.

• Single state versus multi-state environment. In a single-state environment, the
computation of an optimal policy for the agent often reduces to the computation
of the maximum of a stochastic function (e.g., find a∗ ∈ arg max_{a∈A} E_{w∼Pw (·|a)} [r(a, w)]).

• Multi-objective reinforcement learning agent (reinforcement learning signal


can be multi-dimensional) versus single-objective RL agent.

• Risk-averse reinforcement learning agent. The goal of the agent is no longer to
maximize the expected cumulative reward but to maximize the lowest cumulative
reward it could possibly obtain.

12/60
Characterization of the RL problem adopted in this class

• Dynamics of the environment:

st+1 = f (st , at , wt ) t = 0, 1, 2 . . .

where for all t, the state st is an element of the state space S, the action at is an
element of the action space A and the random disturbance wt is an element of
the disturbance space W . The disturbance wt is generated by the time-invariant
conditional probability distribution Pw (·|s, a).
• Reward signal:
The reward function r(s, a, w) is supposed to be bounded below by 0 and above
by a constant Br ≥ 0. To the transition from t to t + 1 is associated the reward
signal γ^t rt = γ^t r(st , at , wt ), where γ ∈ [0, 1[ is a decay (discount) factor.

13/60
Definition (Cumulative reward signal)
Let ht ∈ H be the trajectory from time 0 to t in the combined state, action,
reward spaces: ht = (s0 , a0 , r0 , s1 , a1 , r1 , . . . , at−1 , rt−1 , st ). Let π ∈ Π be
a stochastic policy such that π : H × A → [0, 1] and let us denote by J^π(s) the
expected return of the policy π (or expected cumulative reward signal) when the
system starts from s0 = s:

J^π(s) = lim_{T→∞} E[ Σ_{t=0}^{T} γ^t r(st , at ∼ π(ht , ·), wt ) | s0 = s ]

• Information available:
The agent does not know f , r and Pw . The only information it has on these
three elements is the information contained in ht .

14/60
Exercise 1: Computation of the cumulative reward signal

Compute the cumulative reward signal J π (1) with policy π(s) = 1, ∀s ∈ S.


The state space and action space are respectively defined by:

S = {s ∈ {1, 2, 3, 4}}, A = {a ∈ {−1, 1}}

[Diagram: a chain of states s = 1, 2, 3, 4; the action +1 moves one state to the right, −1 one state to the left.]

The reward function and the dynamics are the following:

f (s, a) = min(max(s + a, 1), 4)

r(s, a) = 1 if f (s, a) = 4, and 0 otherwise

15/60
Exercise 1: Solution

Solution:

J^π(1) = lim_{T→∞} [r(1, 1) + γ r(2, 1) + γ^2 r(3, 1) + γ^3 r(4, 1) + ... + γ^T r(4, 1)]
        = lim_{T→∞} [γ^2 + ... + γ^T]
        = γ^2 lim_{T→∞} [1 + ... + γ^{T−2}]
        = γ^2 / (1 − γ)

Reminder: lim_{T→∞} Σ_{t=0}^{T} γ^t = 1 / (1 − γ)
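As a quick numerical check (not part of the original exercise), the short Python sketch below rolls out the chain under the policy π(s) = 1 for a long horizon and compares the truncated sum with the closed form γ^2/(1 − γ); the value γ = 0.9 is an arbitrary choice of ours.

```python
# Numerical sanity check of Exercise 1 (gamma = 0.9 is an arbitrary choice).
gamma = 0.9

def f(s, a):
    return min(max(s + a, 1), 4)

def r(s, a):
    return 1.0 if f(s, a) == 4 else 0.0

J, s, discount = 0.0, 1, 1.0
for t in range(1000):            # 1000 terms is plenty for gamma = 0.9
    J += discount * r(s, 1)      # always apply action a = 1
    s = f(s, 1)
    discount *= gamma

print(J, gamma**2 / (1 - gamma))  # both print approximately 8.1
```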

16/60
Goal of reinforcement learning
Goal of reinforcement learning

Theorem (Optimal policy existence)


Let π∗ ∈ Π be a policy such that ∀s ∈ S,

J^{π∗}(s) = max_{π∈Π} J^π(s) (1)

Under some mild assumptions3 on f , r and Pw , such a policy π ∗ indeed exists.



• In reinforcement learning, we want to build policies π̂∗ such that J^{π̂∗}
is as close as possible (according to specific metrics) to J^{π∗}.
• If f , r and Pw were known, we could, by putting aside the difficulty of finding
in Π the policy π∗, design the optimal agent by solving the optimal control
problem (1). However, J^π depends on f , r and Pw , which are supposed to be
unknown ⇒ How can we solve this combined inference-optimization problem?

3
We will suppose that these mild assumptions are always satisfied in what follows.

17/60
Dynamic Programming (DP) theory reminder: optimality of station-
ary policies

Definition (Deterministic stationary policy)


A deterministic stationary control policy µ : S → A selects at time t the action
at = µ(st ). We denote Πµ the set of stationary policies.

Definition (Expected return)


The expected return of a stationary policy, when the system starts from s0 = s,
is:
J^µ(s) = lim_{T→∞} E_{w0 ,w1 ,...,wT} [ Σ_{t=0}^{T} γ^t r(st , µ(st ), wt ) | s0 = s ]. (2)


Let µ∗ be a policy such that J µ (s) = max J µ (s) everywhere on S. We name
µ∈Πµ
such a policy an optimal deterministic stationary policy.
We denote π ∗ the optimal history-dependent policy, from classical dynamic
∗ ∗
programming theory, we know J µ (s) = J π (s) everywhere.
⇒ considering only stationary policies is not suboptimal!

18/60
Truncated return J_N^µ

Definition (Truncated return J_N^µ)
We define the functions J_N^µ : S → R by the recurrence equation

J_N^µ(s) = E_{w∼Pw (·|s,µ(s))} [r(s, µ(s), w) + γ J_{N−1}^µ(f (s, µ(s), w))], ∀N ≥ 1 (3)

with J_0^µ(s) ≡ 0.

As a result of the dynamic programming (DP) theory, we have

lim_{N→∞} ∥J^µ − J_N^µ∥∞ = 0. (4)

Similarly, we can also write

J^µ(s) = J_N^µ(s) + γ^N J^µ(sN ), ∀s ∈ S (5)

where sN is the state reached after N steps.
Moreover, we can show that there exists a bound on the value of J^µ(s):

∥J^µ∥∞ ≤ lim_{N→∞} Σ_{t=0}^{N} γ^t Br = lim_{N→∞} ((1 − γ^{N+1})/(1 − γ)) Br = Br / (1 − γ) (6)

where Br = ∥r∥∞ and provided that γ ∈ [0, 1[ and the rewards are nonnegative.
19/60
Relation between J^µ and J_N^µ

Using (5) and (6), we can show that there exists a bound on ∥J^µ − J_N^µ∥∞:

∥J^µ − J_N^µ∥∞ ≤ (γ^N / (1 − γ)) Br . (7)

Indeed,

∥J^µ − J_N^µ∥∞ ≤ γ^N ∥J^µ∥∞ ≤ (γ^N / (1 − γ)) Br .

20/60
Exercise 2: Computation of the J_N^µ function

Compute J_4^µ(1), J_4^µ(2), J_4^µ(3) and J_4^µ(4) with policy µ(s) = 1, ∀s ∈ S.
The state space and action space are respectively defined by:

S = {s ∈ {1, 2, 3, 4}}, A = {a ∈ {−1, 1}}

[Diagram: a chain of states s = 1, 2, 3, 4; the action +1 moves one state to the right, −1 one state to the left.]

The reward function and the dynamics are the following:

f (s, a) = min(max(s + a, 1), 4)

r(s, a) = 1 if f (s, a) = 4, and 0 otherwise

21/60
Exercise 2: Solution

Solution:

J_N^µ(s) = r(s, µ(s)) + γ J_{N−1}^µ(f (s, µ(s))), with J_0^µ(s) ≡ 0

J_1^µ(1) = 0, J_1^µ(2) = 0, J_1^µ(3) = 1, J_1^µ(4) = 1
J_2^µ(1) = 0, J_2^µ(2) = γ, J_2^µ(3) = 1 + γ, J_2^µ(4) = 1 + γ
J_3^µ(1) = γ^2, J_3^µ(2) = γ + γ^2, J_3^µ(3) = J_3^µ(4) = 1 + γ + γ^2
J_4^µ(1) = γ^2 + γ^3, J_4^µ(2) = γ + γ^2 + γ^3, J_4^µ(3) = J_4^µ(4) = 1 + γ + γ^2 + γ^3

22/60
DP theory reminder: QN -functions and Bellman equation

Definition (QN -functions)


We define the functions Q_N : S × A → R by the recurrence equation

Q_N(s, a) = E_{w∼Pw (·|s,a)} [r(s, a, w) + γ max_{a′∈A} Q_{N−1}(f (s, a, w), a′)], ∀N ≥ 1 (8)

with Q_0(s, a) ≡ 0. These Q_N-functions are also known as state-action value
functions.

Definition (Q-function)
We define the Q-function as being the unique solution of the Bellman equation:

Q(s, a) = E_{w∼Pw (·|s,a)} [r(s, a, w) + γ max_{a′∈A} Q(f (s, a, w), a′)]. (9)

Theorem (Convergence of QN )
The sequence of functions Q_N converges to the Q-function in the infinity norm,
i.e., lim_{N→∞} ∥Q_N − Q∥∞ = 0.

23/60
Exercise 3: Computation of the QN function

Compute Q_4(s, a), ∀(s, a) ∈ S × A.

The state space and action space are respectively defined by:

S = {s ∈ {1, 2, 3, 4}}, A = {a ∈ {−1, 1}}

[Diagram: a chain of states s = 1, 2, 3, 4; the action +1 moves one state to the right, −1 one state to the left.]

The reward function and the dynamics are the following:

f (s, a) = min(max(s + a, 1), 4)

r(s, a) = 1 if f (s, a) = 4, and 0 otherwise

24/60
Solution

Solution:

Q_N(s, a) = r(s, a) + γ max_{a′∈A} Q_{N−1}(f (s, a), a′), with Q_0(s, a) ≡ 0

        N       1       2           3                   4                         5
Q_N(1, −1)      0       0           0                   γ^3                       γ^3 + γ^4
Q_N(1, 1)       0       0           γ^2                 γ^2 + γ^3                 γ^2 + γ^3 + γ^4
Q_N(2, −1)      0       0           0                   γ^3                       γ^3 + γ^4
Q_N(2, 1)       0       γ           γ + γ^2             γ + γ^2 + γ^3             γ + γ^2 + γ^3 + γ^4
Q_N(3, −1)      0       0           γ^2                 γ^2 + γ^3                 γ^2 + γ^3 + γ^4
Q_N(3, 1)       1       1 + γ       1 + γ + γ^2         1 + γ + γ^2 + γ^3         1 + γ + γ^2 + γ^3 + γ^4
Q_N(4, −1)      0       γ           γ + γ^2             γ + γ^2 + γ^3             γ + γ^2 + γ^3 + γ^4
Q_N(4, 1)       1       1 + γ       1 + γ + γ^2         1 + γ + γ^2 + γ^3         1 + γ + γ^2 + γ^3 + γ^4
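The table can be reproduced with a few lines of Python implementing the deterministic form of recurrence (8); this is our own sketch, and the numerical value γ = 0.5 is an arbitrary assumption used only to print concrete numbers.

```python
# Value iteration on the chain example: Q_N(s, a) = r(s, a) + gamma * max_a' Q_{N-1}(f(s, a), a').
gamma = 0.5                      # arbitrary choice, only to obtain numerical values
S, A = [1, 2, 3, 4], [-1, 1]

def f(s, a):
    return min(max(s + a, 1), 4)

def r(s, a):
    return 1.0 if f(s, a) == 4 else 0.0

Q = {(s, a): 0.0 for s in S for a in A}          # Q_0 is identically 0
for N in range(1, 6):                            # compute Q_1, ..., Q_5
    Q = {(s, a): r(s, a) + gamma * max(Q[(f(s, a), ap)] for ap in A)
         for s in S for a in A}
    print(N, {sa: round(v, 4) for sa, v in Q.items()})
```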

25/60
Optimal stationary policy

Theorem (Optimal stationary policy)


A stationary policy µ∗ is optimal if and only if

µ∗(s) ∈ arg max_{a∈A} Q(s, a) (10)

We also have J^{µ∗}(s) = max_{a∈A} Q(s, a), which is also called the value function.

Theorem (N-optimal stationary policy)


A stationary policy µ∗_N is N-optimal iff it selects an optimal action when exactly
N steps remain:

µ∗_N(s) ∈ arg max_{a∈A} Q_N(s, a) (11)

Necessarily, µ∗_N is suboptimal with respect to µ∗, i.e.,

J^{µ∗}(s) ≥ J^{µ∗_N}(s) (12)

26/60

Bound on the suboptimality of J^{µ∗_N}

Theorem (Bound on the suboptimality of J^{µ∗_N})
There exists a bound on the suboptimality of µ∗_N in comparison to µ∗, which is
given by the following inequality:

∥J^{µ∗} − J^{µ∗_N}∥∞ ≤ 2 γ^N Br / (1 − γ)^2 (13)

27/60
RL problems with small state-action spaces
A pragmatic model-based approach for designing good policies π̂ ∗

[Diagram: Information → (1) infer from the information a model of the environment → (2) compute a stationary policy which is optimal with respect to the model → (3) “randomize” the stationary policy to address the exploration-exploitation tradeoff → select an action a according to this randomized policy → Action]

We focus first on the design of policies π̂∗ which realize sequentially the
following three tasks:

1. “System identification” phase. Estimation from ht of an approximate system


dynamics fˆ, an approximate probability distribution P̂w and an approximate
reward function r̂.

28/60
2. Resolution of the optimization problem.

Find in Πµ the policy µ̂∗ such that ∀s ∈ S, Ĵ^{µ̂∗}(s) = max_{µ∈Πµ} Ĵ^µ(s),

where Ĵ^µ is defined similarly to the function J^µ but with fˆ, P̂w and r̂ replacing f ,
Pw and r, respectively.

3. Afterwards, the policy π̂∗ selects, with probability 1 − ε(ht ), an action according
to the policy µ̂∗ and, with probability ε(ht ), an action at random. Step 3 has been
introduced to address the dilemma between exploration and exploitation (see the
sketch after the footnote).4

4
We will not address further the design of the ’right function’ ε : H → [0, 1]. In many
applications, it is chosen equal to a small constant (say, 0.05) everywhere.
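A common implementation of step 3 is ε-greedy action selection. The minimal sketch below is our own; it assumes Q_hat is stored as a dictionary indexed by (state, action) pairs and uses the constant ε suggested in the footnote.

```python
import random

def epsilon_greedy(s, Q_hat, actions, eps=0.05):
    """With probability eps pick a random action, otherwise a greedy action w.r.t. Q_hat."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q_hat[(s, a)])
```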

29/60
Some constructive algorithms for designing π̂ ∗ when dealing with finite
state-action spaces

• Until say otherwise, we consider the particular case of finite state and action
spaces (i.e., S × A finite).

• When S and A are finite, there exists a vast panel of ‘well-working’


implementable RL algorithms.

• We focus first on approaches which solve separately Step 1. and Step 2. and
then on approaches which solve both steps together.

• The proposed algorithms infer µ̂∗ from ht . They can be adapted in a
straightforward way to episode-based reinforcement learning, where a model of µ∗
must be inferred from several trajectories ht1 , ht2 , . . ., htm with ti ∈ N0 .

30/60
Reminder on Markov Decision Processes

Definition (Markov Decision Process)


A Markov Decision Process (MDP) is defined through the following objects: a
state space S, an action space A, transition probabilities p(s′ |s, a) ∀s, s′ ∈ S,
a ∈ A and a reward function r(s, a).

• p(s′ |s, a) gives the probability of reaching state s′ after taking action a while
being in state s.

• We consider MDPs for which we want to find decision policies that maximize
the expected sum of discounted rewards Σ_t γ^t r(st , at ) over an infinite time horizon.

• MDPs can be seen as a particular type of the discrete-time optimal control


problem introduced earlier.

31/60
MDP Structure Definition from the System Dynamics and Reward
Function

• We define5

r(s, a) = E_{w∼Pw (·|s,a)} [r(s, a, w)] ∀s ∈ S, a ∈ A (14)

p(s′|s, a) = E_{w∼Pw (·|s,a)} [I{s′=f (s,a,w)}] ∀s, s′ ∈ S, a ∈ A (15)
• Equations (14) and (15) define the structure of an equivalent MDP in the sense
that the expected return of any policy applied to the original optimal control
problem is equal to its expected return for the MDP.

• The recurrence equation defining the functions Q_N can be rewritten:

Q_N(s, a) = r(s, a) + γ Σ_{s′∈S} p(s′|s, a) max_{a′∈A} Q_{N−1}(s′, a′), ∀N ≥ 1, with Q_0(s, a) ≡ 0.
5
I{logical expression} = 1 if logical expression is true and 0 if logical expression is false.

32/60
Reminder: Random Variable and Strong Law of Large Numbers

• A random variable is not a variable but rather a function that maps outcomes
(of an experiment) to numbers. Mathematically, a random variable is defined as
a measurable function from a probability space to some measurable space. We
consider here random variables θ defined on the probability space (Ω, P ).6

• E_P[θ] is the mean value of the random variable θ.

• Let θ1 , θ2 , . . ., θn be n values of the random variable θ which are drawn
independently. Suppose also that E_P[|θ|] = ∫_Ω |θ| dP is smaller than ∞. In such a
case, the strong law of large numbers states that, with probability 1:

lim_{n→∞} (θ1 + θ2 + . . . + θn) / n = E_P[θ] (16)

6
For the sake of simplicity, we have considered here that (Ω, P ) indeed defines a probability
space which is not rigorous.

33/60
Step 1. Identification by learning the structure of the equivalent MDP

• The objective is to infer some ‘good approximations’ of p(s′ |s, a) and r(s, a)
from:

ht = (s0 , a0 , r0 , s1 , a1 , r1 , . . . , at−1 , rt−1 , st ).

Estimation of r(s, a):


Let X(s, a) = {k ∈ {0, 1, . . . , t − 1}|(sk , ak ) = (s, a)}. Let k1 , k2 , . . ., k#X(s,a)
denote the elements of the set.7 The values rk1 , rk2 , . . ., rk#X(s,a) are #X(s, a)
values of the random variable r(s, a, w) which are drawn independently. It follows
therefore naturally that to estimate its mean value r(s, a), we can use the
following unbiased estimator:
r̂(s, a) = ( Σ_{k∈X(s,a)} rk ) / #X(s, a) (17)

7
If X is a set of elements, #X denotes the cardinality of X.

34/60
Estimation of p(s′|s, a):
The values I{s′=s_{k1+1}}, I{s′=s_{k2+1}}, . . ., I{s′=s_{k_{#X(s,a)}+1}} are #X(s, a) values of
the random variable I{s′=f (s,a,w)} which are drawn independently. To estimate
its mean value p(s′|s, a), we can use the unbiased estimator:

p̂(s′|s, a) = ( Σ_{k∈X(s,a)} I{s_{k+1}=s′} ) / #X(s, a) (18)
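In code, estimators (17) and (18) are simple empirical averages over the indices in X(s, a). The sketch below is our own and assumes the trajectory h_t is stored as three lists (states of length t + 1, actions and rewards of length t).

```python
from collections import defaultdict

def estimate_mdp(states, actions, rewards):
    """Empirical estimators (17)-(18) of r(s, a) and p(s'|s, a) from a trajectory h_t."""
    counts = defaultdict(int)          # #X(s, a)
    reward_sums = defaultdict(float)   # sum of r_k over k in X(s, a)
    next_counts = defaultdict(int)     # number of times (s, a) was followed by s'
    for k in range(len(actions)):
        sa = (states[k], actions[k])
        counts[sa] += 1
        reward_sums[sa] += rewards[k]
        next_counts[(states[k], actions[k], states[k + 1])] += 1
    r_hat = {sa: reward_sums[sa] / counts[sa] for sa in counts}
    p_hat = {(s, a, sp): n / counts[(s, a)] for (s, a, sp), n in next_counts.items()}
    return r_hat, p_hat
```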

35/60
Step 2. Computation of µ̂∗ from the learned structure of the equivalent MDP

• We compute the Q̂_N-functions from the knowledge of r̂ and p̂ by exploiting the
recurrence equation:

Q̂_N(s, a) = r̂(s, a) + γ Σ_{s′∈S} p̂(s′|s, a) max_{a′∈A} Q̂_{N−1}(s′, a′), ∀N ≥ 1, with Q̂_0(s, a) ≡ 0,

and then take

µ̂∗_N(s) ∈ arg max_{a∈A} Q̂_N(s, a) ∀s ∈ S (19)

as an approximation of the optimal policy, with N ’large enough’ (e.g., such that the
right-hand side of inequality (13) drops below ε).

• One can show that if the estimated MDP structure lies in an ‘ε-neighborhood’
of the true structure, then J^{µ̂∗} is in an ‘O(ε)-neighborhood’ of J^{µ∗}, where
µ̂∗(s) = lim_{N→∞} arg max_{a∈A} Q̂_N(s, a).

36/60
The Case of Limited Computational Resources

• Number of operations to estimate the MDP structure grows linearly with t.


Memory requirements needed to store ht also grow linearly with t ⇒ an agent
having limited computational resources will face problems after a certain time of
interaction.
• We describe an algorithm which requires at time t a number of operations that
does not depend on t to update the MDP structure and for which the memory
requirements do not grow with t:
At time 0, set N(s, a) = 0, N(s, a, s′) = 0, R(s, a) = 0, p(s′|s, a) = 0, ∀s, s′ ∈ S
and a ∈ A.
At time t ≠ 0, do
1. N(st−1 , at−1 ) ← N(st−1 , at−1 ) + 1
2. N(st−1 , at−1 , st ) ← N(st−1 , at−1 , st ) + 1
3. R(st−1 , at−1 ) ← R(st−1 , at−1 ) + rt−1
4. r(st−1 , at−1 ) ← R(st−1 , at−1 ) / N(st−1 , at−1 )
5. p(s|st−1 , at−1 ) ← N(st−1 , at−1 , s) / N(st−1 , at−1 ) ∀s ∈ S
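A sketch of this constant-time update in Python (our own variable names); the dictionaries grow with the size of S × A but not with t.

```python
from collections import defaultdict

N_sa  = defaultdict(int)      # N(s, a)
N_sas = defaultdict(int)      # N(s, a, s')
R_sa  = defaultdict(float)    # R(s, a): cumulated rewards
r_hat = {}                    # running estimate of r(s, a)
p_hat = defaultdict(float)    # running estimate of p(s'|s, a)

def update_structure(s_prev, a_prev, reward, s_new, S):
    """Refresh the estimated MDP structure after observing one transition."""
    N_sa[(s_prev, a_prev)] += 1
    N_sas[(s_prev, a_prev, s_new)] += 1
    R_sa[(s_prev, a_prev)] += reward
    r_hat[(s_prev, a_prev)] = R_sa[(s_prev, a_prev)] / N_sa[(s_prev, a_prev)]
    for s in S:  # only the row p(.|s_prev, a_prev) needs to be refreshed
        p_hat[(s, s_prev, a_prev)] = N_sas[(s_prev, a_prev, s)] / N_sa[(s_prev, a_prev)]
```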

37/60
The Q-learning Algorithm

Idea: merge steps 1 and 2 to learn directly the Q-function.

The Q-learning algorithm is an algorithm that infers directly from

ht = (s0 , a0 , r0 , s1 , a1 , r1 , . . . , at−1 , rt−1 , st )

an approximate value of the Q-function, without identifying the structure of a


Markov Decision Process.
The algorithm can be described by the following steps:
1. Initialisation of Q̂(s, a) to 0 everywhere. Set k = 0.
2. Q̂(sk , ak ) ← (1 − αk )Q̂(sk , ak ) + αk (rk + γ max_{a∈A} Q̂(sk+1 , a))
3. k ← k + 1. If k = t, return Q̂ and stop. Otherwise, go back to 2.
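A minimal tabular implementation on the chain example of Exercises 1-3 (our own code): the uniformly random behaviour policy, the constant learning rate α = 0.05 and γ = 0.9 are assumptions made only for the sake of the illustration.

```python
import random
from collections import defaultdict

gamma, alpha = 0.9, 0.05
S, A = [1, 2, 3, 4], [-1, 1]

def f(s, a):
    return min(max(s + a, 1), 4)

def r(s, a):
    return 1.0 if f(s, a) == 4 else 0.0

Q = defaultdict(float)          # Q_hat(s, a), initialised to 0 everywhere
s = 1
for k in range(50_000):
    a = random.choice(A)        # purely random behaviour policy (every pair gets visited)
    s_next, reward = f(s, a), r(s, a)
    # Step 2: temporal-difference update of Q_hat(s_k, a_k)
    Q[(s, a)] += alpha * (reward + gamma * max(Q[(s_next, ap)] for ap in A) - Q[(s, a)])
    s = s_next

print({sa: round(v, 2) for sa, v in sorted(Q.items())})  # close to the Q-function of the chain MDP
```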

38/60
Q-learning: some remarks

• Iteration 2. can be rewritten as Q̂(sk , ak ) ← Q̂(sk , ak ) + αk δ(sk , ak ), where the term

δ(sk , ak ) = rk + γ max_{a∈A} Q̂(sk+1 , a) − Q̂(sk , ak ) (20)

is called the temporal difference.


• Learning ratio αk : The learning ratio αk is often chosen constant with k and
equal to a small value (e.g., αk = 0.05, ∀k).
• Consistency of the Q-learning algorithm: Under some particular conditions on
the way αk decreases to zero ( Σ_{k=0}^{∞} αk = ∞ and Σ_{k=0}^{∞} αk^2 < ∞, e.g. αk = 1/k) and on
the history ht (when t → ∞, every state-action pair needs to be visited an
infinite number of times), Q̂ → Q when t → ∞.
• Experience replay: At each iteration, the Q-learning algorithm uses a sample
lk = (sk , ak , rk , sk+1 ) to update the function Q̂. If, rather than using the finite
sequence of samples l0 , l1 , . . ., lt−1 , we use an infinite sequence li1 , li2 , . . . to
update Q̂ in the same way, where the ij are i.i.d. with uniform distribution on
{0, 1, . . . , t − 1}, then Q̂ converges to the approximate Q-function computed from
the estimated equivalent MDP structure (see the sketch below).
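A sketch of the experience-replay variant (our own code): the stored one-step transitions are sampled uniformly at random, independently of the order in which they were collected.

```python
import random
from collections import defaultdict

def q_learning_with_replay(transitions, actions, gamma=0.9, alpha=0.05, n_updates=100_000):
    """transitions: list of one-step samples l_k = (s_k, a_k, r_k, s_{k+1})."""
    Q = defaultdict(float)
    for _ in range(n_updates):
        s, a, reward, s_next = random.choice(transitions)   # index i_j drawn uniformly
        target = reward + gamma * max(Q[(s_next, ap)] for ap in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
    return dict(Q)
```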
39/60
RL problems with large state-action spaces
Inferring µ̂∗ from ht when dealing with very large or infinite state-
action spaces

• Up to now, we have considered problems having discrete (and not too large)
state and action spaces ⇒ µ̂∗ and the Q̂N -functions could be represented in a
tabular form.
• We consider now the case of very large or infinite state-action spaces: function
approximators need to be used to represent µ̂∗ and the Q̂_N-functions.
• These function approximators need to be used in a way such that they are able to
‘generalize well’ over the whole state-action space the information contained in ht .
• There is a vast literature on function approximators in reinforcement learning.
We focus first on one algorithm named ‘fitted Q iteration’ which computes the
functions Q̂N from ht by solving a sequence of batch mode supervised learning
problems.

40/60
Reminder: Batch mode supervised learning

• A batch mode Supervised Learning (SL) algorithm infers from a set of
input-output pairs (input = information state; output = class label, real number,
graph, etc.) a model which explains “at best” these input-output pairs.

• A loose formalisation of the SL problem: Let I be the input space, O the


output space, Ξ the disturbance space. Let g : I × Ξ → O. Let Pξ (·|i) be a
conditional probability distribution over the disturbance space.

We assume that we have a training set T S = {(i^l , o^l )}_{l=1}^{#T S} such that o^l has been
generated from i^l by the following mechanism: draw ξ ∈ Ξ according to Pξ (·|i^l )
and then set o^l = g(i^l , ξ).

From the sole knowledge of T S, supervised learning aims at finding a function
ĝ : I → O which is a ‘good approximation’ of the function g(i) = E_{ξ∼Pξ (·|i)} [g(i, ξ)].

41/60
• Typical supervised learning methods are: kernel-based methods, (deep) neural
networks, tree-based methods.

[Figure: batch mode supervised learning illustrated by a decision tree splitting on Gender, Age and Class to predict Survived/Died.]
• Supervised learning is highly successful: state-of-the-art SL algorithms have been
successfully applied to problems where the input was composed of thousands
of components.
42/60
The fitted Q iteration algorithm

• Fitted Q iteration computes from ht the functions Q̂1 , Q̂2 , . . ., Q̂N ,


approximations of Q1 , Q2 , . . ., QN . At step N > 1, the algorithm uses the
function Q̂N −1 together with ht to compute a new training set from which a SL
algorithm outputs Q̂N . More precisely, this iterative algorithm works as follows:

First iteration: the algorithm determines a model Q̂_1 of
Q_1(s, a) = E_{w∼Pw (·|s,a)} [r(s, a, w)] by running an SL algorithm on the training set:

T S = {((sk , ak ), rk )}_{k=0}^{t−1} (21)

Motivation: One can assimilate S × A to I, R to O, W to Ξ, Pw (·|s, a) to


Pξ (·|s, a), r(s, a, w) to g(i, ξ) and Q1 (s, a) to g. From there, we can observe that
a SL algorithm applied to the training set described by equation (21) will
produce a model of Q1 .

43/60
Iteration N > 1: the algorithm outputs a model Q̂_N of
Q_N(s, a) = E_{w∼Pw (·|s,a)} [r(s, a, w) + γ max_{a′∈A} Q_{N−1}(f (s, a, w), a′)] by running an SL
algorithm on the training set:

T S = {((sk , ak ), rk + γ max_{a′∈A} Q̂_{N−1}(sk+1 , a′))}_{k=0}^{t−1} (22)

Motivation: One can reasonably suppose that Q̂_{N−1} is a sufficiently good
approximation of Q_{N−1} to be considered equal to this latter function.
Assimilate S × A to I, R to O, W to Ξ, Pw (·|s, a) to Pξ (·|s, a), r(s, a, w) to g(i, ξ)
and Q_N(s, a) to g. From there, we observe that an SL algorithm applied to the
training set described by equation (22) will produce a model of Q_N.

• The algorithm stops when N is ‘large enough’ and µ̂∗_N(s) ∈ arg max_{a∈A} Q̂_N(s, a) is
taken as an approximation of µ∗(s).
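A compact sketch of the fitted Q iteration loop (our own code, not the authors' implementation). It assumes one-dimensional states and actions stored as NumPy arrays and uses scikit-learn's ExtraTreesRegressor as the SL method; any regressor exposing fit/predict could be substituted.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor   # assumed choice of SL method

def fitted_q_iteration(s, a, r, s_next, action_set, gamma=0.9, n_iterations=50):
    """s, a, r, s_next: 1-D arrays describing the one-step transitions (s_k, a_k, r_k, s_{k+1})."""
    X = np.column_stack([s, a])                     # SL inputs (s_k, a_k)
    model = None                                    # Q_hat_0 is identically zero
    for N in range(1, n_iterations + 1):
        if model is None:
            y = r                                   # first iteration: targets are the rewards (eq. (21))
        else:
            q_next = np.column_stack([
                model.predict(np.column_stack([s_next, np.full_like(s_next, ap)]))
                for ap in action_set])              # Q_hat_{N-1}(s_{k+1}, a') for each a'
            y = r + gamma * q_next.max(axis=1)      # targets of training set (22)
        model = ExtraTreesRegressor(n_estimators=50).fit(X, y)
    return model   # greedy policy: mu_hat_N(s) = argmax over a of model.predict([[s, a]])
```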

44/60
The fitted Q iteration algorithm: some remarks

• The performance of the algorithm depends on the supervised learning (SL) method
chosen.

• Excellent and stable performances have been observed when combined with
supervised learning methods based on ensembles of regression trees and, of course,
with deep neural nets, especially when images are used as input.

• Fitted Q iteration algorithm can be used with any set of one-step system
transitions (st , at , rt , st+1 ) where each one-step system transition gives
information about: a state, the action taken while being in this state, the reward
signal observed and the next state reached.

• Consistency, that is convergence towards an optimal solution when the number


of one-step system transitions tends to infinity, can be ensured under appropriate
assumptions on the SL method, the sampling process, the system dynamics and
the reward function.

45/60
Computation of µ̂∗ : from an inference problem to a problem of computational complexity

• When having at one’s disposal only a few one-step system transitions, the main
problem is a problem of inference.

• Computational complexity of the fitted Q iteration algorithm grows with the


number M of one-step system transitions (sk , ak , rk , sk+1 ) (e.g., it grows as
M log M when coupled with tree-based methods).

• Above a certain number of one-step system transitions, a problem of


computational complexity appears.

• Should we rely on algorithms having weaker inference capabilities than the ‘fitted
Q iteration algorithm’ but which are also less computationally demanding, in order
to mitigate this problem of computational complexity? ⇒ Open research question.

46/60
• There is a serious problem plaguing every reinforcement learning algorithm
known as the curse of dimensionality8 : whatever the mechanism behind the
generation of the trajectories and without any restrictive assumptions on
f (s, a, w), r(s, a, w), S and A, the number of computer operations required to
determine (close-to-) optimal policies tends to grow exponentially with the
dimensionality of S × A.

• This exponential growth rapidly makes these techniques computationally
impractical when the size of the state-action space increases.

• Many researchers in reinforcement learning/dynamic programming/optimal


control theory focus their effort on designing algorithms able to break this curse
of dimensionality. Deep neural nets give strong hopes for some classes of
problems.

8
A term introduced by Richard Bellman (the founder of the DP theory) in the fifties.

47/60
Q-learning with parametric function approximators

Let us extend the Q-learning algorithm to the case where a parametric


Q-function of the form Q̃(s, a, θ) is used:

1. Equation (20) provides us with a desired update for Q̃(st , at , θ), here
δ(st , at ) = rt + γ max_{a∈A} Q̃(st+1 , a, θ) − Q̃(st , at , θ), after observing (st , at , rt , st+1 ).

2. This leads to the following change in parameters:

θ ← θ + α δ(st , at ) ∂Q̃(st , at , θ)/∂θ . (22)
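For a linear parametrization Q̃(s, a, θ) = θᵀφ(s, a), the gradient ∂Q̃/∂θ is just the feature vector φ(s, a) and update (22) reduces to the sketch below; the feature map phi is an assumed, user-supplied function and the step size is arbitrary.

```python
import numpy as np

def parametric_q_update(theta, phi, transition, actions, gamma=0.9, alpha=0.01):
    """One update of theta for Q~(s, a, theta) = theta . phi(s, a), after observing (s, a, r, s')."""
    s, a, reward, s_next = transition
    q_next = max(theta @ phi(s_next, ap) for ap in actions)
    delta = reward + gamma * q_next - theta @ phi(s, a)   # temporal difference of equation (20)
    return theta + alpha * delta * phi(s, a)              # gradient of Q~ w.r.t. theta is phi(s, a)
```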

48/60
Convergence of Q-learning
Contraction mapping

Let B(E) be the set of all bounded real-valued functions defined on an arbitrary
set E. With every function R : E → R that belongs to B(E), we associate the
scalar:

∥R∥∞ = sup_{e∈E} |R(e)|. (23)

A mapping G : B(E) → B(E) is said to be a contraction mapping if there exists a


scalar ρ < 1 such that:

∥GR − GR′ ∥∞ ≤ ρ∥R − R′ ∥∞ ∀R, R′ ∈ B(E). (24)

49/60
Fixed point

R∗ ∈ B(E) is said to be a fixed point of a mapping G : B(E) → B(E) if:

GR∗ = R∗ . (25)

If G : B(E) → B(E) is a contraction mapping then there exists a unique fixed


point of G. Furthermore if R ∈ B(E), then

lim_{k→∞} ∥G^k R − R∗∥∞ = 0. (26)

From now on, we assume that:


1. E is finite and composed of n elements
2. G : B(E) → B(E) is a contraction mapping whose fixed point is denoted by R∗
3. R ∈ B(E).

50/60
Algorithmic models for computing a fixed point

All elements of R are refreshed: Suppose we have an algorithm that updates R at
stage k (k ≥ 0) as follows:

R ← GR. (27)

The value of R computed by this algorithm converges to the fixed point R∗ of G.


This is an immediate consequence of equation (26).
One element of R is refreshed: Suppose we have the algorithm that selects at
each stage k (k ≥ 0) an element e ∈ E and updates R(e) as follows:

R(e) ← (GR)(e) (28)

leaving the other components of R unchanged. If each element e of E is selected


an infinite number of times then the value of R computed by this algorithm
converges to the fixed point R∗ .
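As a toy illustration of these two algorithmic models (our own example), take E to be a single element, so that B(E) is the real line, and G(R) = 0.5 R + 1, a contraction with ρ = 0.5 and fixed point R∗ = 2:

```python
# Repeatedly refreshing R with the contraction G(R) = 0.5 * R + 1 converges to the fixed point R* = 2.
R = 0.0
for k in range(30):
    R = 0.5 * R + 1.0     # R <- G R
print(R)                  # approximately 2.0
```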

51/60
One element of R is refreshed and noise introduction: Let η ∈ R be a noise
factor and α ∈ R. Suppose we have the algorithm that selects at stage k (k ≥ 0)
an element e ∈ E and updates R(e) according to:

R(e) ← (1 − α)R(e) + α((GR)(e) + η) (29)

leaving the other components of R unchanged.


We denote by ek the element of E selected at stage k, by ηk the noise value at
stage k and by Rk the value of R at stage k and by αk the value of α at stage k.
In order to ease further notations we set αk (e) = αk if e = ek and αk (e) = 0
otherwise.
With this notation equation (29) can be rewritten equivalently as follows:

Rk+1 (ek ) = (1 − αk )Rk (ek ) + αk ((GRk )(ek ) + ηk ). (30)

52/60
We define the history Fk of the algorithm at stage k as being:

Fk = {R0 , . . . , Rk , e0 , . . . , ek , α0 , . . . , αk , η0 , . . . , ηk−1 }. (31)

We assume moreover that the following conditions are satisfied:


1. For every k, we have

E[ηk |Fk ] = 0. (32)

2. There exist two constants A and B such that ∀k

E[ηk2 |Fk ] ≤ A + B∥Rk ∥2∞ . (33)

3. The αk (e) are nonnegative and satisfy

Σ_{k=0}^{∞} αk (e) = ∞, Σ_{k=0}^{∞} αk^2(e) < ∞. (34)

Then the algorithm converges with probability 1 to R∗ .

53/60
The Q-function as a fixed point of a contraction mapping

We define the mapping H: B(S × A) → B(S × A) such that

(HK)(s, a) = E_{w∼Pw (·|s,a)} [r(s, a, w) + γ max_{a′∈A} K(f (s, a, w), a′)] (35)

∀(s, a) ∈ S × A.

• The recurrence equation (8) for computing the Q_N-functions can be rewritten as
Q_N = HQ_{N−1}, ∀N ≥ 1, with Q_0(s, a) ≡ 0.

• We prove afterwards that H is a contraction mapping. As an immediate
consequence, we have, by virtue of the properties of algorithmic model (27), that
the sequence of Q_N-functions converges to the unique solution of the Bellman
equation (9), which can be rewritten Q = HQ. Afterwards, we prove, by using
the properties of the algorithmic model (30), the convergence of the Q-learning
algorithm.

54/60
H is a contraction mapping

This mapping H is a contraction mapping. Indeed, for any two functions
K, K′ ∈ B(S × A) we have:9

∥HK − HK′∥∞ = γ max_{(s,a)∈S×A} | E_{w∼Pw (·|s,a)} [ max_{a′∈A} K(f (s, a, w), a′) − max_{a′∈A} K′(f (s, a, w), a′) ] |
             ≤ γ max_{(s,a)∈S×A} E_{w∼Pw (·|s,a)} [ max_{a′∈A} |K(f (s, a, w), a′) − K′(f (s, a, w), a′)| ]
             ≤ γ max_{s∈S} max_{a∈A} |K(s, a) − K′(s, a)|
             = γ ∥K − K′∥∞

9
We make here the additional assumption that the rewards are strictly positive.

55/60
Q-learning convergence proof

The Q-learning algorithm updates Q at stage k in the following way10

Qk+1 (sk , ak ) = (1 − αk )Qk (sk , ak ) + αk (r(sk , ak , wk ) + γ max_{a∈A} Qk (f (sk , ak , wk ), a)), (37)

Qk representing the estimate of the Q-function at stage k. wk is drawn
independently according to Pw (·|sk , ak ).

10
The element (sk , ak , rk , sk+1 ) used to refresh the Q-function at iteration k of the Q-learning
algorithm is “replaced” here by (sk , ak , r(sk , ak , wk ), f (sk , ak , wk )).

56/60
By using the H mapping definition (equation (35)), equation (37) can be
rewritten as follows:

Qk+1 (sk , ak ) = (1 − αk )Qk (sk , ak ) + αk ((HQk )(sk , ak ) + ηk ) (38)

with

ηk = r(sk , ak , wk ) + γ max_{a∈A} Qk (f (sk , ak , wk ), a) − (HQk )(sk , ak )
   = r(sk , ak , wk ) + γ max_{a∈A} Qk (f (sk , ak , wk ), a) − E_{w∼Pw (·|sk ,ak )} [r(sk , ak , w) + γ max_{a∈A} Qk (f (sk , ak , w), a)]

which has exactly the same form as equation (30) (Qk corresponding to Rk , H to
G, (sk , ak ) to ek and S × A to E).

57/60
We know that H is a contraction mapping. If the αk (sk , ak ) terms satisfy
expression (34), we still have to verify that ηk satisfies expressions (32) and (33),
where

Fk = {Q0 , . . . , Qk , (s0 , a0 ), . . . , (sk , ak ), α0 , . . . , αk , η0 , . . . , ηk−1 }, (39)

in order to ensure the convergence of the Q-learning algorithm.


We have:
E[ηk |Fk ] = E_{wk ∼Pw (·|sk ,ak )} [ r(sk , ak , wk ) + γ max_{a∈A} Qk (f (sk , ak , wk ), a)
            − E_{w∼Pw (·|sk ,ak )} [r(sk , ak , w) + γ max_{a∈A} Qk (f (sk , ak , w), a)] | Fk ]
          = 0

and expression (32) is indeed satisfied.

58/60
In order to prove that expression (33) is satisfied, one can first note that:

|ηk | ≤ 2Br + 2γ max_{(s,a)∈S×A} Qk (s, a) (40)

where Br is the bound on the rewards. Therefore we have:

ηk^2 ≤ 4Br^2 + 4γ^2 ( max_{(s,a)∈S×A} Qk (s, a))^2 + 8Br γ max_{(s,a)∈S×A} Qk (s, a) (41)

By noting that

8Br γ max_{(s,a)∈S×A} Qk (s, a) < 8Br γ + 8Br γ ( max_{(s,a)∈S×A} Qk (s, a))^2 (42)

and by choosing A = 8Br γ + 4Br^2 and B = 8Br γ + 4γ^2 we can write

ηk^2 ≤ A + B∥Qk ∥^2_∞ (43)
and expression (33) is satisfied. QED

59/60
References and additional readings

References
Pascal Leroy, Pablo G. Morato, Jonathan Pisane, Athanasios Kolios, and Damien
Ernst. IMP-MARL: a suite of environments for large-scale infrastructure
management planning via MARL, 2023.
Elia Kaufmann, Leonard Bauersfeld, Antonio Loquercio, Matthias Mueller,
Vladlen Koltun, and Davide Scaramuzza. Champion-level drone racing using
deep reinforcement learning. Nature, 620:982–987, 08 2023.
Lucian Busoniu, Robert Babuska, Bart De Schutter, and Damien Ernst.
Reinforcement learning and dynamic programming using function
approximators. CRC Press, 2017.
Richard S. Sutton and Andrew G. Barto. Reinforcement learning: An
introduction. MIT Press, 2018.
Dimitri Bertsekas. Dynamic programming and optimal control: Volume I,
volume 4. Athena Scientific, 2012.
Csaba Szepesvári. Algorithms for reinforcement learning. Springer Nature, 2022.
60/60
