
Statistics 3B (STA03B3)

STOCHASTIC PROCESSES

Lecture 2

Chapter 1 → Introduction and Review

Dr V. van Appel
Department of Statistics
Faculty of Science, University of Johannesburg

1 / 16
Overview

1. Introduction and Review
   1.3 Monte Carlo Simulation
   A Review of Probability
      1.4 Conditional Probability (Self Study)
      1.5 Conditional Expectation (Self Study)

2 / 16
1.3 Monte Carlo Simulation

▶ Advancements in modern computing have revolutionized the study of
  stochastic systems, allowing for the visualization and simulation of
  increasingly complex models.
▶ At the heart of the many simulation techniques developed to generate
  random variables and stochastic processes lies the Monte Carlo method.
▶ Given a random experiment and an event A, a Monte Carlo estimate of
  P(A) is obtained by repeating the random experiment many times and
  taking the proportion of trials in which A occurs as an approximation
  for P(A), as illustrated in the sketch after this list.
▶ The name Monte Carlo evidently has its origins in the fact that the
  mathematician Stanislaw Ulam, who developed the method in 1946, had an
  uncle who regularly gambled at the Monte Carlo casino in Monaco.
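
To make this concrete, here is a minimal R sketch of a Monte Carlo
estimate. The experiment (rolling two fair dice and asking whether the
sum equals 7) is an assumed example for illustration, not one from the
text:

set.seed(1)                    # for reproducibility
ntrials <- 100000              # number of repetitions of the experiment

# One trial: roll two fair dice; the event A is "the sum equals 7"
sums <- replicate(ntrials, sum(sample(1:6, 2, replace = TRUE)))

mean(sums == 7)                # Monte Carlo estimate of P(A)
1/6                            # exact probability, for comparison

With 100,000 trials the estimate typically lands within a few
thousandths of the exact value 1/6.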

3 / 16
▶ Monte Carlo simulation is intuitive and matches up with our sense
of how probabilities should behave.
▶ The relative frequency interpretation of probability says that the
probability of an event is the long-term proportion of times that the
event occurs in repeated trials.
▶ It is justified theoretically by the strong law of large numbers.
▶ Consider repeated independent trials of a random experiment.
▶ Define the sequence $X_1, X_2, \dots$, where
  $$
  X_k =
  \begin{cases}
  1, & \text{if } A \text{ occurs on the } k\text{th trial}, \\
  0, & \text{if } A \text{ does not occur on the } k\text{th trial},
  \end{cases}
  $$
  for $k \geq 1$.

4 / 16
▶ Then, $(X_1 + \cdots + X_n)/n$ is the proportion of trials in which A
  occurs. The $X_k$ are identically distributed with common mean
  $E(X_k) = P(A)$.
▶ By the strong law of large numbers, with probability 1,
  $$
  \lim_{n \to \infty} \frac{X_1 + \cdots + X_n}{n} = P(A). \tag{1}
  $$
▶ For large n, the Monte Carlo estimate of P(A) is
  $$
  P(A) \approx \frac{X_1 + \cdots + X_n}{n}.
  $$
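
The convergence in Equation (1) can be watched numerically. A minimal R
sketch, where the success probability 0.5 is an assumed value chosen
purely for illustration:

set.seed(2)
n <- 10000
p <- 0.5                            # P(A) for the assumed experiment
x <- rbinom(n, size = 1, prob = p)  # indicators X_1, ..., X_n
running <- cumsum(x) / seq_len(n)   # (X_1 + ... + X_k)/k for each k

running[c(10, 100, 1000, 10000)]    # running proportions

The printed proportions wander at first and then settle near 0.5,
exactly as the strong law of large numbers predicts.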

5 / 16
1.4 Conditional Probability

Self study!

6 / 16
1.5 Conditional Expectation

Self study!

7 / 16
Example (1.10 Gambler’s ruin)

▶ The gambler’s ruin problem was introduced in Example 1.6.


▶ A gambler starts with k dollars.
▶ On each play a fair coin is tossed and the gambler wins $1 if
heads occurs, or loses $1 if tails occurs.
▶ The gambler stops when he reaches $n (where n > k) or loses all
  his money.
▶ Find the probability that the gambler will eventually lose. (One
  sample path of the game is simulated in the sketch below.)
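
Before solving the problem analytically, it may help to watch the
process run once. A minimal R sketch of a single play-through; the
function name and the starting values are illustrative choices, not
from the text:

set.seed(3)

# Simulate the game once: the fortune k moves up or down by $1 each
# play until it hits 0 (ruin) or n (the target)
gamble <- function(k, n) {
  while (k > 0 && k < n) {
    k <- k + sample(c(1, -1), 1)  # fair coin: +$1 or -$1, prob 1/2 each
  }
  k == n                          # TRUE if the gambler reaches $n
}

gamble(k = 10, n = 40)            # one realization, starting with $10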

8 / 16
Solution
▶ We make two observations, which are made more precise in later
chapters.
▶ First, the gambler will eventually stop playing, either by
reaching n or by reaching 0.
▶ One might argue that the gambler could play forever.
▶ However, it can be shown that that event occurs with
probability 0.
▶ Second, assume that after, say, 100 wagers, the gambler’s
capital returns to $k.
▶ Then, the probability of eventually winning $n is the same as
it was initially.
▶ The memoryless character of the process means that the
probability of winning $n or losing all his money only depends
on how much capital the gambler has, and not on how many
previous wagers the gambler made.

9 / 16
Solution Cont.
▶ Let $p_k$ denote the probability of reaching $n$ when the gambler's
  fortune is $k$.
▶ If heads is tossed, the gambler's fortune increases to $k + 1$, and by
  the memoryless observation above, the probability of eventually
  winning is the same as if the game had started with $k + 1$, namely
  $p_{k+1}$.
▶ Similarly, if tails is tossed, the gambler's fortune decreases to
  $k - 1$, and the probability of winning becomes $p_{k-1}$.
▶ Conditioning on the outcome of the first toss therefore combines these
  two cases into a single recurrence for $p_k$, derived on the next
  slide.

10 / 16
Solution Cont.
▶ Assume a fair game, where p = 1/2.
▶ Hence,
  $$
  p_k = \frac{1}{2}\, p_{k+1} + \frac{1}{2}\, p_{k-1},
  $$
  or
  $$
  p_{k+1} - p_k = p_k - p_{k-1}, \quad \text{for } k = 1, \dots, n-1, \tag{2}
  $$
  with $p_0 = 0$ and $p_n = 1$.

11 / 16
Solution Cont.
▶ Unwinding the recurrence gives
  $$
  p_k - p_{k-1} = p_{k-1} - p_{k-2} = p_{k-2} - p_{k-3} = \cdots = p_1 - p_0 = p_1,
  $$
  for $k = 1, \dots, n$.
▶ We have that $p_2 - p_1 = p_1$, giving $p_2 = 2p_1$.
▶ Also, $p_3 - p_2 = p_3 - 2p_1 = p_1$, giving $p_3 = 3p_1$.
▶ More generally, $p_k = kp_1$, for $k = 1, \dots, n$.

12 / 16
Solution Cont.
▶ Sum Equation (2) over suitable k to obtain
  $$
  \sum_{k=1}^{n-1} (p_{k+1} - p_k) = \sum_{k=1}^{n-1} (p_k - p_{k-1}).
  $$
▶ Both sums telescope to
  $$
  p_n - p_1 = p_{n-1} - p_0,
  $$
  which gives
  $$
  1 - p_1 = p_{n-1} = (n-1)p_1, \quad \text{so } p_1 = 1/n.
  $$
  Thus,
  $$
  p_k = kp_1 = \frac{k}{n}, \quad \text{for } k = 0, \dots, n.
  $$
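
As a quick check, $p_k = k/n$ indeed satisfies both the recurrence in
Equation (2) and the boundary conditions:
$$
p_{k+1} - p_k = \frac{k+1}{n} - \frac{k}{n} = \frac{1}{n}
= \frac{k}{n} - \frac{k-1}{n} = p_k - p_{k-1},
$$
with $p_0 = 0/n = 0$ and $p_n = n/n = 1$.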

13 / 16
Solution Cont.
▶ The probability that the gambler eventually wins $n is k/n.
▶ In other words, in a fair game the probability that the gambler
  reaches n dollars before reaching zero dollars equals the initial
  fortune k divided by n (in the classical two-gambler version of the
  game, n is the combined wealth of both players).
▶ Hence, the probability of the gambler's ruin is (n − k)/n.
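
A Monte Carlo check in the spirit of Section 1.3 ties the lecture
together. A minimal R sketch; the starting capital, target, and number
of repetitions are arbitrary illustrative choices:

set.seed(4)

# One play-through: returns TRUE if the gambler is ruined (hits $0)
ruined <- function(k, n) {
  while (k > 0 && k < n) {
    k <- k + sample(c(1, -1), 1)  # fair game: +/- $1, prob 1/2 each
  }
  k == 0
}

k <- 10; n <- 40
mean(replicate(5000, ruined(k, n)))  # simulated ruin probability
(n - k) / n                          # theoretical value: 0.75

The simulated proportion should land close to 0.75, in agreement with
the formula (n − k)/n.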

14 / 16
References I

Dobrow, Robert P. (2016). Introduction to Stochastic Processes with R.
John Wiley & Sons, Incorporated.

15 / 16
Questions?

16 / 16
