

Chapter 3: Expectation and Variance


In the previous chapter we looked at probability, with three major themes:

1. Conditional probability: P(A | B).

2. First-step analysis for calculating eventual probabilities in a stochastic process.

3. Calculating probabilities for continuous and discrete random variables.

In this chapter, we look at the same themes for expectation and variance.

The expectation of a random variable is the long-term average of the random variable.

Imagine observing many thousands of independent random values from the random variable of interest. Take the average of these random values. The expectation is the value of this average as the sample size tends to infinity.

We will repeat the three themes of the previous chapter, but in a different order.

1. Calculating expectations for continuous and discrete random variables.

2. Conditional expectation: the expectation of a random variable X, conditional on the value taken by another random variable Y. If the value of Y affects the value of X (i.e. X and Y are dependent), the conditional expectation of X given the value of Y will be different from the overall expectation of X.

3. First-step analysis for calculating the expected amount of time needed to reach a particular state in a process (e.g. the expected number of shots before we win a game of tennis).

We will also study similar themes for variance.

3.1 Expectation

The mean, expected value, or expectation of a random variable X is written as E(X) or µ_X. If we observe N random values of X, then the mean of the N values will be approximately equal to E(X) for large N. The expectation is defined differently for continuous and discrete random variables.

Definition: Let X be a continuous random variable with p.d.f. f_X(x). The expected value of X is

E(X) = ∫_{−∞}^{∞} x f_X(x) dx.

Definition: Let X be a discrete random variable with probability function f_X(x). The expected value of X is

E(X) = Σ_x x P(X = x) = Σ_x x f_X(x).

Expectation of g(X)

Let g(X) be a function of X. We can imagine a long-term average of g(X) just as we can imagine a long-term average of X. This average is written as E(g(X)). Imagine observing X many times (N times) to give results x_1, x_2, . . . , x_N. Apply the function g to each of these observations, to give g(x_1), . . . , g(x_N). The mean of g(x_1), g(x_2), . . . , g(x_N) approaches E(g(X)) as the number of observations N tends to infinity.

Definition: Let X be a continuous random variable, and let g be a function. The expected value of g(X) is

E(g(X)) = ∫_{−∞}^{∞} g(x) f_X(x) dx.

Definition: Let X be a discrete random variable, and let g be a function. The expected value of g(X) is

E(g(X)) = Σ_x g(x) f_X(x) = Σ_x g(x) P(X = x).
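
If you know R, you can see the long-run-average interpretation of E(X) and E(g(X)) directly by simulation. The choices below are purely illustrative (they are not part of the definitions above): X ∼ Uniform(0, 1) and g(x) = x^2, for which E(X) = 1/2 and E(g(X)) = 1/3.

> # Illustrate E(X) and E(g(X)) as long-run averages (illustrative example).
> set.seed(1)
> x <- runif(100000)    # many independent observations of X ~ Uniform(0, 1)
> mean(x)               # should be close to E(X) = 1/2
> mean(x^2)             # should be close to E(X^2) = 1/3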


Expectation of XY: the definition of E(XY)


Suppose we have two random variables, X and Y . These might be
independent, in which case the value of X has no effect on the value of Y .
Alternatively, X and Y might be dependent: when we observe a random
value for X, it might influence the random values of Y that we are most likely
to observe. For example, X might be the height of a randomly selected
person, and Y might be the weight. On the whole, larger values of X will be
associated with larger values of Y .

To understand what E(XY) means, think of observing a large number of pairs (x_1, y_1), (x_2, y_2), . . . , (x_N, y_N). If X and Y are dependent, the value x_i might affect the value y_i, and vice versa, so we have to keep the observations together in their pairings. As the number of pairs N tends to infinity, the average

(1/N) Σ_{i=1}^{N} x_i × y_i

approaches the expectation E(XY).

For example, if X is height and Y is weight, E(XY) is the average of (height × weight). We are interested in E(XY) because it is used for calculating the covariance and correlation, which are measures of how closely related X and Y are (see Section 3.2).

Properties of Expectation

i) Let g and h be functions, and let a and b be constants. For any random variable X (discrete or continuous),

E{a g(X) + b h(X)} = a E{g(X)} + b E{h(X)}.

In particular, E(aX + b) = aE(X) + b.

ii) Let X and Y be ANY random variables (discrete, continuous, independent, or non-independent). Then

E(X + Y) = E(X) + E(Y).

More generally, for ANY random variables X_1, . . . , X_n,

E(X_1 + . . . + X_n) = E(X_1) + . . . + E(X_n).


iii) Let X and Y be independent random variables, and g, h be functions. Then

E(XY) = E(X)E(Y)

E{g(X) h(Y)} = E{g(X)} E{h(Y)}.

Notes: 1. E(XY) = E(X)E(Y) is ONLY generally true if X and Y are INDEPENDENT.

2. If X and Y are independent, then E(XY) = E(X)E(Y). However, the converse is not generally true: it is possible for E(XY) = E(X)E(Y) even though X and Y are dependent.
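
A quick simulation sketch of Note 1 (the Normal variables below are illustrative choices, not part of the notes): when Y is independent of X, the sample average of XY is close to E(X)E(Y); once Y depends on X, it need not be.

> set.seed(1)
> x <- rnorm(100000)          # X ~ Normal(0, 1)
> y1 <- rnorm(100000)         # independent of X
> y2 <- x + rnorm(100000)     # dependent on X
> mean(x * y1)                # close to E(X)E(Y) = 0
> mean(x * y2)                # close to 1, not equal to E(X)E(Y) = 0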

Probability as an Expectation

Let A be any event. We can write P(A) as an expectation, as follows. Define the indicator function:

I_A = 1 if event A occurs, 0 otherwise.

Then I_A is a random variable, and

E(I_A) = Σ_{r=0}^{1} r P(I_A = r)
       = 0 × P(I_A = 0) + 1 × P(I_A = 1)
       = P(I_A = 1)
       = P(A).

Thus P(A) = E(I_A) for any event A.
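
In R this is simply the fact that a probability can be estimated by the sample mean of an indicator variable. The event below, A = {X > 1} with X ∼ Normal(0, 1), is just an illustrative choice.

> set.seed(1)
> x <- rnorm(100000)
> ind <- as.numeric(x > 1)    # indicator I_A of the event A = {X > 1}
> mean(ind)                   # E(I_A) estimated by a sample mean
> 1 - pnorm(1)                # the exact P(A), for comparison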


3.2 Variance, covariance, and correlation

The variance of a random variable X is a measure of how spread out it is. Are the values of X clustered tightly around their mean, or can we commonly observe values of X a long way from the mean value? The variance measures how far the values of X are from their mean, on average.

Definition: Let X be any random variable. The variance of X is

Var(X) = E{(X − µ_X)^2} = E(X^2) − {E(X)}^2.

The variance is the mean squared deviation of a random variable from its own mean.

If X has high variance, we can observe values of X a long way from the mean.

If X has low variance, the values of X tend to be clustered tightly around the mean value.

Example: Let X be a continuous random variable with p.d.f.

f_X(x) = 2x^{−2} for 1 < x < 2, and 0 otherwise.

Find E(X) and Var(X).

E(X) = ∫_{−∞}^{∞} x f_X(x) dx = ∫_1^2 x × 2x^{−2} dx = ∫_1^2 2x^{−1} dx
     = [2 log(x)]_1^2
     = 2 log(2) − 2 log(1)
     = 2 log(2).


For Var(X), we use

Var(X) = E(X^2) − {E(X)}^2.

Now

E(X^2) = ∫_{−∞}^{∞} x^2 f_X(x) dx = ∫_1^2 x^2 × 2x^{−2} dx = ∫_1^2 2 dx = [2x]_1^2
       = 2 × 2 − 2 × 1
       = 2.

Thus

Var(X) = E(X^2) − {E(X)}^2 = 2 − {2 log(2)}^2 = 0.0782.
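
In the same spirit as the R checks later in this chapter, the two integrals above can be checked numerically (a sketch, not part of the worked example):

> fX <- function(x) 2 * x^(-2)                         # the p.d.f. on (1, 2)
> EX  <- integrate(function(x) x   * fX(x), 1, 2)$value
> EX2 <- integrate(function(x) x^2 * fX(x), 1, 2)$value
> EX                  # should equal 2 * log(2) = 1.386...
> EX2 - EX^2          # should equal 0.0782...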

Covariance
Covariance is a measure of the association or dependence between two
random variables X and Y . Covariance can be either positive or negative.
(Variance is always positive.)

Definition: Let X and Y be any random variables. The covariance between X and Y is given by

cov(X, Y) = E{(X − µ_X)(Y − µ_Y)} = E(XY) − E(X)E(Y),

where µ_X = E(X) and µ_Y = E(Y).

1. cov(X, Y) will be positive if large values of X tend to occur with large values of Y, and small values of X tend to occur with small values of Y. For example, if X is height and Y is weight of a randomly selected person, we would expect cov(X, Y) to be positive.


2. cov(X, Y) will be negative if large values of X tend to occur with small values of Y, and small values of X tend to occur with large values of Y. For example, if X is age of a randomly selected person, and Y is heart rate, we would expect X and Y to be negatively correlated (older people have slower heart rates).

3. If X and Y are independent, then there is no pattern between large values of X and large values of Y, so cov(X, Y) = 0. However, cov(X, Y) = 0 does NOT imply that X and Y are independent, unless X and Y are Normally distributed.

Properties of Variance

i) Let g be a function, and let a and b be constants. For any random variable X (discrete or continuous),

Var{a g(X) + b} = a^2 Var{g(X)}.

In particular, Var(aX + b) = a^2 Var(X).

ii) Let X and Y be independent random variables. Then

Var(X + Y) = Var(X) + Var(Y).

iii) If X and Y are NOT independent, then

Var(X + Y) = Var(X) + Var(Y) + 2 cov(X, Y).
Correlation (non-examinable)
The correlation coefficient of X and Y is a measure of the linear association
between X and Y . It is given by the covariance, scaled by the overall
variability in X and Y . As a result, the correlation coefficient is always
between −1 and +1, so it is easily compared for different quantities.
Definition: The correlation between X and Y, also called the correlation coefficient, is given by

corr(X, Y) = cov(X, Y) / √(Var(X) Var(Y)).


The correlation measures linear association between X and Y. It takes values only between −1 and +1, and has the same sign as the covariance.

The correlation is ±1 if and only if there is a perfect linear relationship between X and Y, i.e. corr(X, Y) = ±1 ⟺ Y = aX + b for some constants a and b (corr(X, Y) = +1 when a > 0, and −1 when a < 0).

The correlation is 0 if X and Y are independent, but a correlation of 0 does not imply that X and Y are independent.
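
A small simulation sketch (the linear model for Y below is an assumed illustration, not part of the notes): the sample covariance can take any scale, while the sample correlation stays between −1 and +1 with the same sign.

> set.seed(1)
> x <- rnorm(10000)
> y <- 0.5 * x + rnorm(10000)   # Y depends linearly on X, plus noise
> cov(x, y)                     # should be close to 0.5
> cor(x, y)                     # between -1 and +1, same sign as cov(x, y)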

3.3 Conditional Expectation and Conditional Variance

Throughout this section, we will assume for simplicity that X and Y are discrete random variables. However, exactly the same results hold for continuous random variables too.

Suppose that X and Y are discrete random variables, possibly dependent on each other. Suppose that we fix Y at the value y. This gives us a set of conditional probabilities P(X = x | Y = y) for all possible values x of X. This is called the conditional distribution of X, given that Y = y.

Definition: Let X and Y be discrete random variables. The conditional probability function of X, given that Y = y, is:

P(X = x | Y = y) = P(X = x AND Y = y) / P(Y = y).

We write the conditional probability function as:

f_{X | Y}(x | y) = P(X = x | Y = y).

Note: The conditional probabilities f_{X | Y}(x | y) sum to one, just like any other probability function:

Σ_x P(X = x | Y = y) = Σ_x P_{Y = y}(X = x) = 1,

using the subscript notation P_{Y = y} of Section 2.3.


We can also find the expectation and variance of X with respect to this conditional distribution. That is, if we know that the value of Y is fixed at y, then we can find the mean value of X given that Y takes the value y, and also the variance of X given that Y = y.

Definition: Let X and Y be discrete random variables. The conditional expectation of X, given that Y = y, is

µ_{X | Y = y} = E(X | Y = y) = Σ_x x f_{X | Y}(x | y).

E(X | Y = y) is the mean value of X, when Y is fixed at y.

Conditional expectation as a random variable

The unconditional expectation of X, E(X), is just a number: e.g. EX = 2 or EX = 5.8.

The conditional expectation, E(X | Y = y), is a number depending on y.

If Y has an influence on the value of X, then Y will have an influence on the average value of X. So, for example, we would expect E(X | Y = 2) to be different from E(X | Y = 3).

We can therefore view E(X | Y = y) as a function of y, say E(X | Y = y) = h(y).

To evaluate this function, h(y) = E(X | Y = y), we:

i) fix Y at the chosen value y;

ii) find the expectation of X when Y is fixed at this value.


However, we could also evaluate the function at a random value of Y:

i) observe a random value of Y;

ii) fix Y at that observed random value;

iii) evaluate E(X | Y = observed random value).

We obtain a random variable: E(X | Y) = h(Y).

The randomness comes from the randomness in Y, not in X.

Conditional expectation, E(X | Y), is a random variable with randomness inherited from Y, not X.

Example: Suppose

Y = 1 with probability 1/8,    Y = 2 with probability 7/8,

and

X | Y = 2Y with probability 3/4,    X | Y = 3Y with probability 1/4.

Conditional expectation of X given Y = y is a number depending on y:

If Y = 1, then: X | (Y = 1) = 2 with probability 3/4, or 3 with probability 1/4,

so E(X | Y = 1) = 2 × 3/4 + 3 × 1/4 = 9/4.

If Y = 2, then: X | (Y = 2) = 4 with probability 3/4, or 6 with probability 1/4,

so E(X | Y = 2) = 4 × 3/4 + 6 × 1/4 = 18/4.

Thus E(X | Y = y) = 9/4 if y = 1, and 18/4 if y = 2.

So E(X | Y = y) is a number depending on y, or a function of y.


Conditional expectation of X given random Y is a random variable:

From above, E(X | Y) = 9/4 if Y = 1 (probability 1/8), and 18/4 if Y = 2 (probability 7/8).

So E(X | Y) = 9/4 with probability 1/8, and 18/4 with probability 7/8.

Thus E(X | Y) is a random variable.

The randomness in E(X | Y) is inherited from Y, not from X.

Conditional expectation is a very useful tool for finding the unconditional expectation of X (see below). Just like the Partition Theorem, it is useful because it is often easier to specify conditional probabilities than to specify overall probabilities.

Conditional variance

The conditional variance is similar to the conditional expectation.

• Var(X | Y = y) is the variance of X, when Y is fixed at the value Y = y.

• Var(X | Y) is a random variable, giving the variance of X when Y is fixed at a value to be selected randomly.

Definition: Let X and Y be random variables. The conditional variance of X, given Y, is given by

Var(X | Y) = E(X^2 | Y) − {E(X | Y)}^2 = E{(X − µ_{X | Y})^2 | Y}.

Like expectation, Var(X | Y = y) is a number depending on y (a function of y), while Var(X | Y) is a random variable with randomness inherited from Y.


Laws of Total Expectation and Variance

If all the expectations below are finite, then for ANY random variables X and Y, we have:

i) E(X) = E_Y{E(X | Y)}.    (Law of Total Expectation)

Note that we can pick any r.v. Y, to make the expectation as easy as we can.

ii) E(g(X)) = E_Y{E(g(X) | Y)} for any function g.

iii) Var(X) = E_Y{Var(X | Y)} + Var_Y{E(X | Y)}.    (Law of Total Variance)

Note: E_Y and Var_Y denote expectation over Y and variance over Y, i.e. the expectation or variance is computed over the distribution of the random variable Y.

The Law of Total Expectation says that the total average is the average of case-by-case averages.

• The total average is E(X);

• The case-by-case averages are E(X | Y) for the different values of Y;

• The average of case-by-case averages is the average over Y of the Y-case averages: E_Y{E(X | Y)}.



Example: In the example above, we had:

E(X | Y) = 9/4 with probability 1/8, and 18/4 with probability 7/8.

The total average is:

E(X) = E_Y{E(X | Y)} = (9/4) × (1/8) + (18/4) × (7/8) = 4.22.

Proof of (i), (ii), (iii):

(i) is a special case of (ii), so we just need to prove (ii). Begin at the RHS:

RHS = E_Y{E(g(X) | Y)}

    = Σ_y [ Σ_x g(x) P(X = x | Y = y) ] P(Y = y)

    = Σ_x g(x) Σ_y P(X = x | Y = y) P(Y = y)

    = Σ_x g(x) P(X = x)    (partition rule)

    = E(g(X)) = LHS.

(iii) We wish to prove Var(X) = E_Y[Var(X | Y)] + Var_Y[E(X | Y)]. Begin at the RHS:

E_Y[Var(X | Y)] + Var_Y[E(X | Y)]

  = E_Y{ E(X^2 | Y) − (E(X | Y))^2 } + ( E_Y{(E(X | Y))^2} − [E_Y{E(X | Y)}]^2 )

  = E_Y{E(X^2 | Y)} − E_Y{(E(X | Y))^2} + E_Y{(E(X | Y))^2} − (EX)^2

    (using E_Y{E(X^2 | Y)} = E(X^2) and E_Y{E(X | Y)} = E(X), by part (i))

  = E(X^2) − (EX)^2

  = Var(X) = LHS.


3.4 Examples of Conditional Expectation and Variance

1. Swimming with dolphins

Fraser runs a dolphin-watch business. Every day, he is unable to run the trip due to bad weather with probability p, independently of all other days. Fraser works every day except the bad-weather days, which he takes as holiday.

Let Y be the number of consecutive days Fraser has to work between bad-weather days. Let X be the total number of customers who go on Fraser’s trip in this period of Y days. Conditional on Y, the distribution of X is

(X | Y) ∼ Poisson(µY).

(a) Name the distribution of Y , and state E(Y ) and Var(Y ).


(b) Find the expectation and the variance of the number of customers
Fraser sees between bad-weather days, E(X) and Var(X).

(a) Let ‘success’ be ‘bad-weather day’ and ‘failure’ be ‘work-day’.

Then P(success) = P(bad-weather) = p.

Y is the number of failures before the first success. So

Y ∼ Geometric(p).

Thus

E(Y) = (1 − p)/p,    Var(Y) = (1 − p)/p^2.

(b) We know (X | Y) ∼ Poisson(µY), so

E(X | Y) = Var(X | Y) = µY.


By the Law of Total Expectation:

E(X) = E_Y{E(X | Y)}
     = E_Y(µY)
     = µ E_Y(Y)

∴ E(X) = µ(1 − p)/p.

Variance: By the Law of Total Variance,

Var(X) = E_Y{Var(X | Y)} + Var_Y{E(X | Y)}
       = E_Y(µY) + Var_Y(µY)
       = µ E_Y(Y) + µ^2 Var_Y(Y)
       = µ(1 − p)/p + µ^2 (1 − p)/p^2

∴ Var(X) = µ(1 − p)(p + µ)/p^2.
Checking your answer in R:

If you know how to use a statistical package like R, you can check your
answer to the question above as follows.

> # Pick a value for p, e.g. p = 0.2.
> # Pick a value for mu, e.g. mu = 25.
>
> # Generate 10,000 random values of Y ~ Geometric(p = 0.2):
> y <- rgeom(10000, prob=0.2)
>
> # Generate 10,000 random values of X conditional on Y:
> # use (X | Y) ~ Poisson(mu * Y) ~ Poisson(25 * Y)
> x <- rpois(10000, lambda = 25*y)


> # Find the sample mean of X (should be close to E(X)):
> mean(x)
[1] 100.6606
>
> # Find the sample variance of X (should be close to var(X)):
> var(x)
[1] 12624.47
>
> # Check the formula for E(X):
> 25 * (1 - 0.2) / 0.2
[1] 100
>
> # Check the formula for var(X):
> 25 * (1 - 0.2) * (0.2 + 25) / 0.2^2
[1] 12600

The formulas we obtained by working give E(X) = 100 and Var(X) = 12600.
The sample mean was x = 100.6606 (close to 100), and the sample
variance was 12624.47 (close to 12600). Thus our working seems to have
been correct.
2. Randomly stopped sum

This model arises very commonly in stochastic processes. A random number N of events occur, and each event i has associated with it some cost, penalty, or reward X_i. The question is to find the mean and variance of the total cost / reward:

T_N = X_1 + X_2 + . . . + X_N.

The difficulty is that the number N of terms in the sum is itself random.

T_N is called a randomly stopped sum: it is a sum of X_i's, randomly stopped at the random number of N terms.

Example: Think of a cash machine, which has to be loaded with enough money to cover the day’s business. The number of customers per day is a random number N. Customer i withdraws a random amount X_i. The total amount withdrawn during the day is a randomly stopped sum: T_N = X_1 + . . . + X_N.


Cash machine example

The citizens of Remuera withdraw money from a cash machine according to the following probability function (X):

Amount, x ($)   50     100    200
P(X = x)        0.3    0.5    0.2

The number of customers per day has the distribution N ∼ Poisson(λ).

Let T_N = X_1 + X_2 + . . . + X_N be the total amount of money withdrawn in a day, where each X_i has the probability function above, and X_1, X_2, . . . are independent of each other and of N.

T_N is a randomly stopped sum, stopped by the random number of N customers.

(a) Show that E(X) = 105, and Var(X) = 2725.

(b) Find E(T_N) and Var(T_N): the mean and variance of the amount of money withdrawn each day.

Solution

(a) Exercise.

(b) Let T_N = Σ_{i=1}^{N} X_i. If we knew how many terms were in the sum, we could easily find E(T_N) and Var(T_N) as the mean and variance of a sum of independent r.v.s. So ‘pretend’ we know how many terms are in the sum: i.e. condition on N.

E(T_N | N) = E(X_1 + X_2 + . . . + X_N | N)
           = E(X_1 + X_2 + . . . + X_N), where N is now considered constant
             (because all X_i's are independent of N)
           = E(X_1) + E(X_2) + . . . + E(X_N)
             (we do NOT need independence of the X_i's for this)
           = N × E(X)    (because all X_i's have the same mean, E(X))
           = 105N.


Similarly,

Var(T_N | N) = Var(X_1 + X_2 + . . . + X_N | N)
             = Var(X_1 + X_2 + . . . + X_N), where N is now considered constant
               (because all X_i's are independent of N)
             = Var(X_1) + Var(X_2) + . . . + Var(X_N)
               (we DO need independence of the X_i's for this)
             = N × Var(X)    (because all X_i's have the same variance, Var(X))
             = 2725N.

So

E(T_N) = E_N{E(T_N | N)}
       = E_N(105N)
       = 105 E_N(N)
       = 105λ,

because N ∼ Poisson(λ), so E(N) = λ.

Similarly,

Var(T_N) = E_N{Var(T_N | N)} + Var_N{E(T_N | N)}
         = E_N{2725N} + Var_N{105N}
         = 2725 E_N(N) + 105^2 Var_N(N)
         = 2725λ + 11025λ
         = 13750λ,

because N ∼ Poisson(λ), so E(N) = Var(N) = λ.


Check in R (advanced)

> # Create a function tn.func to calculate a single value of T_N
> # for a given value N=n:
> tn.func <- function(n){
      sum(sample(c(50, 100, 200), n, replace=T,
                 prob=c(0.3, 0.5, 0.2)))
  }

> # Generate 10,000 random values of N, using lambda=50:
> N <- rpois(10000, lambda=50)
> # Generate 10,000 random values of T_N, conditional on N:
> TN <- sapply(N, tn.func)
> # Find the sample mean of T_N values, which should be close to
> # 105 * 50 = 5250:
> mean(TN)
[1] 5253.255
> # Find the sample variance of T_N values, which should be close
> # to 13750 * 50 = 687500:
> var(TN)
[1] 682469.4

All seems well. Note that the sample variance is often some distance from the true variance, even when the sample size is 10,000.

General result for randomly stopped sums:

Suppose X_1, X_2, . . . each have the same mean µ and variance σ^2, and X_1, X_2, . . . , and N are mutually independent. Let T_N = X_1 + . . . + X_N be the randomly stopped sum. By following similar working to that above:

E(T_N) = E( Σ_{i=1}^{N} X_i ) = µ E(N)

Var(T_N) = Var( Σ_{i=1}^{N} X_i ) = σ^2 E(N) + µ^2 Var(N).
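
The general result can be checked by simulation with a different choice of X_i (the Exponential choice below is only an illustration): here µ = 2, σ^2 = 4 and N ∼ Poisson(20), so E(T_N) = 2 × 20 = 40 and Var(T_N) = 4 × 20 + 4 × 20 = 160.

> set.seed(1)
> N <- rpois(10000, lambda = 20)
> TN <- sapply(N, function(n) sum(rexp(n, rate = 1/2)))  # X_i ~ Exponential(1/2)
> mean(TN)     # should be close to mu * E(N) = 40
> var(TN)      # should be close to sigma^2 * E(N) + mu^2 * var(N) = 160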


3.5 First-Step Analysis for calculating expected reaching times

Remember from Section 2.6 that we use First-Step Analysis for finding the probability of eventually reaching a particular state in a stochastic process. First-step analysis for probabilities uses conditional probability and the Partition Theorem (Law of Total Probability).

In the same way, we can use first-step analysis for finding the expected reaching time for a state.

This is the expected number of steps that will be needed to reach a particular state from a specified start-point, or the expected length of time it will take to get there if we have a continuous-time process.

Just as first-step analysis for probabilities uses conditional probability and the law of total probability (Partition Theorem), first-step analysis for expectations uses conditional expectation and the law of total expectation.

First-step analysis for probabilities:

The first-step analysis procedure for probabilities can be summarized as follows:

P(eventual goal) = Σ_{first-step options} P(eventual goal | option) P(option).

This is because the first-step options form a partition of the sample space.

First-step analysis for expected reaching times:

The expression for expected reaching times is very similar:

E(reaching time) = Σ_{first-step options} E(reaching time | option) P(option).

This follows immediately from the law of total expectation:

E(X) = E_Y{E(X | Y)} = Σ_y E(X | Y = y) P(Y = y).

Let X be the reaching time, and let Y be the label for possible options: i.e. Y = 1, 2, 3, . . . for options 1, 2, 3, . . .

We then obtain:

E(X) = Σ_y E(X | Y = y) P(Y = y),

i.e. E(reaching time) = Σ_{first-step options} E(reaching time | option) P(option).

Example 1: Mouse in a Maze

A mouse is trapped in a room with three exits at the centre of a maze.

• Exit 1 leads outside the maze after 3 minutes.

• Exit 2 leads back to the room after 5 minutes.

• Exit 3 leads back to the room after 7 minutes.

Every time the mouse makes a choice, it is equally likely to choose any of the three exits. What is the expected time taken for the mouse to leave the maze?

Let X = time taken for the mouse to leave the maze, starting from room R.

Let Y = exit the mouse chooses first (1, 2, or 3).

[Diagram: Room R with Exit 1 (3 mins, leads outside the maze) and Exits 2 and 3 (5 mins and 7 mins, both leading back to the room), each chosen with probability 1/3.]


Then:

E(X) = E_Y{E(X | Y)} = Σ_{y=1}^{3} E(X | Y = y) P(Y = y)

     = E(X | Y = 1) × 1/3 + E(X | Y = 2) × 1/3 + E(X | Y = 3) × 1/3.

But:

E(X | Y = 1) = 3 minutes

E(X | Y = 2) = 5 + E(X)    (after 5 mins, back in Room, time E(X) to get out)

E(X | Y = 3) = 7 + E(X)    (after 7 mins, back in Room)

So

E(X) = 3 × 1/3 + (5 + EX) × 1/3 + (7 + EX) × 1/3

     = 15 × 1/3 + 2(EX) × 1/3

⇒ (1/3) E(X) = 15 × 1/3

⇒ E(X) = 15 minutes.

Notation for quick solutions of first-step analysis problems

As for probabilities, first-step analysis for expectations relies on a good notation. The best way to tackle the problem above is as follows.

Define m_R = E(time to leave maze | start in Room).

First-step analysis:

m_R = (1/3) × 3 + (1/3) × (5 + m_R) + (1/3) × (7 + m_R)

⇒ 3 m_R = (3 + 5 + 7) + 2 m_R

⇒ m_R = 15 minutes (as before).
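
A simulation check in R (a sketch, in the same style as the earlier checks): simulate many escapes and compare the average escape time with m_R = 15 minutes.

> escape.time <- function(){
      t <- 0
      repeat{
          exit <- sample(1:3, 1)            # each exit equally likely
          if (exit == 1) return(t + 3)      # Exit 1: outside after 3 minutes
          t <- t + ifelse(exit == 2, 5, 7)  # Exits 2 and 3: back to the room
      }
  }
> mean(replicate(10000, escape.time()))     # should be close to 15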


Example 2: Counting the steps

The most common questions involving first-step analysis for expectations ask for the expected number of steps before finishing. The number of steps is usually equal to the number of arrows traversed from the current state to the end.

The key point to remember is that when we take expectations, we are usually counting something.

You must remember to add on whatever we are counting, to every step taken.
The mouse is put in a new maze with two rooms, pictured here. Starting from Room 1, what is the expected number of steps the mouse takes before it reaches the exit?

[Diagram: Room 1, Room 2, and the EXIT; from each room, the mouse's next step is to Room 1, Room 2, or the EXIT, each with probability 1/3.]

1. Define

m_1 = E(number of steps to finish | start in Room 1)

m_2 = E(number of steps to finish | start in Room 2).

2. First-step analysis:

m_1 = (1/3) × 1 + (1/3)(1 + m_1) + (1/3)(1 + m_2)    (a)

m_2 = (1/3) × 1 + (1/3)(1 + m_1) + (1/3)(1 + m_2)    (b)

We could solve as simultaneous equations, as usual, but in this case inspection of (a) and (b) shows immediately that m_1 = m_2. Thus:

(a) ⇒ 3 m_1 = 3 + 2 m_1

    ⇒ m_1 = 3 steps.

Further, m_2 = m_1 = 3 steps also.

Incrementing before partitioning

In many problems, all possible first-step options incur the same initial penalty. The last example is such a case, because every possible step adds 1 to the total number of steps taken. In a case where all steps incur the same penalty, there are two ways of proceeding:

1. Add the penalty onto each option separately: e.g.

m_1 = (1/3) × 1 + (1/3)(1 + m_1) + (1/3)(1 + m_2).

2. (Usually quicker) Add the penalty once only, at the beginning:

m_1 = 1 + (1/3) × 0 + (1/3) m_1 + (1/3) m_2.

In each case, we will get the same answer (check). This is because the option probabilities sum to 1, so in Method 1 we are adding (1/3 + 1/3 + 1/3) × 1 = 1 × 1 = 1, just as we are in Method 2.

3.6 Probability as a conditional expectation

Recall from Section 3.1 that for any event A, we can write P(A) as an expectation as follows. Define the indicator random variable:

I_A = 1 if event A occurs, 0 otherwise.

Then E(I_A) = P(I_A = 1) = P(A).

We can refine this expression further, using the idea of conditional expectation. Let Y be any random variable. Then

P(A) = E(I_A) = E_Y{E(I_A | Y)}.


But

E(I_A | Y) = Σ_{r=0}^{1} r P(I_A = r | Y)
           = 0 × P(I_A = 0 | Y) + 1 × P(I_A = 1 | Y)
           = P(I_A = 1 | Y)
           = P(A | Y).

Thus

P(A) = E_Y{E(I_A | Y)} = E_Y{P(A | Y)}.

This means that for any random variable X (discrete or continuous), and for any set of values S (a discrete set or a continuous set), we can write:

• for any discrete random variable Y,

P(X ∈ S) = Σ_y P(X ∈ S | Y = y) P(Y = y).

• for any continuous random variable Y,

P(X ∈ S) = ∫_y P(X ∈ S | Y = y) f_Y(y) dy.

Example of probability as a conditional expectation: winning a lottery

Suppose that a million people have bought tickets for the weekly lottery draw. Each person has a probability of one-in-a-million of selecting the winning numbers. If more than one person selects the winning numbers, the winner will be chosen at random from all those with matching numbers.


You watch the lottery draw on TV and your numbers match the winners!!
You had a one-in-a-million chance, and there were a million players, so it
must be YOU, right?

Not so fast. Before you rush to claim your prize, let’s calculate the probability that you really will win. You definitely win if you are the only person with matching numbers, but you can also win if there are multiple matching tickets and yours is the one selected at random from the matches.

Define Y to be the number of OTHER matching tickets out of the OTHER 1 million tickets sold. (If you are lucky, Y = 0 so you have definitely won.)

If there are 1 million tickets and each ticket has a one-in-a-million chance of having the winning numbers, then

Y ∼ Poisson(1) approximately.

The relationship Y ∼ Poisson(1) arises because of the Poisson approximation to the Binomial distribution.

(a) What is the probability function of Y, f_Y(y)?

f_Y(y) = P(Y = y) = (1^y / y!) e^{−1} = 1 / (e × y!)    for y = 0, 1, 2, . . .

(b) What is the probability that yours is the only matching ticket?

P(only one matching ticket) = P(Y = 0) = 1/e = 0.368.

(c) The prize is chosen at random from all those who have matching tickets. What is the probability that you win if there are Y = y OTHER matching tickets?

Let W be the event that I win. Then

P(W | Y = y) = 1 / (y + 1).


(d) Overall, what is the probability that you win, given that you have a matching ticket?

P(W) = E_Y{P(W | Y)}

     = Σ_{y=0}^{∞} P(W | Y = y) P(Y = y)

     = Σ_{y=0}^{∞} (1 / (y + 1)) × (1 / (e × y!))

     = (1/e) Σ_{y=0}^{∞} 1 / ((y + 1) y!)

     = (1/e) Σ_{y=0}^{∞} 1 / (y + 1)!

     = (1/e) { Σ_{y=0}^{∞} 1/y! − 1/0! }

     = (1/e) {e − 1}

     = 1 − 1/e

     = 0.632.

Disappointing?
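
A quick check in R (a sketch): simulate Y ∼ Poisson(1) and average the conditional win probability 1/(Y + 1).

> set.seed(1)
> y <- rpois(100000, lambda = 1)    # number of OTHER matching tickets
> mean(1 / (y + 1))                 # should be close to 1 - 1/e = 0.632
> 1 - exp(-1)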

3.7 Special process: a model for gene spread

Suppose that a particular gene comes in two variants (alleles): A and B. We might be interested in the case where one of the alleles, say A, is harmful, for example because it causes a disease. All animals in the population must have either allele A or allele B. We want to know how long it will take before all animals have the same allele, and whether this allele will be the harmful allele A or the safe allele B. This simple model assumes asexual reproduction. It is very similar to the famous Wright-Fisher model, which is a fundamental model of population genetics.
Assumptions:
1. The population stays at constant size N for all generations.
2. At the end of each generation, the N animals create N offspring and
then they immediately die.
3. If there are x parents with allele A, and N − x with allele B, then each
offspring gets allele A with probability x/N and allele B with 1 − x/N.
4. All offspring are independent.

Stochastic process:

The state of the process at time t is X_t = the number of animals with allele A at generation t.

Each X_t could be 0, 1, 2, . . . , N. The state space is {0, 1, 2, . . . , N}.

Distribution of [ X_{t+1} | X_t ]

Suppose that X_t = x, so x of the animals at generation t have allele A. Each of the N offspring will get A with probability x/N and B with probability 1 − x/N.

Thus the number of offspring at time t+1 with allele A is Binomial. We write this as follows:

[ X_{t+1} | X_t = x ] ∼ Binomial(N, x/N).

If [ X_{t+1} | X_t = x ] ∼ Binomial(N, x/N), then

P(X_{t+1} = y | X_t = x) = (N choose y) (x/N)^y (1 − x/N)^{N−y}    (Binomial formula).

Example with N = 3

This process becomes complicated to do by hand when N is large. We can use small N to see how to use first-step analysis to answer our questions.

Transition diagram:

Exercise: find the missing probabilities a, b, c, and d when N = 3. Express them all as fractions over the same denominator.

[Diagram: states 0, 1, 2, 3 with transition arrows labelled a, b, c, and d.]
Probability the harmful allele A dies out

Suppose the process starts at generation 0. One of the three animals has
the harmful allele A. Define a suitable notation, and find the probability that
the harmful allele A eventually dies out.

Exercise: answer = 2/3.


Expected number of generations to fixation

Suppose again that the process starts at generation 0, and one of the three
animals has the harmful allele A. Eventually all animals will have the same
allele, whether it is allele A or B. When this happens, the population is said
to have reached fixation: it is fixed for a single allele and no further changes
are possible.

Define a suitable notation, and find the expected number of generations to fixation.

Exercise: answer = 3 generations on average.
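
Both exercise answers can be checked by simulating the process in R (a sketch, assuming the Binomial transition rule above, with N = 3 and one copy of allele A at generation 0):

> sim <- function(N = 3, x0 = 1){
      x <- x0; t <- 0
      while (x > 0 && x < N){                    # not yet at fixation
          x <- rbinom(1, size = N, prob = x/N)   # next generation
          t <- t + 1
      }
      c(died.out = (x == 0), generations = t)
  }
> res <- replicate(10000, sim())
> mean(res["died.out", ])       # should be close to 2/3
> mean(res["generations", ])    # should be close to 3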

Things get more interesting for large N. When N = 100, and x = 10 animals
have the harmful allele at generation 0, there is a 90% chance that the
harmful allele will die out and a 10% chance that the harmful allele will take
over the whole population. The expected number of generations taken to
reach fixation is 63.5. If the process starts with just x = 1 animal with the
harmful allele, there is a 99% chance the harmful allele will die out, but the
expected number of generations to fixation is 10.5. Despite the allele being
rare, the average number of generations for it to either die out or saturate
the population is quite large.

Note: The model above is also an example of a process called the Voter Process. The N individuals correspond to N people who each support one of two political candidates, A or B. Every day they make a new decision about whom to support, based on the amount of current support for each candidate. Fixation in the genetic model corresponds to consensus in the Voter Process.
