
CHAPTER 5

Discrete random variables

5.1. Definition, properties, expectation, moments


As before, suppose S is a sample space.

Definition 5.1 (Random variable)


A random variable is a real-valued function on S. Random variables are usually denoted by X, Y, Z, . . . A discrete random variable is one that can take on only countably many values.

Example 5.1. If one rolls a die, let X denote the outcome; X takes the values 1, 2, 3, 4, 5, 6.

Example 5.2. If one rolls a die, let Y be 1 if an odd number is showing, and 0 if an even
number is showing.

Example 5.3. If one tosses 10 coins, let X be the number of heads showing.

Example 5.4. In n trials, let X be the number of successes.

Definition (PMF or density of a random variable)

For a discrete random variable X, we define the probability mass function (PMF) or the density of X by

pX(x) := P(X = x),

where P(X = x) is a standard abbreviation for

P(X = x) = P(X⁻¹(x)).

Note that the pre-image X⁻¹(x) is the event {ω ∈ S : X(ω) = x}.


Suppose X is a discrete random variable taking on values {xi}i∈N; then

∑_{i∈N} pX(xi) = P(S) = 1.

Let X be the number showing if we roll a die. The expected number to show up on a roll of a die should be 1 · P(X = 1) + 2 · P(X = 2) + · · · + 6 · P(X = 6) = 3.5. More generally, we define


Definition 5.2 (Expectation of a discrete random variable)

For a discrete random variable X we define the expected value (or expectation, or mean) of X as

EX := ∑_{x : pX(x)>0} x pX(x),

provided this sum converges absolutely. In this case we say that the expectation of X is well-defined.

We need absolute convergence of the sum so that the expectation does not depend on the order in which we take the terms. We know from calculus that we need to be careful about sums of conditionally convergent series, though in most of the examples we deal with this will not be a problem. Note that pX(x) is nonnegative for all x, but x itself can be negative or positive, so in general the terms in the sum might have different signs.

Example 5.5. If we toss a coin and X is 1 if we have heads and 0 if we have tails, what
is the expectation of X?

Solution: pX(x) = 1/2 for x = 1, pX(x) = 1/2 for x = 0, and pX(x) = 0 for all other values of x. Hence EX = (1)(1/2) + (0)(1/2) = 1/2.

Example 5.6. Suppose X = 0 with probability 1/2, 1 with probability 1/4, 2 with probability 1/8, and more generally n with probability 1/2^(n+1). This is an example where X can take infinitely many values (although still countably many values). What is the expectation of X?

Solution: Here pX(n) = 1/2^(n+1) if n is a nonnegative integer and 0 otherwise. So

EX = (0)(1/2) + (1)(1/4) + (2)(1/8) + (3)(1/16) + · · · .

This turns out to sum to 1. To see this, recall the formula for a geometric series:

1 + x + x² + x³ + · · · = 1/(1 − x).

If we differentiate this, we get

1 + 2x + 3x² + · · · = 1/(1 − x)².

We have

EX = 1(1/4) + 2(1/8) + 3(1/16) + · · ·
= (1/4)[1 + 2(1/2) + 3(1/4) + · · ·]
= (1/4) · 1/(1 − 1/2)² = 1.
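This closed-form value EX = 1 can be sanity-checked numerically; here is a short Python sketch (the truncation point N = 200 is an arbitrary choice, since the tail of the series is negligible):

```python
# Numerical check of Example 5.6: X = n with probability 1/2^(n+1).
# We truncate the series at N = 200 terms; the tail beyond that is negligible.
N = 200
pmf = {n: 2.0 ** -(n + 1) for n in range(N)}

total_mass = sum(pmf.values())                    # essentially 1
expectation = sum(n * p for n, p in pmf.items())  # essentially 1

assert abs(total_mass - 1) < 1e-9
assert abs(expectation - 1) < 1e-9
```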

Example 5.7. Suppose we roll a fair die. If 1 or 2 is showing, let X = 3; if 3 or 4 is showing, let X = 4; and if 5 or 6 is showing, let X = 10. What is EX?

Solution: We have P(X = 3) = P(X = 4) = P(X = 10) = 1/3, so

EX = ∑ x P(X = x) = (3)(1/3) + (4)(1/3) + (10)(1/3) = 17/3.

Example 5.8. Consider a discrete random variable taking only positive integers as values with P(X = n) = 1/(n(n + 1)). What is the expectation EX?

Solution: First observe that this is indeed a probability, since we can use telescoping partial sums to show that

∑_{n=1}^∞ 1/(n(n + 1)) = 1.

Then

EX = ∑_{n=1}^∞ n · P(X = n) = ∑_{n=1}^∞ n/(n(n + 1)) = ∑_{n=1}^∞ 1/(n + 1) = +∞,

so the expectation of X is infinite.
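Both claims, total mass 1 but infinite expectation, can be illustrated with exact arithmetic; here is a small Python sketch (the cutoffs 100 and 10,000 are arbitrary illustration choices):

```python
from fractions import Fraction

# Sketch for Example 5.8: pX(n) = 1/(n(n+1)).  The partial sums of the
# total mass telescope to 1 - 1/(N+1), while the partial sums of the
# expectation are harmonic-like and grow without bound.
def mass(N):
    return sum(Fraction(1, n * (n + 1)) for n in range(1, N + 1))

def partial_expectation(N):
    return sum(Fraction(n, n * (n + 1)) for n in range(1, N + 1))

assert mass(100) == 1 - Fraction(1, 101)   # telescoping identity
assert partial_expectation(10_000) > partial_expectation(100)  # keeps growing
```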

If we list all possible values of a discrete random variable X as {xi}i∈N, then we can write

EX = ∑_{x : pX(x)>0} x pX(x) = ∑_{i=1}^∞ xi pX(xi).

We would like to show that the expectation is linear, that is, E[X + Y] = EX + EY.

We start by showing that we can write the expectation of a discrete random variable in a slightly different form. Note that in our definition of the expectation we first list all possible values of X and weight them by the probability that X attains each value. That is, we look at the range of X. Below we instead look at the domain of X and list all possible outcomes.

Proposition 5.1
If X is a random variable on a finite sample space S, then

EX = ∑_{ω∈S} X(ω) P({ω}).

Proof. For each i ∈ N we denote by Si the event {ω ∈ S : X(ω) = xi}. Then {Si}i∈N is a partition of the space S into disjoint sets. Note that since S is finite, each set Si is finite too; moreover, only finitely many of the sets Si are non-empty. Then

EX = ∑_{i=1}^∞ xi p(xi) = ∑_{i=1}^∞ xi P(X = xi) = ∑_{i=1}^∞ xi (∑_{ω∈Si} P({ω}))
= ∑_{i=1}^∞ ∑_{ω∈Si} xi P({ω}) = ∑_{i=1}^∞ ∑_{ω∈Si} X(ω) P({ω})
= ∑_{ω∈S} X(ω) P({ω}),

where we used the properties of the sets {Si}_{i=1}^∞. □

Proposition 5.1 is true even if S is countable, as long as EX is well-defined. First, observe that if S is countable, then the random variable X is necessarily discrete. Where do we need the assumption that all sums converge absolutely? Note that the identity

∑_{i=1}^∞ xi P(X = xi) = ∑_{ω∈S} X(ω) P({ω})

is a re-arrangement of the first sum, which we can do as long as the sums (series) converge absolutely. Note that if either the number of values of X or the sample space S is finite, we can use this argument.

Proposition 5.1 can be used to prove linearity of the expectation.

Theorem 5.1 (Linearity of expectation)

If X and Y are discrete random variables defined on the same sample space S and a ∈ R, then
(i) E[X + Y] = EX + EY,
(ii) E[aX] = aEX,
as long as all expectations are well-defined.

Proof. Consider the random variable Z := X + Y, which is a discrete random variable on the sample space S. We use P(X = x, Y = y) to denote the probability of the event

{ω ∈ S : X(ω) = x} ∩ {ω ∈ S : Y(ω) = y}.

Denote by {xi}i∈N the values that X takes, by {yj}j∈N the values that Y takes, and by {zk}k∈N the values that Z takes. Since we assume that all random variables have well-defined expectations, we can interchange the order of summation freely. Then applying the law of total probability (Proposition 4.4) twice we have

EZ = ∑_{k=1}^∞ zk P(Z = zk) = ∑_{k=1}^∞ zk (∑_{i=1}^∞ P(Z = zk, X = xi))
= ∑_{k=1}^∞ zk (∑_{i=1}^∞ P(X = xi, Y = zk − xi))
= ∑_{k=1}^∞ ∑_{i=1}^∞ ∑_{j=1}^∞ zk P(X = xi, Y = zk − xi, Y = yj).

Now P(X = xi, Y = zk − xi, Y = yj) will be 0 unless zk − xi = yj. For each pair (i, j), this will be non-zero for only one value of k, since the zk are all different. Therefore, for each i and j,

∑_{k=1}^∞ zk P(X = xi, Y = zk − xi, Y = yj)
= ∑_{k=1}^∞ (xi + yj) P(X = xi, Y = zk − xi, Y = yj)
= (xi + yj) P(X = xi, Y = yj).
Substituting this into the above sum, we see that

EZ = ∑_{i=1}^∞ ∑_{j=1}^∞ (xi + yj) P(X = xi, Y = yj)
= ∑_{i=1}^∞ ∑_{j=1}^∞ xi P(X = xi, Y = yj) + ∑_{i=1}^∞ ∑_{j=1}^∞ yj P(X = xi, Y = yj)
= ∑_{i=1}^∞ xi (∑_{j=1}^∞ P(X = xi, Y = yj)) + ∑_{j=1}^∞ yj (∑_{i=1}^∞ P(X = xi, Y = yj))
= ∑_{i=1}^∞ xi P(X = xi) + ∑_{j=1}^∞ yj P(Y = yj) = EX + EY,

where we used the law of total probability (Proposition 4.4) again.

Note that if we have a countable sample space, all these sums converge absolutely, and so we can justify writing this similarly to Proposition 5.1 as

E[X + Y] = ∑_{ω∈S} (X(ω) + Y(ω)) P(ω)
= ∑_{ω∈S} (X(ω)P(ω) + Y(ω)P(ω))
= ∑_{ω∈S} X(ω)P(ω) + ∑_{ω∈S} Y(ω)P(ω)
= EX + EY.

For a ∈ R we have

E[aX] = ∑_{ω∈S} (aX(ω)) P(ω) = a ∑_{ω∈S} X(ω) P(ω) = aEX,

since these sums converge absolutely as long as EX is well-defined. □

Using induction on the number of random variables, linearity holds for a collection of random variables X1, X2, . . . , Xn.
Corollary
If X1 , X2 , . . . , Xn are random variables, then

E (X1 + X2 + · · · + Xn ) = EX1 + EX2 + · · · + EXn .
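Linearity can be illustrated concretely. Here is a Python sketch that checks E[X + Y] = EX + EY for two fair dice by summing over the sample space, in the spirit of Proposition 5.1 (the two-dice setup is an illustrative choice, not from the text):

```python
from itertools import product
from fractions import Fraction

# Check E[X + Y] = EX + EY on the sample space of two fair dice,
# using the sum-over-outcomes form of Proposition 5.1.
S = list(product(range(1, 7), repeat=2))   # outcomes (omega1, omega2)
P = Fraction(1, 36)                        # each outcome equally likely

EX = sum(w[0] * P for w in S)              # expectation of the first die
EY = sum(w[1] * P for w in S)              # expectation of the second die
EXplusY = sum((w[0] + w[1]) * P for w in S)

assert EXplusY == EX + EY == Fraction(7, 1)
```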

Example 5.9. Suppose we roll a die and let X be the value that is showing. We want to find the expectation EX².

Solution: Let Y = X², so that P(Y = 1) = 1/6, P(Y = 4) = 1/6, etc., and

EX² = EY = (1)(1/6) + (4)(1/6) + · · · + (36)(1/6).

We can also write this as

EX² = (1²)(1/6) + (2²)(1/6) + · · · + (6²)(1/6),

which suggests that a formula for EX² is ∑_x x² P(X = x). This turns out to be correct.

The only possibility where things could go wrong is if more than one value of X leads to the same value of X². For example, suppose P(X = −2) = 1/8, P(X = −1) = 1/4, P(X = 1) = 3/8, P(X = 2) = 1/4. Then if Y = X², we have P(Y = 1) = 5/8 and P(Y = 4) = 3/8, so

EX² = (1)(5/8) + (4)(3/8) = (−1)²(1/4) + (1)²(3/8) + (−2)²(1/8) + (2)²(1/4).

But even in this case EX² = ∑_x x² P(X = x).

Theorem 5.2
For a discrete random variable X taking values {xi}_{i=1}^∞ and a real-valued function g defined on this set, we have

Eg(X) = ∑_{i=1}^∞ g(xi) P(X = xi) = ∑_{i=1}^∞ g(xi) p(xi).

Proof. Let Y := g(X); then

EY = ∑_y y P(Y = y) = ∑_y y ∑_{x : g(x)=y} P(X = x)
= ∑_x g(x) P(X = x). □
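Theorem 5.2 can be checked on the collision example from Example 5.9 (P(X = −2) = 1/8, P(X = −1) = 1/4, P(X = 1) = 3/8, P(X = 2) = 1/4): computing EY through the PMF of Y = X² and computing ∑ g(x) pX(x) directly give the same answer. A Python sketch with exact fractions:

```python
from fractions import Fraction

# PMF from Example 5.9's second part.
pX = {-2: Fraction(1, 8), -1: Fraction(1, 4), 1: Fraction(3, 8), 2: Fraction(1, 4)}

# Route 1: find the PMF of Y = X^2, then compute EY directly.
pY = {}
for x, p in pX.items():
    pY[x * x] = pY.get(x * x, 0) + p   # values -1 and 1 collide at y = 1
EY = sum(y * p for y, p in pY.items())

# Route 2: sum g(x) pX(x) without ever computing pY (Theorem 5.2).
EgX = sum(x * x * p for x, p in pX.items())

assert EY == EgX == Fraction(17, 8)
```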


Example 5.10. As before, we see that EX² = ∑_x x² pX(x). Also, if g(x) ≡ c is a constant function, then the expectation of a constant is this constant:

Eg(X) = ∑_{i=1}^∞ c p(xi) = c ∑_{i=1}^∞ p(xi) = c · 1 = c.

Definition (Moments)
EX^n is called the nth moment of a random variable X. If M := EX is well-defined, then

Var(X) = E(X − M)²

is called the variance of X. The square root of Var(X) is called the standard deviation of X:

SD(X) := √Var(X).

By Theorem 5.2 we know that the nth moment can be calculated by

EX^n = ∑_{x : pX(x)>0} x^n pX(x).

The variance measures how much spread there is about the expected value.

Example 5.11. We toss a fair coin and let X = 1 if we get heads, X = −1 if we get tails. Then EX = 0, so X − EX = X, and then Var X = EX² = (1)²(1/2) + (−1)²(1/2) = 1.

Example 5.12. We roll a die and let X be the value that shows. We have previously calculated EX = 7/2. So X − EX equals

−5/2, −3/2, −1/2, 1/2, 3/2, 5/2,

each with probability 1/6. So

Var X = (−5/2)²(1/6) + (−3/2)²(1/6) + (−1/2)²(1/6) + (1/2)²(1/6) + (3/2)²(1/6) + (5/2)²(1/6) = 35/12.

Using the fact that the expectation of a constant is the constant, we get an alternate expression for the variance.

Proposition 5.2 (Variance)

Suppose X is a random variable with finite first and second moments. Then

Var X = EX² − (EX)².

Proof. Denote M := EX; then

Var X = E(X − M)² = EX² − 2E(XM) + E(M²)
= EX² − 2M² + M² = EX² − (EX)². □
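Proposition 5.2 is easy to verify computationally; the following Python sketch checks, for a fair die, that E(X − M)² and EX² − (EX)² agree (reproducing Var X = 35/12 from Example 5.12):

```python
from fractions import Fraction

# Check Proposition 5.2 for a fair die: E(X - M)^2 equals EX^2 - (EX)^2.
pX = {x: Fraction(1, 6) for x in range(1, 7)}

M = sum(x * p for x, p in pX.items())                     # EX = 7/2
var_def = sum((x - M) ** 2 * p for x, p in pX.items())    # E(X - M)^2
var_alt = sum(x * x * p for x, p in pX.items()) - M ** 2  # EX^2 - (EX)^2

assert M == Fraction(7, 2)
assert var_def == var_alt == Fraction(35, 12)
```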


5.2. Further examples and applications


5.2.1. Discrete random variables. Recall that we defined a discrete random variable in Definition 5.1 as one taking countably many values. A random variable is a function X : S −→ R, and we can think of it as a numerical value that is random. When we perform an experiment, often we are interested in some quantity (a function) related to the outcome, rather than the outcome itself. That means we want to attach a numerical value to each outcome. Below are more examples of such variables.

Example 5.13. Toss a coin and define

X = 1 if the outcome is heads (H), and X = 0 if the outcome is tails (T).

As a random variable, X(H) = 1 and X(T) = 0. Note that we can perform computations on real numbers, but not directly on the sample space S = {H, T}. This shows the need to convert outcomes to numerical values.

Example 5.14. Let X be the amount of liability (damages) a driver causes in a year. In
this case, X can be any dollar amount. Thus X can attain any value in [0, ∞).

Example 5.15. Toss a coin 3 times. Let X be the number of heads that appear, so that
X can take the values 0, 1, 2, 3. What are the associated probabilities to each value?
Solution :

P(X = 0) = P({(T, T, T)}) = 1/2³ = 1/8,
P(X = 1) = P({(T, T, H), (T, H, T), (H, T, T)}) = 3/8,
P(X = 2) = P({(T, H, H), (H, H, T), (H, T, H)}) = 3/8,
P(X = 3) = P({(H, H, H)}) = 1/8.

Example 5.16. Toss a coin n times. Let X be the number of heads that occur. This random variable can take the values 0, 1, 2, . . . , n. From the binomial formula we see that

P(X = k) = C(n, k)/2^n,

where C(n, k) denotes the binomial coefficient.
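The binomial formula can be cross-checked against brute-force enumeration; here is a Python sketch (the choice n = 3 ties it back to Example 5.15):

```python
from itertools import product
from math import comb

# Brute-force check of P(X = k) = C(n, k) / 2^n for n coin tosses,
# by enumerating all 2^n equally likely sequences.
n = 3
outcomes = list(product("HT", repeat=n))
for k in range(n + 1):
    count = sum(1 for seq in outcomes if seq.count("H") == k)
    assert count == comb(n, k)                    # C(n, k) favorable outcomes
    assert count / 2 ** n == comb(n, k) / 2 ** n  # matching probabilities

# For n = 3 this reproduces Example 5.15: probabilities 1/8, 3/8, 3/8, 1/8.
```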

© Copyright 2017 Phanuel Mariano, Patricia Alonso Ruiz, Copyright 2020 Masha Gordina.

Example 5.17. Suppose we toss a fair coin, and we let X be 1 if we have H and X be 0
if we have T. The probability mass function of this random variable is

pX(x) = 1/2 for x = 0, pX(x) = 1/2 for x = 1, and pX(x) = 0 otherwise.

Often the probability mass function (PMF) will already be given and we can then use it to
compute probabilities.

Example 5.18. The PMF of a random variable X taking values in N ∪ {0} is given by

pX(i) = e^(−λ) λ^i / i!, i = 0, 1, 2, . . . ,

where λ is a positive real number.

(a) Find P(X = 0).

Solution: By definition of the PMF we have

P(X = 0) = pX(0) = e^(−λ) λ⁰/0! = e^(−λ).

(b) Find P(X > 2).

Solution: Note that

P(X > 2) = 1 − P(X ≤ 2)
= 1 − P(X = 0) − P(X = 1) − P(X = 2)
= 1 − pX(0) − pX(1) − pX(2)
= 1 − e^(−λ) − λe^(−λ) − λ²e^(−λ)/2.
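Part (b) can be sanity-checked numerically; the following Python sketch picks an arbitrary rate λ = 2 (any positive value would do) and compares 1 − pX(0) − pX(1) − pX(2) with the closed form:

```python
from math import exp, factorial

# Example 5.18 with an illustrative rate lambda = 2 (an arbitrary choice).
lam = 2.0

def pX(i):
    return exp(-lam) * lam ** i / factorial(i)

p_gt_2 = 1 - pX(0) - pX(1) - pX(2)
closed_form = 1 - exp(-lam) - lam * exp(-lam) - lam ** 2 * exp(-lam) / 2

assert abs(p_gt_2 - closed_form) < 1e-12
```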

5.2.2. Expectation. We defined the expectation in Definition 5.2 in the case when X is a discrete random variable taking values {xi}i∈N. Then for a random variable X with PMF pX(x) the expectation is given by

E[X] = ∑_{x : pX(x)>0} x pX(x) = ∑_{i=1}^∞ xi pX(xi).

Example 5.19. Suppose again that we have a coin, and let X(H) = 0 and X(T) = 1. What is EX if the coin is not necessarily fair?

EX = 0 · pX(0) + 1 · pX(1) = P(T).

Example 5.20. Let X be the outcome when we roll a fair die. What is EX?

EX = 1 · (1/6) + 2 · (1/6) + · · · + 6 · (1/6) = (1 + 2 + 3 + 4 + 5 + 6)/6 = 21/6 = 7/2 = 3.5.

Note that in the last example X can never be 3.5. This means that the expectation may not
be a value attained by X. It serves the purpose of giving an average value for X.

Example 5.21. Let X be the number of insurance claims a person makes in a year. Assume that X can take the values 0, 1, 2, 3, . . . with P(X = 0) = 2/3, P(X = 1) = 2/9, . . . , P(X = n) = 2/3^(n+1). Find the expected number of claims this person makes in a year.

Solution: Note that X has an infinite but countable number of values; hence it is a discrete random variable. We have pX(i) = 2/3^(i+1). We compute using the definition of expectation:

EX = 0 · pX(0) + 1 · pX(1) + 2 · pX(2) + · · ·
= 0 · (2/3) + 1 · (2/3²) + 2 · (2/3³) + 3 · (2/3⁴) + · · ·
= (2/3²)[1 + 2(1/3) + 3(1/3²) + 4(1/3³) + · · ·]
= (2/9)(1 + 2x + 3x² + · · ·), where x = 1/3,
= (2/9) · 1/(1 − x)² = (2/9) · 1/(2/3)² = (2/9) · (9/4) = 1/2.
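The answer EX = 1/2 can be confirmed with exact arithmetic; here is a Python sketch (the truncation at N = 60 terms is an arbitrary choice, since the geometric tail is tiny):

```python
from fractions import Fraction

# Example 5.21: pX(n) = 2/3^(n+1), truncated at N = 60 terms.
N = 60
pmf = {n: Fraction(2, 3 ** (n + 1)) for n in range(N)}

# Total mass is essentially 1 and the expectation is essentially 1/2.
assert abs(float(sum(pmf.values())) - 1) < 1e-12
assert abs(float(sum(n * p for n, p in pmf.items())) - 0.5) < 1e-12
```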

Example 5.22. Let S = {1, 2, 3, 4, 5, 6} and assume that X(1) = X(2) = 1, X(3) = X(4) = 3, and X(5) = X(6) = 5.

(1) Using the initial definition, the random variable X takes the values 1, 3, 5 and pX(1) = pX(3) = pX(5) = 1/3. Then

EX = 1 · (1/3) + 3 · (1/3) + 5 · (1/3) = 9/3 = 3.

(2) Using the equivalent definition, we list all of S = {1, 2, 3, 4, 5, 6} and then

EX = X(1)P({1}) + · · · + X(6)P({6}) = 1 · (1/6) + 1 · (1/6) + 3 · (1/6) + 3 · (1/6) + 5 · (1/6) + 5 · (1/6) = 3.

5.2.3. The cumulative distribution function (CDF). We implicitly used this characterization of a random variable, and now we define it.

Definition 5.3 (Cumulative distribution function)

Let X be a random variable. The cumulative distribution function (CDF) or the distribution function of X is defined as

FX(x) := P(X ≤ x)

for any x ∈ R.

Note that if X is discrete and pX is its PMF, then

F(x₀) = ∑_{x ≤ x₀} pX(x).

Example 5.23. Suppose that X has the following PMF:

pX(0) = P(X = 0) = 1/8,
pX(1) = P(X = 1) = 3/8,
pX(2) = P(X = 2) = 3/8,
pX(3) = P(X = 3) = 1/8.

Find the CDF for X and plot the graph of the CDF.

Solution: Summing up the probabilities up to the value of x, we get the following:

FX(x) = 0 for −∞ < x < 0,
FX(x) = 1/8 for 0 ≤ x < 1,
FX(x) = 4/8 for 1 ≤ x < 2,
FX(x) = 7/8 for 2 ≤ x < 3,
FX(x) = 1 for 3 ≤ x < ∞.

This is a step function, shown in the figure below.
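The step function above translates directly into code; here is a minimal Python sketch of FX built from the PMF of Example 5.23:

```python
# CDF of Example 5.23: sum the PMF over all values v <= x.
def F(x):
    pmf = {0: 1/8, 1: 3/8, 2: 3/8, 3: 1/8}
    return sum(p for v, p in pmf.items() if v <= x)

assert F(-0.5) == 0          # below the smallest value
assert F(0) == 1/8           # jump at x = 0
assert F(1.7) == 1/8 + 3/8   # = 4/8, constant between jumps
assert F(2) == 7/8
assert F(10) == 1            # all the mass is below 10
```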

Proposition 5.3 (Properties of cumulative distribution functions (CDF))

1. F is nondecreasing, that is, if x < y, then F(x) ≤ F(y).
2. lim_{x→∞} F(x) = 1.
3. lim_{x→−∞} F(x) = 0.
4. F is right continuous, that is, lim_{u↓x} FX(u) = FX(x), where u ↓ x means that u approaches x from above (from the right).

Example 5.24. Let X have distribution

FX(x) = 0 for x < 0,
FX(x) = x/2 for 0 ≤ x < 1,
FX(x) = 2/3 for 1 ≤ x < 2,
FX(x) = 11/12 for 2 ≤ x < 3,
FX(x) = 1 for 3 ≤ x.

(a) Compute P(X < 3).

Solution: We have that

P(X < 3) = lim_{n→∞} P(X ≤ 3 − 1/n) = lim_{n→∞} FX(3 − 1/n) = 11/12.
n→∞ n→∞

[Figure: graph of the step-function CDF FX(x) from Example 5.23.]

(b) Compute P(X = 1).

Solution: We have that

P(X = 1) = P(X ≤ 1) − P(X < 1) = FX(1) − lim_{x↑1} x/2 = 2/3 − 1/2 = 1/6.

(c) Compute P(2 < X ≤ 4).

Solution: We have that

P(2 < X ≤ 4) = FX(4) − FX(2) = 1 − 11/12 = 1/12.

5.2.4. Expectation of a function of a random variable. Given a random variable X, we would like to compute the expected value of expressions such as X², e^X or sin X. How can we do this?

Example 5.25. Let X be a random variable whose PMF is given by

P (X = −1) = 0.2,
P (X = 0) = 0.5,
P (X = 1) = 0.3.
Let Y = X²; find E[Y].

Solution: Note that Y takes the values 0², (−1)² and 1², which reduce to 0 or 1. Also notice that pY(1) = 0.2 + 0.3 = 0.5 and pY(0) = 0.5. Thus E[Y] = 0 · 0.5 + 1 · 0.5 = 0.5.

Note that EX² = 0.5, while (EX)² = 0.01, since EX = 0.3 − 0.2 = 0.1. Thus in general

EX² ≠ (EX)².

In general, there is a formula for g(X), where g is a function, that uses the fact that g(X) will be g(x) for some x such that X = x. We recall Theorem 5.2: if X is a discrete random variable that takes the values xi, i ≥ 1, with probability pX(xi), respectively, then for any real-valued function g we have that

E[g(X)] = ∑_{i=1}^∞ g(xi) pX(xi).

In particular,

EX² = ∑_{i=1}^∞ xi² pX(xi)

will be useful.

Example 5.26. Let us revisit the previous example. Let X denote a random variable such that

P(X = −1) = 0.2,
P(X = 0) = 0.5,
P(X = 1) = 0.3.

Let Y = X². Find EY.

Solution: We have that

EY = EX² = ∑_{i=1}^∞ xi² pX(xi) = (−1)²(0.2) + 0²(0.5) + 1²(0.3) = 0.5.

5.2.5. Variance. The variance of a random variable is a measure of how spread out the values of X are. The expectation of a random variable is a quantity that helps us differentiate between random variables, but it does not tell us how spread out their values are. For example, consider

X = 0 with probability 1,
Y = −1 or 1, each with probability 1/2,
Z = −100 or 100, each with probability 1/2.

What are the expected values? They are 0, 0 and 0. But there is much greater spread in Z than in Y, and in Y than in X. Thus expectation is not enough to detect spread, or variation.

Example 5.27. Calculate Var(X) if X represents the outcome when a fair die is rolled.

Solution: Recall Proposition 5.2, which lets us find the variance via

Var(X) = EX² − (EX)².

Previously we calculated that EX = 7/2. Thus we only need to find the second moment:

EX² = 1² · (1/6) + · · · + 6² · (1/6) = 91/6.

Using our formula we have that

Var(X) = E[X²] − (E[X])² = 91/6 − (7/2)² = 35/12.

Another useful formula is the following.

Proposition 5.4
For any constants a, b ∈ R we have that Var(aX + b) = a² Var(X).

Proof. By Proposition 5.2 and linearity of expectation,

Var(aX + b) = E[(aX + b)²] − (E(aX + b))²
= E[a²X² + 2abX + b²] − (aEX + b)²
= a²EX² + 2abEX + b² − a²(EX)² − 2abEX − b²
= a²EX² − a²(EX)² = a² Var(X). □
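Proposition 5.4 can be checked numerically on a concrete case; this Python sketch uses a fair die with the illustrative constants a = 3, b = −2 (not from the text):

```python
from fractions import Fraction

# Check Var(aX + b) = a^2 Var(X) on a fair die with a = 3, b = -2.
pX = {x: Fraction(1, 6) for x in range(1, 7)}

def var(dist):  # dist: dict mapping value -> probability
    m = sum(v * p for v, p in dist.items())
    return sum((v - m) ** 2 * p for v, p in dist.items())

pY = {3 * x - 2: p for x, p in pX.items()}   # PMF of Y = 3X - 2

assert var(pX) == Fraction(35, 12)
assert var(pY) == 9 * var(pX) == Fraction(105, 4)
```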


5.3. Exercises
Exercise 5.1. Three balls are randomly chosen with replacement from an urn containing
5 blue, 4 red, and 2 yellow balls. Let X denote the number of red balls chosen.

(a) What are the possible values of X?


(b) What are the probabilities associated to each value?

Exercise 5.2. Two cards are chosen from a standard deck of 52 cards. Suppose that
you win $2 for each heart selected, and lose $1 for each spade selected. Other suits (clubs
or diamonds) bring neither win nor loss. Let X denote your winnings. Determine the
probability mass function of X.

Exercise 5.3. A financial regulator from the FED will evaluate two banks this week. For each evaluation, the regulator will choose with equal probability between two different stress tests. Failing under test one costs a bank a 10K fee, whereas failing test 2 costs 5K. The probability that the first bank fails any test is 0.4. Independently, the second bank will fail any test with 0.5 probability. Let X denote the total amount of fees the regulator can obtain after having evaluated both banks. Determine the cumulative distribution function of X.

Exercise 5.4. Five buses carry students from Hartford to campus. Each bus carries,
respectively, 50, 55, 60, 65, and 70 students. One of these students and one bus driver are
picked at random.

(a) What is the expected number of students sitting in the same bus that carries the ran-
domly selected student?
(b) Let Y be the number of students in the same bus as the randomly selected driver. Is
E[Y ] larger than the expectation obtained in the previous question?

Exercise 5.5. Two balls are chosen randomly from an urn containing 8 white balls, 4
black, and 2 orange balls. Suppose that we win $2 for each black ball selected and we lose
$1 for each white ball selected. Let X denote our winnings.

(a) What are the possible values of X?


(b) What are the probabilities associated to each value?

Exercise 5.6. A card is drawn at random from a standard deck of playing cards. If it is
a heart, you win $1. If it is a diamond, you have to pay $2. If it is any other card, you win
$3. What is the expected value of your winnings?

Exercise 5.7. The game of roulette consists of a small ball and a wheel with 38 numbered
pockets around the edge that includes the numbers 1 − 36, 0 and 00. As the wheel is spun,
the ball bounces around randomly until it settles down in one of the pockets.

(a) Suppose you bet $1 on a single number and random variable X represents the (monetary) outcome (the money you win or lose). If the bet wins, the payoff is $35 and you get your money back. If you lose the bet then you lose your $1. What is the expected profit on a 1 dollar bet?
(b) Suppose you bet $1 on the numbers 1 − 18 and random variable X represents the (monetary) outcome (the money you win or lose). If the bet wins, the payoff is $1 and you get your money back. If you lose the bet then you lose your $1. What is the expected profit on a 1 dollar bet?

Exercise 5.8. An insurance company finds that Mark has an 8% chance of getting into a car accident in the next year. If Mark has any kind of accident then the company guarantees to pay him $10,000. The company has decided to charge Mark a $200 premium for this one year insurance policy.

(a) Let X be the amount of profit or loss from this insurance policy in the next year for the insurance company. Find EX, the expected return for the insurance company. Should the insurance company charge more or less on its premium?
(b) What amount should the insurance company charge Mark in order to guarantee an expected return of $100?

Exercise 5.9. A random variable X has the following probability mass function: pX(0) = 1/3, pX(1) = 1/6, pX(2) = 1/4, pX(3) = 1/4. Find its expected value, variance, and standard deviation, and plot its CDF.

Exercise 5.10. Suppose X is a random variable such that E [X] = 50 and Var(X) = 12.
Calculate the following quantities.

(a) E[X²],
(b) E[3X + 2],
(c) E[(X + 2)²],
(d) Var[−X],
(e) SD(2X).

Exercise 5.11. Does there exist a random variable X such that E [X] = 4 and E [X 2 ] = 10?
Why or why not? (Hint: look at its variance)

Exercise 5.12. A box contains 25 peppers of which 5 are red and 20 green. Four peppers
are randomly picked from the box. What is the expected number of red peppers in this
sample of four?

5.4. Selected solutions


Solution to Exercise 5.1:
(a) X can take the values 0, 1, 2 and 3.
(b) Since balls are withdrawn with replacement, we can think of choosing red as a success and apply Bernoulli trials with p = P(red) = 4/11. Then, for each k = 0, 1, 2, 3 we have

P(X = k) = C(3, k) · (4/11)^k · (7/11)^(3−k).

Solution to Exercise 5.2: The random variable X can take the values −2, −1, 0, 1, 2, 4. Moreover,

P(X = −2) = P(2♠) = C(13, 2)/C(52, 2),
P(X = −1) = P(1♠ and 1(♦ or ♣)) = (13 · 26)/C(52, 2),
P(X = 0) = P(2(♦ or ♣)) = C(26, 2)/C(52, 2),
P(X = 1) = P(1♥ and 1♠) = (13 · 13)/C(52, 2),
P(X = 2) = P(1♥ and 1(♦ or ♣)) = P(X = −1),
P(X = 4) = P(2♥) = P(X = −2).

Thus the probability mass function is given by pX(x) = P(X = x) for x = −2, −1, 0, 1, 2, 4 and pX(x) = 0 otherwise.

Solution to Exercise 5.3: The random variable X can take the values 0, 5, 10, 15 and 20, depending on which test was applied to each bank and on whether the bank fails the evaluation or not. Denote by Bi the event that the ith bank fails and by Ti the event that test i is applied. Then

P(T1) = P(T2) = 0.5, P(B1) = P(B1 | T1) = P(B1 | T2) = 0.4,
P(B2) = P(B2 | T1) = P(B2 | T2) = 0.5.

Since banks and tests are independent we have

P(X = 0) = P(B1^c ∩ B2^c) = P(B1^c) · P(B2^c) = 0.6 · 0.5 = 0.3,
P(X = 5) = P(B1)P(T2)P(B2^c) + P(B1^c)P(B2)P(T2) = 0.25,
P(X = 10) = P(B1)P(T1)P(B2^c) + P(B1)P(T2)P(B2)P(T2) + P(B1^c)P(B2)P(T1) = 0.3,
P(X = 15) = P(B1)P(T1)P(B2)P(T2) + P(B1)P(T2)P(B2)P(T1) = 0.1,
P(X = 20) = P(B1)P(T1)P(B2)P(T1) = 0.05.
5.4. SELECTED SOLUTIONS 77

0.8

0.6

0.4

0.2

0
−5 0 5 10 15 20 25

The graph of the probability distribution function for Exercise 5.3

The cumulative distribution function is given by

FX(x) = 0 for x < 0,
FX(x) = 0.3 for 0 ≤ x < 5,
FX(x) = 0.55 for 5 ≤ x < 10,
FX(x) = 0.85 for 10 ≤ x < 15,
FX(x) = 0.95 for 15 ≤ x < 20,
FX(x) = 1 for x ≥ 20.

Solution to Exercise 5.4: Let X denote the number of students in the bus that carries
the randomly selected student.

(a) In total there are 300 students, hence P(X = 50) = 50/300, P(X = 55) = 55/300, P(X = 60) = 60/300, P(X = 65) = 65/300 and P(X = 70) = 70/300. The expected value of X is thus

E[X] = 50 · (50/300) + 55 · (55/300) + 60 · (60/300) + 65 · (65/300) + 70 · (70/300) ≈ 60.8333.

(b) In this case, the probability of choosing each bus driver is 1/5, so that

E[Y] = (1/5)(50 + 55 + 60 + 65 + 70) = 60,

which is slightly less than the previous one.

Solution to Exercise 5.5(A): Note that X can take the values −2, −1, 0, 1, 2, 4.

Solution to Exercise 5.5(B): Below is the list of all the probabilities.

P(X = 4) = P({BB}) = C(4, 2)/C(14, 2) = 6/91,
P(X = 0) = P({OO}) = C(2, 2)/C(14, 2) = 1/91,
P(X = 2) = P({BO}) = C(4, 1)C(2, 1)/C(14, 2) = 8/91,
P(X = −1) = P({WO}) = C(8, 1)C(2, 1)/C(14, 2) = 16/91,
P(X = 1) = P({BW}) = C(4, 1)C(8, 1)/C(14, 2) = 32/91,
P(X = −2) = P({WW}) = C(8, 2)/C(14, 2) = 28/91.
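These six probabilities can be verified in Python with math.comb; in particular they sum to 1:

```python
from fractions import Fraction
from math import comb

# Check the probabilities in Exercise 5.5: 8 white, 4 black, 2 orange, 2 drawn.
total = comb(14, 2)  # 91 equally likely pairs
pmf = {
     4: Fraction(comb(4, 2), total),               # BB
     2: Fraction(comb(4, 1) * comb(2, 1), total),  # BO
     1: Fraction(comb(4, 1) * comb(8, 1), total),  # BW
     0: Fraction(comb(2, 2), total),               # OO
    -1: Fraction(comb(8, 1) * comb(2, 1), total),  # WO
    -2: Fraction(comb(8, 2), total),               # WW
}

assert sum(pmf.values()) == 1
assert pmf[4] == Fraction(6, 91) and pmf[-2] == Fraction(28, 91)
```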

Solution to Exercise 5.6:

EX = 1 · (1/4) + (−2) · (1/4) + 3 · (1/2) = 5/4.

Solution to Exercise 5.7(A): The expected profit is EX = 35 · (1/38) − 1 · (37/38) = −$0.0526.

Solution to Exercise 5.7(B): If you win, then your profit will be $1. If you lose, then you lose your $1 bet. The expected profit is EX = 1 · (18/38) − 1 · (20/38) = −$0.0526.
Solution to Exercise 5.8(A): If Mark has no accident then the company makes a profit of 200 dollars. If Mark has an accident they have to pay him 10,000 dollars, but regardless they received 200 dollars from him as a yearly premium. We have

EX = (200 − 10,000) · (0.08) + 200 · (0.92) = −600.

On average the company will lose $600. Thus the company should charge more.

Solution to Exercise 5.8(B): Let P be the premium. Then in order to guarantee an expected return of $100 we need

100 = EX = (P − 10,000) · (0.08) + P · (0.92),

and solving for P we get P = $900.


Solution to Exercise 5.9: We start with the expectation:

EX = 0 · (1/3) + 1 · (1/6) + 2 · (1/4) + 3 · (1/4) = 34/24 = 17/12.

[Figure: plot of the CDF for Exercise 5.9.]

Now to find the variance we have

Var(X) = E[X²] − (EX)²
= 0² · (1/3) + 1² · (1/6) + 2² · (1/4) + 3² · (1/4) − (34/24)²
= 82/24 − 34²/24² = 812/24² = 203/144.

Taking the square root gives us

SD(X) = (2√203)/24 = √203/12.
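The values EX = 17/12 and Var(X) = 203/144 can be double-checked with exact fractions:

```python
from fractions import Fraction

# Check the numbers in the solution to Exercise 5.9 exactly.
pX = {0: Fraction(1, 3), 1: Fraction(1, 6), 2: Fraction(1, 4), 3: Fraction(1, 4)}

EX = sum(x * p for x, p in pX.items())
EX2 = sum(x * x * p for x, p in pX.items())
variance = EX2 - EX ** 2

assert EX == Fraction(17, 12)         # = 34/24
assert variance == Fraction(203, 144) # = 812/576
```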

Solution to Exercise 5.10(A): Since Var(X) = E[X²] − (EX)² = 12, then

E[X²] = Var(X) + (EX)² = 12 + 50² = 2512.

Solution to Exercise 5.10(B):

E[3X + 2] = 3E[X] + E[2] = 3 · 50 + 2 = 152.

Solution to Exercise 5.10(C):

E[(X + 2)²] = E[X²] + 4E[X] + 4 = 2512 + 4 · 50 + 4 = 2716.

Solution to Exercise 5.10(D):

Var[−X] = (−1)² Var(X) = 12.

Solution to Exercise 5.10(E):

SD(2X) = √Var(2X) = √(2² Var(X)) = √48 = 2√12.

Solution to Exercise 5.11: Using the hint, let's compute the variance of this random variable, which would be Var(X) = E[X²] − (EX)² = 10 − 4² = −6. But we know a random variable cannot have a negative variance. Thus no such random variable exists.
