18.440: Lecture 28
Lectures 17-27 Review
Scott Sheffield
MIT
Continuous random variables
Expectations of continuous random variables
▶ Recall that when X was a discrete random variable, with p(x) = P{X = x}, we wrote
  E[X] = \sum_{x : p(x) > 0} p(x)\, x.
▶ The continuous analogue replaces the sum over the mass function with an integral against the density f:
  E[X] = \int_{-\infty}^{\infty} f(x)\, x \, dx.
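As a quick sanity check on the continuous formula, here is a minimal numerical sketch. The exponential density and the rate value below are arbitrary illustrative choices, not part of the lecture: the integral ∫ x f(x) dx, computed by the trapezoid rule, is compared with a Monte Carlo average.

```python
import numpy as np

# Compare E[X] = integral of x f(x) dx (trapezoid rule) with a Monte Carlo
# average, for an illustrative exponential density f(x) = lam * exp(-lam * x).
lam = 2.0                            # rate parameter: arbitrary choice for the demo
x = np.linspace(0.0, 50.0, 200_001)  # grid wide enough that the tail is negligible
f = lam * np.exp(-lam * x)           # density values on the grid

numeric = np.trapz(x * f, x)                          # ~ integral of x f(x) dx
monte_carlo = np.random.exponential(1 / lam, 10**6).mean()

print(numeric, monte_carlo, 1 / lam)                  # all approximately 0.5
```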
It’s the coins, stupid
▶ Much of what we have done in this course can be motivated by the i.i.d. sequence X_i, where each X_i is 1 with probability p and 0 otherwise. Write S_n = \sum_{i=1}^n X_i.
▶ Binomial (S_n is the number of heads in n tosses), geometric (number of tosses required to obtain one head), negative binomial (number of tosses required to obtain n heads).
▶ The standard normal approximates the law of (S_n − E[S_n])/SD(S_n). Here E[S_n] = np and SD(S_n) = \sqrt{Var(S_n)} = \sqrt{npq}, where q = 1 − p.
▶ Poisson is the limit of the binomial as n → ∞ when p = λ/n.
▶ Poisson point process: toss one λ/n coin during each length-1/n time increment, and take the n → ∞ limit.
▶ Exponential: time till the first event in a λ Poisson point process.
▶ Gamma distribution: time till the nth event in a λ Poisson point process. (A simulation sketch of these approximations follows below.)
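The coin-toss picture is easy to check numerically. The sketch below is a rough illustration with arbitrarily chosen n, p, λ, and number of runs (none of these values come from the lecture): it draws many runs of coin tosses and compares the binomial counts, the normal rescaling, the geometric waiting time, and the Poisson limit against the stated formulas.

```python
import numpy as np

# Rough numerical sketch of the coin-toss picture; n, p, lam and the number
# of runs are arbitrary illustrative choices.
rng = np.random.default_rng(0)
runs, n, p = 10_000, 1000, 0.3
q = 1 - p

tosses = rng.random((runs, n)) < p          # independent strings of n p-coins
S_n = tosses.sum(axis=1)                    # Binomial(n, p) head counts

# Normal approximation: (S_n - np)/sqrt(npq) should be roughly standard normal.
Z = (S_n - n * p) / np.sqrt(n * p * q)
print(Z.mean(), Z.std())                    # approximately 0 and 1

# Geometric: toss number of the first head (with p = 0.3 and n = 1000,
# every run contains a head with overwhelming probability).
first_head = tosses.argmax(axis=1) + 1
print(first_head.mean(), 1 / p)             # both approximately 1/p

# Poisson limit: sum of many lam/N coins, i.e. Binomial(N, lam/N) for large N.
lam, N = 4.0, 10**5
rare_counts = rng.binomial(N, lam / N, size=runs)
print(rare_counts.mean(), rare_counts.var(), lam)   # all approximately lam
```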
Discrete random variable properties derivable from coin toss intuition
Continuous random variable properties derivable from coin toss intuition
▶ The sum of n independent exponential random variables, each with parameter λ, is gamma with parameters (n, λ).
▶ Memoryless property: given that an exponential random variable X is greater than T > 0, the conditional law of X − T is the same as the original law of X.
▶ Write p = λ/n. The Poisson random variable's expectation is lim_{n→∞} np = lim_{n→∞} n(λ/n) = λ. Its variance is lim_{n→∞} np(1 − p) = lim_{n→∞} n(λ/n)(1 − λ/n) = λ.
▶ The sum of a λ_1 Poisson and an independent λ_2 Poisson is a (λ_1 + λ_2) Poisson.
▶ The times between successive events in a λ Poisson process are independent exponentials with parameter λ.
▶ The minimum of independent exponentials with parameters λ_1 and λ_2 is itself exponential with parameter λ_1 + λ_2. (A numerical check of several of these facts follows below.)
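Here is a minimal Monte Carlo sketch of three of the facts above (the rates, sample sizes, and threshold T are arbitrary illustrative choices): the sum of exponentials versus the gamma distribution, memorylessness, and the minimum of two independent exponentials.

```python
import numpy as np

# Quick Monte Carlo checks of the exponential/gamma facts above;
# parameter values are arbitrary illustrations.
rng = np.random.default_rng(1)
lam, n, N = 1.5, 5, 10**6

# Sum of n independent Exp(lam) variables vs. Gamma(n, lam): compare moments.
sums = rng.exponential(1 / lam, (N, n)).sum(axis=1)
gamma = rng.gamma(shape=n, scale=1 / lam, size=N)
print(sums.mean(), gamma.mean(), n / lam)        # all approximately n/lam
print(sums.var(), gamma.var(), n / lam**2)       # all approximately n/lam^2

# Memorylessness: conditioned on X > T, the law of X - T is again Exp(lam).
X = rng.exponential(1 / lam, N)
T = 2.0
print((X[X > T] - T).mean(), 1 / lam)            # both approximately 1/lam

# Minimum of independent Exp(lam1), Exp(lam2) is Exp(lam1 + lam2).
lam1, lam2 = 1.0, 3.0
m = np.minimum(rng.exponential(1 / lam1, N), rng.exponential(1 / lam2, N))
print(m.mean(), 1 / (lam1 + lam2))               # both approximately 0.25
```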
DeMoivre-Laplace Limit Theorem
\lim_{n \to \infty} P\Big\{ a \le \frac{S_n - np}{\sqrt{npq}} \le b \Big\} = \Phi(b) - \Phi(a).
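A quick numerical illustration of this limit (the choices n = 2000, p = 0.3, a = −1, b = 2 below are arbitrary, and Φ is computed from the error function): the empirical frequency should be close to Φ(b) − Φ(a).

```python
import numpy as np
from math import erf, sqrt

# Empirical check of the DeMoivre-Laplace limit; n, p, a, b are arbitrary choices.
def Phi(x):
    """Standard normal CDF, written in terms of the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

rng = np.random.default_rng(2)
n, p, a, b = 2000, 0.3, -1.0, 2.0
q = 1 - p

S_n = rng.binomial(n, p, size=10**6)          # many samples of the head count
Z = (S_n - n * p) / np.sqrt(n * p * q)        # rescaled head counts

print(np.mean((a <= Z) & (Z <= b)))           # empirical P{a <= Z <= b}
print(Phi(b) - Phi(a))                        # limiting value, about 0.8186
```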
Problems
Properties of normal random variables
Properties of exponential random variables
Defining Γ distribution
Properties of uniform random variables
▶ Suppose X is a random variable with probability density function f(x) = 1/(β − α) for x ∈ [α, β], and f(x) = 0 for x ∉ [α, β].
▶ Then E[X] = (α + β)/2.
▶ Writing X = (β − α)Y + α, where Y is uniform on [0, 1] with Var[Y] = 1/12, we get Var[X] = Var[(β − α)Y + α] = Var[(β − α)Y] = (β − α)^2 Var[Y] = (β − α)^2/12. (A numerical check follows below.)
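A short Monte Carlo check of the uniform mean and variance formulas; the endpoints α and β below are arbitrary illustrative values.

```python
import numpy as np

# Monte Carlo check of the uniform mean and variance formulas;
# alpha and beta are arbitrary illustrative endpoints.
rng = np.random.default_rng(3)
alpha, beta = 2.0, 7.0
X = rng.uniform(alpha, beta, size=10**6)

print(X.mean(), (alpha + beta) / 2)           # both approximately 4.5
print(X.var(), (beta - alpha)**2 / 12)        # both approximately 25/12
```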
Distribution of function of random variable
Joint probability mass functions: discrete random variables
Joint distribution functions: continuous random variables
Independent random variables
Summing two random variables
Maxima: pick five job candidates at random, choose best
Properties of expectation
Defining covariance and correlation
Basic covariance facts
▶ Cov(X, Y) = Cov(Y, X).
▶ Cov(X, X) = Var(X).
▶ Cov(aX, Y) = a Cov(X, Y).
▶ Cov(X_1 + X_2, Y) = Cov(X_1, Y) + Cov(X_2, Y).
▶ General statement of bilinearity of covariance:
  Cov\Big( \sum_{i=1}^m a_i X_i, \sum_{j=1}^n b_j Y_j \Big) = \sum_{i=1}^m \sum_{j=1}^n a_i b_j Cov(X_i, Y_j).
▶ Special case:
  Var\Big( \sum_{i=1}^n X_i \Big) = \sum_{i=1}^n Var(X_i) + 2 \sum_{(i,j): i < j} Cov(X_i, X_j).
  (A numerical check of bilinearity follows below.)
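Bilinearity is straightforward to verify on simulated data. In the sketch below, the coefficient vectors a and b, the sample size, and the dependence structure between the X_i and Y_j are all arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of bilinearity of covariance on simulated data; the coefficient
# vectors a, b and the dependence structure below are arbitrary illustrations.
rng = np.random.default_rng(4)
N, m, n = 10**6, 2, 3

X = rng.normal(size=(m, N))
X[1] += 0.5 * X[0]                         # make X_1 and X_2 correlated
Y = rng.normal(size=(n, N)) + 0.3 * X[:1]  # make each Y_j correlated with X_1

a = np.array([1.0, -2.0])
b = np.array([0.5, 1.0, 3.0])

def cov(u, v):
    """Sample covariance of two equal-length samples."""
    return np.mean((u - u.mean()) * (v - v.mean()))

lhs = cov(a @ X, b @ Y)                    # Cov(sum_i a_i X_i, sum_j b_j Y_j)
rhs = sum(a[i] * b[j] * cov(X[i], Y[j]) for i in range(m) for j in range(n))
print(lhs, rhs)                            # agree up to Monte Carlo error

# Special case: Var(sum_i X_i) = sum_i Var(X_i) + 2 * sum_{i<j} Cov(X_i, X_j).
lhs_var = np.var(X.sum(axis=0))
rhs_var = (sum(cov(X[i], X[i]) for i in range(m))
           + 2 * sum(cov(X[i], X[j]) for i in range(m) for j in range(m) if i < j))
print(lhs_var, rhs_var)
```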
Defining correlation
\rho(X, Y) := \frac{Cov(X, Y)}{\sqrt{Var(X)\,Var(Y)}}.
▶ Correlation doesn't care what units you use for X and Y: if a > 0 and c > 0, then ρ(aX + b, cY + d) = ρ(X, Y).
▶ It satisfies −1 ≤ ρ(X, Y) ≤ 1.
▶ If a and b are constants and a > 0, then ρ(aX + b, X) = 1.
▶ If a and b are constants and a < 0, then ρ(aX + b, X) = −1. (These facts are checked numerically below.)
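A minimal numerical check of these correlation facts; the constants a, b, c, d and the way the correlated pair (X, Y) is built below are arbitrary illustrative choices.

```python
import numpy as np

# Check that correlation is unchanged by positive affine rescalings, and that
# rho(aX + b, X) is +1 or -1 according to the sign of a; all constants below
# are arbitrary illustrative choices.
rng = np.random.default_rng(5)
N = 10**6
X = rng.normal(size=N)
Y = 0.7 * X + rng.normal(size=N)        # some correlated pair

def rho(u, v):
    """Sample correlation coefficient."""
    return np.corrcoef(u, v)[0, 1]

a, b, c, d = 3.0, -1.0, 0.25, 10.0      # a > 0 and c > 0
print(rho(X, Y), rho(a * X + b, c * Y + d))   # equal up to floating point
print(rho(2 * X + 5, X))                      # +1 (positive slope)
print(rho(-2 * X + 5, X))                     # -1 (negative slope)
```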
Conditional probability distributions
Conditional expectation as a random variable
Moment generating functions
Moment generating functions for sums of i.i.d. random variables
Examples
Cauchy distribution
Beta distribution
MIT OpenCourseWare
http://ocw.mit.edu
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.