Statistics I - Unit 5. Bidimensional Random Variables
University of Valladolid
Outline
p(X = xi , Y = yj ), for i = 1, 2, . . . , j = 1, 2, . . .
Probability of an event:
p[(X , Y ) ∈ B] = Σ_{(xi ,yj )∈B} p(X = xi , Y = yj )
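The sum above can be sketched numerically. The joint pmf and the event B below are hypothetical illustrations, not values from the unit:

```python
# Minimal sketch: probability of an event B under a joint pmf (hypothetical values).
joint_pmf = {
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.10, (1, 1): 0.25,
    (2, 0): 0.15, (2, 1): 0.20,
}

def prob_event(pmf, in_B):
    """p[(X, Y) in B] = sum of p(X = xi, Y = yj) over the pairs in B."""
    return sum(prob for (x, y), prob in pmf.items() if in_B(x, y))

# Example event: B = {X + Y <= 1}
p = prob_event(joint_pmf, lambda x, y: x + y <= 1)  # 0.10 + 0.20 + 0.10
```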
• Conditional variances:
Var (X |Y = yj ) = E (X 2 |Y = yj ) − [E (X |Y = yj )]2
Var (Y |X = xi ) = E (Y 2 |X = xi ) − [E (Y |X = xi )]2
E (Y |X = x) is a function of x.
Then, g (X ) = E (Y |X ) is a transformation of X and, therefore, is another
unidimensional random variable.
It can be shown that:
E (E (Y |X )) = E (Y )
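This identity is easy to verify numerically on a small discrete distribution. The joint pmf below is a hypothetical example chosen only to illustrate the computation:

```python
# Sketch: verify E(E(Y|X)) = E(Y) for a small (hypothetical) joint pmf.
joint_pmf = {
    (0, 0): 0.2, (0, 1): 0.3,
    (1, 0): 0.1, (1, 1): 0.4,
}

x_vals = sorted({x for x, _ in joint_pmf})
# marginal pmf of X
p_X = {x: sum(p for (xi, _), p in joint_pmf.items() if xi == x) for x in x_vals}

def cond_exp_Y_given(x):
    # E(Y | X = x) = sum_y y * p(X = x, Y = y) / p(X = x)
    return sum(y * p for (xi, y), p in joint_pmf.items() if xi == x) / p_X[x]

# E(E(Y|X)): average the conditional expectation over the distribution of X
lhs = sum(cond_exp_Y_given(x) * p_X[x] for x in x_vals)
rhs = sum(y * p for (_, y), p in joint_pmf.items())  # E(Y)
```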
p(X = 1) · p(Y = 0) = (3/10) · (6/10) = 18/100 ≠ p(X = 1, Y = 0) = 1/10
Hence, X and Y are not independent.
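The check above can be written as a short computation; it uses only the probabilities stated in the example, and one failing pair is already enough to rule out independence:

```python
# Independence requires p(X = xi, Y = yj) = p(X = xi) * p(Y = yj) for EVERY pair.
p_X1 = 3 / 10          # p(X = 1), from the example
p_Y0 = 6 / 10          # p(Y = 0), from the example
p_joint_10 = 1 / 10    # p(X = 1, Y = 0), from the example

# Product of marginals vs. joint probability at the pair (1, 0):
independent_at_10 = abs(p_X1 * p_Y0 - p_joint_10) < 1e-12  # 0.18 vs 0.10
```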
Example 3:
X = monthly disposable income
Y = consumption of perishable goods
From the density function of X , we can obtain its expected value and variance:
From the density function of Y , we can obtain its expected value and variance:
In the same way, given a value, x, such that fX (x) ̸= 0, the conditional
density function of Y given the event (X = x) is the function
It is only defined for x > 0 (where fX (x) > 0), in which case it is given by:
E (E (Y |X )) = E (Y )
Hence, we can obtain the joint density function from the marginal density of one variable and the conditional density of the other variable given it.
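A minimal numeric sketch of this idea, under assumed densities: suppose X ~ Exp(1), so fX (x) = e^(−x) for x > 0, and Y | X = x ~ Uniform(0, x). The joint density is then fX (x) · f(y | x), and integrating it over y recovers the marginal of X. Both densities are illustrative choices, not taken from the unit's example:

```python
import math

def f_X(x):
    # marginal density of X (assumed: Exp(1))
    return math.exp(-x) if x > 0 else 0.0

def f_Y_given_X(y, x):
    # conditional density of Y given X = x (assumed: Uniform(0, x))
    return 1.0 / x if 0 < y < x else 0.0

def f_joint(x, y):
    # joint density = marginal of X times conditional of Y given X
    return f_X(x) * f_Y_given_X(y, x)

def marginal_from_joint(x, n=10_000):
    # midpoint-rule integral of f_joint(x, y) dy over (0, x)
    h = x / n
    return sum(f_joint(x, (k + 0.5) * h) for k in range(n)) * h

val = marginal_from_joint(1.7)  # should be close to f_X(1.7)
```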
P1. Cov (X , Y ) = E (X · Y ) − E (X ) · E (Y )
P2. Cov (X , X ) = Var (X )
P3. Cov (X , Y ) = Cov (Y , X )
P4. If X and Y are independent ⇒ Cov (X , Y ) = 0
P5. Cov (a + b · X , c + d · Y ) = bd · Cov (X , Y ) ⇒ The covariance varies
under changes of scale.
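Properties P1, P2, P3 and P5 can all be checked numerically on a small joint pmf. The distribution and the constants a, b, c, d below are hypothetical:

```python
# Sketch: checking covariance properties on a small (hypothetical) joint pmf.
joint_pmf = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}

def E(g):
    """Expectation of g(x, y) under the joint pmf."""
    return sum(g(x, y) * p for (x, y), p in joint_pmf.items())

def cov(gx, gy):
    # P1: Cov(U, V) = E(U * V) - E(U) * E(V)
    return E(lambda x, y: gx(x, y) * gy(x, y)) - E(gx) * E(gy)

cov_xy = cov(lambda x, y: x, lambda x, y: y)
var_x = cov(lambda x, y: x, lambda x, y: x)      # P2: Cov(X, X) = Var(X)
cov_yx = cov(lambda x, y: y, lambda x, y: x)     # P3: symmetry

# P5: Cov(a + b*X, c + d*Y) = b*d * Cov(X, Y)
a, b, c, d = 2.0, 3.0, -1.0, 5.0
cov_scaled = cov(lambda x, y: a + b * x, lambda x, y: c + d * y)
```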
ρXY = Cov (X , Y ) / (σX · σY )
• sign(ρXY ) = sign(σXY )
• −1 ≤ ρXY ≤ 1
This value indicates a weak negative linear relationship between the variables.
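The computation of ρXY can be sketched on a small joint pmf; the distribution below is hypothetical, chosen so that the correlation comes out weakly negative, echoing the interpretation above:

```python
import math

# Hypothetical joint pmf with a weak negative linear relationship.
joint_pmf = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.3, (1, 1): 0.2}

def E(g):
    return sum(g(x, y) * p for (x, y), p in joint_pmf.items())

cov = E(lambda x, y: x * y) - E(lambda x, y: x) * E(lambda x, y: y)
sigma_x = math.sqrt(E(lambda x, y: x * x) - E(lambda x, y: x) ** 2)
sigma_y = math.sqrt(E(lambda x, y: y * y) - E(lambda x, y: y) ** 2)

rho = cov / (sigma_x * sigma_y)  # here: -0.2, weak and negative
```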
• Bernoulli
X1 , . . . , Xn independent random variables with Bernoulli distribution of parameter p
X1 + · · · + Xn → B(n, p)
• Binomial
X1 , . . . , Xk independent random variables with distribution B(ni , p),
i = 1, . . . , k
X1 + · · · + Xk → B(n1 + · · · + nk , p)
• Poisson
X1 , . . . , Xn independent random variables with distribution P(λi ), i =
1, . . . , n
X1 + · · · + Xn → P(λ1 + · · · + λn )
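The Poisson result can be checked numerically: convolving the pmfs of P(λ1) and P(λ2) should reproduce, term by term, the pmf of P(λ1 + λ2). The rates below are arbitrary illustrative values:

```python
import math

def pois_pmf(k, lam):
    # Poisson pmf: e^(-lam) * lam^k / k!
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam1, lam2 = 1.5, 2.3
N = 30  # truncation point; the tail mass beyond N is negligible here

def conv_pmf(n):
    # p(X1 + X2 = n) = sum_k p(X1 = k) * p(X2 = n - k)
    return sum(pois_pmf(k, lam1) * pois_pmf(n - k, lam2) for k in range(n + 1))

# Compare the convolution with the pmf of P(lam1 + lam2)
max_err = max(abs(conv_pmf(n) - pois_pmf(n, lam1 + lam2)) for n in range(N))
```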
• Geometric
X1 , . . . , Xr independent random variables with Geometric distribution
G (p)
X1 + · · · + Xr → BN(r , p)
• Negative Binomial
X1 , . . . , Xn independent random variables with distribution BN(ri , p),
i = 1, . . . , n
X1 + · · · + Xn → BN(r1 + · · · + rn , p)
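The geometric case can be verified the same way, assuming G (p) counts the trial on which the first success occurs (support 1, 2, . . .) and BN(r , p) counts the trial of the r-th success; convolving two G (p) pmfs should match BN(2, p). The value of p is an arbitrary illustrative choice:

```python
import math

p = 0.3

def geom_pmf(k):
    # G(p): first success on trial k, i.e. p * (1-p)^(k-1) for k >= 1
    return p * (1 - p) ** (k - 1) if k >= 1 else 0.0

def negbin_pmf(n, r):
    # BN(r, p): r-th success on trial n, i.e. C(n-1, r-1) * p^r * (1-p)^(n-r)
    return math.comb(n - 1, r - 1) * p ** r * (1 - p) ** (n - r) if n >= r else 0.0

def conv_pmf(n):
    # p(X1 + X2 = n) for independent X1, X2 ~ G(p)
    return sum(geom_pmf(k) * geom_pmf(n - k) for k in range(1, n))

# Compare the convolution with BN(2, p) term by term
max_err = max(abs(conv_pmf(n) - negbin_pmf(n, 2)) for n in range(2, 40))
```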