
Lecture 4: Special Distribution Function &

Joint Probability Distribution

Hengki Purwoto (Econ UGM), Statistics 2: Lecture 4, March 8, 2021


0. Outline

1 Special Distribution Function


Motivation

2 Discrete Distribution
Binomial Probability Distribution
Poisson Probability Distribution

3 Continuous Distributions
Uniform Probability Distribution
Normal Probability Distribution

4 Joint Probability Distributions


Bivariate distributions
More than two random variables



1. Special Distribution Function

Motivation

In statistics, we deal with populations that have different characteristics


The characteristic is described by a distribution
Knowing the distribution allows us to make better inference
There are so many distributions out there:
◮ https://en.wikipedia.org/wiki/List_of_probability_distributions
In this lecture, we will first discuss probability distributions that are
commonly used



2. Discrete Distribution

Bernoulli probability distribution

This is a distribution with only two outcomes


◮ Ex: tossing a coin
Suppose that we refer to observing a head as a “success” and a tail
as a “failure”:
◮ the probability of head is p
◮ the probability of tail is 1 − p
What is the random variable?



2. Discrete Distribution
Bernoulli probability distribution
The random variable X takes the value 1 if a head is observed and 0 otherwise.
Such a random variable is said to follow a Bernoulli probability distribution.
The probability function of a Bernoulli random variable X is:
p(x) = P(X = x) = p^x (1 − p)^(1−x)   for x = 0, 1
                = 0                   otherwise

where 0 ≤ p ≤ 1 is a parameter.
Its characteristics are:
◮ E [X ] = p
◮ Var [X ] = p (1 − p)
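As a quick sanity check, the pmf and these two moments can be evaluated directly; the short Python sketch below is illustrative and not part of the original slides.

```python
def bernoulli_pmf(x, p):
    """P(X = x) = p**x * (1 - p)**(1 - x) for x in {0, 1}, else 0."""
    return p**x * (1 - p)**(1 - x) if x in (0, 1) else 0.0

p = 0.5                       # e.g. a fair coin: P(head) = p
print(bernoulli_pmf(1, p))    # P(X = 1) = 0.5
print(p, p * (1 - p))         # E[X] = p, Var[X] = p(1 - p)
```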



2. Discrete Distribution

Bernoulli probability distribution

We can repeat the Bernoulli trials many times to observe the


total number of successes
For example, we may want to calculate the probability of k successes
in n Bernoulli trials.
This is referred to as the binomial probability distribution.



2. Discrete Distribution

Binomial probability distribution

Definition
A binomial experiment is one that has the following properties:
1) the experiment consists of n trials; 2) each trial results in one of two
outcomes: success (S) or failure (F); 3) the probability of success
in every trial is p; 4) the outcomes of the trials are independent; 5)
the random variable X is the number of successes in n trials.



2. Discrete Distribution

Binomial probability distribution

▪ A fixed number of observations, n


▪ e.g., 15 tosses of a coin; ten light bulbs taken from a warehouse
▪ Two mutually exclusive and collectively exhaustive categories
▪ e.g., head or tail in each toss of a coin; defective or not defective light bulb
▪ Generally called “success” and “failure”
▪ Probability of success is P, probability of failure is 1 – P
▪ Observations are independent
▪ The outcome of one observation does not affect the outcome of the other
▪ Constant probability for each observation
▪ e.g., Probability of getting a tail is the same each time we toss the coin



2. Discrete Distribution

Possible Binomial Distribution Settings

• A manufacturing plant labels items as either defective or acceptable


• A firm bidding for contracts will either get a contract or not
• A marketing research firm receives survey responses of “yes I will buy”
or “no I will not”
• New job applicants either accept the offer or reject it



2. Discrete Distribution

Developing the Binomial Distribution

• The number of sequences with x successes in n independent


trials is:

C(n, x) = n! / (x!(n − x)!)

where n! = n·(n – 1)·(n – 2)· . . . ·1 and 0! = 1

• These sequences are mutually exclusive, since no two can


occur at the same time



2. Discrete Distribution

Binomial probability distribution


Definition
A random variable X is said to have binomial probability distribution
with parameters (n, p) iff:
P(X = x) = p(x) = C(n, x) p^x q^(n−x)
                = [n! / (x!(n − x)!)] p^x q^(n−x),   x = 0, 1, …, n

where q = 1 − p.


2. Discrete Distribution

Binomial probability distribution


Example
What is the probability of one success in five observations
if the probability of success is 0.1?
x = 1, n = 5, and P = 0.1

P(X = 1) = [n! / (x!(n − x)!)] P^x (1 − P)^(n−x)
         = [5! / (1!(5 − 1)!)] (0.1)^1 (1 − 0.1)^(5−1)
         = (5)(0.1)(0.9)^4
         = 0.32805
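The same arithmetic can be reproduced in a few lines of Python; math.comb counts the C(n, x) sequences. This is a verification sketch, not material from the slides.

```python
from math import comb, sqrt

def binom_pmf(x, n, p):
    """P(X = x) = C(n, x) * p**x * (1 - p)**(n - x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

print(binom_pmf(1, 5, 0.1))            # 0.32805, matching the slide
print(5 * 0.1, sqrt(5 * 0.1 * 0.9))    # mean nP = 0.5 and sd ≈ 0.6708 (used later)
```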



2. Discrete Distribution

Shape of Binomial Distribution


• The shape of the binomial distribution depends on the values of
P and n
[Bar charts of P(x) for x = 0, 1, …, 5: with n = 5 and P = 0.1 the distribution is right-skewed with most of its mass at x = 0; with n = 5 and P = 0.5 it is symmetric around x = 2.5.]
2. Discrete Distribution

Mean and Variance of a Binomial Distribution

▪ Mean μ = E(x) = nP

▪ Variance and Standard Deviation

σ² = nP(1 − P)        σ = √(nP(1 − P))

Where n = sample size


P = probability of success
(1 – P) = probability of failure



2. Discrete Distribution

Binomial probability distribution

Theorem
If X is a binomial random variable with parameters n and p, then:

μ = E[X] = np
Var[X] = np(1 − p).


2. Discrete Distribution
Binomial Characteristics
Examples

For n = 5 and P = 0.1:
μ = nP = (5)(0.1) = 0.5
σ = √(nP(1 − P)) = √((5)(0.1)(1 − 0.1)) = 0.6708

For n = 5 and P = 0.5:
μ = nP = (5)(0.5) = 2.5
σ = √(nP(1 − P)) = √((5)(0.5)(1 − 0.5)) = 1.118

[Bar charts of P(x), x = 0, …, 5, for the two cases, as on the previous slide.]


2. Discrete Distribution
Using Binomial Tables

N x … p=.20 p=.25 p=.30 p=.35 p=.40 p=.45 p=.50


10 0 … 0.1074 0.0563 0.0282 0.0135 0.0060 0.0025 0.0010
1 … 0.2684 0.1877 0.1211 0.0725 0.0403 0.0207 0.0098
2 … 0.3020 0.2816 0.2335 0.1757 0.1209 0.0763 0.0439
3 … 0.2013 0.2503 0.2668 0.2522 0.2150 0.1665 0.1172
4 … 0.0881 0.1460 0.2001 0.2377 0.2508 0.2384 0.2051
5 … 0.0264 0.0584 0.1029 0.1536 0.2007 0.2340 0.2461
6 … 0.0055 0.0162 0.0368 0.0689 0.1115 0.1596 0.2051
7 … 0.0008 0.0031 0.0090 0.0212 0.0425 0.0746 0.1172
8 … 0.0001 0.0004 0.0014 0.0043 0.0106 0.0229 0.0439
9 … 0.0000 0.0000 0.0001 0.0005 0.0016 0.0042 0.0098
10 … 0.0000 0.0000 0.0000 0.0000 0.0001 0.0003 0.0010

Examples:
n = 10, x = 3, P = 0.35: P(x = 3|n =10, p = 0.35) = .2522
n = 10, x = 8, P = 0.45: P(x = 8|n =10, p = 0.45) = .0229



2. Discrete Distribution

Binomial probability distribution

More Examples
The probability of stunting among poor children is 55%. If we
randomly pick 10 poor children, what is the probability that 4 poor
children experience stunted growth?

What is the probability that at least two poor children experience stunted growth?
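A possible way to answer both questions with scipy.stats (assuming SciPy is available; the approximate values in the comments are my own calculations, not from the slides):

```python
from scipy.stats import binom

n, p = 10, 0.55                          # 10 children, P(stunting) = 0.55

p_exactly_4 = binom.pmf(4, n, p)         # P(X = 4) ≈ 0.160
p_at_least_2 = 1 - binom.cdf(1, n, p)    # P(X >= 2) = 1 - P(X <= 1) ≈ 0.995
print(p_exactly_4, p_at_least_2)
```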



2. Discrete Distribution

Poisson probability distribution

Suppose that A is an event of interest


A counting random variable counts the number of occurrences of A
We apply the Poisson distribution when:
◮ we count the number of times an event occurs in a given continuous interval
◮ the probability of occurrence in one subinterval is very small, the
same for all subintervals, and independent of the events in other
subintervals.



2. Discrete Distribution

Poisson probability distribution

The Poisson probability distribution is characterized by λ


The Poisson distribution is typically used to model rare events:
◮ usually if p ≤ 0.1 and n ≥ 40.
Example:
◮ number of misprints in a book
◮ number of defective products in a production line
◮ radioactivity counts per unit time
◮ the number of plankton per aliquot of seawater
◮ bacterial colonies per petri plate in microbiological study



2. Discrete Distribution

Poisson probability distribution

Definition
A discrete random variable X follows the Poisson probability
distribution with parameter λ > 0, Poisson(λ), if:

P(X = x) = f(x; λ) = (e^(−λ) λ^x) / x!

where:
x = 0, 1, 2, …
P(x) = the probability of x successes over a given time or space interval, given λ
λ = the expected number of successes per time or space unit, λ > 0
e = the base of the natural logarithm system (2.71828...)
2. Discrete Distribution

Poisson probability distribution

Theorem
If X is a Poisson random variable with parameter λ, then:

μ_X = E[X] = λ
Var[X] = E[(X − μ_X)²] = λ

where λ = the expected number of successes per time or space unit


2. Discrete Distribution
Using Poisson Tables

x      λ = 0.10   0.20   0.30   0.40   0.50   0.60   0.70   0.80   0.90

0 0.9048 0.8187 0.7408 0.6703 0.6065 0.5488 0.4966 0.4493 0.4066


1 0.0905 0.1637 0.2222 0.2681 0.3033 0.3293 0.3476 0.3595 0.3659
2 0.0045 0.0164 0.0333 0.0536 0.0758 0.0988 0.1217 0.1438 0.1647
3 0.0002 0.0011 0.0033 0.0072 0.0126 0.0198 0.0284 0.0383 0.0494
4 0.0000 0.0001 0.0003 0.0007 0.0016 0.0030 0.0050 0.0077 0.0111
5 0.0000 0.0000 0.0000 0.0001 0.0002 0.0004 0.0007 0.0012 0.0020
6 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0002 0.0003
7 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000

Example: Find P(X = 2) if λ = 0.50

P(X = 2) = (e^(−λ) λ^x) / x! = (e^(−0.50) (0.50)²) / 2! = 0.0758
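The table entry can be checked directly from the pmf; a minimal pure-Python sketch:

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    """P(X = x) = exp(-lam) * lam**x / x!"""
    return exp(-lam) * lam**x / factorial(x)

print(round(poisson_pmf(2, 0.50), 4))    # 0.0758, matching the table
```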


2. Discrete Distribution
Graph of Poisson Probabilities

Graphically, for λ = 0.50:

x     P(x)
0     0.6065
1     0.3033
2     0.0758
3     0.0126
4     0.0016
5     0.0002
6     0.0000
7     0.0000

[Bar chart of P(x) against x for λ = 0.50; in particular, P(X = 2) = 0.0758.]


2. Discrete Distribution

Poisson Distribution Shape

• The shape of the Poisson Distribution depends on the


parameter λ:

[Bar charts of P(x) for λ = 0.50 (mass concentrated near x = 0) and for λ = 3.00 (more spread out, peaking around x = 2 and 3).]


2. Discrete Distribution

Poisson Approximation to the Binomial Distribution

Let X be the number of successes from n independent trials,


each with probability of success P. The distribution of the
number of successes, X , is binomial, with mean nP. If the
number of trials, n , is large and nP is of only moderate size
(preferably nP ≤ 7 ), this distribution can be approximated by
the Poisson distribution with λ = nP. The probability
distribution of the approximating distribution is

P(x) = (e^(−nP) (nP)^x) / x!    for x = 0, 1, 2, ...
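A small sketch comparing the exact binomial probabilities with this Poisson approximation; the case n = 100, P = 0.02 (so nP = 2 ≤ 7) is my own illustrative choice, not from the slides.

```python
from math import comb, exp, factorial

n, p = 100, 0.02          # large n, small P
lam = n * p               # Poisson parameter nP = 2

for x in range(5):
    exact = comb(n, x) * p**x * (1 - p)**(n - x)    # binomial probability
    approx = exp(-lam) * lam**x / factorial(x)      # Poisson approximation
    print(x, round(exact, 4), round(approx, 4))
```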


2. Discrete Distribution

Poisson Approximation to the Binomial Distribution

Theorem
If X is a binomial r.v. with parameters n and p, then for
each x = 0, 1, 2, . . . and as p → 0, n → ∞ with np = λ
constant:

lim_(n→∞) C(n, x) p^x (1 − p)^(n−x) = (e^(−λ) λ^x) / x!


2. Discrete Distribution

Poisson probability distribution

More Examples

Let X be a Poisson random variable with λ = 1/3.


Find:
◮ P (X = 0)
◮ P (X ≥ 3)
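A sketch of both calculations (the approximate values in the comments are my own):

```python
from math import exp, factorial

lam = 1 / 3
pmf = lambda x: exp(-lam) * lam**x / factorial(x)

p0 = pmf(0)                              # P(X = 0) = e**(-1/3) ≈ 0.7165
p_ge_3 = 1 - (pmf(0) + pmf(1) + pmf(2))  # P(X >= 3) = 1 - P(X <= 2) ≈ 0.0048
print(p0, p_ge_3)
```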



3. Continuous Distributions

Uniform probability distribution


The uniform distribution is a probability distribution that has
equal probabilities for all possible outcomes of the random
variable.

[Plot of the uniform density f(x): constant between xmin and xmax. The total area under the uniform probability density function is 1.0.]

Example: A school bus arrives at a stop every 30 minutes between 6 a.m. and 11 p.m. Students arrive at the bus stop at random times. The time that a student waits is uniformly distributed from 0 to 30 minutes.
3. Continuous Distributions
Uniform probability distribution

f(x) = 1 / (b − a)   for a ≤ x ≤ b
     = 0             otherwise

where: f(x) = value of the density function at any x value
a = minimum value of x
b = maximum value of x


3. Continuous Distributions

Mean and Variance of the Uniform Distribution

• The mean of a uniform distribution is

μ = (a + b) / 2

• The variance is

σ² = (b − a)² / 12

where a = minimum value of x


b = maximum value of x



3. Continuous Distributions

Uniform probability distribution

Example:
Uniform probability distribution over the range 2 ≤ x ≤ 6:

f(x) = 1 / (6 − 2) = 0.25   for 2 ≤ x ≤ 6

μ = (a + b) / 2 = (2 + 6) / 2 = 4

σ² = (b − a)² / 12 = (6 − 2)² / 12 = 1.333

[Plot of the density: f(x) = 0.25 on the interval from x = 2 to x = 6.]


3. Continuous Distributions

Uniform probability distribution


More Examples

Suppose that we have a uniform probability distribution over the range 4 ≤ x ≤ 20.

◮ Find the density function


◮ Find P (X > 6)
◮ Find E [X ]
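One way to answer these with scipy.stats.uniform, which is parameterized by loc = a and scale = b − a (assuming SciPy is available):

```python
from scipy.stats import uniform

a, b = 4, 20
U = uniform(loc=a, scale=b - a)   # uniform on [4, 20]

print(U.pdf(10))    # density f(x) = 1/(b - a) = 1/16 = 0.0625 for 4 <= x <= 20
print(U.sf(6))      # P(X > 6) = (20 - 6)/16 = 0.875
print(U.mean())     # E[X] = (a + b)/2 = 12
```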



3. Continuous Distributions

Normal probability distribution

The normal distribution is the single most important distribution


in statistics
The normal distribution has:
◮ a bell-shaped pdf
◮ a symmetric pdf
◮ equal mean, median, and mode
The location is determined by the mean, µ
The spread is determined by the standard deviation, σ
The random variable has an infinite theoretical range: − ∞ < x < ∞ .



3. Continuous Distributions
Normal probability distribution

Definition
A random variable X is said to have a normal probability distribution
with parameters µ and σ2 if it has a probability density function:
f(x) = (1 / (σ√(2π))) e^(−(x − μ)² / (2σ²))
for − ∞ < x < ∞ , − ∞ < µ < ∞ , and σ > 0.

where e = the mathematical constant approximated by 2.71828


π = the mathematical constant approximated by 3.14159
μ = the population mean
σ2 = the population variance
x = any value of the continuous variable, −∞ < x < ∞


3. Continuous Distributions

Normal probability distribution

If µ = 0 and σ = 1, the distribution is referred to as the standard


normal random variable
In general,
X ∼ N ( µ, σ 2 )

We can always transform any normal random variable to standard


normal variable:

(X − μ) / σ ∼ N(0, 1).


3. Continuous Distributions

General Procedure for Finding Probabilities

To find P(a < X < b) when X is distributed normally:


▪ Draw the normal curve for the problem in terms of X
▪ Translate X-values to Z-values
▪ Use the Cumulative Normal Table



3. Continuous Distributions

Finding Normal Probabilities


• Suppose X is normal with mean 8.0 and standard deviation 5.0.
Find P(X < 8.6)

Z = (X − μ) / σ = (8.6 − 8.0) / 5.0 = 0.12

[Two normal curves: X with μ = 8, σ = 5, shaded to the left of 8.6, and Z with μ = 0, σ = 1, shaded to the left of 0.12; P(X < 8.6) = P(Z < 0.12).]
3. Continuous Distributions

Solution: Finding P(Z < 0.12)

Standardized normal probability table (portion):

z      F(z)
0.10   0.5398
0.11   0.5438
0.12   0.5478
0.13   0.5517

P(X < 8.6) = P(Z < 0.12) = F(0.12) = 0.5478
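The same lookup can be done with scipy.stats.norm (assuming SciPy is available; note that norm takes the standard deviation, not the variance, as its scale argument):

```python
from scipy.stats import norm

print(norm.cdf(8.6, loc=8.0, scale=5.0))   # P(X < 8.6) ≈ 0.5478
print(norm.cdf(0.12))                      # equivalently, P(Z < 0.12) after standardizing
```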


3. Continuous Distributions

Normal probability distribution

More Examples
For X ∼ N (0, 1), calculate P (Z ≥ 1.13)
For X ∼ N (5, 4), calculate P (−2.5 < X < 10).
For the standard normal random variable Z, find the value of z0 such that:
◮ P(Z > z0) = 0.25
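A sketch of the three calculations with scipy.stats.norm, reading N(5, 4) in the slide's N(μ, σ²) convention, so σ = 2 (the approximate values in the comments are my own):

```python
from scipy.stats import norm

print(norm.sf(1.13))          # P(Z >= 1.13) = 1 - F(1.13) ≈ 0.1292

mu, sigma = 5, 2              # X ~ N(5, 4): mean 5, variance 4, sd 2
print(norm.cdf(10, mu, sigma) - norm.cdf(-2.5, mu, sigma))   # P(-2.5 < X < 10) ≈ 0.9937

print(norm.isf(0.25))         # z0 with P(Z > z0) = 0.25, i.e. z0 ≈ 0.6745
```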


4. Joint Probability Distributions – Bivariate distributions

Dealing with more than one random variable


In practice, we often deal with more than one random variable.
These variables may not be independent of each other.
For example:
◮ height and weight
◮ being poor and height
◮ being poor and educational attainment
In this section, we study the joint distribution of two variables:
bivariate distributions.



4. Joint Probability Distributions – Bivariate distributions

Joint probability function

Definition
Let X and Y be random variables. If both X and Y are discrete, then:

f (x , y ) = P (X = x , Y = y )

is called the joint probability mass function (joint pmf ) of X and Y .



4. Joint Probability Distributions – Bivariate distributions

Joint probability function

Definition
If both X and Y are continuous then f (x , y ) is called the joint
probability density function (joint pdf) of X and Y iff:

P(a ≤ X ≤ b, c ≤ Y ≤ d) = ∫_a^b ∫_c^d f(x, y) dy dx


4. Joint Probability Distributions – Bivariate distributions

Joint probability function


Example
Suppose that there are 8 red balls, 10 yellow balls, and 20 blue
balls in a bucket. A total of 10 balls are randomly selected from
this bucket. Let X = number of red balls and Y = number of
blue balls. Find the joint probability function of the bivariate
random variable (X , Y ).
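Because the 10 balls are drawn without replacement from 8 red, 10 yellow, and 20 blue balls, the joint pmf is f(x, y) = C(8, x) C(20, y) C(10, 10 − x − y) / C(38, 10) on the feasible pairs (x, y). A sketch of this function (the helper name is my own, illustrative choice):

```python
from math import comb

RED, YELLOW, BLUE, DRAWS = 8, 10, 20, 10     # 38 balls in total, 10 drawn

def joint_pmf(x, y):
    """f(x, y) = P(X = x red and Y = y blue) in a draw of 10 without replacement."""
    yellow = DRAWS - x - y                   # the remaining balls must be yellow
    if min(x, y, yellow) < 0 or x > RED or y > BLUE or yellow > YELLOW:
        return 0.0
    return comb(RED, x) * comb(BLUE, y) * comb(YELLOW, yellow) / comb(RED + YELLOW + BLUE, DRAWS)

# sanity check: the probabilities over all feasible (x, y) sum to 1
print(sum(joint_pmf(x, y) for x in range(DRAWS + 1) for y in range(DRAWS + 1)))
```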



4. Joint Probability Distributions – Bivariate distributions

Joint probability function

Theorem
If X and Y are two random variables with joint probability
function f (x , y ), then:
f(x, y) ≥ 0 for all x and y
If X and Y are discrete, then Σ_{x,y} f(x, y) = 1,
where the sum is over all values (x, y) that are assigned nonzero probabilities.
If X and Y are continuous, then

∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1.


4. Joint Probability Distributions – Bivariate distributions

Marginal distribution function

Suppose that we are given the joint probability function (pdf or pmf )*
We can obtain the probability distribution function of one of the
components through the marginals.

* Probability mass functions (pmf) describe discrete probability distributions, while probability density functions (pdf) describe continuous probability distributions.
4. Joint Probability Distributions – Bivariate distributions

Marginal distribution function

Definition
The marginal pmf or pdf of X, f_X(x), is defined by:

f_X(x) = ∫_{−∞}^{∞} f(x, y) dy     if X, Y are continuous
       = Σ_{all y} f(x, y)         if X, Y are discrete

similarly, f_Y(y) is defined by:

f_Y(y) = ∫_{−∞}^{∞} f(x, y) dx     if X, Y are continuous
       = Σ_{all x} f(x, y)         if X, Y are discrete


4. Joint Probability Distributions – Bivariate distributions

Marginal distribution function


Example
Find the marginal probability density function of random variable
X and Y
          y = −2    y = 0    y = 1    y = 4
x = −1      0.2       0.1      0        0.2
x = 3       0.1       0.2      0.1      0
x = 5       0.1       0        0        0
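A short sketch that reads the marginals off this table (the dictionary simply reproduces the entries above; results are subject to floating-point rounding):

```python
joint = {                                   # f(x, y) from the table above
    (-1, -2): 0.2, (-1, 0): 0.1, (-1, 1): 0.0, (-1, 4): 0.2,
    ( 3, -2): 0.1, ( 3, 0): 0.2, ( 3, 1): 0.1, ( 3, 4): 0.0,
    ( 5, -2): 0.1, ( 5, 0): 0.0, ( 5, 1): 0.0, ( 5, 4): 0.0,
}
xs = sorted({x for x, _ in joint})
ys = sorted({y for _, y in joint})

f_X = {x: sum(joint[x, y] for y in ys) for x in xs}   # marginal pmf of X
f_Y = {y: sum(joint[x, y] for x in xs) for y in ys}   # marginal pmf of Y
print(f_X)   # approximately {-1: 0.5, 3: 0.4, 5: 0.1}
print(f_Y)   # approximately {-2: 0.4, 0: 0.3, 1: 0.1, 4: 0.2}
```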


4. Joint Probability Distributions – Bivariate distributions

Conditional probability distribution

The conditional probability distribution of the random variable X


given Y is given by:

f(x | y) = f(x | Y = y) = f(x, y) / f_Y(y)              if X, Y are continuous
                        = P(X = x, Y = y) / f_Y(y)      if X, Y are discrete


4. Joint Probability Distributions – Bivariate distributions

Conditional probability distribution

Example
Let

f(x, y) = (1/5)(3x − y)   for 1 ≤ x ≤ 2, 1 ≤ y ≤ 3
        = 0               otherwise
Find:
◮ fX (x ) and fY (y )
◮ f (x | y )
◮ f (x | Y = 1)
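These integrals can be done symbolically; a sketch with SymPy (assuming it is installed; the commented results are my own calculations):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Rational(1, 5) * (3*x - y)        # joint pdf on 1 <= x <= 2, 1 <= y <= 3

f_X = sp.integrate(f, (y, 1, 3))         # marginal of X: (6x - 4)/5 on [1, 2]
f_Y = sp.integrate(f, (x, 1, 2))         # marginal of Y: (9/2 - y)/5 on [1, 3]
f_x_given_y = sp.simplify(f / f_Y)       # conditional density f(x | y)
f_x_given_1 = sp.simplify(f_x_given_y.subs(y, 1))   # f(x | Y = 1) = 2(3x - 1)/7

print(f_X, f_Y, f_x_given_1)
```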



4. Joint Probability Distributions – Bivariate distributions

Expected Value

Definition
Let f(x, y) be the joint probability function. The expected value of the product XY is:

E[XY] = Σ_{x,y} x y f(x, y)                         if X, Y are discrete
      = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x y f(x, y) dx dy     if X, Y are continuous


4. Joint Probability Distributions – Bivariate distributions

Properties of Expected Value

Let X and Y be two random variables. Then:


◮ E [aX + bY ] = aE [X ] + bE [Y ]
◮ If X and Y are independent, then

E [XY ] = E [X ] E [Y ]



4. Joint Probability Distributions – Bivariate distributions

Conditional Expectation

Definition
Let X and Y be jointly distributed with pmf or pdf f (x , y ).
Then, the conditional expectation of X given Y = y is:

E[X | Y = y] = Σ_x x f(x | y)                 if X, Y are discrete
             = ∫_{−∞}^{∞} x f(x | y) dx       if X, Y are continuous


4. Joint Probability Distributions – Bivariate distributions

Conditional Expectation
Example

Let

f(x, y) = (1/5)(3x − y)   for 1 ≤ x ≤ 2, 1 ≤ y ≤ 3
        = 0               otherwise
Find:
◮ E [X | Y = 1]
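Continuing the SymPy sketch from the conditional-distribution example, the conditional expectation is a single integral; my own calculation gives 11/7 ≈ 1.57.

```python
import sympy as sp

x = sp.symbols('x')
f_x_given_1 = sp.Rational(2, 7) * (3*x - 1)            # f(x | Y = 1), derived above
E_X_given_1 = sp.integrate(x * f_x_given_1, (x, 1, 2))
print(E_X_given_1)                                     # 11/7
```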



4. Joint Probability Distributions – Bivariate distributions

Covariance and Correlation

Definition
The covariance between two random variables X and Y is defined by:

σ_XY = Cov(X, Y) = E[(X − μ_X)(Y − μ_Y)] = E[XY] − μ_X μ_Y

where μ_X = E(X) and μ_Y = E(Y). The correlation coefficient is defined by:

ρ_XY = Cov(X, Y) / √(Var(X) Var(Y))
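As an illustration, these formulas can be applied to the discrete joint table from the marginal-distribution example earlier in this section (the approximate values in the comments are my own calculations):

```python
from math import sqrt

joint = {(-1, -2): 0.2, (-1, 0): 0.1, (-1, 1): 0.0, (-1, 4): 0.2,
         ( 3, -2): 0.1, ( 3, 0): 0.2, ( 3, 1): 0.1, ( 3, 4): 0.0,
         ( 5, -2): 0.1, ( 5, 0): 0.0, ( 5, 1): 0.0, ( 5, 4): 0.0}

E = lambda g: sum(g(x, y) * p for (x, y), p in joint.items())   # E[g(X, Y)]

mu_x, mu_y = E(lambda x, y: x), E(lambda x, y: y)
cov = E(lambda x, y: x * y) - mu_x * mu_y              # Cov(X, Y) ≈ -1.82
var_x = E(lambda x, y: x**2) - mu_x**2                 # Var(X) ≈ 5.16
var_y = E(lambda x, y: y**2) - mu_y**2                 # Var(Y) ≈ 4.89
rho = cov / sqrt(var_x * var_y)                        # correlation ≈ -0.36
print(cov, rho)
```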



4. Joint Probability Distributions – Bivariate distributions

Covariance

If small values of X (X − μ_X < 0) tend to occur with small values of Y (Y − μ_Y < 0), then the covariance is positive.
If small values of X (X − μ_X < 0) tend to occur with large values of Y (Y − μ_Y > 0), then the covariance is negative.
Covariance is a signed measure of how X and Y vary together.
If X and Y are independent, then Cov(X, Y) = 0.


4. Joint Probability Distributions – Bivariate distributions

Correlation

Correlation measures the linear relationship between the random variables X and Y. If Y = aX + b with a ≠ 0, then |ρ(X, Y)| = 1 (ρ = 1 if a > 0, ρ = −1 if a < 0).
Unlike covariance, the correlation coefficient of X and Y is dimensionless.


4. Joint Probability Distributions – Bivariate distributions
Properties of Covariance and Correlation
The properties of the covariance and correlation coefficient:
◮ −1 ≤ ρ ≤ 1
◮ If X and Y are independent, then ρ = 0.
◮ If Y = aX + b, then:

  ρ_XY = 1    if a > 0
       = −1   if a < 0

◮ If U = a₁X + b₁ and V = a₂Y + b₂, then:

  Cov(U, V) = a₁a₂ Cov(X, Y)

  and

  ρ_UV = ρ_XY     if a₁a₂ > 0
       = −ρ_XY    otherwise

◮ Var(aX + bY) = a²Var(X) + b²Var(Y) + 2ab Cov(X, Y)
4. Joint Probability Distributions – Multivariate distributions

Sums of random variables

Let X₁, X₂, …, X_k be k random variables with means μ₁, μ₂, …, μ_k and variances σ₁², σ₂², …, σ_k². Then:

E(X₁ + X₂ + … + X_k) = μ₁ + μ₂ + … + μ_k


4. Joint Probability Distributions – Multivariate distributions

Sums of random variables

Suppose that X₁, X₂, …, X_k are independent random variables. Then:

Var(X₁ + X₂ + … + X_k) = σ₁² + σ₂² + … + σ_k²

If X₁, X₂, …, X_k are not independent random variables, then:

Var(X₁ + X₂ + … + X_k) = σ₁² + σ₂² + … + σ_k² + 2 Σ_{i=1}^{k−1} Σ_{j=i+1}^{k} Cov(X_i, X_j)
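A small Monte-Carlo check of these formulas for two correlated normal variables; the means, variances, and covariance below are arbitrary illustrative choices, not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
mean = [1.0, 2.0]
cov = [[2.0, 0.8],            # Var(X1) = 2, Var(X2) = 3, Cov(X1, X2) = 0.8
       [0.8, 3.0]]

samples = rng.multivariate_normal(mean, cov, size=200_000)
s = samples[:, 0] + samples[:, 1]

print(s.mean())   # close to mu1 + mu2 = 3
print(s.var())    # close to sigma1^2 + sigma2^2 + 2*Cov = 2 + 3 + 1.6 = 6.6
```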

