Construction of a Random Variable with a Given Distribution Function
We now suppose that (Ω, F, P) is a probability space and that X : (Ω, F, P) → (R, B) is a
random variable, so that X⁻¹(B) ∈ F for all B ∈ B. The law of X, namely PX = P ◦ X⁻¹, is then
a probability measure on (R, B), and it is characterized by the distribution function of X, which
is given by FX(x) = PX((−∞, x]) for x ∈ R.
The goal of these notes is to show a partial converse, namely that to every distribution
function F there exists a random variable X with FX = F . In particular, this random
variable will be constructed on the uniform probability space (Ω, F, P) = ([0, 1], B1 , unif).
As such, we often call ([0, 1], B1 , unif) the canonical probability space.
For the remainder of these notes, let (Ω, F, P) = ([0, 1], B1 , unif). Suppose that we define
the function U : Ω → Ω by setting U (ω) = ω so that U is the identity function on Ω.
However, since Ω = [0, 1], we can also view U as a function U : Ω → R. It now follows that
U : ([0, 1], B1, unif) → (R, B) is a random variable since U⁻¹(B) = B ∩ [0, 1] ∈ B1 for any
B ∈ B. We call U a uniform [0, 1] random variable.
We will now compute FU (x), the distribution function of U . If 0 ≤ x ≤ 1, then
P ({ω ∈ Ω : U (ω) ≤ x}) = P ({ω ∈ [0, 1] : ω ≤ x}) = unif([0, x]) = x − 0 = x,
if x < 0, then
P ({ω ∈ Ω : U (ω) ≤ x}) = P ({ω ∈ [0, 1] : ω ≤ x}) = unif(∅) = 0,
and if x > 1, then
P ({ω ∈ Ω : U (ω) ≤ x}) = P ({ω ∈ [0, 1] : ω ≤ x}) = unif([0, 1]) = 1 − 0 = 1.
To summarize, if x ∈ R, then
FU(x) = P({ω ∈ Ω : U(ω) ≤ x}) = 0 if x < 0, x if 0 ≤ x ≤ 1, and 1 if x > 1.    (1)
Note that we sometimes call FU the uniform [0, 1] distribution function or the distribution
function of a uniform [0, 1] random variable.
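As a quick numerical sanity check of (1) (a minimal Python sketch added here, not part of the
original notes; the use of NumPy and the particular seed are choices of the example), one can
compare the proportion of simulated uniform draws falling below x with FU(x):

    import numpy as np

    rng = np.random.default_rng(0)             # seed chosen only for reproducibility
    u = rng.uniform(0.0, 1.0, size=100_000)    # simulated draws of U, a uniform [0, 1] random variable

    for x in [-0.5, 0.0, 0.25, 0.5, 0.75, 1.0, 1.5]:
        empirical = np.mean(u <= x)            # proportion of draws with U <= x
        exact = min(max(x, 0.0), 1.0)          # F_U(x) from (1)
        print(f"x = {x:5.2f}  empirical = {empirical:.4f}  F_U(x) = {exact:.4f}")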
Suppose now that F : R → [0, 1] is any distribution function. We will use U to construct
the random variable X on (Ω, F, P) with FX = F . In order to motivate the construction,
consider the particular case when F : R → [0, 1] is continuous and strictly increasing. For
such a function F, we know that F⁻¹ is also continuous and strictly increasing.
Define X : Ω → R by setting X(ω) = F⁻¹(U(ω)) = F⁻¹(ω). We then compute
FX(x) = P({ω ∈ Ω : X(ω) ≤ x}) = P({ω ∈ Ω : F⁻¹(ω) ≤ x}) = P({ω ∈ Ω : ω ≤ F(x)}) = F(x)
for any x ∈ R where the third equality uses the fact that F is strictly increasing and
continuous, and the fourth equality follows from (1). In other words, FX = F as required.
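As a concrete instance of this construction (a minimal Python sketch added here, not part of the
original notes; the choice of the logistic distribution function F(x) = 1/(1 + e^(−x)), whose inverse
is F⁻¹(u) = log(u/(1 − u)), is an assumption of the example), applying F⁻¹ to uniform draws
produces samples whose empirical distribution function matches F:

    import numpy as np

    rng = np.random.default_rng(1)
    u = rng.uniform(0.0, 1.0, size=100_000)    # draws of U, a uniform [0, 1] random variable
    x = np.log(u / (1.0 - u))                  # X = F^{-1}(U) for the logistic F(x) = 1/(1 + exp(-x))

    for t in [-1.0, 0.0, 1.0]:
        target = 1.0 / (1.0 + np.exp(-t))      # F(t)
        print(f"P(X <= {t}) ≈ {np.mean(x <= t):.4f}   F({t}) = {target:.4f}")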
In the case that the distribution function F is neither strictly increasing nor continuous, the
same type of result holds, provided that one is careful and chooses a single-valued inverse
of F.
Theorem 1. If F : R → [0, 1] is any distribution function, then the function X : [0, 1] → R
defined by
X(ω) = inf{x : F(x) ≥ ω}
for 0 ≤ ω ≤ 1 is a random variable on ([0, 1], B1, unif) whose distribution function satisfies
FX(x) = F(x) for all x ∈ R.
Proof. Suppose that X(ω) = inf{x : F(x) ≥ ω} for 0 ≤ ω ≤ 1. Our goal is to compute
FX(x) = P({ω ∈ Ω : X(ω) ≤ x})
for x ∈ R and show FX = F. Since X is an increasing function on [0, 1], we see that the
event A = Ax = {ω ∈ Ω : X(ω) ≤ x} is necessarily an interval with endpoints 0 and sup A;
in particular, A ∈ B1, so X is indeed a random variable. Therefore, since P = unif, so that
the probability of an interval contained in [0, 1] is just its length, we conclude
FX(x) = P(A) = sup A − 0 = sup A.
It remains to show that sup A = F(x). If ω ∈ A, then X(ω) ≤ x, and since F is increasing
and right-continuous we have F(X(ω)) ≥ ω (choose xn decreasing to X(ω) with F(xn) ≥ ω,
so that F(X(ω)) = lim F(xn) ≥ ω) as well as F(x) ≥ F(X(ω)). Hence
F(x) ≥ F(X(ω)) ≥ ω
so that F(x) is an upper bound of A. However, we know that X(F(x)) ≤ x since x itself belongs
to the set {y : F(y) ≥ F(x)}, so that F(x) ∈ A. This implies that F(x) = sup A, and therefore
FX(x) = P(A) = sup A = F(x) as required.
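As a simple illustration of Theorem 1 (a worked example added here, not taken from the original
notes), let 0 < p < 1 and let F be the Bernoulli(p) distribution function: F(x) = 0 for x < 0,
F(x) = 1 − p for 0 ≤ x < 1, and F(x) = 1 for x ≥ 1. Then X(ω) = inf{x : F(x) ≥ ω} = 0 for
0 < ω ≤ 1 − p and X(ω) = 1 for 1 − p < ω ≤ 1, so that P(X = 0) = unif((0, 1 − p]) = 1 − p and
P(X = 1) = unif((1 − p, 1]) = p. That is, X is a Bernoulli(p) random variable, even though F is
neither continuous nor strictly increasing.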
A few remarks are in order. Note that since the uniform random variable U satisfies U(ω) = ω,
an equivalent version of Theorem 1 is the following: if U is a uniform [0, 1] random variable
and F is any distribution function, then X = inf{x : F(x) ≥ U} is a random variable with
FX = F.
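This equivalent version is what one implements in practice. The sketch below (Python, added
here and not part of the original notes; the particular discrete distribution is a made-up example)
evaluates X = inf{x : F(x) ≥ U} for a purely discrete F by locating the first jump point at which
the cumulative probability reaches U:

    import numpy as np

    values = np.array([0.0, 1.0, 3.0])    # hypothetical jump points of F
    probs = np.array([0.25, 0.5, 0.25])   # hypothetical probabilities P(X = values[i])
    cum = np.cumsum(probs)                # F evaluated at the jump points: 0.25, 0.75, 1.0

    def quantile(omega):
        """X(omega) = inf{x : F(x) >= omega} for this discrete F."""
        # index of the first jump point at which the cumulative probability reaches omega
        idx = np.searchsorted(cum, omega, side="left")
        return values[idx]

    rng = np.random.default_rng(2)
    omega = rng.uniform(0.0, 1.0, size=100_000)   # draws of U, a uniform [0, 1] random variable
    x = quantile(omega)

    for v, p in zip(values, probs):
        print(f"P(X = {v}) ≈ {np.mean(x == v):.4f}   (target {p})")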
Remark. Thanks to advances in computing power, this so-called inversion sampling method
is so fast that it is the default method used by the software package R for generating normal
random variables. See
https://stat.ethz.ch/R-manual/R-devel/library/base/html/Random.html
for details.
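To make the remark concrete (a minimal Python sketch, added here and not part of the original
notes or of R's internal code; it relies on SciPy's norm.ppf, the standard normal quantile function),
inversion sampling of a normal random variable simply applies the normal quantile function to a
uniform draw:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    u = rng.uniform(0.0, 1.0, size=100_000)  # draws of U, a uniform [0, 1] random variable
    z = norm.ppf(u)                          # Z = F^{-1}(U), F the standard normal distribution function

    # The sample mean and standard deviation should be close to 0 and 1, respectively.
    print(f"mean ≈ {z.mean():.3f}, std ≈ {z.std():.3f}")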