Unit 1 Multivariate Analysis Lecture Notes

The lecture notes on Multivariate Analysis cover key concepts such as multivariate distributions, properties of multivariate normal distributions, and the structure of multivariate data. They aim to equip students with the ability to distinguish between singular and non-singular matrices, understand the implications of multivariate normal distributions, and compute mean and variance-covariance matrices. The document includes definitions, theorems, and examples to facilitate understanding of the subject matter.

NSTC32: Multivariate Analysis
M.Sc. Statistics, III Semester

Multivariate Analysis: Unit 1


Lecture Notes
Dr K MANOJ,
Assistant Professor, Department of Statistics,
Manonmaniam Sundaranar University, Tirunelveli - 12, Tamilnadu, India
July 26, 2020

Contents

1 What is Multivariate Analysis?
2 Multiple measurement, or observation, as row or column vector
3 Multivariate Distributions
4 Structure of Multivariate Data
5 Singular and Non-Singular
6 Multivariate normal distributions and their properties
7 Properties of the Multivariate Normal Distribution
8 Marginal and conditional distributions
9 Characteristic function and moments
10 Distribution of linear combinations of multivariate normal vector
11 Mean Vector and Covariance Matrix
12 Solved Exercises
13 Sample Multiple Choice Questions
14 Answer the following questions

Copyright © 2020
This lecture note contains material collected from various books, lectures, and online sources, which are cited in the reference section. This document may not be reproduced or republished under any circumstances; it is open access for everyone and is provided solely for students' study purposes.

Lecture Notes on Multivariate Analysis 1 Prepared & Documented using LATEX by Dr K Manoj

Objectives

Upon completion of this lesson, you should be able to:


• Distinguish between singular and non-singular matrices.
• Understand how the shape of the multivariate normal distribution depends on the variances and covariances.
• Understand the definition of the multivariate normal distribution.
• Know the properties of the multivariate normal distribution.
• Determine the mean vector and variance-covariance matrix of the multivariate normal distribution.

1 What is Multivariate Analysis?


Multivariate statistical analysis is concerned with data that consist of sets of measurements on a number of individuals or objects.

• When researchers conduct a survey or experiment there are often many variables of interest.
• Multivariate analysis is a branch of statistics concerned with the analysis of multiple measurements, made on one
or several samples of individuals. For example, we may wish to measure length, width, and weight of a product.

Definition: Multivariate analysis


Multivariate analysis is a set of techniques used for analysis of data that contain more than one variable.

• There is always more than one side to the problem you are trying to solve. It’s the same in your data.
• Multivariate analysis provides a more accurate view of the behavior between variables that are highly correlated,
and can detect potential problems in a product or process.
• Many decisions are based on univariate analysis, but only multivariate analysis reveals relationships that help you
detect problems that are not obvious by looking at the variables individually.

2 Multiple measurement, or observation, as row or column vector


A multiple measurement or observation may be expressed as

x = [4  2  0.6]

referring to the physical properties of length, width, and weight, respectively.

The collection of measurements on x is called a vector. In this case it is a row vector. We could also have written x as the column vector

x = [4  2  0.6]'

where ' denotes the transpose.


3 Multivariate Distributions
• A multivariate distribution describes the underlying random structure of a vector of random variables.
• From it we can derive marginal properties of the individual variables.
• It also describes relationships between variables or groups of variables.
• As in much of statistics, we are generally interested in making inferences about this distribution based on a sample.

Figure 1: Normal curve with n = 1000000, µ = 0, σ = 1

4 Structure of Multivariate Data


• Suppose that we have measurements on p variables for each of n experimental units.
• We will use xij to denote the observed value of the jth variable (j = 1, ..., p) on the ith unit (i = 1, ..., n).
• We will typically gather the information into an n × p matrix:

X = [ x11  x12  ...  x1j  ...  x1p ]
    [ x21  x22  ...  x2j  ...  x2p ]
    [  .    .         .         .  ]
    [ xi1  xi2  ...  xij  ...  xip ]
    [  .    .         .         .  ]
    [ xn1  xn2  ...  xnj  ...  xnp ]

5 Singular and Non-Singular


If a matrix A is square and of full rank, then A is said to be non-singular, and A has a unique inverse, denoted by A^(−1), with the property that

A A^(−1) = A^(−1) A = I

If A is square and of less than full rank, then an inverse does not exist, and A is said to be singular.

Example (Singular)
Let

A = [ 1  2 ]
    [ 1  2 ]

|A| = (1 × 2) − (2 × 1) = 2 − 2 = 0

Matrix A is said to be singular because its determinant is equal to zero.


Example (Non-Singular)

Let

A = [ 1  2 ]
    [ 3  2 ]

Solution

|A| = (1 × 2) − (2 × 3) = 2 − 6 = −4

Matrix A is said to be non-singular because its determinant is −4, which is not equal to zero.
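The two example matrices can be checked numerically. The sketch below (not part of the original notes) uses NumPy to compute the determinants and, for the non-singular matrix, to verify the defining property A A^(−1) = I:

```python
import numpy as np

# The two example matrices from this section.
A_singular = np.array([[1.0, 2.0],
                       [1.0, 2.0]])
A_nonsingular = np.array([[1.0, 2.0],
                          [3.0, 2.0]])

# det(A) = 0 means A is singular and has no inverse.
print(np.linalg.det(A_singular))      # 0.0 (up to floating-point rounding)
print(np.linalg.det(A_nonsingular))   # -4.0 (up to floating-point rounding)

# For the non-singular matrix, A @ inv(A) recovers the identity.
A_inv = np.linalg.inv(A_nonsingular)
print(np.allclose(A_nonsingular @ A_inv, np.eye(2)))  # True
```

Attempting `np.linalg.inv(A_singular)` would raise a `LinAlgError`, matching the statement that no inverse exists.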

6 Multivariate normal distributions and their properties


A multivariate normal distribution is the distribution of a vector of normally distributed variables such that any linear combination of the variables is also normally distributed.
It is most useful in extending the central limit theorem to multiple variables, but it also has applications in Bayesian inference and hence machine learning, where the multivariate normal distribution is used to approximate the features of some characteristics; for instance, in detecting faces in pictures.

Definition of multivariate normal distribution


A p-dimensional vector of random variables X = (X1, X2, ..., Xp)', −∞ < Xi < ∞, i = 1, ..., p, is said to have a multivariate normal distribution if its density function f(X) is of the form

f(X) = f(X1, X2, ..., Xp) = (2π)^(−p/2) |Σ|^(−1/2) exp[ −(1/2) (X − m)' Σ^(−1) (X − m) ]

where m = (m1, ..., mp)' is the vector of means and Σ is the variance-covariance matrix of the multivariate normal distribution. The shortcut notation for this density is X ~ Np(m, Σ).
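The density formula above can be implemented directly. The sketch below (the values of m and Σ are illustrative, not taken from the notes) evaluates the density on a bivariate example and checks that it integrates to 1 over the plane:

```python
import numpy as np

def mvn_pdf(x, m, Sigma):
    """Density of Np(m, Sigma) at x, straight from the closed-form formula."""
    p = len(m)
    dev = x - m
    quad = dev @ np.linalg.inv(Sigma) @ dev
    norm_const = (2 * np.pi) ** (-p / 2) * np.linalg.det(Sigma) ** -0.5
    return norm_const * np.exp(-0.5 * quad)

# Bivariate example (illustrative parameter values).
m = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])

# Sanity check: a density must integrate to 1; approximate the double
# integral with a Riemann sum over a grid that covers the bulk of the mass.
grid = np.linspace(-8, 8, 161)
dx = grid[1] - grid[0]
total = sum(mvn_pdf(np.array([u, v]), m, Sigma) for u in grid for v in grid) * dx**2
print(round(total, 3))  # approximately 1.0
```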

7 Properties of the Multivariate Normal Distribution


The multivariate normal density is a generalization of the univariate normal density to p ≥ 2.

Figure 2: A normal density with mean µ and variance σ 2 and selected area under curve

A plot of this function yields the familiar bell-shaped curve shown in Figure 2. Also shown in the figure are the approximate areas under the curve within ±1 and ±2 standard deviations of the mean. These areas represent probabilities; thus, for the normal random variable X,

P(µ − σ ≤ X ≤ µ + σ) ≈ 0.68
P(µ − 2σ ≤ X ≤ µ + 2σ) ≈ 0.95


The following are true for a random vector X having a multivariate normal distribution:

• Linear combinations of the components of X are normally distributed.
• All subsets of the components of X have a (multivariate) normal distribution.
• Zero covariance implies that the corresponding components are independently distributed.
• The conditional distributions of the components are (multivariate) normal.

8 Marginal and conditional distributions


     
Assume an n-dimensional random vector

x = [ x1 ]
    [ x2 ]

has a normal distribution N(µ, Σ) with

µ = [ µ1 ]    and    Σ = [ Σ11  Σ12 ]
    [ µ2 ]               [ Σ21  Σ22 ]

where x1 and x2 are two subvectors of respective dimensions p and q with p + q = n. Note that Σ = Σ', and Σ21 = Σ12'.

Theorem
Part A: The marginal distributions of x1 and x2 are also normal, with mean vector µi and covariance matrix Σii (i = 1, 2), respectively.
Part B: The conditional distribution of xi given xj is also normal, with mean vector

µ_{i|j} = µi + Σij Σjj^(−1) (xj − µj)

and covariance matrix

Σ_{i|j} = Σii − Σij Σjj^(−1) Σji
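Part B can be illustrated numerically in the bivariate case, where the blocks are scalars. The parameter values below are hypothetical, chosen only to exercise the two formulas:

```python
# Partitioned bivariate normal: x = (x1, x2) with illustrative parameters.
mu1, mu2 = 1.0, 2.0
s11, s12, s22 = 2.0, 0.8, 1.5   # Sigma = [[s11, s12], [s12, s22]]

# Conditional distribution of x1 given x2 = x2_obs, per the theorem:
#   mean     mu_{1|2}    = mu1 + s12 * s22^(-1) * (x2_obs - mu2)
#   variance sigma_{1|2} = s11 - s12 * s22^(-1) * s12
x2_obs = 3.0
cond_mean = mu1 + s12 / s22 * (x2_obs - mu2)
cond_var = s11 - s12 / s22 * s12

print(cond_mean)  # about 1.533: observing x2 above its mean pulls x1 upward
print(cond_var)   # about 1.573: always smaller than the marginal variance s11
```

Note that the conditional variance does not depend on the observed value x2_obs, a special feature of the normal distribution.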
Marginal Distributions
Given the cdf of two random variables X, Y as F(x, y), the marginal cdf of X is

Pr{X ≤ x} = Pr{X ≤ x, Y ≤ ∞} = F(x, ∞)   (1)

Let this be F(x). Clearly

F(x) = ∫_{−∞}^{x} ∫_{−∞}^{∞} f(u, v) dv du   (2)

We call

f(u) = ∫_{−∞}^{∞} f(u, v) dv   (3)

the marginal density of X. Then (2) becomes

F(x) = ∫_{−∞}^{x} f(u) du   (4)

In a similar fashion we define G(y), the marginal cdf of Y, and g(y), the marginal density of Y.
Now we turn to the general case. Given F(x1, ..., xp) as the cdf of X1, ..., Xp, we wish to find the marginal cdf of some subset of X1, ..., Xp, say X1, ..., Xr (r < p). It is

Pr(X1 ≤ x1, ..., Xr ≤ xr) = Pr[X1 ≤ x1, ..., Xr ≤ xr, Xr+1 ≤ ∞, ..., Xp ≤ ∞] = F(x1, ..., xr, ∞, ..., ∞)   (5)

The marginal density of X1, ..., Xr is

∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} f(x1, ..., xr, u_{r+1}, ..., u_p) du_{r+1} ··· du_p   (6)


The marginal distribution and density of any other subset of X1, ..., Xp are obtained in a similar fashion. The joint moments of a subset of variates can be computed from the marginal distribution; for example,

E[X1^{h1} ··· Xr^{hr}] = E[X1^{h1} ··· Xr^{hr} X_{r+1}^{0} ··· Xp^{0}]
  = ∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} x1^{h1} ··· xr^{hr} f(x1, ..., xp) dx1 ··· dxp
  = ∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} x1^{h1} ··· xr^{hr} [ ∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} f(x1, ..., xp) dx_{r+1} ··· dx_p ] dx1 ··· dxr   (7)

Conditional Distributions
If A and B are two events such that the probability of A and B occurring simultaneously is P(AB) and the probability of B occurring is P(B) > 0, then the conditional probability of A occurring given that B has occurred is P(AB)/P(B). Suppose the event A is X falling in the interval [x1, x2] and the event B is Y falling in [y1, y2]. Then the conditional probability that X falls in [x1, x2], given that Y falls in [y1, y2], is

Pr{x1 ≤ X ≤ x2 | y1 ≤ Y ≤ y2} = Pr{x1 ≤ X ≤ x2, y1 ≤ Y ≤ y2} / Pr{y1 ≤ Y ≤ y2}
  = [ ∫_{x1}^{x2} ∫_{y1}^{y2} f(u, v) dv du ] / [ ∫_{y1}^{y2} g(v) dv ]   (8)

Now let y1 = y, y2 = y + ∆y. Then for a continuous density,

∫_{y}^{y+∆y} g(v) dv = g(y*) ∆y   (9)

where y ≤ y* ≤ y + ∆y. Also

∫_{y}^{y+∆y} f(u, v) dv = f[u, y*(u)] ∆y   (10)

where y ≤ y*(u) ≤ y + ∆y. Therefore,

Pr{x1 ≤ X ≤ x2 | y ≤ Y ≤ y + ∆y} = ∫_{x1}^{x2} f[u, y*(u)] / g(y*) du   (11)

It will be noticed that for fixed y and ∆y (> 0), the integrand of (11) behaves as a univariate density function. Now for y such that g(y) > 0, we define Pr(x1 ≤ X ≤ x2 | Y = y), the probability that X lies between x1 and x2 given that Y is y, as the limit of (11) as ∆y → 0. Thus

Pr{x1 ≤ X ≤ x2 | Y = y} = ∫_{x1}^{x2} f(u | y) du   (12)

where f(u | y) = f(u, y)/g(y). For given y, f(u | y) is a density function and is called the conditional density of X given y. We note that if X and Y are independent, f(x | y) = f(x).
In the general case of X1, ..., Xp with cdf F(x1, ..., xp), the conditional density of X1, ..., Xr, given X_{r+1} = x_{r+1}, ..., Xp = xp, is

f(x1, ..., xr | x_{r+1}, ..., xp) = f(x1, ..., xp) / [ ∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} f(u1, ..., ur, x_{r+1}, ..., xp) du1 ··· dur ]   (13)

9 Characteristic function and moments


We have defined the moment generating function MX(s) for real values of s, and noted that it may be infinite for some values of s. In particular, if MX(s) = ∞ for every s ≠ 0, then the moment generating function does not provide enough information to determine the distribution of X. Consider, for example, a PDF of the form fX(x) = c/(1 + x²), where c is a suitable normalizing constant; its mgf is infinite for every s ≠ 0. A way out of this difficulty is to consider complex values of s, and in particular the case where s is a purely imaginary number: s = it, where i = √−1 and t ∈ R. The resulting function is called the characteristic function, formally defined by

φX(t) = E[e^{itX}]

When X is a continuous random variable with PDF f, we have φX(t) = ∫ e^{itx} f(x) dx.

The characteristic function of a multivariate normal distribution has a form similar to the density function.


Definition
The characteristic function of a random vector X is
φ(t) = E[e^{it'X}]   (14)

defined for every real vector t.

The Moments
The moments of X1, ..., Xp with a joint normal distribution can be obtained from the characteristic function (14). The mean is

E[Xh] = (1/i) ∂φ/∂t_h |_{t=0} = (1/i) [ −Σ_j σ_hj t_j + iµ_h ] φ(t) |_{t=0} = µ_h   (15)

The second moment is

E[Xh Xj] = (1/i²) ∂²φ/∂t_h ∂t_j |_{t=0}
  = (1/i²) [ ( −Σ_k σ_hk t_k + iµ_h )( −Σ_k σ_kj t_k + iµ_j ) − σ_hj ] φ(t) |_{t=0}
  = σ_hj + µ_h µ_j   (16)

Thus

Var(Xi) = E(Xi − µi)² = σ_ii   (17)
Cov(Xi, Xj) = E(Xi − µi)(Xj − µj) = σ_ij   (18)

Any third moment about the mean is

E(Xi − µi)(Xj − µj)(Xk − µk) = 0   (19)

The fourth moment about the mean is

E(Xi − µi)(Xj − µj)(Xk − µk)(Xl − µl) = σ_ij σ_kl + σ_ik σ_jl + σ_il σ_jk   (20)

Every moment of odd order about the mean is 0.

10 Distribution of linear combinations of multivariate normal vector


Result 1: If X is distributed as Np(µ, Σ), then any linear combination of variables a'X = a1X1 + a2X2 + ··· + apXp is distributed as N(a'µ, a'Σa). Also, if a'X is distributed as N(a'µ, a'Σa) for every a, then X must be Np(µ, Σ).
Proof: Proving that a'X is normally distributed if X is multivariate normal is the more difficult direction; here we illustrate the result with an example.

Example: Consider the linear combination a'X of a multivariate random vector determined by the choice a' = [1, 0, ..., 0]. Since

a'X = [1, 0, ..., 0] (X1, X2, ..., Xp)' = X1

and

a'µ = [1, 0, ..., 0] (µ1, µ2, ..., µp)' = µ1


we have

a'Σa = [1, 0, ..., 0] [ σ11  σ12  ···  σ1p ] [ 1 ]
                      [ σ12  σ22  ···  σ2p ] [ 0 ]
                      [  .    .          .  ] [ . ]
                      [ σ1p  σ2p  ···  σpp ] [ 0 ]   = σ11

and it follows from Result 1 that X1 is distributed as N(µ1, σ11). More generally, the marginal distribution of any component Xi of X is N(µi, σii).
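Result 1 can be exercised numerically. The sketch below (the values of µ and Σ are illustrative, not from the notes) computes a'µ and a'Σa first for a = e1, recovering the marginal parameters of X1, and then for a general a:

```python
import numpy as np

# Mean vector and covariance matrix of a trivariate normal (illustrative values).
mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.2],
                  [0.5, 0.2, 2.0]])

# a = e1 picks out the first component, so a'X = X1 ~ N(mu_1, sigma_11).
e1 = np.array([1.0, 0.0, 0.0])
print(e1 @ mu)           # 1.0  (= mu_1)
print(e1 @ Sigma @ e1)   # 4.0  (= sigma_11)

# A general linear combination a'X ~ N(a'mu, a' Sigma a).
a = np.array([1.0, -1.0, 2.0])
print(a @ mu, a @ Sigma @ a)
```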

11 Mean Vector and Covariance Matrix


The first step in analyzing multivariate data is computing the mean vector and the variance-covariance matrix.
Sample data matrix: Consider the following matrix

X = [ 4.0  2.0  0.60 ]
    [ 4.2  2.1  0.59 ]
    [ 3.9  2.0  0.58 ]
    [ 4.3  2.1  0.62 ]
    [ 4.1  2.2  0.63 ]

The set of 5 observations, measuring 3 variables, can be described by its mean vector and variance-covariance matrix. The three variables, from left to right, are length, width, and height of a certain object, for example. Each row vector Xi is another observation of the three variables (or components).

Definition of mean vector and variance-covariance matrix


The mean vector consists of the means of each variable, and the variance-covariance matrix consists of the variances of the variables along the main diagonal and the covariances between each pair of variables in the other matrix positions.
The formula for computing the covariance of the variables X and Y is

Cov(X, Y) = [ Σ_{i=1}^{n} (Xi − x̄)(Yi − ȳ) ] / (n − 1)

with x̄ and ȳ denoting the means of X and Y, respectively.

Example: Mean vector & variance-covariance matrix


The results are:

x̄ = [ 4.10  2.08  0.604 ]

S = [ 0.025    0.0075   0.00175 ]
    [ 0.0075   0.0070   0.00135 ]
    [ 0.00175  0.00135  0.00043 ]

where the mean vector contains the arithmetic averages of the three variables and the (unbiased) variance-covariance matrix S is calculated by

S = 1/(n − 1) Σ_{i=1}^{n} (Xi − X̄)(Xi − X̄)'

where n = 5 for this example.

Thus, 0.025 is the variance of the length variable, 0.0075 is the covariance between the length and the width variables, 0.00175 is the covariance
between the length and the height variables, 0.007 is the variance of the width variable, 0.00135 is the covariance between the width and height variables
and 0.00043 is the variance of the height variable.
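The same computation can be reproduced with NumPy on the data matrix above (`np.cov` uses the unbiased n − 1 divisor by default):

```python
import numpy as np

# The 5x3 data matrix from this section (length, width, height).
X = np.array([[4.0, 2.0, 0.60],
              [4.2, 2.1, 0.59],
              [3.9, 2.0, 0.58],
              [4.3, 2.1, 0.62],
              [4.1, 2.2, 0.63]])

xbar = X.mean(axis=0)        # mean vector
S = np.cov(X, rowvar=False)  # unbiased variance-covariance matrix (divisor n-1)

print(xbar)          # [4.1   2.08  0.604]
print(S.round(5))    # matches the S given above
```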


12 Solved Exercises
Exercise 1
Let X = [X1 X2]' be a multivariate normal random vector with mean

µ = [1  2]'

and covariance matrix

V = [ 3  1 ]
    [ 1  2 ]

Prove that the random variable

Y = X1 + X2

has a normal distribution with mean equal to 3 and variance equal to 7.
Hint: use the joint moment generating function of X and its properties.
Solution

The random variable Y can be written as

Y = BX

where B = [1  1].

Using the formula for the joint moment generating function of a linear transformation of a random vector,

MY(t) = MX(B't)

and the fact that the mgf of a multivariate normal vector X is

MX(t) = exp( t'µ + (1/2) t'Vt )

we obtain

MY(t) = MX(B't) = exp( t'Bµ + (1/2) t'BVB't ) = exp( Bµ t + (1/2) BVB' t² )

where, in the last step, we have used the fact that t is a scalar, because Y is unidimensional. Now

Bµ = [1  1] [1  2]' = 1·1 + 1·2 = 3

and

VB' = [ 3·1 + 1·1 ]  = [ 4 ]
      [ 1·1 + 2·1 ]    [ 3 ]

BVB' = [1  1] [4  3]' = 1·4 + 1·3 = 7

Plugging the values just obtained into the formula for the mgf of Y, we get

MY(t) = exp( Bµ t + (1/2) BVB' t² ) = exp( 3t + (7/2) t² )

But this is the moment generating function of a normal random variable with mean equal to 3 and variance equal to 7. Therefore, Y is a normal random variable with mean equal to 3 and variance equal to 7 (recall that a distribution is completely characterized by its moment generating function).
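The matrix arithmetic in this exercise can be checked with a short NumPy sketch:

```python
import numpy as np

# Exercise 1: X ~ N(mu, V), Y = X1 + X2 = B X with B = [1, 1].
mu = np.array([1.0, 2.0])
V = np.array([[3.0, 1.0],
              [1.0, 2.0]])
B = np.array([1.0, 1.0])

mean_Y = B @ mu      # B mu
var_Y = B @ V @ B    # B V B'
print(mean_Y, var_Y) # 3.0 7.0
```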


Exercise 2
Let X = [X1 X2]' be a multivariate normal random vector with mean

µ = [2  3]'

and covariance matrix

V = [ 2  1 ]
    [ 1  2 ]

Using the joint moment generating function of X, derive the cross-moment

E[X1² X2]

Solution

The joint mgf of X is

MX(t) = exp( t'µ + (1/2) t'Vt )
      = exp( 2t1 + 3t2 + (1/2)(2t1² + 2t2² + 2t1t2) )
      = exp( 2t1 + 3t2 + t1² + t2² + t1t2 )

The third-order cross-moment we want to compute is equal to a third partial derivative of the mgf, evaluated at zero:

E[X1² X2] = ∂³MX(t1, t2) / ∂t1² ∂t2 |_{t1=0, t2=0}

The partial derivatives are

∂MX/∂t1 = (2 + 2t1 + t2) exp( 2t1 + 3t2 + t1² + t2² + t1t2 )

∂²MX/∂t1² = [ 2 + (2 + 2t1 + t2)² ] exp( 2t1 + 3t2 + t1² + t2² + t1t2 )

∂³MX/∂t1²∂t2 = [ 2(3 + 2t2 + t1) + 2(2 + 2t1 + t2) + (2 + 2t1 + t2)²(3 + 2t2 + t1) ] exp( 2t1 + 3t2 + t1² + t2² + t1t2 )

Thus,

E[X1² X2] = ∂³MX/∂t1²∂t2 |_{t1=0, t2=0} = 2·3 + 2·2 + 2²·3 = 6 + 4 + 12 = 22
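As an independent check, the cross-moment can be approximated by numerically differentiating the mgf. This sketch is not part of the original solution; the step size h is an arbitrary choice balancing truncation against rounding error:

```python
from math import exp

def M(t1, t2):
    """Joint mgf of X from Exercise 2."""
    return exp(2*t1 + 3*t2 + t1**2 + t2**2 + t1*t2)

# E[X1^2 X2] is the mixed third derivative of M at t = 0; approximate it
# with central finite differences.
h = 1e-3

def d2_t1(t2):
    """Second-order central difference in t1 at t1 = 0."""
    return (M(h, t2) - 2*M(0.0, t2) + M(-h, t2)) / h**2

moment = (d2_t1(h) - d2_t1(-h)) / (2*h)  # first-order central difference in t2
print(round(moment, 2))  # approximately 22.0
```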


Exercise 3
Measurements were taken on n heart-attack patients on their cholesterol levels. For each patient, measurements were
taken 0, 2, and 4 days following the attack. Treatment was given to reduce cholesterol level. The sample mean vector is:

Variable Mean
X1 = 0-Day 259.5
X2 = 2-Day 230.8
X3 = 4-Day 221.5

The covariance matrix is

0-Day 2-Day 4-day


0-Day 2276 1508 813
2-Day 1508 2206 1349
4-Day 813 1349 1865

Suppose that we are interested in the difference X1 − X2 , the difference between the 0-day and the 2-day measure-
ments.
Solution

We can write the linear combination of interest as

a'x = [1  −1  0] [x1  x2  x3]'

The mean value for the difference is

a'µ = [1  −1  0] [259.5  230.8  221.5]' = 259.5 − 230.8 = 28.7

The variance is

a'Σa = [1  −1  0] [ 2276  1508   813 ] [  1 ]
                  [ 1508  2206  1349 ] [ −1 ]
                  [  813  1349  1865 ] [  0 ]

     = [768  −698  −536] [1  −1  0]' = 768 + 698 = 1466

If we assume the three measurements have a multivariate normal distribution, then the difference X1 − X2 has a univariate normal distribution.
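The contrast computation in this exercise can be verified with NumPy:

```python
import numpy as np

# Exercise 3: cholesterol means and covariance matrix for days 0, 2, and 4.
mu = np.array([259.5, 230.8, 221.5])
Sigma = np.array([[2276.0, 1508.0,  813.0],
                  [1508.0, 2206.0, 1349.0],
                  [ 813.0, 1349.0, 1865.0]])

# The contrast a = (1, -1, 0) picks out the 0-day minus 2-day difference.
a = np.array([1.0, -1.0, 0.0])
diff_mean = a @ mu          # a' mu
diff_var = a @ Sigma @ a    # a' Sigma a
print(diff_mean, diff_var)  # 28.7 1466.0
```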

Useful Results for the Multivariate Normal


For variables with a multivariate normal distribution with mean vector µ and covariance matrix Σ, some useful
facts are:

• Each single variable has a univariate normal distribution. Thus we can look at univariate tests of normality
for each variable when assessing multivariate normality.
• Any subset of the variables also has a multivariate normal distribution.
• Any linear combination of the variables has a univariate normal distribution.
• Any conditional distribution for a subset of the variables, conditional on known values for another subset of the variables, is a multivariate normal distribution.


13 Sample Multiple Choice Questions


1. Linear combinations of normal variables are normally distributed, and hence the marginal distributions are ______.
a) Normal b) non-normal c) Independent d) dependent
2. Suppose the random vector X of p components has the covariance matrix Σ, which is assumed to be ______.
a) positive definite b) negative definite c) infinite d) finite
3. If every linear combination of the components of a vector Y is normally distributed, then Y is ______.
a) Normal b) non-normal c) random vector d) none of the above
4. Multivariate data is usually presented in the form of
a) Frequency Distribution b) A Two-Way Table c) A Matrix d) both (b) and (c)
5. If all the p-variables are independent, then the variance-covariance matrix will be
a) Frequency Distribution b) A Two-Way Table c) A Matrix d) both (b) and (c)

14 Answer the following questions


1. Define the multivariate probability density function. (2 marks)
2. Explain briefly the properties of the multivariate normal distribution. (5 marks)
3. Prove that the distribution of a linear combination of normally distributed variables is also normal. (5 marks)
4. Define the multivariate normal distribution and derive its characteristic function. (5 marks)
5. Explain the relationship between singular and non-singular multivariate normal distributions. (8 marks)

Answer Key
1. A
2. A
3. A
4. C
5. B

References
[1] T. W. Anderson, An Introduction to Multivariate Statistical Analysis (Wiley Series in Probability and Statistics), Wiley-Interscience, 2003, 747 pages.
[2] R. A. Johnson and D. W. Wichern, Applied Multivariate Statistical Analysis (Sixth Edition), Pearson New International Edition, 2013.
[3] Marco Taboga, "Multivariate normal distribution", https://www.statlect.com/probability-distributions/multivariate-normal-distribution, 2020.
[4] "The Multivariate Normal Distribution", http://www.math.hkbu.edu.hk/~hpeng/Math3806/Lecture_note3.pdf
[5] "Lesson 4: Multivariate Normal Distribution", https://online.stat.psu.edu/stat505/book/export/html/636