
LESSON 2 (3hrs)

INFERENCE ON THE MEAN VECTOR $\mu$ WHEN $\Sigma$ IS KNOWN


2.1 Introduction
In this Lesson we consider inference on the mean vector $\mu$ when $\Sigma$ is known.
Key terms
• Multivariate normal random vector: a random vector whose probability density
function is that of a multivariate normal distribution.
2.2 Lesson Learning Outcomes
By the end of this Lesson, you should be able to:
2.2.1 Estimate the mean vector and the dispersion matrix from $N_p(\mu, \Sigma)$
2.2.2 Test hypotheses on the mean vector $\mu$ when the dispersion matrix $\Sigma$ is
known
2.2.3 Test for the difference between two mean vectors when the common variance-
covariance matrix $\Sigma$ is known

2.2.1 Estimation of the Mean Vector and the Dispersion Matrix from $N_p(\mu, \Sigma)$
Suppose a random vector $X = (X_1, X_2, \ldots, X_p)' \sim N_p(\mu, \Sigma)$, where $\mu$ is a vector of
constants and $\Sigma$ is a symmetric positive definite matrix of constants. Suppose that a
random sample of size $n$ is taken from this population. Then the observation matrix (or
data matrix) is given by

$$
X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & & \vdots \\ x_{p1} & x_{p2} & \cdots & x_{pn} \end{pmatrix} = \begin{pmatrix} X_1 & X_2 & \cdots & X_n \end{pmatrix}
$$

where

$$
X_j = \begin{pmatrix} X_{1j} \\ X_{2j} \\ \vdots \\ X_{pj} \end{pmatrix}, \quad j = 1, 2, \ldots, n .
$$

The sample mean vector is defined by

$$
\bar{X} = \begin{pmatrix} \bar{X}_1 \\ \bar{X}_2 \\ \vdots \\ \bar{X}_p \end{pmatrix}, \quad \text{where } \bar{X}_i = \frac{1}{n} \sum_{j=1}^{n} X_{ij}
$$

is the mean of the observations on the $i$-th component of $X$.

We note that $\bar{X} = \dfrac{1}{n} \displaystyle\sum_{j=1}^{n} X_j$.

We define the sample dispersion matrix to be the matrix of sample variances and
covariances of $X_1, X_2, \ldots, X_p$.

That is,

$$
S = \begin{pmatrix} s_{11} & s_{12} & \cdots & s_{1p} \\ s_{21} & s_{22} & \cdots & s_{2p} \\ \vdots & \vdots & & \vdots \\ s_{p1} & s_{p2} & \cdots & s_{pp} \end{pmatrix}, \quad \text{where } s_{ij} = \frac{1}{n} \left( \sum_{k=1}^{n} X_{ik} X_{jk} - \frac{1}{n} \sum_{k=1}^{n} X_{ik} \sum_{k=1}^{n} X_{jk} \right), \quad i, j = 1, 2, \ldots, p ,
$$

so that we can write $S$ as

$$
S = \frac{1}{n} \sum_{j=1}^{n} (X_j - \bar{X})(X_j - \bar{X})' ,
$$

which is a symmetric positive definite matrix. It can be shown that the maximum likelihood
estimators (M.L.E.'s) of $\mu$ and $\Sigma$ are respectively

$$
\hat{\mu} = \bar{X} \quad \text{and} \quad \hat{\Sigma} = S .
$$
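These estimates are straightforward to compute. Below is a minimal sketch in Python (assuming numpy is available); the data values are illustrative placeholders, not taken from the text.

```python
import numpy as np

# Hypothetical p x n data matrix: each column X_j is one p-variate observation.
X = np.array([[194.0, 196.2, 193.5, 195.8],
              [262.1, 264.0, 261.7, 263.9]])

p, n = X.shape

# Sample mean vector: the average of the columns X_1, ..., X_n.
x_bar = X.mean(axis=1)

# Sample dispersion matrix with divisor n (the M.L.E. of Sigma):
# S = (1/n) * sum_j (X_j - x_bar)(X_j - x_bar)'.
centred = X - x_bar[:, None]
S = centred @ centred.T / n

print("mean vector:", x_bar)
print("dispersion matrix:\n", S)
```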

Sampling Distribution of $\bar{X}$ when $\Sigma$ is Known

If $X_1, X_2, \ldots, X_n$ are iid $N_p(\mu, \Sigma)$, then it can be shown that the sample mean vector
$\bar{X} \sim N_p\!\left(\mu, \tfrac{1}{n}\Sigma\right)$.
We know that $(X - \mu)' \Sigma^{-1} (X - \mu) \sim \chi^2_p$. Applying this fact to $\bar{X}$, whose dispersion matrix is $\tfrac{1}{n}\Sigma$, we obtain

$$
n (\bar{X} - \mu)' \Sigma^{-1} (\bar{X} - \mu) \sim \chi^2_p .
$$
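This distributional result can be checked by simulation. The following is a minimal Monte Carlo sketch (assuming numpy and scipy are available; the values of $\mu$, $\Sigma$ and the sample size are illustrative placeholders):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Illustrative parameters, not from the text.
mu = np.array([0.0, 0.0])
sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
sigma_inv = np.linalg.inv(sigma)
n, reps = 25, 10_000

# For each replicate, draw a sample of size n and compute
# Q = n (xbar - mu)' Sigma^{-1} (xbar - mu).
q_values = []
for _ in range(reps):
    sample = rng.multivariate_normal(mu, sigma, size=n)
    d = sample.mean(axis=0) - mu
    q_values.append(n * (d @ sigma_inv @ d))

# The empirical 95th percentile of Q should be close to chi^2_{2,0.05} = 5.99.
print(np.percentile(q_values, 95), chi2.ppf(0.95, df=2))
```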

2.2.2 Testing for the Mean Vector $\mu$ when $\Sigma$ is Known
Let $X_1, X_2, \ldots, X_n$ be iid $N_p(\mu, \Sigma)$.
Suppose we wish to test

$$
H_0: \mu = \mu_0 \quad \text{against} \quad H_a: \mu \neq \mu_0
$$

($\mu_0$ is a specified vector and $\Sigma$ is specified) at the $\alpha$ level of significance. Then under $H_0$ the statistic

$$
Q = n (\bar{X} - \mu_0)' \Sigma^{-1} (\bar{X} - \mu_0) \sim \chi^2_p .
$$

The critical region of the test is given by

$$
n (\bar{X} - \mu_0)' \Sigma^{-1} (\bar{X} - \mu_0) > \chi^2_{p,\alpha} ,
$$

where $\chi^2_{p,\alpha}$ is such that $\Pr(\chi^2_p > \chi^2_{p,\alpha}) = \alpha$.

Thus the test is to compute $Q = n (\bar{X} - \mu_0)' \Sigma^{-1} (\bar{X} - \mu_0)$ and reject $H_0$ at the $\alpha$ level whenever
$Q > \chi^2_{p,\alpha}$.
In two dimensions the boundary of the region $Q \le \chi^2_{2,\alpha}$ is an ellipse, and for $p > 2$ it is an ellipsoid.
The confidence set for $\mu$ is given by the ellipsoid

$$
C = \{ \mu : n (\bar{X} - \mu)' \Sigma^{-1} (\bar{X} - \mu) \le \chi^2_{p,\alpha} \} .
$$

That is,

$$
\Pr\{ \mu \in C \} = 1 - \alpha .
$$

Compare this to the confidence interval for $\mu$ in the one-dimensional case, i.e.

$$
C = \left\{ \mu : \bar{X} - z_{\alpha/2} \frac{\sigma}{\sqrt{n}} \le \mu \le \bar{X} + z_{\alpha/2} \frac{\sigma}{\sqrt{n}} \right\} .
$$
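This test is easy to carry out numerically. Below is a minimal sketch (assuming numpy and scipy; the function name test_mean_vector is illustrative, not a standard library routine):

```python
import numpy as np
from scipy.stats import chi2

def test_mean_vector(x_bar, mu_0, sigma, n, alpha=0.05):
    """Chi-square test of H0: mu = mu_0 when Sigma is known."""
    d = np.asarray(x_bar) - np.asarray(mu_0)
    # Q = n (xbar - mu_0)' Sigma^{-1} (xbar - mu_0)
    q = n * (d @ np.linalg.inv(sigma) @ d)
    p = len(d)
    critical = chi2.ppf(1 - alpha, df=p)   # chi^2_{p, alpha}
    return q, critical, q > critical       # reject H0 when Q exceeds the critical value
```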

Example 2.1: In Example 1.1 in Lesson 1, suppose we assume that thorax length ($X_1$)
and elytra length ($X_2$) have a joint bivariate normal distribution with mean vector $\mu$ and
known variance-covariance matrix

$$
\Sigma = \begin{pmatrix} 300 & 225 \\ 225 & 350 \end{pmatrix} .
$$

The sample mean vector was obtained as $\bar{X} = [194.9 \;\; 263.4]'$. Test at the $\alpha = 0.05$ level

$$
H_0: \mu = \mu_0 \quad \text{against} \quad H_a: \mu \neq \mu_0 ,
$$

where $\mu_0 = [195 \;\; 262]'$. Give a 95% confidence set.

Solution: Here $n = 10$ and

$$
Q = n (\bar{X} - \mu_0)' \Sigma^{-1} (\bar{X} - \mu_0) \sim \chi^2_2 \quad \text{under } H_0 .
$$

Now

$$
\Sigma^{-1} = \frac{1}{54375} \begin{pmatrix} 350 & -225 \\ -225 & 300 \end{pmatrix} ,
$$

so that

$$
Q = \frac{10}{54375} \begin{pmatrix} -0.1 & 1.4 \end{pmatrix} \begin{pmatrix} 350 & -225 \\ -225 & 300 \end{pmatrix} \begin{pmatrix} -0.1 \\ 1.4 \end{pmatrix} = \frac{10 \times 654.5}{54375} = 0.12 .
$$

Now $\chi^2_{2,0.05} = 5.99$. Since $Q < \chi^2_{2,0.05}$, we fail to reject $H_0$.

The 95% confidence set is given by

$$
C = \{ \mu : 10 (\bar{X} - \mu)' \Sigma^{-1} (\bar{X} - \mu) \le 5.99 \} ,
$$

i.e. all vectors $\mu$ that fall inside the ellipse

$$
C = \left\{ \mu : \frac{10}{54375} \left[ 350 (\bar{X}_1 - \mu_1)^2 + 300 (\bar{X}_2 - \mu_2)^2 - 450 (\bar{X}_1 - \mu_1)(\bar{X}_2 - \mu_2) \right] \le 5.99 \right\} .
$$
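Using the hypothetical test_mean_vector helper sketched in Section 2.2.2, the arithmetic of Example 2.1 can be checked numerically:

```python
import numpy as np

sigma = np.array([[300.0, 225.0],
                  [225.0, 350.0]])
x_bar = np.array([194.9, 263.4])
mu_0 = np.array([195.0, 262.0])

# Reuses test_mean_vector as defined in the sketch above.
q, critical, reject = test_mean_vector(x_bar, mu_0, sigma, n=10, alpha=0.05)
print(round(q, 2), round(critical, 2), reject)   # approximately 0.12, 5.99, False -> fail to reject H0
```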

2.2.3 Testing for the difference between two mean vectors when the common variance-
covariance matrix $\Sigma$ is known

Suppose $X_1, X_2, \ldots, X_n$ are iid $N_p(\mu_1, \Sigma)$ and $Y_1, Y_2, \ldots, Y_m$ are iid $N_p(\mu_2, \Sigma)$,
where the $X$ and $Y$ samples are assumed independent.
Then

$$
\bar{X} = \frac{1}{n} \sum_{j=1}^{n} X_j \sim N_p\!\left(\mu_1, \tfrac{1}{n}\Sigma\right), \qquad \bar{Y} = \frac{1}{m} \sum_{j=1}^{m} Y_j \sim N_p\!\left(\mu_2, \tfrac{1}{m}\Sigma\right) .
$$

Let $Z = \bar{X} - \bar{Y}$. Then

$$
E(Z) = E(\bar{X} - \bar{Y}) = \mu_1 - \mu_2
$$

and

$$
\operatorname{var}(Z) = \operatorname{var}(\bar{X}) + \operatorname{var}(\bar{Y}) = \frac{1}{n}\Sigma + \frac{1}{m}\Sigma = \frac{m+n}{mn}\Sigma .
$$

Since $Z = \bar{X} - \bar{Y}$ is a linear combination of normal vectors, it is also normal.

That is,

$$
Z \sim N_p\!\left(\delta, \frac{m+n}{mn}\Sigma\right), \quad \text{where } \delta = \mu_1 - \mu_2 .
$$

We then have

$$
\frac{mn}{m+n} (Z - \delta)' \Sigma^{-1} (Z - \delta) \sim \chi^2_p .
$$

To test $H_0: \mu_1 = \mu_2$ against $H_a: \mu_1 \neq \mu_2$, equivalently $H_0: \mu_1 - \mu_2 = 0$ against $H_a: \mu_1 - \mu_2 \neq 0$,
the test is to compute the value of the test statistic

$$
Q = \frac{mn}{m+n} Z' \Sigma^{-1} Z = \frac{mn}{m+n} (\bar{X} - \bar{Y})' \Sigma^{-1} (\bar{X} - \bar{Y}) \sim \chi^2_p \quad \text{under } H_0 ,
$$

and reject $H_0$ at the $\alpha$ level whenever the computed value $Q_c > \chi^2_{p,\alpha}$ (tabular value).
The $100(1 - \alpha)\%$ confidence set for $\delta = \mu_1 - \mu_2$ is given by

$$
C = \left\{ \delta = \mu_1 - \mu_2 : \frac{mn}{m+n} (Z - \delta)' \Sigma^{-1} (Z - \delta) \le \chi^2_{p,\alpha} \right\} .
$$
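A minimal sketch of this two-sample test in Python (again assuming numpy and scipy; the function name test_two_means is illustrative):

```python
import numpy as np
from scipy.stats import chi2

def test_two_means(x_bar, y_bar, sigma, n, m, alpha=0.05):
    """Chi-square test of H0: mu1 = mu2 when the common Sigma is known."""
    z = np.asarray(x_bar) - np.asarray(y_bar)
    # Q = mn/(m+n) * Z' Sigma^{-1} Z
    q = (m * n) / (m + n) * (z @ np.linalg.inv(sigma) @ z)
    p = len(z)
    critical = chi2.ppf(1 - alpha, df=p)   # chi^2_{p, alpha}
    return q, critical, q > critical
```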

Example 2.2:

Let $X_1, X_2, \ldots, X_{10}$ be iid $N_2(\mu_1, \Sigma)$ and $Y_1, Y_2, \ldots, Y_{12}$ be iid $N_2(\mu_2, \Sigma)$, where

$$
\Sigma = \begin{pmatrix} 1 & 0.8 \\ 0.8 & 2 \end{pmatrix}
$$

is the known common dispersion matrix.

Given that the sample mean vectors are $\bar{X} = [99 \;\; 127]'$ and $\bar{Y} = [105 \;\; 119]'$ respectively,
test $H_0: \mu_1 = \mu_2$ against $H_a: \mu_1 \neq \mu_2$
at the $\alpha = 0.05$ level and give a 95% confidence set.

Solution:
Here we wish to test $H_0: \mu_1 - \mu_2 = 0$ against $H_a: \mu_1 - \mu_2 \neq 0$.

We have

$$
Q = \frac{mn}{m+n} (\bar{X} - \bar{Y})' \Sigma^{-1} (\bar{X} - \bar{Y}) \sim \chi^2_2 \quad \text{under } H_0 .
$$

Here $m = 12$, $n = 10$, $p = 2$, so that we get the computed value of $Q$ as

$$
Q_c = \frac{(12)(10)}{12 + 10} \cdot \frac{1}{1.36} \begin{pmatrix} -6 & 8 \end{pmatrix} \begin{pmatrix} 2 & -0.8 \\ -0.8 & 1 \end{pmatrix} \begin{pmatrix} -6 \\ 8 \end{pmatrix} = \left( \frac{120}{22} \right) \left( \frac{212.8}{1.36} \right) = 853.48 .
$$

Comparing this to $\chi^2_{2,0.05} = 5.99$, we have $Q_c > \chi^2_{2,0.05}$, so we reject $H_0$.

The 95% confidence set for $\delta = \mu_1 - \mu_2$ is given by

$$
C = \left\{ \delta : \frac{120}{22} \cdot \frac{1}{1.36} \left[ 2(6 + \delta_1)^2 + (8 - \delta_2)^2 + 1.6(6 + \delta_1)(8 - \delta_2) \right] \le 5.99 \right\} .
$$
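The figures in Example 2.2 can be checked with the hypothetical test_two_means helper sketched above:

```python
import numpy as np

sigma = np.array([[1.0, 0.8],
                  [0.8, 2.0]])
x_bar = np.array([99.0, 127.0])
y_bar = np.array([105.0, 119.0])

# Reuses test_two_means as defined in the sketch above.
q, critical, reject = test_two_means(x_bar, y_bar, sigma, n=10, m=12, alpha=0.05)
print(round(q, 2), round(critical, 2), reject)   # approximately 853.48, 5.99, True -> reject H0
```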

2.4 Assessment Questions

2.4.1 Write down the pdf of the multivariate normal distribution.
2.4.2 What is the distribution of the quadratic form in the exponent of the pdf of the p-
variate normal distribution? How is this fact used in testing a hypothesis on the mean
vector of the p-variate normal distribution?

2.5 Exercises
2.5.1 A random sample of size $n = 10$ is obtained from a $N_2(\mu, \Sigma)$ distribution where

$$
\Sigma = \begin{pmatrix} 4 & 4.2 \\ 4.2 & 9 \end{pmatrix} .
$$

To test the null hypothesis $H_0: \mu = (8, 7)'$ against the alternative $H_1: \mu \neq (8, 7)'$,
the sample mean vector is found to be $\bar{X} = (8.8, 7.2)'$.
a) Carry out the test at the $\alpha = 0.01$ level of significance.
b) Give a 95% confidence region for the mean vector $\mu$.

2.5.2 A random sample of size $n = 101$ is obtained from a $N_2(\mu, \Sigma)$ distribution where
the dispersion matrix is

$$
\Sigma = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} .
$$

To test the null hypothesis $H_0: \mu = (5.0, 3.5)'$ against the alternative
$H_1: \mu \neq (5.0, 3.5)'$, the sample mean vector is found to be $\bar{X} = (5.5, 3.4)'$. Carry out
the test at the $\alpha = 0.05$ level of significance.

Summary
In this Lesson we have considered inference on the mean vector $\mu$ when $\Sigma$ is known. In
particular we have:
1. Estimated the mean vector and the dispersion matrix from $N_p(\mu, \Sigma)$
2. Tested hypotheses on the mean vector $\mu$ when the dispersion matrix $\Sigma$ is known
3. Tested for the difference between two mean vectors when the common variance-
covariance matrix $\Sigma$ is known

