TO TWO-PHASE REGRESSION
By
DONALD HOLBERT
Bachelor of Science
University of Oregon
Eugene, Oregon
1967

Master of Arts
Washington State University
Pullman, Washington
1969
Thesis Approved:
ACKNOWLEDGMENTS
Broemeling for suggesting the topic and for his guidance and encouragement
the teachers that I have had during the course of my graduate study.
might be called "hard-core" data analysis, and I want to thank him for
that.
For the friendship and help they have given me during the course
to Ceal.
I would also like to thank Mrs. Mary Bonner for the first rate
There are many other people to whom I owe a vote of thanks for
make this prohibitive I would like to thank collectively all of those who
Chapter

Introduction
Review Of Some Related Literature
Organization Of Thesis

A SELECTED BIBLIOGRAPHY

LIST OF TABLES

LIST OF FIGURES
Introduction
happen that the complete data set can be divided into subsets in a well
tising and take their business elsewhere. It also seems clear that if a
the time point in the sequence at which the change occurred. This
knowledge may allow the company to recover the faulty product and
exhaustive. Most of the recent studies have been from the classical
framework.
The shift is then from one point in p-space to another, rather than a
sequence. The first is a detection problem. That is, has there been
what point in the sequence did it occur? Along with these are the
E(S_r) = 0.
Quandt (4) discusses a maximum likelihood technique for
points. He also discusses in this article and a later paper (5) several
switch.
His tests, however, are based on the assumption that one knows
between which two of the independent variables the switch occurs. The
switch point is then the abscissa of the point at which the intersection
change in the mean is made. A test is also given for the null hypothesis
of no shift against the alternative of exactly one shift, and its power
Page (2). This is generalized in a later paper by Kander and Zacks
(8) to the case where the distributions of the X_i's belong to the one
niques are graphical in nature, along the lines suggested by Tukey (11).
such a way that under the hypothesis of no shift they are independently
distributed N(0, σ²). Other useful plots are the cumulative sums of
literature.
hypothesis of the form H₀: γ = γ₀. In his 1971 article, Hinkley (13)
parameterizes the problem a little differently. He assumes that Y_i,
assumption that β₂ is unknown and also under the assumption that
(15).
the analyses given by Quandt (4, 18), Sprent (6), Hinkley (12, 13) and
others.
given by:

and

with σ² > 0 and φ₀ ≠ φ₁.

The case in which σ² is assumed known has been studied by
shall consider:

(i) φ₀, φ₁ both known

shift is known, and this will be discussed in the following sections.
While the main emphasis in this paper is on the estimation of the
I_{n-1} = {1, ..., n-1}. The parameters for our problem are now m,
densities:

π₀(m) = 1/(n-1),   m = 1, ..., n-1,
      = 0   elsewhere,

and
2
The pric:n• on er ·is of course an improper density, It has been
widely used to indicate vague prior know Ledge of the var·~ance. Its
L(m, σ²)

where 0 < σ² < ∞ and m belongs to I_{n-1}. In accordance with Bayes
theorem the joint posterior distribution of m and σ² is

π₁(m, σ²) ∝ L(m, σ²) π₀(m) π₀(σ²)

          ∝ (σ²)^{-(n/2+1)} exp{(-1/2σ²)[Σ_{i=1}^m (X_i - φ₀)² + Σ_{i=m+1}^n (X_i - φ₁)²]}   (2.2)

where 0 < σ² < ∞ and m belongs to I_{n-1}.
Inference About m
which is given by

π₁(m) ∝ [Σ_{i=1}^m (X_i - φ₀)² + Σ_{i=m+1}^n (X_i - φ₁)²]^{-n/2},   m = 1, ..., n-1.   (2.3)
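Equation (2.3) is straightforward to evaluate numerically. The following is a minimal sketch in Python (NumPy assumed; the function name and data handling are illustrative, not part of the original study):

import numpy as np

def switch_posterior(x, phi0, phi1):
    """Posterior pi_1(m) of (2.3): both means known, sigma^2 unknown.

    pi_1(m) is proportional to
    [sum_{i<=m} (x_i - phi0)^2 + sum_{i>m} (x_i - phi1)^2]^(-n/2).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = np.array([np.sum((x[:m] - phi0) ** 2) + np.sum((x[m:] - phi1) ** 2)
                  for m in range(1, n)])        # K(m) for m = 1, ..., n-1
    log_post = -(n / 2) * np.log(k)             # log scale avoids underflow
    post = np.exp(log_post - log_post.max())
    return post / post.sum()                    # normalized over m = 1, ..., n-1

For a sequence that actually drifts from φ₀ to φ₁, the normalized probabilities concentrate near the true switch point.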
Inference About σ²

This may be based on the marginal posterior distribution of σ², which is given by

π₁(σ²) ∝ Σ_{m=1}^{n-1} π₁(m, σ²)

       ∝ Σ_{m=1}^{n-1} (σ²)^{-(n/2+1)} exp{(-1/2σ²) K(m)},   0 < σ² < ∞   (2.4)
where

π₁(w) ∝ Σ_{m=1}^{n-1} w^{n/2-1} exp{-w K(m)/2},   0 < w < ∞.   (2.5)
We may also make the equivalent statement that the posterior density
based is the conditional posterior distribution, π₁(σ² | m). One could
w = 1/σ². If we let
π₁(w) ∝ Σ_{m=1}^{n-1} f_m(w; n/2, K(m))   (2.7)
K' = ∫_0^∞ Σ_{m=1}^{n-1} Γ(n/2) [2/K(m)]^{n/2} f_m(w; n/2, K(m)) dw

   = Γ(n/2) 2^{n/2} Σ_{m=1}^{n-1} [K(m)]^{-n/2}.
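Because each term of (2.7) is a gamma density, the posterior of w = 1/σ² can be evaluated as a finite mixture whose weights are proportional to [K(m)]^{-n/2}, exactly as the normalizing constant above indicates. A sketch assuming SciPy, with K(1), ..., K(n-1) supplied as a vector (the helper name is hypothetical):

import numpy as np
from scipy.stats import gamma

def precision_posterior_pdf(w, k_values):
    """Posterior density of w = 1/sigma^2 as the gamma mixture of (2.7).

    f_m(w; n/2, K(m)) is the gamma density with shape n/2 and rate K(m)/2;
    the mixing weights are proportional to [K(m)]^(-n/2).
    """
    k_values = np.asarray(k_values, dtype=float)
    n = len(k_values) + 1                      # K(m) given for m = 1, ..., n-1
    weights = k_values ** (-n / 2)
    weights = weights / weights.sum()
    dens = [gamma.pdf(w, a=n / 2, scale=2.0 / k) for k in k_values]  # rate K(m)/2
    return np.dot(weights, dens)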
index m.
that for the case φ₀ known and φ₁ unknown, and for that reason we
shall study the latter case only in this section. The results for the
shall assume that the direction of the shift is not known and assign the
Next we shall assume that it is known that φ₀ < φ₁, and in this case
easily seen to be

where

X̄_{m+1}^n = Σ_{i=m+1}^n X_i /(n-m). Making use of the Tonelli theorem
(21) we can write
π₁(m) ∝ ∫_0^∞ (σ²)^{-(n/2+1)} exp{(-1/2σ²) K(m, φ₀)} [∫_{-∞}^∞ exp{(-(n-m)/2σ²)(φ₁ - X̄_{m+1}^n)²} dφ₁] dσ²

      ∝ (n-m)^{-1/2} ∫_0^∞ (σ²)^{-(n+1)/2} exp{(-1/2σ²) K(m, φ₀)} dσ²

where

K(m, φ₀) = Σ_{i=1}^m (X_i - φ₀)² + Σ_{i=m+1}^n (X_i - X̄_{m+1}^n)².
Making use of the inverted gamma integral as before we obtain for the
posterior density of m, in the notation already introduced,

π₁(m) ∝ (n-m)^{-1/2} [K(m, φ₀)]^{-(n-1)/2},   m = 1, ..., n-1.   (2.9)
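Numerically, (2.9) is as easy to handle as (2.3). A minimal sketch (NumPy assumed; the function name is illustrative):

import numpy as np

def switch_posterior_phi1_unknown(x, phi0):
    """Posterior pi_1(m) of (2.9): phi_0 known, phi_1 and sigma^2 unknown.

    pi_1(m) is proportional to (n-m)^(-1/2) [K(m, phi0)]^(-(n-1)/2), where
    K(m, phi0) = sum_{i<=m} (x_i - phi0)^2 + sum_{i>m} (x_i - xbar)^2
    and xbar is the mean of the last n-m observations.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_post = np.empty(n - 1)
    for m in range(1, n):
        tail = x[m:]
        k = np.sum((x[:m] - phi0) ** 2) + np.sum((tail - tail.mean()) ** 2)
        log_post[m - 1] = -0.5 * np.log(n - m) - ((n - 1) / 2) * np.log(k)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()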
Unconstrained Prior, Inference About σ²

This may be based on the marginal posterior distribution of σ²,
which is given by

π₁(σ²) ∝ Σ_{m=1}^{n-1} ∫_{-∞}^∞ (σ²)^{-(n/2+1)} exp{(-1/2σ²)[Σ_{i=1}^m (X_i - φ₀)² + Σ_{i=m+1}^n (X_i - φ₁)²]} dφ₁

π₁(φ₁) ∝ Σ_{m=1}^{n-1} ∫_0^∞ (σ²)^{-(n/2+1)} exp{(-1/2σ²)[Σ_{i=1}^m (X_i - φ₀)² + Σ_{i=m+1}^n (X_i - φ₁)²]} dσ²

       ∝ Σ_{m=1}^{n-1} [Σ_{i=1}^m (X_i - φ₀)² + Σ_{i=m+1}^n (X_i - φ₁)²]^{-n/2}
where

and it is easy to verify that the mixing density has the value
at its mth mass point, m = 1, ..., n-1.
The joint posterior density of m, σ² and φ₁ is now

π₁(m, σ², φ₁) ∝ (σ²)^{-(n/2+1)} exp{(-1/2σ²)[Σ_{i=1}^m (X_i - φ₀)² + Σ_{i=m+1}^n (X_i - φ₁)²]}

where m ∈ I_{n-1}, 0 < σ² < ∞ and φ₀ < φ₁ < ∞. The marginal of m
is thus
Integration with respect to σ² proceeds as before to give

π₁(m)

The integral is seen to be, apart from the norming constant, the
upper tail of the t density with location parameter X̄_{m+1}^n, precision

(2.13)

posterior density of m
computational ease.
Constrained Prior, Inference About σ²

The marginal posterior density of σ² is given by
π₁(σ²) ∝ Σ_{m=1}^{n-1} ∫_{φ₀}^∞ (σ²)^{-(n/2+1)} exp{(-1/2σ²)[Σ_{i=1}^m (X_i - φ₀)² + Σ_{i=m+1}^n (X_i - φ₁)²]} dφ₁

       ∝ Σ_{m=1}^{n-1} (σ²)^{-(n/2+1)} exp{(-1/2σ²) K(m, φ₀)} ∫_{φ₀}^∞ exp{(-(n-m)/2σ²)(φ₁ - X̄_{m+1}^n)²} dφ₁

       ∝ Σ_{m=1}^{n-1} (σ²)^{-(n/2+1)} exp{(-1/2σ²) K(m, φ₀)} (2πσ²/(n-m))^{1/2} [1 - N((φ₀ - X̄_{m+1}^n)/(σ/(n-m)^{1/2}))]   (2.15)

for 0 < σ² < ∞, where N(·) denotes the standard normal distribution function.
is

(2.16)

Referring again to (2.16) we see that the value of the mixing density at
its mth mass point is

(2.17)
As in the previous case we shall study the present situation for
known about the order relation between φ₀ and φ₁ and assign the
improper prior density
Next we shall assume it is known that φ₀ < φ₁ and assign the prior
The theory for the case in which the order relation on the
prior and "constrained" prior for (2.18) and (2.19) respectively.
The joint posterior density of m, σ², φ₀ and φ₁ is

where m ∈ I_{n-1}, 0 < σ² < ∞, and -∞ < φ_i < ∞ for i = 0, 1. The
where

X̄_1^m = (Σ_{i=1}^m X_i)/m   and   X̄_{m+1}^n = (Σ_{i=m+1}^n X_i)/(n-m).
Let
C(m) = Σ_{i=1}^m (X_i - X̄_1^m)² + Σ_{i=m+1}^n (X_i - X̄_{m+1}^n)²
where
μ(m) = (X̄_1^m, X̄_{m+1}^n)′   and   T(m) = [ (n-2)m/C(m)          0
                                             0          (n-2)(n-m)/C(m) ].
The double integral above is, apart from the norming constant,
A comparison of (2.3), (2.9), and (2.22) displays an interesting
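All three posteriors share one computational pattern: a simple function of m multiplying a negative power of a within-segment sum of squares. A minimal sketch of (2.22) as reconstructed above, with φ₀, φ₁ and σ² all unknown (NumPy assumed; the function name is illustrative):

import numpy as np

def switch_posterior_both_unknown(x):
    """Posterior pi_1(m) of (2.22): phi_0, phi_1 and sigma^2 all unknown.

    pi_1(m) is proportional to [m(n-m)]^(-1/2) [C(m)]^(-(n-2)/2), where
    C(m) is the pooled sum of squares about the two segment means.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_post = np.empty(n - 1)
    for m in range(1, n):
        head, tail = x[:m], x[m:]
        c = np.sum((head - head.mean()) ** 2) + np.sum((tail - tail.mean()) ** 2)
        log_post[m - 1] = -0.5 * np.log(m * (n - m)) - ((n - 2) / 2) * np.log(c)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()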
Unconstrained Prior, Inference About σ²
π₁(σ²) ∝ Σ_{m=1}^{n-1} ∫_{-∞}^∞ ∫_{-∞}^∞ (σ²)^{-(n/2+1)} exp{(-1/2σ²)[Σ_{i=1}^m (X_i - φ₀)² + Σ_{i=m+1}^n (X_i - φ₁)²]} dφ₀ dφ₁

       ∝ Σ_{m=1}^{n-1} [m(n-m)]^{-1/2} (σ²)^{-n/2} exp{(-1/2σ²) C(m)},   0 < σ² < ∞.   (2.23)
Letting h_m(φ; n-2, μ(m), T(m)) denote the bivariate t density with
n-2 degrees of freedom, location parameter μ(m), and precision

π₁(φ₀, φ₁) ∝ Σ_{m=1}^{n-1} [m(n-m)]^{-1/2} [C(m)]^{-(n-2)/2} h_m(φ; n-2, μ(m), T(m)),

-∞ < φ_i < ∞, i = 1, 2.   (2.25)
π₁(m, σ², φ₀, φ₁) ∝ (σ²)^{-(n/2+1)} exp{(-1/2σ²)[Σ_{i=1}^m (X_i - φ₀)² + Σ_{i=m+1}^n (X_i - φ₁)²]}   (2.26)

where m ∈ I_{n-1}, 0 < σ² < ∞, and -∞ < φ₀ < φ₁ < ∞. The marginal
π₁(m) ∝ ∫_{-∞}^∞ (n-m)^{-1/2} [Σ_{i=1}^m (X_i - φ₀)² + Σ_{i=m+1}^n (X_i - X̄_{m+1}^n)²]^{-(n-1)/2} ⋯ dφ₀
π₁(m) ∝ [m(n-m)]^{-1/2} [C(m)]^{-(n-2)/2} ∫_{-∞}^∞ [1 - Ψ_{n-1}(T(m, φ₀)^{1/2} (φ₀ - X̄_{m+1}^n))] g_m(φ₀; n-2, X̄_1^m, w(m)) dφ₀   (2.28)

π₁(m) ∝ [m(n-m)]^{-1/2} [C(m)]^{-(n-2)/2} E_{φ₀}[1 - Ψ_{n-1}(T(m, φ₀)^{1/2} (φ₀ - X̄_{m+1}^n))]   (2.29)
where

+ (X̄_1^m - X̄_{m+1}^n)² ⋯

The expectation of the indicated function of φ₀ is now taken with
tion technique of some kind would be needed to evaluate this density for
The marginal posterior density of σ² for this case is

π₁(σ²) ∝ Σ_{m=1}^{n-1} ∫_{-∞}^∞ ∫_{φ₀}^∞ (σ²)^{-(n/2+1)} exp{(-1/2σ²)[Σ_{i=1}^m (X_i - φ₀)² + Σ_{i=m+1}^n (X_i - φ₁)²]} dφ₁ dφ₀

       ∝ Σ_{m=1}^{n-1} ∫_{-∞}^∞ (σ²)^{-(n/2+1)} exp{(-1/2σ²)[Σ_{i=1}^m (X_i - φ₀)² + Σ_{i=m+1}^n (X_i - X̄_{m+1}^n)²]} (2πσ²/(n-m))^{1/2} [1 - N((φ₀ - X̄_{m+1}^n)/(σ/(n-m)^{1/2}))] dφ₀
       ∝ Σ_{m=1}^{n-1} [m(n-m)]^{-1/2} (σ²)^{-n/2} exp{(-1/2σ²) C(m)} E_{φ₀}[1 - N((φ₀ - X̄_{m+1}^n)/(σ/(n-m)^{1/2}))]   (2.31)
Letting

write

π₁(σ²) ∝ Σ_{m=1}^{n-1} [m(n-m)]^{-1/2} (σ²)^{-n/2} exp{(-1/2σ²) C(m)} ⋯   (2.32)
Constrained Prior, Inference About (φ₀, φ₁)
∫_0^∞ (σ²)^{-(n/2+1)} exp{(-1/2σ²)[Σ_{i=1}^m (X_i - φ₀)² + Σ_{i=m+1}^n (X_i - φ₁)²]} dσ²
where -∞ < φ₀ < φ₁ < ∞. Evaluation of the integral and simplification
φ₀ and φ₁. The joint posterior density of φ₀ and φ₁ may then be
written as
π₁(φ₀, φ₁) ∝ Σ_{m=1}^{n-1} [m(n-m)]^{-1/2} [C(m)]^{-(n-2)/2} h_m(φ; n-2, μ(m), T(m)).

As before, h_m(φ; n-2, μ(m), T(m)) is the bivariate t density

T(m) = [ (n-2)m/C(m)          0
         0          (n-2)(n-m)/C(m) ]
CHAPTER III
DISCRETE CASE
random variable m for the unknown switch point, and we shall further
assume that the state space of m is the set I_{T-2} = {2, 3, ..., T-2}.

Y_i, i = 1, ..., m, independently distributed N(α₁ + β₁X_i, σ²),

and

Y_j, j = m+1, ..., T, independently distributed N(α₂ + β₂X_j, σ²),

where σ² > 0, X₁ < ... < X_T are non-stochastic regressor
variables, and m is the unknown switch point.
such that independent, diffuse prior densities for the unknown para-

and

⋯ + Σ_{i=m+1}^T [Y_i - (α₂ + β₂X_i)]²]}

exp{(-1/2)[Σ_{i=1}^m [Y_i - (α₁ + β₁X_i)]² + Σ_{i=m+1}^T [Y_i - (α₂ + β₂X_i)]²]}   (3.1)
Inference About m
m ∈ I_{T-2}.   (3.2)
Σ_{i=m+1}^T [Y_i - (α₂ + β₂X_i)]² = Σ_{i=m+1}^T [Y_i - Ŷ_i^m]² + (α - μ^m)′ Σ_m^{-1} (α - μ^m)   (3.3)
where

μ^m = (α̂₂^m, β̂₂^m)′,

β̂₂^m = Σ_{i=m+1}^T (X_i - X̄_{m+1}^T)(Y_i - Ȳ_{m+1}^T) / Σ_{i=m+1}^T (X_i - X̄_{m+1}^T)²,

Σ_m^{-1} = [ T-m                Σ_{i=m+1}^T X_i
             Σ_{i=m+1}^T X_i    Σ_{i=m+1}^T X_i² ],

X̄_{m+1}^T = (Σ_{i=m+1}^T X_i)/(T-m),   Ȳ_{m+1}^T = (Σ_{i=m+1}^T Y_i)/(T-m),

and

may be written

∫_{-∞}^∞ ∫_{-∞}^∞ exp{(-1/2)(α - μ^m)′ Σ_m^{-1} (α - μ^m)} dα

for m = 2, ..., T-2.
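Identity (3.3) is the familiar least squares decomposition of a sum of squares about an arbitrary parameter point into the residual sum of squares plus a quadratic form in the deviation from the least squares estimate. A quick numerical check on simulated data (NumPy assumed; the data and trial values are illustrative):

import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 20))
y = 1.0 + 0.5 * x + rng.normal(0, 0.3, 20)

a2, b2 = 0.8, 0.6                                  # arbitrary trial (alpha_2, beta_2)
X = np.column_stack([np.ones_like(x), x])
theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # least squares (a2_hat, b2_hat)
resid = y - X @ theta_hat

lhs = np.sum((y - (a2 + b2 * x)) ** 2)
d = np.array([a2, b2]) - theta_hat
rhs = resid @ resid + d @ (X.T @ X) @ d            # X'X plays the role of Sigma_m^{-1}
print(np.isclose(lhs, rhs))                        # True: the decomposition is exact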
Inference About β₂

π₁(β₂) ∝ Σ_{m=2}^{T-2} ∫_{-∞}^∞ π₁(m, α₂, β₂) dα₂,   -∞ < β₂ < ∞.   (3.5)
(3.5) becomes

π₁(β₂) ∝ Σ_{m=2}^{T-2} (T-m)^{-1/2} exp{(-1/2) Σ_{i=1}^m [Y_i - (α₁ + β₁X_i)]²} ⋯
Substituting

+ Σ_{i=m+1}^T (Y_i - Ŷ_i^m)²

where

K(m) = [(T-m) Σ_{i=m+1}^T (X_i - X̄_{m+1}^T)²]^{-1/2} exp{(-1/2)[Σ_{i=1}^m [Y_i - (α₁ + β₁X_i)]² + Σ_{i=m+1}^T [Y_i - Ŷ_i^m]²]}

= Σ_{i=m+1}^T (X_i - X̄_{m+1}^T)²
and g(y; m, v) is the normal density with mean m and variance v.
Inference About α₂

(3.9)

where now

var(α̂₂^m) = (Σ_{i=m+1}^T X_i²) / [(T-m) Σ_{i=m+1}^T (X_i - X̄_{m+1}^T)²].

on the normal distributions involved in (3.8) and (3.9). For each m,
the mean and variance of the mth density in the mixture are the least
Regression Known
Assuming as before that σ² = 1 the likelihood function is now

L(m, α₁, β₁, α₂, β₂) = (2π)^{-T/2} exp{(-1/2)[Σ_{i=1}^m [Y_i - (α₁ + β₁X_i)]² + Σ_{i=m+1}^T [Y_i - (α₂ + β₂X_i)]²]}.
Assigning independent, improper uniform prior densities to the
we may write
where

α = (α₁, β₁, α₂, β₂)′,   α̂^m = (α̂₁^m, β̂₁^m, α̂₂^m, β̂₂^m)′,

Ŷ_i^{m,L} = α̂₁^m + β̂₁^m X_i,   Ŷ_i^{m,U} = α̂₂^m + β̂₂^m X_i,

and

Σ_m^{-1} = [ m              Σ_{i=1}^m X_i  ]     [ T-m                Σ_{i=m+1}^T X_i  ]
           [ Σ_{i=1}^m X_i  Σ_{i=1}^m X_i² ]  ⊕  [ Σ_{i=m+1}^T X_i    Σ_{i=m+1}^T X_i² ].

Of course α̂₂^m and β̂₂^m are as defined in the previous case
and α̂₁^m and β̂₁^m are their counterparts for the first m data points.
easily using the four-variate normal integral to obtain as the posterior
density of m
π₁(m) ∝ exp{(-1/2)[Σ_{i=1}^m (Y_i - Ŷ_i^{m,L})² + Σ_{i=m+1}^T (Y_i - Ŷ_i^{m,U})²]}   (3.12)
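A minimal numerical sketch of (3.12) as displayed, taking σ² = 1, fitting each segment by least squares, and treating the posterior as proportional to the exponential factor alone (an assumption of this sketch; NumPy assumed, names illustrative):

import numpy as np

def switch_posterior_known_var(x, y):
    """Sketch of (3.12): sigma^2 = 1, neither regression known.

    pi_1(m) is taken proportional to
    exp{-(1/2)[RSS_L(m) + RSS_U(m)]}, m = 2, ..., T-2,
    with RSS_L and RSS_U the least squares residual sums of squares
    from the first m and last T-m points.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    T = len(x)

    def rss(xs, ys):
        X = np.column_stack([np.ones_like(xs), xs])
        r = ys - X @ np.linalg.lstsq(X, ys, rcond=None)[0]
        return r @ r

    ms = np.arange(2, T - 1)                        # m = 2, ..., T-2
    log_post = np.array([-0.5 * (rss(x[:m], y[:m]) + rss(x[m:], y[m:]))
                         for m in ms])
    post = np.exp(log_post - log_post.max())
    return ms, post / post.sum()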
A comparison of this result with (3.4) shows that the known first
regression has been replaced by its estimate for each m and the
Regression Known
To the new unknown parameter σ² we shall assign the improper
prior density

0 < σ² < ∞

density

(σ²)^{-(T/2+1)} exp{(-1/2σ²)[Σ_{i=1}^m [Y_i - (α₁ + β₁X_i)]² + Σ_{i=m+1}^T [Y_i - (α₂ + β₂X_i)]²]}   (3.13)

for m = 2, ..., T-2, 0 < σ² < ∞, -∞ < α₂ < ∞ and -∞ < β₂ < ∞.
for m = 2, ..., T-2, -∞ < α₂ < ∞ and -∞ < β₂ < ∞. Using identity
(3.3) and the same notation as used in case one we can write

where

K(m)

(3.16)
Regression Known

The likelihood function for this, the last case of this chapter, is
for m = 2, ..., T-2, 0 < σ² < ∞, -∞ < α_i < ∞, -∞ < β_i < ∞, i = 1, 2.

Integration on σ² proceeds as in case three to give for the joint
Making use of the identities given in case two we can write, using
K(m) = Σ_{i=1}^m (Y_i - Ŷ_i^{m,L})² + Σ_{i=m+1}^T (Y_i - Ŷ_i^{m,U})²
π₁(m) ∝ [Σ_{i=1}^m (Y_i - Ŷ_i^{m,L})² + Σ_{i=m+1}^T (Y_i - Ŷ_i^{m,U})²]^{-(T-4)/2}   (3.17)

for m = 2, ..., T-2.
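Result (3.17) is the form used for the numerical work of Chapter V, and it reduces to two least squares fits per candidate m. A minimal sketch (NumPy assumed; the function name is illustrative):

import numpy as np

def two_phase_switch_posterior(x, y):
    """Posterior pi_1(m) of (3.17): both regressions and sigma^2 unknown.

    pi_1(m) is proportional to [RSS_L(m) + RSS_U(m)]^(-(T-4)/2) for
    m = 2, ..., T-2, where RSS_L and RSS_U are the least squares
    residual sums of squares of the first m and last T-m points.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    T = len(x)

    def rss(xs, ys):
        X = np.column_stack([np.ones_like(xs), xs])
        r = ys - X @ np.linalg.lstsq(X, ys, rcond=None)[0]
        return r @ r

    ms = np.arange(2, T - 1)
    log_post = np.array([-(T - 4) / 2 * np.log(rss(x[:m], y[:m]) + rss(x[m:], y[m:]))
                         for m in ms])
    post = np.exp(log_post - log_post.max())
    return ms, post / post.sum()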
In closing this chapter we point out the similarity between (3.16)
and (3.17), and remind the reader that examples of (3.12) and (3.17)
CONTINUOUS CASE
In some cases interest may center more on the abscissa of the point of
prior densities, and then to obtain from this the distribution of that
analogous to our "first regression known", and for that reason are not
σ² Known, First Regression Known, m Known
⋯ + Σ_{i=m+1}^T [Y_i - (α₂ + β₂X_i)]²]}
As before we are assuming for convenience that σ² = 1.
Combining this likelihood function with the same diffuse prior densities
for -∞ < α₂ < ∞ and -∞ < β₂ < ∞. Using identity (3.3) of Chapter
(4.2)

where

μ^m = (α̂₂^m, β̂₂^m)′,   Σ_m^{-1} = [ T-m                Σ_{i=m+1}^T X_i
                                     Σ_{i=m+1}^T X_i    Σ_{i=m+1}^T X_i² ].

Of course α̂₂^m and β̂₂^m are the usual least squares estimates

ν^m = (α̂₂^m - α₁, β̂₂^m - β₁)′

Ŷ_i^{m,U} = α̂₂^m + β̂₂^m X_i
section

where

E[|X|] = μ[2N(μ/σ) - 1] + (π/2σ²)^{-1/2} exp{-μ²/2σ²}   (4.4)

(4.5)
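Formula (4.4) is the standard closed form for the mean of |X| when X is N(μ, σ²); a quick Monte Carlo check (NumPy assumed, the particular values are illustrative):

import numpy as np
from math import erf, exp, pi, sqrt

mu, sigma = 1.3, 0.7
rng = np.random.default_rng(0)
mc = np.abs(rng.normal(mu, sigma, 1_000_000)).mean()   # simulated E|X|

N = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))          # standard normal cdf
closed = mu * (2.0 * N(mu / sigma) - 1.0) \
         + sqrt(2.0 * sigma ** 2 / pi) * exp(-mu ** 2 / (2.0 * sigma ** 2))
print(mc, closed)                                       # agree to about three decimals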
σ² Known, First Regression Known, m Unknown
of m, α₂ and β₂

π₁(m, α₂, β₂) ∝ exp{(-1/2)[Σ_{i=1}^m [Y_i - (α₁ + β₁X_i)]² + Σ_{i=m+1}^T [Y_i - (α₂ + β₂X_i)]²]}   (4.6)

where

π₁(m, γ₁, γ₂) ∝ h(m) |γ₂| exp{(1/2) A(m, γ₁) B²(m, γ₁)} exp{(-1/2) A(m, γ₁) (γ₂ - B(m, γ₁))²}

where

h(m) = exp{(-1/2)[Σ_{i=1}^m [Y_i - (α₁ + β₁X_i)]² + Σ_{i=m+1}^T [Ŷ_i^{m,U} - (α₁ + β₁X_i)]² + Σ_{i=m+1}^T [Y_i - Ŷ_i^{m,U}]²]}.   (4.7)
where -∞ < γ₁ < ∞. It is seen that the density in this case is a

(4.9)
where

α = (α₁, β₁, α₂, β₂)′,   α̂^m = (α̂₁^m, β̂₁^m, α̂₂^m, β̂₂^m)′,

and

Σ_m^{-1} = [ m              Σ_{i=1}^m X_i  ]     [ T-m                Σ_{i=m+1}^T X_i  ]
           [ Σ_{i=1}^m X_i  Σ_{i=1}^m X_i² ]  ⊕  [ Σ_{i=m+1}^T X_i    Σ_{i=m+1}^T X_i² ].

(α̂₁^m, β̂₁^m) and (α̂₂^m, β̂₂^m) are the usual least squares estimates of
the regression parameters based on the first m and last T-m data
w₁ = α₁ - α₂,   w₂ = α₂,   w₃ = β₁ - β₂,   w₄ = β₂
w4 in the exponent and these may be integrated out using the form of
(4.10)
where

γ₁ = -w₁/w₃.
where

R(m, γ₁)

and write
course P(m, γ₁) is a function not only of m and γ₁ but also of the
X_i's from the data. The author has been unable to show that P(m, γ₁)
We shall thus assume that P(m, γ₁) is positive for all γ₁ and
proceed to obtain
where

intersection as

π₁(γ₁) ∝ P^{-1/2}(m, γ₁) { [Q(m, γ₁)/P(m, γ₁)] [2N(Q(m, γ₁) P^{-1/2}(m, γ₁)) - 1] + [πP(m, γ₁)/2]^{-1/2} exp{(-1/2) Q²(m, γ₁)/P(m, γ₁)} }
σ² Known, Neither Regression Known, m Unknown
this case proceeds exactly as in case three, the additional step being

π₁(γ₁) ∝ Σ_{m=2}^{T-2} P^{-1/2}(m, γ₁) ⋯

We remind the reader that for this case the condition discussed
in case three must be satisfied for each m over the range of the
summation.
σ² Unknown, First Regression Known, m Known
We assign to σ² the usual improper prior density

π₀(σ²) ∝ 1/σ²,   0 < σ² < ∞,

and retain improper uniform prior densities for the regression para-
π₁(σ², α₂, β₂) ∝ (σ²)^{-(T/2+1)} exp{(-1/2σ²)[Σ_{i=1}^m [Y_i - (α₁ + β₁X_i)]² + Σ_{i=m+1}^T [Y_i - (α₂ + β₂X_i)]²]}   (4.14)

for 0 < σ² < ∞, -∞ < α₂ < ∞, and -∞ < β₂ < ∞. Integrating first
(4.15)
where
and obtain
where
E(m)
and
Σ_m^{-1} = [ m              Σ_{i=1}^m X_i
             Σ_{i=1}^m X_i   Σ_{i=1}^m X_i² ]
where

and

the third case considered in this chapter. Assuming that G(m, γ₁) is
and make use of the general t density (22) to integrate with respect
to γ₂. Although this has not been proved in general it has been
checked for a number of data sets and was not violated in those cases
where

+ μ[2Ψ_n(μ t^{1/2}) - 1],   (4.19)

of freedom. Applying this result to (4.18) we may write the density
of the intersection as
π₁(γ₁) ∝ G^{-(T-1)/2}(m, γ₁) A^{-1/2}(m, γ₁) ⋯
σ² Unknown, First Regression Known, m Unknown
π₁(m, σ², α₂, β₂) ∝ (σ²)^{-(T/2+1)} exp{(-1/2σ²)[Σ_{i=1}^m [Y_i - (α₁ + β₁X_i)]² + Σ_{i=m+1}^T [Y_i - (α₂ + β₂X_i)]²]}   (4.21)

for m = 2, ..., T-2, 0 < σ² < ∞, -∞ < α₂ < ∞, and -∞ < β₂ < ∞.
⋯ [1 + A(m, γ₁) B²(m, γ₁)/G(m, γ₁)]^{-(T-2)/2}   (4.22)
Known, m Known

parameters is

for 0 < σ² < ∞, -∞ < α_i < ∞ and -∞ < β_i < ∞, i = 1, 2.   (4.23)
explained immediately below equation (4.9). We next make the trans-
formation to

w₁ = α₁ - α₂,   w₂ = α₂,   w₃ = β₁ - β₂,   w₄ = β₂
density shown in DeGroot (22), and then obtain the joint distribution of

and

We obtain

where

P(m, γ₁)

= Σ_{i=1}^T (Y_i - Ŷ_i)²
where

and integration with respect to γ₂ proceeds easily using the general
that this condition is satisfied and proceed. One then obtains for the

(4.26)
σ² Unknown, Neither Regression Known, m Unknown
final case is

π₁(m, σ², α₁, β₁, α₂, β₂) ∝ (σ²)^{-(T/2+1)} exp{(-1/2σ²)[Σ_{i=1}^m [Y_i - (α₁ + β₁X_i)]² + Σ_{i=m+1}^T [Y_i - (α₂ + β₂X_i)]²]}.   (4.27)
of the same type as (4.26). Its derivation proceeds as in the previous
case with the additional step being summation on the shift index m.
SOME EXAMPLES
IV.
chapter were done with programs written by the author and run on the
Computer Center.
two, a plot of the posterior density (in the univariate case) or contours
Example 5.1

order to compare our techniques with his, we shall use the same set of
variable Y is blood factor VII production. π_k(m) and π_u(m) are
TABLE I

 m     X_m       Y_m        π_k(m)     π_u(m)
 1   2.00000   0.370483
 2   2.52288   0.537970   0.000057   0.002121
 3   3.00000   0.607684   0.005115   0.011831
 4   3.52288   0.723323   0.031579   0.034446
 5   4.00000   0.761856   0.297597   0.281910
 6   4.52288   0.892063   0.276329   0.266278
 7   5.00000   0.956707   0.351680   0.365501
 8   5.52288   0.940549   0.037518   0.055949
 9   6.00000   0.898609   0.000117   0.001142
10   6.52288   0.953850   0.000006   0.000376
11   7.00000   0.990834   0.000002   0.000322
12   7.52288   0.890291   0.000000   0.000049
13   8.00000   0.990779   0.000000   0.000075
14   8.52288   1.050865
15   9.00000   0.982785
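The π_u(m) column can be checked directly from (3.17) and the (X_m, Y_m) pairs above, under the assumption that these fifteen points are the complete series. A sketch (NumPy assumed):

import numpy as np

x = np.array([2.00000, 2.52288, 3.00000, 3.52288, 4.00000, 4.52288,
              5.00000, 5.52288, 6.00000, 6.52288, 7.00000, 7.52288,
              8.00000, 8.52288, 9.00000])
y = np.array([0.370483, 0.537970, 0.607684, 0.723323, 0.761856, 0.892063,
              0.956707, 0.940549, 0.898609, 0.953850, 0.990834, 0.890291,
              0.990779, 1.050865, 0.982785])
T = len(x)

def rss(xs, ys):
    X = np.column_stack([np.ones_like(xs), xs])
    r = ys - X @ np.linalg.lstsq(X, ys, rcond=None)[0]
    return r @ r

ms = np.arange(2, T - 1)                                  # m = 2, ..., 13
lp = np.array([-(T - 4) / 2 * np.log(rss(x[:m], y[:m]) + rss(x[m:], y[m:]))
               for m in ms])
p = np.exp(lp - lp.max()); p /= p.sum()
for m, pm in zip(ms, p):
    print(m, round(pm, 6))     # compare with the pi_u(m) column of Table I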
two cases, namely variance known (equation (3.12)) and variance
unknown (equation (3.17)). Both cases assume that neither regression
is known. Our results for equation (3.12) are denoted in Table I above
by π_k(m), while those for equation (3.17) are denoted by π_u(m). In
[Figure: plot of the posterior density π_u(m) against m, for m = 2, ..., 13.]
(24) for a survey of this subject. We shall present here the numerical
                                              π_u(m)
Mode of Posterior Distribution:              7.00000
Median of Posterior Distribution:  6.00000   6.00000
Mean of Posterior Distribution:    6.05077   6.04998

Var_u(m) = 1.09669.
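Each of these summaries follows directly from the discrete posterior; a minimal sketch (NumPy assumed, the helper name is illustrative):

import numpy as np

def summarize(ms, p):
    """Mode, median, mean and variance of a discrete posterior p(m)."""
    mode = ms[np.argmax(p)]
    median = ms[np.searchsorted(np.cumsum(p), 0.5)]   # smallest m with mass >= 1/2
    mean = float(np.dot(ms, p))
    var = float(np.dot((ms - mean) ** 2, p))
    return mode, median, mean, var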
Hypothesis Testing
to judge whether or not the data support some specified hypothesis, say
First Approach
when H is rejected.
Second Approach
r₁* = ∫_{Θ₀} π₁(θ; y) dθ.

Third Approach

posed by r₁ and r₁* when H is a simple hypothesis of the form
H: θ = θ₀.
For the data of our Example 5.1, we compute and have

r₁ = 10.586,   r₁* = 0.914,   r₂ = 1.000.
Other Techniques
An Ad Hoc Technique
regression lines.
since the median of our density π_u(m) was six and the mean was near
six. It also agrees to four decimal places with the estimate from
density (4.22) for the case "second regression known". The density
the density are relatively small outside of the interval [3, 6], the
[Figure: posterior density of the intersection abscissa over the interval 3 to 6.]
we now present the results from our posterior density as well as those

Hinkley   4.88   (4.55, 5.29)   (4.45, 5.39)   (4.25, 5.66)
For the same data set as we have used in the previous examples
[Figure: plot of π₁(γ₁), the posterior density of the intersection abscissa, over the range 2 to 6.]
(4.22) and (4.28) compare quite favorably with the confidence intervals
likelihood ratio statistics, with the result that the approximation for
other hand, the posterior distributions given in this report are exact,
is not a single interval. Hinkley (13) also points out that the confidence
single interval.
unknown).
situation would of course include estimates not only of the shift point
and intersection, but also of the regression parameters and the error
sequence.
imprecise. We remark here that even if prior knowledge does not fit
problem.
general problem.
IV. One of the reasons for this is the apparent algebraic complexity of
of the distributions.
Hinkley (13). The same is true for the regression parameters of the
Finally we point out that the study by Hinkley (13) assumes that
γ, the abscissa of the intersection point of the two regression lines, is
constrained by X_m ≤ γ < X_{m+1}, where m is the (usually unknown)
switch point of the sequence. The author has investigated this situation
in the Bayesian framework for the first case, namely where m, σ²
and the first regression are all known. The resulting posterior
interval [X_m, X_{m+1}). One could proceed to study some of the more
complex cases under this added restriction to see how the resulting
(5) Quandt, R. E. "Tests Of The Hypothesis That A Linear Regression Obeys Two Separate Regimes." Journal of the American Statistical Association, Vol. 55, 1960, 324-330.
Donald Holbert
Doctor of Philosophy
Biographical: