The American Statistician, Vol. 37, No. 2 (May 1983), pp. 123-127.
Understanding the Kalman Filter
RICHARD J. MEINHOLD and NOZER D. SINGPURWALLA*
Using the fact that $Y_t = F_t\theta_t + v_t$, (3.5) can be written as $e_t = F_t(\theta_t - G_t\hat{\theta}_{t-1}) + v_t$, so that $E(e_t \mid \theta_t, Y_{t-1}) = F_t(\theta_t - G_t\hat{\theta}_{t-1})$. Since $v_t \sim N(0, V_t)$, it follows that the likelihood is described by

$$(e_t \mid \theta_t, Y_{t-1}) \sim N\!\left(F_t(\theta_t - G_t\hat{\theta}_{t-1}),\ V_t\right). \tag{3.7}$$

We can now use Bayes's theorem (Eq. (3.6)) to obtain

$$P(\theta_t \mid Y_t) \propto P(e_t \mid \theta_t, Y_{t-1}) \times P(\theta_t \mid Y_{t-1}), \tag{3.8}$$

with $P(e_t \mid \theta_t, Y_{t-1})$ being the likelihood.
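To make (3.5) and (3.7) concrete, here is a small numerical sketch in Python (ours, not the paper's; the matrices and values are invented purely for illustration). It computes the forecast error $e_t$ and evaluates the likelihood of that error at a candidate value of the state $\theta_t$:

```python
import numpy as np
from scipy.stats import norm

# Invented values for illustration only (not from the paper).
F_t = np.array([[1.0, 0.5]])           # observation matrix (1 x 2)
G_t = np.array([[1.0, 1.0],
                [0.0, 1.0]])           # system matrix (2 x 2)
V_t = 0.25                             # observation-noise variance
theta_hat_prev = np.array([2.0, 0.3])  # posterior mean at time t-1
theta_t = np.array([2.4, 0.35])        # a candidate state value
Y_t = 2.71                             # the new observation

# Eq. (3.5): one-step-ahead forecast error
e_t = Y_t - F_t @ G_t @ theta_hat_prev

# Eq. (3.7): given theta_t, e_t is normal with mean
# F_t (theta_t - G_t theta_hat_{t-1}) and variance V_t
mean_e = F_t @ (theta_t - G_t @ theta_hat_prev)
likelihood = norm.pdf(e_t, loc=mean_e, scale=np.sqrt(V_t))
print(float(e_t[0]), float(likelihood[0]))
```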
4. DETERMINATION OF THE POSTERIOR DISTRIBUTION

The tedious effort required to obtain $P(\theta_t \mid Y_t)$ using (3.8) can be avoided if we make use of the following well-known result in multivariate statistics (Anderson 1958, pp. 28-29), and some standard properties of the normal distribution.

Let $X_1$ and $X_2$ have a bivariate normal distribution with means $\mu_1$ and $\mu_2$, respectively, and a covariance matrix

$$\Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix};$$

we denote this by

$$(X_1, X_2) \sim N\!\left[(\mu_1, \mu_2),\ \Sigma\right]. \tag{4.1}$$

Then the conditional distribution of $X_1$ given $X_2 = x_2$ is normal:

$$(X_1 \mid X_2 = x_2) \sim N\!\left(\mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2),\ \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\right). \tag{4.2}$$

The converse also holds: if the distribution of $X_1$ conditional on $X_2 = x_2$ is of the form (4.2), and if $X_2 \sim N(\mu_2, \Sigma_{22})$, then the joint distribution of $X_1$ and $X_2$ is that given in (4.1).
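In code, the result in (4.1)-(4.2) amounts to a few lines. The sketch below (our illustration; the function name and the numbers are invented) computes the conditional mean and variance of $X_1$ given $X_2 = x_2$ from the partitioned parameters:

```python
import numpy as np

def conditional_normal(mu1, mu2, S11, S12, S22, x2):
    """Mean and variance of X1 | X2 = x2 for the partitioned
    normal in (4.1), following Eq. (4.2)."""
    gain = np.atleast_2d(S12) @ np.linalg.inv(np.atleast_2d(S22))
    cond_mean = mu1 + gain @ (np.atleast_1d(x2) - mu2)
    # Uses Sigma_21 = Sigma_12', which holds for any covariance matrix.
    cond_var = np.atleast_2d(S11) - gain @ np.atleast_2d(S12).T
    return cond_mean, cond_var

# Scalar example: Sigma_12 = 0.8 couples X1 and X2.
m, v = conditional_normal(mu1=np.array([0.0]), mu2=np.array([1.0]),
                          S11=[[2.0]], S12=[[0.8]], S22=[[1.5]], x2=1.9)
print(m, v)  # the mean shifts toward the observed x2; the variance shrinks
```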
If in (4.2) we replace $X_1$, $X_2$, $\mu_2$, and $\Sigma_{22}$ by $e_t$, $\theta_t$, $G_t\hat{\theta}_{t-1}$, and $R_t$, respectively, and recall the result that $(e_t \mid \theta_t, Y_{t-1}) \sim N(F_t(\theta_t - G_t\hat{\theta}_{t-1}), V_t)$ (Eq. (3.7)), then

$$\mu_1 + \Sigma_{12}R_t^{-1}(\theta_t - G_t\hat{\theta}_{t-1}) \equiv F_t(\theta_t - G_t\hat{\theta}_{t-1}),$$

so that $\mu_1 \equiv 0$ and $\Sigma_{12} \equiv F_tR_t$; similarly,

$$\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21} = \Sigma_{11} - F_tR_tF_t' \equiv V_t,$$

so that $\Sigma_{11} \equiv V_t + F_tR_tF_t'$.

We now invoke the converse relation mentioned previously to conclude that the joint distribution of $\theta_t$ and $e_t$, given $Y_{t-1}$, can be described as

$$\begin{pmatrix}\theta_t \\ e_t\end{pmatrix}\Bigg|\ Y_{t-1} \sim N\!\left[\begin{pmatrix}G_t\hat{\theta}_{t-1}\\ 0\end{pmatrix},\ \begin{pmatrix}R_t & R_tF_t' \\ F_tR_t & V_t + F_tR_tF_t'\end{pmatrix}\right]. \tag{4.3}$$

Applying (4.2) to (4.3), now with $\theta_t$ in the role of $X_1$ and $e_t$ in the role of $X_2$, gives

$$(\theta_t \mid Y_t) \sim N\!\left(G_t\hat{\theta}_{t-1} + R_tF_t'(V_t + F_tR_tF_t')^{-1}e_t,\ R_t - R_tF_t'(V_t + F_tR_tF_t')^{-1}F_tR_t\right). \tag{4.4}$$

This is the desired posterior distribution.
We now summarize to highlight the elements of the recursive procedure. After time $t-1$, we had a posterior distribution for $\theta_{t-1}$ with mean $\hat{\theta}_{t-1}$ and variance $\Sigma_{t-1}$ (Eq. (3.3)). Forming a prior for $\theta_t$ with mean $G_t\hat{\theta}_{t-1}$ and variance $R_t = G_t\Sigma_{t-1}G_t' + W_t$ (Eq. (3.4)), and evaluating a likelihood given $e_t = Y_t - F_tG_t\hat{\theta}_{t-1}$ (Eq. (3.5)), we arrive at the posterior density for $\theta_t$; this has mean

$$\hat{\theta}_t = G_t\hat{\theta}_{t-1} + R_tF_t'(V_t + F_tR_tF_t')^{-1}e_t \tag{4.5}$$

and variance

$$\Sigma_t = R_t - R_tF_t'(V_t + F_tR_tF_t')^{-1}F_tR_t. \tag{4.6}$$

We now continue through the next cycle of the process.
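Finally, the full recursion of Eqs. (3.4), (3.5), (4.5), and (4.6) fits in a short routine. The sketch below is ours, with an invented, time-constant $F_t$, $G_t$, $V_t$, $W_t$ and arbitrary starting values:

```python
import numpy as np

def kalman_step(theta_hat, Sigma, Y, F, G, V, W):
    """One cycle of the recursion: Eqs. (3.4), (3.5), (4.5), (4.6)."""
    R = G @ Sigma @ G.T + W          # prior variance, Eq. (3.4)
    prior_mean = G @ theta_hat       # prior mean, G_t theta_hat_{t-1}
    e = Y - F @ prior_mean           # forecast error, Eq. (3.5)
    K = R @ F.T @ np.linalg.inv(V + F @ R @ F.T)  # the "Kalman gain"
    return prior_mean + K @ e, R - K @ F @ R      # Eqs. (4.5) and (4.6)

# Invented, time-constant system for illustration
F = np.array([[1.0, 0.0]])
G = np.array([[1.0, 1.0],
              [0.0, 1.0]])
V = np.array([[0.5]])
W = 0.01 * np.eye(2)

theta_hat, Sigma = np.zeros(2), np.eye(2)   # arbitrary starting values
for Y in ([0.9], [2.1], [2.9], [4.2]):      # a short observation stream
    theta_hat, Sigma = kalman_step(theta_hat, Sigma, np.array(Y), F, G, V, W)
print(theta_hat, Sigma, sep="\n")
```

Each pass through the loop is one cycle of the procedure summarized above: the posterior from time $t-1$ becomes the prior for time $t$.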