Adjustment Theory
Geodätisches Institut
Universität Stuttgart
https://www.gis.uni-stuttgart.de
Rev. 4.48
Contents

1 Introduction
  1.1 Adjustment theory – a first look
  1.2 Historical development
3 Generalizations
  3.1 Higher dimensions: the A-model (observation equations)
  3.2 The datum problem
  3.3 Linearization of non-linear observation equations
  3.4 Higher dimensions: the B-model (condition equations)
  3.5 Linearization of non-linear condition equations
  3.6 Higher dimensions: the mixed model
5 Geomatics examples
  5.1 A-Model: Adjustment of observation equations
    5.1.1 Planar triangle
    5.1.2 Distance Network
    5.1.3 Distance and Direction Network (1)
    5.1.4 Distance and Direction Network (2a)
    5.1.5 Free Adjustment: Distance and Direction Network (2b)
    5.1.6 Overconstrained adjustment: Distance, direction and angle network
    5.1.7 Polynomial fit
  5.2 B-Model: Adjustment of condition equations
    5.2.1 Planar triangle 1
    5.2.2 Planar triangle 2
6 Statistics
  6.1 Expectation of sum of squared residuals
  6.2 Basics
  6.3 Hypotheses
  6.4 Distributions
A Partitioning
  A.1 Inverse Partitioning Method (IPM)
  A.2 Inverse Partitioning Method: special case 1
  A.3 Inverse Partitioning Method: special case 2
1 Introduction
Adjustment theory (German: Ausgleichungsrechnung) deals with the optimal combination of redundant measurements together with the estimation of unknown parameters. (Teunissen, 2000)
1.1 Adjustment theory – a first look

To understand the purpose of adjustment theory, consider the following simple high-school example that is supposed to demonstrate how to solve for unknown quantities. In case 0 the price of apples and pears is determined after doing groceries twice. After that we will discuss more interesting shopping scenarios.
Case 0)

3 apples + 4 pears = 5.00 €
5 apples + 2 pears = 6.00 €

2 equations in 2 unknowns:
$$5 = 3x_1 + 4x_2$$
$$6 = 5x_1 + 2x_2$$

as matrix-vector system:
$$\begin{pmatrix} 5 \\ 6 \end{pmatrix} = \begin{pmatrix} 3 & 4 \\ 5 & 2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$

linear algebra: $y = Ax$

The determinant of matrix $A$ reads $\det A = 3\cdot 2 - 5\cdot 4 = -14$. Thus the above linear system can be inverted:
$$x = A^{-1}y \iff \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \frac{1}{-14}\begin{pmatrix} 2 & -4 \\ -5 & 3 \end{pmatrix}\begin{pmatrix} 5 \\ 6 \end{pmatrix} = \begin{pmatrix} 1 \\ 0.5 \end{pmatrix}$$
So each apple costs 1 € and each pear 50 cents. The price can be determined because there are as many unknowns (the price of apples and the price of pears) as there are observations (shopping twice). The square and regular matrix $A$ is invertible.
Remark 1.1 (terminology) The left-hand vector 𝑦 contains the observations. The vector 𝑥 contains
the unknown parameters. The two vectors are linked through the design matrix 𝐴. The linear model
𝑦 = 𝐴𝑥 is known as the model of observation equations.
The following cases demonstrate that the idea of determining unknowns from observations is not
as straightforward as may seem from the above example.
Case 1a)
If one buys twice as many apples and pears the second time, and has to pay twice as much as well, no new information is added to the system of linear equations
$$\begin{aligned} 3a + 4p &= 5\,€ \\ 6a + 8p &= 10\,€ \end{aligned} \iff \begin{pmatrix} 5 \\ 10 \end{pmatrix} = \begin{pmatrix} 3 & 4 \\ 6 & 8 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$
The matrix $A$ has linearly dependent columns (and rows), i.e. it is singular. Correspondingly $\det A = 0$ and the inverse $A^{-1}$ does not exist. The observations (5 € and 10 €) are consistent, but the vector $x$ of unknowns (price per apple or pear) cannot be determined. This situation will return later with so-called datum problems. Seemingly trivial, case 1a) is of fundamental importance.
Case 1b)
Suppose the same shopping scenario as above, but now one needs to pay 8 € the second time:
$$y = \begin{pmatrix} 5 \\ 8 \end{pmatrix}$$
In this alternative scenario, the matrix is still singular and 𝑥 cannot be determined. But worse still,
the observations 𝑦 are inconsistent with the linear model. Mathematically, they do not fulfil the
compatibility conditions. In data analysis inconsistency is not necessarily a weakness. In fact, it
may add information to the linear system. It might indicate observation errors (in 𝑦), for instance a
miscalculation of the total grocery bill. Or it might indicate an error in the linear model: the prices
may have changed in between, which leads to a different 𝐴.
Case 2)
We go back to the consistent and invertible case 0. Suppose a third combination of apples and pears
gives an inconsistent result.
$$\begin{pmatrix} 5 \\ 6 \\ 3 \end{pmatrix} = \begin{pmatrix} 3 & 4 \\ 5 & 2 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$
The third row is inconsistent with $x_1 = 1$, $x_2 = \tfrac12$ from case 0. But one can equally maintain that the
first row is inconsistent with the second and third. In short, we have redundant and inconsistent
information: the number of observations (𝑚 = 3) is larger than the number of unknowns (𝑛 = 2).
Consequently, matrix 𝐴 is not a square matrix.
Although a standard inversion is not possible anymore, redundancy is a positive characteristic in
engineering disciplines. In data analysis redundancy provides information on the quality of the
observations, it strengthens the estimation of the unknowns and allows us to perform statistical
tests. Thus, redundancy provides a handle to quality control.
But obviously the inconsistencies have to be eliminated. This is done by spreading them out in
an optimal way. This is the task of adjustment: to combine redundant and inconsistent data in an
optimal way. Two main questions will be addressed in the first part of this course:
• How to combine inconsistent data optimally?
• Which criterion defines what optimal is?
Errors
The inconsistencies may be caused by model errors. If the greengrocer changed his prices between the two rounds of shopping, we need to introduce new parameters. In surveying, however, the observation models are usually well defined, e.g. the sum of angles in a plane triangle equals π. So usually the inconsistencies arise from observation errors. To make the linear system y = Ax consistent again, we need to introduce an error vector e with the same dimension as the observation vector:
$$\underset{m\times1}{y} = \underset{m\times n}{A}\,\underset{n\times1}{x} + \underset{m\times1}{e}\,. \qquad (1.1)$$
Remark 1.2 (sign convention) In many textbooks the error vector is put on the same side of the equation as the observations: y + e = Ax. Where to put the e-vector is rather a philosophical question. Practically, though, one should be aware of the definitions used, i.e. how the sign of e is defined.
Errors are stochastic quantities (random variables, German: Zufallsvariable). Thus, the vector e is an (m-dimensional) stochastic variable. The vector of observations is consequently also a stochastic variable. Such quantities will be underlined, if necessary:
$$\underline{y} = Ax + \underline{e}\,.$$
Nevertheless, it will be assumed in the sequel that e is drawn from a distribution of random errors.
1.2 Historical development
The question how to combine redundant and inconsistent data has been treated in many different
ways in the past. To compare the different approaches, the following mathematical framework is
used:
$$\text{observation model:}\quad y = Ax$$
$$\text{combination:}\quad \underset{n\times m}{L}\,\underset{m\times1}{y} = \underset{n\times m}{L}\,\underset{m\times n}{A}\,\underset{n\times1}{x}$$
$$\text{invert:}\quad x = (LA)^{-1}Ly = By$$
From a modern viewpoint matrix 𝐵 is a left-inverse of 𝐴 because 𝐵𝐴 = 𝐼 . Note that such a left-inverse
is not unique, as it depends on the choice of the combination matrix 𝐿.
The trouble with this approach, obviously, is the arbitrariness of the choice of n observations. There are $\binom{m}{n}$ choices.
From a modern perspective the method of selected points resembles the principle of cross-validation.
The idea of this principle is to deliberately leave out a limited number of observations during the
estimation and to use the estimated parameters to predict values for those observations that were left
out. A comparison between actual and predicted observations provides information on the quality
of the estimated parameters.
Mayer called them equations of conditions, which is, from today's viewpoint, an unfortunate designation.
1 Tobias Mayer (1723–1762) made the breakthrough that enabled the lunar distance method to become a practicable way
of finding longitude at sea. As a young man, he displayed an interest in cartography and mathematics. In 1750, he
was appointed professor in the Georg-August Academy in Göttingen, where he was able to devote more time to his
interests in lunar theory and the longitude problem. From 1751 to 1755, he had an extensive correspondence with
Leonhard Euler, whose work on differential equations enabled Mayer to calculate lunar distance tables.
$$L = \begin{pmatrix} 1 & 1 & \cdots & 1 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 & 1 & 1 & \cdots & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 1 & 1 & \cdots & 1 \end{pmatrix} \quad (3\times 27)$$

$$\underset{24\times1}{y} = \underset{24\times4}{A}\,\underset{4\times1}{x}\,, \qquad \underset{4\times24}{L}\,y = LAx\,, \qquad x = (LA)^{-1}Ly$$
2 Euler (1707–1783) was a Swiss mathematician and physicist. He is considered to be one of the greatest mathematicians
who ever lived. Euler was the first to use the term function (defined by Leibniz in 1694) to describe an expression
involving various arguments; i.e. 𝑦 = 𝐹 (𝑥). He is credited with being one of the first to apply calculus to physics.
3 Pierre-Simon, Marquis de Laplace (1749–1827) was a French mathematician and astronomer who put the final capstone
on mathematical astronomy by summarizing and extending the work of his predecessors in his five volume Mécanique
Céleste (Celestial Mechanics) (1799–1825). This masterpiece translated the geometrical study of mechanics used by
Newton to one based on calculus, known as physical mechanics. He is also the discoverer of Laplace’s equation and the
Laplace transform, which appear in all branches of mathematical physics – a field he took a leading role in forming.
He became count of the Empire in 1806 and was named a marquis in 1817 after the restoration of the Bourbons.
Pierre-Simon Laplace was among the most influential scientists in history.
[4×24 combination matrix $L$ with entries +1, −1 and 0; the garbled original layout is not reproduced here.]

$$y = x_1 + \sin^2\varphi \; x_2$$
First attempt: All $\binom{5}{2} = \frac{5!}{2!\,(5-2)!} = \frac{5\cdot4\cdot3\cdot2\cdot1}{2\cdot1\cdot3\cdot2\cdot1} = 10$ combinations with 2 observations each.
⟹ 10 systems of equations (2 × 2)
⟹ 10 solutions
Comparison of results.
His result: gross variations of the ellipticity ⟹ reject the ellipsoidal hypothesis.
Second attempt: The mean deviation (or sum of deviations) should be zero:
$$\sum_{i=1}^{5} e_i = 0\,,$$
for the determination of orbits of comets and to derive the Earth's ellipticity. As will be derived in the next chapter, the matrix L will be the transpose of the design matrix A:
$$\mathcal{L} = \sum_{i=1}^{5} e_i^2 = e^\mathsf{T}e = (y - Ax)^\mathsf{T}(y - Ax) = \min_{\hat{x}}$$
$$\iff L = A^\mathsf{T}$$
$$\iff \underset{n\times1}{\hat{x}} = (\underbrace{A^\mathsf{T}A}_{n\times n})^{-1}\,\underset{n\times m}{A^\mathsf{T}}\,\underset{m\times1}{y}$$
After Legendre's publication, Gauss stated that he had already developed and used the method of least squares in 1794. He published his own theory only several years later. A bitter argument over scientific priority broke out. Nowadays it is acknowledged that Gauss's claim of priority is very likely valid, and that he refrained from publication because he considered his results still premature.
2 Least squares adjustment
Legendre’s method of least squares is actually not a method. Rather, it provides the criterion for the
optimal combination of inconsistent data: combine the observations such that the sum of squared
residuals is minimal. It was seen already that this criterion defines the combination matrix 𝐿:
𝐿𝑦 = 𝐿𝐴𝑥 =⇒ 𝑥 = (𝐿𝐴) −1 𝐿𝑦 .
But what is so special about 𝐿 = 𝐴T ? In this chapter we will derive the equations of least squares
adjustment from several mathematical viewpoints:
• geometry: smallest distance (Pythagoras)
• linear algebra: orthogonality between the optimal 𝑒 and the columns of 𝐴: 𝐴T𝑒 = 0
• calculus: minimizing target function → differentiation
• probability theory: BLUE (Best Linear Unbiased Estimate)
These viewpoints are elucidated by a simple but fundamental example in which a distance is mea-
sured twice.
2.1 Adjustment with observation equations

We will start with the model of the introduction y = Ax. This is the model of observation equations (German: vermittelnde Ausgleichung), in which observations are linearly related to unknowns.

Suppose that, in order to determine a certain distance, it is measured twice (direct observations, German: direkte Beobachtungen). Let the unknown distance be x and the observations y₁ and y₂:
$$y_1 = x,\quad y_2 = x \implies \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} x \implies y = ax \qquad (2.1)$$
If y₁ = y₂ the equations are consistent and the parameter x clearly solvable: x = y₁ = y₂. If, on the other hand, y₁ ≠ y₂, the equations are inconsistent and x not solvable directly. Given a limited measurement precision, the latter scenario will be more likely. Let's therefore take measurement errors e into account:
$$\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} x + \begin{pmatrix} e_1 \\ e_2 \end{pmatrix} \implies y = ax + e \qquad (2.2)$$
A geometric view

The column vector a spans a line y = ax in ℝ². This line is the 1D model space or range space (column space, German: Spaltenraum) of A: R(A). Inconsistency of the observation vector means that y does not lie on this line. Instead, there is some vector of discrepancies e that connects the observations to the line. Both this vector e and the point on the line, defined by the unknown parameter x, must be found, see the left panel of fig. 2.1. Adjustment of observations is about finding the optimal e and x.
Figure 2.1: (a) Inconsistent data: the observation vector y is not in the model space, i.e. not on the line spanned by a. (b) Least squares adjustment means orthogonal projection of y onto the line ax. This guarantees the shortest e.
An intuitive choice for "optimality" is to make the vector e as short as possible. The shortest possible e is indicated by a hat: ê. The squared length $\hat{e}^\mathsf{T}\hat{e} = \sum_i \hat{e}_i^2$ is the smallest of all possible $e^\mathsf{T}e = \sum_i e_i^2$, which explains the name least squares. If ê is determined, we will at the same time know the optimal x̂.
How do we get the shortest e? The right panel of fig. 2.1 shows that the shortest e is perpendicular to a:
$$\hat{e} \perp a$$
Subtracting ê from the vector of observations y leads to the point ŷ = ax̂ that is on the line and closest to y. This is the vector of adjusted observations. Being on the line means that ŷ is consistent. If we now substitute ê = y − ax̂, the least squares criterion leads us subsequently to optimal estimates of x, y and e:
$$a^\mathsf{T}\hat{e} = 0 \tag{2.3a}$$
$$a^\mathsf{T}(y - a\hat{x}) = 0 \tag{2.3b}$$
$$a^\mathsf{T}a\,\hat{x} = a^\mathsf{T}y \tag{2.3c}$$
$$\hat{x} = (a^\mathsf{T}a)^{-1}a^\mathsf{T}y \tag{2.3d}$$
$$\hat{y} = a\hat{x} = a(a^\mathsf{T}a)^{-1}a^\mathsf{T}y \tag{2.3e}$$
$$\hat{e} = y - \hat{y} = \big[I - a(a^\mathsf{T}a)^{-1}a^\mathsf{T}\big]\,y \tag{2.3f}$$
$$\hat{e}^\mathsf{T}\hat{e} = y^\mathsf{T}\big[I - a(a^\mathsf{T}a)^{-1}a^\mathsf{T}\big]\,y \tag{2.3g}$$
Exercise 2.1 Call the matrix in square brackets P and convince yourself that the sum of squares of the residuals (the squared length of ê) in the last line indeed follows from the line above. Two things should be shown: that P is symmetric, and that PP = P.
The least squares criterion leads us to the above algorithm. Indeed, the combination matrix reads
𝐿 = 𝐴T .
A calculus view
Let us define the Lagrangian or cost function:
$$\mathcal{L}_a(x) = \frac12 e^\mathsf{T}e\,, \qquad (2.4)$$
which is half of the sum of squared residuals. Its graph would be a parabola. The factor ½ shouldn't worry us: if we find the minimum of $\mathcal{L}_a$, then any scaled version of it is also minimized. The task is now to find the x̂ that minimizes the Lagrangian. With e = y − ax we get the minimization problem:
$$\min_{\hat{x}} \mathcal{L}_a(x) = \min_{\hat{x}} \frac12 (y - ax)^\mathsf{T}(y - ax) = \min_{\hat{x}} \left( \frac12 y^\mathsf{T}y - x\,a^\mathsf{T}y + \frac12 a^\mathsf{T}a\,x^2 \right).$$
The term ½ y^T y is just a constant that doesn't play a role in the minimization. The minimum occurs at the location where the derivative of $\mathcal{L}_a$ is zero (necessary condition):
$$\frac{\mathrm{d}\mathcal{L}_a}{\mathrm{d}x}(\hat{x}) = -a^\mathsf{T}y + a^\mathsf{T}a\,\hat{x} = 0\,.$$
The solution of this equation, which happens to be the normal equation (2.3c), is the x̂ we are looking for:
$$\hat{x} = (a^\mathsf{T}a)^{-1}a^\mathsf{T}y\,.$$
To make sure that the derivative does not give us a maximum, we must check that the second derivative of $\mathcal{L}_a$ is positive at x̂ (sufficiency condition):
$$\frac{\mathrm{d}^2\mathcal{L}_a}{\mathrm{d}x^2}(\hat{x}) = a^\mathsf{T}a > 0\,,$$
which is a positive constant for all x indeed.
Projectors

Figure 2.1 shows that the optimal, consistent ŷ is obtained by an orthogonal projection of the original y onto the line ax. Mathematically this was translated by (2.3e) as:
$$\hat{y} = a(a^\mathsf{T}a)^{-1}a^\mathsf{T}y =: P_a\,y\,. \qquad (2.5a)$$
It projects onto the line 𝑎𝑥 along a direction orthogonal to 𝑎. With this projection in mind, the
property 𝑃𝑎 𝑃𝑎 = 𝑃𝑎 becomes clear: if a vector has been projected already, the second projection has
no effect anymore.
Also (2.3f) can be abbreviated:
$$\hat{e} = y - P_a y = (I - P_a)\,y = P_a^{\perp}y\,,$$
which is also a projection. In order to give ê, the vector y is projected onto a line perpendicular to ax, along the direction a. And, of course, $P_a^{\perp}$ is idempotent as well.
Moreover, the definition (2.5c) makes clear that $P_a$ and $P_a^{\perp}$ are symmetric. Therefore the square sum of residuals (2.3g) could be simplified to:
$$\hat{e}^\mathsf{T}\hat{e} = y^\mathsf{T}P_a^{\perp\mathsf{T}}P_a^{\perp}y = y^\mathsf{T}P_a^{\perp}y\,.$$
At a more fundamental level the definition of the orthogonal projector 𝑃𝑎⊥ = 𝐼 − 𝑃𝑎 can be recast
into the equation:
𝐼 = 𝑃𝑎 + 𝑃𝑎⊥ .
Thus, we can decompose (German: zerlegen) every vector, say z, into two components: one in the subspace defined by $P_a$, the other mapped onto a subspace by $P_a^{\perp}$:
$$z = Iz = (P_a + P_a^{\perp})\,z = P_a z + P_a^{\perp}z\,.$$
In the case of ls adjustment, the subspaces are defined by the range space R (𝑎) and its orthogonal
complement R (𝑎) ⊥ :
𝑦 = 𝑃𝑎𝑦 + 𝑃𝑎⊥𝑦 = 𝑦ˆ + 𝑒ˆ ,
which is visualized in fig. 2.1.
Numerical example
With $a = (1\;\;1)^\mathsf{T}$ we will follow the steps from (2.3):
$$(a^\mathsf{T}a)\hat{x} = a^\mathsf{T}y \quad\longleftrightarrow\quad 2\hat{x} = y_1 + y_2$$
$$\hat{x} = (a^\mathsf{T}a)^{-1}a^\mathsf{T}y \quad\longleftrightarrow\quad \hat{x} = \tfrac12(y_1 + y_2) \quad\text{(average)}$$
$$\hat{y} = a(a^\mathsf{T}a)^{-1}a^\mathsf{T}y \quad\longleftrightarrow\quad \begin{pmatrix} \hat{y}_1 \\ \hat{y}_2 \end{pmatrix} = \frac12\begin{pmatrix} y_1 + y_2 \\ y_1 + y_2 \end{pmatrix}$$
$$\hat{e} = y - \hat{y} \quad\longleftrightarrow\quad \begin{pmatrix} \hat{e}_1 \\ \hat{e}_2 \end{pmatrix} = \frac12\begin{pmatrix} y_1 - y_2 \\ -y_1 + y_2 \end{pmatrix} \quad\text{(error distribution)}$$
$$\hat{e}^\mathsf{T}\hat{e} \quad\longleftrightarrow\quad \tfrac12(y_1 - y_2)^2 \quad\text{(least squares)}$$
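A minimal numerical sketch of these steps, assuming two made-up measurements y₁ = 10.02 and y₂ = 10.04 of the same distance:

```python
import numpy as np

a = np.array([[1.0], [1.0]])            # design vector for y1 = x, y2 = x
y = np.array([[10.02], [10.04]])        # two inconsistent measurements (made-up)

Pa  = a @ np.linalg.inv(a.T @ a) @ a.T  # orthogonal projector onto R(a)
Pap = np.eye(2) - Pa                    # complementary projector

x_hat = np.linalg.inv(a.T @ a) @ a.T @ y   # = mean of y1 and y2
y_hat = Pa @ y                              # adjusted observations
e_hat = Pap @ y                             # residuals, e1 = -e2

assert np.allclose(Pa @ Pa, Pa)             # idempotent: projecting twice changes nothing
print(x_hat, y_hat.ravel(), e_hat.ravel())  # -> 10.03, (10.03, 10.03), (-0.01, 0.01)
```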
Check the equations ŷ = P_a y and ê = P_a^⊥ y against the numerical results above.

2.2 Adjustment with condition equations

In the ideal case, in which the measurements y₁ and y₂ are without error, both observations would be equal: y₁ = y₂ or y₁ − y₂ = 0. In matrix notation:
$$\begin{pmatrix} 1 & -1 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = 0 \implies \underset{1\times2}{b^\mathsf{T}}\,\underset{2\times1}{y} = \underset{1\times1}{0}\,. \qquad (2.7)$$
In reality, though, both observations do contain errors, i.e. they are not equal: y₁ − y₂ ≠ 0 or b^Ty ≠ 0. Instead of 0 one would obtain a misclosure w (German: Widerspruch). If we recast the observation equation into y − e = ax, it is clear that it is (y − e) that has to obey the above condition:
$$b^\mathsf{T}(y - e) = 0 \implies w := b^\mathsf{T}y = b^\mathsf{T}e\,. \qquad (2.8)$$
In this condition equation (German: Bedingungsgleichung) the vector e is unknown. The task of adjustment according to the model of condition equations is to find the smallest possible e that fulfills the condition (2.8). At this stage, the model of condition equations does not involve any parameters x.
A geometric view
The condition (2.8) describes a line with normal vector b that goes through the point y. This line is the set of all possible vectors e. We are looking for the shortest e, i.e. the point closest to the origin. Figure 2.2 makes it clear that ê is perpendicular to the line b^Te = w. So ê lies on a line through b. Geometrically, ê is obtained by projecting y onto a line through b. Knowing the definition of the projectors from the previous section, we here define the following estimates (German: Schätzungen) by using the projector $P_b$:
$$P_b := b(b^\mathsf{T}b)^{-1}b^\mathsf{T}, \qquad \hat{e} = P_b\,y, \qquad \hat{y} = y - \hat{e} = (I - P_b)\,y = P_b^{\perp}y, \qquad \hat{e}^\mathsf{T}\hat{e} = y^\mathsf{T}P_b\,y\,.$$
Exercise 2.3 Confirm that the orthogonal projector 𝑃𝑏 is idempotent and verify that the equation for
𝑒ˆT𝑒ˆ is correct.
Figure 2.2: (a) The condition equation describes a line in ℝ², perpendicular to b and going through y. We are looking for a point e on this line. (b) Least squares adjustment with condition equations means orthogonal projection of y onto the line through b. This guarantees the shortest e.
Numerical example
With $b^\mathsf{T} = (1\;\;{-1})$ we get
$$b^\mathsf{T}b = 2 \implies (b^\mathsf{T}b)^{-1} = \frac12$$
$$P_b = b(b^\mathsf{T}b)^{-1}b^\mathsf{T} = \frac12\begin{pmatrix} 1 \\ -1 \end{pmatrix}\begin{pmatrix} 1 & -1 \end{pmatrix} = \frac12\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$$
$$\implies \hat{e} = P_b\,y = \frac12\begin{pmatrix} y_1 - y_2 \\ -y_1 + y_2 \end{pmatrix}$$
$$P_b^{\perp} = I - P_b = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} - \frac12\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} = \frac12\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$$
$$\implies \hat{y} = P_b^{\perp}y = \frac12\begin{pmatrix} y_1 + y_2 \\ y_1 + y_2 \end{pmatrix}$$
These results for 𝑦ˆ and 𝑒ˆ are the same as those for the adjustment with observation equations. The
estimator 𝑦ˆ describes the mean of the two observations, whereas the estimator 𝑒ˆ distributes the
inconsistencies equally. Also note that 𝑃𝑏 = 𝑃𝑎⊥ and vice versa.
A calculus view
Alternatively we can again determine the optimal 𝑒 by minimizing the target function L𝑏 (𝑒) = 𝑒 T𝑒,
but now under the condition 𝑏 T (𝑦 − 𝑒) = 0:
The main trick here – due to Lagrange – is to not consider the condition as a constraint or limitation
of the minimization problem. Instead, the minimization problem is extended. To be precise, the
condition is added to the original cost function, multiplied by a factor 𝜆. Such factors are called
Lagrangian multipliers. In case of more than one condition, each gets its own multiplier. The target
function L𝑏 is now a function of 𝑒 and 𝜆.
The minimization problem now consists in finding the ê and λ̂ that minimize the extended $\mathcal{L}_b$. Thus we need to derive the partial derivatives of $\mathcal{L}_b$ with respect to e and λ. Next, we impose the conditions that these partial derivatives are zero when evaluated at ê and λ̂:
$$\frac{\partial\mathcal{L}}{\partial e}(\hat{e},\hat{\lambda}) = 0 \implies \hat{e} - b\hat{\lambda} = 0$$
$$\frac{\partial\mathcal{L}}{\partial\lambda}(\hat{e},\hat{\lambda}) = 0 \implies b^\mathsf{T}y - b^\mathsf{T}\hat{e} = 0$$
In matrix terms, the minimization problem leads to:
$$\begin{pmatrix} I & -b \\ -b^\mathsf{T} & 0 \end{pmatrix}\begin{pmatrix} \hat{e} \\ \hat{\lambda} \end{pmatrix} = \begin{pmatrix} 0 \\ -b^\mathsf{T}y \end{pmatrix}\,. \qquad (2.11)$$
Because of the extension of the original minimization problem, this system is square. It might be
inverted in a straightforward manner, see also A.1. Instead, we will solve it stepwise. First, rewrite
the first line:
$$\hat{e} - b\hat{\lambda} = 0 \implies \hat{e} = b\hat{\lambda}\,.$$
Substituting this into the second line gives
$$b^\mathsf{T}y - b^\mathsf{T}b\,\hat{\lambda} = 0 \implies \hat{\lambda} = (b^\mathsf{T}b)^{-1}b^\mathsf{T}y\,,$$
and back-substitution yields
$$\hat{e} = b(b^\mathsf{T}b)^{-1}b^\mathsf{T}y\,.$$
2.3 Synthesis
Both the calculus and the geometric approach provide the same LS estimators. This is due to the orthogonality
$$b^\mathsf{T}a = 0\,,$$
which fundamentally connects the model with observation equations to the model with condition equations. Starting with the observation equations, and applying the orthogonality, one ends up with the condition equations:
$$y = ax + e \;\xrightarrow{\;b^\mathsf{T}\;}\; b^\mathsf{T}y = b^\mathsf{T}ax + b^\mathsf{T}e \;\xrightarrow{\;b^\mathsf{T}a=0\;}\; b^\mathsf{T}y = b^\mathsf{T}e\,.$$
3 Generalizations
In this chapter we will apply several generalizations. First we will take the LS adjustment problems to higher dimensions. What we will basically do is replace the vector a by an (m × n) matrix A and the vector b by an (m × (m − n)) matrix B. The basic structure of the projectors and estimators will remain the same.

Moreover, we need to be able to formulate the two LS problems with constant terms:
$$y = Ax + a_0 + e \qquad\text{and}\qquad B^\mathsf{T}(y - e) = b_0\,.$$
Next, we will deal with nonlinear observation equations and nonlinear condition equations. This
will involve linearization, the use of approximate values, and iteration.
We will also touch upon the datum problem, which arises if A contains linearly dependent columns. Mathematically we have rank A < n, so that the normal matrix satisfies det(A^TA) = 0 and is not invertible.
At the end we will merge both models in order to establish the so-called general model of adjustment
theory.
The vector of observations 𝑦, the vector of inconsistencies 𝑒 and their respective ls-estimators will
be (𝑚 × 1) vectors. The vector 𝑥 will contain 𝑛 unknown parameters. Thus the redundancy, that is
the number of redundant observations, is:
redundancy: 𝑟 = 𝑚 − 𝑛 .
3.1 Higher dimensions: the A-model (observation equations)

Geometry

y = Ax + e is the multidimensional extension of y = ax + e with given (reduced) vector of observations y (German: Absolutgliedvektor).

We split A into its n column vectors $a_i$, i = 1, …, n:
$$\underset{m\times n}{A} = [\,a_1, a_2, a_3, \ldots, a_n\,], \qquad a_i \in \mathbb{R}^{m\times1}$$
$$\underset{m\times1}{y} = \sum_{i=1}^{n} a_i\,x_i + e\,,$$
Example: m = 3, n = 2 (y spans an E³).

Figure 3.1: (a) The vectors y, a₁ and a₂ all lie in ℝ³. (b) To see that y is inconsistent, the space spanned by a₁ and a₂ is shown as the base plane. The observation vector is not in this plane, y ∉ R(A), i.e. y cannot be written as a linear combination of a₁ and a₂.
$$\mathcal{L}_A(x) = \frac12 e^\mathsf{T}e = \frac12(y - Ax)^\mathsf{T}(y - Ax) = \frac12 y^\mathsf{T}y - \frac12 y^\mathsf{T}Ax - \frac12 x^\mathsf{T}A^\mathsf{T}y + \frac12 x^\mathsf{T}A^\mathsf{T}Ax \;\longrightarrow\; \min_x$$
$$\frac{\partial\mathcal{L}}{\partial x}(\hat{x}) = 0 \implies \hat{e} = y - \hat{y} = \big[I - A(A^\mathsf{T}A)^{-1}A^\mathsf{T}\big]\,y = P_A^{\perp}y$$
Is $P_A^{\perp}$ idempotent?
$$P_A^{\perp}P_A^{\perp} = \big[I - A(A^\mathsf{T}A)^{-1}A^\mathsf{T}\big]\big[I - A(A^\mathsf{T}A)^{-1}A^\mathsf{T}\big] = I - 2A(A^\mathsf{T}A)^{-1}A^\mathsf{T} + A\underbrace{(A^\mathsf{T}A)^{-1}A^\mathsf{T}A}_{=I}(A^\mathsf{T}A)^{-1}A^\mathsf{T} = I - A(A^\mathsf{T}A)^{-1}A^\mathsf{T} = P_A^{\perp}$$
$$\hat{y} = P_A y = A(A^\mathsf{T}A)^{-1}A^\mathsf{T}y$$
$$h_{1B} = H_B - H_1 + e_{1B}$$
$$h_{13} = H_3 - H_1 + e_{13}$$
$$h_{12} = H_2 - H_1 + e_{12}$$
$$h_{32} = H_2 - H_3 + e_{32}$$
$$h_{1A} = H_A - H_1 + e_{1A}$$
$\Delta h^\mathsf{T} := [h_{1B}, h_{13}, h_{12}, h_{32}, h_{1A}]$: vector of levelled height differences; $H_1, H_2, H_3$: unknown heights of points $P_1, P_2, P_3$; $H_A, H_B$: given benchmarks.
In matrix notation:
$$\begin{pmatrix} h_{1B} \\ h_{13} \\ h_{12} \\ h_{32} \\ h_{1A} \end{pmatrix} = \begin{pmatrix} -1 & 0 & 0 \\ -1 & 0 & 1 \\ -1 & 1 & 0 \\ 0 & 1 & -1 \\ -1 & 0 & 0 \end{pmatrix}\begin{pmatrix} H_1 \\ H_2 \\ H_3 \end{pmatrix} + \begin{pmatrix} H_B \\ 0 \\ 0 \\ 0 \\ H_A \end{pmatrix} + \begin{pmatrix} e_{1B} \\ e_{13} \\ e_{12} \\ e_{32} \\ e_{1A} \end{pmatrix}$$
$$\begin{pmatrix} h_{1B} - H_B \\ h_{13} \\ h_{12} \\ h_{32} \\ h_{1A} - H_A \end{pmatrix} = \begin{pmatrix} -1 & 0 & 0 \\ -1 & 0 & 1 \\ -1 & 1 & 0 \\ 0 & 1 & -1 \\ -1 & 0 & 0 \end{pmatrix}\begin{pmatrix} H_1 \\ H_2 \\ H_3 \end{pmatrix} + \begin{pmatrix} e_{1B} \\ e_{13} \\ e_{12} \\ e_{32} \\ e_{1A} \end{pmatrix} \;\sim\; \underset{5\times1}{y} = \underset{5\times3}{A}\,\underset{3\times1}{x} + \underset{5\times1}{e}$$
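A sketch of this levelling adjustment; the design matrix is the one derived above, while the benchmark heights and observed height differences are made-up values:

```python
import numpy as np

H_A, H_B = 100.000, 101.000                          # benchmark heights (assumed)
dh = np.array([2.515, 1.501, 0.998, -0.503, 1.512])  # h_1B, h_13, h_12, h_32, h_1A (assumed)

A = np.array([[-1, 0, 0],     # h_1B = H_B - H_1
              [-1, 0, 1],     # h_13 = H_3 - H_1
              [-1, 1, 0],     # h_12 = H_2 - H_1
              [ 0, 1,-1],     # h_32 = H_2 - H_3
              [-1, 0, 0]], dtype=float)  # h_1A = H_A - H_1

y = dh - np.array([H_B, 0, 0, 0, H_A])     # reduced observations
x_hat = np.linalg.solve(A.T @ A, A.T @ y)  # estimated heights H_1, H_2, H_3
print(x_hat)
```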
3.2 The datum problem

So far we have disregarded the fact that the matrix A^TA might not be invertible because it is rank deficient. From matrix algebra it is known that the rank of the normal equation matrix N := A^TA, rank N, equals the rank of A, rank A. If it should happen now that – for some reason – matrix A is rank deficient, then the normal equation matrix N = A^TA cannot be inverted. The following statements are equivalent:
• Matrix $\underset{m\times n}{A}$ is rank deficient (rank A < n),
$$h_{12} = H_2 - H_1, \quad h_{13} = H_3 - H_1, \quad h_{32} = H_2 - H_3 \implies \begin{pmatrix} h_{12} \\ h_{13} \\ h_{32} \end{pmatrix} = \begin{pmatrix} -1 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & 1 & -1 \end{pmatrix}\begin{pmatrix} H_1 \\ H_2 \\ H_3 \end{pmatrix} \implies \underset{3\times1}{y} = \underset{3\times3}{A}\,\underset{3\times1}{x}$$
• m = 3, n = 3, rank A = 2 ⟹ d = n − rank A = 1 ⟹ r = m − (n − d) = 1,
• $\det A = -1 \cdot \det\begin{pmatrix} 0 & 1 \\ 1 & -1 \end{pmatrix} - 1 \cdot \det\begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix} = 1 + (-1) = 0$,
• ⟹ A and N = A^TA are not invertible,
• d := dim N(A) > 0,
• Ax = 0 has a nontrivial solution ⟹ homogeneous solution x_hom ≠ 0.
⟹ x + λx_hom is a solution of y = Ax because
$$A(x + \lambda x_{\text{hom}}) = Ax + \lambda\underbrace{Ax_{\text{hom}}}_{=0} = Ax = y$$
is fulfilled.
Interpretation:
• Unknown heights can be changed by an arbitrary constant height shift without affecting the
observations.
• Observed height differences are not sensitive to the null space N (𝐴).
One remedy is to eliminate one unknown, e.g. moving H₁ to the left-hand side (approach 1):
$$\implies \begin{pmatrix} h_{12} + H_1 \\ h_{13} + H_1 \\ h_{32} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} H_2 \\ H_3 \end{pmatrix}$$
Alternatively, a datum constraint can be imposed:
$$H_1 = 0 \implies \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}\begin{pmatrix} H_1 \\ H_2 \\ H_3 \end{pmatrix} = 0 \;\sim\; \underset{d\times n}{D^\mathsf{T}}\,\underset{n\times1}{x} = \underset{d\times1}{c}$$
In order to remove the rank deficiency of A, the matrix D^T must be chosen in such a way that
$$\operatorname{rank}\,[\,\underset{n\times m}{A^\mathsf{T}} \mid \underset{n\times d}{D}\,] = n\,.$$
AD = 0, however, is not required. As an example, D^T = [1, −1, 0] is not permitted (it does not increase the rank, since it is orthogonal to the null-space vector (1, 1, 1)^T). The approach of augmenting the solution space is far more flexible compared to approach 1: no changes of the original quantities y, A are necessary. Even curious constraints are allowed, as long as the datum deficiency is resolved. However, we are faced with the constrained Lagrangian
$$\mathcal{L}_D(x,\lambda) = \frac12 e^\mathsf{T}e + \lambda^\mathsf{T}(D^\mathsf{T}x - c) = \frac12 y^\mathsf{T}y - y^\mathsf{T}Ax + \frac12 x^\mathsf{T}A^\mathsf{T}Ax + \lambda^\mathsf{T}(D^\mathsf{T}x - c)$$
$$\frac{\partial\mathcal{L}_D}{\partial x} = -A^\mathsf{T}y + A^\mathsf{T}Ax + D\lambda = 0$$
$$\frac{\partial\mathcal{L}_D}{\partial\lambda} = D^\mathsf{T}x - c = 0$$
$$\implies \begin{pmatrix} A^\mathsf{T}A & D \\ D^\mathsf{T} & 0 \end{pmatrix}\begin{pmatrix} \hat{x} \\ \hat{\lambda} \end{pmatrix} = \begin{pmatrix} A^\mathsf{T}y \\ c \end{pmatrix} \implies \underset{(n+d)\times(n+d)}{M}\,\hat{z} = \underset{(n+d)\times1}{v}$$
E.g.
$$A = \begin{pmatrix} -1 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & 1 & -1 \end{pmatrix} \implies A^\mathsf{T}A = \begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{pmatrix}$$
$$M = \begin{pmatrix} 2 & -1 & -1 & 1 \\ -1 & 2 & -1 & 0 \\ -1 & -1 & 2 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}$$
$$\det M = -1 \cdot \det\begin{pmatrix} -1 & -1 & 1 \\ 2 & -1 & 0 \\ -1 & 2 & 0 \end{pmatrix} = -1 \cdot \det\begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix} = -3$$
$$\implies M \text{ regular} \implies \hat{z} = M^{-1}v$$
The explicit solution, with the modified normal matrix $N := A^\mathsf{T}A + DD^\mathsf{T}$, reads
$$\hat{x} = N^{-1}\Big[A^\mathsf{T}y + Dc - D(D^\mathsf{T}N^{-1}D)^{-1}\big(D^\mathsf{T}N^{-1}A^\mathsf{T}y + (D^\mathsf{T}N^{-1}D - I)\,c\big)\Big]$$
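A sketch of solving the bordered system Mẑ = v for this example; the observed height differences and the constraint value c are made-up:

```python
import numpy as np

A = np.array([[-1, 1, 0],
              [-1, 0, 1],
              [ 0, 1,-1]], dtype=float)   # rank 2: height differences only
D = np.array([[1.0], [0.0], [0.0]])       # datum constraint on H_1
y = np.array([0.998, 1.501, -0.503])      # made-up height differences
c = np.array([98.486])                    # assumed value fixing H_1

N = A.T @ A
M = np.block([[N, D], [D.T, np.zeros((1, 1))]])  # bordered normal matrix
v = np.concatenate([A.T @ y, c])
z = np.linalg.solve(M, v)                 # [H_1, H_2, H_3, lambda]
print(z[:3])
```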
3.3 Linearization of non-linear observation equations

General 1-D formulation

The functional model
$$y = f(x)\,,$$
$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n = f(x_0) + \frac{\mathrm{d}f}{\mathrm{d}x}\bigg|_{x_0}(x - x_0) + \underbrace{\frac12\frac{\mathrm{d}^2 f}{\mathrm{d}x^2}\bigg|_{x_0}(x - x_0)^2 + \ldots}_{\text{negligible if } x - x_0 \text{ small}}$$
Subtracting f(x₀) yields
$$f(x) - f(x_0) = y - y_0 = \frac{\mathrm{d}f}{\mathrm{d}x}\bigg|_{x_0}(x - x_0) + \ldots$$
$$\underbrace{\Delta y = \frac{\mathrm{d}f}{\mathrm{d}x}\bigg|_{0}\,\Delta x}_{\text{linear model}} + \underbrace{\mathcal{O}(\Delta x^2)}_{\substack{\text{terms of higher order} \\ =\ \text{model errors}}}$$
with Δx := x − x₀ and Δy := y − y₀.
Multidimensional formulation:
$$y_i = f_i(x_j), \qquad i = 1, \ldots, m;\; j = 1, \ldots, n$$
$$x_{j,0} \longrightarrow y_{i,0} = f_i(x_{j,0})$$
$$\implies \begin{pmatrix} \Delta y_1 \\ \Delta y_2 \\ \vdots \\ \Delta y_m \end{pmatrix} = \underbrace{\begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \vdots & & \vdots \\ \frac{\partial f_m}{\partial x_1} & \frac{\partial f_m}{\partial x_2} & \cdots & \frac{\partial f_m}{\partial x_n} \end{pmatrix}_{\!0}}_{\text{Jacobian matrix } A}\begin{pmatrix} \Delta x_1 \\ \Delta x_2 \\ \vdots \\ \Delta x_n \end{pmatrix} \;\sim\; \Delta y = A(x_0)\,\Delta x$$
27
3 Generalizations
Distances:
$$\underbrace{\Delta s_{ij} := s_{ij} - s^0_{ij}}_{\text{“reduced observation”}} = \begin{pmatrix} -\dfrac{x^0_j - x^0_i}{s^0_{ij}} & -\dfrac{y^0_j - y^0_i}{s^0_{ij}} & \dfrac{x^0_j - x^0_i}{s^0_{ij}} & \dfrac{y^0_j - y^0_i}{s^0_{ij}} \end{pmatrix}\begin{pmatrix} \Delta x_i \\ \Delta y_i \\ \Delta x_j \\ \Delta y_j \end{pmatrix}$$
$$\Delta y = A(x_0)\,\Delta x$$
Sometimes it is more convenient to use implicit differentiation within the linearization of observa-
tion equations.
Depart from $s_{ij}^2 = (x_j - x_i)^2 + (y_j - y_i)^2$ instead of from $s_{ij}$ and calculate the total differential:
$$2s_{ij}\,\mathrm{d}s_{ij} = 2(x_j - x_i)(\mathrm{d}x_j - \mathrm{d}x_i) + 2(y_j - y_i)(\mathrm{d}y_j - \mathrm{d}y_i)$$
$$\Delta s_{ij} := s_{ij} - s^0_{ij} = \frac{x^0_j - x^0_i}{s^0_{ij}}(\Delta x_j - \Delta x_i) + \frac{y^0_j - y^0_i}{s^0_{ij}}(\Delta y_j - \Delta y_i)$$
Grid bearings:
$$T_{ij} = \arctan\frac{x_j - x_i}{y_j - y_i}$$
Directions:
$$r_{ij} = T_{ij} - \omega_i = \arctan\frac{x_j - x_i}{y_j - y_i} - \omega_i = r^0_{ij} - \frac{y^0_j - y^0_i}{(s^0_{ij})^2}\Delta x_i + \frac{x^0_j - x^0_i}{(s^0_{ij})^2}\Delta y_i + \frac{y^0_j - y^0_i}{(s^0_{ij})^2}\Delta x_j - \frac{x^0_j - x^0_i}{(s^0_{ij})^2}\Delta y_j - \omega_i$$
Angles:
$$\alpha_{ijk} = T_{ik} - T_{ij} = \arctan\frac{x_k - x_i}{y_k - y_i} - \arctan\frac{x_j - x_i}{y_j - y_i}$$
3D distances:
$$d_{ij} = \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2 + (z_j - z_i)^2} \qquad (i = 1, \ldots, 4;\; j \equiv P)$$
… linearization as usual.
Vertical angles (other trigonometric relations applicable):
$$\beta_{ij} = \operatorname{arccot}\frac{\sqrt{(x_j - x_i)^2 + (y_j - y_i)^2}}{z_j - z_i} = \operatorname{arccot}\frac{s_{ij}}{z_j - z_i}$$
$$= \beta^0_{ij} - \frac{1}{1 + \Big(\dfrac{s^0_{ij}}{z^0_j - z^0_i}\Big)^2}\cdot\big(\ldots\,\Delta x_i + \ldots\,\Delta y_i + \ldots + \ldots\,\Delta z_j\big)$$
[Figure: plan view of the points P₁, …, P₄ with distances d₁, …, d₄ to the new point P.]
$$\Delta y = \frac{\mathrm{d}f}{\mathrm{d}x}\bigg|_{x_0}\Delta x + e = A(x_0)\,\Delta x + e\,.$$
The iteration scheme can be summarized as follows (reconstructed from the flow chart):

1. Stop criterion: is ‖Δx̂‖ < ε? If not, update the approximate values and iterate again.
2. ŷ = f(x₀) + AΔx̂: adjusted (estimated) observations.
3. ê = y − ŷ: estimated residuals (inconsistencies).
4. Orthogonality check: is A^Tê = 0 satisfied? If not, there is an error in the iteration process.
5. Main check: are the nonlinear observation equations satisfied by the adjusted observations, ŷ − f(x̂) = 0? If not, there is an error in the iteration process or an erroneous linearization.
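A compact Gauss-Newton sketch implementing this scheme for distances measured from four anchor points to one unknown point; all coordinates and observations are made-up:

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
s_obs   = np.array([70.73, 70.68, 70.75, 70.70])    # measured distances (assumed)
x = np.array([45.0, 45.0])                           # approximate coordinates x_0

for _ in range(10):
    diff = x - anchors                   # (4, 2)
    s0   = np.linalg.norm(diff, axis=1)  # approximate distances
    A    = diff / s0[:, None]            # Jacobian: ds/dx = (x - x_i)/s
    dy   = s_obs - s0                    # reduced observations
    dx   = np.linalg.solve(A.T @ A, A.T @ dy)
    x   += dx
    if np.linalg.norm(dx) < 1e-10:       # stop criterion (step 1)
        break

e_hat = s_obs - np.linalg.norm(x - anchors, axis=1)
assert np.allclose(A.T @ e_hat, 0, atol=1e-8)        # orthogonality check (step 4)
print(x, e_hat)
```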
3.4 Higher dimensions: the B-model (Condition equations)

For the levelling network, the error-free observations have to satisfy the conditions
$$h_{1B} - h_{1A} = (H_B - H_1) - (H_A - H_1) = H_B - H_A$$
$$h_{13} + h_{32} - h_{12} = (H_3 - H_1) + (H_2 - H_3) - (H_2 - H_1) = 0$$
or
$$\begin{pmatrix} 1 & 0 & 0 & 0 & -1 \\ 0 & 1 & -1 & 1 & 0 \end{pmatrix}\begin{pmatrix} h_{1B} \\ h_{13} \\ h_{12} \\ h_{32} \\ h_{1A} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 & -1 \\ 0 & 1 & -1 & 1 & 0 \end{pmatrix}\begin{pmatrix} H_B \\ 0 \\ 0 \\ 0 \\ H_A \end{pmatrix}.$$
Due to erroneous observations, a vector e of unknown inconsistencies must be introduced in order to make our linear model consistent:
$$\begin{pmatrix} 1 & 0 & 0 & 0 & -1 \\ 0 & 1 & -1 & 1 & 0 \end{pmatrix}\begin{pmatrix} h_{1B} - e_{1B} \\ h_{13} - e_{13} \\ h_{12} - e_{12} \\ h_{32} - e_{32} \\ h_{1A} - e_{1A} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 & -1 \\ 0 & 1 & -1 & 1 & 0 \end{pmatrix}\begin{pmatrix} H_B \\ 0 \\ 0 \\ 0 \\ H_A \end{pmatrix}$$
or
$$\underset{2\times5}{B^\mathsf{T}}(\underset{5\times1}{\Delta h} - \underset{5\times1}{e}) = \underset{2\times1}{B^\mathsf{T}c}\,.$$
A 1: Starting from
$$B^\mathsf{T}(\Delta h - e) = B^\mathsf{T}c\,,$$
Figure 3.6
where solely e is unknown, we collect all unknown parts on the left and all known quantities on the right-hand side:
$$\implies B^\mathsf{T}\Delta h - B^\mathsf{T}e = B^\mathsf{T}c$$
$$B^\mathsf{T}e = B^\mathsf{T}\Delta h - B^\mathsf{T}c$$
$$\underset{r\times m}{B^\mathsf{T}}\,\underset{m\times1}{e} = B^\mathsf{T}y =: \underset{r\times1}{w}$$
with
w: vector of misclosures, w := B^Ty
y: reduced vector of observations
r: number of conditions, r = m − n.
Sometimes the number of conditions can hardly be determined without knowledge of the number n of unknowns in the A-model. This will be treated later in more detail together with the so-called datum problem.
A 3:
$$\mathcal{L}_B(e,\lambda) = \frac12 e^\mathsf{T}e + \lambda^\mathsf{T}(B^\mathsf{T}y - B^\mathsf{T}e) \;\longrightarrow\; \min_{e,\lambda}$$
$$\frac{\partial\mathcal{L}_B}{\partial e}(\hat{e},\hat{\lambda}) = \hat{e} - B\hat{\lambda} = 0$$
$$\frac{\partial\mathcal{L}_B}{\partial\lambda}(\hat{e},\hat{\lambda}) = -B^\mathsf{T}\hat{e} + B^\mathsf{T}y = 0 \qquad (w = B^\mathsf{T}y)$$
$$\implies \begin{pmatrix} \underset{m\times m}{I} & -\underset{m\times r}{B} \\ -\underset{r\times m}{B^\mathsf{T}} & \underset{r\times r}{0} \end{pmatrix}\begin{pmatrix} \hat{e} \\ \hat{\lambda} \end{pmatrix} = \begin{pmatrix} 0 \\ -w \end{pmatrix}$$
$$\hat{e} = B\hat{\lambda} \implies B^\mathsf{T}B\hat{\lambda} = w \implies \hat{\lambda} = (B^\mathsf{T}B)^{-1}w \qquad (\operatorname{rank}(B^\mathsf{T}B) = r)$$
$$\implies \hat{e} = B(B^\mathsf{T}B)^{-1}w = B(B^\mathsf{T}B)^{-1}B^\mathsf{T}y = P_B\,y$$
$$\hat{y} = y - \hat{e} = \big[I - B(B^\mathsf{T}B)^{-1}B^\mathsf{T}\big]\,y = P_B^{\perp}y$$
Left-multiplying y = Ax + e by B^T gives
$$B^\mathsf{T}y = B^\mathsf{T}Ax + B^\mathsf{T}e = B^\mathsf{T}e \iff B^\mathsf{T}A = 0\,.$$
E.g.:
$$\underset{2\times5}{\begin{pmatrix} 1 & 0 & 0 & 0 & -1 \\ 0 & 1 & -1 & 1 & 0 \end{pmatrix}}\;\underset{5\times3}{\begin{pmatrix} -1 & 0 & 0 \\ -1 & 0 & 1 \\ -1 & 1 & 0 \\ 0 & 1 & -1 \\ -1 & 0 & 0 \end{pmatrix}} = \underset{2\times3}{\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}}\,.$$
3.5 Linearization of non-linear condition equations

Example: in a plane triangle with side lengths a, b and opposite angles α, β, the sine law must hold for the error-free observations:
$$(a - e_a)\sin(\beta - e_\beta) - (b - e_b)\sin(\alpha - e_\alpha) = 0$$
Linearization with respect to the inconsistencies (Taylor point $e^0_a, e^0_b, e^0_\alpha, e^0_\beta$):
$$f(e_a, e_b, e_\alpha, e_\beta) = f(e^0_a, e^0_b, e^0_\alpha, e^0_\beta) + \frac{\partial f}{\partial e_a}\bigg|_0(e_a - e^0_a) + \ldots + \frac{\partial f}{\partial e_\beta}\bigg|_0(e_\beta - e^0_\beta)$$
$$= f(e^0_a, e^0_b, e^0_\alpha, e^0_\beta) + \frac{\partial f}{\partial e_a}\bigg|_0 e_a + \ldots + \frac{\partial f}{\partial e_\beta}\bigg|_0 e_\beta - \frac{\partial f}{\partial e_a}\bigg|_0 e^0_a - \ldots - \frac{\partial f}{\partial e_\beta}\bigg|_0 e^0_\beta\,,$$
which is of the form
$$w - B^\mathsf{T}e = 0$$
3.6 Higher dimensions: the mixed model
In the 𝐴-model, every observation is – in general – a linear or non-linear function of all unknown
quantities, i.e.
𝑦𝑖 = 𝑓𝑖 (𝑥 1, 𝑥 2, . . . , 𝑥𝑛 ) = 𝑓𝑖 (𝑥 𝑗 ) = 𝑓𝑖 (𝑥), 𝑖 = 1, . . . , 𝑚; 𝑗 = 1, . . . , 𝑛
and every observation equation y_i contains just one single inconsistency e_i. In contrast, in the B-model no unknown parameters x exist and we have linear or non-linear relationships between the observations only,
However, in many applications, functional relationships exist between both, parameters 𝑥 and ob-
servations 𝑦, which can be formulated only as an implicit function
𝑓 (𝑥 𝑗 , 𝑦𝑖 ) = 0, 𝑖 = 1, . . . , 𝑚; 𝑗 = 1, . . . , 𝑛 .
This will lead to a combination of both, 𝐴- and 𝐵-model, which is known as the general model of
adjustment, mixed model or Gauss-Helmert model, in honor of Friedrich Robert Helmert1 .
Example: best fitting circle with unknown radius r and unknown centre coordinates u_M, v_M; observations u_i and v_i inconsistent:
$$f(\underbrace{r, u_M, v_M}_{\substack{\text{unknown} \\ \text{parameters “}x\text{”}}},\; \underbrace{u_i - e_{u_i},\; v_i - e_{v_i}}_{\text{observations “}y\text{”}\,-\,\text{inconsistencies “}e\text{”}}) = (u_i - e_{u_i} - u_M)^2 + (v_i - e_{v_i} - v_M)^2 - r^2 = 0$$
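A minimal sketch of evaluating this implicit condition at approximate parameter values; the point observations are made-up:

```python
import numpy as np

u = np.array([ 1.02, -0.01, -0.98,  0.03])   # observed u_i (assumed)
v = np.array([ 0.01,  1.01, -0.02, -0.99])   # observed v_i (assumed)

def f(r, uM, vM, u, v):
    """Implicit circle condition per point: (u - uM)^2 + (v - vM)^2 - r^2."""
    return (u - uM)**2 + (v - vM)**2 - r**2

# misclosures at approximate parameters r = 1, centre (0, 0); these would be
# driven to zero over e and x in a Gauss-Helmert iteration
w = f(1.0, 0.0, 0.0, u, v)
print(w)
```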
1 Friedrich Robert Helmert (1843–1917) was a famous German geodesist and mathematician who introduced this model in 1872 in his book "Die Ausgleichungsrechnung nach der Methode der kleinsten Quadrate" (Adjustment theory using the method of least squares). He is also known as the father of many mathematical and physical theories of modern geodesy.
4 Weighted least squares
4.1 Weighted observation equations

Analytical interpretation

Suppose the two observations receive individual weights, y₁ → w₁ and y₂ → w₂. Target function:
$$\mathcal{L}^w_a = \frac12\big[w_1(y_1 - ax)^2 + w_2(y_2 - ax)^2\big] = \frac12(y - ax)^\mathsf{T}\begin{pmatrix} w_1 & 0 \\ 0 & w_2 \end{pmatrix}(y - ax) = \frac12(y - ax)^\mathsf{T}W(y - ax)$$
$$= \frac12 y^\mathsf{T}Wy - y^\mathsf{T}Wax + \frac12 x^\mathsf{T}a^\mathsf{T}Wax = \frac12 e^\mathsf{T}We$$
Necessary condition:
$$\hat{x}:\; \min_x \mathcal{L}_a(x) \implies \frac{\partial\mathcal{L}_a}{\partial x}(\hat{x}) = 0$$
$$\implies \frac{\partial\mathcal{L}}{\partial x} = -a^\mathsf{T}Wy + a^\mathsf{T}Wa\,\hat{x} = 0 \implies a^\mathsf{T}Wa\,\hat{x} = a^\mathsf{T}Wy \qquad\text{normal equation}$$
Sufficient condition:
$$\frac{\partial^2\mathcal{L}}{\partial x^2} = a^\mathsf{T}Wa > 0\,, \quad\text{since } W \text{ is positive definite}$$
The normal equation implies
$$a^\mathsf{T}W(y - a\hat{x}) = 0 \implies a^\mathsf{T}W\hat{e} = 0 \iff \hat{e} \perp Wa$$
Example

With $a = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $W = \begin{pmatrix} w_1 & 0 \\ 0 & w_2 \end{pmatrix}$:
$$a^\mathsf{T}W = \begin{pmatrix} 1 & 1 \end{pmatrix}\begin{pmatrix} w_1 & 0 \\ 0 & w_2 \end{pmatrix} = \begin{pmatrix} w_1 & w_2 \end{pmatrix}, \qquad a^\mathsf{T}Wa = \begin{pmatrix} w_1 & w_2 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = w_1 + w_2$$
$$\implies \hat{x} = \frac{1}{w_1 + w_2}(w_1 y_1 + w_2 y_2) \quad\text{(weighted mean)} = \frac{w_1}{w_1 + w_2}\,y_1 + \frac{w_2}{w_1 + w_2}\,y_2$$
$$\begin{pmatrix} \hat{e}_1 \\ \hat{e}_2 \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} - \begin{pmatrix} \hat{y}_1 \\ \hat{y}_2 \end{pmatrix} = \frac{1}{w_1 + w_2}\begin{pmatrix} (w_1 + w_2)y_1 - w_1 y_1 - w_2 y_2 \\ (w_1 + w_2)y_2 - w_1 y_1 - w_2 y_2 \end{pmatrix} = \frac{1}{w_1 + w_2}\begin{pmatrix} w_2(y_1 - y_2) \\ w_1(y_2 - y_1) \end{pmatrix}$$
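A numerical sketch of the weighted mean, with made-up observations and an assumed weight ratio of 4 : 1:

```python
import numpy as np

a = np.array([[1.0], [1.0]])
y = np.array([[10.02], [10.04]])
W = np.diag([4.0, 1.0])        # first measurement assumed twice as precise (w = 1/sigma^2)

x_hat = np.linalg.solve(a.T @ W @ a, a.T @ W @ y)   # weighted mean -> 10.024
e_hat = y - a * x_hat
print(x_hat, a.T @ W @ e_hat)  # a^T W e_hat = 0: residuals orthogonal to W a
```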
Projectors
Figure 4.1: (a) Circle. (b) Ellipse aligned with coordinate axes. (c) General ellipse, not aligned with coordinate axes.
4.1.1 Geometry
$$F(z) = z^\mathsf{T}Wz = c$$
$$w_1 z_1^2 + w_2 z_2^2 = c$$
$$\frac{w_1}{c}z_1^2 + \frac{w_2}{c}z_2^2 = 1$$
$$\frac{z_1^2}{c/w_1} + \frac{z_2^2}{c/w_2} = 1$$
$$\frac{z_1^2}{a} + \frac{z_2^2}{b} = 1 \qquad\text{ellipse equation}$$
General ellipse: for a full (non-diagonal) W one obtains a family of ellipses (c may vary!), the principal axes of which are, in general, not aligned with the coordinate axes.
• ê is the projection of y onto (Wa)^⊥ in the direction of a:
$$\implies \hat{e} = P_{(Wa)^{\perp},\,a}\,y \quad\text{with}\quad P_{(Wa)^{\perp},\,a} = P^{\perp}_{a,\,(Wa)^{\perp}} = I - a(a^\mathsf{T}Wa)^{-1}a^\mathsf{T}W$$
$$\hat{e} = \big[I - a(a^\mathsf{T}Wa)^{-1}a^\mathsf{T}W\big]\,y$$
• Because ê ⊥̸ a (or a^Tê ≠ 0), the projections are oblique projections (or orthogonal projections with respect to the metric W: ê ⊥_W a, i.e. a^TWê = 0).
In higher dimensions the model becomes
$$\underset{m\times1}{y} = \underset{m\times n}{A}\,\underset{n\times1}{x} + \underset{m\times1}{e}\,.$$
Replace a by A (the normal matrix A^TWA is then n×n)!
4.2 Weighted condition equations

Geometry

Starting point again: b^Ta = 0 (a ⊥ b). In the W-metric the shortest e points along W^{-1}b, which spans the direction of (Wa)^⊥, i.e. ê = W^{-1}bα for some scalar α:
$$\implies b^\mathsf{T}\hat{e} = b^\mathsf{T}W^{-1}b\,\alpha = b^\mathsf{T}y$$
$$\implies \alpha = (b^\mathsf{T}W^{-1}b)^{-1}b^\mathsf{T}y$$
$$\implies \hat{e} = W^{-1}b(b^\mathsf{T}W^{-1}b)^{-1}b^\mathsf{T}y$$
$$\implies \hat{y} = y - \hat{e} = \big[I - W^{-1}b(b^\mathsf{T}W^{-1}b)^{-1}b^\mathsf{T}\big]\,y$$
Calculus

$$\mathcal{L}_b(e,\lambda) = \frac12 e^\mathsf{T}We + \lambda^\mathsf{T}(b^\mathsf{T}y - b^\mathsf{T}e)\,,\quad\text{etc.}$$
$$\hat{e}:\; \min_e\, e^\mathsf{T}We \quad\text{under the constraint}\quad b^\mathsf{T}e = b^\mathsf{T}y$$
Lagrange:
$$\mathcal{L}_b(e,\lambda) = \frac12 e^\mathsf{T}We + \lambda^\mathsf{T}(b^\mathsf{T}y - b^\mathsf{T}e)$$
Find the ê and λ̂ which minimize $\mathcal{L}_b$:
$$\frac{\partial\mathcal{L}_b}{\partial e}(\hat{e},\hat{\lambda}) = W\hat{e} - b\hat{\lambda} = 0$$
$$\frac{\partial\mathcal{L}_b}{\partial\lambda}(\hat{e},\hat{\lambda}) = -b^\mathsf{T}\hat{e} + b^\mathsf{T}y = 0$$
$$\iff \begin{pmatrix} W & -b \\ -b^\mathsf{T} & 0 \end{pmatrix}\begin{pmatrix} \hat{e} \\ \hat{\lambda} \end{pmatrix} = \begin{pmatrix} 0 \\ -b^\mathsf{T}y \end{pmatrix}$$
1st row: $W\hat{e} - b\hat{\lambda} = 0 \implies \hat{e} = W^{-1}b\hat{\lambda}$.
2nd row: $b^\mathsf{T}\hat{e} = b^\mathsf{T}y \implies b^\mathsf{T}W^{-1}b\,\hat{\lambda} = b^\mathsf{T}y$.
Solve for λ̂: $\hat{\lambda} = (b^\mathsf{T}W^{-1}b)^{-1}b^\mathsf{T}y$, and substitute in the 1st row.
Higher dimensions
Replace 𝑏 with 𝐵.
$$B^\mathsf{T}y = B^\mathsf{T}e\,, \quad\text{since}\quad \left.\begin{array}{l} y = Ax + e \\ B^\mathsf{T}A = 0 \end{array}\right\} \implies B^\mathsf{T}y = B^\mathsf{T}Ax + B^\mathsf{T}e = B^\mathsf{T}e$$
$$\begin{pmatrix} \underset{m\times m}{W} & \underset{m\times r}{B} \\ \underset{r\times m}{B^\mathsf{T}} & \underset{r\times r}{0} \end{pmatrix}\begin{pmatrix} \hat{e} \\ \hat{\lambda} \end{pmatrix} = \begin{pmatrix} 0 \\ B^\mathsf{T}y \end{pmatrix}$$
$$\hat{e} = W^{-1}B(B^\mathsf{T}W^{-1}B)^{-1}B^\mathsf{T}y$$
$$\hat{y} = \big[I - W^{-1}B(B^\mathsf{T}W^{-1}B)^{-1}B^\mathsf{T}\big]\,y$$
With a constant vector c the conditions read
$$B^\mathsf{T}(y - e) = c \implies B^\mathsf{T}e = \underbrace{B^\mathsf{T}y - c}_{=:\,w}$$
$$\implies \hat{e} = W^{-1}B(B^\mathsf{T}W^{-1}B)^{-1}\underbrace{[B^\mathsf{T}y - c]}_{w}\,, \qquad \hat{y} = y - \hat{e} = \ldots$$
4.3 Stochastics
Probabilistic formulation (stochastic quantities are underlined):
$$\text{Version 1:}\quad \underline{y} = Ax + \underline{e}, \qquad E\{\underline{e}\} = 0, \qquad D\{\underline{e}\} = Q_y$$
$$\text{Version 2:}\quad \underbrace{E\{\underline{y}\} = Ax}_{\text{functional model}}, \qquad \underbrace{D\{\underline{y}\} = Q_y}_{\substack{\text{stochastic model:} \\ \text{variance-covariance matrix}}}$$
Together they constitute the mathematical model.
4.4 Best Linear Unbiased Estimation (blue)
$$\hat{x} = (A^\mathsf{T}WA)^{-1}A^\mathsf{T}Wy = Ly$$
$$\longrightarrow\; E\{\hat{x}\} = (A^\mathsf{T}WA)^{-1}A^\mathsf{T}W\,E\{y\} = (A^\mathsf{T}WA)^{-1}A^\mathsf{T}WAx = x \quad\text{(unbiased estimate)}$$
$$\longrightarrow\; Q_{\hat{x}} = LQ_yL^\mathsf{T} = (A^\mathsf{T}WA)^{-1}A^\mathsf{T}W\,Q_y\,WA(A^\mathsf{T}WA)^{-1}$$
$$\hat{y} = A\hat{x} = P_A y$$
$$\longrightarrow\; E\{\hat{y}\} = A\,E\{\hat{x}\} = Ax = E\{y\}\,, \qquad Q_{\hat{y}} = P_AQ_yP_A^\mathsf{T}$$
$$\hat{e} = y - A\hat{x} = (I - P_A)\,y\,, \qquad E\{\hat{e}\} = E\{y\} - Ax = 0$$
Questions:
• Is 𝑥ˆ the best estimator?
• Or: When is 𝑄 𝑥ˆ smallest?
2D-example (as before):
$$E\{y\} = ax, \qquad a = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$
$$D\{y\} = Q_y = \begin{pmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{12} & \sigma_2^2 \end{pmatrix}$$
L-property (linear):
$$\hat{x} = l^\mathsf{T}y$$
U-property (unbiased):
$$E\{\hat{x}\} = l^\mathsf{T}E\{y\} = l^\mathsf{T}a\,x \overset{!}{=} x \implies l^\mathsf{T}a = 1$$
B-property (best):
$$\hat{x} = l^\mathsf{T}y \implies \sigma^2_{\hat{x}} = l^\mathsf{T}Q_y\,l \implies \min_l\, l^\mathsf{T}Q_y\,l \quad\text{under}\quad l^\mathsf{T}a = 1$$
Solution?
Higher dimensions
$$a \longrightarrow A, \qquad Q_y^{-1} = P_y$$
Gauss coined the variable P from the Latin pondus, which means weight.
$$\left.\begin{array}{l} \text{BLUE:}\;\; \hat{x} = (A^\mathsf{T}P_yA)^{-1}A^\mathsf{T}P_y\,y \\ \text{Det.:}\;\;\; \hat{x} = (A^\mathsf{T}WA)^{-1}A^\mathsf{T}W\,y \end{array}\right\} \implies \text{BLUE, if } W = P_y = Q_y^{-1}$$
Besides:
$$I = P_A + P_A^{\perp} \implies Q_y = P_AQ_y + P_A^{\perp}Q_y = Q_{\hat{y}} + Q_{\hat{e}}$$
Note: P_A is a projector, but P_y is a weight matrix.
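A sketch illustrating the B-property: among linear unbiased estimators, W = Q_y^{-1} yields the smallest variance. The variances σ₁ = 0.2 and σ₂ = 0.1 are assumed values:

```python
import numpy as np

a  = np.array([[1.0], [1.0]])
Qy = np.array([[0.04, 0.00],
               [0.00, 0.01]])          # assumed: sigma1 = 0.2, sigma2 = 0.1
Py = np.linalg.inv(Qy)

def var_xhat(W):
    """Variance of x_hat = (a^T W a)^{-1} a^T W y for a given weight matrix W."""
    L = np.linalg.inv(a.T @ W @ a) @ a.T @ W
    return (L @ Qy @ L.T).item()

print(var_xhat(np.eye(2)))   # plain average:      0.0125
print(var_xhat(Py))          # W = Qy^{-1} (BLUE): 0.008  -> smallest variance
```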
5 Geomatics examples
Further (simple and more advanced) examples including data files can be found in "Geodetic Net-
work Adjustment Examples" (http://www.gis.uni-stuttgart.de/lehre/campus-docs/adjustment_examples.pdf).
5.1 A-Model: Adjustment of observation equations

5.1.1 Planar triangle

Grid bearings:
$$T_{ij} = \arctan\frac{x_j - x_i}{y_j - y_i}$$
Angles:
$$\alpha = T_{12} - T_{13} = \arctan\frac{x_2 - x_1}{y_2 - y_1} - \arctan\frac{x_3 - x_1}{y_3 - y_1}$$
$$\beta = T_{23} - T_{21} = \arctan\frac{x_3 - x_2}{y_3 - y_2} - \arctan\frac{x_1 - x_2}{y_1 - y_2}$$
$$\gamma = T_{31} - T_{32} = \arctan\frac{x_1 - x_3}{y_1 - y_3} - \arctan\frac{x_2 - x_3}{y_2 - y_3}$$
[Figure: planar triangle with sides s₁₂, s₁₃, s₂₃.]

Approximate coordinates:
point | x₀    | y₀
1     | 0     | 0
2     | 1     | 0
3     | 1/2   | √3/2

quantity | "observation" from approx. coordinates | observation | σ
s₁₂      | s⁰₁₂ = 1 m  | 1.01 m | ±0.01 m
s₁₃      | s⁰₁₃ = 1 m  | 1.02 m | ±0.02 m
s₂₃      | s⁰₂₃ = 1 m  | 0.97 m | ±0.01 m
α        | α⁰ = 60°    | 60°    | ±1″
β        | β⁰ = 60°    | 59.7°  | ±1′
γ        | γ⁰ = 60°    | 60.2°  | ±1′
=⇒ Linearized distance observation equation (Taylor point = point of expansion = set of approximate
coordinates).
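A sketch of assembling one linearized distance row at the Taylor point, using the approximate coordinates from the table above; the helper distance_row is an illustration, not part of the text:

```python
import numpy as np

p = {1: np.array([0.0, 0.0]),
     2: np.array([1.0, 0.0]),
     3: np.array([0.5, np.sqrt(3) / 2])}   # approximate coordinates

def distance_row(i, j):
    """Partials of s_ij w.r.t. (x_i, y_i, x_j, y_j), evaluated at the Taylor point."""
    d  = p[j] - p[i]
    s0 = np.linalg.norm(d)
    return np.hstack([-d / s0, d / s0]), s0

row, s0 = distance_row(1, 2)
print(row, s0)        # -> [-1. -0.  1.  0.], 1.0
```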
5.1.2 Distance Network
In this example, measured distances between the points of the network in figure 3.6 are adjusted.
The standard deviation of observations is 𝜎𝑠 = ±1 cm, the a priori standard deviation 𝜎0 = ±1 cm.
$$\implies P_s = \frac{\sigma_0^2}{\sigma_s^2} = 1$$
Table 5.1 contains measured distances (observations 𝑦) between respective network points, while
table 5.2 contains approximate coordinates of the points. Points A and B are datum points with the
minimum number of datum parameters 𝑋 A , 𝑌A , 𝑋 B fixed.
Table 5.3 contains the reduced vector Δy and table 5.4 the estimated parameters at the first iteration. Tables 5.5 and 5.6 contain the adjusted coordinates and observations (ŷ), respectively, of the network points after 6 iterations. Table 5.7 shows the estimated inconsistencies in the measured distances.
Table 5.5: Adjusted coordinates.
point ID | X̂/m         | Ŷ/m
A        | 184 270.031 | 725 830.033
B        | 185 549.974 | 725 555.019
C        | 183 185.048 | 725 344.999
D        | 183 598.001 | 723 680.041
E        | 184 499.996 | 722 144.987
F        | 185 469.997 | 722 495.040
G        | 184 480.021 | 724 580.029
H        | 185 625.005 | 724 480.000
I        | 185 030.002 | 723 390.016

Table 5.6: Adjusted observations ŷ/m.
A–B 1309.155    D–H 2179.1462
A–C 1188.464    D–I 1461.0749
A–G 1267.520    E–F 1031.2318
B–G 1447.552    E–I 1353.1463
B–H 1077.634    F–H 1991.0036
C–D 1715.4053   F–I 997.2854
C–G 1504.0395   G–H 1149.3454
C–I 2688.0873   G–I 1310.9572
D–E 1780.4458   H–I 1241.8109
D–G 1260.133

Table 5.7: Estimated inconsistencies ê/mm.
A–B −0.01    D–H 0.78
A–C −0.01    D–I −0.88
A–G 0.00     E–F 0.22
B–G 0.01     E–I −0.27
B–H −0.01    F–H 0.43
C–D −0.27    F–I −0.39
C–G −0.46    G–H −0.35
C–I 0.67     G–I −0.18
D–E 0.20     H–I −0.86
D–G −0.04
While figure 5.2 indicates the convergence of the estimated corrections to the approximate coordinates, figure 5.3 depicts the overall convergence of the adjustment iteration. Figure 5.4 represents approximate points, adjusted points and datum points. Finally, in figure 5.5 adjusted and datum points are shown with error ellipses. Table 5.8 displays the A-matrix after the first iteration.
Figure 5.2: Convergence of estimated corrections to the approximate coordinates over the iterations.

Figure 5.3: RMS of all estimated corrections (in m) versus iteration count.
Figure 5.4: Approximate points (blue), adjusted points (red), datum points (triangles): ΔX_A = ΔY_A = ΔX_B = 0.
Figure 5.5: Adjusted and datum points with error ellipses (scale bar: 1 cm).
leg Δ𝑌B Δ𝑋 C Δ𝑌C Δ𝑋 D Δ𝑌D Δ𝑋 E Δ𝑌E Δ𝑋 F Δ𝑌F Δ𝑋 G Δ𝑌G Δ𝑋 H Δ𝑌H Δ𝑋 I Δ𝑌I
A-B −0.318 48 0 0 0 0 0 0 0 0 0 0 0 0 0 0
A-C 0 −0.942 33 −0.334 68 0 0 0 0 0 0 0 0 0 0 0 0
A-G 0 0 0 0 0 0 0 0 0 0.158 77 −0.987 31 0 0 0 0
B-G 0.689 66 0 0 0 0 0 0 0 0 −0.724 13 −0.689 66 0 0 0 0
B-H 0.980 57 0 0 0 0 0 0 0 0 0 0 0.196 15 −0.980 57 0 0
C-D 0 −0.301 13 0.953 58 0.301 13 −0.953 58 0 0 0 0 0 0 0 0 0 0
C-G 0 −0.777 94 0.628 34 0 0 0 0 0 0 0.777 94 −0.628 34 0 0 0 0
C-I 0 −0.615 27 0.788 32 0 0 0 0 0 0 0 0 0 0 0.615 27 −0.788 32
D-E 0 0 0 −0.316 23 0.948 68 0.316 23 −0.948 68 0 0 0 0 0 0 0 0
D-G 0 0 0 −0.635 71 −0.771 93 0 0 0 0 0.635 71 0.771 93 0 0 0 0
D-H 0 0 0 −0.865 43 −0.501 04 0 0 0 0 0 0 0.865 43 0.501 04 0 0
D-I 0 0 0 −0.988 94 0.148 34 0 0 0 0 0 0 0 0 0.988 94 −0.148 34
E-F 0 0 0 0 0 −0.913 81 −0.406 14 0.913 81 0.406 14 0 0 0 0 0 0
E-I 0 0 0 0 0 −0.347 31 −0.937 75 0 0 0 0 0 0 0.347 31 0.937 75
F-H 0 0 0 0 0 0 0 −0.221 62 −0.975 13 0 0 0.221 62 0.975 13 0 0
F-I 0 0 0 0 0 0 0 0.388 06 −0.921 64 0 0 0 0 −0.388 06 0.921 64
G-H 0 0 0 0 0 0 0 0 0 −0.978 98 −0.203 95 0.978 98 0.203 95 0 0
G-I 0 0 0 0 0 0 0 0 0 −0.287 35 0.957 83 0 0 0.287 35 −0.957 83
H-I 0 0 0 0 0 0 0 0 0 0 0 0.584 30 0.811 53 −0.584 30 −0.811 53
Table 5.8: 𝐴-Matrix (1st iteration).
5.1.3 Distance and Direction Network (1)

A monitoring situation where directions and distances to 4 points A, B, C, and D are measured from point N, see table 5.10. Coordinates (table 5.9) of points 1 to 4 including N₀ and the orientation ω_N₀ = 63.5610 gon are approximately given (see Jäger, 2005, pg. 241–242).

[Figure: monitoring network with datum points A, B, C, D and new point N.]

$$P_s = \frac{\sigma_0^2}{\sigma_s^2} = 10000, \qquad P_r = \frac{\sigma_0^2}{\sigma_r^2} = 1.6211 \cdot 10^{10}\;\frac{\text{m}^2}{\text{rad}^2}$$
Table 5.11: Design matrix A (distance rows dimensionless, direction rows in rad/m).
           leg | ΔX_N      | ΔY_N      | Δω₀
distance   N–A | 0.777 83  | 0.628 47  | 0
           N–B | −0.010 86 | −0.999 94 | 0
           N–C | −0.847 72 | 0.530 45  | 0
direction  N–A | 0.000 64  | −0.000 79 | −1
           N–B | −0.001 31 | 0.000 01  | −1
           N–C | 0.000 50  | 0.000 80  | −1
           N–D | 0.001 14  | 0.000 04  | −1

Table 5.12: Reduced observations Δy.
distances/m: N–A 0.000 37, N–B 0.004 86, N–C −0.002 46
directions/rad: N–A −7.4935·10⁻⁶, N–B −2.5420·10⁻⁶, N–C 1.8678·10⁻⁶, N–D −5.6276·10⁻⁶

Table 5.11 shows the design matrix A, table 5.12 the reduced observation vector.
Table 5.13 contains the estimated parameter updates after the 1st iteration and table 5.14 contains the adjusted coordinates for point N, including the adjusted orientation.

Table 5.13: Estimated parameters.
ΔX̂_N₀ = −0.000 49 m, ΔŶ_N₀ = 0.001 60 m, Δω̂_N₀ = 3.3544·10⁻⁶ rad

Table 5.14: Adjusted coordinates and orientation.
X̂_N = 1175.150 m, Ŷ_N = 997.722 m, ω̂_N = 63.5612 gon
Table 5.15 gives the adjusted observations including the distance 𝑠 ND , which is approximately given
by the adjusted coordinates of point N. Table 5.16 shows the inconsistencies 𝑒ˆ of the observations.
Table 5.15: Adjusted observations.
ŝ/m: N–A 982.690, N–B 764.994, N–C 1063.894, N–D 873.693
r̂/gon: N–A 193.1751, N–B 337.1304, N–C 72.0341, N–D 134.0759

Table 5.16: Estimated inconsistencies.
ê/m: N–A −0.0003, N–B 0.0065, N–C −0.0037, N–D —
ê/gon: N–A −0.000 16, N–B 9.6·10⁻⁶, N–C 0.000 27, N–D −0.000 11
Finally, tables 5.17 and 5.18 show the standard deviations for coordinates for adjusted point N, as
well as for adjusted orientation and observations.
Figure 5.7 shows the situation in detail, including the error ellipse for the adjusted coordinates of
point N.
Figure 5.7: Detailed view of approximate and adjusted point N with error ellipse (scale bar: 1 cm).
5.1.4 Distance and Direction Network (2a)

[Figure 5.8: network of benchmarks 1, 2 and new points 3, 4.]
This example (Benning, 2011, pg. 258–261) treats a network (Figure 5.8) of measured distances and
directions between two given points and two new points. The standard deviation of observations:
𝜎𝑠 = ±1 cm for distances, 𝜎𝑟 = ±1 mgon for directions and 𝜎0 = ±1 cm as a priori standard deviation.
This gives the elements for the weight matrix P:
$$\implies P_s = \frac{\sigma_0^2}{\sigma_s^2} = 1, \qquad P_r = \frac{\sigma_0^2}{\sigma_r^2} = 4.0528 \cdot 10^{5}\;\frac{\text{m}^2}{\text{rad}^2}$$
Table 5.19 contains coordinates for benchmarks 1 and 2, table 5.20 approximate coordinates for
points 3 and 4.
Table 5.19: Benchmarks.         Table 5.20: Approximate coordinates.
Point ID | X/m     | Y/m        Point ID | X₀/m    | Y₀/m
1        | 0.00    | 1000.00    3        | 0.00    | 0.00
2        | 1000.00 | 1000.00    4        | 1000.00 | 0.00
Table 5.21 contains measured distances (𝑠) between individual points, approximate distances (𝑠 0 ) and
reduced observations (Δ𝑠).
Table 5.22 displays direction observations (r), approximate grid bearings (T₀), approximate orientation unknowns (ω₀), approximate directions (r₀) and reduced direction observations (Δr₀).

Table 5.23 contains the design matrix A and table 5.24 the reduced observation vector Δy after the 1st iteration.
Table 5.23: Design matrix A (1st iteration; distance rows dimensionless, direction rows in rad/m).
           leg | ΔX₃     | ΔY₃    | ΔX₄     | ΔY₄     | Δω₁ | Δω₂ | Δω₃
distance   1–3 | 0       | −1     | 0       | 0       | 0   | 0   | 0
           1–4 | 0       | 0      | 0.7071  | −0.7071 | 0   | 0   | 0
           2–3 | −0.7071 | −0.7071| 0       | 0       | 0   | 0   | 0
           2–4 | 0       | 0      | 0       | −1      | 0   | 0   | 0
           3–4 | −1      | 0      | 1       | 0       | 0   | 0   | 0
direction  1–3 | −0.001  | 0      | 0       | 0       | −1  | 0   | 0
           1–4 | 0       | 0      | −0.0005 | −0.0005 | −1  | 0   | 0
           2–3 | −0.0005 | 0.0005 | 0       | 0       | 0   | −1  | 0
           2–4 | 0       | 0      | −0.001  | 0       | 0   | −1  | 0
           3–1 | −0.001  | 0      | 0       | 0       | 0   | 0   | −1
           3–2 | −0.0005 | 0.0005 | 0       | 0       | 0   | 0   | −1
           3–4 | 0       | 0.001  | 0       | −0.001  | 0   | 0   | −1

Table 5.24: Reduced observations Δy (1st iteration).
distances/m: 1–3 0.02, 1–4 −0.0136, 2–3 0.0264, 2–4 −0.02, 3–4 0
directions/rad: 1–3 1.5708·10⁻⁵, 1–4 0, 2–3 −3.1416·10⁻⁵, 2–4 0, 3–1 0, 3–2 −1.5708·10⁻⁵, 3–4 −4.7124·10⁻⁵
Table 5.25 shows the estimated parameter updates after the 1st iteration and table 5.26 contains the adjusted coordinates for points 3 and 4 and the adjusted orientations.
Table 5.27 contains the adjusted observations for distances and directions, and 5.28 the estimated
inconsistencies 𝑒ˆ of the observations.
Finally, the tables 5.29, 5.30 and 5.31 contain standard deviations for adjusted coordinates and ori-
entations as well as for adjusted distance and direction observations.
Table 5.25: Estimated parameters.
ΔX̂₃ = −0.0101 m, ΔŶ₃ = −0.0231 m, ΔX̂₄ = −0.0096 m, ΔŶ₄ = 0.0163 m,
Δω̂₁ = −0.0003 gon, Δω̂₂ = 0.0011 gon, Δω̂₃ = 0.0006 gon

Table 5.26: Adjusted coordinates and orientations.
X̂₃ = −0.010 m, Ŷ₃ = −0.023 m, X̂₄ = 999.990 m, Ŷ₄ = 0.016 m,
ω̂₁ = 149.9997 gon, ω̂₂ = 200.0011 gon, ω̂₃ = 0.0006 gon
Table 5.27: Adjusted observations.
ŝ/m: 1–3 1000.023, 1–4 1414.195, 2–3 1414.237, 2–4 999.984, 3–4 1000.001
r̂/gon: 1–3 50.0009, 1–4 0.0001, 2–3 49.9985, 2–4 399.9995, 3–1 0.0001, 3–2 49.9990, 3–4 99.9969

Table 5.28: Estimated inconsistencies.
ê/m: 1–3 −0.0031, 1–4 0.0048, 2–3 0.0029, 2–4 −0.0037, 3–4 −0.0005
ê/gon: 1–3 0.0001, 1–4 −0.0001, 2–3 −0.0005, 2–4 0.0005, 3–1 −0.0001, 3–2 0.0000, 3–4 0.0001
Figure 5.9 shows the network of points including error ellipses; figures 5.10 and 5.11 give a detailed view for points 3 and 4.
Figure 5.9: Network with error ellipses (scale bar: 1 cm).
Figure 5.10: Detailed view of point 3.    Figure 5.11: Detailed view of point 4.
5.1.5 Free Adjustment: Distance and Direction Network (2b)
This example (Benning, 2011, pg. 273–281) processes the same network as before (Figure 5.8). Therefore, observations, weights and reduced observations Δy (table 5.24) do not change. However, since we deal with a free adjustment here, the design matrix A (table 5.32) is augmented by four additional columns comprising the partial derivatives of those observations which also involve points 1 and 2. The free adjustment process makes use of the pseudoinverse N⁺ shown in table 5.33 (1st iteration). Table 5.34 shows the estimated parameters (1st iteration) and table 5.35 contains the adjusted coordinates for points 1–4 and also the adjusted orientations.
Table 5.34: Estimated parameters.
ΔX̂₁ = 0.0018 m, ΔŶ₁ = 0.0031 m, ΔX̂₂ = 0.0135 m, ΔŶ₂ = −0.0014 m,
ΔX̂₃ = −0.0076 m, ΔŶ₃ = −0.0184 m, ΔX̂₄ = −0.0077 m, ΔŶ₄ = 0.0167 m,
Δω̂₁ = −0.0003 gon, Δω̂₂ = 0.0017 gon, Δω̂₃ = 0.0008 gon

Table 5.35: Adjusted coordinates and orientations.
X̂₁ = 0.002 m, Ŷ₁ = 1000.003 m, X̂₂ = 1000.013 m, Ŷ₂ = 999.999 m,
X̂₃ = −0.008 m, Ŷ₃ = −0.018 m, X̂₄ = 999.992 m, Ŷ₄ = 0.017 m,
ω̂₁ = 149.9997 gon, ω̂₂ = 200.0017 gon, ω̂₃ = 0.0008 gon
Table 5.36 contains the adjusted observations for distances and directions, while table 5.37 shows
the estimated inconsistencies 𝑒ˆ of the observations.
Table 5.36: Adjusted observations.
ŝ/m: 1–3 1000.021, 1–4 1414.197, 2–3 1414.240, 2–4 999.982, 3–4 1000.000
r̂/gon: 1–3 50.0009, 1–4 0.0001, 2–3 49.9984, 2–4 399.9996, 3–1 399.9998, 3–2 49.9993, 3–4 99.9969

Table 5.37: Estimated inconsistencies.
ê/m: 1–3 −0.0015, 1–4 0.0027, 2–3 −0.0004, 2–4 −0.0019, 3–4 0.0001
ê/gon: 1–3 0.0001, 1–4 −0.0001, 2–3 −0.0004, 2–4 0.0004, 3–1 0.0002, 3–2 −0.0003, 3–4 0.0001
Finally again, the tables 5.38, 5.39, 5.40 and 5.41 contain standard deviations for adjusted coordinates
and orientations as well as for adjusted distance and direction observations.
Figure 5.12 shows network of points including error ellipses, figure 5.13, 5.14, 5.15 and 5.16 give a
detailed view for points 1, 2, 3 and 4.
Figure 5.12: Network with approximate points, adjusted points and error ellipses (scale bar: 1 cm).
Figure 5.13: Detailed view of point 1.    Figure 5.14: Detailed view of point 2.
Figure 5.15: Detailed view of point 3.    Figure 5.16: Detailed view of point 4.
5.1.6 Overconstrained adjustment: Distance, direction and angle network
This example, taken from Wolf (1979, pg. 66–78), consists of a 10-point network observed by directions, one distance and one angle, see figure 5.17. The network is overconstrained because its datum is defined by 6 benchmarks A–F.
[Figure 5.17: 10-point network with benchmarks A–F and new points G, H, I.]
The observations are collected in table 5.43 (2nd column), and the following standard deviations have been assumed: σ_r = ±2.5 mgon for directions, σ_α = ±3.5 mgon for the angle and σ_s = ±3 cm for the distance (values recomputed here from the weights below).
If the a priori standard deviation is taken as σ₀ = ±2.5 mgon then the weight matrix elements turn out to be
$$P_r = \frac{\sigma_0^2}{\sigma_r^2} = 1, \qquad P_\alpha = \frac{\sigma_0^2}{\sigma_\alpha^2} = 0.5102 \qquad\text{and}\qquad P_s = \frac{\sigma_0^2}{\sigma_s^2} = 1.7135 \cdot 10^{-6}\;\frac{\text{rad}^2}{\text{m}^2}\,.$$
Table 5.42 contains coordinates for benchmarks A to F and approximate coordinates G, H and I.
benchmarks A–F
point ID 𝑋 /m 𝑌 /m
A 184 423.28 726 419.33
B 186 444.18 726 476.66
C 183 257.84 725 490.35
D 184 292.00 723 313.00
E 185 487.00 721 829.00
F 186 708.72 722 104.58
approximate coordinates
new points G–I
point ID 𝑋 0 /m 𝑌0 /m
G 184 868.20 725 139.70
H 186 579.30 725 336.60
I 185 963.07 723 322.02
In addition to the observations, table 5.43 includes approximate grid bearings (T₀), angle (α₀) and distance (s₀), approximate orientation unknowns (ω₀), approximate directions (r₀) and reduced observations (Δr₀). Orientation unknowns (ω₀) are mean values calculated from Δr₀.
Table 5.44 contains the design matrix A after the 1st iteration.
Table 5.45 shows the estimated parameters after 1st iteration and table 5.46 the adjusted coordinates
and orientations.
Table 5.47 contains the estimated inconsistencies 𝑒ˆ of the observations leading to a weighted square
sum of residuals
Finally, the tables 5.48 and 5.49 contain the standard deviations of coordinates, orientations and
observations.
Figures 5.18 and 5.19 show the adjusted network with corresponding absolute and relative error
ellipses at/between the new points.
Table 5.50 contains the elements (semi major axis 𝑎, semi minor axis 𝑏 and bearing 𝜙 of 𝑎) for absolute
error ellipses for new points and also for relative error ellipses between new points.
Figures 5.20, 5.21 and 5.22 give a detailed view, for the new points G, H and I.
Distance observation in m
leg 𝑠 𝑠0 Δ𝑠 0
G– I 2121.90 2121.96 −0.06 — — —
Direction observations in gon
leg 𝑟 𝑇0 Δ𝑟 0 = 𝑇0 − 𝑟 𝜔0 𝑟 𝜔0 = 𝑇0 − 𝜔 0 Δ𝑟 𝜔0 = 𝑟 − 𝑟 𝜔0
A–B 0.0000 98.1945 98.1945 98.1960 −0.0008 0.0008
A–G 80.5000 178.6975 98.1975 80.5022 −0.0022
A–C 158.9610 257.1571 98.1961 158.9618 −0.0008
B–H 0.0000 192.4898 192.4861 192.4861 0.0073 −0.0073
B–G 62.7260 255.2121 192.4861 62.7296 −0.0036
B–A 105.7120 298.1945 192.4824 105.7120 0.0000
C–A 0.0000 57.1571 57.1571 57.1640 −0.0095 0.0095
C–G 56.4960 113.6491 57.1531 56.4825 0.0135
C– I 85.8450 143.0148 57.1698 85.8482 −0.0032
C–D 114.5950 171.7711 57.1761 114.6045 −0.0095
D–G 0.0000 19.4522 19.4522 19.4476 0.0015 −0.0015
D–H 34.4500 53.8894 19.4394 34.4387 0.0113
D– I 80.2110 99.6564 19.4454 80.2057 0.0053
D– E 137.4020 156.8412 19.4391 137.3905 0.0115
D–C 352.3090 371.7711 19.4621 352.3205 −0.0115
E– I 0.0000 19.6507 19.6507 19.6357 0.0224 −0.0224
E– F 66.2450 85.8763 19.6313 66.2481 −0.0031
E–D 337.2160 356.8412 19.6251 337.2129 0.0031
F–E 0.0000 285.8763 285.8763 285.8642 0.0000 0.0000
F– I 79.1690 365.0151 285.8461 79.1388 0.0302
F –H 111.5820 397.4521 285.8701 111.5758 0.0062
G–B 0.0000 55.2121 55.2121 55.2147 −0.0027 0.0027
G–H 37.4980 92.7064 55.2083 37.4916 0.0064
G– I 110.2580 165.4862 55.2282 110.2715 −0.0135
G–D 164.2320 219.4522 55.2201 164.2374 −0.0054
G–C 258.4410 313.6491 55.2081 258.4344 0.0066
G–A 323.4860 378.6975 55.2115 323.4827 0.0033
H– F 0.0000 197.4521 197.4521 197.4508 0.0013 −0.0013
H– I 21.4450 218.8979 197.4529 21.4471 −0.0021
H–D 56.4420 253.8889 197.4474 56.4386 0.0034
I –H 0.0000 18.8979 18.8979 18.9025 −0.0046 0.0046
I –F 146.1430 165.0151 18.8721 146.1126 0.0304
I –E 200.7330 219.6507 18.9177 200.7482 −0.0152
I –D 280.7560 299.6564 18.9004 280.7539 0.0021
I –C 324.1050 343.0148 18.9098 324.1123 −0.0073
I –G 346.5690 365.4862 18.9172 346.5837 −0.0147
Angle observation in gon
leg 𝛼 𝛼0 Δ𝛼 0
𝛼 HGB 99.7810 99.7834 −0.0024 — — —
leg Δ𝑋 G Δ𝑌G Δ𝑋𝐻 Δ𝑌𝐻 Δ𝑋𝐼 Δ𝑌𝐼 Δ𝜔𝐴 Δ𝜔 𝐵 Δ𝜔𝐶 Δ𝜔 𝐷 Δ𝜔 𝐸 Δ𝜔 𝐹 Δ𝜔𝐺 Δ𝜔 𝐻 Δ𝜔 𝐼 phys. unit
Distance observation
G– I −0.5159 0.8566 0 0 0.5159 −0.8566 0 0 0 0 0 0 0 0 0 m
−, rad
Direction observations
A–B 0 0 0 0 0 0 −1 0 0 0 0 0 0 0 0
A –G −0.0007 −0.0002 0 0 0 0 −1 0 0 0 0 0 0 0 0
A –C 0 0 0 0 0 0 −1 0 0 0 0 0 0 0 0
B –H 0 0 −0.0009 −0.0001 0 0 0 −1 0 0 0 0 0 0 0
B –G −0.0003 0.0004 0 0 0 0 0 −1 0 0 0 0 0 0 0
B –A 0 0 0 0 0 0 0 −1 0 0 0 0 0 0 0
C –A 0 0 0 0 0 0 0 0 −1 0 0 0 0 0 0
C –G −0.0001 −0.0006 0 0 0 0 0 0 −1 0 0 0 0 0 0
C–I 0 0 0 0 −0.0002 −0.0002 0 0 −1 0 0 0 0 0 0
C –D 0 0 0 0 0 0 0 0 −1 0 0 0 0 0 0
D –G 0.0005 −0.0002 0 0 0 0 0 0 0 −1 0 0 0 0 0
D –H 0 0 0.0002 −0.0002 0 0 0 0 0 −1 0 0 0 0 0
D– I 0 0 0 0 3.2 · 10−6 −0.0006 0 0 0 −1 0 0 0 0 0
D–E 0 0 0 0 0 0 0 0 0 −1 0 0 0 0 0
D –C 0 0 0 0 0 0 0 0 0 −1 0 0 0 0 0
E–I 0 0 0 0 0.0006 −0.0002 0 0 0 0 −1 0 0 0 0
E–F 0 0 0 0 0 0 0 0 0 0 −1 0 0 0 0
E–D 0 0 0 0 0 0 0 0 0 0 −1 0 0 0 0
F–E 0 0 0 0 0 0 0 0 0 0 0 −1 0 0 0
F–I 0 0 0 0 0.0006 0.0004 0 0 0 0 0 −1 0 0 0
F –H 0 0 0.0003 1.2 · 10−5 0 0 0 0 0 0 0 −1 0 0 0
G–B −0.0003 0.0004 0 0 0 0 0 0 0 0 0 0 −1 0 0
G –H −6.6 · 10−5 0.0006 6.6 · 10−5 −0.0006 0 0 0 0 0 0 0 0 −1 0 0
G– I 0.0004 0.0002 0 0 −0.0004 −0.0002 0 0 0 0 0 0 −1 0 0
G –D 0.0005 −0.0001 0 0 0 0 0 0 0 0 0 0 −1 0 0
G –C −0.0001 −0.0006 0 0 0 0 0 0 0 0 0 0 −1 0 0
G –A −0.0007 −0.0002 0 0 0 0 0 0 0 0 0 0 −1 0 0
H–F 0 0 0.0003 1.2 · 10−5 0 0 0 0 0 0 0 0 0 −1 0
H– I 0 0 0.0005 −0.0001 −0.0005 0.0001 0 0 0 0 0 0 0 −1 0
H –D 0 0 0.0002 −0.0002 0 0 0 0 0 0 0 0 0 −1 0
I –H 0 0 0.0005 −0.0001 −0.0005 0.0001 0 0 0 0 0 0 0 0 −1
I –F 0 0 0 0 0.0006 0.0004 0 0 0 0 0 0 0 0 −1
I –E 0 0 0 0 0.0006 −0.0002 0 0 0 0 0 0 0 0 −1
I –D 0 0 0 0 3.2 · 10−6 −0.0006 0 0 0 0 0 0 0 0 −1
I –C 0 0 0 0 −0.0002 −0.0002 0 0 0 0 0 0 0 0 −1
I –G 0.0004 0.0002 0 0 −0.0004 −0.0002 0 0 0 0 0 0 0 0 −1
Angle observation
α_HGB 6.6·10⁻⁵ −0.0006 −0.0009 0.0004 0 0 0 0 0 0 0 0 0 0 0
Table 5.44: Design matrix A after the 1st iteration.
Δξ̂ in m | adjusted values in m
Δ𝑋ˆ𝐺 −0.162 𝑋ˆ𝐺 184 868.038
Δ𝑌ˆ𝐺 −0.043 𝑌ˆ𝐺 725 139.657
Δ𝑋ˆ𝐻 0.037 𝑋ˆ𝐻 186 579.337
Δ𝑌ˆ𝐻 −0.186 𝑌ˆ𝐻 725 336.414
Δ𝑋ˆ𝐼 0.145 𝑋ˆ𝐼 185 963.215
Δ𝑌ˆ𝐼 0.283 𝑌ˆ𝐼 723 322.303
Δξ̂ in gon | adjusted values in gon
Δ𝜔ˆ 𝐴 0.0026 𝜔ˆ 𝐴 98.1987
Δ𝜔ˆ 𝐵 0.0005 𝜔ˆ 𝐵 192.4866
Δ𝜔ˆ 𝐶 −0.0007 𝜔ˆ 𝐶 57.1634
Δ𝜔ˆ 𝐷 −0.0024 𝜔ˆ 𝐷 19.4452
Δ𝜔ˆ 𝐸 0.0007 𝜔ˆ 𝐸 19.6364
Δ𝜔ˆ 𝐹 0.0042 𝜔ˆ 𝐹 285.8684
Δ𝜔ˆ 𝐺 0.0002 𝜔ˆ 𝐺 55.2150
Δ𝜔ˆ 𝐻 0.0017 𝜔ˆ 𝐻 197.4525
Δ𝜔ˆ 𝐼 −0.0024 𝜔ˆ 𝐼 18.9001
Table 5.45: Estimated parameters after the 1st iteration. Table 5.46: Adjusted coordinates and orientations.
leg | ŝ/m | ê/m
G–I | 2121.836 | 0.0638
leg | r̂/gon | ê/gon (three column groups below)
A–B −0.0042 0.0042 D– I 80.2004 0.0106 G–D 164.2325 −0.0005
A–G 80.5067 −0.0067 D–E 137.3959 0.0061 G–C 258.4371 0.0039
A–C 158.9585 0.0025 D–C 352.3259 −0.0169 G–A 323.4904 −0.0044
B–H 0.0024 −0.0024 E– I 0.0164 −0.0164 H– F 0.0003 −0.0003
B–G 62.7277 −0.0017 E– F 66.2399 0.0051 H– I 21.4464 −0.0014
B–A 105.7079 0.0041 E–D 337.2047 0.0113 H–D 56.4403 0.0017
C–A −0.0062 0.0062 F –E 0.0079 −0.0079 I –H −0.0012 0.0012
C–G 56.4887 0.0072 F– I 79.1588 0.0102 I –F 146.1271 0.0159
C– I 85.8457 −0.0007 F –H 111.5843 −0.0023 I –E 200.7527 −0.0197
C–D 114.6078 −0.0128 G–B −0.0007 0.0007 I –D 280.7455 0.0105
D–G 0.0022 −0.0022 G–H 37.4975 0.0005 I –C 324.1089 −0.0039
D–H 34.4476 0.0024 G– I 110.2583 −0.0003 I –G 346.5731 −0.0041
leg | α̂/gon | ê/gon
α_HGB | 99.7765 | 0.0045
Table 5.47: Adjusted observations and estimated inconsistencies ê.
Table 5.48: standard deviations of adjusted coordinates (in cm) and orientations (in mgon):
𝜎ˆ𝑋ˆ G ±11.866 𝜎ˆ𝜔ˆ A ±6.0023
𝜎ˆ𝑌ˆG ±13.078 𝜎ˆ𝜔ˆ B ±6.7376
𝜎ˆ𝑋ˆ H ±15.816 𝜎ˆ𝜔ˆ C ±5.1859
𝜎ˆ𝑌ˆH ±26.380 𝜎ˆ𝜔ˆ D ±4.8772
𝜎ˆ𝑋ˆ I ±11.470 𝜎ˆ𝜔ˆ E ±5.9353
𝜎ˆ𝑌ˆI ±13.537 𝜎ˆ𝜔ˆ F ±6.1002
𝜎ˆ𝜔ˆ G ±4.3863
𝜎ˆ𝜔ˆ H ±6.5588
𝜎ˆ𝜔ˆ I ±4.3554
Table 5.49: standard deviations of adjusted observations. Distance (in cm): σ̂_ŝGI = ±10.2660. Directions (in mgon):
𝜎ˆ𝑟ˆAB ±6.0023 𝜎ˆ𝑟ˆCD ±5.1859 𝜎ˆ𝑟ˆFE ±6.1002 𝜎ˆ𝑟ˆHF ±6.1800
𝜎ˆ𝑟ˆAG ±6.8033 𝜎ˆ𝑟ˆDG ±5.3020 𝜎ˆ𝑟ˆFI ±6.6864 𝜎ˆ𝑟ˆHI ±6.1533
𝜎ˆ𝑟ˆAC ±6.0023 𝜎ˆ𝑟ˆDH ±5.3830 𝜎ˆ𝑟ˆFH ±6.2494 𝜎ˆ𝑟ˆHD ±6.3495
𝜎ˆ𝑟ˆBH ±8.3169 𝜎ˆ𝑟ˆDI ±5.7282 𝜎ˆ𝑟ˆGB ±6.0317 𝜎ˆ𝑟ˆIH ±6.1128
𝜎ˆ𝑟ˆBG ±6.7802 𝜎ˆ𝑟ˆDE ±4.8772 𝜎ˆ𝑟ˆGH ±8.1933 𝜎ˆ𝑟ˆIF ±6.9589
𝜎ˆ𝑟ˆBA ±6.7376 𝜎ˆ𝑟ˆDC ±4.8772 𝜎ˆ𝑟ˆGI ±6.0859 𝜎ˆ𝑟ˆIE ±5.7188
𝜎ˆ𝑟ˆCA ±5.1859 𝜎ˆ𝑟ˆEI ±6.5637 𝜎ˆ𝑟ˆGD ±6.1072 𝜎ˆ𝑟ˆID ±5.6981
𝜎ˆ𝑟ˆCG ±6.0882 𝜎ˆ𝑟ˆEF ±5.9353 𝜎ˆ𝑟ˆGC ±6.1493 𝜎ˆ𝑟ˆIC ±4.5646
𝜎ˆ𝑟ˆCI ±5.2157 𝜎ˆ𝑟ˆED ±5.9353 𝜎ˆ𝑟ˆGA ±6.5839 𝜎ˆ𝑟ˆIG ±5.7435
Angle (in mgon): σ̂_α̂HGB = ±9.4045.
Figure 5.18: Network with error ellipses (benchmarks and new points; axes X/m, Y/m; error-ellipse scale 20 cm).
Figure 5.19: Detailed view: absolute and relative error ellipses (points G, H, I).
Figure 5.20: Detailed view of point G. Figure 5.21: Detailed view of point H. Figure 5.22: Detailed view of point I. (Approximate and adjusted point positions with 10 cm error ellipses.)
5.1.7 Polynomial fit
Observations: 𝑦𝑖 , 𝑖 = 1, . . . , 𝑚.
Given: fixed x-coordinates 𝑥𝑖 , 𝑖 = 1, . . . , 𝑚.
Find parameters 𝑎𝑛 , 𝑛 = 0, . . . , 𝑛 max of fitting polynomial
f(x) = y = Σ_{n=0}^{n_max} a_n xⁿ .
Observation equation
y_i = Σ_{n=0}^{n_max} a_n x_iⁿ + e_i ,
y_1 = a_0 x_1⁰ + a_1 x_1¹ + a_2 x_1² + … + e_1 ,
⋮
y_m = a_0 x_m⁰ + a_1 x_m¹ + a_2 x_m² + … + e_m .
With the Vandermonde matrix A:
(y_1; y_2; …; y_m) = [1 x_1 ⋯ x_1^{n_max}; 1 x_2 ⋯ x_2^{n_max}; ⋮; 1 x_m ⋯ x_m^{n_max}] (a_0; a_1; …; a_{n_max}) + (e_1; e_2; …; e_m),
i.e. y = A ξ + e.
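A minimal Matlab sketch of this least-squares polynomial fit, using the data of the Examples further below (the degree nmax is free to choose; with nmax = 2 it should reproduce êᵀê = 3.4·10⁻¹):

```matlab
% Polynomial fit via the Vandermonde matrix (sketch, data from the example below)
x = (-1:5)';                            % fixed abscissas x_i
y = [1.3 0.8 0.9 1.2 2.0 3.5 4.1]';     % observations y_i
nmax = 2;                               % polynomial degree
A = x.^(0:nmax);                        % Vandermonde matrix, column n+1 holds x.^n
xi_hat = (A'*A)\(A'*y);                 % normal equations: coefficients a_0..a_nmax
e_hat  = y - A*xi_hat;                  % estimated inconsistencies
vTv    = e_hat'*e_hat;                  % sum of squared residuals
```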
𝑔(𝑥) = 𝑓 (𝑥 T ) + 𝑓 ′ (𝑥 T ) (𝑥 − 𝑥 T ) =⇒ 𝑦P = 𝑔(𝑥 P ) = 𝑓 (𝑥 T ) + 𝑓 ′ (𝑥 T ) (𝑥 P − 𝑥 T ) .
f(x) = a_0 + a_1 x + a_2 x²  (parabola)
f′(x) = a_1 + 2a_2 x
Tangent in 𝑥 T : 𝑔(𝑥) = 𝑎 0 + 𝑎 1𝑥 T + 𝑎 2𝑥 T2 + (𝑎 1 + 2𝑎 2𝑥 T ) (𝑥 − 𝑥 T )
Tangent in 𝑥 T , passing through 𝑥 P, 𝑦P
𝑦P = 𝑎 0 + 𝑎 1𝑥 T + 𝑎 2𝑥 T2 + (𝑎 1 + 2𝑎 2𝑥𝑇 ) (𝑥 P − 𝑥 T )
= 𝑎 0 + 𝑎 1𝑥 T + 𝑎 2𝑥 T2 + 𝑎 1 (𝑥 P − 𝑥 T ) + 2𝑎 2 (𝑥 P − 𝑥 T )𝑥 T
= 𝑎 0 + 𝑥 P𝑎 1 + 𝑥 T (2𝑥 P − 𝑥 T )𝑎 2
⟹ Bᵀξ = y_P, with ξ = (a_0, a_1, a_2)ᵀ, Bᵀ = [1, x_P, x_T(2x_P − x_T)].
⟹ Tangent equation with adjusted parameters ξ̂ = (â_0, …, â_{n_max})ᵀ:
y = a_T x + b_T,
a_T := (ŷ_T − y_P)/(x_T − x_P)  "tangent slope",
b_T := ŷ_T − (ŷ_T − y_P)/(x_T − x_P) · x_T  "axis intercept",
ŷ_T = â_0 + â_1 x_T + … + â_{n_max} x_T^{n_max}  "estimated ordinate".
c) The unknown coefficient a_k should take the fixed numerical value ã_k:
Bᵀξ = ã_k, Bᵀ = [0 … 0 1 0 … 0]  (the 1 at position k + 1).
Examples
Data: x_i = (−1, 0, 1, 2, 3, 4, 5)ᵀ, y_i = (1.3, 0.8, 0.9, 1.2, 2.0, 3.5, 4.1)ᵀ.
Polynomial fits of increasing order to these data yield the following sums of squared residuals:
1st order: êᵀê = 2.5; 2nd order: êᵀê = 3.4·10⁻¹; 3rd order: êᵀê = 1.7·10⁻¹; 4th order: êᵀê = 7.5·10⁻²; 5th order: êᵀê = 8.8·10⁻⁴; 6th order: êᵀê = 2.4·10⁻²⁹.
With seven data points the 6th-order polynomial has seven coefficients and interpolates the data exactly, so êᵀê vanishes up to rounding.
Figure 5.25: Polynomial fit with tangent restriction: the tangent in x_T = 1 shall pass through the point x_P = 4, y_P = 2. Sums of squared residuals:
1st order: êᵀê = 6.3; 2nd order: êᵀê = 4.7·10⁻¹; 3rd order: êᵀê = 2.1·10⁻¹; 4th order: êᵀê = 2.1·10⁻¹; 5th order: êᵀê = 5.1·10⁻²; 6th order: êᵀê = 1.3·10⁻².
Figure 5.26: Polynomial fit with point restriction: the adjusted polynomial shall pass through the point x_Q = 1.5, y_Q = 2. Sums of squared residuals:
1st order: êᵀê = 3.1; 2nd order: êᵀê = 2.9; 3rd order: êᵀê = 2.8; 4th order: êᵀê = 2.5; 5th order: êᵀê = 1.4; 6th order: êᵀê = 1.3.
Figure 5.27: Polynomial fit with coefficient restriction: coefficient â_1 shall vanish, i.e. â_1 = 0. Sums of squared residuals:
1st order: êᵀê = 1.0·10¹; 2nd order: êᵀê = 3.9·10⁻¹; 3rd order: êᵀê = 3.5·10⁻¹; 4th order: êᵀê = 3.4·10⁻¹; 5th order: êᵀê = 1.2·10⁻³; 6th order: êᵀê = 3.7·10⁻⁴.
More examples: various straight-line fits. For the numerics, the x_i and y_i values of the polynomial examples above were reused.
1) Straight-line fit using the A-model, with inconsistencies e_yi in the observations y_i (Q_y⁻¹ = I). Observation equation: y_i = a_0 + a_1 x_i.
Results are shown in figure 5.28 (data points, adjusted line and residuals).
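A minimal Matlab sketch of case 1, under the stated assumptions (Q_y⁻¹ = I, data as above):

```matlab
% Straight-line fit with inconsistencies in y only (sketch)
x = (-1:5)';
y = [1.3 0.8 0.9 1.2 2.0 3.5 4.1]';
A = [ones(size(x)) x];          % observation equations y_i = a0 + a1*x_i + e_i
a_hat = (A'*A)\(A'*y);          % estimated parameters a0, a1
y_hat = A*a_hat;                % adjusted observations
e_hat = y - y_hat;              % residuals (vertical, since only y is inconsistent)
```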
2) Straight-line fit using the A-model, with inconsistencies e_xi in the observations x_i (Q_x⁻¹ = I). Observation equation: x_i = a_0 + a_1 y_i.
Results are shown in figure 5.29 (data points, adjusted line and residuals).
5.2 B-Model: Adjustment of condition equations

5.2.1 Planar triangle 1
Observations: angles 𝛼, 𝛽, 𝛾
Unknowns: inconsistencies 𝑒𝛼 , 𝑒 𝛽 , 𝑒𝛾 =⇒ linear function
𝑓 (𝑒𝛼 , 𝑒 𝛽 , 𝑒𝛾 ) = (𝛼 − 𝑒𝛼 ) + (𝛽 − 𝑒 𝛽 ) + (𝛾 − 𝑒𝛾 ) − 180° = 0 .
Bᵀ(y − e) − 180° = Bᵀy − 180° − Bᵀe = w − Bᵀe = 0
with Bᵀ = (1, 1, 1), e = (e_α, e_β, e_γ)ᵀ, y = (α, β, γ)ᵀ and w = Bᵀy − 180° ("misclosure").
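A minimal Matlab sketch of this condition adjustment; the angle values are illustrative assumptions, only B and the 180° misclosure come from the text:

```matlab
% Angle-sum condition adjustment (sketch; angles in degrees, Q_y = I assumed)
y  = [62.345; 78.901; 38.765];   % observed angles alpha, beta, gamma (illustrative)
B  = [1; 1; 1];                  % condition B'*(y - e) - 180 = 0
Qy = eye(3);
w  = B'*y - 180;                 % misclosure
e_hat = Qy*B/(B'*Qy*B)*w;        % estimated inconsistencies (misclosure split evenly)
y_adj = y - e_hat;               % adjusted angles; sum(y_adj) equals 180
```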
5.2.2 Planar triangle 2
Linearization of the non-linear (law-of-sines) condition f(e_a, e_b, e_α, e_β) = (a − e_a) sin(β − e_β) − (b − e_b) sin(α − e_α) = 0 at the Taylor point (e_a⁰, e_b⁰, e_α⁰, e_β⁰):

f(e_a, e_b, e_α, e_β) = f(e_a⁰, e_b⁰, e_α⁰, e_β⁰) + ∂f/∂e_a|₀ (e_a − e_a⁰) + ∂f/∂e_b|₀ (e_b − e_b⁰) + ∂f/∂e_α|₀ (e_α − e_α⁰) + ∂f/∂e_β|₀ (e_β − e_β⁰) ≐ 0,

with
f(e_a⁰, e_b⁰, e_α⁰, e_β⁰) = (a − e_a⁰) sin(β − e_β⁰) − (b − e_b⁰) sin(α − e_α⁰),
∂f/∂e_a|₀ = −sin(β − e_β⁰),
∂f/∂e_b|₀ = sin(α − e_α⁰),
∂f/∂e_α|₀ = (b − e_b⁰) cos(α − e_α⁰),
∂f/∂e_β|₀ = −(a − e_a⁰) cos(β − e_β⁰).
Collect the coefficients of all terms with e in −Bᵀ; all remaining terms go into the vector w of misclosures:

Bᵀ = [sin(β − e_β⁰), −sin(α − e_α⁰), −(b − e_b⁰) cos(α − e_α⁰), (a − e_a⁰) cos(β − e_β⁰)],
w = a sin(β − e_β⁰) − b sin(α − e_α⁰) − (b − e_b⁰) cos(α − e_α⁰) e_α⁰ + (a − e_a⁰) cos(β − e_β⁰) e_β⁰  ("misclosure").
Example: observations
𝑎 = 10, 𝑏 = 5, 𝛼 = 60°, 𝛽 = 23.7°
with associated weight matrix P.
Results: parameters (after 6 iterations, ‖Δê‖ < 10⁻¹²):
ê_a = −5.63·10⁻⁶, ê_b = 1.13·10⁻⁴, ê_α = 6′26.241″, ê_β = −1°55′42.492″,
êᵀPê = 4.017·10⁻⁶.
5.3 Mixed model
5.3.1 Straight line fit using A-model with pseudo observation equations
Example: Straight-line fit using the A-model, with inconsistencies e_xi and e_yi in both observations x_i and y_i (Q_y⁻¹ = Q_x⁻¹ = I, P = diag(Q_y⁻¹, Q_x⁻¹)). For the numerics, the data of the examples above have been used.
Unknown parameters 𝑎 0 , 𝑎 1 , 𝑥¯𝑖 , 𝑖 = 1, . . . , 𝑚
This leads to
Δ𝑦𝑖 − 𝑒 𝑦𝑖 = Δ𝑎 0 + 𝑎 01 Δ𝑥¯𝑖 + 𝑥¯𝑖0 Δ𝑎 1
and
Δ𝑥𝑖 − 𝑒𝑥𝑖 = Δ𝑥¯𝑖 .
In matrix notation:
(Δy − e_y; Δx − e_x) = [A₁; A₂] Δξ, with Δξ = (Δa_0, Δa_1, Δx̄_1, …, Δx̄_m)ᵀ ((m+2)×1) and [A₁; A₂] of size 2m×(m+2),
where (with 1_m a column of ones and x̄⁰ = (x̄_1⁰, …, x̄_m⁰)ᵀ)
A₁ = [1_m, x̄⁰, a_1⁰ I_m]  (m×(m+2)),
A₂ = [0, 0, I_m]  (m×(m+2)).
Parameters (after 20 iterations, ‖Δξ̂‖ < 10⁻¹²):
ŷ = (0.514, 0.822, 1.277, 1.782, 2.409, 3.209, 3.787)ᵀ
ê_y = (0.786, −0.022, −0.377, −0.582, −0.409, 0.291, 0.313)ᵀ
x̂ = (−0.551, −0.012, 0.785, 1.668, 2.766, 4.166, 5.179)ᵀ
ê_x = (−0.449, 0.012, 0.215, 0.332, 0.234, −0.166, −0.179)ᵀ
Example: Straight-line fit using the extended B-model with Q_y⁻¹ = Q_x⁻¹ = I, P = diag(Q_y⁻¹, Q_x⁻¹).
Non-linear condition equation with unknowns a_0, a_1:
y_i − e_yi − [a_0 + a_1(x_i − e_xi)] = 0
A = −[1, x_1 − e_x1⁰; 1, x_2 − e_x2⁰; ⋮; 1, x_m − e_xm⁰]  (m×2),  Δξ = (Δa_0, Δa_1)ᵀ,
e = (e_x1, …, e_xm, e_y1, …, e_ym)ᵀ = (e_x; e_y)  (2m×1),
Bᵀ = [a_1⁰ I_m, −I_m]  (m×2m),  w = y − (a_0⁰ + a_1⁰ x)  (m×1).
Lagrangian:
L(Δξ, e, λ) = ½ eᵀPe + λᵀ(w + AΔξ + Bᵀe) → min over Δξ, e, λ
Necessary conditions:
∂L/∂e (ê, λ̂, Δξ̂) = P ê + B λ̂ = 0
∂L/∂Δξ (ê, λ̂, Δξ̂) = Aᵀ λ̂ = 0
∂L/∂λ (ê, λ̂, Δξ̂) = Bᵀ ê + A Δξ̂ = −w
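A minimal Matlab sketch of one iteration of this system for the straight-line example, under the stated assumptions (P = I; the approximate values a⁰ are illustrative):

```matlab
% One Gauss-Helmert iteration for the straight-line fit (sketch)
x = (-1:5)';  y = [1.3 0.8 0.9 1.2 2.0 3.5 4.1]';
m = numel(x);
a   = [1; 0.5];                  % approximate parameters a0^0, a1^0 (assumed)
ex0 = zeros(m,1);                % approximate inconsistencies e_x^0
P   = eye(2*m);                  % weights of e = [e_x; e_y]
A   = -[ones(m,1), x - ex0];     % m x 2
Bt  = [a(2)*eye(m), -eye(m)];    % m x 2m
w   = y - (a(1) + a(2)*x);       % misclosure
M   = Bt/P*Bt';                  % B' * P^{-1} * B
dxi = -(A'/M*A)\(A'/M*w);        % parameter increments
lam = M\(A*dxi + w);             % Lagrange multipliers
e   = -P\(Bt'*lam);              % estimated inconsistencies [e_x; e_y]
a   = a + dxi;                   % updated parameters; iterate until convergence
```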
Example: The following results and figure show the previous two examples with weighted observations (P_x = Q_x⁻¹ ≠ I, P_y = Q_y⁻¹ ≠ I, P = diag(P_x, P_y)). We introduce the weights
diag P_x = (3, 9, 8, 4, 5, 7, 10)ᵀ
diag P_y = (2, 8, 7, 5, 10, 8, 6)ᵀ
Both the A-model with inconsistencies e_xi and e_yi and the extended B-model give identical results. Because P ≠ I, the residuals are not orthogonal to the adjusted line; see figure 5.34.
ŷ = (0.208, 0.620, 1.124, 1.633, 2.281, 3.288, 3.895)ᵀ
ê_y = (1.092, 0.180, −0.224, −0.433, −0.281, 0.212, 0.205)ᵀ
x̂ = (−0.521, 0.105, 0.871, 1.644, 2.630, 4.159, 5.081)ᵀ
ê_x = (−0.479, −0.105, 0.129, 0.356, 0.370, −0.159, −0.081)ᵀ
Figure 5.34: Weighted straight-line fit: data points, adjusted line and residuals.
The following two tables (see Niemeier, 2008, pp. 374-375) give coordinates with respect to the source (u, v)-system and the target (x, y)-system. Points 1-4 are common to both systems (control points). We assume inconsistencies in both the source and the target system coordinates; they are uncorrelated and have equal unit variances, i.e. the cofactor matrices are identity matrices.
Mixed model approach I: A-model with inconsistencies in both [x_i, y_i] and [u_i, v_i] coordinates (i = 1, …, p, with p the number of control points).
Approximate values: t_x⁰, t_y⁰, α⁰, λ⁰ for the transformation parameters and ū_i⁰, v̄_i⁰ for the source coordinates.
Linearization:
x_i − e_xi = x_i⁰ + Δt_x + a_i Δα + b_i Δλ + λ⁰cosα⁰ Δū_i + λ⁰sinα⁰ Δv̄_i
y_i − e_yi = y_i⁰ + Δt_y + c_i Δα + d_i Δλ − λ⁰sinα⁰ Δū_i + λ⁰cosα⁰ Δv̄_i
with
x_i⁰ = λ⁰cosα⁰ ū_i⁰ + λ⁰sinα⁰ v̄_i⁰ + t_x⁰,  a_i = −λ⁰sinα⁰ ū_i⁰ + λ⁰cosα⁰ v̄_i⁰,  b_i = cosα⁰ ū_i⁰ + sinα⁰ v̄_i⁰,
y_i⁰ = −λ⁰sinα⁰ ū_i⁰ + λ⁰cosα⁰ v̄_i⁰ + t_y⁰,  c_i = −(λ⁰cosα⁰ ū_i⁰ + λ⁰sinα⁰ v̄_i⁰),  d_i = −sinα⁰ ū_i⁰ + cosα⁰ v̄_i⁰.
In matrix form: l − e = A Δξ, with
l = (x_1−x_1⁰, …, x_p−x_p⁰, y_1−y_1⁰, …, y_p−y_p⁰, u_1−ū_1⁰, …, u_p−ū_p⁰, v_1−v̄_1⁰, …, v_p−v̄_p⁰)ᵀ  (4p×1),
e = (e_x1, …, e_xp, e_y1, …, e_yp, e_u1, …, e_up, e_v1, …, e_vp)ᵀ  (4p×1),
Δξ = (Δt_x, Δt_y, Δα, Δλ, Δū_1, …, Δū_p, Δv̄_1, …, Δv̄_p)ᵀ  ((2p+4)×1),
and the design matrix A (4p×(2p+4)) in block rows (one row per point i):
x-rows: [1 0 a_i b_i | λ⁰cosα⁰ I_p | λ⁰sinα⁰ I_p]
y-rows: [0 1 c_i d_i | −λ⁰sinα⁰ I_p | λ⁰cosα⁰ I_p]
u-rows: [0 0 0 0 | I_p | 0_p]
v-rows: [0 0 0 0 | 0_p | I_p]
We obtain the parameters (after 5 iterations, ‖Δξ̂‖ < 10⁻¹¹). Coordinates of the data points in the target system are listed in Tab. 5.53.
Figure 5.35: 2D similarity transformation: Gauss-Markov model; inconsistencies in both source and target system.
Mixed model approach II: extended B-model with inconsistencies in both [𝑥𝑖 , 𝑦𝑖 ] and [𝑢𝑖 , 𝑣𝑖 ]
coordinates (𝑖 = 1, . . . , 𝑝, with 𝑝 number of control points).
f_xi := x_i − e_xi − [λ cosα (u_i − e_ui) + λ sinα (v_i − e_vi) + t_x] = 0,
f_yi := y_i − e_yi − [−λ sinα (u_i − e_ui) + λ cosα (v_i − e_vi) + t_y] = 0.
Linearization at the Taylor point (e_xi⁰, e_yi⁰, e_ui⁰, e_vi⁰, t_x⁰, t_y⁰, α⁰, λ⁰), per control point i:
(f⁰_xi; f⁰_yi) + [∂(f_xi, f_yi)/∂(e_xi, e_yi, e_ui, e_vi)]₀ (e_xi, e_yi, e_ui, e_vi)ᵀ + [∂(f_xi, f_yi)/∂(t_x, t_y, α, λ)]₀ Δξ = 0,
i.e. w_i (2×1) + B_iᵀ (2×4) e_i (4×1) + A_i (2×4) Δξ (4×1) = 0, where
f⁰_xi = x_i − e_xi⁰ − [λ⁰cosα⁰(u_i − e_ui⁰) + λ⁰sinα⁰(v_i − e_vi⁰) + t_x⁰],
f⁰_yi = y_i − e_yi⁰ − [−λ⁰sinα⁰(u_i − e_ui⁰) + λ⁰cosα⁰(v_i − e_vi⁰) + t_y⁰].
In matrix notation: w + Bᵀe + AΔξ = 0 with
w = (x_1−x_1⁰, …, x_p−x_p⁰, y_1−y_1⁰, …, y_p−y_p⁰)ᵀ  (2p×1),
e = (e_x1, …, e_xp, e_y1, …, e_yp, e_u1, …, e_up, e_v1, …, e_vp)ᵀ  (4p×1),
Bᵀ (2p×4p) = [−I_p  0_p  λ⁰cosα⁰ I_p  λ⁰sinα⁰ I_p;  0_p  −I_p  −λ⁰sinα⁰ I_p  λ⁰cosα⁰ I_p],
A (2p×4) with x-rows [−1 0 λ⁰a_i⁰ −b_i⁰] and y-rows [0 −1 λ⁰b_i⁰ a_i⁰],
Δξ = (Δt_x, Δt_y, Δα, Δλ)ᵀ  (4×1).
where 𝐼𝑝 is the unit matrix of size 𝑝 × 𝑝 and 0𝑝 the zero matrix of the same size. Additionally, we
define 𝑢¯𝑖0 = 𝑢𝑖 − 𝑒𝑢0𝑖 and 𝑣¯𝑖0 = 𝑣𝑖 − 𝑒 𝑣0𝑖 to get the abbreviations
𝑥𝑖0 = 𝜆 0 (cos 𝛼 0𝑢𝑖 + sin 𝛼 0𝑣𝑖 ) + 𝑡𝑥0 , 𝑦𝑖0 = 𝜆 0 (− sin 𝛼 0𝑢𝑖 + cos 𝛼 0𝑣𝑖 ) + 𝑡 𝑦0 ,
𝑎𝑖0 = sin 𝛼 0𝑢¯𝑖0 − cos 𝛼 0𝑣¯𝑖0, 𝑏𝑖0 = cos 𝛼 0𝑢¯𝑖0 + sin 𝛼 0𝑣¯𝑖0 .
Results: using initial approximate values for the unknown parameters, we get the parameters after 7 iterations (‖Δξ̂‖ < 10⁻¹¹).
Mixed model approach I: A-model with inconsistencies in both (x_i, y_i) and (u_i, v_i) coordinates, now for the 6-parameter affine transformation with parameters t_x, t_y, α, λ_1, λ_2, k.
Approximate values: t_x⁰, t_y⁰, α⁰, λ_1⁰, λ_2⁰, k⁰, ū_i⁰, v̄_i⁰.
Linearization:
x_i − e_xi = x_i⁰ + Δt_x + a_i Δα + b_i Δλ_1 + f_i Δk + g Δū_i + h Δv̄_i,
y_i − e_yi = y_i⁰ + Δt_y + c_i Δα + d_i Δλ_2 + q Δū_i + r Δv̄_i,
with
x_i⁰ = λ_1⁰ū_i⁰(cosα⁰ − k⁰sinα⁰) + λ_1⁰v̄_i⁰(sinα⁰ + k⁰cosα⁰) + t_x⁰,
a_i = −λ_1⁰ū_i⁰(sinα⁰ + k⁰cosα⁰) + λ_1⁰v̄_i⁰(cosα⁰ − k⁰sinα⁰),
b_i = ū_i⁰(cosα⁰ − k⁰sinα⁰) + v̄_i⁰(sinα⁰ + k⁰cosα⁰),
f_i = −λ_1⁰ū_i⁰ sinα⁰ + λ_1⁰v̄_i⁰ cosα⁰,  g = λ_1⁰(cosα⁰ − k⁰sinα⁰),  h = λ_1⁰(sinα⁰ + k⁰cosα⁰),
y_i⁰ = −λ_2⁰ū_i⁰ sinα⁰ + λ_2⁰v̄_i⁰ cosα⁰ + t_y⁰,
c_i = −λ_2⁰(ū_i⁰ cosα⁰ + v̄_i⁰ sinα⁰),  d_i = −ū_i⁰ sinα⁰ + v̄_i⁰ cosα⁰,  q = −λ_2⁰ sinα⁰,  r = λ_2⁰ cosα⁰.
In matrix notation: l − e = A Δξ with
l = (x_1−x_1⁰, …, x_p−x_p⁰, y_1−y_1⁰, …, y_p−y_p⁰, u_1−ū_1⁰, …, u_p−ū_p⁰, v_1−v̄_1⁰, …, v_p−v̄_p⁰)ᵀ  (4p×1),
e = (e_x; e_y; e_u; e_v)  (4p×1),
Δξ = (Δt_x, Δt_y, Δα, Δλ_1, Δλ_2, Δk, Δū_1, …, Δū_p, Δv̄_1, …, Δv̄_p)ᵀ  ((2p+6)×1),
A (4p×(2p+6)) in block rows:
x-rows: [1 0 a_i b_i 0 f_i | g I_p | h I_p]
y-rows: [0 1 c_i 0 d_i 0 | q I_p | r I_p]
u-rows: [0 0 0 0 0 0 | I_p | 0_p]
v-rows: [0 0 0 0 0 0 | 0_p | I_p]
We obtain the parameters (after 5 iterations, ‖Δξ̂‖ < 10⁻¹¹). Coordinates of the data points in the target system are listed in Tab. 5.54.
Figure 5.36: 6-parameter affine transformation: Gauss-Markov model; inconsistencies in both source and target systems.
Mixed model approach II: Extended B-model with inconsistencies in both (𝑥𝑖 , 𝑦𝑖 ) and (𝑢𝑖 , 𝑣𝑖 )
coordinates (𝑖 = 1, . . . , 𝑝 with 𝑝 number of control points).
f_xi := x_i − e_xi − [λ_1(u_i − e_ui)(cosα − k sinα) + λ_1(v_i − e_vi)(sinα + k cosα) + t_x] = 0
f_yi := y_i − e_yi − [−λ_2(u_i − e_ui) sinα + λ_2(v_i − e_vi) cosα + t_y] = 0
Linearization using Taylor point 𝑒𝑥0𝑖 , 𝑒 𝑦0 𝑖 , 𝑒𝑢0𝑖 , 𝑒 𝑣0𝑖 , 𝑡𝑥0 , 𝑡 𝑦0 , 𝛼 0 , 𝜆10 , 𝜆20 , 𝑘 0 so that
𝑒𝑥𝑖 = 𝑒𝑥0𝑖 + Δ𝑒𝑥𝑖 , 𝑒𝑢𝑖 = 𝑒𝑢0𝑖 + Δ𝑒𝑢𝑖 , 𝑡𝑥 = 𝑡𝑥0 + Δ𝑡𝑥 , 𝛼 = 𝛼 0 + Δ𝛼, 𝜆2 = 𝜆20 + Δ𝜆2,
𝑒 𝑦𝑖 = 𝑒 𝑦0 𝑖 + Δ𝑒 𝑦𝑖 , 𝑒 𝑣𝑖 = 𝑒 𝑣0𝑖 + Δ𝑒 𝑣𝑖 , 𝑡 𝑦 = 𝑡 𝑦0 + Δ𝑡 𝑦 , 𝑘 = 𝑘 0 + Δ𝑘, 𝜆1 = 𝜆10 + Δ𝜆1 .
(f⁰_xi; f⁰_yi) + [∂(f_xi, f_yi)/∂(e_xi, e_yi, e_ui, e_vi)]₀ (Δe_xi, Δe_yi, Δe_ui, Δe_vi)ᵀ
 + [∂(f_xi, f_yi)/∂(t_x, t_y, α, λ_1, λ_2, k)]₀ (Δt_x, Δt_y, Δα, Δλ_1, Δλ_2, Δk)ᵀ = 0,
where
f⁰_xi = x_i − e_xi⁰ − [λ_1⁰(u_i − e_ui⁰)(cosα⁰ − k⁰sinα⁰) + λ_1⁰(v_i − e_vi⁰)(sinα⁰ + k⁰cosα⁰) + t_x⁰]
and f⁰_yi = y_i − e_yi⁰ − [−λ_2⁰(u_i − e_ui⁰) sinα⁰ + λ_2⁰(v_i − e_vi⁰) cosα⁰ + t_y⁰].
In matrix notation: w + Bᵀe + AΔξ = 0 with
w = (x_1−x_1⁰, …, x_p−x_p⁰, y_1−y_1⁰, …, y_p−y_p⁰)ᵀ  (2p×1),
e = (e_x; e_y; e_u; e_v)  (4p×1),
Bᵀ (2p×4p) = [−I_p  0_p  λ_1⁰a⁰ I_p  λ_1⁰b⁰ I_p;  0_p  −I_p  −λ_2⁰sinα⁰ I_p  λ_2⁰cosα⁰ I_p],
A (2p×6) with x-rows [−1 0 d_{a,i} d_{b,i} 0 d_{c,i}] and y-rows [0 −1 d_{d,i} 0 d_{e,i} 0],
Δξ = (Δt_x, Δt_y, Δα, Δλ_1, Δλ_2, Δk)ᵀ  (6×1),
where 𝐼𝑝 is the unit matrix of size 𝑝 × 𝑝 and 0𝑝 the zero matrix of the same size. Additionally, we
put the abbreviations 𝑎 0 = cos 𝛼 0 − 𝑘 0 sin 𝛼 0 , 𝑏 0 = sin 𝛼 0 + 𝑘 0 cos 𝛼 0 , 𝑢¯𝑖0 = 𝑢𝑖 − 𝑒𝑢0𝑖 and 𝑣¯𝑖0 = 𝑣𝑖 − 𝑒 𝑣0𝑖 to
get
x_i⁰ = λ_1⁰(a⁰u_i + b⁰v_i) + t_x⁰,  y_i⁰ = λ_2⁰(−sinα⁰ u_i + cosα⁰ v_i) + t_y⁰,
d_{a,i} = λ_1⁰(b⁰ū_i⁰ − a⁰v̄_i⁰),  d_{b,i} = −(a⁰ū_i⁰ + b⁰v̄_i⁰),  d_{c,i} = λ_1⁰(sinα⁰ ū_i⁰ − cosα⁰ v̄_i⁰),
d_{d,i} = λ_2⁰(cosα⁰ ū_i⁰ + sinα⁰ v̄_i⁰),  d_{e,i} = sinα⁰ ū_i⁰ − cosα⁰ v̄_i⁰.
Results: with initial approximate values for the unknown parameters and inconsistencies, we get the parameters after 4 iterations (‖Δξ̂‖ < 10⁻¹¹).
Mixed model approach I: A-model with inconsistencies in both [x_i, y_i] and [u_i, v_i] coordinates, for a 6-parameter transformation with two rotation angles ε, δ and two scale factors λ_1, λ_2.
Approximate values: t_x⁰, t_y⁰, ε⁰, δ⁰, λ_1⁰, λ_2⁰, ū_i⁰, v̄_i⁰.
Linearization:
x_i − e_xi = [λ_1⁰ū_i⁰cosε⁰ − λ_2⁰v̄_i⁰sinδ⁰ + t_x⁰] + Δt_x − λ_1⁰ū_i⁰sinε⁰ Δε − λ_2⁰v̄_i⁰cosδ⁰ Δδ
  + ū_i⁰cosε⁰ Δλ_1 − v̄_i⁰sinδ⁰ Δλ_2 + λ_1⁰cosε⁰ Δū_i − λ_2⁰sinδ⁰ Δv̄_i
y_i − e_yi = [λ_1⁰ū_i⁰sinε⁰ + λ_2⁰v̄_i⁰cosδ⁰ + t_y⁰] + Δt_y + λ_1⁰ū_i⁰cosε⁰ Δε − λ_2⁰v̄_i⁰sinδ⁰ Δδ
  + ū_i⁰sinε⁰ Δλ_1 + v̄_i⁰cosδ⁰ Δλ_2 + λ_1⁰sinε⁰ Δū_i + λ_2⁰cosδ⁰ Δv̄_i
u_i − e_ui = ū_i⁰ + Δū_i
v_i − e_vi = v̄_i⁰ + Δv̄_i
(the bracketed terms are the approximate values x_i⁰ and y_i⁰).
In matrix notation: l − e = A Δξ with
l = (x_1−x_1⁰, …, x_p−x_p⁰, y_1−y_1⁰, …, y_p−y_p⁰, u_1−u_1⁰, …, u_p−u_p⁰, v_1−v_1⁰, …, v_p−v_p⁰)ᵀ  (4p×1),
e = (e_x; e_y; e_u; e_v)  (4p×1),
Δξ = (Δt_x, Δt_y, Δε, Δδ, Δλ_1, Δλ_2, Δū_1, …, Δū_p, Δv̄_1, …, Δv̄_p)ᵀ  ((2p+6)×1),
A (4p×(2p+6)) in block rows:
x-rows: [1 0 −a_s ū_i⁰ −b_c v̄_i⁰ c_{c,i} −d_{s,i} | a_c I_p | −b_s I_p]
y-rows: [0 1 a_c ū_i⁰ −b_s v̄_i⁰ c_{s,i} d_{c,i} | a_s I_p | b_c I_p]
u-rows: [0 0 0 0 0 0 | I_p | 0_p]
v-rows: [0 0 0 0 0 0 | 0_p | I_p]
where
𝑎𝑐 = 𝜆10 cos 𝜀 0, 𝑏𝑐 = 𝜆20 cos 𝛿 0, 𝑐𝑐,𝑖 = 𝑢¯𝑖0 cos 𝜀 0, 𝑑𝑐,𝑖 = 𝑣¯𝑖0 cos 𝛿 0,
𝑎𝑠 = 𝜆10 sin 𝜀 0, 𝑏𝑠 = 𝜆20 sin 𝛿 0, 𝑐𝑠,𝑖 = 𝑢¯𝑖0 sin 𝜀 0, 𝑑𝑠,𝑖 = 𝑣¯𝑖0 sin 𝛿 0 .
Parameters (after 7 iterations, ‖Δξ̂‖ < 10⁻¹¹):
t̂_x = 5388.876 m, ε̂ = 5′7.89″, λ̂_1 = 1.000 409 734,
t̂_y = 10 346.871 m, δ̂ = 5′2.06″, λ̂_2 = 1.000 406 883, êᵀPê = 0.000 993 2 m².
Mixed model approach II: extended B-model with inconsistencies in both [x_i, y_i] and [u_i, v_i] coordinates.
f_xi := x_i − e_xi − [λ_1(u_i − e_ui) cosε − λ_2(v_i − e_vi) sinδ + t_x] = 0
f_yi := y_i − e_yi − [λ_1(u_i − e_ui) sinε + λ_2(v_i − e_vi) cosδ + t_y] = 0
Linearization:
(f⁰_xi; f⁰_yi) + [∂(f_xi, f_yi)/∂(e_xi, e_yi, e_ui, e_vi)]₀ (Δe_xi, Δe_yi, Δe_ui, Δe_vi)ᵀ
 + [∂(f_xi, f_yi)/∂(t_x, t_y, ε, δ, λ_1, λ_2)]₀ (Δt_x, Δt_y, Δε, Δδ, Δλ_1, Δλ_2)ᵀ = 0,
where
f⁰_xi = x_i − e_xi⁰ − [λ_1⁰(u_i − e_ui⁰) cosε⁰ − λ_2⁰(v_i − e_vi⁰) sinδ⁰ + t_x⁰]
and f⁰_yi = y_i − e_yi⁰ − [λ_1⁰(u_i − e_ui⁰) sinε⁰ + λ_2⁰(v_i − e_vi⁰) cosδ⁰ + t_y⁰].
In matrix notation: w + Bᵀe + AΔξ = 0 with
w = (x_1−x_1⁰, …, x_p−x_p⁰, y_1−y_1⁰, …, y_p−y_p⁰)ᵀ  (2p×1),
e = (e_x; e_y; e_u; e_v)  (4p×1),
Bᵀ (2p×4p) = [−I_p  0_p  a_c I_p  −b_s I_p;  0_p  −I_p  a_s I_p  b_c I_p],
A (2p×6) with x-rows [−1 0 a_s ū_i⁰ b_c v̄_i⁰ −c_{c,i} d_{s,i}] and y-rows [0 −1 −a_c ū_i⁰ b_s v̄_i⁰ −c_{s,i} −d_{c,i}],
Δξ = (Δt_x, Δt_y, Δε, Δδ, Δλ_1, Δλ_2)ᵀ  (6×1),
where I_p is the unit matrix of size p × p and 0_p the zero matrix of the same size. Additionally, we use the following abbreviations:
a_c = λ_1⁰cosε⁰,  b_c = λ_2⁰cosδ⁰,  c_{c,i} = ū_i⁰cosε⁰,  d_{c,i} = v̄_i⁰cosδ⁰,  ū_i⁰ = u_i − e_ui⁰,
a_s = λ_1⁰sinε⁰,  b_s = λ_2⁰sinδ⁰,  c_{s,i} = ū_i⁰sinε⁰,  d_{s,i} = v̄_i⁰sinδ⁰,  v̄_i⁰ = v_i − e_vi⁰.
Results: with the following initial approximate values for unknown parameters
𝑡𝑥0 = 5500 m, 𝑡 𝑦0 = 10 200 m, 𝜀 0 = 1.5′′, 𝛿 0 = 3.5′′, 𝜆10 = 1, 𝜆20 = 1,
𝑒𝑥0𝑖 = 𝑒 𝑦0 𝑖 = 𝑒𝑢0𝑖 = 𝑒 𝑣0𝑖 = 0 ∀𝑖 .
we get the parameters (after 20 iterations, ‖Δξ̂‖ < 10⁻¹¹):
𝑡ˆ𝑥 = 5388.876 m, 𝜀ˆ = 5′7.89′′, 𝜆ˆ1 = 1.000 409 734,
𝑡ˆ𝑦 = 10 346.871 m, 𝛿ˆ = 5′2.06′′, 𝜆ˆ2 = 1.000 406 882, 𝑒ˆT 𝑃 𝑒ˆ = 0.000 993 2 m2 .
Example: Best fitting ellipse (here: principal axes aligned with the coordinate axes) with unknown semi-major axis a, semi-minor axis b and centre coordinates (x_M, y_M); the observations x_i and y_i are inconsistent.
f(a, b, x_M, y_M, x_i − e_xi, y_i − e_yi) = (x_i − e_xi − x_M)²/a² + (y_i − e_yi − y_M)²/b² − 1 = 0,
with unknown parameters ξ = (x_M, y_M, a, b) and observations y minus inconsistencies e.
Possible restriction: Best fitting ellipse shall pass through the point (𝑥 P, 𝑦P )
g(a, b, x_M, y_M) = (x_P − x_M)²/a² + (y_P − y_M)²/b² − 1 = 0, with g a function of ξ only.
f(ξ, e) = f(ξ⁰, e⁰) + ∂f/∂ξ|_{ξ⁰,e⁰} Δξ + ∂f/∂e|_{ξ⁰,e⁰} e + O = 0  ⟹  w + A Δξ + Bᵀe = 0
g(ξ) = g(ξ⁰) + ∂g/∂ξ|_{ξ⁰} Δξ + O = 0  ⟹  w_R + R Δξ = 0
L_R(Δξ, e, λ, λ_R) = ½ eᵀWe + λᵀ(A Δξ + Bᵀe + w) + λ_Rᵀ(R Δξ + w_R) → min over Δξ, e, λ, λ_R
(sizes: e is p×1, W is p×p, λ is m×1, A is m×n, Bᵀ is m×p, λ_R is r×1, R is r×n, w_R is r×1)
Necessary conditions:
∂L_R/∂Δξ (Δξ̂, ê, λ̂, λ̂_R) = 0 ⟹ Aᵀλ̂ + Rᵀλ̂_R = 0
∂L_R/∂e (Δξ̂, ê, λ̂, λ̂_R) = 0 ⟹ Wê + Bλ̂ = 0
∂L_R/∂λ (Δξ̂, ê, λ̂, λ̂_R) = 0 ⟹ AΔξ̂ + Bᵀê = −w
∂L_R/∂λ_R (Δξ̂, ê, λ̂, λ̂_R) = 0 ⟹ RΔξ̂ = −w_R

In matrix form ((p+m+n+r)×(p+m+n+r) system):
[W B 0 0; Bᵀ 0 A 0; 0 Aᵀ 0 Rᵀ; 0 0 R 0] (ê; λ̂; Δξ̂; λ̂_R) = (0; −w; 0; −w_R)
Eliminating ê from the second row (ê = −W⁻¹Bλ̂):
[W B 0 0; 0 −BᵀW⁻¹B A 0; 0 Aᵀ 0 Rᵀ; 0 0 R 0] (ê; λ̂; Δξ̂; λ̂_R) = (0; −w; 0; −w_R)
The 2nd row, multiplied with Aᵀ(BᵀW⁻¹B)⁻¹ (from the left), is added to the 3rd row:
[W B 0 0; 0 −BᵀW⁻¹B A 0; 0 0 Aᵀ(BᵀW⁻¹B)⁻¹A Rᵀ; 0 0 R 0] (ê; λ̂; Δξ̂; λ̂_R) = (0; −w; −Aᵀ(BᵀW⁻¹B)⁻¹w; −w_R)
⟹ [Aᵀ(BᵀW⁻¹B)⁻¹A  Rᵀ;  R  0] (Δξ̂; λ̂_R) = (−Aᵀ(BᵀW⁻¹B)⁻¹w; −w_R)
Case 1: 𝐴T 𝐵 T𝑊 −1 𝐵 𝐴 = 𝐴T 𝑀 −1𝐴 is a full-rank matrix. =⇒ Use partitioning formula:
𝑄 22 −1 𝑁 ) −1
= (𝑁 22 − 𝑁 21 𝑁 11
12
𝑄 12
−1 𝑁 𝑄
= −𝑁 11
𝑁 11 𝑁 12 𝑄 11 𝑄 12 𝐼 0 12 22
= =⇒
𝑁 21 𝑁 22 𝑄 21 𝑄 22 0𝐼 −1
= −𝑄 22 𝑁 21 𝑁 11
𝑄 21
𝑄 11
−1 + 𝑁 −1 𝑁 𝑄 𝑁 𝑁 −1
= 𝑁 11 11 12 22 21 11
N_11 = Aᵀ(BᵀW⁻¹B)⁻¹A = AᵀM⁻¹A
N_12 = Rᵀ
N_21 = N_12ᵀ = R
N_22 = 0
Q_22 = [0 − R(AᵀM⁻¹A)⁻¹Rᵀ]⁻¹ = −[R(AᵀM⁻¹A)⁻¹Rᵀ]⁻¹
Q_12 = (AᵀM⁻¹A)⁻¹Rᵀ[R(AᵀM⁻¹A)⁻¹Rᵀ]⁻¹ = −N_11⁻¹ N_12 Q_22
Q_21 = Q_12ᵀ
Q_11 = (AᵀM⁻¹A)⁻¹{I − Rᵀ[R(AᵀM⁻¹A)⁻¹Rᵀ]⁻¹R(AᵀM⁻¹A)⁻¹} = N_11⁻¹ − Q_12 N_12ᵀ N_11⁻¹

Δξ̂ = −(AᵀM⁻¹A)⁻¹AᵀM⁻¹w + δΔξ̂ = Δξ̂ (without restrictions g(ξ) = 0) + δΔξ̂
From −w = −Mλ̂ + AΔξ̂:
λ̂ = M⁻¹(AΔξ̂ + w)
ê = W⁻¹Bλ̂ = W⁻¹BM⁻¹(AΔξ̂ + w)
N R + RᵀS = I   (5.3)
N Sᵀ + RᵀQ = 0   (5.4)
R R = 0   (5.5)
R Sᵀ = I   (5.6)
AᵀM⁻¹AH = 0 ⟺ NHᵀ = 0 ⟺ HN = 0 (N is symmetric).
H · (5.4): H N Sᵀ + H Rᵀ Q = 0, with HNSᵀ = 0 ⟹ HRᵀQ = 0; HRᵀ full rank ⟹ Q = 0.
Δξ̂ = −R AᵀN⁻¹w + Sᵀw_R
 = −(N + RᵀR)⁻¹AᵀM⁻¹w + (N + RᵀR)⁻¹Rᵀ(HRᵀ)⁻¹HAᵀM⁻¹w − Sᵀw_R, where the middle term vanishes.
If w_R = 0:
Δξ̂ = −(N + RᵀR)⁻¹AᵀM⁻¹w
Δξ̂ ⟶ λ̂ = M⁻¹(AΔξ̂ + w)
ê = W⁻¹Bλ̂ = −W⁻¹BM⁻¹[A(N + RᵀR)⁻¹AᵀM⁻¹ − I]w
Linearization of the ellipse condition at the Taylor point (ξ⁰, e⁰):

f(a, b, x_M, y_M, x_i − e_xi, y_i − e_yi) =
 (x_i − e_xi⁰ − x_M⁰)²/a_0² + (y_i − e_yi⁰ − y_M⁰)²/b_0² − 1
 + 2(x_i − e_xi⁰ − x_M⁰)/a_0² · e_xi⁰ + 2(y_i − e_yi⁰ − y_M⁰)/b_0² · e_yi⁰
 − 2(x_i − e_xi⁰ − x_M⁰)/a_0² · Δx_M − 2(y_i − e_yi⁰ − y_M⁰)/b_0² · Δy_M
 − 2(x_i − e_xi⁰ − x_M⁰)/a_0² · e_xi − 2(y_i − e_yi⁰ − y_M⁰)/b_0² · e_yi
 − 2(x_i − e_xi⁰ − x_M⁰)²/a_0³ · Δa − 2(y_i − e_yi⁰ − y_M⁰)²/b_0³ · Δb = 0

Sorting into inconsistencies, parameter increments and misclosure gives, per observation point i:
 −2[(x_i − e_xi⁰ − x_M⁰)/a_0², (y_i − e_yi⁰ − y_M⁰)/b_0²] (e_xi, e_yi)ᵀ
 − 2[(x_i − e_xi⁰ − x_M⁰)/a_0², (y_i − e_yi⁰ − y_M⁰)/b_0², (x_i − e_xi⁰ − x_M⁰)²/a_0³, (y_i − e_yi⁰ − y_M⁰)²/b_0³] (Δx_M, Δy_M, Δa, Δb)ᵀ
 + (x_i − e_xi⁰ − x_M⁰)²/a_0² + (y_i − e_yi⁰ − y_M⁰)²/b_0² − 1 + 2(x_i − e_xi⁰ − x_M⁰)/a_0² e_xi⁰ + 2(y_i − e_yi⁰ − y_M⁰)/b_0² e_yi⁰ = 0
⟹ Bᵀe + AΔξ + w = 0, with

Bᵀ (m×p, p = 2m) = −2 · blockdiag of rows [(x_i − e_xi⁰ − x_M⁰)/a_0², (y_i − e_yi⁰ − y_M⁰)/b_0²], i = 1, …, m,

A (m×4) with row i:
A_i = −2 [(x_i − e_xi⁰ − x_M⁰)/a_0², (y_i − e_yi⁰ − y_M⁰)/b_0², (x_i − e_xi⁰ − x_M⁰)²/a_0³, (y_i − e_yi⁰ − y_M⁰)²/b_0³],

e = (e_x1, e_y1, e_x2, e_y2, …, e_xm, e_ym)ᵀ  (p×1),
Δξ = (Δx_M, Δy_M, Δa, Δb)ᵀ  (4×1),

w (m×1) with entries
w_i = (x_i − e_xi⁰ − x_M⁰)²/a_0² + (y_i − e_yi⁰ − y_M⁰)²/b_0² + 2(x_i − e_xi⁰ − x_M⁰)/a_0² e_xi⁰ + 2(y_i − e_yi⁰ − y_M⁰)/b_0² e_yi⁰ − 1.
⟹ [W B 0; Bᵀ 0 A; 0 Aᵀ 0] (ê; λ̂; Δξ̂) = (0; −w; 0)
(sizes: W is 2m×2m, B is 2m×m, A is m×4; the system is (3m+4)×(3m+4))

Wê + Bλ̂ = 0 ⟹ ê = −W⁻¹Bλ̂
Bᵀê + AΔξ̂ = −w ⟹ −BᵀW⁻¹Bλ̂ + AΔξ̂ = −w
⟹ λ̂ = (BᵀW⁻¹B)⁻¹(AΔξ̂ + w)
⟹ Aᵀ(BᵀW⁻¹B)⁻¹AΔξ̂ + Aᵀ(BᵀW⁻¹B)⁻¹w = 0
⟹ Δξ̂ = −[Aᵀ(BᵀW⁻¹B)⁻¹A]⁻¹ Aᵀ(BᵀW⁻¹B)⁻¹w
⟹ ê = −W⁻¹B(BᵀW⁻¹B)⁻¹(AΔξ̂ + w)
 = W⁻¹B(BᵀW⁻¹B)⁻¹[A(Aᵀ(BᵀW⁻¹B)⁻¹A)⁻¹Aᵀ(BᵀW⁻¹B)⁻¹ − I]w
Numerics
x = (0, 50, 90, 120, 130, −130, −100, −50, 0)ᵀ
y = (120, 110, 80, 0, −50, −50, 60, 100, −110)ᵀ
Approximate values:
x_M⁰ = y_M⁰ = 0, a⁰ = b⁰ = 120.
Parameters (after 10 iterations: ‖Δξ̂‖ < 10⁻¹², ‖Δê‖ < 10⁻¹²):
ê_x = (0.026, 1.793, −0.627, −10.466, 9.322, −8.324, 6.771, 1.534, −0.030)ᵀ
ê_y = (6.813, 5.089, −0.736, −0.224, −4.355, −3.933, −5.583, −4.142, 7.072)ᵀ
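A Matlab sketch of the resulting iteration for this ellipse example (W = I; the inconsistency vector is ordered e = (e_x; e_y) here instead of the interleaved ordering above):

```matlab
% Iterative best-fitting ellipse via the derived A, B, w (sketch, W = I)
x = [0 50 90 120 130 -130 -100 -50 0]';
y = [120 110 80 0 -50 -50 60 100 -110]';
m = numel(x);  ex = zeros(m,1);  ey = zeros(m,1);
xi = [0; 0; 120; 120];                           % xM^0, yM^0, a^0, b^0
for it = 1:10
    dx = x - ex - xi(1);  dy = y - ey - xi(2);
    a = xi(3);  b = xi(4);
    A  = -2*[dx/a^2, dy/b^2, dx.^2/a^3, dy.^2/b^3];   % m x 4
    Bt = -2*[diag(dx/a^2), diag(dy/b^2)];             % m x 2m, e = [e_x; e_y]
    w  = dx.^2/a^2 + dy.^2/b^2 - 1 + 2*dx/a^2.*ex + 2*dy/b^2.*ey;
    M  = Bt*Bt';                                      % B'*W^{-1}*B with W = I
    dxi = -(A'/M*A)\(A'/M*w);                         % parameter increments
    e   = -Bt'*(M\(A*dxi + w));                       % e_hat = -W^{-1}*B*lambda_hat
    xi = xi + dxi;  ex = e(1:m);  ey = e(m+1:end);
end
```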
(Figure: best fitting ellipse and data points; axes x/m, y/m.)
Example 2: as example 1, but with the additional (linear) restriction g(ξ) = 0 such that â = b̂ (best fitting circle).
Parameters (after 10 iterations: ‖Δξ̂‖ < 10⁻¹², ‖Δê‖ < 10⁻¹²):
ê_x = (−0.009, 0.404, −0.509, −3.992, 13.118, −15.134, 2.798, 3.145, 0.178)ᵀ
ê_y = (0.987, 0.943, −0.480, −0.132, −4.690, −5.318, −1.769, −6.394, 16.854)ᵀ
(Figure: best fitting circle and data points; axes x/m, y/m.)
Example 3: as example 1, but with the additional (non-linear) restriction g(ξ) = 0 such that the best fitting ellipse passes through the point x_P = 100, y_P = −100.
g(ξ) = g(x_M, y_M, a, b) = (x_P − x_M)²/a² + (y_P − y_M)²/b² − 1 = 0
⟹ R = −2 [(x_P − x_M⁰)/a_0², (y_P − y_M⁰)/b_0², (x_P − x_M⁰)²/a_0³, (y_P − y_M⁰)²/b_0³],
w_R = (x_P − x_M⁰)²/a_0² + (y_P − y_M⁰)²/b_0² − 1
Parameters (after 10 iterations: ‖Δξ̂‖ < 10⁻¹², ‖Δê‖ < 10⁻¹²):
ê_x = (−0.263, 1.262, −2.361, −18.668, −2.701, −6.996, 2.583, 0.569, 1.191)ᵀ
ê_y = (7.401, 3.985, −2.988, −2.287, 0.966, −2.275, −2.051, −1.336, 26.079)ᵀ
(Figure: best fitting ellipse through (x_P, y_P) and data points; axes x/m, y/m.)
6 Statistics

6.1 Expectation of sum of squared residuals
E{êᵀQ_y⁻¹ê} = Σ_{i=1}^m Σ_{j=1}^m (P_y)_ij E{ê_i ê_j}
 = Σ_i Σ_j (P_y)_ij (Q_ê)_ij
 = Σ_i Σ_j (P_y)_ij (Q_ê)_ji = Σ_i (P_y Q_ê)_ii
 = trace(P_y Q_ê)
 = trace(P_y (Q_y − Q_ŷ))
 = trace(I_m − P_y Q_ŷ)
 = m − trace(P_y Q_ŷ)

trace(P_y Q_ŷ) = trace(Q_ŷ P_y)
 = trace(A Q_x̂ Aᵀ P_y)
 = trace(A (AᵀP_y A)⁻¹ Aᵀ P_y)
 = trace(P_A)

Linear algebra: trace X = sum of eigenvalues of X.
Q: eigenvalues of a projector? P_A z = λz (special eigenvalue problem);
P_A P_A z = P_A z = λz and P_A P_A z = λP_A z = λ²z
⟹ λ²z = λz ⟹ λ(λ − 1)z = 0 ⟹ λ ∈ {0, 1}
⟹ trace P_A = number of eigenvalues equal to 1
Since dim R(A) = n, trace P_A = n, and hence
E{êᵀP_y ê} = m − n (= r, the redundancy).
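A small Matlab simulation can verify this result numerically; the design matrix, sample size and number of repetitions below are illustrative assumptions:

```matlab
% Numerical check of E{e_hat'*P_y*e_hat} = m - n (sketch with simulated data)
m = 50;  n = 4;  N = 2000;
A  = randn(m, n);                      % any full-rank design matrix (assumed)
Py = eye(m);                           % P_y = Q_y^{-1} = I, i.e. e ~ N(0, I)
T  = zeros(N, 1);
for k = 1:N
    yobs = A*randn(n,1) + randn(m,1);  % true signal plus unit-variance noise
    xhat = (A'*Py*A)\(A'*Py*yobs);
    ehat = yobs - A*xhat;
    T(k) = ehat'*Py*ehat;
end
mean(T)                                % approximately m - n = 46
```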
6.2 Basics
Random variable: x; realization: x.
Probability density function (pdf) (German: Wahrscheinlichkeitsdichte)
Figure 6.1: (a) Probability density function. (b) Probability calculation by integrating over the pdf. (c) Interval.
∫_{−∞}^{∞} f(x) dx = 1
Note: not necessarily the normal distribution.
E{x} =: μ_x = ∫_{−∞}^{∞} x f(x) dx
D{x} =: σ_x² = ∫_{−∞}^{∞} (x − μ_x)² f(x) dx = E{(x − μ_x)²}
Cumulative distribution function (cdf) (German: Verteilungsfunktion)
F(x) = ∫_{−∞}^{x} f(y) dy = P(x < x)
e.g.
P(a ≤ x ≤ b) = ∫_a^b f(x) dx = ∫_{−∞}^b f(x) dx − ∫_{−∞}^a f(x) dx = F(b) − F(a)
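In Matlab this probability can be evaluated with normcdf, for example:

```matlab
% P(a <= x <= b) = F(b) - F(a) for x ~ N(mu, sigma^2) (illustrative values)
mu = 0;  sigma = 1;
a = -1;  b = 2;
P = normcdf(b, mu, sigma) - normcdf(a, mu, sigma)   % approx. 0.8186
```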
6.3 Hypotheses
H: x ∼ f(x). Assumption: x is distributed with the given f(x).
P(a ≤ x ≤ b) = 1 − α = confidence level (Sicherheitswahrscheinlichkeit)
P(x ∉ [a; b]) = α = significance level (Irrtumswahrscheinlichkeit)
[a; b] = confidence region (Konfidenzbereich, Annahmebereich)
[−∞; a] ∪ [b; ∞] = critical region (kritischer Bereich; Ablehnungs- oder Verwerfungsbereich)
Now: given a realization x of x. If a ≤ x ≤ b, there is no reason to reject the hypothesis; otherwise: reject the hypothesis. E.g.
ê = P_A⊥ y
Q_ê = P_A⊥ Q_y = Q_y − Q_ŷ
P(μ − σ ≤ x ≤ μ + σ) = 68.3% ⟹ α = 0.317
P(μ − 2σ ≤ x ≤ μ + 2σ) = 95.5% ⟹ α = 0.045
P(μ − 3σ ≤ x ≤ μ + 3σ) = 99.7% ⟹ α = 0.003
Matlab: normpdf
1 − α = F(b) − F(a) = F(μ + kσ) − F(μ − kσ)
k = critical value (kritischer Wert)
Given α: determine a, b.
Matlab: norminv
Rejection of hypothesis
=⇒ an alternative hypothesis must hold
𝐻0 : 𝑥 ∼ 𝑓0 (𝑥) null-hypothesis
𝐻𝑎 : 𝑥 ∼ 𝑓𝑎 (𝑥) alternative hypothesis
| H_0 true | H_0 false
x ∈ K ⟹ reject H_0 | wrong ⟹ type I error (false alarm), P(x ∈ K | H_0) = α | OK
x ∉ K ⟹ accept H_0 | OK | wrong ⟹ type II error (failed alarm), P(x ∉ K | H_a) = β
6.4 Distributions
1 1 2
𝑥 ∼ 𝑁 (0, 1), 𝑓 (𝑥) = √ 𝑒 − 2 𝑥 ,
2𝜋
E 𝑥 = 0,
D 𝑥 = E 𝑥2 = 1 ←− 𝑥 2 ∼ 𝜒 2 (1, 0), E 𝑥2 = 1 .
For a k-vector x ∼ N(0, I):
f(x) = (2π)^{−k/2} exp(−½ xᵀx),
E{x} = 0 (k-vector),  E{xxᵀ} = I,
xᵀx = x_1² + x_2² + … + x_k² ∼ χ²(k, 0),
E{xᵀx} = E{x_1²} + … + E{x_k²} = k.

x ∼ N(0, Q_x) with Q_x = diag(σ_1², σ_2², …, σ_k²):
x_i ∼ N(0, σ_i²),  f(x_i) = 1/(√(2π) σ_i) exp(−½ x_i²/σ_i²),
y_i = x_i/σ_i ∼ N(0, 1),
xᵀQ_x⁻¹x = x_1²/σ_1² + x_2²/σ_2² + … + x_k²/σ_k² ∼ χ²(k, 0) ⟹ E{xᵀQ_x⁻¹x} = k,
f(x) = (2π)^{−k/2} (det Q_x)^{−1/2} exp(−½ xᵀQ_x⁻¹x).

If E{x} = μ:
E{xᵀx} = k + λ, with λ = μᵀμ = μ_1² + μ_2² + … + μ_k² the non-centrality parameter.
(Sketches: pdfs of N(0, 1) versus N(μ, 1), and of χ²(k, 0) versus χ²(k, λ).)
General case: x ∼ N(μ, Q_x),
f(x) = (2π)^{−k/2} (det Q_x)^{−1/2} exp(−½ (x − μ)ᵀQ_x⁻¹(x − μ)),
E{x} = μ,  D{x} = Q_x,
E{xᵀQ_x⁻¹x} = k + λ,  λ = μᵀQ_x⁻¹μ.
7 Statistical Testing
ê = y − ŷ = P_A⊥ y
E{ê} = 0,  D{ê} = Q_ê = Q_y − Q_ŷ = P_A⊥ Q_y (P_A⊥)ᵀ,  so ê ∼ N(0, Q_ê).
Question: is êᵀQ_ê⁻¹ê ∼ χ²(m, 0) and thus E{êᵀQ_ê⁻¹ê} = m?
No, because Q_ê is singular and therefore not invertible. However, as shown in 6.1:
E{êᵀQ_y⁻¹ê} = trace(Q_y⁻¹ E{êêᵀ}) = trace(Q_y⁻¹(Q_y − Q_ŷ)) = m − n,  with E{êêᵀ} = Q_ê.
Test statistic
As residuals tell us something about the mismatch between data and model, they will be the basis
for our testing. In particular the sum of squared estimated residuals will be used as our test statistic
𝑇:
T = êᵀQ_y⁻¹ê ∼ χ²(m − n, 0),  E{T} = m − n.
Thus, we have a test statistic and we know its distribution. This is the starting point for global model
testing.
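A minimal Matlab sketch of this global model test; A, Q_y, y and the significance level α are assumed given:

```matlab
% Global model test (sketch): reject H0 if T > k_alpha
[m, n] = size(A);
alpha = 0.05;                          % chosen significance level (assumed)
xhat = (A'/Qy*A)\(A'/Qy*y);            % least-squares estimate
ehat = y - A*xhat;
T    = ehat'/Qy*ehat;                  % test statistic, ~ chi^2(m-n, 0) under H0
k_alpha = chi2inv(1 - alpha, m - n);   % critical value
reject  = T > k_alpha;                 % true -> reject H0
```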
T > k_α: reject H_0
In case T (the realization of T) is larger than a chosen critical value k_α (based on α), the null hypothesis H_0 should be rejected. At this point we have not yet formulated an alternative hypothesis H_a. The rejection may be due to an error in the functional model or in the stochastic model.
Figure 7.1: Distribution of the test statistic 𝑇 under the null and alternative hypotheses. (Non-
centrality parameter 𝜆 to be explained later)
Q_x̂ = (AᵀQ_y⁻¹A)⁻¹, with Q_y = σ²Q and P_y = Q_y⁻¹ = σ⁻²Q⁻¹.
Thus the estimate x̂ is independent of the variance factor and therefore insensitive to this kind of stochastic model error. However, the covariance matrix Q_x̂ is scaled by the variance factor.
7.2 Testing procedure
H_0: E{y} = Ax, D{y} = Q_y  |  H_a: E{y} = Ax + C∇ = [A C](x; ∇), D{y} = Q_y
↓ x̂_0  |  ↓ x̂_a, ∇̂
ŷ_0 = Ax̂_0  |  ŷ_a = Ax̂_a + C∇̂
↓ ê_0  |  ↓ ê_a
ê_0ᵀQ_y⁻¹ê_0 ∼ χ²(m − n)  |  ê_aᵀQ_y⁻¹ê_a ∼ χ²(m − n − q)
How is it distributed?
𝐻 0 : 𝑇 ∼ 𝜒 2 (𝑞, 0) and 𝐻 a : 𝑇 ∼ 𝜒 2 (𝑞, 𝜆)
Geometry of H_0 and H_a (sketches (a)-(c)):
ŷ_a = Ax̂_a + C∇̂ = Ax̂_a + P_A C∇̂ + P_A⊥C∇̂, where Ax̂_a + P_A C∇̂ = ŷ_0
⟹ ŷ_a − ŷ_0 = P_A⊥C∇̂
⟹ T = (P_A⊥C∇̂)ᵀ Q_y⁻¹ P_A⊥C∇̂ = ∇̂ᵀCᵀ (P_A⊥)ᵀ Q_y⁻¹ P_A⊥ C∇̂ = ∇̂ᵀCᵀ Q_y⁻¹ Q_ê0 Q_y⁻¹ C∇̂,
using (P_A⊥)ᵀQ_y⁻¹P_A⊥ = Q_y⁻¹P_A⊥ = Q_y⁻¹Q_ê0 Q_y⁻¹.

Second version: T = ê_0ᵀQ_y⁻¹ê_0 − ê_aᵀQ_y⁻¹ê_a.
Third version: T = (ŷ_0 − ŷ_a)ᵀ Q_y⁻¹ (ŷ_0 − ŷ_a) = ∇̂ᵀCᵀQ_y⁻¹Q_ê0 Q_y⁻¹C∇̂.
Normal equations for (ê_a, ŷ_a, ∇̂): the extended model leads to
[AᵀQ_y⁻¹A  AᵀQ_y⁻¹C;  CᵀQ_y⁻¹A  CᵀQ_y⁻¹C] (x̂_a; ∇̂) = (AᵀQ_y⁻¹y; CᵀQ_y⁻¹y)

1st row: AᵀQ_y⁻¹A x̂_a + AᵀQ_y⁻¹C ∇̂ = AᵀQ_y⁻¹A x̂_0
⟹ x̂_a = x̂_0 − (AᵀQ_y⁻¹A)⁻¹AᵀQ_y⁻¹C ∇̂
⟹ A x̂_a = A x̂_0 − P_A C ∇̂
⟹ A x̂_a + C ∇̂ = A x̂_0 + (I − P_A) C ∇̂
⟹ ŷ_a = ŷ_0 + P_A⊥ C ∇̂
2nd row: substitute x̂_a and solve for ∇̂ (a laborious derivation!). Result:
∇̂ = (CᵀQ_y⁻¹Q_ê0 Q_y⁻¹C)⁻¹ CᵀQ_y⁻¹ ê_0
Distribution of T. Transformation of variables:
z = CᵀQ_y⁻¹ê_0  (q×1),  Q_z = CᵀQ_y⁻¹Q_ê0 Q_y⁻¹C
∇̂ = Q_z⁻¹z ⟹ z = Q_z∇̂
T = zᵀQ_z⁻¹z ∼ χ²_q
Under H_0: z ∼ N(0, Q_z) and T ∼ χ²(q, 0).
Under H_a: z ∼ N(Q_z∇, Q_z) and T ∼ χ²(q, λ), with λ = ∇ᵀQ_zQ_z⁻¹Q_z∇ = ∇ᵀCᵀQ_y⁻¹Q_ê0 Q_y⁻¹C∇.
Summary
If the test quantity T = ê_0ᵀQ_y⁻¹ê_0 − ê_aᵀQ_y⁻¹ê_a exceeds the critical value, H_0 has to be rejected in favor of H_a: the model E{y} = Ax is not suitable. In case H_0 is true, T is (central) χ²-distributed with q degrees of freedom, T ∼ χ²_{q,0}; otherwise T ∼ χ²_{q,λ}, with λ = ∇ᵀCᵀQ_y⁻¹Q_ê0 Q_y⁻¹C∇ the non-centrality parameter. Equivalent versions of T include
(3) ∇̂ᵀCᵀQ_y⁻¹Q_ê0 Q_y⁻¹C∇̂
(4) ê_0ᵀQ_y⁻¹C (CᵀQ_y⁻¹Q_ê0 Q_y⁻¹C)⁻¹ CᵀQ_y⁻¹ê_0
Versions (1)-(3) explicitly involve the computation under H_a, while versions (4) and (5) require only ê_0 and some C.
Since z ∼ N(0, Q_z) under H_0 and z ∼ N(Q_z∇, Q_z) under H_a, we have T ∼ χ²(q, 0) under H_0 and T ∼ χ²(q, λ) with λ = ∇ᵀQ_z∇ under H_a. The dimension q is bounded: n + q ≤ m ⟹ 0 < q ≤ m − n.
Overall model test: choose q = m − n, so that
rank(A|C) = n + q = n + (m − n) = m
⟹ (A|C) has size m×(n + q) = m×m, i.e. it is square ("quadratic")
⟹ redundancy = m − n − q = 0
⟹ ê_a = 0
⟹ ŷ_a = y
⟹ T = ê_0ᵀQ_y⁻¹ê_0
H_0: E{y} = Ax versus H_a: E{y} ∈ R^m
T ∼ χ²_{m−n,0} under H_0, and T ∼ χ²_{m−n,λ} with λ = ∇ᵀQ_z∇ under H_a.
Because ê_a = 0, it is obviously not necessary to specify any matrix C. The test can always be carried out, which is why it is called the overall model test or global test.
For q = 1 the test statistic becomes
T = ∇̂²/σ²_∇̂, with ∇̂ = (CᵀQ_y⁻¹ê_0)/(CᵀQ_y⁻¹Q_ê0 Q_y⁻¹C) and σ²_∇̂ = (CᵀQ_y⁻¹Q_ê0 Q_y⁻¹C)⁻¹.
Important application:
Detection of a gross error (outlier, blunder) in the observations, which leads to a wrong model specification.
H_0: E{y} = Ax;  H_a: E{y} = Ax + C∇, with C = (0, 0, …, 1, 0, …, 0)ᵀ (the 1 at position i).
Reject H_0 if T = ∇̂²/σ²_∇̂ > k_α, i.e. if √T = ∇̂/σ_∇̂ > √k_α or √T < −√k_α (∇̂ can be positive or negative)!
Should H_0 be rejected, observation y_i must be checked and corrected, discarded or even remeasured. The test is performed for every i = 1, …, m, if necessary in an iterative manner. The test is called data snooping. For a diagonal matrix Q_y we get
√T = ê_i/σ_êi
H_0: √T ∼ N(0, 1);  H_a: √T ∼ N(∇_√T, 1)
with ∇_√T = √(CᵀQ_y⁻¹Q_êQ_y⁻¹C) ∇.
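A minimal Matlab sketch of data snooping for a diagonal Q_y; A, Q_y, y and the significance level α_1 are assumed given:

```matlab
% Data snooping (sketch): normalized residual per observation
[m, n] = size(A);
xhat = (A'/Qy*A)\(A'/Qy*y);
ehat = y - A*xhat;
Qyh  = A/(A'/Qy*A)*A';              % Q_yhat
Qe   = Qy - Qyh;                    % Q_ehat
wtest = ehat ./ sqrt(diag(Qe));     % sqrt(T) = ehat_i / sigma_ehat_i ~ N(0,1) under H0
k = norminv(1 - alpha1/2);          % two-sided critical value for level alpha1
suspect = find(abs(wtest) > k);     % observations failing the test
```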
7.3 DIA testing principle
Detection: Check the overall validity of 𝐻 0 , perform the overall model test, answer the question
whether or not we have generally to expect any model error, e. g. an outlier in the data, search
for a possible model misspecification.
Identification: Perform data snooping in order to locate a possible gross error. Identify it in the
collection of observations. Screen each individual observation for the presence of a blunder.
Adaptation: React to the outcomes of detection and identification step. Perform a corrective action
in order to get the null hypothesis accepted. Repair, replace or discard the corrupted obser-
vation. Remeasure part of the observations or change the model in order to account for the
identified model errors.
Question: How to ensure consistent testing parameters? How can we avoid the situation of a conflict
between the overall model test in the detection step and individual test of the identification step?
Answer: Consistency is guaranteed if the probability of detecting an outlier under the alternative
hypothesis with 𝑞 = 1 (data snooping) is the same for the global test. Thus, both tests must use the
same 𝛾 = 1 − 𝛽, which is called 𝛾 0 here.
Consistency requires that both tests share the same non-centrality parameter:
λ_0 = λ(α, q = m − n, γ = γ_0) = λ(α_1, q = 1, γ = γ_0)
For q = 1: from α_1 and γ_0 = 1 − β_0 follows λ_0.
For q = m − n: from λ_0 and γ_0 = 1 − β_0 follows α = α_{m−n}.
Which model error C∇ results in the power of test γ_0? Or, the other way around: which model error C∇ can just be detected with probability γ_0? This question is discussed in the framework of internal reliability (innere Zuverlässigkeit).
Analysis of λ
7.4 Internal reliability
Using 𝑸 𝒚 :
=⇒ the more precise the observations are, the smaller the model error 𝐶∇ may be. It will be detected
with probability 𝛾 0 .
Using 𝑨:
• more observations =⇒ larger redundancy
=⇒ for a constant 𝐶∇: 𝜆 increases or the other way around for a constant 𝜆, 𝐶∇ gets smaller
• better network design, better configuration, improved distribution of observations, avoid bad
geometries in resection problems
=⇒ 𝐶∇ can be decreased
𝛿𝑦 describes the internal reliability; it measures the smallest possible error which can be detected
with probability 𝛾.
Question: How can ∇ be determined from 𝜆0 = (𝐶∇) T𝑄 𝑦−1𝑄𝑒ˆ𝑄 𝑦−1𝐶∇?
Case q = 1 (data snooping): ∇ is a scalar, C = c_i, δy_i = c_i∇,
λ_0 = c_iᵀQ_y⁻¹Q_êQ_y⁻¹c_i ∇²
⟹ |∇_i| = √(λ_0 / (c_iᵀQ_y⁻¹Q_êQ_y⁻¹c_i))
Assumption: Q_y is diagonal. Then
c_iᵀQ_y⁻¹ = [0, 0, …, σ_yi⁻², 0, …]  (1×m)
a) If σ_ŷi = σ_yi: |∇_i| = ∞ (the error cannot be detected)
b) If σ_ŷi ≪ σ_yi: |∇_i| = σ_yi √λ_0 is detectable
Local redundancy
r_i = 1 − σ²_ŷi/σ²_yi = local redundancy number,  Σ_{i=1}^m r_i = m − n
r_i = c_iᵀ(I − Q_ŷ Q_y⁻¹)c_i = c_iᵀ(I − P_A)c_i = c_iᵀP_A⊥c_i
⟹ Σ_i r_i = trace P_A⊥ = m − n
NB: E{êᵀQ_y⁻¹ê} = m − n
Redundancy
ê = P_A⊥y = (I − A(AᵀQ_y⁻¹A)⁻¹AᵀQ_y⁻¹) y = R y,  R = redundancy matrix
ê_i = Σ_j R_ij y_j = r_i y_i + …
⟹ δê_i = r_i δy_i
⟹ The local redundancy number describes how the redundancy is distributed among the single observations, i.e. how a model error δy = C∇ is projected onto the residuals.
7.5 External reliability
δy := C∇ → δx̂?
Problems:
• δx̂ is a vector-valued quantity
• δx̂ depends on possibly inhomogeneous quantities with different physical units.
Remedy: normalize δx̂ using Q_x̂⁻¹ ⟹ squared bias-to-noise ratio
λ_x̂ = δx̂ᵀQ_x̂⁻¹δx̂ = δx̂ᵀAᵀQ_y⁻¹Aδx̂ = (P_Aδy)ᵀQ_y⁻¹(P_Aδy) = ‖P_Aδy‖²_{Q_y⁻¹}
⟹ (Pythagoras in the Q_y⁻¹ metric)
‖δy‖²_{Q_y⁻¹} = ‖P_Aδy‖²_{Q_y⁻¹} + ‖P_A⊥δy‖²_{Q_y⁻¹}
or δyᵀQ_y⁻¹δy = δx̂ᵀAᵀQ_y⁻¹Aδx̂ + δyᵀ(P_A⊥)ᵀQ_y⁻¹P_A⊥δy
or λ_y = λ_x̂ + λ_0 ⟹ λ_x̂ = λ_y − λ_0
Answer:
λ_0 = δyᵀQ_y⁻¹Q_êQ_y⁻¹δy = δyᵀQ_y⁻¹P_A⊥δy = δyᵀ(P_A⊥)ᵀQ_y⁻¹P_A⊥δy = (P_A⊥δy)ᵀQ_y⁻¹P_A⊥δy = ‖P_A⊥δy‖²_{Q_y⁻¹}
Special case: q = 1, C = c_i, Q_y diagonal
⟹ λ_x̂ = λ_yi − λ_0
 = (1/r_i) λ_0 − λ_0
 = ((1 − r_i)/r_i) λ_0
 = σ²_ŷi σ²_yi / (σ²_yi (σ²_yi − σ²_ŷi)) · λ_0
 = σ²_ŷi / (σ²_yi − σ²_ŷi) · λ_0
 = λ_0 / (σ²_yi/σ²_ŷi − 1)
7.6 Reliability: a synthesis
8 Recursive estimation
First observation batch y_1:
x̂_(1) = (A_1ᵀQ_1⁻¹A_1)⁻¹ A_1ᵀQ_1⁻¹y_1,  Q_x̂(1) = (A_1ᵀQ_1⁻¹A_1)⁻¹

Adding a second batch y_2:
x̂_(2) = (A_1ᵀQ_1⁻¹A_1 + A_2ᵀQ_2⁻¹A_2)⁻¹ (A_1ᵀQ_1⁻¹y_1 + A_2ᵀQ_2⁻¹y_2)
 = (Q_x̂(1)⁻¹ + A_2ᵀQ_2⁻¹A_2)⁻¹ (Q_x̂(1)⁻¹ x̂_(1) + A_2ᵀQ_2⁻¹y_2)
Q_x̂(2) = (Q_x̂(1)⁻¹ + A_2ᵀQ_2⁻¹A_2)⁻¹

⟹ measurement update and covariance update (Aufdatierungsgleichungen)
8.1 Partitioned model
Solve Q_x̂(2)⁻¹ = Q_x̂(1)⁻¹ + A_2ᵀQ_2⁻¹A_2 for Q_x̂(1)⁻¹:
⟹ Q_x̂(1)⁻¹ = Q_x̂(2)⁻¹ − A_2ᵀQ_2⁻¹A_2
Substituting this into x̂_(2) = (Q_x̂(1)⁻¹ + A_2ᵀQ_2⁻¹A_2)⁻¹(Q_x̂(1)⁻¹x̂_(1) + A_2ᵀQ_2⁻¹y_2) gives
⟹ x̂_(2) = Q_x̂(2) [(Q_x̂(2)⁻¹ − A_2ᵀQ_2⁻¹A_2) x̂_(1) + A_2ᵀQ_2⁻¹y_2]
 = x̂_(1) + Q_x̂(2) A_2ᵀQ_2⁻¹ (y_2 − A_2 x̂_(1))
 = x̂_(1) + K v_2
with v_2 = y_2 − A_2 x̂_(1) and K = Q_x̂(2) A_2ᵀQ_2⁻¹.
The same update follows from the B-model with BᵀA = 0:
[−A_2  I] [I; A_2] = 0,
E{(x̂_(1); y_2)} = [I; A_2] x;  D{(x̂_(1); y_2)} = [Q_x̂(1) 0; 0 Q_2].
With BᵀE{y} = 0 and D{y} = Q_y:
ŷ = (I − Q_y B (BᵀQ_y B)⁻¹ Bᵀ) y
⟹ (x̂_(2); ŷ_2) = ([I 0; 0 I] − [−Q_x̂(1) A_2ᵀ; Q_2] (Q_2 + A_2 Q_x̂(1) A_2ᵀ)⁻¹ [−A_2  I]) (x̂_(1); y_2)
⟹ Measurement update
x̂_(2) = x̂_(1) + Q_x̂(1) A_2ᵀ (Q_2 + A_2 Q_x̂(1) A_2ᵀ)⁻¹ (y_2 − A_2 x̂_(1)) = x̂_(1) + K v_2
K = Q_x̂(1) A_2ᵀ (Q_2 + A_2 Q_x̂(1) A_2ᵀ)⁻¹ = Q_x̂(1) A_2ᵀ Q_v2⁻¹
⟹ Covariance update
Q_x̂(2) = Q_x̂(1) − Q_x̂(1) A_2ᵀ (Q_2 + A_2 Q_x̂(1) A_2ᵀ)⁻¹ A_2 Q_x̂(1)
 = Q_x̂(1) − K A_2 Q_x̂(1)
 = (I − K A_2) Q_x̂(1)
For k batches:
E{(y_1; y_2; …; y_k)} = [A_1; A_2; …; A_k] x;  D{(y_1; y_2; …; y_k)} = diag(Q_1, Q_2, …, Q_k)

Batch:
x̂ = (Σ_{i=1}^k A_iᵀQ_i⁻¹A_i)⁻¹ Σ_{i=1}^k A_iᵀQ_i⁻¹y_i

Recursive:
x̂_(k) = x̂_(k−1) + K_k v_k
v_k = y_k − A_k x̂_(k−1)
K_k = Q_x̂(k−1) A_kᵀ (Q_k + A_k Q_x̂(k−1) A_kᵀ)⁻¹ = Q_x̂(k−1) A_kᵀ Q_vk⁻¹
Q_x̂(k) = (I − K_k A_k) Q_x̂(k−1)
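A minimal Matlab sketch of the recursion; the batches A_k, y_k, Q_k and the initial solution are assumed given in the variables named below:

```matlab
% Recursive least squares (sketch): Ab, yb, Qb are cell arrays of batches,
% x1 and Qx1 come from the first batch (all assumed given).
x = x1;  Qx = Qx1;
for k = 2:numel(yb)
    Ak = Ab{k};  yk = yb{k};  Qk = Qb{k};
    v  = yk - Ak*x;                 % predicted residual v_k
    Qv = Qk + Ak*Qx*Ak';            % its cofactor matrix
    K  = Qx*Ak'/Qv;                 % gain matrix K_k
    x  = x + K*v;                   % measurement update
    Qx = (eye(numel(x)) - K*Ak)*Qx; % covariance update
end
% The final x equals the batch solution over all k batches.
```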
A Partitioning
[W X; Y Z] [A B; C D] = [I_n 0; 0 I_k], with A, B, C, D unknown (sizes n×n, n×k, k×n, k×k):
(1) WA + XC = I_n, rank W = n
(2) WB + XD = 0
(3) YA + ZC = 0
(4) YB + ZD = I_k
(5) W⁻¹ · (1): A + W⁻¹XC = W⁻¹ ⟹ A = W⁻¹ − W⁻¹XC
(6) Insert (5) into (3): YW⁻¹ − YW⁻¹XC + ZC = 0 ⟹ C = −(Z − YW⁻¹X)⁻¹YW⁻¹ (provided G = Z − YW⁻¹X is non-singular)
(7) D = G⁻¹ = (Z − YW⁻¹X)⁻¹
(8) B = −W⁻¹XG⁻¹ = −W⁻¹X(Z − YW⁻¹X)⁻¹
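The formulas (5)-(8) can be checked numerically, for example with random test matrices:

```matlab
% Numerical check of the partitioned inverse (sketch; diagonally dominant blocks
% are chosen so that W and G are invertible)
n = 4;  k = 2;
W = randn(n) + n*eye(n);  X = randn(n, k);
Y = randn(k, n);          Z = randn(k) + k*eye(k);
G = Z - Y/W*X;                         % assumed non-singular
A = inv(W) + (W\X)/G*(Y/W);            % block A per formulas (5)-(6)
B = -(W\X)/G;                          % block B per formula (8)
C = -G\(Y/W);                          % block C per formula (6)
D = inv(G);                            % block D per formula (7)
err = norm([W X; Y Z]*[A B; C D] - eye(n+k))   % should be near machine precision
```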
A.2 Inverse Partitioning Method: special case 1

For the matrix [I −b; −bᵀ 0]:
(1) A − bC = I
(2) B − bD = 0
(3) −bᵀA = 0
(4) −bᵀB = I
(5) −bᵀA − bᵀbC = −bᵀ, with −bᵀA = 0 ⟹ C = −(bᵀb)⁻¹bᵀ

[I −b; −bᵀ 0]⁻¹ = [I − b(bᵀb)⁻¹bᵀ  −b(bᵀb)⁻¹;  −(bᵀb)⁻¹bᵀ  −(bᵀb)⁻¹]

ê = b(bᵀb)⁻¹bᵀy
λ̂ = (bᵀb)⁻¹bᵀy
A.3 Inverse Partitioning Method: special case 2

For the system [AᵀA D; Dᵀ 0] [R Sᵀ; S Q] = I:
(AᵀA)R + DS = I   (A.1)
(AᵀA)Sᵀ + DQ = 0   (A.2)
DᵀR = 0   (A.3)
DᵀSᵀ = I   (A.4)
Multiplying (A.2) by H (with HAᵀ = 0): HDQ = 0 ⟹ Q = 0
⟹ λ̂ = 0
⟹ x̂ = (AᵀA + DDᵀ)⁻¹Aᵀy + H(HᵀD)⁻¹c
ê = y − Ax̂ = (I − A(AᵀA + DDᵀ)⁻¹Aᵀ)y
B Statistical Tables
B.1 Standard Normal Distribution
(Sketches: pdf with one-sided critical value k_α, and with two-sided critical values k_{1−α/2} = −k_{α/2} and k_{α/2}.)
Computation of one-sided level of significance α = 1 − ∫_{−∞}^{k_α} f(x) dx and two-sided level of significance α = 2∫_{−∞}^{k_{1−α/2}} f(x) dx. Rows give k_α to one decimal, columns the second decimal.
𝑘𝛼 0 1 2 3 4 5 6 7 8 9
0.0 0.5000 0.4960 0.4920 0.4880 0.4840 0.4801 0.4761 0.4721 0.4681 0.4641
0.1 0.4602 0.4562 0.4522 0.4483 0.4443 0.4404 0.4364 0.4325 0.4286 0.4247
0.2 0.4207 0.4168 0.4129 0.4090 0.4052 0.4013 0.3974 0.3936 0.3897 0.3859
0.3 0.3821 0.3783 0.3745 0.3707 0.3669 0.3632 0.3594 0.3557 0.3520 0.3483
0.4 0.3446 0.3409 0.3372 0.3336 0.3300 0.3264 0.3228 0.3192 0.3156 0.3121
0.5 0.3085 0.3050 0.3015 0.2981 0.2946 0.2912 0.2877 0.2843 0.2810 0.2776
0.6 0.2743 0.2709 0.2676 0.2643 0.2611 0.2578 0.2546 0.2514 0.2483 0.2451
0.7 0.2420 0.2389 0.2358 0.2327 0.2296 0.2266 0.2236 0.2206 0.2177 0.2148
0.8 0.2119 0.2090 0.2061 0.2033 0.2005 0.1977 0.1949 0.1922 0.1894 0.1867
0.9 0.1841 0.1814 0.1788 0.1762 0.1736 0.1711 0.1685 0.1660 0.1635 0.1611
1.0 0.1587 0.1562 0.1539 0.1515 0.1492 0.1469 0.1446 0.1423 0.1401 0.1379
1.1 0.1357 0.1335 0.1314 0.1292 0.1271 0.1251 0.1230 0.1210 0.1190 0.1170
1.2 0.1151 0.1131 0.1112 0.1093 0.1075 0.1056 0.1038 0.1020 0.1003 0.0985
1.3 0.0968 0.0951 0.0934 0.0918 0.0901 0.0885 0.0869 0.0853 0.0838 0.0823
1.4 0.0808 0.0793 0.0778 0.0764 0.0749 0.0735 0.0721 0.0708 0.0694 0.0681
Computation of one-sided level of significance α = 1 − ∫_{−∞}^{k_α} f(x) dx and two-sided level of significance α = 2∫_{−∞}^{k_{1−α/2}} f(x) dx (continued).
𝑘𝛼 0 1 2 3 4 5 6 7 8 9
1.5 0.0668 0.0655 0.0643 0.0630 0.0618 0.0606 0.0594 0.0582 0.0571 0.0559
1.6 0.0548 0.0537 0.0526 0.0516 0.0505 0.0495 0.0485 0.0475 0.0465 0.0455
1.7 0.0446 0.0436 0.0427 0.0418 0.0409 0.0401 0.0392 0.0384 0.0375 0.0367
1.8 0.0359 0.0351 0.0344 0.0336 0.0329 0.0322 0.0314 0.0307 0.0301 0.0294
1.9 0.0287 0.0281 0.0274 0.0268 0.0262 0.0256 0.0250 0.0244 0.0239 0.0233
2.0 0.0228 0.0222 0.0217 0.0212 0.0207 0.0202 0.0197 0.0192 0.0188 0.0183
2.1 0.0179 0.0174 0.0170 0.0166 0.0162 0.0158 0.0154 0.0150 0.0146 0.0143
2.2 0.0139 0.0136 0.0132 0.0129 0.0125 0.0122 0.0119 0.0116 0.0113 0.0110
2.3 0.0107 0.0104 0.0102 0.0099 0.0096 0.0094 0.0091 0.0089 0.0087 0.0084
2.4 0.0082 0.0080 0.0078 0.0075 0.0073 0.0071 0.0069 0.0068 0.0066 0.0064
2.5 0.0062 0.0060 0.0059 0.0057 0.0055 0.0054 0.0052 0.0051 0.0049 0.0048
2.6 0.0047 0.0045 0.0044 0.0043 0.0041 0.0040 0.0039 0.0038 0.0037 0.0036
2.7 0.0035 0.0034 0.0033 0.0032 0.0031 0.0030 0.0029 0.0028 0.0027 0.0026
2.8 0.0026 0.0025 0.0024 0.0023 0.0023 0.0022 0.0021 0.0021 0.0020 0.0019
2.9 0.0019 0.0018 0.0018 0.0017 0.0016 0.0016 0.0015 0.0015 0.0014 0.0014
3.0 0.0013 0.0013 0.0013 0.0012 0.0012 0.0011 0.0011 0.0011 0.0010 0.0010
3.1 0.0010 0.0009 0.0009 0.0009 0.0008 0.0008 0.0008 0.0008 0.0007 0.0007
3.2 0.0007 0.0007 0.0006 0.0006 0.0006 0.0006 0.0006 0.0005 0.0005 0.0005
3.3 0.0005 0.0005 0.0005 0.0004 0.0004 0.0004 0.0004 0.0004 0.0004 0.0003
3.4 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0002
Calculation in Matlab:
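For example, the table entry for k_α = 1.5 is reproduced by:

```matlab
% One-sided and two-sided levels of significance for k_alpha = 1.5
alpha_one = 1 - normcdf(1.5)    % one-sided: 0.0668 (table row 1.5, column 0)
alpha_two = 2*normcdf(-1.5)     % two-sided: 0.1336
```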
B.2 Central χ²-Distribution
(Sketches: pdf of the central χ²_{r,λ=0} distribution with one-sided critical value k_α and two-sided critical values k_{1−α/2}, k_{α/2}.)
Computation of critical value k_α = χ²_{1−α;r,λ=0}.
Calculation in Matlab: k_{1−α/2} = 41.4 = chi2inv(1 − 0.01/2, 21), k_{α/2} = 8.034 = chi2inv(0.01/2, 21).
B.3 Non-central 𝜒 2 -Distribution
Calculation in Matlab:
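For example (r and λ are illustrative; ncx2cdf and ncx2inv belong to the Statistics Toolbox):

```matlab
% Non-central chi-square distribution: cdf value and quantile
r = 10;  lambda = 5;
P = ncx2cdf(20, r, lambda)      % P(x <= 20) for x ~ chi^2(r, lambda)
k = ncx2inv(0.95, r, lambda)    % 95% quantile
```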
B.4 Central t-Distribution
(Sketches: pdf with one-sided critical value k_α and two-sided critical values k_{1−α/2} = −k_{α/2}, k_{α/2}.)
Calculation in Matlab:
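For example, critical values for r = 10 degrees of freedom:

```matlab
% Central t-distribution: one-sided and two-sided critical values
r = 10;  alpha = 0.05;
k_one = tinv(1 - alpha, r)      % one-sided critical value, 1.8125
k_two = tinv(1 - alpha/2, r)    % two-sided critical value, 2.2281
```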
B.5 Central F-Distribution
(Sketches: pdf of F_{r1,r2} with one-sided critical value k_α and two-sided critical values; k_{1−α/2} = F_{α/2;r2,r1} = 1/F_{1−α/2;r1,r2}.)
Computation of critical value k_α = F_{1−α;r1,r2}:
𝛼 = 0.10, 1 − 𝛼 = 0.90
𝑟 2 \𝑟 1 1 2 3 4 5 6 7 8 9 10 12
1 39.86 49.50 53.59 55.83 57.24 58.20 58.91 59.44 59.86 60.19 60.71
2 8.526 9.000 9.162 9.243 9.293 9.326 9.349 9.367 9.381 9.392 9.408
3 5.538 5.462 5.391 5.343 5.309 5.285 5.266 5.252 5.240 5.230 5.216
4 4.545 4.325 4.191 4.107 4.051 4.010 3.979 3.955 3.936 3.920 3.896
5 4.060 3.780 3.619 3.520 3.453 3.405 3.368 3.339 3.316 3.297 3.268
6 3.776 3.463 3.289 3.181 3.108 3.055 3.014 2.983 2.958 2.937 2.905
7 3.589 3.257 3.074 2.961 2.883 2.827 2.785 2.752 2.725 2.703 2.668
8 3.458 3.113 2.924 2.806 2.726 2.668 2.624 2.589 2.561 2.538 2.502
9 3.360 3.006 2.813 2.693 2.611 2.551 2.505 2.469 2.440 2.416 2.379
10 3.285 2.924 2.728 2.605 2.522 2.461 2.414 2.377 2.347 2.323 2.284
11 3.225 2.860 2.660 2.536 2.451 2.389 2.342 2.304 2.274 2.248 2.209
12 3.177 2.807 2.606 2.480 2.394 2.331 2.283 2.245 2.214 2.188 2.147
13 3.136 2.763 2.560 2.434 2.347 2.283 2.234 2.195 2.164 2.138 2.097
14 3.102 2.726 2.522 2.395 2.307 2.243 2.193 2.154 2.122 2.095 2.054
15 3.073 2.695 2.490 2.361 2.273 2.208 2.158 2.119 2.086 2.059 2.017
16 3.048 2.668 2.462 2.333 2.244 2.178 2.128 2.088 2.055 2.028 1.985
17 3.026 2.645 2.437 2.308 2.218 2.152 2.102 2.061 2.028 2.001 1.958
18 3.007 2.624 2.416 2.286 2.196 2.130 2.079 2.038 2.005 1.977 1.933
19 2.990 2.606 2.397 2.266 2.176 2.109 2.058 2.017 1.984 1.956 1.912
20 2.975 2.589 2.380 2.249 2.158 2.091 2.040 1.999 1.965 1.937 1.892
22 2.949 2.561 2.351 2.219 2.128 2.060 2.008 1.967 1.933 1.904 1.859
24 2.927 2.538 2.327 2.195 2.103 2.035 1.983 1.941 1.906 1.877 1.832
26 2.909 2.519 2.307 2.174 2.082 2.014 1.961 1.919 1.884 1.855 1.809
28 2.894 2.503 2.291 2.157 2.064 1.996 1.943 1.900 1.865 1.836 1.790
30 2.881 2.489 2.276 2.142 2.049 1.980 1.927 1.884 1.849 1.819 1.773
40 2.835 2.440 2.226 2.091 1.997 1.927 1.873 1.829 1.793 1.763 1.715
50 2.809 2.412 2.197 2.061 1.966 1.895 1.840 1.796 1.760 1.729 1.680
60 2.791 2.393 2.177 2.041 1.946 1.875 1.819 1.775 1.738 1.707 1.657
80 2.769 2.370 2.154 2.016 1.921 1.849 1.793 1.748 1.711 1.680 1.629
100 2.756 2.356 2.139 2.002 1.906 1.834 1.778 1.732 1.695 1.663 1.612
200 2.731 2.329 2.111 1.973 1.876 1.804 1.747 1.701 1.663 1.631 1.579
500 2.716 2.313 2.095 1.956 1.859 1.786 1.729 1.683 1.644 1.612 1.559
∞ 2.706 2.303 2.084 1.945 1.847 1.774 1.717 1.670 1.632 1.599 1.546
𝛼 = 0.10, 1 − 𝛼 = 0.90
𝑟 2 \𝑟 1 14 16 18 20 30 40 50 100 200 500 ∞
1 61.07 61.35 61.57 61.74 62.26 62.53 62.69 63.01 63.17 63.26 63.33
2 9.420 9.429 9.436 9.441 9.458 9.466 9.471 9.481 9.486 9.489 9.491
3 5.205 5.196 5.190 5.184 5.168 5.160 5.155 5.144 5.139 5.136 5.134
4 3.878 3.864 3.853 3.844 3.817 3.804 3.795 3.778 3.769 3.764 3.761
5 3.247 3.230 3.217 3.207 3.174 3.157 3.147 3.126 3.116 3.109 3.105
6 2.881 2.863 2.848 2.836 2.800 2.781 2.770 2.746 2.734 2.727 2.722
7 2.643 2.623 2.607 2.595 2.555 2.535 2.523 2.497 2.484 2.476 2.471
8 2.475 2.455 2.438 2.425 2.383 2.361 2.348 2.321 2.307 2.298 2.293
9 2.351 2.329 2.312 2.298 2.255 2.232 2.218 2.189 2.174 2.165 2.159
10 2.255 2.233 2.215 2.201 2.155 2.132 2.117 2.087 2.071 2.062 2.055
11 2.179 2.156 2.138 2.123 2.076 2.052 2.036 2.005 1.989 1.979 1.972
12 2.117 2.094 2.075 2.060 2.011 1.986 1.970 1.938 1.921 1.911 1.904
13 2.066 2.042 2.023 2.007 1.958 1.931 1.915 1.882 1.864 1.853 1.846
14 2.022 1.998 1.978 1.962 1.912 1.885 1.869 1.834 1.816 1.805 1.797
15 1.985 1.961 1.941 1.924 1.873 1.845 1.828 1.793 1.774 1.763 1.755
16 1.953 1.928 1.908 1.891 1.839 1.811 1.793 1.757 1.738 1.726 1.718
17 1.925 1.900 1.879 1.862 1.809 1.781 1.763 1.726 1.706 1.694 1.686
18 1.900 1.875 1.854 1.837 1.783 1.754 1.736 1.698 1.678 1.665 1.657
19 1.878 1.852 1.831 1.814 1.759 1.730 1.711 1.673 1.652 1.639 1.631
20 1.859 1.833 1.811 1.794 1.738 1.708 1.690 1.650 1.629 1.616 1.607
22 1.825 1.798 1.777 1.759 1.702 1.671 1.652 1.611 1.590 1.576 1.567
24 1.797 1.770 1.748 1.730 1.672 1.641 1.621 1.579 1.556 1.542 1.533
26 1.774 1.747 1.724 1.706 1.647 1.615 1.594 1.551 1.528 1.514 1.504
28 1.754 1.726 1.704 1.685 1.625 1.592 1.572 1.528 1.504 1.489 1.478
30 1.737 1.709 1.686 1.667 1.606 1.573 1.552 1.507 1.482 1.467 1.456
40 1.678 1.649 1.625 1.605 1.541 1.506 1.483 1.434 1.406 1.389 1.377
50 1.643 1.613 1.588 1.568 1.502 1.465 1.441 1.388 1.359 1.340 1.327
60 1.619 1.589 1.564 1.543 1.476 1.437 1.413 1.358 1.326 1.306 1.292
80 1.590 1.559 1.534 1.513 1.443 1.403 1.377 1.318 1.284 1.261 1.245
100 1.573 1.542 1.516 1.494 1.423 1.382 1.355 1.293 1.257 1.232 1.214
200 1.539 1.507 1.480 1.458 1.383 1.339 1.310 1.242 1.199 1.168 1.144
500 1.518 1.485 1.458 1.435 1.358 1.313 1.282 1.209 1.160 1.122 1.087
∞ 1.505 1.471 1.444 1.421 1.342 1.295 1.263 1.185 1.130 1.082 1.008
𝛼 = 0.05, 1 − 𝛼 = 0.95
𝑟 2 \𝑟 1 1 2 3 4 5 6 7 8 9 10 12
1 161.4 199.5 215.7 224.6 230.2 234.0 236.8 238.9 240.5 241.9 243.9
2 18.51 19.00 19.16 19.25 19.30 19.33 19.35 19.37 19.38 19.40 19.41
3 10.13 9.552 9.277 9.117 9.013 8.941 8.887 8.845 8.812 8.786 8.745
4 7.709 6.944 6.591 6.388 6.256 6.163 6.094 6.041 5.999 5.964 5.912
5 6.608 5.786 5.409 5.192 5.050 4.950 4.876 4.818 4.772 4.735 4.678
6 5.987 5.143 4.757 4.534 4.387 4.284 4.207 4.147 4.099 4.060 4.000
7 5.591 4.737 4.347 4.120 3.972 3.866 3.787 3.726 3.677 3.637 3.575
8 5.318 4.459 4.066 3.838 3.687 3.581 3.500 3.438 3.388 3.347 3.284
9 5.117 4.256 3.863 3.633 3.482 3.374 3.293 3.230 3.179 3.137 3.073
10 4.965 4.103 3.708 3.478 3.326 3.217 3.135 3.072 3.020 2.978 2.913
11 4.844 3.982 3.587 3.357 3.204 3.095 3.012 2.948 2.896 2.854 2.788
12 4.747 3.885 3.490 3.259 3.106 2.996 2.913 2.849 2.796 2.753 2.687
13 4.667 3.806 3.411 3.179 3.025 2.915 2.832 2.767 2.714 2.671 2.604
14 4.600 3.739 3.344 3.112 2.958 2.848 2.764 2.699 2.646 2.602 2.534
15 4.543 3.682 3.287 3.056 2.901 2.790 2.707 2.641 2.588 2.544 2.475
16 4.494 3.634 3.239 3.007 2.852 2.741 2.657 2.591 2.538 2.494 2.425
17 4.451 3.592 3.197 2.965 2.810 2.699 2.614 2.548 2.494 2.450 2.381
18 4.414 3.555 3.160 2.928 2.773 2.661 2.577 2.510 2.456 2.412 2.342
19 4.381 3.522 3.127 2.895 2.740 2.628 2.544 2.477 2.423 2.378 2.308
20 4.351 3.493 3.098 2.866 2.711 2.599 2.514 2.447 2.393 2.348 2.278
22 4.301 3.443 3.049 2.817 2.661 2.549 2.464 2.397 2.342 2.297 2.226
24 4.260 3.403 3.009 2.776 2.621 2.508 2.423 2.355 2.300 2.255 2.183
26 4.225 3.369 2.975 2.743 2.587 2.474 2.388 2.321 2.265 2.220 2.148
28 4.196 3.340 2.947 2.714 2.558 2.445 2.359 2.291 2.236 2.190 2.118
30 4.171 3.316 2.922 2.690 2.534 2.421 2.334 2.266 2.211 2.165 2.092
40 4.085 3.232 2.839 2.606 2.449 2.336 2.249 2.180 2.124 2.077 2.003
50 4.034 3.183 2.790 2.557 2.400 2.286 2.199 2.130 2.073 2.026 1.952
60 4.001 3.150 2.758 2.525 2.368 2.254 2.167 2.097 2.040 1.993 1.917
80 3.960 3.111 2.719 2.486 2.329 2.214 2.126 2.056 1.999 1.951 1.875
100 3.936 3.087 2.696 2.463 2.305 2.191 2.103 2.032 1.975 1.927 1.850
200 3.888 3.041 2.650 2.417 2.259 2.144 2.056 1.985 1.927 1.878 1.801
500 3.860 3.014 2.623 2.390 2.232 2.117 2.028 1.957 1.899 1.850 1.772
∞ 3.842 2.996 2.605 2.372 2.214 2.099 2.010 1.939 1.880 1.831 1.752
𝛼 = 0.05, 1 − 𝛼 = 0.95
𝑟 2 \𝑟 1 14 16 18 20 30 40 50 100 200 500 ∞
1 245.4 246.5 247.3 248.0 250.1 251.1 251.8 253.0 253.7 254.1 254.3
2 19.42 19.43 19.44 19.45 19.46 19.47 19.48 19.49 19.49 19.49 19.50
3 8.715 8.692 8.675 8.660 8.617 8.594 8.581 8.554 8.540 8.532 8.526
4 5.873 5.844 5.821 5.803 5.746 5.717 5.699 5.664 5.646 5.635 5.628
5 4.636 4.604 4.579 4.558 4.496 4.464 4.444 4.405 4.385 4.373 4.365
6 3.956 3.922 3.896 3.874 3.808 3.774 3.754 3.712 3.690 3.678 3.669
7 3.529 3.494 3.467 3.445 3.376 3.340 3.319 3.275 3.252 3.239 3.230
8 3.237 3.202 3.173 3.150 3.079 3.043 3.020 2.975 2.951 2.937 2.928
9 3.025 2.989 2.960 2.936 2.864 2.826 2.803 2.756 2.731 2.717 2.707
10 2.865 2.828 2.798 2.774 2.700 2.661 2.637 2.588 2.563 2.548 2.538
11 2.739 2.701 2.671 2.646 2.570 2.531 2.507 2.457 2.431 2.415 2.405
12 2.637 2.599 2.568 2.544 2.466 2.426 2.401 2.350 2.323 2.307 2.296
13 2.554 2.515 2.484 2.459 2.380 2.339 2.314 2.261 2.234 2.218 2.206
14 2.484 2.445 2.413 2.388 2.308 2.266 2.241 2.187 2.159 2.142 2.131
15 2.424 2.385 2.353 2.328 2.247 2.204 2.178 2.123 2.095 2.078 2.066
16 2.373 2.333 2.302 2.276 2.194 2.151 2.124 2.068 2.039 2.022 2.010
17 2.329 2.289 2.257 2.230 2.148 2.104 2.077 2.020 1.991 1.973 1.960
18 2.290 2.250 2.217 2.191 2.107 2.063 2.035 1.978 1.948 1.929 1.917
19 2.256 2.215 2.182 2.155 2.071 2.026 1.999 1.940 1.910 1.891 1.878
20 2.225 2.184 2.151 2.124 2.039 1.994 1.966 1.907 1.875 1.856 1.843
22 2.173 2.131 2.098 2.071 1.984 1.938 1.909 1.849 1.817 1.797 1.783
24 2.130 2.088 2.054 2.027 1.939 1.892 1.863 1.800 1.768 1.747 1.733
26 2.094 2.052 2.018 1.990 1.901 1.853 1.823 1.760 1.726 1.705 1.691
28 2.064 2.021 1.987 1.959 1.869 1.820 1.790 1.725 1.691 1.669 1.654
30 2.037 1.995 1.960 1.932 1.841 1.792 1.761 1.695 1.660 1.637 1.622
40 1.948 1.904 1.868 1.839 1.744 1.693 1.660 1.589 1.551 1.526 1.509
50 1.895 1.850 1.814 1.784 1.687 1.634 1.599 1.525 1.484 1.457 1.438
60 1.860 1.815 1.778 1.748 1.649 1.594 1.559 1.481 1.438 1.409 1.389
80 1.817 1.772 1.734 1.703 1.602 1.545 1.508 1.426 1.379 1.347 1.325
100 1.792 1.746 1.708 1.676 1.573 1.515 1.477 1.392 1.342 1.308 1.283
200 1.742 1.694 1.656 1.623 1.516 1.455 1.415 1.321 1.263 1.221 1.189
500 1.712 1.664 1.625 1.592 1.482 1.419 1.376 1.275 1.210 1.159 1.113
∞ 1.692 1.644 1.604 1.571 1.459 1.394 1.350 1.244 1.170 1.107 1.010
𝛼 = 0.025, 1 − 𝛼 = 0.975
𝑟₂\𝑟₁ 1 2 3 4 5 6 7 8 9 10 12
1 647.8 799.5 864.2 899.6 921.8 937.1 948.2 956.7 963.3 968.6 976.7
2 38.51 39.00 39.17 39.25 39.30 39.33 39.36 39.37 39.39 39.40 39.41
3 17.44 16.04 15.44 15.10 14.88 14.73 14.62 14.54 14.47 14.42 14.34
4 12.22 10.65 9.979 9.605 9.364 9.197 9.074 8.980 8.905 8.844 8.751
5 10.01 8.434 7.764 7.388 7.146 6.978 6.853 6.757 6.681 6.619 6.525
6 8.813 7.260 6.599 6.227 5.988 5.820 5.695 5.600 5.523 5.461 5.366
7 8.073 6.542 5.890 5.523 5.285 5.119 4.995 4.899 4.823 4.761 4.666
8 7.571 6.059 5.416 5.053 4.817 4.652 4.529 4.433 4.357 4.295 4.200
9 7.209 5.715 5.078 4.718 4.484 4.320 4.197 4.102 4.026 3.964 3.868
10 6.937 5.456 4.826 4.468 4.236 4.072 3.950 3.855 3.779 3.717 3.621
11 6.724 5.256 4.630 4.275 4.044 3.881 3.759 3.664 3.588 3.526 3.430
12 6.554 5.096 4.474 4.121 3.891 3.728 3.607 3.512 3.436 3.374 3.277
13 6.414 4.965 4.347 3.996 3.767 3.604 3.483 3.388 3.312 3.250 3.153
14 6.298 4.857 4.242 3.892 3.663 3.501 3.380 3.285 3.209 3.147 3.050
15 6.200 4.765 4.153 3.804 3.576 3.415 3.293 3.199 3.123 3.060 2.963
16 6.115 4.687 4.077 3.729 3.502 3.341 3.219 3.125 3.049 2.986 2.889
17 6.042 4.619 4.011 3.665 3.438 3.277 3.156 3.061 2.985 2.922 2.825
18 5.978 4.560 3.954 3.608 3.382 3.221 3.100 3.005 2.929 2.866 2.769
19 5.922 4.508 3.903 3.559 3.333 3.172 3.051 2.956 2.880 2.817 2.720
20 5.871 4.461 3.859 3.515 3.289 3.128 3.007 2.913 2.837 2.774 2.676
22 5.786 4.383 3.783 3.440 3.215 3.055 2.934 2.839 2.763 2.700 2.602
24 5.717 4.319 3.721 3.379 3.155 2.995 2.874 2.779 2.703 2.640 2.541
26 5.659 4.265 3.670 3.329 3.105 2.945 2.824 2.729 2.653 2.590 2.491
28 5.610 4.221 3.626 3.286 3.063 2.903 2.782 2.687 2.611 2.547 2.448
30 5.568 4.182 3.589 3.250 3.026 2.867 2.746 2.651 2.575 2.511 2.412
40 5.424 4.051 3.463 3.126 2.904 2.744 2.624 2.529 2.452 2.388 2.288
50 5.340 3.975 3.390 3.054 2.833 2.674 2.553 2.458 2.381 2.317 2.216
60 5.286 3.925 3.343 3.008 2.786 2.627 2.507 2.412 2.334 2.270 2.169
80 5.218 3.864 3.284 2.950 2.730 2.571 2.450 2.355 2.277 2.213 2.111
100 5.179 3.828 3.250 2.917 2.696 2.537 2.417 2.321 2.244 2.179 2.077
200 5.100 3.758 3.182 2.850 2.630 2.472 2.351 2.256 2.178 2.113 2.010
500 5.054 3.716 3.142 2.811 2.592 2.434 2.313 2.217 2.139 2.074 1.971
∞ 5.024 3.689 3.116 2.786 2.567 2.408 2.288 2.192 2.114 2.048 1.945
𝛼 = 0.025, 1 − 𝛼 = 0.975
𝑟₂\𝑟₁ 14 16 18 20 30 40 50 100 200 500 ∞
1 982.5 986.9 990.3 993.1 1001. 1006. 1008. 1013. 1016. 1017. 1018.
2 39.43 39.44 39.44 39.45 39.46 39.47 39.48 39.49 39.49 39.50 39.50
3 14.28 14.23 14.20 14.17 14.08 14.04 14.01 13.96 13.93 13.91 13.90
4 8.684 8.633 8.592 8.560 8.461 8.411 8.381 8.319 8.289 8.270 8.257
5 6.456 6.403 6.362 6.329 6.227 6.175 6.144 6.080 6.048 6.028 6.015
6 5.297 5.244 5.202 5.168 5.065 5.012 4.980 4.915 4.882 4.862 4.849
7 4.596 4.543 4.501 4.467 4.362 4.309 4.276 4.210 4.176 4.156 4.142
8 4.130 4.076 4.034 3.999 3.894 3.840 3.807 3.739 3.705 3.684 3.670
9 3.798 3.744 3.701 3.667 3.560 3.505 3.472 3.403 3.368 3.347 3.333
10 3.550 3.496 3.453 3.419 3.311 3.255 3.221 3.152 3.116 3.094 3.080
11 3.359 3.304 3.261 3.226 3.118 3.061 3.027 2.956 2.920 2.898 2.883
12 3.206 3.152 3.108 3.073 2.963 2.906 2.871 2.800 2.763 2.740 2.725
13 3.082 3.027 2.983 2.948 2.837 2.780 2.744 2.671 2.634 2.611 2.596
14 2.979 2.923 2.879 2.844 2.732 2.674 2.638 2.565 2.526 2.503 2.487
15 2.891 2.836 2.792 2.756 2.644 2.585 2.549 2.474 2.435 2.411 2.395
16 2.817 2.761 2.717 2.681 2.568 2.509 2.472 2.396 2.357 2.333 2.316
17 2.753 2.697 2.652 2.616 2.502 2.442 2.405 2.329 2.289 2.264 2.248
18 2.696 2.640 2.596 2.559 2.445 2.384 2.347 2.269 2.229 2.204 2.187
19 2.647 2.591 2.546 2.509 2.394 2.333 2.295 2.217 2.176 2.150 2.133
20 2.603 2.547 2.501 2.464 2.349 2.287 2.249 2.170 2.128 2.103 2.085
22 2.528 2.472 2.426 2.389 2.272 2.210 2.171 2.090 2.047 2.021 2.003
24 2.468 2.411 2.365 2.327 2.209 2.146 2.107 2.024 1.981 1.954 1.935
26 2.417 2.360 2.314 2.276 2.157 2.093 2.053 1.969 1.925 1.897 1.878
28 2.374 2.317 2.270 2.232 2.112 2.048 2.007 1.922 1.877 1.848 1.829
30 2.338 2.280 2.233 2.195 2.074 2.009 1.968 1.882 1.835 1.806 1.787
40 2.213 2.154 2.107 2.068 1.943 1.875 1.832 1.741 1.691 1.659 1.637
50 2.140 2.081 2.033 1.993 1.866 1.796 1.752 1.656 1.603 1.569 1.545
60 2.093 2.033 1.985 1.944 1.815 1.744 1.699 1.599 1.543 1.507 1.482
80 2.035 1.974 1.925 1.884 1.752 1.679 1.632 1.527 1.467 1.428 1.400
100 2.000 1.939 1.890 1.849 1.715 1.640 1.592 1.483 1.420 1.378 1.347
200 1.932 1.870 1.820 1.778 1.640 1.562 1.511 1.393 1.320 1.269 1.229
500 1.892 1.830 1.779 1.736 1.596 1.515 1.462 1.336 1.254 1.192 1.137
∞ 1.866 1.803 1.752 1.709 1.566 1.484 1.429 1.296 1.206 1.128 1.012
𝛼 = 0.01, 1 − 𝛼 = 0.99
𝑟₂\𝑟₁ 1 2 3 4 5 6 7 8 9 10 12
1 4052. 4999. 5403. 5625. 5764. 5859. 5928. 5981. 6022. 6056. 6106.
2 98.50 99.00 99.17 99.25 99.30 99.33 99.36 99.37 99.39 99.40 99.42
3 34.12 30.82 29.46 28.71 28.24 27.91 27.67 27.49 27.35 27.23 27.05
4 21.20 18.00 16.69 15.98 15.52 15.21 14.98 14.80 14.66 14.55 14.37
5 16.26 13.27 12.06 11.39 10.97 10.67 10.46 10.29 10.16 10.05 9.888
6 13.75 10.92 9.780 9.148 8.746 8.466 8.260 8.102 7.976 7.874 7.718
7 12.25 9.547 8.451 7.847 7.460 7.191 6.993 6.840 6.719 6.620 6.469
8 11.26 8.649 7.591 7.006 6.632 6.371 6.178 6.029 5.911 5.814 5.667
9 10.56 8.022 6.992 6.422 6.057 5.802 5.613 5.467 5.351 5.257 5.111
10 10.04 7.559 6.552 5.994 5.636 5.386 5.200 5.057 4.942 4.849 4.706
11 9.646 7.206 6.217 5.668 5.316 5.069 4.886 4.744 4.632 4.539 4.397
12 9.330 6.927 5.953 5.412 5.064 4.821 4.640 4.499 4.388 4.296 4.155
13 9.074 6.701 5.739 5.205 4.862 4.620 4.441 4.302 4.191 4.100 3.960
14 8.862 6.515 5.564 5.035 4.695 4.456 4.278 4.140 4.030 3.939 3.800
15 8.683 6.359 5.417 4.893 4.556 4.318 4.142 4.004 3.895 3.805 3.666
16 8.531 6.226 5.292 4.773 4.437 4.202 4.026 3.890 3.780 3.691 3.553
17 8.400 6.112 5.185 4.669 4.336 4.102 3.927 3.791 3.682 3.593 3.455
18 8.285 6.013 5.092 4.579 4.248 4.015 3.841 3.705 3.597 3.508 3.371
19 8.185 5.926 5.010 4.500 4.171 3.939 3.765 3.631 3.523 3.434 3.297
20 8.096 5.849 4.938 4.431 4.103 3.871 3.699 3.564 3.457 3.368 3.231
22 7.945 5.719 4.817 4.313 3.988 3.758 3.587 3.453 3.346 3.258 3.121
24 7.823 5.614 4.718 4.218 3.895 3.667 3.496 3.363 3.256 3.168 3.032
26 7.721 5.526 4.637 4.140 3.818 3.591 3.421 3.288 3.182 3.094 2.958
28 7.636 5.453 4.568 4.074 3.754 3.528 3.358 3.226 3.120 3.032 2.896
30 7.562 5.390 4.510 4.018 3.699 3.473 3.304 3.173 3.067 2.979 2.843
40 7.314 5.179 4.313 3.828 3.514 3.291 3.124 2.993 2.888 2.801 2.665
50 7.171 5.057 4.199 3.720 3.408 3.186 3.020 2.890 2.785 2.698 2.562
60 7.077 4.977 4.126 3.649 3.339 3.119 2.953 2.823 2.718 2.632 2.496
80 6.963 4.881 4.036 3.563 3.255 3.036 2.871 2.742 2.637 2.551 2.415
100 6.895 4.824 3.984 3.513 3.206 2.988 2.823 2.694 2.590 2.503 2.368
200 6.763 4.713 3.881 3.414 3.110 2.893 2.730 2.601 2.497 2.411 2.275
500 6.686 4.648 3.821 3.357 3.054 2.838 2.675 2.547 2.443 2.356 2.220
∞ 6.635 4.605 3.782 3.319 3.017 2.802 2.640 2.511 2.408 2.321 2.185
𝛼 = 0.01, 1 − 𝛼 = 0.99
𝑟₂\𝑟₁ 14 16 18 20 30 40 50 100 200 500 ∞
1 6143. 6170. 6192. 6209. 6261. 6287. 6303. 6334. 6350. 6360. 6366.
2 99.43 99.44 99.44 99.45 99.47 99.47 99.48 99.49 99.49 99.50 99.50
3 26.92 26.83 26.75 26.69 26.50 26.41 26.35 26.24 26.18 26.15 26.13
4 14.25 14.15 14.08 14.02 13.84 13.75 13.69 13.58 13.52 13.49 13.46
5 9.770 9.680 9.610 9.553 9.379 9.291 9.238 9.130 9.075 9.042 9.021
6 7.605 7.519 7.451 7.396 7.229 7.143 7.091 6.987 6.934 6.902 6.880
7 6.359 6.275 6.209 6.155 5.992 5.908 5.858 5.755 5.702 5.671 5.650
8 5.559 5.477 5.412 5.359 5.198 5.116 5.065 4.963 4.911 4.880 4.859
9 5.005 4.924 4.860 4.808 4.649 4.567 4.517 4.415 4.363 4.332 4.311
10 4.601 4.520 4.457 4.405 4.247 4.165 4.115 4.014 3.962 3.930 3.909
11 4.293 4.213 4.150 4.099 3.941 3.860 3.810 3.708 3.656 3.624 3.603
12 4.052 3.972 3.909 3.858 3.701 3.619 3.569 3.467 3.414 3.382 3.361
13 3.857 3.778 3.716 3.665 3.507 3.425 3.375 3.272 3.219 3.187 3.166
14 3.698 3.619 3.556 3.505 3.348 3.266 3.215 3.112 3.059 3.026 3.004
15 3.564 3.485 3.423 3.372 3.214 3.132 3.081 2.977 2.923 2.891 2.869
16 3.451 3.372 3.310 3.259 3.101 3.018 2.967 2.863 2.808 2.775 2.753
17 3.353 3.275 3.212 3.162 3.003 2.920 2.869 2.764 2.709 2.676 2.653
18 3.269 3.190 3.128 3.077 2.919 2.835 2.784 2.678 2.623 2.589 2.566
19 3.195 3.116 3.054 3.003 2.844 2.761 2.709 2.602 2.547 2.512 2.489
20 3.130 3.051 2.989 2.938 2.778 2.695 2.643 2.535 2.479 2.445 2.421
22 3.019 2.941 2.879 2.827 2.667 2.583 2.531 2.422 2.365 2.329 2.306
24 2.930 2.852 2.789 2.738 2.577 2.492 2.440 2.329 2.271 2.235 2.211
26 2.857 2.778 2.715 2.664 2.503 2.417 2.364 2.252 2.193 2.156 2.132
28 2.795 2.716 2.653 2.602 2.440 2.354 2.300 2.187 2.127 2.090 2.064
30 2.742 2.663 2.600 2.549 2.386 2.299 2.245 2.131 2.070 2.032 2.006
40 2.563 2.484 2.421 2.369 2.203 2.114 2.058 1.938 1.874 1.833 1.805
50 2.461 2.382 2.318 2.265 2.098 2.007 1.949 1.825 1.757 1.713 1.683
60 2.394 2.315 2.251 2.198 2.028 1.936 1.877 1.749 1.678 1.633 1.601
80 2.313 2.233 2.169 2.115 1.944 1.849 1.788 1.655 1.579 1.530 1.494
100 2.265 2.185 2.120 2.067 1.893 1.797 1.735 1.598 1.518 1.466 1.427
200 2.172 2.091 2.026 1.971 1.794 1.694 1.629 1.481 1.391 1.328 1.279
500 2.117 2.036 1.970 1.915 1.735 1.633 1.566 1.408 1.308 1.232 1.165
∞ 2.082 2.000 1.934 1.878 1.697 1.592 1.523 1.358 1.248 1.153 1.015
Calculation in Matlab:
𝑘_𝛼 = finv(1 − 𝛼, 𝑟₁, 𝑟₂)
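For example, the tabulated critical values can be reproduced directly from this call. A minimal sketch, assuming Matlab with the Statistics Toolbox function finv:

% Upper critical values k_alpha = F_(1-alpha; r1, r2) of the central F-distribution
alpha = 0.05;
finv(1 - alpha, 5, 10)    % -> 3.326, cf. table alpha = 0.05, column r1 = 5, row r2 = 10
finv(1 - alpha, 1, 10)    % -> 4.965, cf. column r1 = 1, row r2 = 10
finv(1 - alpha, 5, 1e7)   % -> 2.214, the row r2 = infinity, approximated by a large r2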
Quantiles of related distributions follow from the F-quantiles:

𝜒²-Distribution: 𝜒²_{1−𝛼;𝑟} = 𝑟 𝐹_{1−𝛼;𝑟,∞}

Standard Normal Distribution: 𝑧_{1−𝛼/2} = √(𝐹_{1−𝛼;1,∞})

t-Distribution: 𝑡_{1−𝛼/2;𝑟} = √(𝐹_{1−𝛼;1,𝑟})

𝜏-Distribution: 𝜏_{1−𝛼;𝑞,𝑟−𝑞,𝜆} = √( 𝑟 𝐹_{1−𝛼;𝑞,𝑟−𝑞,𝜆} / (𝑟 − 𝑞 + 𝑞 𝐹_{1−𝛼;𝑞,𝑟−𝑞,𝜆}) )
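These identities are easy to verify numerically. A minimal Matlab sketch for the central case (𝜆 = 0), again assuming the Statistics Toolbox (chi2inv, norminv, tinv, finv); noncentral quantiles would need ncfinv instead:

alpha = 0.05; r = 10; q = 3;
[chi2inv(1-alpha, r),  r * finv(1-alpha, r, 1e7)]    % chi-square: both ~ 18.307
[norminv(1-alpha/2),   sqrt(finv(1-alpha, 1, 1e7))]  % std. normal: both ~ 1.960
[tinv(1-alpha/2, r),   sqrt(finv(1-alpha, 1, r))]    % t: both ~ 2.228
Fq  = finv(1-alpha, q, r-q);                         % F_(0.95; 3, 7) = 4.347
tau = sqrt(r*Fq / (r - q + q*Fq))                    % tau_(0.95; 3, 7) ~ 1.473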
C Book recommendations and other material
• Grafarend, Erik W.
Linear and Nonlinear Models – Fixed Effects, Random Effects, and Mixed Models
de Gruyter, 2006
ISBN 978-3-11-016216-5
• Jäger, Reiner et al.
Klassische und robuste Ausgleichungsverfahren. Ein Leitfaden für Ausbildung und
Praxis von Geodäten und Geoinformatikern
Wichmann, 2005
ISBN 3-87907-370-8
• Koch, Karl-Rudolf
Parameter Estimation and Hypothesis Testing in Linear Models
2nd updated and enlarged edition
Springer, 1999
ISBN 978-3-540-65257-1
• Koch, Karl-Rudolf
Parameterschätzung und Hypothesentests in linearen Modellen
Third, revised edition
Dümmlers, 1997
ISBN 3-427-78923-3
• Lay, David C.
Linear Algebra and its Applications
3rd edition
Addison-Wesley Publishing Company, 2003
ISBN 0-201-70970-8
• Magnus, Jan R. and Heinz Neudecker
Matrix Differential Calculus with Applications in Statistics and Econometrics
John Wiley & Sons Ltd., 1988
ISBN 0-471-91516-5
• Mikhail, Edward M. and Fritz Ackermann
Observations and Least Squares
IEP-A Dun-Donnelley Publisher, 1976
ISBN 0-7002-2481-5
• Niemeier, Wolfgang
Ausgleichungsrechnung, statistische Auswertemethoden
2nd, revised and extended edition
de Gruyter, 2008
ISBN 978-3-11-019055-7
• Strang, Gilbert
Linear Algebra and its Applications
4th edition
C.2 Popular science books, literature
• Sobel, Dava
Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time
Fourth Estate, 1996
ISBN 1-85702-502-4
German translation:
Längengrad. Die wahre Geschichte eines einsamen Genies, welches das größte wissenschaftliche Problem seiner Zeit löste
Berliner Taschenbuch Verlag, 2003
ISBN 3-8333-0271-2
• Kehlmann, Daniel
Die Vermessung der Welt
Rowohlt, Reinbek, 2005
ISBN 3-498-03528-2
• Krumm, Friedhelm
Geodetic Network Adjustment Examples
http://www.gis.uni-stuttgart.de/lehre/campus-docs/adjustment_examples.pdf
• Strang, Gilbert
Linear Algebra, MIT Course 18.06
http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/