2 Localization

Nomenclature

General Conventions
$\boldsymbol{a}$ … true quantity
$\hat{\boldsymbol{a}}$ … estimated/approximated quantity
$\tilde{\boldsymbol{a}}$ … measured quantity

$\mathbf{a}(\,\cdot\,)$ … true physical system
$\hat{\mathbf{a}}(\,\cdot\,)$ … approximated model of physical system

Symbols
𝒙 … system state
𝐟( ⋅ ) … system dynamics
𝐠( ⋅ ) … output equations

𝒖 … system input
𝒏 … system input noise

𝒚 … system output
𝒘 … system output noise

18
Navigation – Problem Definition

Navigation…

… determination of position and orientation on the surface of the Earth

… the science of locating the position and plotting the course of a ship

… the method of determining position, course, and distance traveled

… the act of determining position, location and course to the destination of an aircraft or a vessel

… the guidance of ships or airplanes from place to place

1
Navigation – Problem Definition

Where is the robot?

What is its orientation?

What does its environment look like?

Where is the robot in this environment?

2
Navigation – Problem Definition

Where is the robot?


What is the robot’s orientation?

Localization…
… is the determination of a robot’s current pose (position & orientation)
with respect to a reference frame in a known environment.

What does the robot’s environment look like?


Where is the robot in this environment?

(Simultaneous) Localization and Mapping…


… is the determination of a robot’s current pose (position & orientation)
with respect to a reference frame in an unknown environment,
while also building a map of this environment.

Navigation…
… is the localization of a robot within a reference frame
… and may include the mapping of its environment

3
Localization - Basics

Localization is the determination of a robot’s current pose (position & orientation) with respect to a reference frame.

Position: $\boldsymbol{r}(t) = \begin{bmatrix} r_x(t) \\ r_y(t) \end{bmatrix}$

Orientation: $\varphi(t)$

Pose: $\boldsymbol{p}(t) = \begin{bmatrix} r_x(t) \\ r_y(t) \\ \varphi(t) \end{bmatrix}$

Reference Frame: inertial frame $I$ with axes $X$, $Y$
4
Localization - What can we measure?

Egomotion Quantities → dependent on the robot‘s current motion

displacement:
  odometer (wheel rotation rate measurements)
  visual odometer (e.g. optical sensor / monocular camera)
velocity:
  speedometer (e.g. eddy current, wheel rotation rate measurements)
  visual speedometer (e.g. optical sensor / monocular camera)
  GPS velocity (time phase shift measurement)
acceleration & angular rate:
  inertial measurement unit (IMU)
forces & torques:
  engine commands

Environment Quantities → dependent on the robot‘s current pose

range observations:
  distance of GPS satellites (time measurement)
  distance of network stations (time measurement)
  distance of wifi hotspots (time measurement)
  distance to objects (ultrasound) *
  height (~pressure measurement) *
bearing observations:
  direction of earth’s magnetic field (magnetometer)
  direction to objects with known position (monocular camera)
  direction to objects with unknown position (monocular camera) *
range and bearing observations:
  distance and direction of objects with known position (time-of-flight camera, laser scanner, stereo camera)
  distance and direction of objects with unknown position (time-of-flight camera, laser scanner, stereo camera) *

* Observations that do not provide an absolute reference in the external world, but may be used for relative localization.
5
Localization - Methods

Dead Reckoning Localization


… is keeping track of how much a robot moves over a period of time by
observing egomotion quantities without reference to the external world.

Range-Bearing Localization
… is calculating the pose of a robot instantaneously by observing environment
quantities which provide a reference to the external world.

Sensor Fusion based Localization


… is combining heterogeneous observations to establish better pose information than evaluating each observation individually.

6
Dead Reckoning Localization

Dead Reckoning Localization


… is keeping track of how much a robot moves over a period of time by
observing egomotion quantities without reference to the external world.

Egomotion Quantities → dependent on the robot‘s current motion

displacement:
  odometer (wheel rotation rate measurements)
  visual odometer (e.g. optical sensor / monocular camera)
velocity:
  speedometer (e.g. eddy current, wheel rotation rate measurements)
  visual speedometer (e.g. optical sensor / monocular camera)
  GPS velocity (time phase shift measurement)
acceleration & angular rate:
  inertial measurement unit (IMU)
forces & torques:
  engine commands

7
Dead Reckoning - Differential Drive Robot

Dead Reckoning Localization


… is keeping track of how much a robot moves over a period of time by
observing egomotion quantities without reference to the external world.
Current Robot Pose → changes due to actuation:

$\boldsymbol{p}(t) = \begin{bmatrix} r_x(t) \\ r_y(t) \\ \varphi(t) \end{bmatrix}$

Egomotion Quantities (Actuation) → measured wheel rotation rates:

$\tilde{\boldsymbol{u}} = \begin{bmatrix} \tilde{\dot\theta}_r \\ \tilde{\dot\theta}_l \end{bmatrix}$

Initial Robot Pose → known or initialized as zero:

$\boldsymbol{p}(t_0) = \begin{bmatrix} r_x(t_0) \\ r_y(t_0) \\ \varphi(t_0) \end{bmatrix}$
8
Dead Reckoning - Differential Drive Robot

Dead Reckoning Vehicle Kinematics Model → see MR01 Kinematics

Kinematics:

$\dot r_x = \frac{R}{2}\cos\varphi\,\dot\theta_r + \frac{R}{2}\cos\varphi\,\dot\theta_l$

$\dot r_y = \frac{R}{2}\sin\varphi\,\dot\theta_r + \frac{R}{2}\sin\varphi\,\dot\theta_l$

$\dot\varphi = \frac{R}{2a}\left(\dot\theta_r - \dot\theta_l\right)$

⇒ $\dot{\boldsymbol{p}} = \mathbf{f}(\boldsymbol{p}, \boldsymbol{u})$

with the body-to-inertial rotation

$^I_B\mathbf{R}(\varphi) = \begin{bmatrix} \cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{bmatrix}$
9
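As a sketch of dead reckoning with these kinematics, the model can be integrated numerically from measured wheel rates; the wheel radius `R`, half wheel separation `a`, rates, and time step below are illustrative values, and explicit Euler stands in for a proper ODE solver:

```python
import numpy as np

def f(p, u, R=0.05, a=0.15):
    """Differential-drive kinematics p_dot = f(p, u).
    p = [r_x, r_y, phi]; u = [theta_dot_r, theta_dot_l] (wheel rates, rad/s).
    R: wheel radius, a: half wheel separation (illustrative values)."""
    _, _, phi = p
    v = R / 2.0 * (u[0] + u[1])          # forward speed along the body x-axis
    return np.array([v * np.cos(phi),
                     v * np.sin(phi),
                     R / (2.0 * a) * (u[0] - u[1])])

def dead_reckoning(p0, us, dt):
    """Integrate measured wheel rates over time (explicit Euler)."""
    p = np.array(p0, dtype=float)
    for u in us:
        p = p + dt * f(p, np.asarray(u))
    return p

# Equal wheel rates -> straight-line motion along the initial heading.
p_end = dead_reckoning([0.0, 0.0, 0.0], [(10.0, 10.0)] * 100, dt=0.01)
```

With both wheels at 10 rad/s the forward speed is 0.5 m/s, so after one second the estimated pose has advanced 0.5 m with unchanged heading.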
Dead Reckoning Localization

Simulated/Real System: $\dot{\boldsymbol{p}}(t) = \mathbf{f}(\boldsymbol{p}(t), \boldsymbol{u}(t))$, integrated from $\boldsymbol{p}(t_0)$ (simulated/real vehicle kinematics)

Egomotion Measurement: $\tilde{\boldsymbol{u}}(t)$

Dead Reckoning Localization: $\dot{\hat{\boldsymbol{p}}}(t) = \mathbf{f}(\hat{\boldsymbol{p}}(t), \tilde{\boldsymbol{u}}(t))$, integrated from $\hat{\boldsymbol{p}}(t_0)$ (dead reckoning vehicle kinematics)
10
Range/Bearing Localization

Range/Bearing Localization
… is calculating the pose of a robot instantaneously by observing
environment quantities which provide a reference to the external world.

Environment Observations → dependent on the robot‘s current pose

range observations:
  distance of GPS satellites (time measurement)
  distance of network stations (time measurement)
  distance of wifi hotspots (time measurement)
  distance to objects (ultrasound) *
  height (~pressure measurement) *
bearing observations:
  direction of earth’s magnetic field (magnetometer)
  direction to objects with known position (monocular camera)
  direction to objects with unknown position (monocular camera) *
range and bearing observations:
  distance and direction of objects with known position (time-of-flight camera, laser scanner, stereo camera)
  distance and direction of objects with unknown position (time-of-flight camera, laser scanner, stereo camera) *

11
Range/Bearing Localization

Range/Bearing Localization
… is calculating the pose of a robot instantaneously by observing
environment quantities which provide a reference to the external world.
Current Robot Pose → instantaneous pose:

$\boldsymbol{p}(t) = \begin{bmatrix} r_x(t) \\ r_y(t) \\ \varphi(t) \end{bmatrix}$

Distinctive Landmarks ($i = 1..n$) → position known w.r.t. reference frame:

$\boldsymbol{m}_i = \begin{bmatrix} m_{i,x} \\ m_{i,y} \end{bmatrix}$ → Map

Distinctive Landmarks ($i = 1..n$) → measurement w.r.t. body frame:

$\tilde{\boldsymbol{y}}_i(t) = \begin{bmatrix} \tilde{y}_{i,x}(t) \\ \tilde{y}_{i,y}(t) \end{bmatrix} = \lVert\tilde{\boldsymbol{y}}_i(t)\rVert \begin{bmatrix} \cos\tilde\varphi_i(t) \\ \sin\tilde\varphi_i(t) \end{bmatrix}$
12
Range/Bearing Localization

Range/Bearing Localization
… distance and direction measurements to known reference objects are available, which allow for the calculation of the robot’s pose.

Landmark-Pose Relation:

$^I\boldsymbol{m}_i = {}^I\boldsymbol{r} + {}^I\boldsymbol{y}_i$

$^I\boldsymbol{m}_i = {}^I\boldsymbol{r} + {}^I_B\mathbf{R}(\varphi)\cdot{}^B\boldsymbol{y}_i$

$^I\boldsymbol{m}_i = {}^I\boldsymbol{r} + \begin{bmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{bmatrix}\cdot{}^B\boldsymbol{y}_i$

$^I\boldsymbol{m}_i = {}^I\boldsymbol{r} + \begin{bmatrix} {}^By_{i,x}\cos\varphi - {}^By_{i,y}\sin\varphi \\ {}^By_{i,x}\sin\varphi + {}^By_{i,y}\cos\varphi \end{bmatrix}$

$^I\boldsymbol{m}_i = {}^I\boldsymbol{r} + \begin{bmatrix} -{}^By_{i,y} & {}^By_{i,x} \\ {}^By_{i,x} & {}^By_{i,y} \end{bmatrix}\cdot\begin{bmatrix} \sin\varphi \\ \cos\varphi \end{bmatrix}$
13
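A quick numerical check of the landmark-pose relation: the rearranged form in the last line agrees with the direct rotation form. The pose and measurement values below are made up for illustration:

```python
import numpy as np

def R2(phi):
    """Planar rotation from body frame B to inertial frame I."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

# True pose and a body-frame landmark observation (illustrative numbers).
r_I = np.array([2.0, 1.0])      # robot position in I
phi = np.pi / 6                 # robot orientation
y_B = np.array([1.0, 0.5])      # landmark measured in the body frame

# Landmark-pose relation: m_I = r_I + R(phi) @ y_B
m_I = r_I + R2(phi) @ y_B

# Equivalent rearranged form, isolating sin(phi) and cos(phi):
m_I_alt = r_I + np.array([[-y_B[1], y_B[0]],
                          [ y_B[0], y_B[1]]]) @ np.array([np.sin(phi), np.cos(phi)])
```

The rearranged form is what makes the closed-form pose determination on the next slides linear in $(r_x, r_y, \sin\varphi, \cos\varphi)$.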
Range/Bearing Localization

Range/Bearing Localization
… distance and direction measurements to known reference objects are available, which allow for the calculation of the robot’s pose.

Closed-Form Pose Determination for $n = 2$ landmarks:

$\underbrace{\begin{bmatrix} m_{1,x} \\ m_{1,y} \\ m_{2,x} \\ m_{2,y} \end{bmatrix}}_{\boldsymbol{m}} = \underbrace{\begin{bmatrix} 1 & 0 & -{}^By_{1,y} & {}^By_{1,x} \\ 0 & 1 & {}^By_{1,x} & {}^By_{1,y} \\ 1 & 0 & -{}^By_{2,y} & {}^By_{2,x} \\ 0 & 1 & {}^By_{2,x} & {}^By_{2,y} \end{bmatrix}}_{\boldsymbol{Y}} \cdot \begin{bmatrix} r_x \\ r_y \\ \sin\varphi \\ \cos\varphi \end{bmatrix}$

$\begin{bmatrix} r_x \\ r_y \\ \sin\varphi \\ \cos\varphi \end{bmatrix} = \boldsymbol{Y}^{-1}\cdot\boldsymbol{m}$ ⇒ $\boldsymbol{p} = \mathbf{h}(\boldsymbol{y}, \boldsymbol{m})$

with $\varphi = \operatorname{atan2}(\sin\varphi, \cos\varphi)$
14
Range/Bearing Localization

Range/Bearing Localization
… distance and direction measurements to known reference objects are available, which allow for the calculation of the robot’s pose.

Closed-Form Pose Determination for $n > 2$ landmarks:

$\underbrace{\begin{bmatrix} m_{1,x} \\ m_{1,y} \\ \vdots \\ m_{n,x} \\ m_{n,y} \end{bmatrix}}_{\boldsymbol{m}} = \underbrace{\begin{bmatrix} 1 & 0 & -{}^By_{1,y} & {}^By_{1,x} \\ 0 & 1 & {}^By_{1,x} & {}^By_{1,y} \\ \vdots & \vdots & \vdots & \vdots \\ 1 & 0 & -{}^By_{n,y} & {}^By_{n,x} \\ 0 & 1 & {}^By_{n,x} & {}^By_{n,y} \end{bmatrix}}_{\boldsymbol{Y}} \cdot \begin{bmatrix} r_x \\ r_y \\ \sin\varphi \\ \cos\varphi \end{bmatrix}$

$\begin{bmatrix} r_x \\ r_y \\ \sin\varphi \\ \cos\varphi \end{bmatrix} = \boldsymbol{Y}^{+}\cdot\boldsymbol{m}$ ⇒ $\boldsymbol{p} = \mathbf{h}(\boldsymbol{y}, \boldsymbol{m})$

with $\varphi = \operatorname{atan2}(\sin\varphi, \cos\varphi)$

$\boldsymbol{Y}^{+} = \left(\boldsymbol{Y}^{\mathrm T}\boldsymbol{Y}\right)^{-1}\boldsymbol{Y}^{\mathrm T}$ … pseudo inverse ⇒ least squares solution
15
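The least-squares pose determination can be sketched as follows; `pose_from_landmarks` is a hypothetical helper name, and `np.linalg.lstsq` replaces the explicit pseudo-inverse for numerical robustness:

```python
import numpy as np

def pose_from_landmarks(m_list, y_list):
    """Closed-form pose from n >= 2 landmarks: stack m = Y @ [rx, ry, sin(phi), cos(phi)]
    and solve in the least-squares sense (pseudo-inverse)."""
    rows, rhs = [], []
    for (mx, my), (yx, yy) in zip(m_list, y_list):
        rows.append([1.0, 0.0, -yy, yx])   # m_x = r_x - y_y sin(phi) + y_x cos(phi)
        rows.append([0.0, 1.0,  yx, yy])   # m_y = r_y + y_x sin(phi) + y_y cos(phi)
        rhs.extend([mx, my])
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    rx, ry, s, c = sol
    return np.array([rx, ry, np.arctan2(s, c)])

# Synthetic check: generate noise-free body-frame measurements from a known pose.
phi_true = 0.3
r_true = np.array([1.0, 2.0])
R = np.array([[np.cos(phi_true), -np.sin(phi_true)],
              [np.sin(phi_true),  np.cos(phi_true)]])
m_list = [np.array([4.0, 0.0]), np.array([0.0, 5.0]), np.array([3.0, 3.0])]
y_list = [R.T @ (m - r_true) for m in m_list]   # y_i = R(phi)^T (m_i - r)

p_hat = pose_from_landmarks(m_list, y_list)
```

With noise-free measurements the least-squares solution reproduces the true pose exactly; with noisy measurements it returns the best fit over all landmarks.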
Range/Bearing Localization

Simulated/Real System: $\boldsymbol{y}(t) = \mathbf{g}(\boldsymbol{p}(t), \boldsymbol{m})$ (simulated/real sensor characteristics)

Environment Measurement: $\tilde{\boldsymbol{y}}(t)$

Range/Bearing Localization: $\hat{\boldsymbol{p}}(t) = \mathbf{h}(\tilde{\boldsymbol{y}}(t), \boldsymbol{m})$ (modeled inverse sensor characteristics)
16
Comparison and Conclusion

Dead Reckoning Localization vs. Range/Bearing Localization

observations/measurements: egomotion variables $\tilde{\boldsymbol{u}}(t)$ | environment variables $\tilde{\boldsymbol{y}}(t)$
required knowledge: initial pose $\boldsymbol{p}(t_0)$ | environment map $\boldsymbol{m}$
determination function: kinematic model $\dot{\boldsymbol{p}}(t) = \mathbf{f}(\boldsymbol{p}(t), \boldsymbol{u}(t))$ | geometric relation $\boldsymbol{p}(t) = \mathbf{h}(\boldsymbol{y}(t), \boldsymbol{m})$
evaluation type: successive, $t_0 \ldots t$ | instantaneous, $t$
mathematical methods: (numerical) integration | (least squares) optimization
disadvantage: pose determination suffers from accumulation of errors | pose determination suffers from erratic pose changes
error source: measurement errors | measurement errors
pose error: grows without bound | bounded for every evaluation, independent of previous evaluations (dependent on landmark distribution)
17
Sensor Fusion based Localization

Simulated/Real System:
$\dot{\boldsymbol{p}}(t) = \mathbf{f}(\boldsymbol{p}(t), \boldsymbol{u}(t))$ from $\boldsymbol{p}(t_0)$ (simulated/real vehicle kinematics); $\boldsymbol{y}(t) = \mathbf{g}(\boldsymbol{p}(t), \boldsymbol{m})$ (simulated/real sensor characteristics)

Measurements: uncertain, discrete input measurements $\tilde{\boldsymbol{u}}(t)$ and output measurements $\tilde{\boldsymbol{y}}(t)$

Sensor Fusion (e.g. Extended Kalman Filter), initialized with $\hat{\boldsymbol{p}}(t_0)$:
propagation (modeled vehicle kinematics, numerical integration): $\dot{\hat{\boldsymbol{p}}}(t) = \mathbf{f}(\hat{\boldsymbol{p}}(t), \tilde{\boldsymbol{u}}(t))$ → $\hat{\boldsymbol{p}}^-(t)$
prediction (modeled sensor characteristics): $\hat{\boldsymbol{y}}(t) = \mathbf{g}(\hat{\boldsymbol{p}}^-(t), \boldsymbol{m})$
correction: $\hat{\boldsymbol{p}}^+(t) = \hat{\boldsymbol{p}}^-(t) + \boldsymbol{K}\left(\tilde{\boldsymbol{y}}(t) - \hat{\boldsymbol{y}}(t)\right)$
19
Sensor Fusion based Localization

Simulated/Real System:
$\dot{\boldsymbol{p}}(t) = \mathbf{f}(\boldsymbol{p}(t), \boldsymbol{u}(t))$ from $\boldsymbol{p}(t_0)$; $\boldsymbol{y}(t) = \mathbf{g}(\boldsymbol{p}(t), \boldsymbol{m})$

Measurement noise: input $\boldsymbol{n}(t) \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{N})$, output $\boldsymbol{w}(t) \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{W})$

A/D sampling: $\tilde{\boldsymbol{u}}_k = \tilde{\boldsymbol{u}}(t_k) = \tilde{\boldsymbol{u}}(k \cdot T_k)$ with $T_k \le T_l$; $\tilde{\boldsymbol{y}}_l = \tilde{\boldsymbol{y}}(t_l) = \tilde{\boldsymbol{y}}(l \cdot T_l)$

Sensor Fusion (e.g. Extended Kalman Filter), initialized with $\hat{\boldsymbol{p}}_0$:
propagation (modeled vehicle kinematics, numerical integration, e.g. RK45): $\dot{\hat{\boldsymbol{p}}}(t_k) = \mathbf{f}(\hat{\boldsymbol{p}}(t_k), \tilde{\boldsymbol{u}}(t_{k-1}))$ → $\hat{\boldsymbol{p}}_l^-$ at $t = t_k = t_l$
prediction (modeled sensor characteristics): $\hat{\boldsymbol{y}}_l = \mathbf{g}(\hat{\boldsymbol{p}}_l^-, \boldsymbol{m})$
correction: $\hat{\boldsymbol{p}}_l^+ = \hat{\boldsymbol{p}}_l^- + \boldsymbol{K}\left(\tilde{\boldsymbol{y}}_l - \hat{\boldsymbol{y}}_l\right)$
20
Extended Kalman Filter: Signal Flow

Simulated/Real System:
$\dot{\boldsymbol{x}}(t) = \mathbf{f}(\boldsymbol{x}(t), \boldsymbol{u}(t))$ from $\boldsymbol{x}(t_0)$; $\boldsymbol{y}(t) = \mathbf{g}(\boldsymbol{x}(t), \boldsymbol{m})$

Measurement noise: input $\boldsymbol{n}(t) \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{N})$, output $\boldsymbol{w}(t) \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{W})$

A/D sampling: $\tilde{\boldsymbol{u}}_k = \tilde{\boldsymbol{u}}(t_k) = \tilde{\boldsymbol{u}}(k \cdot T_k)$ with $T_k \le T_l$; $\tilde{\boldsymbol{y}}_l = \tilde{\boldsymbol{y}}(t_l) = \tilde{\boldsymbol{y}}(l \cdot T_l)$

Extended Kalman Filter, initialized with $\hat{\boldsymbol{x}}_0$:
propagation (modeled vehicle kinematics, numerical integration, e.g. RK45): $\dot{\hat{\boldsymbol{x}}}(t_k) = \mathbf{f}(\hat{\boldsymbol{x}}(t_k), \tilde{\boldsymbol{u}}(t_{k-1}))$ → $\hat{\boldsymbol{x}}_l^-$ at $t = t_k = t_l$
prediction (modeled sensor characteristics): $\hat{\boldsymbol{y}}_l = \mathbf{g}(\hat{\boldsymbol{x}}_l^-, \boldsymbol{m})$
correction: $\hat{\boldsymbol{x}}_l^+ = \hat{\boldsymbol{x}}_l^- + \boldsymbol{K}\left(\tilde{\boldsymbol{y}}_l - \hat{\boldsymbol{y}}_l\right)$
21
General Measurement Model

Measurement:
Any measurement $\tilde{\boldsymbol{a}}$ of an $n$-dimensional quantity $\boldsymbol{a}$ is subject to errors.

$\boldsymbol{a} = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix}$ … true quantity, $\tilde{\boldsymbol{a}} = \begin{bmatrix} \tilde a_1 \\ \vdots \\ \tilde a_n \end{bmatrix}$ … measured quantity

Measurement Model: $\tilde{\boldsymbol{a}} = \boldsymbol{a} + \boldsymbol{n}_{\boldsymbol{a}} + \boldsymbol{b}_{\boldsymbol{a}} + \boldsymbol{w}_{\boldsymbol{a}}$

Random Noise: $\boldsymbol{n}_{\boldsymbol{a}} = \begin{bmatrix} n_{a,1} \\ \vdots \\ n_{a,n} \end{bmatrix} \sim \mathcal{N}\left(\boldsymbol{0}, \boldsymbol{\Sigma}_{\boldsymbol{n_a}}\right)$ with $\boldsymbol{\Sigma}_{\boldsymbol{n_a}}$ … noise covariance

Bias / Offset: $\boldsymbol{b}_{\boldsymbol{a}} = \begin{bmatrix} b_{a,1} \\ \vdots \\ b_{a,n} \end{bmatrix} = \text{const.}$

Random Walk: $\dot{\boldsymbol{w}}_{\boldsymbol{a}} = \begin{bmatrix} \dot w_{a,1} \\ \vdots \\ \dot w_{a,n} \end{bmatrix} \sim \mathcal{N}\left(\boldsymbol{0}, \boldsymbol{\Sigma}_{\boldsymbol{w_a}}\right)$ with $\boldsymbol{\Sigma}_{\boldsymbol{w_a}}$ … random walk covariance
22
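The measurement model can be simulated as a 1-D sketch (all magnitudes below are illustrative): white noise, a constant bias, and a random walk built from cumulated Gaussian increments are added to the true quantity.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_measurement(a_true, sigma_n, bias, sigma_w, steps):
    """a_tilde[k] = a_true + n[k] + b + w[k]: white noise n, constant bias b,
    and a random walk w driven by Gaussian increments (1-D sketch)."""
    n = rng.normal(0.0, sigma_n, steps)             # random noise
    w = np.cumsum(rng.normal(0.0, sigma_w, steps))  # random walk
    return a_true + n + bias + w

# With sigma_w = 0, averaging many samples isolates the bias.
a_tilde = simulate_measurement(a_true=5.0, sigma_n=0.1, bias=0.3,
                               sigma_w=0.0, steps=10000)
err_mean = a_tilde.mean() - 5.0   # ~ bias: noise averages out
```

Note the difference in character: the noise averages out, the bias does not, and a nonzero random walk makes the error variance grow over time.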
1-Dimensional Random Noise

White Gaussian random process
→ Gaussian (normal) distribution $\mathcal{N}(\mu_x, \sigma_x)$ of signal amplitudes from one signal

Probability Density Function: $p(x) = \frac{1}{\sqrt{2\pi}\,\sigma_x}\, e^{-\frac{(x-\mu_x)^2}{2\sigma_x^2}}$

$\mu_x \pm \sigma_x$ … 68.2 %
$\mu_x \pm 2\sigma_x$ … 95.4 %
$\mu_x \pm 3\sigma_x$ … 99.7 %

[Figure: a sample signal $x(t)$ and its amplitude density $p(x)$, with the $\pm\sigma_x$, $\pm 2\sigma_x$, $\pm 3\sigma_x$ bands around $\mu_x$.]
23
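The stated interval probabilities can be checked numerically by integrating the density with the trapezoidal rule (a plain-Python sketch):

```python
import math

def gauss_pdf(x, mu, sigma):
    """p(x) = exp(-(x - mu)^2 / (2 sigma^2)) / (sqrt(2 pi) sigma)"""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def prob_within(k, mu=0.0, sigma=1.0, n=20000):
    """P(mu - k*sigma < x < mu + k*sigma) via trapezoidal integration of the pdf."""
    lo, hi = mu - k * sigma, mu + k * sigma
    h = (hi - lo) / n
    s = 0.5 * (gauss_pdf(lo, mu, sigma) + gauss_pdf(hi, mu, sigma))
    s += sum(gauss_pdf(lo + i * h, mu, sigma) for i in range(1, n))
    return s * h

p1 = prob_within(1)   # ~0.682
p2 = prob_within(2)   # ~0.954
p3 = prob_within(3)   # ~0.997
```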
2-Dimensional Random Noise

White Gaussian random process
→ Gaussian (normal) distribution $\mathcal{N}(\boldsymbol{\mu}_{\boldsymbol{x}}, \boldsymbol{\Sigma}_{\boldsymbol{x}})$ of signal amplitudes from $n = 2$ signals

Probability Density Function: $p(\boldsymbol{x}) = (2\pi)^{-\frac{n}{2}}\, |\boldsymbol{\Sigma}_{\boldsymbol{x}}|^{-\frac{1}{2}}\, e^{-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu}_{\boldsymbol{x}})^{\mathrm T}\boldsymbol{\Sigma}_{\boldsymbol{x}}^{-1}(\boldsymbol{x}-\boldsymbol{\mu}_{\boldsymbol{x}})}$

[Figure: the two marginal densities $p(x_1)$ and $p(x_2)$.]
24
2-Dimensional Random Noise

White Gaussian random process
→ Gaussian (normal) distribution $\mathcal{N}(\boldsymbol{\mu}_{\boldsymbol{x}}, \boldsymbol{\Sigma}_{\boldsymbol{x}})$ of signal amplitudes from $n$ signals

Probability Density Function: $p(\boldsymbol{x}) = (2\pi)^{-\frac{n}{2}}\, |\boldsymbol{\Sigma}_{\boldsymbol{x}}|^{-\frac{1}{2}}\, e^{-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu}_{\boldsymbol{x}})^{\mathrm T}\boldsymbol{\Sigma}_{\boldsymbol{x}}^{-1}(\boldsymbol{x}-\boldsymbol{\mu}_{\boldsymbol{x}})}$

Mean: $\boldsymbol{\mu}_{\boldsymbol{x}} = \bar{\boldsymbol{x}} = \begin{bmatrix} \mu_{x_1} \\ \vdots \\ \mu_{x_n} \end{bmatrix} = \mathbf{E}\left[\boldsymbol{x}\right]$

Covariance: $\boldsymbol{\Sigma}_{\boldsymbol{x}} = \mathbf{cov}(\boldsymbol{x}) = \begin{bmatrix} \sigma_{x_1}^2 & \cdots & \rho\,\sigma_{x_1}\sigma_{x_n} \\ \vdots & \ddots & \vdots \\ \rho\,\sigma_{x_1}\sigma_{x_n} & \cdots & \sigma_{x_n}^2 \end{bmatrix} = \mathbf{E}\left[\left(\boldsymbol{x}-\boldsymbol{\mu}_{\boldsymbol{x}}\right)\left(\boldsymbol{x}-\boldsymbol{\mu}_{\boldsymbol{x}}\right)^{\mathrm T}\right]$

with $\Sigma_{x,ij} = \frac{1}{N}\sum_{k=1}^{N}\left(x_i^k - \mu_{x_i}\right)\left(x_j^k - \mu_{x_j}\right)$

Variance: $\sigma_{x_i}^2 = \operatorname{var}(x_i) = \frac{1}{N}\sum_{k=1}^{N}\left(x_i^k - \mu_{x_i}\right)^2 = \mathbf{E}\left[\left(x_i - \mu_{x_i}\right)^2\right]$

Standard Deviation: $\sigma_{x_i} = \sqrt{\operatorname{var}(x_i)} = \sqrt{\mathbf{E}\left[\left(x_i - \mu_{x_i}\right)^2\right]}$
25
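The sample-based definitions of mean and covariance can be checked against synthetic data (the distribution parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw N samples of a 2-D Gaussian with known mean and covariance.
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
x = rng.multivariate_normal(mu, Sigma, size=200000)

# Sample estimates matching the slide's definitions (1/N normalisation).
mu_hat = x.mean(axis=0)
d = x - mu_hat
Sigma_hat = (d.T @ d) / x.shape[0]

# Correlation factor recovered from the covariance entries.
rho_hat = Sigma_hat[0, 1] / np.sqrt(Sigma_hat[0, 0] * Sigma_hat[1, 1])
```

For large `N` the sample estimates converge to the true parameters; here the true correlation is $0.8/\sqrt{2} \approx 0.566$.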
Ellipses

Axis-aligned ellipse with semi-axes $a$, $b$:

$\begin{bmatrix} x_1 & x_2 \end{bmatrix} \underbrace{\begin{bmatrix} a^2 & 0 \\ 0 & b^2 \end{bmatrix}}_{A}^{-1} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 1$

Rotated ellipse (rotation angle $\alpha$, semi-axes $a^*$, $b^*$):

$\begin{bmatrix} x_1 & x_2 \end{bmatrix} \underbrace{\begin{bmatrix} a^2 & c^2 \\ c^2 & b^2 \end{bmatrix}}_{A}^{-1} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 1$

with $\operatorname{eig}(A)$:
eigenvectors $\boldsymbol{v}_{1/2}$ → $\boldsymbol{V} = \begin{bmatrix} \boldsymbol{v}_1 & \boldsymbol{v}_2 \end{bmatrix} = \begin{bmatrix} \boldsymbol{x}_1^* & \boldsymbol{x}_2^* \end{bmatrix} = \underbrace{\begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix}}_{\boldsymbol{R}_{2\mathrm D}}$
eigenvalues $\lambda_{1/2}$ → $\boldsymbol{D} = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} = \begin{bmatrix} a^{*2} & 0 \\ 0 & b^{*2} \end{bmatrix}$

⇒ principal axis transform
26
Covariance: Geometric Interpretation

Uncorrelated signals (axis-aligned $r\sigma$ ellipse):

$\begin{bmatrix} x_1 & x_2 \end{bmatrix} \underbrace{\begin{bmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{bmatrix}}_{\Sigma}^{-1} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = r^2$

Correlated signals (rotated ellipse, angle $\alpha$, semi-axes $r\sigma_1^*$, $r\sigma_2^*$):

$\begin{bmatrix} x_1 & x_2 \end{bmatrix} \underbrace{\begin{bmatrix} \sigma_{x_1}^2 & \rho_{12}\sigma_{x_1}\sigma_{x_2} \\ \rho_{12}\sigma_{x_2}\sigma_{x_1} & \sigma_{x_2}^2 \end{bmatrix}}_{\Sigma}^{-1} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = r^2$

with $\operatorname{eig}(\Sigma)$:
eigenvectors $\boldsymbol{v}_{1/2}$ → $\boldsymbol{V} = \begin{bmatrix} \boldsymbol{v}_1 & \boldsymbol{v}_2 \end{bmatrix} = \begin{bmatrix} \boldsymbol{x}_1^* & \boldsymbol{x}_2^* \end{bmatrix} = \underbrace{\begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix}}_{\boldsymbol{R}_{2\mathrm D}}$
eigenvalues $\lambda_{1/2}$ → $\boldsymbol{D} = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} = \begin{bmatrix} \sigma_1^{*2} & 0 \\ 0 & \sigma_2^{*2} \end{bmatrix}$

⇒ principal axis transform
27
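The principal-axis transform of a covariance matrix is exactly an eigendecomposition; a short sketch with made-up values:

```python
import numpy as np

# Correlated 2-D covariance (illustrative values).
s1, s2, rho = 2.0, 1.0, 0.6
Sigma = np.array([[s1**2,         rho * s1 * s2],
                  [rho * s1 * s2, s2**2        ]])

# eig(Sigma): eigenvalues are the squared semi-axes, eigenvectors the rotation.
lam, V = np.linalg.eigh(Sigma)          # ascending eigenvalues, orthonormal V
axis_minor, axis_major = np.sqrt(lam)   # 1-sigma semi-axes sigma1*, sigma2*
alpha = np.arctan2(V[1, 1], V[0, 1])    # orientation of the major axis

# V diagonalises Sigma: V.T @ Sigma @ V = diag(lam)  (principal axis transform)
D = V.T @ Sigma @ V
```

Scaling both semi-axes by $r$ gives the $r\sigma$ confidence ellipse of the slide.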
Covariance: Geometric Interpretation

Uncorrelated signals: $\boldsymbol{x}^{\mathrm T}\Sigma^{-1}\boldsymbol{x} = r^2$ with $\Sigma = \begin{bmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{bmatrix}$; correlated signals: $\Sigma = \begin{bmatrix} \sigma_{x_1}^2 & \rho_{12}\sigma_{x_1}\sigma_{x_2} \\ \rho_{12}\sigma_{x_2}\sigma_{x_1} & \sigma_{x_2}^2 \end{bmatrix}$ (ellipses as on the previous slide)

Correlation Factor $\rho_{ij}$ → correlation between signals $x_i$ and $x_j$:

$\rho_{ij} = 0$ … no correlation
$\rho_{ij} \in (0, 1]$ … (positive) correlation
$\rho_{ij} \in [-1, 0)$ … (negative) correlation
28
Systems in State Space

Systems in State Space

Nonlinear Continuous System

Linearization

Linear Continuous System

Discretization

Linear Discrete System

29
Nonlinear Continuous System

Nonlinear Continuous System (initial state $\boldsymbol{x}(t_0)$, map $\boldsymbol{m}$):

System Dynamics/Kinematics: $\dot{\boldsymbol{x}}(t) = \mathbf{f}(\boldsymbol{x}(t), \boldsymbol{u}(t))$

Output Equations: $\boldsymbol{y}(t) = \mathbf{g}(\boldsymbol{x}(t), \boldsymbol{m})$

30
Linearized Continuous System

Linearized Continuous System

[Block diagram: input $\Delta\boldsymbol{u}(t)$ through input matrix $\boldsymbol{B}^*$, integrator, output matrix $\boldsymbol{C}^*$ to $\Delta\boldsymbol{y}(t)$, with feedback through system matrix $\boldsymbol{A}^*$.]

System Dynamics/Kinematics: $\Delta\dot{\boldsymbol{x}}(t) \approx \boldsymbol{A}(t^*)\cdot\Delta\boldsymbol{x}(t) + \boldsymbol{B}(t^*)\cdot\Delta\boldsymbol{u}(t)$

Output Equations: $\Delta\boldsymbol{y}(t) \approx \boldsymbol{C}(t^*)\cdot\Delta\boldsymbol{x}(t)$

$\boldsymbol{A}^* = \boldsymbol{A}(t^*) = \left.\frac{\partial \mathbf{f}(\boldsymbol{x},\boldsymbol{u})}{\partial \boldsymbol{x}}\right|_{\boldsymbol{x}=\boldsymbol{x}(t^*),\,\boldsymbol{u}=\boldsymbol{u}(t^*)}$ with $\boldsymbol{x}(t) = \boldsymbol{x}(t^*) + \Delta\boldsymbol{x}(t)$

$\boldsymbol{B}^* = \boldsymbol{B}(t^*) = \left.\frac{\partial \mathbf{f}(\boldsymbol{x},\boldsymbol{u})}{\partial \boldsymbol{u}}\right|_{\boldsymbol{x}=\boldsymbol{x}(t^*),\,\boldsymbol{u}=\boldsymbol{u}(t^*)}$ with $\boldsymbol{u}(t) = \boldsymbol{u}(t^*) + \Delta\boldsymbol{u}(t)$

$\boldsymbol{C}^* = \boldsymbol{C}(t^*) = \left.\frac{\partial \mathbf{g}(\boldsymbol{x})}{\partial \boldsymbol{x}}\right|_{\boldsymbol{x}=\boldsymbol{x}(t^*)}$ with $\boldsymbol{y}(t) = \boldsymbol{y}(t^*) + \Delta\boldsymbol{y}(t)$
31
Linearized Discretized System

Linearized Discretized System

[Block diagram: input $\Delta\boldsymbol{u}_{k-1}$ through discrete input matrix $\boldsymbol{H}_{k-1}$, unit delay $z^{-1}$, output matrix $\boldsymbol{C}_k$ to $\Delta\boldsymbol{y}_k$, with feedback through discrete system matrix $\boldsymbol{F}_{k-1}$; $k = t_k / T_k$.]

State Transition: $\Delta\boldsymbol{x}_k \approx \boldsymbol{F}_{k-1}\cdot\Delta\boldsymbol{x}_{k-1} + \boldsymbol{H}_{k-1}\cdot\Delta\boldsymbol{u}_{k-1}$

Output Equations: $\Delta\boldsymbol{y}_k \approx \boldsymbol{C}_k\cdot\Delta\boldsymbol{x}_k$

$\boldsymbol{F}_{k-1} = \mathbf{e}^{T_k \boldsymbol{A}(t_{k-1})} = \boldsymbol{I} + T_k\,\boldsymbol{A}(t_{k-1}) + \frac{T_k^2}{2!}\,\boldsymbol{A}(t_{k-1})^2 + \cdots$

$\boldsymbol{H}_{k-1} = \int_0^{T_k} \mathbf{e}^{\tau \boldsymbol{A}(t_{k-1})}\, d\tau\; \boldsymbol{B}(t_{k-1}) = \left(T_k\,\boldsymbol{I} + \frac{T_k^2}{2!}\,\boldsymbol{A}(t_{k-1}) + \frac{T_k^3}{3!}\,\boldsymbol{A}(t_{k-1})^2 + \cdots\right) \boldsymbol{B}(t_{k-1})$

$\boldsymbol{C}_k = \boldsymbol{C}(t_k)$

with $\boldsymbol{x}_k = \boldsymbol{x}_{k-1} + \Delta\boldsymbol{x}_{k-1}$, $\boldsymbol{u}_k = \boldsymbol{u}_{k-1} + \Delta\boldsymbol{u}_{k-1}$, $\boldsymbol{y}_k = \boldsymbol{y}_{k-1} + \Delta\boldsymbol{y}_{k-1}$
32
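The truncated-series discretization can be sketched and checked on a system where the series terminates: for the double integrator $\boldsymbol{A}^2 = \boldsymbol{0}$, so the exact $\boldsymbol{F}$ and $\boldsymbol{H}$ are known in closed form (matrices and $T$ are illustrative):

```python
import numpy as np
import math

def discretize(A, B, T, terms=20):
    """F = exp(T*A) and H = (integral_0^T exp(tau*A) dtau) * B via truncated series:
    F = I + T A + T^2/2! A^2 + ...,  H = (T I + T^2/2! A + T^3/3! A^2 + ...) B."""
    n = A.shape[0]
    F = np.zeros((n, n))
    S = np.zeros((n, n))           # series for the integral factor of H
    Ak = np.eye(n)                 # running power A^k, starting at k = 0
    for k in range(terms):
        F += (T ** k / math.factorial(k)) * Ak
        S += (T ** (k + 1) / math.factorial(k + 1)) * Ak
        Ak = Ak @ A
    return F, S @ B

# Double integrator: x = [position, velocity], u = acceleration.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
F, H = discretize(A, B, T=0.1)
# Known exact result: F = [[1, T], [0, 1]], H = [[T^2/2], [T]]
```

In practice a library routine for the matrix exponential would replace the hand-rolled series; the sketch only mirrors the formulas on the slide.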
Sensor Fusion based Localization

Sensor Fusion based Localization


… is combining heterogeneous observations to establish better pose information than evaluating each observation individually.

Fusion Algorithms:
• Kalman Filter
• Extended Kalman Filter
• Unscented Kalman Filter / Sigma-Point Kalman Filter
• Particle Filter

 Recursive State Estimation Filters

1. State Propagation through input measurements
2. Output Prediction
3. State Correction by weighting the Output Prediction/Measurement Difference

Different Filters differ in how to implement these steps!

33
Extended Kalman Filter: Given

Given
0. Initial State & Uncertainty

Initial State: $\hat{\boldsymbol{x}}_0 = \hat{\boldsymbol{x}}(t_0) = \mathbf{E}\left[\boldsymbol{x}(t_0)\right]$
Initial Covariance: $\boldsymbol{P}_0 = \boldsymbol{P}(t_0) = \boldsymbol{\Sigma}_{\boldsymbol{x}_0} = \mathbf{E}\left[\left(\boldsymbol{x}_0 - \mathbf{E}[\boldsymbol{x}(t_0)]\right)\left(\boldsymbol{x}_0 - \mathbf{E}[\boldsymbol{x}(t_0)]\right)^{\mathrm T}\right]$

1. System Model

System Dyn./Kin.: $\dot{\boldsymbol{x}}(t) = \mathbf{f}(\boldsymbol{x}(t), \boldsymbol{u}(t))$
Output Equations: $\boldsymbol{y}(t) = \mathbf{g}(\boldsymbol{x}(t), \boldsymbol{m})$

2. Measurements & Noise Characteristics

Input Measurements: $\tilde{\boldsymbol{u}}(t) = \boldsymbol{u}(t) + \boldsymbol{n}(t)$, $\boldsymbol{n} \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{N})$
Output Measurements: $\tilde{\boldsymbol{y}}(t) = \boldsymbol{y}(t) + \boldsymbol{w}(t)$, $\boldsymbol{w} \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{W})$

Goal
Estimate of current State & Uncertainty

State Estimate: $\hat{\boldsymbol{x}}$ with $\boldsymbol{x} = \hat{\boldsymbol{x}} + \Delta\boldsymbol{x}$, $\Delta\boldsymbol{x} \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{P})$ … true estimation error
Covariance Estimate: $\hat{\boldsymbol{P}}$ with $\boldsymbol{x} = \hat{\boldsymbol{x}} + \Delta\hat{\boldsymbol{x}}$, $\Delta\hat{\boldsymbol{x}} \sim \mathcal{N}(\boldsymbol{0}, \hat{\boldsymbol{P}})$ … estimated estimation error
34
Extended Kalman Filter: Idea
[Figure: in the input space, measurements $\tilde{\boldsymbol{u}}_{k-3}, \tilde{\boldsymbol{u}}_{k-2}, \tilde{\boldsymbol{u}}_{k-1}$ (each with covariance $\boldsymbol{N}$) drive the propagation of the state estimate $\hat{\boldsymbol{x}}_{k-3}^-, \boldsymbol{P}_{k-3}^- \rightarrow \cdots \rightarrow \hat{\boldsymbol{x}}_k^- = \hat{\boldsymbol{x}}_l^-, \boldsymbol{P}_k^- = \boldsymbol{P}_l^-$; in the output space, the prediction $\hat{\boldsymbol{y}}_l$ (with $\hat{\boldsymbol{W}}_l$) is compared against the measurement $\tilde{\boldsymbol{y}}_l$ (with $\boldsymbol{W}$).]
35
State Propagation and Output Prediction

State Propagation: $\hat{\boldsymbol{x}}_k \leftarrow$ (numerical) integration of $\dot{\hat{\boldsymbol{x}}}_k = \mathbf{f}(\hat{\boldsymbol{x}}_{k-1}, \tilde{\boldsymbol{u}}_{k-1})$ with $\hat{\boldsymbol{x}}_{\text{Start}} = \hat{\boldsymbol{x}}_{k-1}$
→ mean approximation through (numerical, e.g. RK45) integration
→ assumption: $\tilde{\boldsymbol{u}}_{k-1} = \text{const.}$ for $t = t_{k-1} \ldots t_k$

Output Prediction: $\hat{\boldsymbol{y}}_l = \mathbf{g}(\hat{\boldsymbol{x}}_l^-, \boldsymbol{m})$
→ output prediction at any $t = t_k = t_l$
36
Extended Kalman Filter: Idea
[Figure: as before, propagation through $\tilde{\boldsymbol{u}}_{k-3} \ldots \tilde{\boldsymbol{u}}_{k-1}$ to $\hat{\boldsymbol{x}}_l^-, \boldsymbol{P}_l^-$; the prediction $\hat{\boldsymbol{y}}_l$ is compared against the measurement $\tilde{\boldsymbol{y}}_l$, and the difference feeds the correction of the state estimate.]
37
Covariance Propagation

(covariance propagation through the linearized discretized system of slide 32)

State Covariance Propagation: $\boldsymbol{P}_k^- = \boldsymbol{F}_{k-1}\cdot\boldsymbol{P}_{k-1}\cdot\boldsymbol{F}_{k-1}^{\mathrm T} + \boldsymbol{H}_{k-1}\cdot\boldsymbol{N}\cdot\boldsymbol{H}_{k-1}^{\mathrm T}$

Output Covariance Propagation: $\hat{\boldsymbol{W}}_k = \boldsymbol{C}_k\cdot\boldsymbol{P}_k^-\cdot\boldsymbol{C}_k^{\mathrm T}$

Derivation:

$\boldsymbol{P}_k^- = \mathbf{E}\left[\Delta\boldsymbol{x}_k \Delta\boldsymbol{x}_k^{\mathrm T}\right] = \mathbf{E}\left[\left(\boldsymbol{F}_{k-1}\Delta\boldsymbol{x}_{k-1} + \boldsymbol{H}_{k-1}\Delta\boldsymbol{u}_{k-1}\right)\left(\boldsymbol{F}_{k-1}\Delta\boldsymbol{x}_{k-1} + \boldsymbol{H}_{k-1}\Delta\boldsymbol{u}_{k-1}\right)^{\mathrm T}\right]$

$\boldsymbol{P}_k^- = \underbrace{\mathbf{E}\{\boldsymbol{F}_{k-1}\Delta\boldsymbol{x}_{k-1}\Delta\boldsymbol{x}_{k-1}^{\mathrm T}\boldsymbol{F}_{k-1}^{\mathrm T}\}}_{\boldsymbol{F}_{k-1}\boldsymbol{P}_{k-1}\boldsymbol{F}_{k-1}^{\mathrm T}} + \underbrace{\mathbf{E}\{\boldsymbol{F}_{k-1}\Delta\boldsymbol{x}_{k-1}\Delta\boldsymbol{u}_{k-1}^{\mathrm T}\boldsymbol{H}_{k-1}^{\mathrm T}\}}_{=\boldsymbol{0}} + \underbrace{\mathbf{E}\{\boldsymbol{H}_{k-1}\Delta\boldsymbol{u}_{k-1}\Delta\boldsymbol{x}_{k-1}^{\mathrm T}\boldsymbol{F}_{k-1}^{\mathrm T}\}}_{=\boldsymbol{0}} + \underbrace{\mathbf{E}\{\boldsymbol{H}_{k-1}\Delta\boldsymbol{u}_{k-1}\Delta\boldsymbol{u}_{k-1}^{\mathrm T}\boldsymbol{H}_{k-1}^{\mathrm T}\}}_{\boldsymbol{H}_{k-1}\boldsymbol{N}\boldsymbol{H}_{k-1}^{\mathrm T}}$

(the cross terms vanish since state and input errors are assumed uncorrelated, and $\mathbf{E}[\Delta\boldsymbol{u}\,\Delta\boldsymbol{u}^{\mathrm T}] = \boldsymbol{N}$ is the input noise covariance)
38
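The covariance propagation step in isolation, with matrices borrowed from a double-integrator example (all values illustrative):

```python
import numpy as np

def propagate_covariance(P, F, H, N):
    """P_k^- = F P_{k-1} F^T + H N H^T  (linearized discrete propagation)."""
    return F @ P @ F.T + H @ N @ H.T

# Double-integrator discretization for T = 0.1 (see the discretization slide).
T = 0.1
F = np.array([[1.0, T], [0.0, 1.0]])
H = np.array([[T**2 / 2], [T]])
N = np.array([[0.04]])             # input (acceleration) noise variance
P = np.diag([0.5, 0.1])

P_minus = propagate_covariance(P, F, H, N)
```

Without a correction step the propagated covariance stays symmetric and its uncertainty grows, which is exactly the dead-reckoning error accumulation seen earlier.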
Extended Kalman Filter: Idea
[Figure: propagation of $\hat{\boldsymbol{x}}^-, \boldsymbol{P}^-$ through the input measurements as before; the correction step combines $\hat{\boldsymbol{x}}_l^-, \boldsymbol{P}_l^-$ with the prediction/measurement pair $\hat{\boldsymbol{y}}_l, \tilde{\boldsymbol{y}}_l$ into the updated estimate $\hat{\boldsymbol{x}}_l^+, \boldsymbol{P}_l^+$.]
39
Kalman Gain and Update / Correction

Kalman Gain: $\boldsymbol{K}_l = \boldsymbol{P}_l^- \boldsymbol{C}_l^{\mathrm T}\left(\boldsymbol{C}_l \boldsymbol{P}_l^- \boldsymbol{C}_l^{\mathrm T} + \boldsymbol{W}\right)^{-1}$

→ the Kalman gain results from a least-squares optimization
→ Optimization: minimizing the estimation error covariance $\boldsymbol{P}$
→ provides the optimal solution for linear systems (Kalman Filter)
→ not optimal for nonlinear/linearized systems (Extended Kalman Filter)

State Update: $\hat{\boldsymbol{x}}_l^+ = \hat{\boldsymbol{x}}_l^- + \boldsymbol{K}_l\left(\tilde{\boldsymbol{y}}_l - \hat{\boldsymbol{y}}_l\right)$

Covariance Update: $\boldsymbol{P}_l^+ = \left(\boldsymbol{I} - \boldsymbol{K}_l \boldsymbol{C}_l\right)\boldsymbol{P}_l^-$

Important Assumptions:
→ measurements are unbiased and Gaussian
→ measurement and state covariances are statistically independent
→ linearization & discretization errors are small
40
Extended Kalman Filter: Idea
[Figure: the EKF equations annotated on the propagation/prediction/correction picture.]

Propagation: $\hat{\boldsymbol{x}}_{k-2} \leftarrow$ (numerical) integration of $\dot{\hat{\boldsymbol{x}}}_{k-2} = \mathbf{f}(\hat{\boldsymbol{x}}_{k-3}, \tilde{\boldsymbol{u}}_{k-3})$ with $\hat{\boldsymbol{x}}_{\text{Start}} = \hat{\boldsymbol{x}}_{k-3}$

$\boldsymbol{P}_{k-2}^- = \boldsymbol{F}_{k-3}\cdot\boldsymbol{P}_{k-3}\cdot\boldsymbol{F}_{k-3}^{\mathrm T} + \boldsymbol{H}_{k-3}\cdot\boldsymbol{N}\cdot\boldsymbol{H}_{k-3}^{\mathrm T}$

Prediction: $\hat{\boldsymbol{y}}_l = \mathbf{g}(\hat{\boldsymbol{x}}_l^-, \boldsymbol{m})$, $\hat{\boldsymbol{W}}_l = \boldsymbol{C}_l\cdot\boldsymbol{P}_l^-\cdot\boldsymbol{C}_l^{\mathrm T}$

Correction: $\boldsymbol{K}_l = \boldsymbol{P}_l^- \boldsymbol{C}_l^{\mathrm T}\left(\boldsymbol{C}_l \boldsymbol{P}_l^- \boldsymbol{C}_l^{\mathrm T} + \boldsymbol{W}\right)^{-1}$

$\hat{\boldsymbol{x}}_l^+ = \hat{\boldsymbol{x}}_l^- + \boldsymbol{K}_l\left(\tilde{\boldsymbol{y}}_l - \hat{\boldsymbol{y}}_l\right)$, $\boldsymbol{P}_l^+ = \left(\boldsymbol{I} - \boldsymbol{K}_l \boldsymbol{C}_l\right)\boldsymbol{P}_l^-$
41
Extended Kalman Filter: Algorithm

Prediction / Propagation ($t = t_{k-1} \ldots t_k$):

$\hat{\boldsymbol{x}}_k^- \leftarrow$ (numerical) integration of $\dot{\hat{\boldsymbol{x}}} = \mathbf{f}(\hat{\boldsymbol{x}}, \tilde{\boldsymbol{u}})$ with $\hat{\boldsymbol{x}}_{\text{Start}} = \hat{\boldsymbol{x}}_{k-1}$

Linearization: $\boldsymbol{A}_{k-1} = \left.\frac{\partial \mathbf{f}(\boldsymbol{x},\boldsymbol{u})}{\partial \boldsymbol{x}}\right|_{\boldsymbol{x}=\hat{\boldsymbol{x}}(t_{k-1}),\,\boldsymbol{u}=\tilde{\boldsymbol{u}}(t_{k-1})}$, $\boldsymbol{B}_{k-1} = \left.\frac{\partial \mathbf{f}(\boldsymbol{x},\boldsymbol{u})}{\partial \boldsymbol{u}}\right|_{\boldsymbol{x}=\hat{\boldsymbol{x}}(t_{k-1}),\,\boldsymbol{u}=\tilde{\boldsymbol{u}}(t_{k-1})}$

Discretization: $\boldsymbol{F}_{k-1} = \boldsymbol{I} + T_k\,\boldsymbol{A}_{k-1} + \cdots$, $\boldsymbol{H}_{k-1} = \left(T_k\,\boldsymbol{I} + \frac{T_k^2}{2!}\,\boldsymbol{A}_{k-1} + \cdots\right)\boldsymbol{B}_{k-1}$

$\boldsymbol{P}_k^- = \boldsymbol{F}_{k-1}\cdot\boldsymbol{P}_{k-1}\cdot\boldsymbol{F}_{k-1}^{\mathrm T} + \boldsymbol{H}_{k-1}\cdot\boldsymbol{N}\cdot\boldsymbol{H}_{k-1}^{\mathrm T}$

Correction / Update ($t = t_k = t_l$):

$\hat{\boldsymbol{y}}_l = \left.\mathbf{g}(\boldsymbol{x}, \boldsymbol{m})\right|_{\boldsymbol{x}=\hat{\boldsymbol{x}}_l^-}$

Linearization: $\boldsymbol{C}_l = \left.\frac{\partial \mathbf{g}(\boldsymbol{x},\boldsymbol{m})}{\partial \boldsymbol{x}}\right|_{\boldsymbol{x}=\hat{\boldsymbol{x}}_l^-}$

$\boldsymbol{K}_l = \boldsymbol{P}_l^- \boldsymbol{C}_l^{\mathrm T}\left(\boldsymbol{C}_l \boldsymbol{P}_l^- \boldsymbol{C}_l^{\mathrm T} + \boldsymbol{W}\right)^{-1}$

$\hat{\boldsymbol{x}}_l^+ = \hat{\boldsymbol{x}}_l^- + \boldsymbol{K}_l\left(\tilde{\boldsymbol{y}}_l - \hat{\boldsymbol{y}}_l\right)$

$\boldsymbol{P}_l^+ = \left(\boldsymbol{I} - \boldsymbol{K}_l \boldsymbol{C}_l\right)\boldsymbol{P}_l^-$

($\mathbf{f}$ and $\mathbf{g}$ must be provided as symbolic expressions)
42
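The algorithm can be condensed into one function. This is a sketch, not the lecture's implementation: it uses explicit Euler instead of RK45 and only the first-order series terms for $\boldsymbol{F}$ and $\boldsymbol{H}$, and the constant-velocity toy system at the end is purely illustrative.

```python
import numpy as np

def ekf_step(x, P, u, y, f, g, A_jac, B_jac, C_jac, N, W, T):
    """One EKF prediction + correction cycle (user supplies f, g and Jacobians)."""
    # Prediction / propagation over t = t_{k-1} ... t_k
    A, B = A_jac(x, u), B_jac(x, u)
    F = np.eye(len(x)) + T * A            # F ~ I + T*A (truncated series)
    H = T * B                             # H ~ T*B
    x_minus = x + T * f(x, u)             # explicit Euler in place of RK45
    P_minus = F @ P @ F.T + H @ N @ H.T
    # Correction / update at t = t_k = t_l
    C = C_jac(x_minus)
    y_hat = g(x_minus)
    K = P_minus @ C.T @ np.linalg.inv(C @ P_minus @ C.T + W)
    x_plus = x_minus + K @ (y - y_hat)
    P_plus = (np.eye(len(x)) - K @ C) @ P_minus
    return x_plus, P_plus

# Toy system: state [position, velocity], input = acceleration, output = position.
f = lambda x, u: np.array([x[1], u[0]])
g = lambda x: np.array([x[0]])
A_jac = lambda x, u: np.array([[0.0, 1.0], [0.0, 0.0]])
B_jac = lambda x, u: np.array([[0.0], [1.0]])
C_jac = lambda x: np.array([[1.0, 0.0]])

x, P = np.array([0.0, 1.0]), np.eye(2)
x, P = ekf_step(x, P, u=np.array([0.0]), y=np.array([0.12]),
                f=f, g=g, A_jac=A_jac, B_jac=B_jac, C_jac=C_jac,
                N=np.array([[0.01]]), W=np.array([[0.04]]), T=0.1)
```

The updated position lands between the propagated prediction (0.1) and the measurement (0.12), weighted by the Kalman gain.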
Extended Kalman Filter: Signal Flow

Simulated/Real System:
$\dot{\boldsymbol{x}}(t) = \mathbf{f}(\boldsymbol{x}(t), \boldsymbol{u}(t))$ from $\boldsymbol{x}(t_0)$; $\boldsymbol{y}(t) = \mathbf{g}(\boldsymbol{x}(t), \boldsymbol{m})$

Measurement noise: input $\boldsymbol{n}(t) \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{N})$, output $\boldsymbol{w}(t) \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{W})$

A/D sampling: $\tilde{\boldsymbol{u}}_k = \tilde{\boldsymbol{u}}(k \cdot T_k)$ with $T_k \le T_l$; $\tilde{\boldsymbol{y}}_l = \tilde{\boldsymbol{y}}(l \cdot T_l)$

Extended Kalman Filter, initialized with $\hat{\boldsymbol{x}}_0, \boldsymbol{P}_0$:
propagation (modeled vehicle kinematics, numerical integration, e.g. RK45) of $\hat{\boldsymbol{x}}^-, \boldsymbol{P}^-$ using $\tilde{\boldsymbol{u}}_k, \boldsymbol{N}$ → $\hat{\boldsymbol{x}}_l^-, \boldsymbol{P}_l^-$ at $t = t_k = t_l$
prediction (modeled sensor characteristics): $\hat{\boldsymbol{y}}_l = \mathbf{g}(\hat{\boldsymbol{x}}_l^-, \boldsymbol{m})$ with $\hat{\boldsymbol{W}}_l$
correction with gain $\boldsymbol{K}$ using $\tilde{\boldsymbol{y}}_l, \boldsymbol{W}$ → $\hat{\boldsymbol{x}}_l^+, \boldsymbol{P}_l^+$
43
EKF Sensor Fusion based Localization

System Definition
…definition of state, inputs and outputs

System State: $\boldsymbol{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \boldsymbol{p} = \begin{bmatrix} r_x \\ r_y \\ \varphi \end{bmatrix}$

Input Measurements: $\tilde{\boldsymbol{u}} = \begin{bmatrix} \tilde u_1 \\ \tilde u_2 \end{bmatrix} = \begin{bmatrix} \tilde{\dot\theta}_r \\ \tilde{\dot\theta}_l \end{bmatrix}$

Output Measurements → distinctive landmarks ($i = 1..n$), measurement w.r.t. body frame:

$\tilde{\boldsymbol{y}}_i(t) = \begin{bmatrix} \tilde y_{i,1}(t) \\ \tilde y_{i,2}(t) \end{bmatrix} = \lVert\tilde{\boldsymbol{y}}_i(t)\rVert \begin{bmatrix} \cos\tilde\varphi_i(t) \\ \sin\tilde\varphi_i(t) \end{bmatrix}$

→ measurements assumed in cartesian coordinates
44
EKF Sensor Fusion based Localization

Initial Knowledge
…definition of initial state, the map or other known parameters

Initial System State → known or initialized as zero:

$\hat{\boldsymbol{x}}_0 = \hat{\boldsymbol{p}}(t_0) = \begin{bmatrix} r_x(t_0) \\ r_y(t_0) \\ \varphi(t_0) \end{bmatrix}$

Distinctive Landmarks ($i = 1..n$) → position known w.r.t. reference frame:

$\boldsymbol{m}_i = \begin{bmatrix} m_{i,x} \\ m_{i,y} \end{bmatrix}$ → Map

Vehicle Parameters:
$R$ … wheel radius
$a$ … distance of wheels from Center of Mass (CoM)
45
EKF Sensor Fusion based Localization

System Model
…definition of system dynamics/kinematics and output equations

System Dynamics/Kinematics $\dot{\boldsymbol{x}} = \mathbf{f}(\boldsymbol{x}, \boldsymbol{u})$:

$\dot x_1 = \frac{R}{2}\cos x_3\, u_1 + \frac{R}{2}\cos x_3\, u_2$

$\dot x_2 = \frac{R}{2}\sin x_3\, u_1 + \frac{R}{2}\sin x_3\, u_2$

$\dot x_3 = \frac{R}{2a}\left(u_1 - u_2\right)$

Output Equations $\hat{\boldsymbol{y}} = \mathbf{g}(\hat{\boldsymbol{x}}^-, \boldsymbol{m})$:

$^I\boldsymbol{m}_i = {}^I\boldsymbol{r} + {}^I\boldsymbol{y}_i = {}^I\boldsymbol{r} + {}^I_B\mathbf{R}(\varphi)\cdot{}^B\boldsymbol{y}_i$

$^B\boldsymbol{y}_i = {}^I_B\mathbf{R}(\varphi)^{\mathrm T}\cdot\left({}^I\boldsymbol{m}_i - {}^I\boldsymbol{r}\right)$

$\begin{bmatrix} \hat y_{i,1} \\ \hat y_{i,2} \end{bmatrix} = {}^I_B\mathbf{R}(x_3)^{\mathrm T}\cdot\left(\begin{bmatrix} m_{i,x} \\ m_{i,y} \end{bmatrix} - \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\right)$
46
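The output equation for the landmark setup can be sketched directly; frame conventions as on the slide, numbers purely illustrative:

```python
import numpy as np

def g(x, landmarks):
    """Predicted body-frame landmark observations y_i = R(x3)^T (m_i - [x1, x2]).
    x = [r_x, r_y, phi]; landmarks is an (n, 2) array of known map positions."""
    c, s = np.cos(x[2]), np.sin(x[2])
    R = np.array([[c, -s], [s, c]])         # body-to-inertial rotation R(phi)
    return (landmarks - x[:2]) @ R          # row-vector form of R.T @ (m_i - r)

# A robot at the origin facing +90 deg sees a landmark on the inertial x-axis
# to its right, i.e. along the negative body y-axis.
x = np.array([0.0, 0.0, np.pi / 2])
y_pred = g(x, np.array([[2.0, 0.0]]))
```

Inside the EKF this prediction is compared against the actual body-frame landmark measurements in the correction step.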
EKF Sensor Fusion based Localization

Uncertainty Definition
…definition of initial state, input and output covariances

System State: $\boldsymbol{P}_0 = \begin{bmatrix} \sigma_{x_1}^2 & 0 & 0 \\ 0 & \sigma_{x_2}^2 & 0 \\ 0 & 0 & \sigma_{x_3}^2 \end{bmatrix}$

Input Measurements: $\boldsymbol{N} = \begin{bmatrix} \sigma_{u_1}^2 & 0 \\ 0 & \sigma_{u_2}^2 \end{bmatrix}$

Output Measurements: $\boldsymbol{W} = \begin{bmatrix} \sigma_{y_{1,1}}^2 & 0 & \cdots & 0 & 0 \\ 0 & \sigma_{y_{1,2}}^2 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \sigma_{y_{n,1}}^2 & 0 \\ 0 & 0 & \cdots & 0 & \sigma_{y_{n,2}}^2 \end{bmatrix}$
47
Extended Kalman Filter: Algorithm

Prediction / Propagation ($t = t_{k-1} \ldots t_k$):

$\hat{\boldsymbol{x}}_k^- \leftarrow$ (numerical) integration of $\dot{\hat{\boldsymbol{x}}} = \mathbf{f}(\hat{\boldsymbol{x}}, \tilde{\boldsymbol{u}})$ with $\hat{\boldsymbol{x}}_{\text{Start}} = \hat{\boldsymbol{x}}_{k-1}$

Linearization: $\boldsymbol{A}_{k-1} = \left.\frac{\partial \mathbf{f}(\boldsymbol{x},\boldsymbol{u})}{\partial \boldsymbol{x}}\right|_{\boldsymbol{x}=\hat{\boldsymbol{x}}(t_{k-1}),\,\boldsymbol{u}=\tilde{\boldsymbol{u}}(t_{k-1})}$, $\boldsymbol{B}_{k-1} = \left.\frac{\partial \mathbf{f}(\boldsymbol{x},\boldsymbol{u})}{\partial \boldsymbol{u}}\right|_{\boldsymbol{x}=\hat{\boldsymbol{x}}(t_{k-1}),\,\boldsymbol{u}=\tilde{\boldsymbol{u}}(t_{k-1})}$

Discretization: $\boldsymbol{F}_{k-1} = \boldsymbol{I} + T_k\,\boldsymbol{A}_{k-1} + \cdots$, $\boldsymbol{H}_{k-1} = \left(T_k\,\boldsymbol{I} + \frac{T_k^2}{2!}\,\boldsymbol{A}_{k-1} + \cdots\right)\boldsymbol{B}_{k-1}$

$\boldsymbol{P}_k^- = \boldsymbol{F}_{k-1}\cdot\boldsymbol{P}_{k-1}\cdot\boldsymbol{F}_{k-1}^{\mathrm T} + \boldsymbol{H}_{k-1}\cdot\boldsymbol{N}\cdot\boldsymbol{H}_{k-1}^{\mathrm T}$

Correction / Update ($t = t_k = t_l$):

$\hat{\boldsymbol{y}}_l = \left.\mathbf{g}(\boldsymbol{x}, \boldsymbol{m})\right|_{\boldsymbol{x}=\hat{\boldsymbol{x}}_l^-}$

Linearization: $\boldsymbol{C}_l = \left.\frac{\partial \mathbf{g}(\boldsymbol{x},\boldsymbol{m})}{\partial \boldsymbol{x}}\right|_{\boldsymbol{x}=\hat{\boldsymbol{x}}_l^-}$

$\boldsymbol{K}_l = \boldsymbol{P}_l^- \boldsymbol{C}_l^{\mathrm T}\left(\boldsymbol{C}_l \boldsymbol{P}_l^- \boldsymbol{C}_l^{\mathrm T} + \boldsymbol{W}\right)^{-1}$

$\hat{\boldsymbol{x}}_l^+ = \hat{\boldsymbol{x}}_l^- + \boldsymbol{K}_l\left(\tilde{\boldsymbol{y}}_l - \hat{\boldsymbol{y}}_l\right)$

$\boldsymbol{P}_l^+ = \left(\boldsymbol{I} - \boldsymbol{K}_l \boldsymbol{C}_l\right)\boldsymbol{P}_l^-$

($\mathbf{f}$ and $\mathbf{g}$ must be provided as symbolic expressions)
48
Extended Kalman Filter: Signal Flow

𝒙 𝑡0 Simulated/Real System 𝒎

𝒖 𝑡 𝒙 𝑡 𝒚 𝑡
𝒙 𝑡 = 𝐟 𝒙 𝑡 ,𝒖 𝑡 ∫ 𝒚 𝑡 = 𝐠 𝒙 𝑡 ,𝒎

simulated/real vehicle kinematics simulated/real sensor characteristics

+ 𝒏 𝑡 ~𝒩 𝟎, 𝑵 𝒘 𝑡 ~𝒩 𝟎, 𝑾 +
input measurement noise output measurement noise

A/D: 𝒖𝑘 = 𝒖 𝑡𝑘 = 𝒖 𝑘 ⋅ 𝑇𝑘 𝑇𝑘 ≤ 𝑇𝑙 A/D: 𝒚𝑙 = 𝐲 𝑡𝑙 = 𝒖 𝑙 ⋅ 𝑇𝑙

𝒙− , 𝑷− 𝑡 = 𝑡𝑘 = 𝑡𝑙
𝒖𝑘 , 𝑵 𝒚𝑙 , 𝑾
𝒙− −
𝑙 , 𝑷𝑙
modeled vehicle kinematics modeled sensor characteristics
numerical
𝒙 𝑡𝑘 = 𝐟 𝒙 𝑡𝑘 , 𝒖 𝑡𝑘−1 Integration 𝒚𝑙 = 𝐠 𝒙−
𝑙 ,𝒎 +
(e.g. RK45)

𝒚𝑙 , (𝑾)
𝒙+ +
𝑙 , 𝑷𝑙

+ 𝑲
𝒙+ +
𝑙 , 𝑷𝑙

𝒙0 , 𝑷0 e.g. Extended Kalman Filter 𝒎

49
