Lecture 8 Map Building 1
Yan Meng
Department of Electrical and Computer Engineering
Stevens Institute of Technology
Today's Content
• Probabilistic map-based localization
  - Markov localization
  - Kalman filter localization
• Other examples of localization systems
  - Landmark-based localization
  - Position beacon systems
• Autonomous map building
  - Stochastic map technique
  - Other mapping techniques: cyclic and dynamic environments
General robot localization problem and solution strategy
• Consider a mobile robot moving in a known environment.
• However, after a certain movement, the robot will get very uncertain about its position.
  → update using an observation of its environment.
• Compute the
  - new (a posteriori) position estimate p(k+1 | k+1) and
  - its covariance Σ_p(k+1 | k+1)
[Figure: localization cycle — predicted feature observations from the map database are matched against observations; on a match (YES), the position estimate is updated]
1. Prediction based on previous estimate and odometry
2. Observation with on-board sensors (raw sensor data or extracted features)
3. Measurement prediction based on prediction and map
4. Matching of observation and map
5. Estimation → position update (a posteriori position)
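The five steps above can be sketched numerically for a robot on a 1D line with Gaussian position estimates. This is a minimal illustration, not the lecture's algorithm; all numbers and function names are made up.

```python
# Minimal numeric sketch of the five-step localization cycle in 1D,
# using Gaussian (mean, variance) estimates. All numbers are hypothetical.

def predict(mean, var, odom, odom_var):
    # Step 1: prediction from the previous estimate and odometry
    return mean + odom, var + odom_var

def update(mean, var, z, z_var):
    # Steps 4-5: fuse the matched observation into the a posteriori estimate
    k = var / (var + z_var)                  # Kalman-style gain
    return mean + k * (z - mean), (1.0 - k) * var

mean, var = 0.0, 0.1                                     # prior estimate
mean, var = predict(mean, var, odom=1.0, odom_var=0.2)   # robot moves ~1 m
mean, var = update(mean, var, z=1.1, z_var=0.1)          # steps 2-3: feature sensed at 1.1 m
print(mean, var)  # posterior lies between prediction and measurement,
                  # with smaller variance than either
```

Note how the prediction step inflates the variance (odometry adds uncertainty) while the update step shrinks it (the observation adds information), which is exactly the alternation the five steps describe.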
Markov ↔ Kalman Filter Localization
• Markov localization
  - localization starting from any unknown position
  - recovers from ambiguous situations
  - However, updating the probability of all positions within the whole state space at any time requires a discrete representation of the space (grid). The required memory and calculation power can thus become very large if a fine grid is used.
• Kalman filter localization
  - tracks the robot and is inherently very precise and efficient
  - However, if the uncertainty of the robot becomes too large (e.g. collision with an object), the Kalman filter will fail and the position is definitively lost.
Markov Localization
• Markov localization uses an explicit, discrete representation for the probability of all positions in the state space.
• During each update, the probability for each state (element) of the entire space is updated.
Probability theory
• P(A): Probability that A is true.
  - e.g. p(r_t = l): probability that the robot r is at position l at time t
• We wish to compute the probability of each individual robot position given actions and sensor measurements.
• P(A|B): Conditional probability of A given that we know B.
  - e.g. p(r_t = l | i_t): probability that the robot is at position l given the sensor input i_t
• Product rule: p(A ∧ B) = p(A|B) p(B) = p(B|A) p(A)
• Bayes rule: p(A|B) = p(B|A) p(A) / p(B)
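Bayes rule can be exercised on a toy localization event. The numbers below (prior, sensor true/false positive rates) are invented for illustration:

```python
# Bayes rule on a toy "door sensor" event (all numbers hypothetical):
# prior p(l) that the robot is at a door, sensor model p(i | l).
p_l = 0.3                      # prior: robot is at a door
p_i_given_l = 0.9              # sensor fires when actually at a door
p_i_given_not_l = 0.2          # false positive rate elsewhere

# total probability of the sensor firing: p(i) = sum over both cases
p_i = p_i_given_l * p_l + p_i_given_not_l * (1 - p_l)

# Bayes rule: p(l | i) = p(i | l) p(l) / p(i)
p_l_given_i = p_i_given_l * p_l / p_i
print(round(p_l_given_i, 3))   # → 0.659
```

A single positive reading more than doubles the belief of being at a door (0.3 → 0.66), which is the SEE update used throughout Markov localization.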
Markov Localization
• Bayes rule:
  - Map from a belief state and a sensor input i to a refined belief state (SEE):
      p(l | i) = p(i | l) p(l) / p(i)
  - Map from a belief state and an action o to a new belief state (ACT):
      p(n_t | o_t) = Σ_{n'} p(n_t | n'_{t-i}, o_t) p(n'_{t-i})
  - Summing over all possible ways in which the robot may have reached n.
  - t-i is used instead of t-1 because the topological distance between n' and n can vary depending on the specific topological map.
Markov Localization: Case Study 1 - Topological Map (4)
• The calculation is performed by multiplying the
probability of generating perceptual event i at position n by the
probability of having failed to generate perceptual events at all nodes
between n’ and n.
Markov Localization: Case Study 1 - Topological Map (5)
• Example calculation
  - Assume that the robot has two nonzero belief states
    o p(1-2) = 1.0 ; p(2-3) = 0.2 *
    and that it is facing east with certainty
  - Robot detects an open hallway on the left and an open door on the right
  - State 2-3 will potentially progress to states 3, 3-4, and 4.
  - States 3 and 3-4 can be eliminated because the likelihood of detecting an open door there is zero.
  - The likelihood of reaching state 4 is the product of the initial likelihood p(2-3) = 0.2, (a) the likelihood of not detecting anything at node 3, and (b) the likelihood of detecting a hallway on the left and a door on the right at node 4. (For simplicity we assume that the likelihood of detecting nothing at node 3-4 is 1.0.)
  - This leads to:
    o 0.2 ⋅ [0.6 ⋅ 0.4 + 0.4 ⋅ 0.05] ⋅ 0.7 ⋅ [0.9 ⋅ 0.1] → p(4) = 0.003
    o Similar calculation for progress from 1-2 → p(2) = 0.3
* Note that the probabilities do not sum up to one. For simplicity, normalization was avoided in this example.
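The slide's arithmetic for p(4) can be checked directly. The variable names below label the factors according to the slide's description of the product; the labeling of the individual 0.7 and 0.9 ⋅ 0.1 terms is an interpretation, not stated explicitly on the slide:

```python
# Reproduce the example calculation for state 4 from the slide.
p_23 = 0.2                                    # initial belief p(2-3)
p_not_detect_node3 = 0.6 * 0.4 + 0.4 * 0.05   # fail to detect anything at node 3
p_not_detect_34 = 1.0                         # assumed 1.0 for simplicity, as on the slide
p_hall_left = 0.7                             # detect hallway on left at node 4 (my label)
p_door_right = 0.9 * 0.1                      # detect door on right at node 4 (my label)

p_4 = p_23 * p_not_detect_node3 * p_not_detect_34 * p_hall_left * p_door_right
print(round(p_4, 4))  # → 0.0033, i.e. p(4) ≈ 0.003 as on the slide
```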
Markov Localization: Case Study 2 – Grid Map (1)
• Fine fixed decomposition grid (x, y, θ), 15 cm × 15 cm × 1°
  - Action and perception update
• Action update:
  - Sum over previous possible positions and motion model
Courtesy of W. Burgard
Markov Localization: Case Study 2 – Grid Map (3)
• The 1D case
1. Start
   - No knowledge at start, thus we have a uniform probability distribution.
2. Robot perceives first pillar
   - Seeing only one pillar, the probability of being at pillar 1, 2, or 3 is equal.
3. Robot moves
   - The action model enables estimation of the new probability distribution based on the previous one and the motion.
4. Robot perceives second pillar
   - Based on all prior knowledge, the probability of being at pillar 2 becomes dominant.
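The four steps of the 1D case can be run as a tiny grid-based Markov localizer. The corridor layout, pillar positions, and sensor probabilities below are invented for illustration (they do not match the slide's figure):

```python
# Minimal 1D Markov localization over a 10-cell corridor with pillars
# at cells 1, 4, and 9 (hypothetical layout and sensor model).
n = 10
pillars = {1, 4, 9}
belief = [1.0 / n] * n                # 1. start: uniform distribution

def sense(belief, sees_pillar, hit=0.8, miss=0.2):
    # perception update: weight each cell by the measurement likelihood,
    # then normalize so the belief sums to one
    new = [b * (hit if (i in pillars) == sees_pillar else miss)
           for i, b in enumerate(belief)]
    s = sum(new)
    return [b / s for b in new]

def move(belief, step):
    # action update: shift the belief by the commanded motion
    # (cyclic corridor for simplicity; a real model would also blur)
    return [belief[(i - step) % n] for i in range(n)]

belief = sense(belief, True)          # 2. robot perceives first pillar
belief = move(belief, 3)              # 3. robot moves 3 cells
belief = sense(belief, True)          # 4. robot perceives second pillar
print(max(range(n), key=lambda i: belief[i]))  # → 4: belief peaks at the
                                               # pillar consistent with both
                                               # observations and the motion
```

After the first observation the belief is equally split over all three pillars; only the combination of motion and the second observation singles out one cell, which is exactly the point of step 4.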
Markov Localization: Case Study 2 – Grid Map (4)
• Example 1: Office Building
[Figure: belief distributions at positions 3, 4, and 5 in the office building]
Markov Localization: Case Study 2 – Grid Map (5)
• Example 2: Museum (Courtesy of W. Burgard)
  - Laser scan 1
Markov Localization: Case Study 2 – Grid Map (6)
• Example 2: Museum
  - Laser scan 2
Markov Localization: Case Study 2 – Grid Map (7)
• Example 2: Museum
  - Laser scan 3
Markov Localization: Case Study 2 – Grid Map (8)
• Example 2: Museum
  - Laser scan 13
Markov Localization: Case Study 2 – Grid Map (9)
• Example 2: Museum
  - Laser scan 21
Markov Localization: Case Study 2 – Grid Map (10)
• Fine fixed decomposition grids result in a huge state space
  - Very extensive processing power needed
  - Large memory requirement
• Reducing complexity
  - Various approaches have been proposed for reducing complexity
  - The main goal is to reduce the number of states that are updated in each step
• Randomized Sampling / Particle Filter / Monte Carlo Algorithm
  - Approximate the belief state by representing only a 'representative' subset of all states (possible locations)
  - E.g. update only 10% of all possible locations
  - The sampling process is typically weighted, e.g. put more samples around the local peaks in the probability density function
  - However, you have to ensure some less likely locations are still tracked; otherwise the robot might get lost
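The weighted-sampling idea can be sketched in a few lines. This is not any particular particle filter from the lecture; the sensor model, corridor length, and the 90/10 split are hypothetical choices for illustration:

```python
import math
import random

# Sketch of weighted sampling: keep a subset of candidate locations
# ("particles"), weight them by measurement likelihood, and resample so
# samples concentrate around the peaks of the density, while a small
# random fraction keeps less likely locations tracked.
random.seed(0)

# 200 candidate positions along a 10 m corridor
particles = [random.uniform(0.0, 10.0) for _ in range(200)]

def likelihood(x, z=6.0, sigma=0.5):
    # hypothetical sensor suggesting the robot is near x = 6 m
    return math.exp(-0.5 * ((x - z) / sigma) ** 2)

weights = [likelihood(p) for p in particles]

# resample 90% according to the weights (concentrates near the peak),
# inject 10% fresh random particles so the robot cannot get lost
resampled = random.choices(particles, weights=weights, k=180)
resampled += [random.uniform(0.0, 10.0) for _ in range(20)]

estimate = sum(resampled) / len(resampled)
print(round(estimate, 1))  # close to the likely position x = 6
```

The deliberate 10% of unweighted particles is the code-level counterpart of the last bullet above: without them, a wrong but confident belief could never recover.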