Lecture 8 Map Building 1

The document discusses probabilistic localization techniques for mobile robots. It covers Markov localization which uses an explicit grid-based representation to track the probability of a robot's position. Kalman filter localization precisely tracks the robot's position but can fail if uncertainty grows too large. The document provides examples of using topological maps with Markov localization and calculating position updates based on actions and sensor measurements.


CpE 521

An Introduction to Autonomous Mobile Robots

Lecture 8: Localization and Map Building


Part 2

Yan Meng
Department of Electrical and Computer Engineering
Stevens Institute of Technology
Today’s Content
• Probabilistic map-based localization
¾ Markov localization
¾ Kalman filter localization
• Other examples of localization systems
¾ Landmark-based localization
¾ Position beacon systems
• Autonomous map building
¾ Stochastic map technique
¾ Other mapping techniques: cyclic and dynamic environments
General robot localization problem and solution strategy
• Consider a mobile robot moving in a known environment.

• As it starts to move, say from a precisely known location, it might
keep track of its location using odometry.

• However, after a certain movement, the robot will become very uncertain
about its position.
→ update using an observation of its environment.

• Observation leads to an estimate of the robot’s position which can be
fused with the odometric estimate to get the best possible update of
the robot’s actual position.
Two-step robot position update
• Action update
¾ action model ACT: s′_t = Act(o_t, s_{t-1})
with o_t: encoder measurement, s_{t-1}: prior belief state
¾ increases uncertainty
• Perception update
¾ perception model SEE: s_t = See(i_t, s′_t)
with i_t: exteroceptive sensor inputs, s′_t: updated belief state
¾ decreases uncertainty
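The opposite effects of the two models can be sketched for a 1D robot position with Gaussian beliefs; the function names `act`/`see` and all numeric values below are illustrative, not from the lecture:

```python
# Sketch of the two-step position update for a 1D robot, assuming
# Gaussian beliefs (mean, variance). Hypothetical numbers throughout.

def act(mean, var, encoder_delta, motion_noise_var):
    """Action update: shift the belief by the odometry reading.
    Variance grows, so uncertainty increases."""
    return mean + encoder_delta, var + motion_noise_var

def see(mean, var, measurement, sensor_var):
    """Perception update: fuse an exteroceptive measurement.
    Variance shrinks, so uncertainty decreases."""
    k = var / (var + sensor_var)  # fusion weight
    return mean + k * (measurement - mean), (1 - k) * var

m, v = 0.0, 0.01                # prior belief s_{t-1}
m, v = act(m, v, 1.0, 0.05)     # o_t: moved ~1.0 m -> variance grows
assert v > 0.01
m, v = see(m, v, 1.1, 0.02)     # i_t: sensor says 1.1 m -> variance shrinks
assert v < 0.06
```

The assertions mirror the slide’s two claims: ACT can only add uncertainty, while SEE removes some of it by fusing independent information.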
Probabilistic, Map-Based Localization: Problem Statement
• Given
¾ the position estimate p(k|k)
¾ its covariance Σ_p(k|k) for time k,
¾ the current control input u(k)
¾ the current set of observations Z(k+1) and
¾ the map M(k)

• Compute the
¾ new (a posteriori) position estimate p(k+1|k+1) and
¾ its covariance Σ_p(k+1|k+1)

• Such a procedure usually involves five steps:


The Five Steps for Map-Based Localization

[Block diagram: encoder measurements feed the position prediction
(odometry); the map database yields predicted feature observations;
on-board sensors yield observations as raw sensor data or extracted
features; matching of predictions and observations feeds the
estimation (fusion) that produces the posterior position.]

1. Prediction of position based on previous estimate and odometry
2. Observation with on-board sensors
3. Measurement prediction based on prediction and map
4. Matching of observation and map
5. Estimation -> position update (a posteriori position)
Markov ⇔ Kalman Filter Localization

• Markov localization
¾ localization starting from any unknown position
¾ recovers from ambiguous situations.
¾ However, updating the probability of all positions within the whole
state space at any time requires a discrete representation of the
space (grid). The required memory and computational power can thus
become very large if a fine grid is used.

• Kalman filter localization
¾ tracks the robot and is inherently very precise and efficient.
¾ However, if the uncertainty of the robot becomes too large (e.g.
collision with an object) the Kalman filter will fail and the
position is definitively lost.
Markov Localization
• Markov localization uses an explicit, discrete representation for the
probability of all positions in the state space.

• This is usually done by representing the environment by a grid or a
topological graph with a finite number of possible states (positions).

• During each update, the probability for each state (element) of the
entire space is updated.
Probability theory
• P(A): Probability that A is true.
¾ e.g. p(r_t = l): probability that the robot r is at position l at time t
• We wish to compute the probability of each individual robot position
given actions and sensor measurements.
• P(A|B): Conditional probability of A given that we know B.
¾ e.g. p(r_t = l | i_t): probability that the robot is at position l given
the sensor input i_t.
• Product rule: p(A ∧ B) = p(A|B) p(B) = p(B|A) p(A)

• Bayes rule: p(A|B) = p(B|A) p(A) / p(B)
Markov Localization
• Bayes rule: p(l|i) = p(i|l) p(l) / p(i)

¾ Map from a belief state and a sensor input to a refined belief state (SEE):

¾ p(l): belief state before the perceptual update process

¾ p(i|l): probability of getting measurement i when being at position l
o consult the robot’s map, identify the probability of a certain sensor reading for each
possible position in the map
¾ p(i): normalization factor so that the sum over all l equals 1.
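The perceptual update (SEE) can be sketched on a discrete set of positions, with p(i) recovered as the normalizer; the prior and likelihood values below are invented for illustration:

```python
# Minimal sketch of the perception update over discrete positions l:
# posterior(l) = p(i|l) * p(l) / p(i), where p(i) normalizes the belief.

def perception_update(belief, likelihood):
    """belief: p(l) before the update; likelihood: p(i|l) per position."""
    posterior = [p_l * p_i_l for p_l, p_i_l in zip(belief, likelihood)]
    p_i = sum(posterior)                 # normalization factor p(i)
    return [p / p_i for p in posterior]

belief = [0.25, 0.25, 0.25, 0.25]        # uniform prior over 4 positions
likelihood = [0.1, 0.6, 0.2, 0.1]        # p(i|l) from map + sensor model
posterior = perception_update(belief, likelihood)
assert abs(sum(posterior) - 1.0) < 1e-9  # normalized, as the slide requires
assert posterior[1] == max(posterior)    # belief concentrates where p(i|l) is high
```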
Markov Localization
• Bayes rule (action update): p(l_t | o_t) = Σ_{l'} p(l_t | l'_{t-1}, o_t) p(l'_{t-1})

¾ Map from a belief state and an action to a new belief state (ACT):

¾ Summing over all possible ways in which the robot may have reached l.

• Markov assumption: the update only depends on the previous state and its
most recent actions and perception.
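The summation over previous positions can be sketched on a small discrete grid; the 3-cell corridor and its motion model below are hypothetical:

```python
# Sketch of the action update (ACT): the new belief at l is a sum over
# all previous positions l', weighted by the motion model p(l | l', o_t).

def action_update(belief, motion_model):
    """motion_model[lp][l] = p(l | l', o_t) on a discrete grid."""
    n = len(belief)
    return [sum(belief[lp] * motion_model[lp][l] for lp in range(n))
            for l in range(n)]

# Hypothetical 3-cell corridor: the commanded motion is "one cell right",
# with a 20% chance of staying put (odometry error).
motion_model = [[0.2, 0.8, 0.0],
                [0.0, 0.2, 0.8],
                [0.0, 0.0, 1.0]]   # last cell is a wall: robot stays
belief = [1.0, 0.0, 0.0]
belief = action_update(belief, motion_model)
assert belief == [0.2, 0.8, 0.0]   # uncertainty has spread out
```

Note how the action update spreads probability mass: this is the "increases uncertainty" half of the two-step update.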
Markov Localization: Case Study 1 - Topological Map (1)
• 1994 AAAI National Robot Contest: winner Dervish Robot
• Discrete, topological representation
• Topological Localization with Sonar
Markov Localization: Case Study 1 - Topological Map (2)
• Topological map of an office-type environment
Markov Localization: Case Study 1 - Topological Map (3)
• Update of belief state for position n given the percept-pair i:

p(n|i) = p(i|n) p(n) / p(i)

¾ p(n|i): new likelihood of being in position n

¾ p(n): current belief state
¾ p(i|n): probability of seeing i in n (see table)
• No action update!
¾ However, the robot is moving and therefore we can apply a combination
of action and perception update

¾ t-i is used instead of t-1 because the topological distance between n' and
n can vary depending on the specific topological map
Markov Localization: Case Study 1 - Topological Map (4)
• The calculation is performed by multiplying the
probability of generating perceptual event i at position n by the
probability of having failed to generate perceptual events at all nodes
between n’ and n.
Markov Localization: Case Study 1 - Topological Map (5)
• Example calculation
¾ Assume that the robot has two nonzero belief states
o p(1-2) = 1.0 ; p(2-3) = 0.2 *
and that it is facing east with certainty
¾ Robot detects an open hallway on the left and an open door on the right
¾ State 2-3 will progress potentially to 3 and 3-4 to 4.
¾ States 3 and 3-4 can be eliminated because the likelihood of detecting an open door
is zero.
¾ The likelihood of reaching state 4 is the product of the initial likelihood p(2-3) = 0.2,
(a) the likelihood of not detecting anything at node 3; and (b) the likelihood of
detecting a hallway on the left and a door on the right at node 4. (For simplicity we
assume that the likelihood of detecting nothing at node 3-4 is 1.0.)
¾ This leads to:
o 0.2 ⋅ [0.6 ⋅ 0.4 + 0.4 ⋅ 0.05] ⋅ 0.7 ⋅ [0.9 ⋅ 0.1] → p(4) = 0.003.
o Similar calculation for progress from 1-2 → p(2) = 0.3.

* Note that the probabilities do not sum to one. For simplicity, normalization was avoided in this example.
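The arithmetic for state 4 can be checked directly, using only the numbers given on the slide:

```python
# Dervish example, state 4: initial likelihood p(2-3) = 0.2 multiplied by
# (a) the likelihood of not detecting anything at node 3,
#     0.6*0.4 + 0.4*0.05, and
# (b) the likelihood of detecting a hallway on the left (0.7 weighting of
#     0.9... per the certainty factors) and a door on the right at node 4.
p_4 = 0.2 * (0.6 * 0.4 + 0.4 * 0.05) * 0.7 * (0.9 * 0.1)
assert round(p_4, 3) == 0.003   # matches the slide's p(4) = 0.003
```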
Markov Localization: Case Study 2 – Grid Map (1)
• Fine fixed decomposition grid (x, y, θ), 15 cm x 15 cm x 1°
¾ Action and perception update
• Action update:
¾ Sum over previous possible positions
and motion model

¾ Discrete version of eq. 5.22

• Perception update:
¾ Given perception i, what is the
probability of being at location l

Courtesy of W. Burgard
Markov Localization: Case Study 2 – Grid Map (2)
• The critical challenge is the calculation of p(i|l)
¾ The number of possible sensor readings and geometric contexts is extremely large
¾ p(i|l) is computed using a model of the robot’s sensor behavior, its position l, and
the local environment metric map around l.
¾ Assumptions
o Measurement error can be described by a distribution with a mean
o Non-zero chance for any measurement

Courtesy of W. Burgard
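One common way to satisfy both assumptions (a sketch, not the specific model used in the lecture) is a Gaussian around the expected range mixed with a small uniform floor; all constants here are illustrative:

```python
# Sketch of a per-beam sensor model for p(i|l): a Gaussian around the
# expected range (computed from the map at position l) mixed with a
# small uniform component so every measurement has non-zero probability.
import math

def beam_likelihood(measured, expected, sigma=0.2, z_max=10.0, w_rand=0.05):
    # Gaussian term: measurement error with a mean at the expected range
    gauss = math.exp(-0.5 * ((measured - expected) / sigma) ** 2) \
            / (sigma * math.sqrt(2 * math.pi))
    # uniform term: non-zero chance for any reading in [0, z_max]
    return (1 - w_rand) * gauss + w_rand / z_max

assert beam_likelihood(2.0, 2.0) > beam_likelihood(3.0, 2.0)
assert beam_likelihood(9.0, 2.0) > 0.0   # the floor guarantees this
```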
Markov Localization: Case Study 2 – Grid Map (3)
• The 1D case
1. Start
¾ No knowledge at start, thus we have
a uniform probability distribution.
2. Robot perceives first pillar
¾ Seeing only one pillar, the probability of
being at pillar 1, 2 or 3 is equal.

3. Robot moves
¾ The action model enables us to estimate the
new probability distribution based
on the previous one and the motion.
4. Robot perceives second pillar
¾ Based on all prior knowledge, the
probability of being at pillar 2 becomes
dominant
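The four steps above can be sketched as a complete perception–action loop on a 5-cell grid; the cell layout, sensor probabilities, and perfect one-cell motion are invented for illustration:

```python
# The 1D case as a complete Markov localization loop on a discrete grid.

def normalize(belief):
    s = sum(belief)
    return [b / s for b in belief]

def see(belief, world, percept, p_hit=0.8, p_miss=0.2):
    # perception update: weight each cell by how well the map matches
    return normalize([b * (p_hit if w == percept else p_miss)
                      for b, w in zip(belief, world)])

def act(belief, shift):
    # action update: exact cyclic shift (perfect motion, for simplicity)
    n = len(belief)
    return [belief[(i - shift) % n] for i in range(n)]

world = ['pillar', 'wall', 'pillar', 'pillar', 'wall']  # three pillars
belief = [0.2] * 5                     # 1. start: uniform distribution
belief = see(belief, world, 'pillar')  # 2. robot perceives first pillar
belief = act(belief, 1)                # 3. robot moves one cell right
belief = see(belief, world, 'pillar')  # 4. robot perceives second pillar
# only the cell consistent with "pillar, move one, pillar" stays dominant
assert belief.index(max(belief)) == 3
assert abs(sum(belief) - 1.0) < 1e-9
```

After step 2 the belief is spread equally over the three pillar cells, exactly as the slide describes; the move plus second perception is what breaks the ambiguity.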
Markov Localization: Case Study 2 – Grid Map (4)
• Example 1: Office Building

Courtesy of W. Burgard

[Figure: belief state over the office map at positions 3, 4, and 5]
Markov Localization: Case Study 2 – Grid Map (5)
• Example 2: Museum (Courtesy of W. Burgard)

¾ Laser scan 1
Markov Localization: Case Study 2 – Grid Map (6)
• Example 2: Museum (Courtesy of W. Burgard)

¾ Laser scan 2
Markov Localization: Case Study 2 – Grid Map (7)
• Example 2: Museum (Courtesy of W. Burgard)

¾ Laser scan 3
Markov Localization: Case Study 2 – Grid Map (8)
• Example 2: Museum (Courtesy of W. Burgard)

¾ Laser scan 13
Markov Localization: Case Study 2 – Grid Map (9)
• Example 2: Museum (Courtesy of W. Burgard)

¾ Laser scan 21
Markov Localization: Case Study 2 – Grid Map (10)
• Fine fixed decomposition grids result in a huge state space
¾ Very extensive processing power needed
¾ Large memory requirement
• Reducing complexity
¾ Various approaches have been proposed for reducing complexity
¾ The main goal is to reduce the number of states that are updated in each
step
• Randomized Sampling / Particle Filter / Monte Carlo Algorithm
¾ Approximate the belief state by representing only a ‘representative’ subset
of all states (possible locations)
¾ E.g. update only 10% of all possible locations
¾ The sampling process is typically weighted, e.g. put more samples
around the local peaks in the probability density function
¾ However, you have to ensure some less likely locations are still tracked,
otherwise the robot might get lost
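The weighted sampling idea can be sketched with a simple resampling step; the particle positions and weights below are illustrative, not taken from any particular implementation:

```python
# Sketch of weighted resampling: represent the belief by a subset of
# weighted samples and resample so more particles sit near the peaks of
# the probability density, while unlikely locations keep a small chance
# of surviving (so the robot does not get lost).
import random

def resample(particles, weights, n):
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(particles, weights=probs, k=n)

random.seed(0)
particles = [0.0, 1.0, 2.0, 3.0]     # candidate robot positions
weights   = [0.05, 0.1, 0.8, 0.05]   # likelihoods after a perception update
new = resample(particles, weights, 1000)
# most survivors cluster at the peak, but low-weight ones can persist
assert new.count(2.0) > 600
```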
