Full Text 01
Full Text 01
Implementation of SLAM
algorithms in a small-scale
vehicle using model-based
development
iii
Acknowledgments
First and foremost, we would like to thank Simon Tegelid and Magnus Carlsson
at ÅF, for endless support whenever needed. Additional accolades go out to our
supervisor Peter Johansson at the Department of Electrical Engineering who was
very helpful with technical support. We would also like to extend our gratitude
to our examiner Mattias Krysander, also at the aforementioned department, who
provided an extensive amount of good insight about the subject and also lots of
useful hints regarding academic writing in general.
v
Contents
Notation xi
1 Introduction 1
1.1 The Company . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Thesis Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.5 Risk analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5.1 General risks . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5.2 Kinematic vehicle model . . . . . . . . . . . . . . . . . . . . 3
1.6 Equipment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.7 Thesis methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.8 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
vii
viii Contents
4 Vehicle Dynamics 29
4.1 Longitudinal dynamics . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.1.1 Gear ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.1.2 Wheel torque . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.1.3 Forward velocity . . . . . . . . . . . . . . . . . . . . . . . . 30
4.2 Lateral dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.2.1 Ackermann angle . . . . . . . . . . . . . . . . . . . . . . . . 31
4.2.2 Heading angle . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.3 Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5 Sensor theory 33
5.1 Gyroscope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.2 Wheel encoders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.3 Laser range finder . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
6.6.1 /sensor_data . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
6.6.2 /scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
6.6.3 /car_position . . . . . . . . . . . . . . . . . . . . . . . . . . 55
6.6.4 /clock . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7 ROS Implementation 57
7.1 CAN Reader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
7.2 Odometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
7.3 SLAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
7.4 rViz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
7.5 Position . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
7.6 simulinkPosition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
7.7 ROS Control App . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
7.8 Joystick Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
7.9 CAN Writer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
7.10 CPU Recoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
8 Simulations 63
8.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
8.1.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
8.1.2 Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
8.1.3 Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
8.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
8.2.1 Mapping accuracy . . . . . . . . . . . . . . . . . . . . . . . . 69
8.2.2 Path accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . 72
8.2.3 Average position error . . . . . . . . . . . . . . . . . . . . . 76
8.2.4 Average CPU load . . . . . . . . . . . . . . . . . . . . . . . . 77
8.3 Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
8.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
9 Real-world experiment 81
9.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
9.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
9.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
10 Discussion 87
10.1 Achievements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
10.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
10.3 Complications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
10.4 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Bibliography 91
Notation
Abbreviations
Abbreviation Description
ROS Robot Operating System
LI DAR Light detection and ranging
SLAM Simultaneous localization and mapping
I MU Inertial measurement unit
GU I Graphical user interface
AI Artificial intelligence
P DF Probability Density Function
MEMS micro-electro-mechanical systems
LU T Lookup table
CP U Central Processing Unit
PC Personal Computer
CAN Controller Area Network
xi
Introduction
1
1.1 The Company
1.2 Background
In recent times, many companies in the automotive industry have spent a lot of
time and funds on researching the possibility of producing autonomous cars. The
research has moved forward to the point that there are actually semi-autonomous
cars available in the market today, and it is widely believed that at some point in
time, almost all vehicles will be autonomous. In order to achieve this, there is
obviously a need for advanced sensors and functions. One of the major issues
is the fact that the vehicle needs to be aware of its current position within an
unknown environment. This is known as the Simultaneous Localization And
Mapping problem, commonly abbreviated as SLAM.
As ÅF is a company that works a lot with autonomous driving, a competence
project concerning this has been started. The goal with this project is to assemble
a small-scale vehicle with embedded sensors and microprocessors, and imple-
ment functions which will enable this vehicle to drive autonomously. As imple-
menting a SLAM algorithm is a vital part of achieving this, this thesis will focus
on finding the best SLAM algorithm for this purpose and implement it in the
vehicle.
1
2 1 Introduction
In order to reach the first priority goals, the following needs to be investigated:
In order to reach the second priority goals, the following needs to be investigated:
• A theoretical investigation comparing different pathfinding algorithms for
active SLAM, as well as simulating different algorithms in Simulink.
• Theoretical studies and simulations where dynamic objects are considered.
1.4 Limitations
The following limitations are set for this project:
• Ethical aspects of any kind will not be discussed in this thesis.
• There are no requirements of the environment of which the vehicle will
operate in.
1.5 Risk analysis 3
1.6 Equipment
Fig. 1.1 displays an overview of the system architecture used in this thesis.
The hardware equipment and software components are further explained be-
low:
• ROS
• Lubuntu 14.04
• STM32 board
– A PCB containing a stm32f4 microcontroller, CAN transceivers, as
well as being connected to a RPLIDAR, magnet encoders and an IMU.
• Nitrogen6_MAX
– A motherboard containing CAN transceivers, Ethernet, Wi-Fi, and an
imx6 quad-core processor.
• Magnet encoders
– Hall effect sensors attached to the wheel axis of the physical vehicle,
measuring each time a magnet mounted inside the wheel passes by.
• IMU 9250
– Inertial measurement unit, connected to a magnetometer, a gyro and
an accelerometer .
• RPLIDAR
– A 2D laser scanner capable of producing scans in 360 different angles.
where x(t) is the system’s state vector, and u(t) and y(t) is the input and the output
of the system, respectively. The derivative of the state vector ẋ(t) is a function of
the current state x(t) and the current input u(t). Similarly, the output of the
system y(t) is a function of the current state and the current input. For discrete-
time systems, state-space equations [20, p. 19] are defined as
7
8 2 Recursive Bayesian Estimation
For linear continuous-time systems, the state space model can be formulated as
For linear discrete-time systems, the state space model can be formulated as
The pose of a robot is due to its current state xt . When the system enters a new
state xt+1 , which is due to some control command of the robot ut , the new state
will produce a measurement zt+1 . The relationship between controls, states, and
measurements is shown in Fig. 2.1 below. A Bayes Filter is an algorithm which
calculates the probability density function (PDF) for the state vector xt . This is
done in two steps, the prediction step bel(xt ) and the correction step bel(xt ) as
Z
bel(xt ) = p(xt | ut , xt−1 ) bel(xt−1 ) dxt−1 (2.5a)
where in the prediction step (given by Eq. 2.5a), a new state xt is predicted us-
ing only the control command ut and the previous state xt−1 . This prediction
is altered in the correction step by using sensor data in order to get a better
estimation[28, p.27]. A scaling factor η is introduced to scale the resulting PDF
in order to make the integral sum 1 [28, p.17]. Pseudo-code for the algorithm is
described in Algorithm 1.
where Ct is a constant and the noise source is Gaussian. The current mea-
surement zt depends on the current state xt and the noise source δt .
• The last prerequisite is that the start belief bel(x0 ) has to be Gaussian dis-
tributed.
The Kalman filter algorithm itself, shown below in Algorithm 2 expresses the
belief of the current state xt , at a specific time t. The bel(xt ), is expressed by
the mean µt and the covariance Σt , which is also the output of the algorithm.[28,
p.42]
Figure 2.2: Linearization of the nonlinear function g(x). Dashed line indi-
cates the Taylor approximation of g(x)
the fact that it uses nonlinear functions. These functions need to go through a
linearization process. This process linearizes the functions by for example using
a first order Taylor expansion around a linearization point, this can be seen in
Fig. 2.2. The resulting Gaussian distribution can then be calculated in closed
form. This enables the extraction of interesting characteristics of the distribution
such as the mean µt and the covariance Σt . Given the mean and the covariance,
the belief bel(xt ) can be represented [28, p.56-58]. The EKF algorithm is shown
below in Algorithm 3 [28, p.59].
2.5.1 Sampling
During sampling (lines 3-7 in Algorithm 4), new values are generated for each
particle in the filter. These values are only temporary, and the particles them-
selves will not be updated until the resampling stage. This, for each particle, new
[m]
state is due to the control ut and the previous state of the particle xt . After
[m]
this, the importance factor wt is calculated. The importance factor is due to the
measurement zt and is used as a likelihood measure of the particle used in the
forthcoming resampling of the particle filter. [28, p. 99]
2.5.2 Resampling
During resampling (lines 8-10 in Algorithm 4) the actual particles in the filter are
updated. This is done by randomly selecting a particle generated in the sampling
stage, where the probability of selection for each particle is due to its importance
2.5 Particle filter 13
factor. The resampling is done with replacement, so each particle can be drawn
several times. [28, p.99]
This chapter describes the basic features of a generic SLAM algorithm, as well
as introducing a few SLAM algorithms. The key features of these algorithms are
then summarized in Section 3.5, where each algorithm is a also evaluated in terms
of being appropriate for implementation in the physical vehicle of this thesis.
15
16 3 Simultaneous localization and mapping
where the current pose xt depends on the possible nonlinear function g(ut , xt−1 ),
with ut being the control input, xt−1 the previous pose and δt being a random
Gaussian variable with zero mean.[17, p. 19]
zt = h(xt , m) + t (3.2)
where the current measurement zt , depends on the possible nonlinear func-
tion h(xt , m), where xt denotes the current pose, m represents the map, t a ran-
dom Gaussian variable with zero mean.[17, p. 19]
keeps track of where the robot is likely positioned within a map, as well as keep-
ing track of specific landmarks observed.[4]
3.2.1 Algorithm
When the robot is turned on, the sensors on the robot such as the wheel encoders
and the gyroscope will gather information of the positioning of the robot. More-
over, landmarks from the environment are also extracted based on new observa-
tions from the LIDAR which is mounted on the robot. These new observations
are associated with previous observations and updated in the EKF algorithm.
However, if an observation of a landmark cannot be associated to a previous
observation the observation itself is presented to the EKF algorithm as a new
observation.[4, p. 29] This process is illustrated in Fig. 3.1.
18 3 Simultaneous localization and mapping
First and foremost, the robot localizes itself within the map being created
using observed landmarks and information from the sensors. The landmarks
should preferably be observable from multiple angles, as well as not being sepa-
rated from other landmarks. These prerequisites gives the EKF algorithm a good
possibility to distinguish between landmarks at a later time. Moreover, the land-
marks being used should also be stationary. [4, p. 18]
To describe the EKF SLAM process in short terms, it consists of two stages.
• Prediction step
– Predict the current state expressed by the predicted mean and the pre-
dicted covariance. These are being calculated based on control input,
matrices, the previous covariance and the previous mean. [27, p. 20-
28]
• Correction step
3.2 EKF SLAM 19
– First and foremost, the idea behind the correction step is the data as-
sociation. The main idea behind associating data in a SLAM context is
to distinguish between an earlier observed landmark and a newly ob-
served one. [27, p.15-16,30] When a new measurement has occurred
the procedure is as following:
The computational cost of the EKF SLAM algorithm is quite costly in comparison
to other SLAM algorithms. When new landmarks are detected, they are added
to the state vector of the filter, and the map of with N identified landmarks, will
increase linearily. [23, p.7765] In terms of numbers, the use of memory is O(N 2 ).
The computation cost of computing one step of the algorithm is approximately
O(N 2 ) [27, p.57] [5, p. 792], the complete cost for computing the whole algorithm
is O(N 3 ), heavily dependant on the size of the map[5, p.792].
A bottleneck when using the EKF algorithm is the computational cost which in-
creases when new landmarks are observed. This leads to a prerequisite of that the
environment should not contain too many landmarks. However, since the num-
ber of landmarks in the environment are few, the localization within the map
and the data association is getting increasingly more difficult.[28, p. 330-331]
Moreover, the data association is made by computing the maximum likelihood
and in case of a faulty association, the future results which are based on previous
associations by the algorithm will also be faulty.[28, p.331-332]
20 3 Simultaneous localization and mapping
3.3 FastSLAM
Another way of solving the SLAM problem is using the FastSLAM algorithm,
which makes use of the Rao-Blackwellized particle filter. FastSLAM takes ad-
vantage of something that most other SLAM algorithms do not, namely the fact
that each observation only concerns a small amount of the state variables. The
pose of the robot is probabilistically dependent on its previous pose, whereas the
landmark locations are probabilistically dependent on the position of the robot.
[19, p. 27] This is not taken into account by the EKF SLAM algorithm, for exam-
ple, which with each update has to recalculate its covariance matrix, resulting in
poor scaling in large maps. [19, p. 29] This is not a problem in FastSLAM, as it
factors out the calculations to simpler subproblems, as shown by
N
Y
p(s t , θ | z t , u t , nt ) = p(s t | z t , u t , nt ) p(θn | s t , z t , u t , nt ) (3.3)
n=1
which factors out the robot path s t and each of the N landmarks θn . One major
difference here between FastSLAM and, for example EKF SLAM, is that the entire
path s t is calculated, as opposed to just the current pose xt . This means that
FastSLAM is a full SLAM algorithm and EKF SLAM is an online SLAM algorithm,
as described in Section 3.1.3.
[m]
St = (s t[m] , µ1,t , Σ1,t , ..., µN ,t , ΣN ,t ) (3.4)
Each particle contains N + 1 variables, one robot path s t[m] , and N landmark
posteriors, where each landmark is represented by an EKF with a mean µn,t and
a variance Σn,t . During one iteration of the algorithm, the robot path for every
particle s t[m] is updated once, as well as each landmark filter µn,t , Σn,t .
3.3.2 Algorithm
Each iteration of the algorithm can be summarized in four steps. These are pre-
sented below.
[m]
1. For each of the M particles, sample a new robot path st from p(st | st−1 , ut )
4. Resample the particles, with particles with a higher importance factor hav-
ing a higher draw probability.
3.3 FastSLAM 21
In step 1, a new robot path is sampled for each particle. This path is drawn
from the motion model, with the robot control as input. After this, the posteriors
of the landmarks are updated due to the new measurement. After this, each
particle is given an importance factor, which is a measure on how realistic the
variables in the particle are. Later, the particles are resampled in order to get rid
of the ones which do not have high probability.
[19, p. 10]
3.4 GraphSLAM
GraphSLAM is another approach of solving the SLAM problem. The idea be-
hind GraphSLAM is to address the SLAM problem by using a graph. The graph
contains nodes which represent the poses of the robot x0 , ..., xt as well as land-
marks in the map which are denoted as m0 , ..., mt . However, this is not enough to
address the SLAM problem, as there are also constraints between the positions
xt , xt−1 , xt−2 , ..., xt−n of the robot and the landmarks m0 , ..., mt . [26, p.361] These
constraints represents the distance between adjacent locations such as xt−1 , xt as
well as the distance between the locations to the landmarks m0 , ...mt . These con-
straints can be constructed due to information from the odometry source ut .[26,
p. 361-362] The graph mentioned earlier can be converted to a sparse matrix.
Which in contrast to dense matrices can be more efficiently used. [17, p. 1] Fig.
3.3 illustrates how the landmarks m0 , ..., mt , the locations x0 , ..., xt and the con-
straints are linked together.
Figure 3.3: Robot poses seen as circles, landmarks of the map seen as
squares, dashed lines representing the motion of the robot based on control
commands given to the robot as well as solid lines representing the distance
to the landmarks given by measurements.
3.4.1 Algorithm
The PDF for the full SLAM problem is denoted p(x0:t , m|z1:t , u1:t ). For simplicity,
all the poses x0:t and the landmarks m can be described by a state space vector y
24 3 Simultaneous localization and mapping
in Fig. 3.4.
Then in order to achieve the most likely pose of the robot within the map and
the positions of the landmarks in the map, an optimization algorithm has to be
computed. [17, p. 11]
Moreover, the PDF p(y|z1:t , u1:t ) contains the possibly nonlinear functions
g(ut , xt−1 ) and h(xt , mi ), whereas mi represents the i-th landmark observed at
time t. [17, p.4,11] The optimization algorithm expects linear functions. Thus, a
necessary step is to linearize these. This could be done by for example executing
a Taylor linearization algorithm. [17, p.11]
As specified earlier, the GraphSLAM approach of solving the SLAM problem
is by introducing a graph. This graph can be converted to a sparse matrix. Basi-
cally, this sparse matrix contains all the information acquired by the robot while
moving in an environment. For example, the robots movement through the map
and landmarks observed at different locations. Thus, as the robot moves through
the environment the sparse matrix will increase in size and contain even more
information.[17, p. 14]
In the following Figs. 3.5,3.6,3.7 below, it is illustrated of how the robot poses
illustrated as blue squares and landmarks illustrated as red squares observed are
added to the sparse matrix. Moreover, the blue squares are also adjacent to each
other due to their relationship as the new pose is dependant on the old pose. The
diagonal grey squares shows that the landmarks of each row m1, m2, m3, m4 and
its corresponding column m1, m2, m3, m4 is the one and same landmark.
as two separate landmarks. [28, p. 363] In comparison with the EKF SLAM
algorithm where the future data associations can be faulty due to earlier data
associations, the GraphSLAM algorithm can reexamine older data associations,
thus reducing the risk of introducing errors for future data associations. [28, p.
339,363] The GraphSLAM algorithm solves the full SLAM problem [17, p. 15],
in comparison to the EKF slam algorithm which only solves the online SLAM
problem.
• EKF
– Robustness
* Given what has been stated in 3.2.3, a faulty data association will
result in erroneous future results, thus the robustness of the EKF
algorithm is low. [28, p.331-332]
– Efficiency
* In Section 3.2.2 the total computational cost of executing the al-
gorithm is O(N 3 ), to a great degree dependant on the size of the
map[5, p.792]. In conclusion this makes the EKF SLAM algorithm
more suitable for smaller maps containing fewer landmarks due
to the heavy computational load.[27, p. 57] However, as stated
in 3.2.3 the localization and data association part can be difficult
when the landmarks in the environment are few. [28, p. 330-331]
– ROS compatibility
* Implemented in the mrpt_ekf_slam_2d package.
• FastSLAM
– Robustness
* Given what has been stated in 3.3.5, the FastSLAM algorithm is sig-
nificantly more robust than the EKF SLAM algorithm due to the
fact that the data association from measurements to landmarks is
particle-based which reduces the impact of a wrong data associa-
tion.
– Efficiency
3.5 Theoretical evaluation of SLAM 27
– Robustness
* Given what has been stated in 3.4.3, the robustness of the Graph-
SLAM algorithm is higher than the EKF algorithm, partly due to
the fact that older data associations can be reexamined if a wrong
data association has been performed. Basically, this reduces the
risk of producing erroneous future data associations thus increas-
ing the robustness.
– Efficiency
* In Section 3.4.2 it is stated that the use of memory is linearly de-
pendent in comparison to the EKF algorithm which uses O(N 2 )
based on number of landmarks N Moreover, the computational
load is time dependent, thus if the path is long the algorithm can
be quite costly as stated in 3.4.2.
– ROS compatibility
* Implemented in the slam_karto package.
From what can be read above, the EKF SLAM algorithm is not too robust,
while both the GraphSLAM algorithm and the FastSLAM algorithm have a higher
robustness. Moreover, in terms of efficiency the EKF SLAM algorithm is very
costly. Given that all algorithms have been implemented in ROS packages, each
of these are viable choices. However, as the EKF SLAM algorithm lacks in both
robustness and effeciency, the conclusion of using other more effective algorithms
such as the FastSLAM or GraphSLAM instead of the EKF algorithm can be reached.
These algorithms are able to efficiently deal with maps containing more land-
marks, as well as larger maps in terms of sheer size. With maps containing a cou-
ple of hundred landmarks, for example when trying to map the outside world,
the computational complexity of the EKF algorithm could prove to be a huge
problem to deal with. [18, p. 1], [28, p. 330]
Vehicle Dynamics
4
This chapter describes some of the vehicle dynamics theory which was used to
develop the dynamic model of the small-scale vehicle. Some of the theory pre-
sented in this chapter was ultimately not used in the final model, but is still
included here since it describes the initial modeling approach. This chapter only
concerns the movement of the vehicle, whereas the theory concerning the sensors
is presented in Chapter 5.
29
30 4 Vehicle Dynamics
P =τ∗ω (4.2)
where P is the power of torque and ω is the angular velocity.
If it is assumed that there are no losses, the power at the motor and the wheels
will be equal, meaning that
Pmotor = Pwheels (4.3)
and together with with (4.2) this implies
If (4.1) is substituted into (4.4) we obtain the torque of the wheels as a function
of the gear ratio and torque of the motor
ZT
v= adt (4.9)
t
δ = l/R (4.10)
which causes the vehicle to turn in a specific heading angle, or yaw. l is the
length of the wheelbase and R is the radius of the heading angle, i.e. the distance
between the center of gravity and the turn center, as Fig 4.1 shows. [16, p .129]
v = R ∗ ψ̇ (4.11)
Since the radius of the heading angle is related to the Ackermann angle, (4.10)
can be substituted into (4.11) in order to get the yaw rate
v
ψ̇ = ∗δ (4.12)
l
Finally, the heading angle can be obtained through integration as
ZT
ψ= ψ̇dt (4.13)
t
32 4 Vehicle Dynamics
4.3 Coordinates
Coordinates of the vehicle can then be calculated by integrating the velocities in
the x and y direction over time as
Z
x= vx dt (4.14a)
Z
y= vy dt (4.14b)
If the forward velocity v, vehicle length l, yaw rate ω = ψ̇, and yaw ψ of the ve-
hicle are known, the velocities vx and vy can be calculated using a transformation
matrix [10] as
" # " #" #
vx v cos(ψ) −sin(ψ)
= l (4.15)
vy 2 ∗ω
sin(ψ) cos(ψ)
Sensor theory
5
The sensors used in this thesis are a gyroscope, wheel encoders, and a LIDAR.
The measurements from these sensors provide the necessary input to the SLAM
algorithms. The SLAM algorithms will process this data in order to present a map
of the surrounding environment, including the pose of the vehicle positioned
within the map. The following sections will provide a description of each sensor
mentioned.
5.1 Gyroscope
Gyroscopes have been widely used for military and civilian purposes the last cen-
tury. Not only for aeronautical purposes but also by the marine. The traditional
gyroscopes used to be mechanical. However, what is being used in many ap-
plications nowadays are MEMS gyroscopes (micro-electro-mechanical systems),
which are being used in cell phones, robotic projects, and for military purposes.
[14, p. 171] Gyroscopes in general can be described by the the formula of Coriolis
force
Fc = −2 ∗ m ∗ (ω ∗ v) (5.1)
As can be seen from (5.1) the Coriolis force is depending on the mass m of an
object, the angular velocity ω as well as the velocity v of the object. This object
is a part of the gyroscope and the mass of the object as well as the velocity of the
object are known. Given that the object rotates, and by having information about
the mass and velocity of the object the angular velocity can be detected.[14, p.
173] Given the angular velocity, the total angle that the vehicle has rotated can
be calculated as
33
34 5 Sensor theory
Zt
a= ψdt (5.2)
0
The algorithm calculates the Vcar in cm/s. The average number of high pulses
registered by the two Hall effect sensors is denoted P ulsesavg . Which is multi-
plied by (1/10) of the circumference of the wheels, thus giving a distance trav-
eled. By dividing with the time period Tnew the vcar resembles the well known
formula for velocity v = s/t. The wheel circumference is divided by ten due to
the fact that there are ten magnets attached on each wheel.
1
d= ct (5.3)
2 wait
where c is the speed of light. [21, p. 31]
Simulink Model Implementation
6
The vehicle, sensors, and the environment of the vehicle were modelled in Simulink.
This chapter describes the implementation of all the subsystems of the model,
which can be seen in Fig 6.1 below. The model is not able to be simulated in
real-time, due to the computation time being too high.
37
38 6 Simulink Model Implementation
6.1 Input
The input subsystem provides the input stimuli to the system. This is done in two
different modes, keyboard, and file. In keyboard mode, the steering angle and the
velocity of the vehicle is controlled using the arrow keys on the keyboard. In file
mode, the velocity and steering angled is defined by a .mat file. These .mat files
are recorded when running the model in keyboard mode. A typical workflow
while using this model would be to first record the instructions for the vehicle to
a .mat file, and then use the same .mat file for repeated experiments.
6.2.1 Velocity
6.2.1.1 Initial model
The initial velocity model was modelled after how velocity is created in the ve-
hicle itself, with the motor being a torque source which is converted to force,
and ultimately velocity. This is described in Section 4.1. However this model
was never completed, and scrapped. This is because conception of such a model
proved to be too time-consuming, and might have required a master thesis on its
own to do.
order Filter in order to model the acceleration and deceleration of the vehicle.
The time it takes for the vehicle to go from a stationary state to be moving at full
speed was measured in order to calculate the time constant τ of the first order
filter.
6.2.2 Servo
A simple model was also used for the servo. The servo control signal is a value
from -100 to 100, which is then passed through a look-up table which ranges from
-45°to 45°, which is the maximum steering angle of the vehicle. A first-order filter
is used to model the delay of the steering of the vehicle, and the time constant τ
was calculated by measuring the time it takes for the wheel of the vehicle to turn
from 0°to 45°.
6.2.4 Position
The X-position and Y-position subsystems realize the transformation matrix de-
scribed in Section 4.3 in order to determine the current position of the vehicle.
6.3 Map
6.3.1 Parameters
In order to emulate the environment of the vehicle, a map definition needed to
be created. These maps are defined by two parameters; dimensions and walls. The
dimensions array is a 1-by-2 matrix where the first index is the width of the map
and the second is the height of the map. The walls matrix is an n-by-4 matrix
which describes the walls in the map, where each row contains the coordinates of
a wall. These can be either diagonal, vertical, or horizontal. The map parameters
are stored in a .mat file which can then be read by the Simulink model file.
6.3.2 Implementation
The map is implemented in two S-functions called vehicleMap and drawMap. The
former calculates all necessary values to display the current position and yaw of
the vehicle within this map, and the latter draws this is in a MATLAB plot. In
this plot, the position and yaw of the vehicle is represented by a triangle. This is
shown in Fig. 6.3 below. The inputs of vehicleMap is shown in Table 6.1 below,
and the Simulink model diagram is shown in Fig. 6.4.
40 6 Simulink Model Implementation
First and foremost, what is being derived above in algorithm 6 is the angular
velocity in rad/s. The bias output from the gyroscope’s z-axis has been measured
42 6 Simulink Model Implementation
and an average bias output has been calculated. To transform this output to
an angular velocity in rad/s, the average bias (count/number) is multiplied by
DPS/ADC * pi/180. Where DPS = degrees per second of range specified, there
are three possible alternatives: 250 °/s, 500 °/s and 2000 °/s. ADC = Analog to
digital converter, resolution of 16 bits. This division gives the angular rate in °/s,
by multiplying with pi/180 the angular rate can be expressed in radians/s.
Moreover, the noise from the gyroscope has also been modelled using Simulink.
The variance and the standard deviation has been derived from 21000 samples
of the gyroscope left stationary.
6.4.1.2 Implementation
Given the precondition that the vehicle is standing still, samples were obtained
from the physical sensor. These samples were converted into angular velocities
in rad/s by algorithm 6. Given these converted samples, figures were created in
Matlab which display the Gaussian nature of the values from the gyroscope. Fig.
6.5 shows the angular velocities from the gyroscope, and how the curve resembles
a Gaussian distribution. This hypothesis is also in line with conclusions of [13,
p.7-8,22] when measuring sensor errors.
Figure 6.5: Bar chart displaying the noise of the gyroscope, curve showing a
Gaussian distribution
The Gaussian noise as well as the bias offset was implemented into the Simulink
model. This can be seen in Fig. 6.6. This model outputs an angular rate expressed
in rad/s corrupted by noise and bias with the aim of resembling the actual gyro-
scope implemented in the physical car.
6.4 Odometry Sensors 43
Moreover, Fig. 6.8 shows the angular difference between the gyroscope model
and real time measurements on the physical gyroscope.
44 6 Simulink Model Implementation
Figure 6.8: Comparison between gyroscope model and real time measure-
ments on the physical gyroscope. Top display shows the model output after
129.2 seconds, and bottom display shows the total angle measured from the
physical gyroscope. (Notice, both are initialized to 1.507 rad, 90 degrees)
6.4.2.1 Description
A sensor model of the wheel encoders has been implemented into the model of
the car in Simulink. They have been modelled with regards to measurements
of the output from the wheel encoders attached on the physical car. However,
the data being received is actually an estimate of the speed of the physical car
and not a number of magnets detected since the last timestamp. With that being
said, the sensor model implemented is using these measurements to estimate
and mimic the transmitted car speed from the physical wheel encoders attached
on the car. These car speed measurements are not that precise though, which
could be explained by a bias offset and noise. The bias consists of truncation
errors and the algorithm which converts magnet pulses to car speed in cm/s. The
truncation error occurs when the algorithm truncates the floating number to an
u_int8 which is to be transmitted on the CAN bus. The noise can be expressed in
terms of the variance around the mean velocity.
6.4 Odometry Sensors 45
Basically, what can be seen from Table 6.2 is the sent instruction MI to the
motor, the average measured velocity µ, the standard deviation σ , the variance
σ 2 , the physical time [s] for the car to travel a distance expressed in seconds,
the physical distance [cm] the car travelled during the measurement. Moreover,
the factor [tq] denotes the portion of time of which the speed of the vehicle is
above a certain threshold, thus disregarding the portion of time of which the
vehicle is accelerating. Furthermore, the factor [vq] corrects the actual distance
the car travelled during the performed test, by disregarding the distance travelled
during acceleration to a more steady velocity.
The Fig. 6.9 shows the speed received from the physical wheel encoders.
6.4.2.3 Implementation
The model was implemented based on the values in Table 6.2. The bias has been
modelled through numerous measurements on the wheel encoder values and
46 6 Simulink Model Implementation
Figure 6.9: Plot of the measured speed from the wheel encoders
Table 6.3: Measured velocity compared to the sensed velocity from the phys-
ical wheel encoders
Motor instruction 0 5 10 15 20
Sensed velocity 0 3.9 10 15.7 21.3
Measured velocity 0 9.82 23.3 36 49
In Fig. 6.10 below, the car speed from the sensor implemented in Simulink is
plotted. The plot is showing the car speed in cm/s plotted over time. Moreover,
the car speed is depending on user input in the model which can be seen as the
steps where a user has increased the car speed. In addition, the sensor model
implemented includes a noise source which depends on the current speed of the
car model, this noise can be seen in the plot below.
Fig. 6.11 below shows the model implementation in Simulink. Basically, what
can be seen is the conversion from m/s to cm/s, a LUT converting the velocity
input to what the sensor detects, as well as a Matlab function which adds noise
to the velocity depending on the current velocity.
6.5 LIDAR 47
6.5 LIDAR
6.5.1 Description
The LIDAR is implemented as an S-function called lidar, and models the RPLI-
DAR used in this thesis. The inputs of this function are shown in Table 6.4 below.
48 6 Simulink Model Implementation
6.5.2 Operation
Before vehicleMap has initialized the map, the function waits in an idle state. Af-
ter this, the distance is measured. Noise, range constraints, and an error PDF is
then applied to this value to make it resemble the RPLIDAR. This value is then
written to the state vector. Subsequently, the LIDAR is rotated one degree to pre-
pare it for the next measurement, which will take place after a delay. This delay
is equal to the time it takes for the RPLIDAR to rotate one degree. If the LIDAR
has reached the end of its rotating cycle, the state vector is first written to the
output and then flushed. A flowchart of the function is shown in Fig. 6.13 below,
and a diagram of the subsystem is shown in Fig. 6.12.
From Fig. 6.14 it can be seen that when the tilt of the reflection surface ex-
ceeds 40°, the percentage of missed measurements increases dramatically. Hypo-
thetically, this due to the tilt angle of the measured object causing a larger reflec-
tion angle of the laser. This means that the reflected laser is more likely to miss
the receiver. From observing Fig. 6.15 it can also be concluded that the LIDAR
does not operate probably when the distance to the measured object is getting
close to its range constraints. In the model, these sets of error probabilities are
realized as two look-up tables. When a measurement is simulated, the angle and
distance to the measured object are passed to its respective look-up table, and a
probability of error is obtained. This probability is then used in a binomial ran-
dom variable which is multiplied with the measured distance, meaning that the
measurement will be equal to 0 if the random variable generates a miss.
52 6 Simulink Model Implementation
6.6.1 /sensor_data
This topic contains the speed and the angular velocity of the vehicle. The message
type is custom defined, and is comprised of a Header [9] and a Float64MultiArray
[8]. This topic publishes the current odometry values periodically, with a clock
period of 10 ms.
6.6.2 /scan
This topic contains the most recent LIDAR scan as well as a set of constants as
required by the ROS message type sensor_msgs/LaserScan [7]. The /scan topic
is published whenever the LIDAR has made a full 360°rotation. This means that
the publishing frequency of the /scan message is effectively 5.5 Hz, as this is the
rotational frequency of the LIDAR model.
6.6.3 /car_position
This topic contains the position of the vehicle in the Matlab/Simulink generated
map. This topic is solely for testing purposes to compare the x and y position of
the vehicle at different time instances. The topic consists of a Float64MultiArray
and a header with the current time stamp. The publishing frequency of the topic
is 10 Hz.
6.6.4 /clock
This topic contains the timestamp, expressed as a ROS timestamp. A ROS times-
tamp constains two values, one value of how many seconds have passed, and one
value of how many nanoseconds have passed since the last measured second. For
example, if the time stamp is supposed to be 5.8, the value for seconds will be
5 and the value of nanoseconds will be 0.8 ∗ 109 This topic was introduced due
to the fact that the model cannot run in real time. Basically, the clock topic con-
tains the relative time being used in the model. The message type being used is
ros_graph/Clock, and is published at a frequency of 100 Hz.
ROS Implementation
7
In order to control the vehicle and obtain its sensor data, a set of ROS nodes was
implemented, as shown in Fig. 7.1 below. Everything within a dotted rectangle
in the figure is a ROS node, where each dotted line represents a device running
the node. Each node is described in the following sections.
Figure 7.1: Diagram showing the active ROS nodes while executing a SLAM
algorithm on the physical vehicle.
57
58 7 ROS Implementation
7.2 Odometry
The purpose of the odometry node is to present the angular rotation and the x
and y position of the vehicle model as well as the physical vehicle. This is done
by subscribing to the sensor_data topic, described in Section 7.1. Basically, the
data is extracted from the topic in the form of a velocity, an angular velocity and
a time stamp. With the help of this data, velocities in the x and y direction can
be calculated. By using these velocities and the difference between the current
timestamp and the previous timestamp the position in x and y of the vehicle
and the angle of which the vehicle has turned can be calculated. This data is
presented as a topic which contains the current coordinates of the vehicle as well
as the total angle rotated. Moreover, it is also presented to the TF tree, which
is a transform tree that relates coordinate frames to each other. The coordinates
of these frames depends on what kind of frame it is. For example, the base link
frame is the coordinate frame of the entire robot, so the coordinates for this frame
is the actual position of the robot.The odometry node presents the transform from
the odom frame to the base_link frame. This correlation shows the relative x and
y position and orientation of the robot frame (base_link) to the odometry frame.
7.3 SLAM
The SLAM node is basically the implementation of the SLAM algorithm wrapped
into ROS. Basically, the SLAM algorithm requires information from the sensors,
such as the LIDAR data as well as the odometry data. By acquiring this data it is
able to generate a map topic which contains the information about its surround-
ings. Moreover, it also presents a transform from map to the odom coordinate
frame. With this information, the coordinate frames in the map can be related
to each other and the positioning of the vehicle within the map can be calculated
and presented. Fig. 7.2, depicts the transform tree, which shows how the coordi-
nate frames relate to each other.
7.4 rViz 59
Figure 7.2: An overview of the transform tree. The broadcaster is the name
of the node which presents the transform. How often the transform is up-
dated can be seen in the average transform. The buffer length is how many
seconds of data is available in the tf buffer. Transform times are seen in the
most recent and oldest transform, where the former contains the timestamp
of the first transform of that type, and the latter contains vice versa.
7.4 rViz
The rViz node is mainly a visualization tool which is able to provide a live update
of the map generated from the SLAM algorithm. Moreover, the trajectory of the
vehicle in the map can also be visualized. An overview of the rViz window is
shown in Fig. 7.3.
60 7 ROS Implementation
Figure 7.3: An overview of the rViz window. The main window shows the
map generated by a SLAM algorithm. The little red square is the current
pose of the vehicle.
7.5 Position
The general idea behind the position node is to save the trajectory data of the
vehicle. The position of the vehicle is obtained by looking up a transform from
the fixed world frame to the base_link frame which represents the actual position
of the vehicle. The estimated position is generated by the SLAM algorithm. The
topic was introduced for testing purposes in order to compare the actual position
of the vehicle with an estimated position to validate each SLAM algorithm. The
data being saved to the text file is the x and y coordinates as well as a time stamp.
The values are stored into a vector with a frequency of 10 Hz, basically, the same
as the frequency of the /car_position topic referred in Section 6.6.3.
7.6 simulinkPosition
The main idea behind the simulinkPosition node is to save the trajectory of the
vehicle within the map generated from Simulink. The node subscribes to the
topic from simulink /car_position and saves the x and y coordinates as well as
a time stamp into a text file. These coordinates represents the actual position of
the vehicle in the map, thus, not distorted by the sensors in the model. This topic
is described in Section 6.6.3.
7.7 ROS Control App 61
63
64 8 Simulations
Figure 8.1: Diagram showing the active ROS nodes while simulating a SLAM
algorithm
8.1.2 Maps
Three maps were designed for the simulation. The main idea was to have three
maps which grow in size, as all SLAM algorithms scale with the map size, and
this might give an idea of how well a particular algorithm scales. The maps are
shown in Figures 8.2, 8.3, and 8.4 below, where the axis tick labels show distance
in meters.
8.1 Method 65
8.1.3 Tests
Four different types of tests were performed on each algorithm, namely:
• Mapping accuracy
• Path accuracy
• Average position error
• Average CPU load
In the mapping accuracy test, the SLAM algorithm is used on a specific map
simulation. When the map is obtained, this is saved, and presented next to the
actual map in order to show how the generated map from the SLAM algorithm
differs from the actual map.
The path accuracy test compares the generated path of each algorithm with the
actual path the kinematic vehicle has travelled. This is presented in a plot which
includes the actual path, and the generated path from each algorithm.
The average position error is a performance metric which shows how much the es-
timated position from the SLAM algorithm differs from the actual position. This
is calculated by computing the difference between the estimated position and ac-
tual position at each time instance, and computing the mean.
In the average CPU load test, the CPU usage of each SLAM algorithm was recorded
using the CPU recorder node described in Section 7.10. For these results, the top
method was used, with a sampling period of 10 ms.
8.1 Method 67
Each test is performed for each algorithm, for each map. Also, each of these tests
is performed for simulations with ideal sensors, and simulations with added sen-
sor noise.
After the theoretical investigation had been conducted, it was decided to per-
form simulations with GraphSLAM and FastSLAM. This investigation is thor-
oughly described in Section 3.5. These algorithms are implemented in the ROS
packages slam_karto and gMapping, which were used for the simulations. The
active ROS nodes for the slam _karto and gMapping simulations are illustrated
in Fig. 8.5 and Fig. 8.6, respectively.
8.2 Results
All simulation results are presented in the sections below. The tests themselves
were carried out as specified in Section 8.1.3.
Fig 8.7 below shows the results for the mapping of the small map. The results of
each algorithm/sensor model appear to be fairly similar.
(a) gmapping with ideal sensor model (b) slam_karto with ideal sensor model
(c) gmapping with noisy sensor model (d) slam_karto with noisy sensor model
(a) gmapping with ideal sensor model (b) slam_karto with ideal sensor model
(c) gmapping with noisy sensor model (d) slam_karto with noisy sensor model
(a) gmapping with ideal sensor model (b) slam_karto with ideal sensor model
(c) gmapping with noisy sensor model (d) slam_karto with noisy sensor model
Figure 8.9: Mapping accuracy results on large map. When the noisy sensor
model is used, as shown in (d), the map produced by slam_karto is highly
inaccurate. This is because the loop closing fails.
72 8 Simulations
Fig 8.11 below shows the generated path of both algorithms compared to the
theoretical path when running simulations of the medium map. The generated
path of both algorithms does not deviate greatly from the theoretical path using
the ideal model. Both algorithms however, especially slam_karto, show a signifi-
cantly erroneous path with the noisy sensor model.
As can be seen in Table 8.1, when an ideal sensor model is used, the algorithms
produce similar results, although slam_karto slightly outperforms gMapping on
the medium map. A specific cause for this remains inconclusive. What can be
said regarding this in general is that given the same environment and sensor
data available, slam_karto simply works better than gMapping under those very
particular circumstances.
In both the ideal and noisy simulation sets, the clear relationship is that the aver-
age position error always seems to increase with the size of the map. This can be
attributed to the fact that if the robot travels for a longer distance, the measure-
ments may drift for a longer time until the algorithm has a chance to synchronize,
i.e. through a loop closure. This is illustrated pretty well in Fig 8.12, where for
example gMapping drifts a lot while travelling through the long corridor until it
is able to correct itself.
Fig 8.13 shows how the position error over time for one of the simulations.
The figure shows that the error does not have any time-related behaviour, it’s
magnitude does not for example increase nor decrease over time. The error is in
contrast related to the algorithms ability (or disability) to predict the position of
the vehicle. This ability can be due to the quality of sensor data the algorithm
has to work with, and how "difficult" the current estimation problem is to solve.
8.2 Results 77
Thus, the error is more related to the state of the vehicle and its environment,
rather than the time elapsed.
Figure 8.13: Large map with ideal sensors- position error over time
In the section below, the average CPU load is expressed as a percentage. This
percentage is the average percentage of time that the SLAM algorithm utilizes
the CPU. This figure refers to the usage of one CPU core, so a number higher
than 100% is possible if the algorithm uses more than one core.
78 8 Simulations
Table 8.3: Average CPU load of gMapping and slam_karto in different maps
using ideal sensor model.
Table 8.4: Average CPU load of gMapping and slam_karto in different maps
using noisy sensor model.
Something which is somewhat surprising here is the the fact that for slam_karto,
the CPU usage goes down significantly for the large map. If you cross-examine
this observation with the fact that this simulation produced a highly inaccurate
map (see Fig. 8.9 (d)) and a highly inaccurate path (see Fig 8.12 (b)), this may
indicate that the algorithm is not doing all computations necessary to work prop-
erly.
8.3 Issues
When doing these experiments, there were some problems with running the
slam_karto node. The problem was that when performing a loop closure, the
node crashed almost every time. A crash itself did not result in erroneous re-
sults, but it meant that the SLAM was aborted, so no further mapping could be
conducted from that point on. Also, since the crash resulted in a loop closure
not being executed, this means that the generated map was not as accurate as it
would have been following a successful loop closure.
The cause of the slam_karto crashes was never found. In order to still obtain
results which could be used in this thesis, the slam_karto experiments had to
performed over and over again until the loop closure algorithm managed to be
executed without crashing.
8.4 Conclusion 79
8.4 Conclusion
The results show that both the gMapping and slam_karto algorithms are able to
solve the SLAM problem. The generated maps (seen in Fig. 8.7, 8.8, and 8.9)
from the two algorithms are similar in the ideal case. Moreover, the position-
ing error does not differ a lot between the two algorithms, with each algorithm
outperforming the other on at least one map.
However, when noise is introduced, it can clearly be seen that the loop closure
of slam_karto starts failing. A good example of this can be seen in Fig. 8.8 d),
where the robot has observed the lower room twice, but fails to recognize the
fact that it as been there before. A similar issue can be seen in Fig. 8.11 b). It
should also be noted that the loop closure of slam_karto mostly resulted in the
node crashing, or generated a map that was completely incomprehensible. The
results shown in this report simply show the best result out of many trials. The
gMapping algorithm, however, produced similar results every time, and there
were no issues with the loop closure.
Another interesting observation can be seen in Fig. 8.12 b). In this simula-
tion, gMapping continuously makes small position mistakes which are quickly
corrected. slam_karto, on the other hand, predicts the position really well un-
til it reaches the lower corridor of the map, where the positioning error starts
to increase. The algorithm never recovers from this error, and keeps on produc-
ing false position estimates from that point. This, along with the fact that the
slam_karto node almost always crashes when performing a loop closure, indi-
cates that it may not be as reliable as gMapping.
Tables 8.3 and 8.4 show that in all simulations performed, gMapping has at
least twice as much CPU load as slam_karto. This large CPU load of the algo-
rithm, which at times even occupies an entire core by itself, is an indicator that
the algorithm might have problems being executed using limited resources. Re-
gardless, it was still chosen to be the one to be implemented on the physical
vehicle, due to the fact that slam_karto was deemed too unreliable.
Real-world experiment
9
9.1 Overview
Chapter 8 introduced simulation results on three different maps generated by two
different SLAM algorithms. From these simulations, it was decided that gMap-
ping was the appropriate algorithm to use with the physical vehicle. This chap-
ter describes a real-world experiment featuring a map generated by the physical
vehicle while running gMapping.
The real-world experiment of the gMapping algorithm was performed at the
ÅF office building in Linköping. The room chosen was a small office approxi-
mately measuring 5.3x4.8 m. A sketch of this office can be seen in Fig. 9.1 below,
and a photo can be seen in Fig 9.2
81
82 9 Real-world experiment
Figure 9.1: Sketch of test environment. The circles represent table legs, and
the squares two large cupboards which divide the room.
9.2 Results 83
9.2 Results
The experiment was conducted three times, and the generated maps are shown
in Fig. 9.3 below. The computational load for each trial is shown in Table 9.1.
84 9 Real-world experiment
(a) Trial 1
(b) Trial 2
(c) Trial 3
During the generation of each map, the robot was controlled manually, which
meant that the robot drove differently at each trial. All maps are fairly accurate
9.2 Results 85
in the sense that they depict all landmarks correctly. This is evident if any of the
maps in Fig. 9.3 are compared with the sketch of the test area shown in Fig. 9.1,
as it can be seen that all features shown in the sketch are present. The algorithm
does seem to struggle a little bit with the shape of the bottom cupboard as map
(a) and (b) depict it not as a perfect rectangle, but slightly skewed. It probably
would have been easier for the algorithm to map this feature had it been able
to travel along the south side of the cupboard, in order to perform a better loop
closure. For the purposes of this thesis though, the mapping accuracy is highly
satisfactory.
86 9 Real-world experiment
9.3 Conclusion
The maps shown in Fig. 9.3 show that the gMapping algorithm is able to success-
fully create a map of the office. All objects (the cupboards and the chair legs) are
successfully depicted in the map. The positioning part of the algorithm has not
been evaluated in the real world experiment due to the lack of a sophisticated
enough test environment to do so.
The CPU load in this experiment, shown in Table 9.1 is similar to the one in
the simulations, meaning that almost an entire core is occupied while the algo-
rithm is running. As the imx6 processor used on the physical vehicle is a quad-
core, this is not a problem if only the SLAM algorithm is executed. However if
additional functionality was to be added to the vehicle, for example a pathfind-
ing algorithm, some performance issues may arise. These could be solved by, for
example, reducing the amount of particles in the filter, but may also result in the
algorithm producing more erroneous results.
10 Discussion
10.1 Achievements
This thesis has yielded the following results:
• A Simulink model of the physical vehicle and its sensors
• A small-scale vehicle with a working SLAM algorithm
• A comprehensive study comparing different SLAM algorithms
The model-based development approach proved beneficial for this thesis. Comparing the estimated position with an actual position would have been very difficult if only the real-world experiment had been performed, as it would have required a highly sophisticated test apparatus. Without the model, the results section of this thesis would have been rather thin. The model was probably not necessary for the final implementation itself, since the output of the algorithms could have been examined visually to decide which one to implement. The performance numbers, however, could not have been obtained without it, and the model-based approach can be particularly useful in situations where testing is difficult or expensive.
The performance of the implemented SLAM algorithm was highly satisfactory. One of the main goals of the ADEPT project is to develop something that can be demonstrated at job fairs, and the SLAM algorithm works very well in this scenario. One reason is that it works seamlessly without any special kind of environment, as long as too many moving objects are avoided. The results are also very visual and can easily appeal to a large audience.
The comparison of the SLAM algorithms was also successful. Several performance tests were devised, and they clearly show how well the algorithms perform under different circumstances. One thing this thesis offers that similar works lack is a noisy sensor model: most other works assume ideal sensors, which is unrealistic, since some degree of sensor noise is always present. A good study of SLAM algorithms needs to examine how well they perform in the presence of noise; otherwise the evaluation may be overly optimistic. A minimal illustration of such a noise model is sketched below.
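As a minimal illustration, the Python sketch below corrupts an ideal simulated laser scan with zero-mean Gaussian noise. The noise level and maximum range are assumed values for the sketch, not the ones identified for the sensors in this thesis.

import numpy as np

rng = np.random.default_rng()

def noisy_scan(ideal_ranges, sigma=0.01, max_range=5.6):
    # Add zero-mean Gaussian noise (standard deviation sigma, in
    # metres) to the ideal ranges and clip to the sensor's valid
    # measurement interval.
    noisy = ideal_ranges + rng.normal(0.0, sigma, size=ideal_ranges.shape)
    return np.clip(noisy, 0.0, max_range)

# Example: a fake 360-beam scan of a wall 2 m away in every direction.
ideal = np.full(360, 2.0)
print(noisy_scan(ideal)[:5])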
10.2 Method
If this thesis were to be conducted again, several improvements to its method could be made. One concerns the large amount of time spent trying to create a realistic model of the vehicle itself. Since this proved too time-consuming, it was abandoned in favor of a simple vehicle model, but the effort nevertheless consumed time that could have been better spent.
Another aspect that could be improved is that the whole environment model was created from scratch in Simulink. This was probably unnecessarily time-consuming, as ROS-compliant robot simulators such as Stage or Gazebo already exist. This was not known, however, until the model was almost complete. Had one of these been used (along with Simulink for the sensor model), an equal or better model could probably have been implemented in less time. That time could instead have been spent completing some of the lower-priority tasks, or on further debugging of slam_karto to find out more explicitly why it had a tendency to crash.
10.3 Complications
During the thesis work, complications arose regarding one of the specified questions to be examined: "Development of a model in Simulink and evaluating the model by comparing the output of the model to a theoretical position/direction within a map". As modelling the vehicle itself proved too time-consuming, most time was instead spent on modelling the vehicle's sensors. This resulted in a very simplistic vehicle model, which only approximately resembles the vehicle used in this thesis. For example, acceleration and forward velocity are described by a generic model (of the kind sketched below) that has little to do with the physical vehicle. Comparing the model output to a theoretical position/direction would therefore not say much, and this evaluation had to be omitted from the thesis. However, since the thesis is about evaluating SLAM algorithms, a realistic motion model of the vehicle is not of high importance; what matters is accurate modelling of the sensors, which was achieved to a reasonable degree.
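As an example of what such a generic longitudinal model can look like, the sketch below uses a hypothetical first-order lag from commanded to actual forward velocity; the time constant is an assumed value, and this is not necessarily the form used in the Simulink model.

def step_velocity(v, v_ref, tau=1.0, dt=0.01):
    # First-order lag: forward velocity v approaches the commanded
    # velocity v_ref with time constant tau (seconds), integrated with
    # an explicit Euler step of length dt.
    return v + (v_ref - v) * dt / tau

v = 0.0
for _ in range(300):           # simulate 3 s at 100 Hz
    v = step_velocity(v, 1.0)  # command 1 m/s
print(round(v, 3))             # approaches 1 m/s, about 0.951 here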
There were also some hardware issues with the physical vehicle. Some components broke and had to be repaired, and since replacements had to be ordered, the vehicle was unavailable while waiting for them to arrive. After the results of the real-world experiment had been gathered, the STM32 board broke. Since this board is hard to replace, no additional experiments could be performed, such as mapping a larger environment.