
Master of Science Thesis in Computer Engineering, Vehicular Systems
(Datorteknik, Fordonssystem)

Department of Electrical Engineering, Linköping University, 2017

Implementation of SLAM algorithms in a small-scale vehicle using
model-based development

Johan Alexandersson and Olle Nordin

LiTH-ISY-EX–17/5101–SE

Supervisors: Peter A Johansson, ISY, Linköpings universitet
             Simon Tegelid, ÅF Consulting

Examiner: Mattias Krysander, ISY, Linköpings universitet

Department of Electrical Engineering
Linköping University
SE-581 83 Linköping, Sweden

Copyright © 2017 Johan Alexandersson and Olle Nordin


Abstract
As autonomous driving is rapidly becoming the next major challenge in the auto-
motive industry, the problem of Simultaneous Localization And Mapping (SLAM)
has never been more relevant than it is today. This thesis presents the idea of
examining SLAM algorithms by implementing such an algorithm on a radio-controlled
car which has been fitted with sensors and microcontrollers. The software
architecture of this small-scale vehicle is based on the Robot Operating System
(ROS), an open-source framework designed to be used in robotic applications.
This thesis covers Extended Kalman Filter (EKF)-based SLAM, FastSLAM,
and GraphSLAM, examining these algorithms through theoretical investigations,
simulations, and real-world experiments. The method used in this thesis is model-
based development, meaning that a model of the vehicle is first implemented in
order to be able to perform simulations using each algorithm. A decision of which
algorithm to be implemented on the physical vehicle is then made backed up by
these simulation results, as well as a theoretical investigation of each algorithm.
This thesis has resulted in a dynamic model of a small-scale vehicle which can
be used for simulation of any ROS-compliant SLAM-algorithm, and this model
has been simulated extensively in order to provide empirical evidence to define
which SLAM algorithm is most suitable for this application. Out of the algo-
rithms examined, FastSLAM was proven to be the best candidate, and was in the
final stage, through usage of the ROS package gMapping, successfully imple-
mented on the small-scale vehicle.

Acknowledgments
First and foremost, we would like to thank Simon Tegelid and Magnus Carlsson
at ÅF, for endless support whenever needed. Additional accolades go out to our
supervisor Peter Johansson at the Department of Electrical Engineering who was
very helpful with technical support. We would also like to extend our gratitude
to our examiner Mattias Krysander, also at the aforementioned department, who
provided an extensive amount of good insight about the subject and also lots of
useful hints regarding academic writing in general.

Linköping, October 2017


Johan Alexandersson and Olle Nordin

Contents

Notation

1 Introduction
    1.1 The Company
    1.2 Background
    1.3 Thesis Scope
    1.4 Limitations
    1.5 Risk analysis
        1.5.1 General risks
        1.5.2 Kinematic vehicle model
    1.6 Equipment
    1.7 Thesis methodology
    1.8 Related work

2 Recursive Bayesian Estimation
    2.1 State-space representation
    2.2 Bayesian Filtering
    2.3 Kalman filter
    2.4 Extended Kalman filter
    2.5 Particle filter
        2.5.1 Sampling
        2.5.2 Resampling
        2.5.3 Rao-Blackwellized Particle Filter

3 Simultaneous localization and mapping
    3.1 Introduction to SLAM
        3.1.1 Data association
        3.1.2 Loop closure
        3.1.3 Full SLAM and online SLAM
        3.1.4 General models for SLAM problem
    3.2 EKF SLAM
        3.2.1 Algorithm
        3.2.2 Computational complexity
        3.2.3 Data association
    3.3 FastSLAM
        3.3.1 Particle structure
        3.3.2 Algorithm
        3.3.3 FastSLAM 2.0
        3.3.4 Computational complexity
        3.3.5 Data Association
    3.4 GraphSLAM
        3.4.1 Algorithm
        3.4.2 Computational complexity
        3.4.3 Data association
    3.5 Theoretical evaluation of SLAM

4 Vehicle Dynamics
    4.1 Longitudinal dynamics
        4.1.1 Gear ratio
        4.1.2 Wheel torque
        4.1.3 Forward velocity
    4.2 Lateral dynamics
        4.2.1 Ackermann angle
        4.2.2 Heading angle
    4.3 Coordinates

5 Sensor theory
    5.1 Gyroscope
    5.2 Wheel encoders
    5.3 Laser range finder

6 Simulink Model Implementation
    6.1 Input
    6.2 Kinematic Vehicle
        6.2.1 Velocity
        6.2.2 Servo
        6.2.3 Angle & Angular Velocity
        6.2.4 Position
    6.3 Map
        6.3.1 Parameters
        6.3.2 Implementation
    6.4 Odometry Sensors
        6.4.1 Gyroscope
        6.4.2 Wheel encoders
    6.5 LIDAR
        6.5.1 Description
        6.5.2 Operation
        6.5.3 Probability of error
    6.6 ROS Interface
        6.6.1 /sensor_data
        6.6.2 /scan
        6.6.3 /car_position
        6.6.4 /clock

7 ROS Implementation
    7.1 CAN Reader
    7.2 Odometry
    7.3 SLAM
    7.4 rViz
    7.5 Position
    7.6 simulinkPosition
    7.7 ROS Control App
    7.8 Joystick Input
    7.9 CAN Writer
    7.10 CPU Recorder

8 Simulations
    8.1 Method
        8.1.1 Overview
        8.1.2 Maps
        8.1.3 Tests
    8.2 Results
        8.2.1 Mapping accuracy
        8.2.2 Path accuracy
        8.2.3 Average position error
        8.2.4 Average CPU load
    8.3 Issues
    8.4 Conclusion

9 Real-world experiment
    9.1 Overview
    9.2 Results
    9.3 Conclusion

10 Discussion
    10.1 Achievements
    10.2 Method
    10.3 Complications
    10.4 Future work

Bibliography
Notation

Abbreviations

Abbreviation Description
ROS Robot Operating System
LIDAR Light detection and ranging
SLAM Simultaneous localization and mapping
IMU Inertial measurement unit
GUI Graphical user interface
AI Artificial intelligence
PDF Probability density function
MEMS Micro-electro-mechanical systems
LUT Lookup table
CPU Central processing unit
PC Personal computer
CAN Controller area network
1 Introduction
1.1 The Company

This thesis has been conducted at ÅF in Linköping. ÅF is a Swedish consulting


company with over 8000 employees worldwide [6]. ÅF as a company focuses
heavily on technology areas such as industry, infrastructure and energy [6]. The
thesis work was conducted at ÅF’s technology department.

1.2 Background
In recent times, many companies in the automotive industry have invested
considerable time and funds in researching the possibility of producing
autonomous cars. The research has progressed to the point that there are
semi-autonomous cars available on the market today, and it is widely believed
that, at some point in time, almost all vehicles will be autonomous. In order
to achieve this, there is
obviously a need for advanced sensors and functions. One of the major issues
is the fact that the vehicle needs to be aware of its current position within an
unknown environment. This is known as the Simultaneous Localization And
Mapping problem, commonly abbreviated as SLAM.
As ÅF is a company that works a lot with autonomous driving, a competence
project concerning this has been started. The goal of this project is to assemble
a small-scale vehicle with embedded sensors and microprocessors, and imple-
ment functions which will enable this vehicle to drive autonomously. As imple-
menting a SLAM algorithm is a vital part of achieving this, this thesis will focus
on finding the best SLAM algorithm for this purpose and implementing it in the
vehicle.


1.3 Thesis Scope


The purpose of this thesis is to implement a Simultaneous Localization And
Mapping (SLAM) algorithm in a small-scale vehicle running the Robot Operating
System (ROS). The choice of SLAM algorithm will be supported by a theoretical
investigation. Three different algorithms will be investigated: Extended Kalman
Filter based SLAM (EKF) [12], particle filter-based SLAM (FastSLAM) [29], and
graph-based SLAM (GraphSLAM) [11]. FastSLAM is implemented in the ROS pack-
ages gmapping [2] and coreslam [1], and GraphSLAM is implemented in the pack-
age slam_karto [3]. In addition, simulations will be done on a model created
in Simulink representing the actual car, and further information regarding the
SLAM algorithm implemented in the model will be presented.
The SLAM algorithm is supposed to simultaneously create a map of the vehicle's
environment as well as calculate the position of the vehicle within this map.
This data is to be broadcast to another ROS node on a PC, which will show the
map and the vehicle's position in real-time in a GUI.
There will be two different levels of priority in this thesis. The first priority is to
implement passive SLAM, and the second priority is to implement active SLAM.
Another second-priority goal is to make the algorithm consider dynamic objects.

In order to reach the first priority goals, the following needs to be investigated:

• A theoretical investigation comparing different SLAM algorithms, in order
to make a valid decision on which one to use
• How to run a SLAM algorithm using limited resources (since it will be run
on an embedded processor). This means first and foremost to pick a SLAM
algorithm which uses few resources. However, if this is not sufficient, mod-
ifications to the algorithm (or the implementation of one) may be needed.
• Development of a model in Simulink, and evaluation of the model by compar-
ing its output to a theoretical position/direction within a map

In order to reach the second priority goals, the following needs to be investigated:
• A theoretical investigation comparing different pathfinding algorithms for
active SLAM, as well as simulating different algorithms in Simulink.
• Theoretical studies and simulations where dynamic objects are considered.

1.4 Limitations
The following limitations are set for this project:
• Ethical aspects of any kind will not be discussed in this thesis.
• There are no requirements on the environment in which the vehicle will
operate.

1.5 Risk analysis


There are several risks that could cause certain parts of this thesis to be delayed or to fail.
These are explained in the following sections.

1.5.1 General risks


A major part of this thesis will be conducted during the summer, when super-
visors may be on vacation leave. This will limit the amount of support that is
available.

1.5.2 Kinematic vehicle model


There are several risks regarding the kinematic vehicle model:
• The sensors in the kinematic vehicle model should represent the actual
sensors in the car as well as possible. The modelled sensors should be
validated by measurements on the real sensors as well as by theoretical
values. However, this does not guarantee that they accurately represent
the actual sensors.
• The kinematic vehicle model should be validated using a measuring system.
Currently, the measuring system is under development at Linköpings uni-
versitet, which implies that the testing phase of the model could be delayed.
An alternative solution would be to compare the difference in position
within the maps by eye.

1.6 Equipment
Fig. 1.1 displays an overview of the system architecture used in this thesis.

Figure 1.1: System architecture overview

The hardware equipment and software components are further explained be-
low:

• ROS
– A framework and operating system developed for robots. This thesis
will use the ROS Kinetic distribution.
• Lubuntu 14.04
– A lightweight Ubuntu operating system, running on the imx6 proces-
sor on the Nitrogen6_MAX board.

• STM32 board
– A PCB containing a stm32f4 microcontroller, CAN transceivers, as
well as being connected to a RPLIDAR, magnet encoders and an IMU.
• Nitrogen6_MAX
– A motherboard containing CAN transceivers, Ethernet, Wi-Fi, and an
imx6 quad-core processor.
• Magnet encoders
– Hall effect sensors attached to the wheel axis of the physical vehicle,
measuring each time a magnet mounted inside the wheel passes by.
• IMU 9250
– Inertial measurement unit, containing a magnetometer, a gyro, and
an accelerometer.
• RPLIDAR
– A 2D laser scanner capable of producing scans in 360 different directions.

1.7 Thesis methodology


Fig. 1.2 illustrates the method workflow of this thesis. The first phase is a pre-
study, where the basic theory of SLAM algorithms will be studied and ROS/Simulink
tutorials will be conducted. This is just to get a good initial understanding of the
subject. The next phase of the thesis is to concurrently develop a model of the
physical vehicle in Simulink and to conduct a theoretical investigation of the SLAM
algorithms chosen for investigation in this thesis. Once the model is done, some
ROS implementation is needed to be able to perform simulations with the vehi-
cle model. When the simulations are done, the results will be used to choose an
algorithm to be implemented on the physical vehicle.

Figure 1.2: Thesis methodology



1.8 Related work


This thesis, as stated above in Section 1.3, includes conducting a theoretical in-
vestigation of three SLAM algorithms. Given that the SLAM problem has
been known for a long time, there exist papers related to each of the SLAM al-
gorithms examined in this thesis [4, p. 6]. In the book Probabilistic Robotics
[28] by Sebastian Thrun, Wolfram Burgard, and Dieter Fox, each SLAM algorithm
is described thoroughly, and it is widely cited in this thesis. How-
ever, given the specified thesis scope described in Section 1.3, the theoretical in-
vestigation is also to be backed up by simulation results. Various aspects of the
GraphSLAM and FastSLAM algorithms have been compared to each other in
the paper "An evaluation of 2D SLAM Techniques available in Robot Operating
System" [24, p. 1-6] and the master thesis "SmokeNav - Simultaneous Localization
and Mapping in Reduced Visibility Scenarios" [25, p. 26-33]. Moreover, on pages
33-36 of the master thesis [25], the reader is provided with real-world results
from GraphSLAM and FastSLAM implementations. Furthermore, real-world re-
sults of GraphSLAM and FastSLAM implementations have been provided in [24,
p. 5-6].
2 Recursive Bayesian Estimation
A key component of what enables a robot to successfully perform navigation is
sensing. In order to navigate properly, the robot needs information about its en-
vironment. This information is supplied by sensors. A major problem, however,
is that the sensor data will always be corrupted by noise to some degree. That
means that if the robot just uses the raw sensor data as it is, the navigation will
produce erroneous results. To cope with the fact that all measurements are made in
the presence of noise, filtering is needed. One approach of doing this is through
Recursive Bayesian Estimation [28, p. 13], which uses probabilistic models in
order to filter out the noise.

2.1 State-space representation


In order to define states of a system, state-space representation is used. In general,
a continuous-time system is said to be in state-space form [20, p. 19] if it can be
defined as

ẋ(t) = f (x(t), u(t)) (2.1a)


y(t) = g(x(t), u(t)) (2.1b)

where x(t) is the system's state vector, and u(t) and y(t) are the input and the output
of the system, respectively. The derivative of the state vector ẋ(t) is a function of
the current state x(t) and the current input u(t). Similarly, the output of the
system y(t) is a function of the current state and the current input. For discrete-
time systems, state-space equations [20, p. 19] are defined as

xt+1 = f (xt , ut ) (2.2a)


yt = g(xt , ut ) (2.2b)


For linear continuous-time systems, the state space model can be formulated as

ẋ(t) = Ax(t) + Bu(t) (2.3a)


y(t) = Cx(t) + Du(t) (2.3b)

For linear discrete-time systems, the state space model can be formulated as

xt+1 = Axt + But (2.4a)

yt = Cxt + Dut (2.4b)
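To make the discrete-time formulation concrete, the following minimal C++ sketch (not taken from the thesis) iterates the linear model (2.4) for a few steps; the double-integrator system matrices, the time step, and the input are illustrative assumptions.

    // Minimal sketch: iterate x_{t+1} = A x_t + B u_t, y_t = C x_t + D u_t (2.4).
    // The double-integrator system and all numbers are illustrative assumptions.
    #include <array>
    #include <cstdio>

    using Vec2 = std::array<double, 2>;
    using Mat2 = std::array<std::array<double, 2>, 2>;

    // Returns M*x + add
    Vec2 mulAdd(const Mat2& M, const Vec2& x, const Vec2& add) {
        return { M[0][0]*x[0] + M[0][1]*x[1] + add[0],
                 M[1][0]*x[0] + M[1][1]*x[1] + add[1] };
    }

    int main() {
        const double dt = 0.1;                    // sample time [s]
        Mat2 A = {{ {1.0, dt}, {0.0, 1.0} }};     // state = [position, velocity]
        Vec2 b = { 0.5*dt*dt, dt };               // B column for a scalar input u
        Vec2 x = { 0.0, 0.0 };                    // initial state

        for (int t = 0; t < 10; ++t) {
            double u = 1.0;                       // constant acceleration command
            x = mulAdd(A, x, { b[0]*u, b[1]*u }); // x_{t+1} = A x_t + B u_t
            double y = x[0];                      // y_t = C x_t with C = [1 0], D = 0
            std::printf("t=%d y=%.3f\n", t + 1, y);
        }
    }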

2.2 Bayesian Filtering

The pose of a robot is determined by its current state xt. When the system enters a new
state xt+1, as a result of some control command ut, the new state will produce a
measurement zt+1. The relationship between controls, states, and measurements
is shown in Fig. 2.1 below.

Figure 2.1: Relationship between states, controls, and measurements.



A Bayes filter is an algorithm which calculates the probability density function
(PDF) for the state vector xt. This is done in two steps, the prediction step
b̄el(xt) and the correction step bel(xt), as

b̄el(xt) = ∫ p(xt | ut, xt−1) bel(xt−1) dxt−1 (2.5a)

bel(xt) = η p(zt | xt) b̄el(xt) (2.5b)

where in the prediction step (given by Eq. 2.5a), a new state xt is predicted us-
ing only the control command ut and the previous state xt−1. This prediction
is altered in the correction step by using sensor data in order to get a better
estimation [28, p. 27]. A scaling factor η is introduced to scale the resulting PDF
in order to make the integral sum to 1 [28, p. 17]. Pseudo-code for the algorithm is
described in Algorithm 1.

Algorithm 1 Recursive Bayes Filter


1: function bayes_filter(bel(xt−1), ut, zt)
2:     for all xt do
3:         b̄el(xt) = ∫ p(xt | ut, xt−1) bel(xt−1) dxt−1
4:         bel(xt) = η p(zt | xt) b̄el(xt)
5:     end for
6:     return bel(xt)
7: end function
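As a concrete illustration of Algorithm 1, the following minimal C++ sketch runs one prediction/correction cycle for a discrete state over a small ring of grid cells, where the integral in (2.5a) reduces to a sum. The motion and measurement models are illustrative assumptions, not models used in the thesis.

    // Discrete Bayes filter sketch: the state is one of n cells on a ring.
    // Motion and measurement models are illustrative assumptions.
    #include <vector>
    #include <cstdio>

    // p(x_t | u_t, x_{t-1}): intended move of u cells, with a chance of slipping
    double motionModel(int xt, int u, int xprev, int n) {
        int ideal = ((xprev + u) % n + n) % n;
        if (xt == ideal)               return 0.8;  // intended cell
        if (xt == (ideal + n - 1) % n) return 0.1;  // undershoot
        if (xt == (ideal + 1) % n)     return 0.1;  // overshoot
        return 0.0;
    }

    std::vector<double> bayesFilter(const std::vector<double>& bel, int u,
                                    const std::vector<double>& likelihood) {
        const int n = (int)bel.size();
        std::vector<double> pred(n, 0.0), post(n, 0.0);
        for (int xt = 0; xt < n; ++xt)              // prediction step (2.5a)
            for (int xp = 0; xp < n; ++xp)
                pred[xt] += motionModel(xt, u, xp, n) * bel[xp];
        double eta = 0.0;
        for (int xt = 0; xt < n; ++xt) {            // correction step (2.5b)
            post[xt] = likelihood[xt] * pred[xt];   // likelihood[xt] = p(z_t | x_t)
            eta += post[xt];
        }
        for (double& p : post) p /= eta;            // normalize so the PDF sums to 1
        return post;
    }

    int main() {
        std::vector<double> bel(5, 0.2);            // uniform prior over 5 cells
        std::vector<double> z = {0.1, 0.1, 0.6, 0.1, 0.1}; // sensor favors cell 2
        bel = bayesFilter(bel, 1, z);
        for (double p : bel) std::printf("%.3f ", p);
        std::printf("\n");
    }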

2.3 Kalman filter


A technique which implements a Bayes filter is the Kalman filter, which is used
in linear Gaussian systems. Basically, it computes a belief for a continuous state
[28, p. 40]. This state represents information regarding the vehicle, such as the
odometry, and its surrounding environment, for example landmarks [28, p. 20].
The belief of the state at a specific time instance t, bel(xt), can be expressed by its
mean and its covariance. [28, p. 40]
The Kalman filter needs three specific properties as prerequisites [28, p. 41-
42], and these are:

• The current state xt needs to be represented by linear arguments, meaning
that

xt = At xt−1 + Bt ut + εt (2.6)

where At and Bt are matrices. Moreover, the added noise source εt should be
of Gaussian nature. The current state xt depends on the previous state xt−1,
the control ut, and the noise εt.

• The current measurement zt requires its arguments to be of linear nature,
meaning that

zt = Ct xt + δt (2.7)

where Ct is a matrix and the noise source δt is Gaussian. The current mea-
surement zt depends on the current state xt and the noise source δt.
• The last prerequisite is that the start belief bel(x0 ) has to be Gaussian dis-
tributed.
The Kalman filter algorithm itself, shown below in Algorithm 2, expresses the
belief of the current state xt at a specific time t. bel(xt) is expressed by
the mean µt and the covariance Σt, which are also the output of the algorithm.
[28, p. 42]

Algorithm 2 Kalman filter algorithm


1: function Kalman_filter(µt−1, Σt−1, ut, zt)
2:     µ̄t = At µt−1 + Bt ut
3:     Σ̄t = At Σt−1 Atᵀ + Rt
4:     Kt = Σ̄t Ctᵀ (Ct Σ̄t Ctᵀ + Qt)⁻¹
5:     µt = µ̄t + Kt (zt − Ct µ̄t)
6:     Σt = (I − Kt Ct) Σ̄t
7:     return µt, Σt
8: end function

Basically, the algorithm calculates predictions of the mean µ̄t and the covariance
Σ̄t at lines 2 and 3, where At and Bt are matrices which, multiplied by the state
vector xt−1 and the control vector ut, represent the state transition probability
with linear arguments, which is one of the requirements for the Kalman filter
algorithm to be successful. These predictions are used in the correction step, lines
4-6. The first step of the correction is the calculation of the Kalman gain, denoted
Kt. The corrected mean µt is then computed from the predicted mean µ̄t plus the
Kalman gain times the innovation, that is, the measurement zt minus the predicted
measurement Ct µ̄t from (2.7). At last, the new covariance Σt is calculated from
the Kalman gain factor as well as the predicted covariance Σ̄t [28, p. 43].
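To make the algorithm concrete, the following minimal C++ sketch implements Algorithm 2 for a one-dimensional state, where At, Bt, and Ct are scalars and the matrix inverse reduces to a division. All numerical values are illustrative assumptions, not parameters from the thesis.

    // Minimal 1D sketch of Algorithm 2. R is the process noise variance,
    // Q the measurement noise variance; all numbers are illustrative.
    #include <cstdio>

    struct Belief { double mu; double sigma; };      // mean and (co)variance

    Belief kalmanFilter(Belief prev, double u, double z,
                        double A, double B, double C, double R, double Q) {
        // Prediction (lines 2-3): propagate mean and covariance
        double muBar    = A * prev.mu + B * u;
        double sigmaBar = A * prev.sigma * A + R;
        // Correction (lines 4-6): Kalman gain, then update with the innovation
        double K     = sigmaBar * C / (C * sigmaBar * C + Q);
        double mu    = muBar + K * (z - C * muBar);
        double sigma = (1.0 - K * C) * sigmaBar;
        return {mu, sigma};
    }

    int main() {
        Belief bel = {0.0, 1.0};                     // Gaussian start belief bel(x0)
        double zs[] = {0.9, 2.1, 2.9, 4.2};          // noisy position measurements
        for (double z : zs) {
            bel = kalmanFilter(bel, 1.0, z, 1.0, 1.0, 1.0, 0.01, 0.25);
            std::printf("mu=%.3f sigma=%.3f\n", bel.mu, bel.sigma);
        }
    }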

2.4 Extended Kalman filter


The Extended Kalman Filter is an approach for dealing with nonlinear models,
which the traditional Kalman filter cannot handle. Basically, real-world
measurements and state transitions are often not linear, so a normal Kalman
filter's assumption of linear data is not accurate. The EKF handles this
by describing the state transition probability and the measurement
probability with nonlinear functions [28, p. 54]

x(t) = g(u(t), x(t − 1)) + ε(t) (2.8a)

z(t) = h(x(t)) + δ(t) (2.8b)
As stated above, the EKF uses nonlinear functions. The output of the algo-
rithm is an approximation in the form of a Gaussian distribution, with an esti-
mated mean and covariance. The approximation is needed because the algorithm
uses nonlinear functions, which have to go through a linearization process. This
process linearizes the functions by, for example, using a first-order Taylor expansion
around a linearization point, as can be seen in Fig. 2.2. The resulting Gaussian
distribution can then be calculated in closed form. This enables the extraction of
interesting characteristics of the distribution, such as the mean µt and the covariance
Σt. Given the mean and the covariance, the belief bel(xt) can be represented
[28, p. 56-58]. The EKF algorithm is shown below in Algorithm 3 [28, p. 59].

Figure 2.2: Linearization of the nonlinear function g(x). Dashed line indi-
cates the Taylor approximation of g(x).

Algorithm 3 Extended Kalman filter algorithm


1: function Extended_Kalman_filter(µt−1, Σt−1, ut, zt)
2:     µ̄t = g(ut, µt−1)
3:     Σ̄t = Gt Σt−1 Gtᵀ + Rt
4:     Kt = Σ̄t Htᵀ (Ht Σ̄t Htᵀ + Qt)⁻¹
5:     µt = µ̄t + Kt (zt − h(µ̄t))
6:     Σt = (I − Kt Ht) Σ̄t
7:     return µt, Σt
8: end function

As can be seen, Algorithm 3 is slightly different compared to the Kalman
filter in Algorithm 2. This is due to the fact that the EKF uses the nonlinear
functions g and h to calculate the mean µt and the covariance Σt.
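The following minimal C++ sketch shows Algorithm 3 for a one-dimensional state, where the Jacobians Gt and Ht reduce to scalar derivatives. The nonlinear functions g and h and all numbers are illustrative assumptions, not the models used later in this thesis.

    // Minimal 1D sketch of Algorithm 3 with scalar Jacobians G and H.
    // The functions g, h and all numbers are illustrative assumptions.
    #include <cmath>
    #include <cstdio>

    struct Belief { double mu; double sigma; };

    double g(double u, double x)    { return x + u * std::cos(x); }  // motion model
    double gJac(double u, double x) { return 1.0 - u * std::sin(x); }
    double h(double x)              { return x * x; }                // measurement model
    double hJac(double x)           { return 2.0 * x; }

    Belief ekf(Belief prev, double u, double z, double R, double Q) {
        // Prediction: pass the mean through g, linearize around mu_{t-1}
        double muBar = g(u, prev.mu);
        double G = gJac(u, prev.mu);
        double sigmaBar = G * prev.sigma * G + R;
        // Correction: linearize h around the predicted mean muBar
        double H = hJac(muBar);
        double K = sigmaBar * H / (H * sigmaBar * H + Q);
        return { muBar + K * (z - h(muBar)), (1.0 - K * H) * sigmaBar };
    }

    int main() {
        Belief bel = {0.5, 0.5};
        bel = ekf(bel, 0.2, 0.6, 0.01, 0.04);
        std::printf("mu=%.3f sigma=%.3f\n", bel.mu, bel.sigma);
    }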

2.5 Particle filter


Another implementation of a Bayes filter is the particle filter. In contrast to the
Extended Kalman filter, it does not use a parametric model for the probability
distributions. The main idea of the particle filter is that it instead keeps a set of
[m]
M samples, where each sample xt (known as a "particle") is a set of potential
state variable values. The probability itself of these state variables is represented
by how many particles contain similar state variable values, meaning that areas
with high probability will have a higher density of particles than the areas where
the probability is low [19, p. 27].
The particle filter is described in Algorithm 4[28, p. 98], where X̃t is the
predicted state vector, and Xt is the corrected state vector.

Algorithm 4 Particle Filter


1: function particle_filter(Xt−1, ut, zt)
2:     X̃t = Xt = ∅
3:     for m = 1 to M do
4:         sample x_t^[m] ∼ p(x_t | u_t, x_{t−1}^[m])
5:         w_t^[m] = p(z_t | x_t^[m])
6:         X̃t = X̃t + (x_t^[m], w_t^[m])
7:     end for
8:     for m = 1 to M do
9:         draw i with probability ∝ w_t^[i]
10:        add x_t^[i] to Xt
11:    end for
12:    return Xt
13: end function

2.5.1 Sampling
During sampling (lines 3-7 in Algorithm 4), new values are generated for each
particle in the filter. These values are only temporary, and the particles them-
selves will not be updated until the resampling stage. The new state of each
particle is due to the control ut and the previous state of the particle x_{t−1}^[m].
After this, the importance factor w_t^[m] is calculated. The importance factor is
due to the measurement zt and is used as a likelihood measure of the particle in
the forthcoming resampling of the particle filter. [28, p. 99]

2.5.2 Resampling
During resampling (lines 8-10 in Algorithm 4), the actual particles in the filter are
updated. This is done by randomly selecting particles generated in the sampling
stage, where the probability of selecting each particle is proportional to its impor-
tance factor. The resampling is done with replacement, so each particle can be
drawn several times. [28, p. 99]
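A minimal C++ sketch of the resampling stage is given below: particles are drawn with replacement, with probability proportional to their importance factors. The scalar particle states and the weights are illustrative assumptions.

    // Sketch of resampling (lines 8-10 of Algorithm 4): draw with replacement,
    // probability proportional to the importance factors.
    #include <vector>
    #include <random>
    #include <cstdio>

    std::vector<double> resample(const std::vector<double>& particles,
                                 const std::vector<double>& weights,
                                 std::mt19937& rng) {
        // discrete_distribution normalizes the weights internally
        std::discrete_distribution<int> draw(weights.begin(), weights.end());
        std::vector<double> out;
        out.reserve(particles.size());
        for (size_t m = 0; m < particles.size(); ++m)
            out.push_back(particles[draw(rng)]);     // drawn with replacement
        return out;
    }

    int main() {
        std::mt19937 rng(42);
        std::vector<double> x = {0.0, 1.0, 2.0, 3.0};
        std::vector<double> w = {0.05, 0.60, 0.30, 0.05}; // importance factors
        x = resample(x, w, rng);
        for (double xi : x) std::printf("%.1f ", xi);     // high-weight particles dominate
        std::printf("\n");
    }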

2.5.3 Rao-Blackwellized Particle Filter


There’s also another version of the Particle Filter called the Rao-Blackwellized par-
ticle filter. This is an extension of the regular particle filter where some state
variables are represented by particles, and some with Gaussians. [28, p. 437]
3 Simultaneous localization and mapping

This chapter describes the basic features of a generic SLAM algorithm, as well
as introducing a few SLAM algorithms. The key features of these algorithms are
then summarized in Section 3.5, where each algorithm is also evaluated in terms
of its suitability for implementation in the physical vehicle of this thesis.

3.1 Introduction to SLAM


Simultaneous localization and mapping is a problem where a moving object needs
to build a map of an unknown environment, while simultaneously calculating its
position within this map. [12, p. 229]
There are several areas which could benefit from having autonomous vehicles
with SLAM algorithms implemented. Examples would be the mining industry,
underwater exploration, and planetary exploration.[12, p. 229] The SLAM prob-
lem in general can be formulated using a probability density function denoted
p(xt , m|z1:t , u1:t ), where xt is the position of the vehicle, m is the map, and z1:t is
a vector of all measurements. u1:t is a vector of the control signals of the vehicle,
which is either the control commands themselves or odometry, depending on the
application.

3.1.1 Data association


Basically, the concept of data association is to investigate the relationship between
older data and newly gathered data. In a SLAM context, it is necessary to relate
older measurements to newer measurements. This enables the process of deter-
mining the locations of landmarks in the environment, which in turn also gives
information regarding the robot's position within the map. [4, p. 26]


3.1.2 Loop closure


The concept of loop closure in a SLAM context is the ability of a vehicle to rec-
ognize that a location has already been visited. By applying a loop closure algo-
rithm, the accuracy of both the map and the vehicle's position within the map
can be increased. [30, p. 1] However, this is not an easy task to perform, since
the operating environment of the vehicle could contain structures similar to
previously visited locations. If the loop closure algorithm performs poorly, this
could lead to a faulty loop closure, which could corrupt both the map and the
pose of the vehicle within the map. [30, p. 1]

3.1.3 Full SLAM and online SLAM


There are two different types of SLAM problems: the online SLAM problem, in
which only the current pose xt and the map m are estimated, given the control
input u1:t and the measurements z1:t, and the full SLAM problem, which esti-
mates the entire trajectory of the robot. The PDF of the full SLAM problem is
denoted p(x0:t, m | z1:t, u1:t), where all the poses of the robot are considered, in-
cluding its initial pose x0. [17, p. 21]

3.1.4 General models for SLAM problem


A motion model for the SLAM problem can be described by

xt = g(ut , xt−1 ) + δt (3.1)

where the current pose xt depends on the possibly nonlinear function g(ut, xt−1),
with ut being the control input, xt−1 the previous pose, and δt being a random
Gaussian variable with zero mean. [17, p. 19]

A measurement model for the SLAM problem can be described by:

zt = h(xt, m) + εt (3.2)

where the current measurement zt depends on the possibly nonlinear func-
tion h(xt, m), where xt denotes the current pose, m represents the map, and εt is a
random Gaussian variable with zero mean. [17, p. 19]

3.2 EKF SLAM


The EKF SLAM (Extended Kalman Filter) approach solves the SLAM prob-
lem by using sensor data gathered from the movement and rotation of the
robot. This can be done, for example, by using wheel encoders and a gyroscope.
Furthermore, it is also necessary to gather information about the environment by,
for example, using a laser range finder (LIDAR). With this data, the algorithm

keeps track of where the robot is likely positioned within a map, as well as keep-
ing track of specific landmarks observed.[4]

3.2.1 Algorithm
When the robot is turned on, the sensors on the robot, such as the wheel encoders
and the gyroscope, will gather information about the positioning of the robot. More-
over, landmarks are extracted from the environment based on new observa-
tions from the LIDAR mounted on the robot. These new observations
are associated with previous observations and updated in the EKF algorithm.
However, if an observation of a landmark cannot be associated with a previous
observation, the observation itself is presented to the EKF algorithm as a new
observation. [4, p. 29] This process is illustrated in Fig. 3.1.

Figure 3.1: EKF SLAM algorithm. [4, p. 10]

First and foremost, the robot localizes itself within the map being created
using observed landmarks and information from the sensors. The landmarks
should preferably be observable from multiple angles, and should be distinguish-
able from other landmarks. These prerequisites give the EKF algorithm a good
possibility to distinguish between landmarks at a later time. Moreover, the land-
marks being used should also be stationary. [4, p. 18]
To describe the EKF SLAM process in short terms, it consists of two stages:

• Prediction step
– Predict the current state, expressed by the predicted mean and the pre-
dicted covariance. These are calculated based on the control input, the
system matrices, the previous covariance, and the previous mean. [27, p. 20-28]

• Correction step
– First and foremost, the idea behind the correction step is data as-
sociation. The main idea behind associating data in a SLAM context is
to distinguish between an earlier observed landmark and a newly ob-
served one. [27, p. 15-16, 30] When a new measurement has occurred,
the procedure is as follows:

* Store the locations of newly observed landmarks. [27, p. 15-16, 30]

* Associate previously observed landmarks with newly observed ones.
[27, p. 15-16, 30]

– Furthermore, the correction gain, also known as the Kalman gain, is
computed. Basically, it is a correction gain factor necessary to up-
date the mean and covariance of the current state. The current mean
and covariance do not depend only on the Kalman gain, though; they
also take the predicted mean, the predicted covariance, and the
measurement into account. [27, p. 37-38]

3.2.2 Computational complexity

The EKF SLAM algorithm is quite costly in comparison to other SLAM algorithms.
When new landmarks are detected, they are added to the state vector of the filter,
so the state grows linearly with the number N of identified landmarks [23, p. 7765].
In terms of numbers, the use of memory is O(N²). The cost of computing one step
of the algorithm is approximately O(N²) [27, p. 57] [5, p. 792], and the complete
cost of computing the whole algorithm is O(N³), heavily dependent on the size of
the map [5, p. 792].

3.2.3 Data association

A bottleneck when using the EKF algorithm is the computational cost, which in-
creases as new landmarks are observed. This leads to a prerequisite that the
environment should not contain too many landmarks. However, when the num-
ber of landmarks in the environment is small, localization within the map
and data association become increasingly difficult. [28, p. 330-331]
Moreover, the data association is made by computing the maximum likelihood,
and in case of a faulty association, future results which are based on previous
associations will also be faulty. [28, p. 331-332]
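To make the maximum-likelihood association concrete, the following hedged C++ sketch gates candidate landmarks by their Mahalanobis distance to a new 2D observation and declares a new landmark if every distance exceeds a gate. The landmark covariances and the gate value are illustrative assumptions, not the thesis's implementation.

    // Sketch of maximum-likelihood data association in 2D: match an observation
    // to the landmark with the smallest Mahalanobis distance, or declare it new.
    #include <vector>
    #include <cstdio>

    struct Landmark { double x, y; double sxx, sxy, syy; }; // mean + 2x2 covariance

    // Squared Mahalanobis distance of observation (ox, oy) from landmark lm
    double mahalanobis2(const Landmark& lm, double ox, double oy) {
        double dx = ox - lm.x, dy = oy - lm.y;
        double det = lm.sxx * lm.syy - lm.sxy * lm.sxy;   // invert the covariance
        double ixx = lm.syy / det, iyy = lm.sxx / det, ixy = -lm.sxy / det;
        return dx * (ixx * dx + ixy * dy) + dy * (ixy * dx + iyy * dy);
    }

    // Returns the index of the associated landmark, or -1 for "new landmark"
    int associate(const std::vector<Landmark>& map, double ox, double oy, double gate) {
        int best = -1;
        double bestD = gate;
        for (size_t i = 0; i < map.size(); ++i) {
            double d = mahalanobis2(map[i], ox, oy);
            if (d < bestD) { bestD = d; best = (int)i; }
        }
        return best;
    }

    int main() {
        std::vector<Landmark> map = { {1.0, 1.0, 0.1, 0.0, 0.1},
                                      {4.0, 0.0, 0.2, 0.0, 0.2} };
        std::printf("%d\n", associate(map, 1.2, 0.9, 9.0));  // matches landmark 0
        std::printf("%d\n", associate(map, 8.0, 8.0, 9.0));  // -1: new landmark
    }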

3.3 FastSLAM
Another way of solving the SLAM problem is using the FastSLAM algorithm,
which makes use of the Rao-Blackwellized particle filter. FastSLAM takes ad-
vantage of something that most other SLAM algorithms do not, namely the fact
that each observation only concerns a small amount of the state variables. The
pose of the robot is probabilistically dependent on its previous pose, whereas the
landmark locations are probabilistically dependent on the position of the robot.
[19, p. 27] This is not taken into account by the EKF SLAM algorithm, for exam-
ple, which with each update has to recalculate its covariance matrix, resulting in
poor scaling in large maps. [19, p. 29] This is not a problem in FastSLAM, as it
factors out the calculations to simpler subproblems, as shown by
p(s^t, θ | z^t, u^t, n^t) = p(s^t | z^t, u^t, n^t) ∏_{n=1}^{N} p(θ_n | s^t, z^t, u^t, n^t) (3.3)

which factors out the robot path s^t and each of the N landmarks θ_n. One major
difference between FastSLAM and, for example, EKF SLAM is that the entire
path s^t is calculated, as opposed to just the current pose xt. This means that
FastSLAM is a full SLAM algorithm and EKF SLAM is an online SLAM algorithm,
as described in Section 3.1.3.

3.3.1 Particle structure


The particle filter used in FastSLAM is comprised of M particles, where each
particle is defined by

S_t^[m] = (s^t,[m], µ_1,t, Σ_1,t, ..., µ_N,t, Σ_N,t) (3.4)

Each particle contains N + 1 variables: one robot path s^t,[m], and N landmark
posteriors, where each landmark is represented by an EKF with a mean µ_n,t and
a variance Σ_n,t. During one iteration of the algorithm, the robot path of every
particle s^t,[m] is updated once, as well as each landmark filter (µ_n,t, Σ_n,t).

3.3.2 Algorithm
Each iteration of the algorithm can be summarized in four steps, presented below.

1. For each of the M particles, sample a new robot path s_t^[m] from p(s_t | s_{t−1}^[m], u_t)

2. With the new measurement, update the landmark filters

3. Give each particle an importance factor

4. Resample the particles, where particles with a higher importance factor have
a higher draw probability.

In step 1, a new robot path is sampled for each particle. This path is drawn
from the motion model, with the robot control as input. After this, the posteriors
of the landmarks are updated due to the new measurement. Each particle is then
given an importance factor, which is a measure of how realistic the variables in
the particle are. Finally, the particles are resampled in order to get rid of the
ones which do not have a high probability. [19, p. 10]

3.3.3 FastSLAM 2.0


The FastSLAM algorithm (and particle filters in general) performs more accu-
rately when there is more diversity among the particles. Particles will not be
diverse if, for example, the odometry sensors are noisy and the laser range finder
is accurate, as this will render many of the motion hypotheses unlikely, and
many particles will have a low importance factor. Because of this drawback of
the FastSLAM algorithm, an updated version has been developed, named FastSLAM
2.0. FastSLAM 2.0 is quite similar to its predecessor, with the exception that
when estimating the motion, it also considers the range sensors. This creates bet-
ter guesses in general, and will result in more particles having higher importance
factors. [19, p. 63]

3.3.4 Computational complexity


If FastSLAM is implemented directly as described above, it has O(MN)
computational complexity, where M is the number of particles and N is the num-
ber of landmarks. [19, p. 48] Implementing the algorithm like this would not be
a good idea, since it would scale linearly with the number of landmarks, which
would render the algorithm unsuitable for large maps.
To address this issue, FastSLAM is implemented using binary trees rather
than the array structure shown in (3.4). The algorithm will then have O(M log N)
complexity [28, p. 465], as illustrated in Fig. 3.2 below, which shows
such a tree in a single particle with N = 8.
Using this binary tree approach also significantly reduces the amount of mem-
ory used by the algorithm. [19, p. 58]

Figure 3.2: Binary tree data structure of FastSLAM particle
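To illustrate why the tree gives O(M log N) complexity, the following minimal C++ sketch builds a balanced binary tree over N = 8 landmark slots (as in Fig. 3.2) and locates a landmark by descending O(log N) nodes. The per-particle path copying used by FastSLAM is omitted for brevity; the structure here is an illustrative assumption, not the thesis's implementation.

    // Balanced-tree sketch: the N landmark filters sit in the leaves, so
    // reaching landmark n costs O(log N) instead of O(N).
    #include <memory>
    #include <cstdio>

    struct Node {
        // Leaves hold one landmark EKF (mean/covariance omitted: just an id here)
        int landmarkId = -1;
        std::shared_ptr<Node> left, right;
    };

    // Build a balanced tree over landmark ids in [lo, hi)
    std::shared_ptr<Node> build(int lo, int hi) {
        auto n = std::make_shared<Node>();
        if (hi - lo == 1) { n->landmarkId = lo; return n; }
        int mid = (lo + hi) / 2;
        n->left  = build(lo, mid);
        n->right = build(mid, hi);
        return n;
    }

    // Descend to landmark `id`, visiting O(log N) nodes
    const Node* find(const Node* n, int id, int lo, int hi) {
        while (hi - lo > 1) {
            int mid = (lo + hi) / 2;
            if (id < mid) { n = n->left.get();  hi = mid; }
            else          { n = n->right.get(); lo = mid; }
        }
        return n;
    }

    int main() {
        const int N = 8;                      // as in Fig. 3.2
        auto root = build(0, N);
        std::printf("%d\n", find(root.get(), 5, 0, N)->landmarkId); // prints 5
    }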

3.3.5 Data Association


Another advantage of the FastSLAM algorithm is the fact that its data associa-
tion is on a per-particle basis. This means that each particle will have its own
hypothesis on which landmark the measurement is associated with. If a measure-
ment associates incorrectly with a landmark in a particle, this particle will have
a low importance factor, thus making the impact of this wrong association very
small. In other algorithms, such as EKF-based SLAM, once an incorrect data as-
sociation has been made, it will cause the algorithm to produce incorrect results
from that point. This is one of the main advantages of the FastSLAM algorithm in
comparison to other algorithms, and contributes to the algorithm having a high
robustness. [19, p. 41]

3.4 GraphSLAM
GraphSLAM is another approach of solving the SLAM problem. The idea be-
hind GraphSLAM is to address the SLAM problem by using a graph. The graph
contains nodes which represent the poses of the robot x0 , ..., xt as well as land-
marks in the map which are denoted as m0 , ..., mt . However, this is not enough to
address the SLAM problem, as there are also constraints between the positions
xt , xt−1 , xt−2 , ..., xt−n of the robot and the landmarks m0 , ..., mt . [26, p.361] These
constraints represents the distance between adjacent locations such as xt−1 , xt as
well as the distance between the locations to the landmarks m0 , ...mt . These con-
straints can be constructed due to information from the odometry source ut .[26,
p. 361-362] The graph mentioned earlier can be converted to a sparse matrix.
Which in contrast to dense matrices can be more efficiently used. [17, p. 1] Fig.
3.3 illustrates how the landmarks m0 , ..., mt , the locations x0 , ..., xt and the con-
straints are linked together.

Figure 3.3: Robot poses seen as circles, landmarks of the map seen as
squares, dashed lines representing the motion of the robot based on control
commands given to the robot, as well as solid lines representing the distance
to the landmarks given by measurements.

3.4.1 Algorithm
The PDF for the full SLAM problem is denoted p(x0:t, m | z1:t, u1:t). For simplicity,
all the poses x0:t and the landmarks m can be described by a state-space vector y,
as shown in Fig. 3.4.

Figure 3.4: State-space vector containing all poses and landmarks

Then, in order to obtain the most likely pose of the robot within the map and
the positions of the landmarks in the map, an optimization algorithm has to be
applied. [17, p. 11]
Moreover, the PDF p(y|z1:t, u1:t) contains the possibly nonlinear functions
g(ut, xt−1) and h(xt, mi), where mi represents the i-th landmark observed at
time t. [17, p. 4, 11] The optimization algorithm expects linear functions, so a
necessary step is to linearize these. This could be done, for example, by a
first-order Taylor expansion. [17, p. 11]
As specified earlier, the GraphSLAM approach solves the SLAM problem
by introducing a graph. This graph can be converted to a sparse matrix. Basi-
cally, this sparse matrix contains all the information acquired by the robot while
moving in an environment, for example the robot's movement through the map
and landmarks observed at different locations. Thus, as the robot moves through
the environment, the sparse matrix will increase in size and contain ever more
information. [17, p. 14]
In Figs. 3.5, 3.6, and 3.7 below, it is illustrated how robot poses (blue squares)
and observed landmarks (red squares) are added to the sparse matrix. The
blue squares are also adjacent to each other, since each new pose depends on the
previous pose. The diagonal grey squares show that the landmark of each row
m1, m2, m3, m4 and its corresponding column m1, m2, m3, m4 is one and the
same landmark.

Figure 3.5: Robot poses and landmarks observed



Figure 3.6: Robot poses and landmarks observed

Figure 3.7: Robot poses and landmarks observed
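The following minimal C++ sketch, with scalar variables and illustrative weights, shows how pairwise constraints fill such a sparse information matrix: each odometry or measurement constraint only touches the rows and columns of the two variables it links, which is why most of the matrix stays empty. It is an assumption-laden illustration, not the thesis's implementation.

    // Sketch: accumulate pairwise constraints into a sparse information matrix.
    // Variables are scalars for brevity; the weights are illustrative.
    #include <map>
    #include <utility>
    #include <cstdio>

    using SparseMat = std::map<std::pair<int, int>, double>; // only nonzeros stored

    // Add a pairwise constraint between variables i and j with information w
    void addConstraint(SparseMat& H, int i, int j, double w) {
        H[{i, i}] += w;   // diagonal blocks
        H[{j, j}] += w;
        H[{i, j}] -= w;   // off-diagonal coupling between the linked variables
        H[{j, i}] -= w;
    }

    int main() {
        // Variable indices: poses x0=0, x1=1, x2=2; landmarks m1=3, m2=4
        SparseMat H;
        addConstraint(H, 0, 1, 1.0);  // odometry x0 -> x1
        addConstraint(H, 1, 2, 1.0);  // odometry x1 -> x2
        addConstraint(H, 0, 3, 2.0);  // x0 observes m1
        addConstraint(H, 2, 4, 2.0);  // x2 observes m2
        std::printf("nonzero entries: %zu of %d\n", H.size(), 5 * 5);
    }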

3.4.2 Computational complexity


Some interesting statistics when discussing the GraphSLAM algorithm is the com-
putational load and the use of memory. In comparison with the EKF SLAM algo-
rithm of which the use of memory grows by O(N 2 ) when new landmarks N are in-
troduced and the computational cost by O(N 3 ), the GraphSLAMs use of memory
grows linearly when new landmarks N are introduced. As for the computational
cost of the GraphSLAM algorithm it is dependent on time, thus, if the path of
the vehicle is very long time-wise the computational cost can be costly. However,
in general the attributes of the GraphSLAM algorithm is very desirable when the
environment contains a lot of landmarks, of which the EKF algorithm tends to
fail.[26, p. 362]. [28, p. 330]

3.4.3 Data association


The data association part of the GraphSLAM algorithm is done using a vector
containing correspondences between landmarks observed in the map. [28, p. 362]
The algorithm performs a correspondence test to check the probability that dif-
ferent landmarks observed within the map are actually the same landmark. If the
result of this correspondence test exceeds a certain value, the algorithm will de-
termine the landmarks to be the same landmark; otherwise it will determine them
to be two separate landmarks. [28, p. 363] In comparison with the EKF SLAM
algorithm, where future data associations can be faulty due to earlier data
associations, the GraphSLAM algorithm can reexamine older data associations,
thus reducing the risk of introducing errors in future data associations. [28, p.
339, 363] The GraphSLAM algorithm solves the full SLAM problem [17, p. 15],
in contrast to the EKF SLAM algorithm, which only solves the online SLAM
problem.

3.5 Theoretical evaluation of SLAM


Given the specified thesis scope in Section 1.3, a theoretical investigation regard-
ing three different SLAM algorithms, namely EKF SLAM, GraphSLAM, and FastSLAM,
has been conducted.
There are several parameters of interest when discussing SLAM algorithms;
the ones investigated here are robustness, efficiency, and ROS compatibility. These
parameters are evaluated and compared to each other below.

• EKF

– Robustness
* Given what has been stated in Section 3.2.3, a faulty data association will
result in erroneous future results; thus the robustness of the EKF
algorithm is low. [28, p. 331-332]
– Efficiency
* In Section 3.2.2, the total computational cost of executing the al-
gorithm is stated as O(N³), to a great degree dependent on the size of the
map [5, p. 792]. In conclusion, this makes the EKF SLAM algorithm
more suitable for smaller maps containing fewer landmarks, due
to the heavy computational load. [27, p. 57] However, as stated
in Section 3.2.3, the localization and data association can be difficult
when the landmarks in the environment are few. [28, p. 330-331]
– ROS compatibility
* Implemented in the mrpt_ekf_slam_2d package.

• FastSLAM

– Robustness
* Given what has been stated in Section 3.3.5, the FastSLAM algorithm is sig-
nificantly more robust than the EKF SLAM algorithm, due to the
fact that the data association from measurements to landmarks is
made per particle, which reduces the impact of a wrong data associa-
tion.
– Efficiency
* In Section 3.3.4, the total computational cost is stated as O(M log N),
meaning that the complexity depends on the number of particles M
and the number of landmarks N.
– ROS compatibility
* Implemented in the gMapping package.
• GraphSLAM

– Robustness
* Given what has been stated in Section 3.4.3, the robustness of the Graph-
SLAM algorithm is higher than that of the EKF algorithm, partly due to
the fact that older data associations can be reexamined if a wrong
data association has been performed. Basically, this reduces the
risk of producing erroneous future data associations, thus increas-
ing the robustness.
– Efficiency
* In Section 3.4.2, it is stated that the use of memory grows linearly with
the number of landmarks N, in comparison to the EKF algorithm,
which uses O(N²) memory. Moreover, the computational
load is time-dependent, so if the path is long, the algorithm can
be quite costly, as stated in Section 3.4.2.
– ROS compatibility
* Implemented in the slam_karto package.

From what can be read above, the EKF SLAM algorithm is not very robust,
while both the GraphSLAM algorithm and the FastSLAM algorithm have a higher
robustness. Moreover, in terms of efficiency, the EKF SLAM algorithm is very
costly. Given that all algorithms have been implemented in ROS packages, each
of them is a viable choice. However, as the EKF SLAM algorithm lacks in both
robustness and efficiency, the conclusion is to use one of the more effective algorithms,
FastSLAM or GraphSLAM, instead of the EKF algorithm.
These algorithms are able to efficiently deal with maps containing more land-
marks, as well as larger maps in terms of sheer size. With maps containing a cou-
ple of hundred landmarks, for example when trying to map the outside world,
the computational complexity of the EKF algorithm could prove to be a huge
problem to deal with. [18, p. 1], [28, p. 330]
4 Vehicle Dynamics
This chapter describes some of the vehicle dynamics theory which was used to
develop the dynamic model of the small-scale vehicle. Some of the theory pre-
sented in this chapter was ultimately not used in the final model, but is still
included here since it describes the initial modeling approach. This chapter only
concerns the movement of the vehicle, whereas the theory concerning the sensors
is presented in Chapter 5.

4.1 Longitudinal dynamics


Longitudinal dynamics describe the movement of a vehicle along the longitudi-
nal axis. Thus, it can be said that longitudinal dynamics describe the speed of
the vehicle in the forward and backward direction, due to how different forces
act upon the vehicle.

4.1.1 Gear ratio


In a vehicle the motor and the wheels rotate at different speeds. This is due to the
fact that they are connected by different sized gears. The relationship between
the angular velocity ω of the motor and that of the wheels is known as the gear ratio [15, p. 43]
and is defined as

n_gear = ω_motor / ω_wheels (4.1)

4.1.2 Wheel torque


The torque τ at the wheel of the vehicle can be computed with

P = τ ∗ ω (4.2)

where P is the power and ω is the angular velocity.
If it is assumed that there are no losses, the power at the motor and the wheels
will be equal, meaning that
Pmotor = Pwheels (4.3)
and together with (4.2) this implies

τmotor ∗ ωmotor = τwheels ∗ ωwheels (4.4)

If (4.1) is substituted into (4.4) we obtain the torque of the wheels as a function
of the gear ratio and torque of the motor

τwheels = ngear ∗ τmotor (4.5)

4.1.3 Forward velocity


To obtain the forward velocity of the vehicle, the longitudinal force of the driving
wheels needs to be calculated. This is obtained by
F = τ / r (4.6)

and in terms of the wheels it is denoted by

F = τ_wheels / r_wheels (4.7)

Newton's second law of motion can then be used to determine the acceleration
of the vehicle as

a = F_wheels / m_vehicle (4.8)

where m_vehicle denotes the mass of the vehicle. The rotational inertia of the
wheels and drive shaft can be neglected or included as a mass equivalent. Finally,
the velocity is obtained through integration as

v = ∫ a dt (4.9)
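Chaining (4.1)-(4.9) together, the following minimal C++ sketch computes the forward velocity from an assumed motor torque by Euler integration; all numerical parameters are illustrative assumptions, not measured values from the thesis vehicle.

    // Minimal sketch chaining (4.1)-(4.9): motor torque -> wheel torque via the
    // gear ratio, -> longitudinal force, -> acceleration, -> velocity by Euler
    // integration. All numeric parameters are illustrative assumptions.
    #include <cstdio>

    int main() {
        const double nGear  = 9.0;    // gear ratio (4.1)
        const double tauMot = 0.05;   // motor torque [Nm]
        const double rWheel = 0.03;   // wheel radius [m]
        const double mCar   = 2.0;    // vehicle mass [kg]
        const double dt     = 0.01;   // integration step [s]

        double tauWheel = nGear * tauMot;        // (4.5), assuming no losses
        double F = tauWheel / rWheel;            // (4.7)
        double a = F / mCar;                     // (4.8)

        double v = 0.0;
        for (int k = 0; k < 100; ++k)            // (4.9) as a discrete sum
            v += a * dt;
        std::printf("v after 1 s: %.2f m/s\n", v);
    }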

4.2 Lateral dynamics


Lateral dynamics describe the movement of the vehicle along the lateral axis.
Lateral movement is mainly determined by the heading angle, which needs to
be calculated in order to determine the lateral velocity of the vehicle.

4.2.1 Ackermann angle


The Ackermann steering angle is defined as the wheelbase angle

δ = l / R (4.10)

which causes the vehicle to turn with a specific heading angle, or yaw, where l is the
length of the wheelbase and R is the radius of the turn, i.e. the distance
between the center of gravity and the turn center, as Fig. 4.1 shows. [16, p. 129]

Figure 4.1: The Ackermann angle

4.2.2 Heading angle


If the Ackermann angle, the length of the wheelbase, and the forward velocity are
known, the heading angle of the vehicle can be calculated. The forward velocity
is defined as

v = R ∗ ψ̇ (4.11)

Since the turning radius is related to the Ackermann angle, (4.10) can be substituted
into (4.11) in order to get the yaw rate

ψ̇ = (v / l) ∗ δ (4.12)

Finally, the heading angle is obtained through integration as

ψ = ∫ ψ̇ dt (4.13)

4.3 Coordinates
Coordinates of the vehicle can then be calculated by integrating the velocities in
the x and y direction over time as

x = ∫ v_x dt (4.14a)

y = ∫ v_y dt (4.14b)

If the forward velocity v, vehicle length l, yaw rate ω = ψ̇, and yaw ψ of the ve-
hicle are known, the velocities v_x and v_y can be calculated using a transformation
matrix [10] as

[v_x]   [cos(ψ)  −sin(ψ)] [   v     ]
[v_y] = [sin(ψ)   cos(ψ)] [(l/2) ∗ ω]    (4.15)
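Putting (4.12)-(4.15) together, the following minimal C++ dead-reckoning sketch integrates the yaw rate and the world-frame velocities to track the vehicle's coordinates; the speed, steering angle, wheelbase, and time step are illustrative assumptions.

    // Dead-reckoning sketch combining (4.12)-(4.15): integrate the yaw rate to
    // get the heading, rotate the body-frame velocities to the world frame, and
    // integrate to get coordinates. All parameters are illustrative assumptions.
    #include <cmath>
    #include <cstdio>

    struct Pose { double x, y, psi; };

    // One Euler step with forward speed v, Ackermann angle delta,
    // wheelbase l, and time step dt
    Pose step(Pose p, double v, double delta, double l, double dt) {
        double psiDot = (v / l) * delta;               // yaw rate (4.12)
        double omega  = psiDot;
        double vx = v * std::cos(p.psi) - (l / 2.0) * omega * std::sin(p.psi); // (4.15)
        double vy = v * std::sin(p.psi) + (l / 2.0) * omega * std::cos(p.psi);
        return { p.x + vx * dt, p.y + vy * dt, p.psi + psiDot * dt };          // (4.13)-(4.14)
    }

    int main() {
        Pose p = {0.0, 0.0, 0.0};
        for (int k = 0; k < 500; ++k)                  // 5 s at dt = 0.01 s
            p = step(p, 1.0, 0.2, 0.3, 0.01);
        std::printf("x=%.2f y=%.2f psi=%.2f rad\n", p.x, p.y, p.psi);
    }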
5 Sensor theory
The sensors used in this thesis are a gyroscope, wheel encoders, and a LIDAR.
The measurements from these sensors provide the necessary input to the SLAM
algorithms. The SLAM algorithms will process this data in order to present a map
of the surrounding environment, including the pose of the vehicle positioned
within the map. The following sections will provide a description of each sensor
mentioned.

5.1 Gyroscope
Gyroscopes have been widely used for military and civilian purposes over the last cen-
tury, not only for aeronautical purposes but also at sea. Traditional
gyroscopes used to be mechanical. However, what is used in many ap-
plications nowadays are MEMS (micro-electro-mechanical systems) gyroscopes,
which can be found in cell phones, robotics projects, and military applications.
[14, p. 171] Gyroscopes in general can be described by the formula for the Coriolis
force

F_c = −2 ∗ m ∗ (ω × v) (5.1)

As can be seen from (5.1), the Coriolis force depends on the mass m of an
object, the angular velocity ω, and the velocity v of the object. This object
is a part of the gyroscope, and its mass and velocity are known. Given that the
object rotates, and by having information about its mass and velocity, the angular
velocity can be detected. [14, p. 173] Given the angular velocity, the total angle
that the vehicle has rotated can be calculated as

a = ∫₀ᵗ ψ̇ dt (5.2)

5.2 Wheel encoders


Wheel encoders are used to provide the microcontroller with information
about the velocity of the vehicle. There are several different types of wheel en-
coders based on different technologies, such as optical, mechanical, and magnetic
wheel encoders, just to name a few. In this thesis, a magnetic wheel encoder is
used, which consists of a Hall effect sensor and a number of magnets. The Hall
effect sensor outputs a high voltage level whenever the magnetic field surrounding
it is sufficiently strong (a magnet passing by). [22, p. 3-4] The pulses output by
the Hall effect sensor are registered by the microcontroller. By counting the number
of pulses generated since the previous sampling instance, the microcontroller is
able to estimate the speed of the vehicle.
Pseudo-code for calculating the vehicle velocity is given in Algorithm 5.

Algorithm 5 Vehicle car speed


1: function car_speed(Pulses_avg, T)
2:    T_new = T − T_old
3:    v_car = (Pulses_avg ∗ (wheelCircum/10))/T_new
4:    T_old = T
5:    return v_car
6: end function

The algorithm calculates v_car in cm/s. The average number of high pulses
registered by the two Hall effect sensors is denoted Pulses_avg, which is
multiplied by one tenth of the circumference of the wheels, thus giving the
distance travelled. Dividing by the time period T_new, the expression for v_car
resembles the well-known formula for velocity, v = s/t. The wheel circumference
is divided by ten because there are ten magnets attached to each wheel.
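
For illustration, a minimal Python version of Algorithm 5 could look as follows.
The wheel circumference value is a placeholder, and the stateful handling of the
previous timestamp is an assumption about how the firmware keeps state.

WHEEL_CIRCUM_CM = 21.0   # assumed wheel circumference [cm], placeholder value
N_MAGNETS = 10           # magnets attached to each wheel

t_old = 0.0              # timestamp of the previous sampling instance [s]

def car_speed(pulses_avg, t):
    """Estimate the vehicle speed in cm/s from the average pulse count."""
    global t_old
    dt = t - t_old
    t_old = t
    if dt <= 0.0:
        return 0.0                                       # guard against a zero period
    distance = pulses_avg * WHEEL_CIRCUM_CM / N_MAGNETS  # distance travelled [cm]
    return distance / dt                                 # v = s / t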

5.3 Laser range finder


Laser range finders (LIDAR) can be used to measure the distance to objects in the
vicinity of the vehicle. A laser range finder consists of a laser emitter and
receiver, and is sometimes mounted on a rotating motor in order to measure
distances in every direction around the vehicle. The basic operation principle is
that a short pulse of light is emitted, reflected on an object, and then received.
The time t_wait between emission and reception can then be used to calculate
the distance d to the object as

    d = (1/2) · c · t_wait    (5.3)

where c is the speed of light. [21, p. 31]
6 Simulink Model Implementation
The vehicle, its sensors, and the environment of the vehicle were modelled in
Simulink. This chapter describes the implementation of all the subsystems of the
model, which can be seen in Fig. 6.1 below. The model cannot be simulated in
real time, as the computation time is too high.

Figure 6.1: Block Diagram of Simulink Model


6.1 Input
The input subsystem provides the input stimuli to the system. This is done in two
different modes: keyboard and file. In keyboard mode, the steering angle and the
velocity of the vehicle are controlled using the arrow keys on the keyboard. In
file mode, the velocity and steering angle are defined by a .mat file. These .mat
files are recorded when running the model in keyboard mode. A typical workflow
while using this model would be to first record the instructions for the vehicle to
a .mat file, and then use the same .mat file for repeated experiments.

6.2 Kinematic Vehicle


The kinematic vehicle, shown in Fig. 6.2, consists of five subsystems, each of
which is described in the sections below.

Figure 6.2: Model of Kinematic Vehicle

6.2.1 Velocity
6.2.1.1 Initial model
The initial velocity model was modelled after how velocity is produced in the
vehicle itself, with the motor as a torque source whose torque is converted to
force, and ultimately velocity. This is described in Section 4.1. However, this
model was never completed and was scrapped, because constructing such a model
proved to be too time-consuming, and might have required a master's thesis of
its own.

6.2.1.2 Final model


The final model is considerably simpler than the initial model. In this model, the
input signal is the desired velocity of the vehicle. This is passed through a
first-order filter in order to model the acceleration and deceleration of the
vehicle. The time it takes for the vehicle to go from a stationary state to moving
at full speed was measured in order to calculate the time constant τ of the
first-order filter.
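
As an illustration of this approach, the sketch below shows a discrete-time
first-order filter; for a step input, the output reaches about 63% of its final
value after one time constant, which is how τ relates to the measured
0-to-full-speed time. The numerical values are placeholders, not the measured
constants.

TAU = 0.5    # assumed time constant [s]; in the model it is derived from the
             # measured time from standstill to full speed
DT = 0.01    # sample period [s]

def filter_step(v_prev, v_desired):
    """One forward-Euler step of dv/dt = (v_desired - v) / tau."""
    return v_prev + (DT / TAU) * (v_desired - v_prev)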

6.2.2 Servo
A simple model was also used for the servo. The servo control signal is a value
from -100 to 100, which is passed through a look-up table ranging from -45° to
45°, the maximum steering angle of the vehicle. A first-order filter is used to
model the delay of the steering of the vehicle, and the time constant τ was
calculated by measuring the time it takes for the wheels of the vehicle to turn
from 0° to 45°.

6.2.3 Angle & Angular Velocity


This subsystem calculates the yaw rate and the yaw angle of the vehicle, which
result from the steering angle and the speed. This is calculated using the
Ackermann angle, as given by Eq. 4.10 in Section 4.2.

6.2.4 Position
The X-position and Y-position subsystems realize the transformation matrix de-
scribed in Section 4.3 in order to determine the current position of the vehicle.

6.3 Map
6.3.1 Parameters
In order to emulate the environment of the vehicle, a map definition needed to
be created. These maps are defined by two parameters: dimensions and walls. The
dimensions array is a 1-by-2 matrix where the first index is the width of the map
and the second is the height of the map. The walls matrix is an n-by-4 matrix
which describes the walls in the map, where each row contains the coordinates of
a wall. Walls can be either diagonal, vertical, or horizontal. The map parameters
are stored in a .mat file which can then be read by the Simulink model file. A
hypothetical example of such a map definition is sketched below.
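
The following Python snippet writes such a hypothetical map to a .mat file using
scipy. The field names follow the parameter description above, but the exact
format expected by the Simulink model file is an assumption.

import numpy as np
from scipy.io import savemat

dimensions = np.array([[10.0, 8.0]])      # 1-by-2: [width, height]
walls = np.array([
    [0.0, 0.0, 10.0, 0.0],                # horizontal wall: x1, y1, x2, y2
    [10.0, 0.0, 10.0, 8.0],               # vertical wall
    [2.0, 2.0, 4.0, 4.0],                 # diagonal wall
])
savemat('example_map.mat', {'dimensions': dimensions, 'walls': walls})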

6.3.2 Implementation
The map is implemented in two S-functions called vehicleMap and drawMap. The
former calculates all values necessary to display the current position and yaw of
the vehicle within the map, and the latter draws this in a MATLAB plot. In this
plot, the position and yaw of the vehicle are represented by a triangle, as shown
in Fig. 6.3 below. The inputs of vehicleMap are shown in Table 6.1 below, and
the Simulink model diagram is shown in Fig. 6.4.

Figure 6.3: Plot of map generated by vehicleMap S-function

Table 6.1: Inputs of vehicleMap

Inputs Direction Size Definition


Coords In 2 Current coordinates of the vehicle.
Yaw In 1 Current yaw of the vehicle.
Triangle Coords Out 6 Coordinates of where to draw the triangle
Collision Out 1 Boolean indicating if the vehicle has hit a wall
Init Out 1 Boolean indicating if map has been initialized

Figure 6.4: Map subsystem

6.4 Odometry Sensors


6.4.1 Gyroscope
6.4.1.1 Description
The model receives the actual angular velocity produced by the car model, and
outputs an estimation of what the sensor would have measured. These
measurements are corrupted by noise and bias. The model has been designed with
regard to actual measurements of the values from the gyroscope. By measuring
the output when the physical vehicle is standing still, an average error can be
calculated and modelled. The pseudo code for deriving the bias is presented
below in Algorithm 6.

Algorithm 6 Gyroscope bias


1: for Gz in array do
2:    count = count + int(Gz)
3:    number = number + 1
4: end for
5: ψ̇ = count ∗ 250 ∗ π/(number ∗ 32768 ∗ 180)

First and foremost, what is derived above in Algorithm 6 is the angular velocity
in rad/s. The bias output from the gyroscope's z-axis has been measured and an
average bias output has been calculated. To transform this output to an angular
velocity in rad/s, the average bias (count/number) is multiplied by
DPS/ADC · π/180, where DPS is the configured range in degrees per second (there
are three possible alternatives: 250 °/s, 500 °/s and 2000 °/s) and ADC is the
resolution of the 16-bit analog-to-digital converter. This division gives the
angular rate in °/s; multiplying by π/180 expresses the angular rate in rad/s.
Moreover, the noise from the gyroscope has also been modelled in Simulink. The
variance and the standard deviation have been derived from 21000 samples of the
gyroscope left stationary.

6.4.1.2 Implementation
Given the precondition that the vehicle is standing still, samples were obtained
from the physical sensor. These samples were converted into angular velocities
in rad/s by Algorithm 6. Given these converted samples, figures were created in
Matlab which display the Gaussian nature of the values from the gyroscope. Fig.
6.5 shows the angular velocities from the gyroscope, and how the curve resembles
a Gaussian distribution. This hypothesis is also in line with the conclusions of
[13, p. 7-8, 22] when measuring sensor errors.

Figure 6.5: Bar chart displaying the noise of the gyroscope, with a curve
showing a Gaussian distribution

The Gaussian noise as well as the bias offset were implemented in the Simulink
model, which can be seen in Fig. 6.6. This model outputs an angular rate
expressed in rad/s, corrupted by noise and bias, with the aim of resembling the
actual gyroscope in the physical car.

Figure 6.6: Gyroscope implementation in Simulink

6.4.1.3 Test phase


Evaluation of the model has been done by implementing a test environment in
Simulink. A ROS subscriber block in Simulink receives the actual values from the
physical car. The values of the z-axis of the gyroscope are measured and
computed as an angular rate in rad/s as well as an angle in rad. The comparison
between the Simulink model of the sensor and the physical sensor values can then
be done in the Simulink environment. Fig. 6.7 below shows the test environment.

Figure 6.7: Gyroscope test environment in Simulink

Moreover, Fig. 6.8 shows the angular difference between the gyroscope model
and real time measurements on the physical gyroscope.

Figure 6.8: Comparison between the gyroscope model and real-time
measurements on the physical gyroscope. The top display shows the model output
after 129.2 seconds, and the bottom display shows the total angle measured from
the physical gyroscope. (Note: both are initialized to 1.507 rad, i.e. 90 degrees.)

6.4.2 Wheel encoders

6.4.2.1 Description

A sensor model of the wheel encoders has been implemented in the model of
the car in Simulink. They have been modelled with regard to measurements of
the output from the wheel encoders attached to the physical car. However, the
data received is actually an estimate of the speed of the physical car, and not
a number of magnets detected since the last timestamp. With that being said,
the implemented sensor model uses these measurements to estimate and mimic the
car speed transmitted by the physical wheel encoders attached to the car. These
car speed measurements are not very precise, which can be explained by a bias
offset and noise. The bias consists of truncation errors and the algorithm which
converts magnet pulses to car speed in cm/s. The truncation error occurs when
the algorithm truncates the floating-point number to a uint8 which is to be
transmitted on the CAN bus. The noise can be expressed in terms of the variance
around the mean velocity.

6.4.2.2 Test phase


The test phase of the wheel encoders was executed on the office floor, using
equipment such as a yardstick, rulers, and a mobile phone for measuring the time
during which the vehicle travelled a certain distance. Each measurement
conducted consisted of about 40 measured samples. Table 6.2 below shows the
measurements from the test environment. The results on each row differ quite a
bit, mostly due to the fact that the motor on the car restarted randomly during
runtime.

Table 6.2: Results of measurements on the wheel encoders

MI µ σ σ² [s] [cm] [tq] [vq]


5 4.1 0.98 0.96 6.59 58 0.83 0.96
5 4.1 1.1 1.2 4.3 33 0.83 0.95
5 4.4 0.79 0.62 4.6 40 0.83 0.94
5 4.1 0.31 0.1 4.15 34 0.44 0.73
10 10.9 0.34 0.11 5.13 94 0.73 0.89
10 10.7 0.94 0.89 6.4 122 0.76 0.87
10 11 0.91 0.81 5.3 103 0.83 0.87
10 10.7 0.47 0.22 3.55 50 0.38 0.65
15 15.9 0.94 0.89 6.1 177 0.8 0.93
15 15.7 1.24 1.56 3.6 103 0.74 0.85
15 15.9 1.51 2.26 6.35 196 0.75 0.88
15 15.9 1.1 1.21 5.2 144 0.72 0.88
20 22.1 1.32 1.75 4.8 190 0.72 0.84
20 21.5 1.02 1.05 4 142 0.56 0.73
20 20.2 1.99 3.99 3.4 122 0.58 0.80
20 19.8 0.95 0.9 4.65 167 0.61 0.79

Table 6.2 shows the instruction MI sent to the motor, the average measured
velocity µ, the standard deviation σ, the variance σ², the physical time [s] in
seconds for the car to travel a certain distance, and the physical distance [cm]
the car travelled during the measurement. Moreover, the factor [tq] denotes the
portion of time during which the speed of the vehicle is above a certain
threshold, thus disregarding the portion of time during which the vehicle is
accelerating. Furthermore, the factor [vq] corrects the actual distance the car
travelled during the performed test, by disregarding the distance travelled
during acceleration to a more steady velocity.
Fig. 6.9 shows the speed received from the physical wheel encoders.

6.4.2.3 Implementation
The model was implemented based on the values in Table 6.2. The bias has been
modelled through numerous measurements of the wheel encoder values and
physical measurements of the actual car driving a predefined distance of 2 m.

Figure 6.9: Plot of the measured speed from the wheel encoders


As Table 6.3 below shows, the velocity measured in the physical tests is
significantly higher than the car speed which the sensors believe the car is
driving at. This is implemented in Simulink by introducing a look-up table which
scales the actual velocity input to the sensed value.

Table 6.3: Measured velocity compared to the sensed velocity from the phys-
ical wheel encoders

Motor instruction 0 5 10 15 20
Sensed velocity 0 3.9 10 15.7 21.3
Measured velocity 0 9.82 23.3 36 49

In Fig. 6.10 below, the car speed from the sensor implemented in Simulink is
plotted over time, in cm/s. The car speed depends on user input to the model,
which can be seen as the steps where a user has increased the car speed. In
addition, the implemented sensor model includes a noise source which depends on
the current speed of the car model; this noise can be seen in the plot below.
Fig. 6.11 below shows the model implementation in Simulink. What can be seen
is the conversion from m/s to cm/s, a LUT converting the velocity input to what
the sensor detects, as well as a Matlab function which adds noise to the velocity
depending on the current velocity.
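
A rough Python equivalent of this sensor model is sketched below, using the
values from Table 6.3 for the look-up table. The speed-dependent noise magnitude
is an assumption for illustration; the actual Matlab function derives it from
the measurements in Table 6.2.

import numpy as np

ACTUAL = np.array([0.0, 9.82, 23.3, 36.0, 49.0])   # measured velocity [cm/s]
SENSED = np.array([0.0, 3.9, 10.0, 15.7, 21.3])    # sensed velocity [cm/s]

def encoder_output(v_actual, rng=np.random.default_rng()):
    """Simulated wheel-encoder speed [cm/s] for an actual speed [cm/s]."""
    v_sensed = np.interp(v_actual, ACTUAL, SENSED)  # look-up table with interpolation
    sigma = 0.1 * v_sensed                          # assumed speed-dependent noise level
    return v_sensed + rng.normal(0.0, sigma)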

Figure 6.10: Wheel encoder noise plotted over time in Simulink

Figure 6.11: Wheel encoder model implemented in Simulink

6.5 LIDAR

6.5.1 Description

The LIDAR is implemented as an S-function called lidar, and models the RPLIDAR
used in this thesis. The inputs of this function are shown in Table 6.4 below.

Table 6.4: Inputs of LIDAR S-functions

Inputs Direction Size Definition


Coords In 2 Current coordinates of the vehicle.
Yaw In 1 Current yaw of the vehicle.
Lidar Offset In 1 Vertical Displacement of LIDAR
Collision Out 1 Boolean indicating if the vehicle has hit a wall
Distances Out 360 Distances to objects in all angles

6.5.2 Operation
Before vehicleMap has initialized the map, the function waits in an idle state.
After this, the distance is measured. Noise, range constraints, and an error PDF
are then applied to this value to make it resemble the RPLIDAR. The value is
then written to the state vector. Subsequently, the LIDAR is rotated one degree
to prepare it for the next measurement, which will take place after a delay. This
delay is equal to the time it takes for the RPLIDAR to rotate one degree. If the
LIDAR has reached the end of its rotating cycle, the state vector is first
written to the output and then flushed. A flowchart of the function is shown in
Fig. 6.13 below, and a diagram of the subsystem is shown in Fig. 6.12.

Figure 6.12: Diagram of LIDAR subsystem



Figure 6.13: Flowchart of LIDAR S-function



6.5.3 Probability of error


Early on in the modelling process, it was observed that the LIDAR did not quite
perform as expected. When doing a full rotation, the LIDAR failed to produce
measurements for all 360 degrees. During one of the early tests, the LIDAR was
placed in a cardboard box. It was observed that the missed measurements were
usually distributed among the same rotational points throughout repeated
experiments. This raised the hypothesis that the probability of the LIDAR
missing a measurement depends on how the laser hits the object. When the LIDAR
is in a box, there are two conditions which differ with every measurement:
proximity to the wall, and the angle to the wall. Thus, it was decided to
investigate how each of these conditions affected the probability of LIDAR
misses.
In order to investigate the missed measurements, a test area was set up as
shown in Fig. 6.16, where a cardboard box was placed in front of the vehicle.
This cardboard box was then tilted at different angles and placed at different
distances in order to determine what causes the LIDAR to miss a measurement.
The results of this experiment can be seen in Fig. 6.14 and Fig. 6.15,
respectively. No conclusive testing was done to see if different types of
surfaces affected the probability of error.

Figure 6.14: Percentage of misses due to wall incline



Figure 6.15: Percentage of misses due to distance to wall.

From Fig. 6.14 it can be seen that when the tilt of the reflection surface
exceeds 40°, the percentage of missed measurements increases dramatically.
Hypothetically, this is because the tilt angle of the measured object causes a
larger reflection angle of the laser, which means that the reflected laser is
more likely to miss the receiver. From Fig. 6.15 it can also be concluded that
the LIDAR does not operate properly when the distance to the measured object
approaches its range constraints. In the model, these sets of error
probabilities are realized as two look-up tables. When a measurement is
simulated, the angle and distance to the measured object are passed to their
respective look-up tables, and a probability of error is obtained. This
probability is then used in a binomial random variable which is multiplied with
the measured distance, meaning that the measurement will be equal to 0 if the
random variable generates a miss.
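
The following sketch illustrates how the two look-up tables and the binomial
variable can be combined. The probability values are placeholders, not the
measured data behind Fig. 6.14 and 6.15, and combining the two probabilities
with max() is an assumption about the model.

import numpy as np

INCLINES = np.array([0.0, 20.0, 40.0, 60.0, 80.0])     # wall incline [deg]
P_MISS_INCLINE = np.array([0.02, 0.03, 0.10, 0.40, 0.80])
DISTANCES = np.array([0.5, 2.0, 4.0, 5.5, 6.0])        # distance to wall [m]
P_MISS_DISTANCE = np.array([0.02, 0.02, 0.05, 0.30, 0.90])

def lidar_measurement(true_distance, incline_deg, rng=np.random.default_rng()):
    """Return the measured distance, or 0.0 if the measurement is missed."""
    p_miss = max(np.interp(incline_deg, INCLINES, P_MISS_INCLINE),
                 np.interp(true_distance, DISTANCES, P_MISS_DISTANCE))
    hit = rng.binomial(1, 1.0 - p_miss)    # 1 with probability (1 - p_miss)
    return true_distance * hit             # a miss yields a measurement of 0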

Figure 6.16: Measuring environment for investigating LIDAR misses



6.6 ROS Interface


The ROS Interface subsystem, shown in Fig. 6.17, enables the model to
communicate with ROS, and makes heavy use of the Robotics System Toolbox for
Simulink. The ROS Interface takes the LIDAR and odometry data generated by the
model and publishes it on two ROS topics, /scan and /sensor_data. Moreover, it
also publishes the current position of the car in the Simulink map on the topic
/car_position. It also publishes the clock, since ROS needs to run on the same
clock as Simulink if the two are to be used in the same system.

Figure 6.17: Block Diagram of ROS Interface



6.6.1 /sensor_data
This topic contains the speed and the angular velocity of the vehicle. The message
type is custom defined, and is comprised of a Header [9] and a Float64MultiArray
[8]. This topic publishes the current odometry values periodically, with a clock
period of 10 ms.

6.6.2 /scan
This topic contains the most recent LIDAR scan as well as a set of constants as
required by the ROS message type sensor_msgs/LaserScan [7]. The /scan topic
is published whenever the LIDAR has made a full 360° rotation. This means that
the publishing frequency of the /scan message is effectively 5.5 Hz, as this is the
rotational frequency of the LIDAR model.

6.6.3 /car_position
This topic contains the position of the vehicle in the Matlab/Simulink generated
map. This topic is solely for testing purposes to compare the x and y position of
the vehicle at different time instances. The topic consists of a Float64MultiArray
and a header with the current time stamp. The publishing frequency of the topic
is 10 Hz.

6.6.4 /clock
This topic contains the timestamp, expressed as a ROS timestamp. A ROS times-
tamp constains two values, one value of how many seconds have passed, and one
value of how many nanoseconds have passed since the last measured second. For
example, if the time stamp is supposed to be 5.8, the value for seconds will be
5 and the value of nanoseconds will be 0.8 ∗ 109 This topic was introduced due
to the fact that the model cannot run in real time. Basically, the clock topic con-
tains the relative time being used in the model. The message type being used is
ros_graph/Clock, and is published at a frequency of 100 Hz.
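
To illustrate the secs/nsecs split and the publishing mechanism, a minimal rospy
clock publisher is sketched below. rosgraph_msgs/Clock and rospy.Time.from_sec
are standard ROS interfaces; the fixed-rate loop driven by the wall clock is a
simplification of how the model actually advances time.

import time
import rospy
from rosgraph_msgs.msg import Clock

rospy.init_node('sim_clock')
pub = rospy.Publisher('/clock', Clock, queue_size=10)

sim_time = 0.0
while not rospy.is_shutdown():
    msg = Clock()
    msg.clock = rospy.Time.from_sec(sim_time)  # split into secs and nsecs internally
    pub.publish(msg)
    sim_time += 0.01                 # advance simulated time by 10 ms
    time.sleep(0.01)                 # wall-clock sleep; rospy.sleep would follow sim time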
7 ROS Implementation
In order to control the vehicle and obtain its sensor data, a set of ROS nodes was
implemented, as shown in Fig. 7.1 below. Everything within a dotted rectangle
in the figure is a ROS node, where each dotted line represents a device running
the node. Each node is described in the following sections.

Figure 7.1: Diagram showing the active ROS nodes while executing a SLAM
algorithm on the physical vehicle.


7.1 CAN Reader


The can_reader node is tasked with obtaining the sensor data sent by the STM32
on the CAN bus, and making this data accessible to the other ROS nodes. The
CAN bus is read continuously, and depending on the CAN ID of the received data,
a function handling LIDAR data or a function handling odometry data is called.
These functions parse the data into a usable format; one example is the
conversion of the digital value of the gyroscope's z-axis into an angular
velocity. The LIDAR data is published on a topic called /scan, which is of the
message type sensor_msgs/LaserScan. The sensor data topic is called
/sensor_data, containing among other things the car speed, a timestamp, and the
gyroscope values, with the message type of the topic being a customized message
called can/Sensor_data.

7.2 Odometry
The purpose of the odometry node is to present the angular rotation and the x
and y position of the vehicle model as well as the physical vehicle. This is done
by subscribing to the /sensor_data topic, described in Section 7.1. The data
extracted from the topic consists of a velocity, an angular velocity, and a time
stamp. From this data, velocities in the x and y direction can be calculated. By
using these velocities and the difference between the current timestamp and the
previous timestamp, the x and y position of the vehicle and the angle through
which the vehicle has turned can be calculated. This data is presented as a
topic which contains the current coordinates of the vehicle as well as the total
angle rotated. Moreover, it is also presented to the TF tree, which is a
transform tree that relates coordinate frames to each other. The coordinates of
these frames depend on what kind of frame it is. For example, the base_link
frame is the coordinate frame of the entire robot, so the coordinates of this
frame are the actual position of the robot. The odometry node presents the
transform from the odom frame to the base_link frame. This correlation shows the
relative x and y position and orientation of the robot frame (base_link) with
respect to the odometry frame.
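
The dead-reckoning step performed by the odometry node can be summarized by the
following sketch, assuming the /sensor_data message provides a speed v, an
angular velocity w, and a timestamp t; the class and variable names are
illustrative.

import math

class OdometryEstimator:
    def __init__(self):
        self.x = self.y = self.psi = 0.0
        self.t_prev = None

    def update(self, v, w, t):
        """Integrate speed v [m/s] and yaw rate w [rad/s] at timestamp t [s]."""
        if self.t_prev is None:
            self.t_prev = t
            return
        dt = t - self.t_prev
        self.t_prev = t
        self.psi += w * dt                      # total angle rotated
        self.x += v * math.cos(self.psi) * dt   # velocity in the x direction
        self.y += v * math.sin(self.psi) * dt   # velocity in the y direction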

7.3 SLAM
The SLAM node is the implementation of the SLAM algorithm wrapped into ROS.
The SLAM algorithm requires information from the sensors, such as the LIDAR
data as well as the odometry data. From this data it is able to generate a map
topic which contains the information about its surroundings. Moreover, it also
presents a transform from the map to the odom coordinate frame. With this
information, the coordinate frames in the map can be related to each other, and
the position of the vehicle within the map can be calculated and presented. Fig.
7.2 depicts the transform tree, which shows how the coordinate frames relate to
each other.

Figure 7.2: An overview of the transform tree. The broadcaster is the name of
the node which presents the transform, and the average rate shows how often the
transform is updated. The buffer length is how many seconds of data are
available in the tf buffer. Transform times are seen in the most recent and
oldest transform, where the former contains the timestamp of the latest
transform of that type, and the latter that of the first.

7.4 rViz
The rViz node is mainly a visualization tool which is able to provide a live update
of the map generated from the SLAM algorithm. Moreover, the trajectory of the
vehicle in the map can also be visualized. An overview of the rViz window is
shown in Fig. 7.3.

Figure 7.3: An overview of the rViz window. The main window shows the
map generated by a SLAM algorithm. The little red square is the current
pose of the vehicle.

7.5 Position

The general idea behind the position node is to save the trajectory data of the
vehicle. The position of the vehicle is obtained by looking up a transform from
the fixed world frame to the base_link frame which represents the actual position
of the vehicle. The estimated position is generated by the SLAM algorithm. The
topic was introduced for testing purposes in order to compare the actual position
of the vehicle with an estimated position to validate each SLAM algorithm. The
data being saved to the text file is the x and y coordinates as well as a time stamp.
The values are stored into a vector with a frequency of 10 Hz, basically, the same
as the frequency of the /car_position topic referred in Section 6.6.3.
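
A hedged sketch of the transform lookup is shown below, using the standard tf
listener API. The frame names map and base_link follow the transform tree in
Fig. 7.2; the file handling is simplified.

import rospy
import tf

rospy.init_node('position_logger')
listener = tf.TransformListener()
rate = rospy.Rate(10)   # 10 Hz, matching the /car_position topic

with open('trajectory.txt', 'w') as log:
    while not rospy.is_shutdown():
        try:
            trans, rot = listener.lookupTransform('map', 'base_link', rospy.Time(0))
            log.write('%f %f %f\n' % (trans[0], trans[1], rospy.get_time()))
        except (tf.LookupException, tf.ConnectivityException,
                tf.ExtrapolationException):
            pass                                 # transform not available yet
        rate.sleep()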

7.6 simulinkPosition

The main idea behind the simulinkPosition node is to save the trajectory of the
vehicle within the map generated by Simulink. The node subscribes to the
/car_position topic from Simulink and saves the x and y coordinates as well as
a time stamp into a text file. These coordinates represent the actual position
of the vehicle in the map, and are thus not distorted by the sensors in the
model. This topic is described in Section 6.6.3.

7.7 ROS Control App


The ROS Control App is an Android mobile application which is used to steer the
physical vehicle. This is done by moving a joystick in the x and y directions,
which publishes the angular and linear movement on a topic called /joystick.

7.8 Joystick Input


The purpose of the Joystick node is to interface the data from the ROS Control
App and supply it to the physical vehicle. The node subscribes to the joystick
topic from the ROS Control App. The data from the application can be seen as an
instruction to rotate the vehicle as well as an instruction to move the vehicle
forwards or backwards. This data is split up into two topics, /throttle and
/steering, which are both of the type std_msgs/Int8. The throttle topic contains
instructions to the motor of the vehicle to move the vehicle either forwards or
backwards, and the steering topic contains the instructions to the servo on the
vehicle.

7.9 CAN Writer


The task of the can_writer node is to send data over the CAN bus to the STM32.
The data being sent consists of steering instructions to the servo on the
vehicle, instructions to the motor to move the car forwards or backwards, as
well as instructions to the lights and the buzzer on the vehicle. This data is
obtained from various topics in the ROS environment, such as the /throttle and
/steering topics (Section 7.8) and the /carLights topic.

7.10 CPU Recorder


The task of the CPU recorder node is to periodically sample the CPU usage of a
process and store it in a text file. It uses two different Linux commands for
this, top and ps, and stores the results from each method in a separate text
file. Two different methods are used as a security measure, since it would be
very time-consuming to have to redo all simulations if results from the other
method were needed.
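
The ps-based method could be implemented along the lines of the sketch below;
the exact command-line options used by the actual node are not documented here,
so these are assumptions.

import subprocess
import time

def record_cpu(pid, outfile, period_s=0.01):
    """Append timestamped CPU-usage samples for one process to a text file."""
    with open(outfile, 'w') as f:
        while True:
            cpu = subprocess.check_output(
                ['ps', '-p', str(pid), '-o', '%cpu='])   # CPU usage of one process
            f.write('%f %s\n' % (time.time(), cpu.decode().strip()))
            time.sleep(period_s)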
8 Simulations
8.1 Method
8.1.1 Overview
In order to determine which algorithm was suitable to use on the vehicle,
simulations were performed on the model. Three maps were created, each of
different size and each containing a different amount of features in the
environment, in order to check the performance of each algorithm under
different circumstances. Control instructions were recorded for each map using
the Input block in the Simulink model, so that an identical vehicle path could
be used for each simulation. After this, the simulation was run with this path
and all the simulated data was saved in a ROS bag. In this way, the real-time
limitations of the model could be avoided while running the SLAM algorithms.
Fig. 8.1 below shows the active ROS nodes while simulations are taking place.
The player node replays the data stored in the ROS bag and publishes the data
on the /scan, /sensor_data and /car_position topics. The Odometry node presents
odometry information to the SLAM node, the rViz node and the Position node.
The simulinkPosition, cpu_recorder, position, and rViz nodes were used for
testing purposes. Thorough descriptions of all these nodes can be found in
Chapter 7.


Figure 8.1: Diagram showing the active ROS nodes while simulating a SLAM
algorithm

8.1.2 Maps
Three maps were designed for the simulation. The main idea was to have three
maps which grow in size, as all SLAM algorithms scale with the map size, and
this might give an idea of how well a particular algorithm scales. The maps are
shown in Figures 8.2, 8.3, and 8.4 below, where the axis tick labels show distance
in meters.

Figure 8.2: Plot of small map

Figure 8.3: Plot of medium map



Figure 8.4: Plot of large map

8.1.3 Tests
Four different types of tests were performed on each algorithm, namely:

• Mapping accuracy
• Path accuracy
• Average position error
• Average CPU load

In the mapping accuracy test, the SLAM algorithm is used on a specific map
simulation. When the map is obtained, this is saved, and presented next to the
actual map in order to show how the generated map from the SLAM algorithm
differs from the actual map.

The path accuracy test compares the generated path of each algorithm with the
actual path the kinematic vehicle has travelled. This is presented in a plot which
includes the actual path, and the generated path from each algorithm.

The average position error is a performance metric which shows how much the
estimated position from the SLAM algorithm differs from the actual position. It
is calculated by computing the difference between the estimated and the actual
position at each time instance, and computing the mean, as sketched below.
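
A minimal sketch of this metric, assuming the estimated and actual trajectories
have already been resampled to matching timestamps:

import numpy as np

def average_position_error(est_xy, true_xy):
    """est_xy, true_xy: N-by-2 arrays of positions at matching timestamps."""
    diff = np.asarray(est_xy) - np.asarray(true_xy)
    return np.mean(np.linalg.norm(diff, axis=1))   # mean Euclidean distance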

In the average CPU load test, the CPU usage of each SLAM algorithm was recorded
using the CPU recorder node described in Section 7.10. For these results, the top
method was used, with a sampling period of 10 ms.

Each test is performed for each algorithm, for each map. Also, each of these tests
is performed for simulations with ideal sensors, and simulations with added sen-
sor noise.

After the theoretical investigation had been conducted, it was decided to
perform simulations with GraphSLAM and FastSLAM. This investigation is
thoroughly described in Section 3.5. These algorithms are implemented in the ROS
packages slam_karto and gMapping, which were used for the simulations. The
active ROS nodes for the slam_karto and gMapping simulations are illustrated
in Fig. 8.5 and Fig. 8.6, respectively.

Figure 8.5: ROS network during simulation of the slam_karto algorithm.



Figure 8.6: ROS network during simulation of the gmapping algorithm



8.2 Results
All simulation results are presented in the sections below. The tests themselves
were carried out as specified in Section 8.1.3.

8.2.1 Mapping accuracy


This section presents how well each algorithm performs in terms of generating an
accurate map. Each figure shows how one map is generated by both algorithms,
as well as using both the ideal and the noisy sensor model for each algorithm.

8.2.1.1 Small map

Fig. 8.7 below shows the results for the mapping of the small map. The results
of each algorithm/sensor model appear to be fairly similar.

(a) gmapping with ideal sensor model (b) slam_karto with ideal sensor model

(c) gmapping with noisy sensor model (d) slam_karto with noisy sensor model

Figure 8.7: Mapping accuracy results on small map



8.2.1.2 Medium map


Fig. 8.8 shows the results of the mapping of the medium map. In this case, the
noise appears to have an effect on the mapping.

(a) gmapping with ideal sensor model (b) slam_karto with ideal sensor model

(c) gmapping with noisy sensor model (d) slam_karto with noisy sensor model

Figure 8.8: Mapping accuracy results on medium map



8.2.1.3 Large map


Fig. 8.9 shows the mapping of the large map. Here, the performance of both
algorithms varies greatly.

(a) gmapping with ideal sensor model (b) slam_karto with ideal sensor model

(c) gmapping with noisy sensor model (d) slam_karto with noisy sensor model

Figure 8.9: Mapping accuracy results on large map. When the noisy sensor
model is used, as shown in (d), the map produced by slam_karto is highly
inaccurate. This is because the loop closing fails.

8.2.2 Path accuracy


The following sections contain the results of the path accuracy test for all three
maps and both the ideal and noisy sensor models. Each figure contains the theo-
retical path as well as the generated path of slam_karto and gMapping.

8.2.2.1 Small map


Fig. 8.10 below shows the generated path of both algorithms compared to the
theoretical path when running simulations on the small map. gMapping does not
handle the noise as well as slam_karto in this case.

(a) Ideal sensor model

(b) Noisy sensor model

Figure 8.10: Estimated path on small map



8.2.2.2 Medium map

Fig. 8.11 below shows the generated path of both algorithms compared to the
theoretical path when running simulations on the medium map. The generated
path of both algorithms does not deviate greatly from the theoretical path using
the ideal model. Both algorithms, however, and especially slam_karto, show a
significantly erroneous path with the noisy sensor model.

(a) Ideal sensor model

(b) Noisy sensor model

Figure 8.11: Estimated path on medium map



8.2.2.3 Large map


Fig. 8.12 below shows the generated path of both algorithms compared to the
theoretical path when running simulations on the large map. The loop closure
problems of slam_karto, described in Section 8.2.1.3, cause its path to drift
away towards the end of the simulation.

(a) Ideal sensor model

(b) Noisy sensor model

Figure 8.12: Estimated path on large map



8.2.3 Average position error


8.2.3.1 Ideal sensor model

Table 8.1: Average position error of gMapping and slam_karto in different
maps with ideal sensors.

Small Map Medium Map Large Map


gMapping 0.08 0.3055 0.4308
slam_karto 0.0913 0.1970 0.4012

As can be seen in Table 8.1, when an ideal sensor model is used, the algorithms
produce similar results, although slam_karto slightly outperforms gMapping on
the medium map. A specific cause for this remains inconclusive; what can be
said in general is that given the same environment and sensor data, slam_karto
simply works better than gMapping under those particular circumstances.

8.2.3.2 Noisy sensor model

Table 8.2: Average position error of gMapping and slam_karto in different
maps with noisy sensors.

Small Map Medium Map Large Map


gMapping 0.1983 0.3270 0.4260
slam_karto 0.0783 0.5307 0.6829

In both the ideal and noisy simulation sets, the clear relationship is that the
average position error always seems to increase with the size of the map. This
can be attributed to the fact that if the robot travels for a longer distance,
the measurements may drift for a longer time until the algorithm has a chance to
synchronize, i.e. through a loop closure. This is illustrated well in Fig. 8.12,
where for example gMapping drifts considerably while travelling through the long
corridor until it is able to correct itself.
Fig. 8.13 shows the position error over time for one of the simulations. The
figure shows that the error does not have any time-related behaviour; its
magnitude does not, for example, increase or decrease over time. The error is
instead related to the algorithm's ability (or inability) to estimate the
position of the vehicle. This ability depends on the quality of the sensor data
the algorithm has to work with, and how "difficult" the current estimation
problem is to solve. Thus, the error is more related to the state of the vehicle
and its environment than to the time elapsed.

Figure 8.13: Large map with ideal sensors – position error over time

8.2.4 Average CPU load

In the section below, the average CPU load is expressed as a percentage. This
percentage is the average percentage of time that the SLAM algorithm utilizes
the CPU. This figure refers to the usage of one CPU core, so a number higher
than 100% is possible if the algorithm uses more than one core.

8.2.4.1 Ideal sensor model

Small Map Medium Map Large Map


gMapping 68.61% 99.19% 102.21%
slam_karto 16.74% 39.03% 56.94%

Table 8.3: Average CPU load of gMapping and slam_karto in different maps
using ideal sensor model.

8.2.4.2 Noisy sensor model

Small Map Medium Map Large Map


gMapping 90.5% 97.93% 70.45%
slam_karto 34.85% 46.45% 21.91%

Table 8.4: Average CPU load of gMapping and slam_karto in different maps
using noisy sensor model.

Somewhat surprising here is the fact that for slam_karto, the CPU usage goes
down significantly for the large map. If this observation is cross-examined with
the fact that this simulation produced a highly inaccurate map (see Fig. 8.9
(d)) and a highly inaccurate path (see Fig. 8.12 (b)), it may indicate that the
algorithm is not doing all the computations necessary to work properly.

8.3 Issues

When doing these experiments, there were some problems with running the
slam_karto node. When performing a loop closure, the node crashed almost every
time. A crash itself did not result in erroneous results, but it meant that the
SLAM was aborted, so no further mapping could be conducted from that point on.
Also, since a crash resulted in a loop closure not being executed, the generated
map was not as accurate as it would have been following a successful loop
closure.
The cause of the slam_karto crashes was never found. In order to still obtain
results which could be used in this thesis, the slam_karto experiments had to be
performed over and over again until the loop closure algorithm managed to
execute without crashing.

8.4 Conclusion
The results show that both the gMapping and slam_karto algorithms are able to
solve the SLAM problem. The generated maps (seen in Fig. 8.7, 8.8, and 8.9)
from the two algorithms are similar in the ideal case. Moreover, the position-
ing error does not differ a lot between the two algorithms, with each algorithm
outperforming the other on at least one map.
However, when noise is introduced, it can clearly be seen that the loop closure
of slam_karto starts failing. A good example of this can be seen in Fig. 8.8
(d), where the robot has observed the lower room twice, but fails to recognize
the fact that it has been there before. A similar issue can be seen in Fig. 8.11
(b). It should also be noted that the loop closure of slam_karto mostly resulted
in the node crashing, or generated a map that was completely incomprehensible.
The results shown in this report simply show the best result out of many trials.
The gMapping algorithm, however, produced similar results every time, and there
were no issues with the loop closure.
Another interesting observation can be seen in Fig. 8.12 (b). In this
simulation, gMapping continuously makes small position mistakes which are
quickly corrected. slam_karto, on the other hand, estimates the position very
well until it reaches the lower corridor of the map, where the positioning error
starts to increase. The algorithm never recovers from this error, and keeps on
producing false position estimates from that point. This, along with the fact
that the slam_karto node almost always crashes when performing a loop closure,
indicates that it may not be as reliable as gMapping.
Tables 8.3 and 8.4 show that in all simulations performed, gMapping has at
least twice the CPU load of slam_karto. This large CPU load, which at times even
occupies an entire core by itself, indicates that the algorithm might have
problems executing with limited resources. Regardless, it was still chosen to be
implemented on the physical vehicle, due to the fact that slam_karto was deemed
too unreliable.
9 Real-world experiment
9.1 Overview
Chapter 8 presented simulation results on three different maps generated by two
different SLAM algorithms. From these simulations, it was decided that gMapping
was the appropriate algorithm to use on the physical vehicle. This chapter
describes a real-world experiment featuring a map generated by the physical
vehicle while running gMapping.
The real-world experiment with the gMapping algorithm was performed at the
ÅF office building in Linköping. The room chosen was a small office measuring
approximately 5.3 × 4.8 m. A sketch of this office can be seen in Fig. 9.1
below, and a photo can be seen in Fig. 9.2.


Figure 9.1: Sketch of test environment. The circles represent table legs, and
the squares two large cupboards which divide the room.

Figure 9.2: Photo of test environment

9.2 Results
The experiment was conducted three times, and the generated maps are shown
in Fig. 9.3 below. The computational load for each trial is shown in Table 9.1.

(a) Trial 1

(b) Trial 2

(c) Trial 3

Figure 9.3: Map of small office generated by gMapping

During the generation of each map, the robot was controlled manually, which
meant that the robot drove differently in each trial. All maps are fairly
accurate in the sense that they depict all landmarks correctly. This is evident
if any of the maps in Fig. 9.3 is compared with the sketch of the test area
shown in Fig. 9.1, as it can be seen that all features shown in the sketch are
present. The algorithm does seem to struggle a little with the shape of the
bottom cupboard, as maps (a) and (b) depict it not as a perfect rectangle but
slightly skewed. It probably would have been easier for the algorithm to map
this feature had it been able to travel along the south side of the cupboard, in
order to perform a better loop closure. For the purposes of this thesis, though,
the mapping accuracy is highly satisfactory.

Table 9.1: Computational load of the gMapping algorithm

Trial No. CPU load


1 88.13%
2 89.5%
3 94.48%

9.3 Conclusion
The maps shown in Fig. 9.3 show that the gMapping algorithm is able to
successfully create a map of the office. All objects (the cupboards and the
chair legs) are successfully depicted in the map. The positioning part of the
algorithm has not been evaluated in the real-world experiment due to the lack of
a sophisticated enough test environment.
The CPU load in this experiment, shown in Table 9.1, is similar to that in
the simulations, meaning that almost an entire core is occupied while the
algorithm is running. As the i.MX6 processor used on the physical vehicle is a
quad-core, this is not a problem if only the SLAM algorithm is executed.
However, if additional functionality were to be added to the vehicle, for
example a pathfinding algorithm, some performance issues may arise. These could
be solved by, for example, reducing the number of particles in the filter, but
may also result in the algorithm producing more erroneous results.
10 Discussion

10.1 Achievements
This thesis has yielded the following results:
• A Simulink model of the physical vehicle and its sensors
• A small-scale vehicle with a working SLAM algorithm
• A comprehensive study comparing different SLAM algorithms
The model-based development approach proved to be beneficial for this thesis.
Comparing the estimated position with an actual position would have been
significantly more difficult if only the real-world experiment had been
performed, as it would have required a highly sophisticated test apparatus.
Without the model, the results section of this thesis would have been rather
thin. The model was probably not necessary for the final implementation itself
though, since the output of the algorithms could have been examined visually in
order to evaluate which one should be implemented. The performance numbers could
not have been obtained without it though, and the model-based approach can be
very useful in situations where testing is difficult and/or expensive.
The performance of the SLAM algorithm implemented was highly satisfactory.
One of the main purposes of the ADEPT project is to develop something which can
be demonstrated at job fairs, and the SLAM algorithm works very well in this
scenario. One of the reasons for this is that it works seamlessly without any
special kind of environment, as long as too many moving objects are avoided.
The results are also very visual, and can easily appeal to a large audience.
The comparison of the SLAM algorithms was also successful. Several performance
tests were devised, which clearly show how well the SLAM algorithms perform
under different circumstances. This thesis has something which other similar
works lack, namely a noisy sensor model. Most other works assume ideal sensors.
This is unrealistic, since there is always sensor noise present to some degree.
A good study of SLAM algorithms needs to examine how well they perform in the
presence of noise, otherwise the evaluation of the algorithms may be overly
optimistic.

10.2 Method
If this thesis were to be conducted again, several improvements to its method
could be made. One is the fact that a lot of time was spent trying to create a
realistic model of the vehicle itself. Since this proved to be too
time-consuming, it was abandoned in favor of a simple vehicle model, but it
nevertheless consumed a lot of time which could have been better spent.
Another aspect which could be improved is the fact that the whole environment
model was created from scratch in Simulink. This was also probably unnecessarily
time-consuming, as ROS-compliant robot simulators already exist, such as Stage
or Gazebo. This was not known, however, until the model was almost completed.
Had these been used (along with Simulink for the sensor model), an equal or
better model could probably have been implemented in a shorter amount of time.
This time could have been spent completing some of the lower-priority tasks, or
on further debugging of slam_karto in order to find out more explicitly why it
had a tendency to crash.

10.3 Complications
During the thesis work, complications arose regarding one of the specified
questions which was to be examined. The specified question was "Development of
a model in Simulink and evaluating the model by comparing the output of the
model to a theoretical position/direction within a map". As the modelling of
the vehicle itself proved too time-consuming, most time was instead spent on
modelling the sensors of the vehicle. This resulted in a very simplistic vehicle
model, which only approximately resembles the vehicle used in this thesis. For
example, when it comes to acceleration and forward velocity, a generic model is
used which has little to do with the physical vehicle. Thus, comparing the model
output to a theoretical position/direction would not say much, as the
acceleration and forward velocity are not based on the physical vehicle. As a
result, this evaluation had to be omitted from the thesis. However, since the
thesis was about evaluating SLAM algorithms, a realistic motion model of the
vehicle is not of high importance. What is of importance is instead accurate
modelling of the sensors, which was done successfully to some degree.
There were also some hardware issues with the physical vehicle. Some components
broke, meaning that time had to be spent making repairs. As some components had
to be ordered, the physical vehicle was not available while waiting for the new
components to arrive. After the results of the real-world experiment had been
gathered, the STM32 board broke. Since this board is hard to replace, no
additional experiments could be done, for example trying to make a map of a
larger environment.

10.4 Future work


Since the model of the vehicle has already been developed, it could be used to
test any ROS-compliant SLAM algorithm with minor further implementation effort.
Several newer algorithms which have not been covered in this thesis could be
evaluated, such as HectorSLAM or Google Cartographer. HectorSLAM would be of
extra interest, as it uses no odometry, meaning that the inaccurate results
provided by the wheel encoders on the physical vehicle would not affect the
results in any way.
Bibliography

[1] coreslam - ros wiki. http://wiki.ros.org/coreslam. Accessed: 24-03-2017. Cited on page 2.
[2] gmapping - ros wiki. http://wiki.ros.org/gmapping. Accessed: 24-03-2017. Cited on page 2.
[3] slamkarto - ros wiki. http://wiki.ros.org/slam_karto. Accessed:
24-03-2017. Cited on page 2.
[4] EKF SLAM for dummies. https://ocw.mit.edu/courses/aeronautics-and-astronautics/16-412j-cognitive-robotics-spring-2005/projects/1aslam_blas_repo.pdf, 2005. Accessed: 24-03-2017. Cited on pages 6, 15, 17, and 18.
[5] Fast large-scale SLAM with improved accuracy in mobile robot. 2010
IEEE International Conference on Robotics and Biomimetics, Robotics and
Biomimetics (ROBIO), 2010 IEEE International Conference on, page 791,
2010. Cited on pages 19 and 26.
[6] About the company ÅF. http://www.afconsult.com/sv/om-af/
historia/, 2016. Accessed: 2017-07-19. Cited on page 1.
[7] sensor_msgs/laserscan message. http://docs.ros.org/api/sensor_msgs/html/msg/LaserScan.html, 2017. Accessed: 2017-06-27. Cited on page 55.
[8] std_msgs/float64multiarray message. http://docs.ros.org/jade/api/std_msgs/html/msg/Float64MultiArray.html, 2017. Accessed: 2017-06-27. Cited on page 55.
[9] std_msgs/header message. http://docs.ros.org/jade/api/std_msgs/html/msg/Header.html, 2017. Accessed: 2017-06-27. Cited on page 55.
[10] Mark Compere. Simple 2D kinematic vehicle steering model and animation. https://se.mathworks.com/matlabcentral/fileexchange/54852-simple-2d-kinematic-vehicle-steering-model-and-animation, 1999. Accessed: 2017-06-27. Cited on page 32.

[11] A. Dine, A. Elouardi, B. Vincke, and S. Bouaziz. Graph-based simultaneous


localization and mapping: Computational complexity reduction on a mul-
ticore heterogeneous architecture. IEEE Robotics & Automation Magazine,
(4):160, 2016. Cited on page 2.

[12] M. W. M. Gamini Dissanayake, Paul Newman, Steven Clark, Hugh F.


Durrant-Whyte, and M. Csorba. A solution to the simultaneous localiza-
tion and map building (slam) problem. IEEE Transactions on Robotics and
Automation, 17(3):229–241, 2001. Cited on pages 2 and 15.

[13] Manon Kok. Probabilistic modeling for sensor fusion with inertial measure-
ments. Linköping Studies in Science and Technology. Dissertations: 1814.
Department of Electrical Engineering, Linköping University, 2016. Cited
on page 42.

[14] M. Laššák and K. Draganová. Improvement of Low-Cost MEMS Gyroscope


Characteristics by Data Filtering and Fusion. Advances in Military Technol-
ogy, 16(2):171 – 178, 2016. Cited on page 33.

[15] Martin Meywerk. Vehicle dynamics. Automotive Series. West Sussex, Eng-
land : Wiley, 2015. Cited on page 29.

[16] William F. Milliken and Douglas L. Milliken. Race Car Vehicle Dynamics.
Warrendale, PA : Society of Automotive Engineers, 1 edition, 1995. Cited on
page 31.

[17] Curotto Molina and Franco Andreas. Graphslam algorithm implementation


for solving simultaneous localization and mapping. Master’s thesis, Univer-
sidad de Chile, 2016. Cited on pages 16, 23, 24, and 26.

[18] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit. Fastslam: A factored


solution to the simultaneous localization and mapping problem. page 1,
2013. Cited on page 27.

[19] Michael Montemerlo and Sebastian Thrun. FastSLAM - A Scalable Method
for the Simultaneous Localization and Mapping Problem in Robotics.
Springer Berlin Heidelberg New York, 1 edition, 2007. Cited on pages 12,
20, 21, and 22.

[20] Erik Narby. Modeling and Estimation of Dynamic Tire Properties. Master’s
thesis, Linköping University, Linköping, Sweden, 2005. Cited on page 7.

[21] Ulrich Nehmzow. Mobile Robotics - A Practical Introduction. Springer-


Verlag London Limited 2003, 2 edition, 2003. Cited on page 35.

[22] Ed Ramsden. Hall-effect sensors - theory and applications. Amsterdam ;


Boston : Elsevier/Newnes, 2 edition, 2006. Cited on page 34.

[23] S.B. Samsuri, H. Zamzuri, M.A. Abdul Rahman, and Ab Mazlan, S.A. Com-
putational cost analysis of extended Kalman filter in simultaneous local-
ization and mapping (EKF-SLAM) problem for autonomous vehicle. 2015.
Cited on page 19.
[24] Machado J. Santos, D. Portugal, and P. R. Rocha. An evaluation of 2d slam
techniques available in robot operating system. 2013 IEEE International
Symposium on Safety, Security, and Rescue Robotics (SSRR), Safety, Secu-
rity, and Rescue Robotics (SSRR), 2013 IEEE International Symposium on,
(1), 2013. Cited on page 6.
[25] Pedro Machado João Santos. Smokenav - simultaneous localization and map-
ping in reduced visibility scenarios. 2013. Cited on page 6.
[26] Roland Siegwart, R. Illah NourBakhsh, and Davide Scaramuzza. Introduc-
tion to Autonomous Mobile Robots. Springer-Verlag London Limited 2003,
2 edition, 2011. Cited on pages 23 and 25.
[27] Cyrill Stachniss. Robot mapping EKF SLAM. http://ais.informatik.uni-freiburg.de/teaching/ws12/mapping/pdf/slam04-ekf-slam.pdf, 2012. Cited on pages 18, 19, and 26.

[28] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic Robotics.
Cambridge, Mass. : MIT Press, 1 edition, 2005. Cited on pages 6, 7, 9, 10,
11, 12, 13, 19, 21, 25, 26, and 27.
[29] Wei Wang, Li Dongying, and Yu Wenxian. Simultaneous localization and
mapping embedded with particle filter algorithm. 2016 10th European Con-
ference on Antennas and Propagation (EuCAP), Antennas and Propagation
(EuCAP), 2016 10th European Conference on, page 1, 2016. Cited on page
2.
[30] Yifan Xia, Jie Li, Lin Qi, and Hao Fan. Loop closure detection for visual
slam using pcanet features. 2016 International Joint Conference on Neu-
ral Networks (IJCNN), Neural Networks (IJCNN), 2016 International Joint
Conference on, page 2274, 2016. Cited on page 16.
