


2D Autonomous Robot Localization using Fast SLAM 2.0
and YOLO in Long Corridors

Abdellah Chehri1 *, Ahmed Zarai1, Alfred Zimmermann2, Rachid Saadane3


1 University of Quebec in Chicoutimi (UQAC), 555 bvd de l'Université, Chicoutimi (Québec) G7H 2B1, Canada
2 Reutlingen University, Faculty of Informatics, Alteburgstraße 150, 72762 Reutlingen, Germany
3 SIRC/LaGeS-EHTP, EHTP, Km 7 Route El Jadida, Oasis, Morocco

* Corresponding author: achehri@uqac.ca

Abstract. Autonomous navigation is one of the main areas of research in mobile robots and intelligent connected vehicles. In this context, we present a general view of robotics, the progress of research, and advanced methods for improving the localization of autonomous robots. We seek to evaluate algorithms and techniques that give robots the ability to move safely and autonomously in a complex and dynamic environment. Under these constraints, we focus on a specific problem: evaluating a simple, fast, and lightweight SLAM algorithm that minimizes localization errors. We present and validate a FastSLAM 2.0 system combining scan matching and loop closure detection. To allow the robot to perceive the environment and detect objects, we study one of the best-performing deep learning techniques, convolutional neural networks (CNNs), and validate our tests using the YOLOv3 algorithm.

Keywords: autonomous vehicles, SLAM, Lidar sensor, deep learning, convolutional neural networks.

1 Introduction
Robotics and autonomous vehicles promise significant advancements across many domains. In intelligent robotics, one of the most strategic elements is mobility, or navigation: the ability of robots to orient themselves, move around, and recognize their environment in real time. The design of an autonomous robot faces many problems, since the robot must react properly to the real world. This implies a number of constraints, including the appropriate design, the choice of sensors, and the methods used for localization, mapping, navigation, planning, object classification, and more [1]-[5].
Localizing a mobile robot and mapping its environment has received the most research attention over the past decade. In order to navigate safely and purposefully, an autonomous mobile robot must determine its position in its environment.

Local localization consists of estimating the position and orientation by integrating the information provided by various sensors. The integration starts from the initial position and is continuously updated over time.
The mobile robot uses odometry together with measurements of the environment: it matches the observed environmental characteristics to the map (topological or geometric) known by the robot in order to localize itself successfully [6].
However, for a robot to react correctly to its environment, it must be autonomous and intelligent. In this context, we are interested in presenting a general view of the methods related to the localization of autonomous robots and, subsequently, in combining them with artificial intelligence techniques applied to robotics.
A robot can be automated by using specific sensors; it typically relies on three main types: cameras, radar, and LiDAR.
To that end, the robot must plan and make decisions, and it must build a map of the environment from lidar data via simultaneous localization and mapping (SLAM). It must navigate constrained environments using algorithms for trajectory and motion planning; path planners can be used to compute an obstacle-free path on a given map.
For these robots to perform their tasks efficiently, they need a minimum of artificial intelligence. This intelligence generally provides them with cognition, decision-making, and the ability to manipulate different objects. The robot must also perceive the environment in order to detect and track objects and to estimate motion interactively. In this sense, the best-suited technique is deep learning for image classification and object detection using convolutional neural networks (CNNs).
Finally, algorithms and tools can be used to analyze, systematically design, and visualize the behavior of complex systems in the time and frequency domains.
This paper is organized as follows. Section 2 describes simultaneous localization and mapping. Section 3 presents You Only Look Once (YOLO), a real-time object detection algorithm. Simulation results are given in Section 4, and Section 5 concludes the paper.

2 SLAM: Problem and Solution


To localize and map autonomous mobile robots, we focus on the most efficient SLAM methods [6]-[7]. SLAM is the basis of most navigation systems.

2.1 SLAM: fundamental principles and issues

SLAM is a process by which a robot can create a map of an environment and at the
same time use that map to infer its location. In SLAM, the platform's trajectory and the
location of all landmarks are estimated online, without needing to know the location a
priori.
Consider the case of a mobile robot moving through an environment and making relative observations of a number of unknown landmarks using a sensor mounted on the robot, as shown in Figure 1. The robot's locations and the landmark locations must be estimated simultaneously; the true locations are never known or measured directly. Observations are made between the true robot pose and the landmarks, and errors appear between the estimated and actual landmark locations. This is what makes SLAM relevant: it is considered a fundamental problem for autonomous robots and autonomous vehicles [2].

Figure 1. Schematic diagram of simultaneous localization and mapping (SLAM) [6].

At a time $k$, the following quantities are defined:

• $x_k$: the state vector describing the location and orientation of the vehicle.
• $u_k$: the control vector, applied at time $k-1$ to drive the vehicle to the state $x_k$ at time $k$.
• $m_i$: a vector describing the location of the $i$-th landmark, whose true location is assumed to be time-invariant.
• $z_{ik}$: an observation, taken from the vehicle, of the location of the $i$-th landmark at time $k$.

In addition, the following sets are also defined:

• $X_{0:k} = \{x_0, x_1, \dots, x_k\} = \{X_{0:k-1}, x_k\}$: the history of vehicle locations.
• $U_{0:k} = \{u_1, u_2, \dots, u_k\} = \{U_{0:k-1}, u_k\}$: the history of control inputs.
• $m = \{m_1, m_2, \dots, m_n\}$: the set of all landmarks.
• $Z_{0:k} = \{z_1, z_2, \dots, z_k\} = \{Z_{0:k-1}, z_k\}$: the set of all landmark observations.
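As a small illustration (our own sketch, not taken from the paper), these quantities could be represented in MATLAB as follows for a planar robot with range-bearing observations:

```matlab
% Minimal, illustrative representation of the SLAM quantities defined above.
x_k  = [0; 0; 0];               % vehicle state: position (x, y) and heading theta
u_k  = [0.5; 0.1];              % control input, e.g. linear and angular velocity
m    = [2.0 1.5; 5.0 -3.0];     % landmark map: one [x; y] column per landmark m_i
z_ik = [4.2; 0.3];              % observation of landmark i: range and bearing
X_hist = x_k;                   % history X_{0:k}, grown column-wise over time
U_hist = u_k;                   % history U_{0:k}
Z_hist = z_ik;                  % history Z_{0:k}
```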

2.2 Probabilistic SLAM

In probabilistic form, simultaneous localization and mapping (SLAM) requires that the following probability distribution be computed for all times $k$:

$P(x_k, m \mid Z_{0:k}, U_{0:k}, x_0)$ (1)



This distribution describes the joint posterior density of the landmark locations and the vehicle state (at time $k$), given the recorded observations and control inputs up to and including time $k$, together with the initial state of the vehicle [9].
In general, a recursive solution to the SLAM problem is desirable. Starting with an estimate of the distribution $P(x_{k-1}, m \mid Z_{0:k-1}, U_{0:k-1})$ at time $k-1$, the joint posterior following a control $u_k$ and an observation $z_k$ is computed using Bayes' theorem. This computation requires a state transition model and an observation model describing the effect of the control input and of the observation, respectively [9].
The observation model describes the probability of making an observation $z_k$ when the vehicle location and the landmark locations are known; it is written as

$P(z_k \mid x_k, m)$ (2)

It is reasonable to assume that, once the vehicle location and the map are defined, the observations are conditionally independent given the map and the current state of the mobile robot.
The motion model of the mobile robot can be described in terms of a probability distribution over state transitions as

$P(x_k \mid x_{k-1}, u_k)$ (3)

The state transition is assumed to be a Markov process in which the next state depends only on the immediately preceding state $x_{k-1}$ and the applied control $u_k$; it is independent of both the observations and the map.
The SLAM algorithm is then implemented as a standard two-step recursive (sequential) procedure of prediction (time update) and correction (measurement update):

• Time update:

$P(x_k, m \mid Z_{0:k-1}, U_{0:k}, x_0) = \int P(x_k \mid x_{k-1}, u_k) \, P(x_{k-1}, m \mid Z_{0:k-1}, U_{0:k-1}, x_0) \, dx_{k-1}$ (4)

• Measurement update:

$P(x_k, m \mid Z_{0:k}, U_{0:k}, x_0) = \dfrac{P(z_k \mid x_k, m) \, P(x_k, m \mid Z_{0:k-1}, U_{0:k}, x_0)}{P(z_k \mid Z_{0:k-1}, U_{0:k})}$ (5)

Equations (4) and (5) provide a recursive procedure for computing the joint posterior $P(x_k, m \mid Z_{0:k}, U_{0:k}, x_0)$ over the robot state $x_k$ and the map $m$ at time $k$, based on all observations $Z_{0:k}$ and all control inputs $U_{0:k}$ up to time $k$.
The recursion is a function of a vehicle model $P(x_k \mid x_{k-1}, u_k)$ and an observation model $P(z_k \mid x_k, m)$. It is interesting to note that the map-building problem can be formulated as the computation of the conditional density $P(m \mid X_{0:k}, Z_{0:k}, U_{0:k})$. This assumes that the location of the mobile robot $x_k$ is known (or at least deterministic) at all times, subject to knowledge of the initial location; a map $m$ is then constructed by fusing observations taken from different locations. Conversely, the localization problem can be formulated as the computation of the probability distribution $P(x_k \mid Z_{0:k}, U_{0:k}, m)$. This assumes that the landmark locations are known with certainty, and that the goal is to estimate the robot's location relative to these landmarks.
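As a toy illustration of this recursion (our own sketch, not the paper's code), the following MATLAB fragment applies the time update (4) and the measurement update (5) with a particle approximation to the 1-D localization sub-problem $P(x_k \mid Z_{0:k}, U_{0:k}, m)$, assuming a single known landmark and Gaussian motion and observation noise:

```matlab
% Particle approximation of the prediction/correction recursion, 1-D example.
rng(1);
N = 500;                               % number of particles
landmark = 10;                         % known landmark position (the map m)
particles = zeros(1, N);               % samples of the vehicle state x_{k-1}
sigma_u = 0.2;  sigma_z = 0.5;         % motion and observation noise (std. dev.)
u_k = 1.0;                             % control input: commanded forward motion
x_true = 0;
for k = 1:8
    x_true = x_true + u_k;                         % simulated ground truth
    z_k = (landmark - x_true) + sigma_z * randn;   % range observation of the landmark
    % Time update, Eq. (4): sample each particle from P(x_k | x_{k-1}, u_k)
    particles = particles + u_k + sigma_u * randn(1, N);
    % Measurement update, Eq. (5): weight each particle by P(z_k | x_k, m)
    w = exp(-(z_k - (landmark - particles)).^2 / (2 * sigma_z^2));
    w = w / sum(w);
    % Resampling so the particle set again represents the posterior
    edges = cumsum(w);  edges(end) = 1;
    idx = arrayfun(@(r) find(edges >= r, 1), rand(1, N));
    particles = particles(idx);
end
fprintf('true position %.2f, estimated position %.2f\n', x_true, mean(particles));
```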

2.3 FastSLAM

The FastSLAM algorithm, introduced by Montemerlo et al. [10], marked a fundamental conceptual shift in the design of recursive probabilistic SLAM. Based on recursive Monte Carlo sampling (particle filtering), FastSLAM was the first approach to directly represent the nonlinear process model and the non-Gaussian pose distribution.
FastSLAM still linearizes the observation model, but this is generally a reasonable approximation for range-bearing measurements when the vehicle pose is known.
The high-dimensional state space of SLAM makes a direct application of particle filters impractical. However, it is possible to reduce the sample space by applying Rao-Blackwellization (RB), in which the joint state is partitioned according to the product rule $P(x_1, x_2) = P(x_2 \mid x_1) P(x_1)$; if $P(x_2 \mid x_1)$ can be represented analytically, only $P(x_1)$ needs to be sampled, $x_1^{(i)} \sim P(x_1)$.
The posterior distribution is then represented by the set $\{x_1^{(i)}, P(x_2 \mid x_1^{(i)})\}_{i=1}^{N}$, and statistics such as the marginal $P(x_2) \approx \frac{1}{N} \sum_{i} P(x_2 \mid x_1^{(i)})$ can be obtained with greater accuracy than by sampling over the joint space.
The joint SLAM posterior can accordingly be factored into a vehicle component and a conditional map component:

$P(X_{0:k}, m \mid Z_{0:k}, U_{0:k}, x_0) = P(X_{0:k} \mid Z_{0:k}, U_{0:k}, x_0) \, P(m \mid X_{0:k}, Z_{0:k})$ (6)

Here, the probability distribution is over the trajectory $X_{0:k}$ rather than over the single pose $x_k$ because, when the distribution is conditioned on the trajectory, the map landmarks become independent.
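To make the factorization concrete, the sketch below (our illustration, assuming a planar robot and range-bearing landmarks) shows the structure of a Rao-Blackwellized particle as used in FastSLAM: the pose is sampled, while each landmark is tracked by its own small EKF conditioned on that pose, as in Eq. (6):

```matlab
% Structure of a Rao-Blackwellized FastSLAM particle and its prediction step.
N = 100;  nLandmarks = 5;
particle = struct( ...
    'pose',   [0; 0; 0], ...                         % sampled pose x_k = (x, y, theta)
    'lmMean', zeros(2, nLandmarks), ...              % EKF mean of each landmark
    'lmCov',  repmat(eye(2), 1, 1, nLandmarks), ...  % EKF covariance of each landmark
    'weight', 1 / N);
particles = repmat(particle, 1, N);

% Prediction: propagate every particle's pose through the (noisy) motion model.
u_k = [0.5; 0.05];  dt = 1;  sigmas = [0.05; 0.01];
for i = 1:N
    v  = u_k(1) + sigmas(1) * randn;                 % sampled linear velocity
    w  = u_k(2) + sigmas(2) * randn;                 % sampled angular velocity
    th = particles(i).pose(3);
    particles(i).pose = particles(i).pose + [v * cos(th); v * sin(th); w] * dt;
end
% Each particle's landmark EKFs are then updated from the observation z_k, and
% the particle set is resampled according to the weights (omitted for brevity).
```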

3 You Only Look Once (YOLO) Algorithm


YOLO is one of the most efficient object detection systems for real-time processing. It is a new approach to object detection: previous work repurposed classifiers to perform detection, whereas processing images with YOLO is simple and straightforward. The system resizes the input image to 448 × 448, runs a single convolutional network on the image, and thresholds the resulting detections by the model's confidence [11].
Fully connected neural networks usually do not perform well on images: if each pixel is an input and more layers are added, the number of parameters grows enormously.
What distinguishes one image from another is its spatial structure; pixels that are close to each other are highly correlated. Therefore, the spatial structure of the input volumes must be preserved to obtain good performance on image classification tasks.
The VGG16 model was created by the Visual Geometry Group at the University of Oxford and achieved top results in the ImageNet 2014 competition; it is one of the simplest and most efficient models to use. The VGG ImageNet team also created a smaller and faster model. These networks recognize 1000 categories of images and use input images of size 224 × 224. VGG16 is one of the architectures used in our YOLO simulation.
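As a brief illustration (assuming MATLAB's Deep Learning Toolbox and its pretrained VGG-16 support package, which the paper does not explicitly name), the 1000-class, 224 × 224 VGG16 network described above can be used for classification as follows:

```matlab
% Classify one image with the pretrained VGG16 network (illustrative sketch).
net = vgg16;                                          % pretrained ImageNet model
img = imread('peppers.png');                          % sample image shipped with MATLAB
img = imresize(img, net.Layers(1).InputSize(1:2));    % resize to the 224 x 224 input size
[label, scores] = classify(net, img);
fprintf('Predicted class: %s (%.1f%% confidence)\n', string(label), 100 * max(scores));
```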

4 Simulation Results
This section presents the results of the simulations of the methods chosen for localization, mapping, and the intelligent system applied to autonomous robots.
Using MATLAB and Python under Linux, we simulate both the FastSLAM and YOLO algorithms and evaluate the performance and efficiency the robot needs for localization and mapping combined with object detection.
Portable laser rangefinders (lidar) combined with simultaneous localization and mapping (SLAM) are an efficient way to acquire floor plans. Real-time floor plan generation and visualization help the robot assess the quality and coverage of the captured data.
As stated previously, we simulate the FastSLAM 2.0 method using lidar sensor data from MATLAB's SLAM data set.
This simulation's contribution relies on the loop closure technique, which aims to determine the environment map and the location of the robot with great precision. The goal of the simulation is to build a map of the environment from lidar scans and to recover the robot's trajectory.
We loaded a sub-sampled MATLAB data set comprising laser scans collected by an autonomous robot in an indoor environment; the average displacement between two consecutive scans is approximately 0.6 meters.
A floor plan and the approximate path of the robot are provided for illustration purposes. This image shows the environment being mapped and the approximate route of the robot.
In this simulation, data from a Clearpath Robotics "Jackal" robot was used. The robot is equipped with a SICK TiM-511 laser scanner with a maximum range of 10 meters.
We set the maximum lidar range to 8 m, slightly below the sensor's maximum range, because laser readings are less accurate near the maximum range. We also set the grid map's resolution to 20 cells per meter, which gives an accuracy of 5 cm.
The loop closure parameters are defined empirically. We used a higher loop closure threshold to reject false positives in the loop closure identification process, since scans collected in an environment with similar or repeated features are more likely to produce false positives. We also used a larger loop closure search radius, which allows the algorithm to search a broader region of the map around the current pose estimate. The loop closure parameters are set as follows:
Loop closure threshold = 210
Loop closure search radius = 8
To test the algorithm, we first add the first ten scans and reconstruct the scene by plotting the scans and the poses tracked by FastSLAM 2.0. We then gradually add the remaining scans to the FastSLAM algorithm. Loop closures should be detected automatically as the robot moves, and pose graph optimization is performed whenever a loop closure is identified. We therefore plot the scans and poses each time a loop closure is identified and verify the results visually. Finally, we draw the final constructed map once all scans have been added to FastSLAM 2.0, as sketched below.
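The following sketch summarizes this workflow. It assumes MATLAB's lidarSLAM interface from the Navigation Toolbox, which matches the parameters reported above; the paper does not name the exact API it uses, and the data file name is illustrative.

```matlab
% Lidar SLAM with scan matching and loop closure detection.
maxLidarRange = 8;                         % metres, below the SICK TiM-511 10 m limit
mapResolution = 20;                        % cells per metre, i.e. 5 cm accuracy
slamAlg = lidarSLAM(mapResolution, maxLidarRange);
slamAlg.LoopClosureThreshold    = 210;     % higher threshold rejects false positives
slamAlg.LoopClosureSearchRadius = 8;       % wider search around the pose estimate

load('offlineSlamData.mat', 'scans');      % cell array of lidarScan objects (illustrative name)
for i = 1:numel(scans)
    isAccepted = addScan(slamAlg, scans{i});   % scan matching + automatic loop closure detection
    if isAccepted && mod(i, 10) == 0
        show(slamAlg);  drawnow;               % periodically verify scans and poses visually
    end
end
show(slamAlg);                             % final scans and poses after pose graph optimization
title('Scans and poses estimated by the SLAM algorithm');
```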

An image of the scans and the pose graph is overlaid on the original floor plan. The results show that the map matches the original floor plan well after all the scans have been added and the pose graph has been optimized (Figure 2 and Figure 3).

Figure 2: Map of the occupancy grid

Figure 3: Optimized 3D pose graph.

This preliminary simulation presented a FastSLAM 2.0 system combining scan matching and loop closure detection, which yields a pose graph optimization.

The FastSLAM algorithm incrementally processes the lidar scans and builds a pose graph linking them. The robot can establish one or more loop closures along its path, and the algorithm uses the loop closure information to update the map and refine the estimated robot trajectory.
This method gives excellent performance for mapping and locating the robot; SLAM with the lidar sensor thus brings the robot closer to full autonomy.
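Under the same assumption about the Navigation Toolbox interface, the occupancy grid of Figure 2 can be rebuilt from the optimized scans and poses roughly as follows:

```matlab
% Build the occupancy grid map from the loop-closure-optimized scans and poses.
[optScans, optPoses] = scansAndPoses(slamAlg);
map = buildMap(optScans, optPoses, mapResolution, maxLidarRange);
figure;
show(map);                                 % occupancy grid map (cf. Figure 2)
hold on;
show(slamAlg.PoseGraph, 'IDs', 'off');     % overlay the optimized pose graph (cf. Figure 3)
hold off;
title('Occupancy grid map with the optimized robot trajectory');
```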

4.1 Perceive the Environment with YOLO

One of the critical tasks required of an intelligent autonomous robot is detecting and classifying objects.
The integrated YOLOv3 application allows the mobile robot to perform object detection and tracking, motion estimation, 3D point cloud processing, and sensor fusion. We use deep learning for image classification, regression, and feature learning with convolutional neural networks (CNNs).
YOLOv3 is the latest version of the YOLO system considered here; it also introduces a new classifier network that performs better than its predecessors [12].
The system predicts bounding boxes using dimension clusters as anchor boxes [11]. These predictions are made relative to the grid cell position and anchor size rather than to the full frame, which improves performance.
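Concretely, following [12], each box is decoded from the network outputs $(t_x, t_y, t_w, t_h)$ using the offset $(c_x, c_y)$ of its grid cell and the dimensions $(p_w, p_h)$ of its anchor (prior) box:

$b_x = \sigma(t_x) + c_x$, $\quad b_y = \sigma(t_y) + c_y$, $\quad b_w = p_w e^{t_w}$, $\quad b_h = p_h e^{t_h}$

where $\sigma(\cdot)$ is the sigmoid function, so the predicted center $(b_x, b_y)$ stays inside its grid cell while the width and height $(b_w, b_h)$ rescale the anchor.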
YOLOv3 predicts boxes at three different scales. The system extracts features from these scales using a concept similar to feature pyramid networks: several convolutional layers are added to the base feature extractor, and the last of them predicts a 3-D tensor encoding the bounding box, the objectness score, and the class predictions.
In the experiment with COCO (the data set used) [13], three boxes are predicted at each scale, so the tensor is $N \times N \times [3 \times (4 + 1 + 80)]$: 4 bounding box offsets, 1 objectness prediction, and 80 class predictions. The feature map from two layers earlier is then taken and upsampled by a factor of two.
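For instance, with the commonly used 416 × 416 input, the coarsest of the three scales has $N = 13$, so that stage outputs a $13 \times 13 \times 255$ tensor (since $3 \times (4 + 1 + 80) = 255$); the two finer scales use $N = 26$ and $N = 52$.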
A new network is used to perform feature extraction. It is a hybrid between the network used in YOLOv2 (Darknet-19) and residual networks: it uses successive 3 × 3 and 1 × 1 convolutional layers, has shortcut connections, and is significantly larger. Because the network has 53 convolutional layers, it is called Darknet-53.
We implemented the YOLOv3 algorithm under Ubuntu because the YOLO code base and our 53-layer convolutional network are easier to build and run on a Linux system.
• mAP ("mean Average Precision") is the metric used to measure the precision of object detectors.
• Intersection over Union (IoU) measures the overlap between two regions; it quantifies how well the detector's predicted box matches the ground truth (the extent of the real object); see the sketch after this list.
• SSD ("Single-Shot MultiBox Detector") applies data augmentation for small objects in order to improve mAP.
• COCO ("Common Objects in Context") is a large-scale object detection, segmentation, and captioning data set [13].
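As a small worked example of the IoU metric (our own sketch), two axis-aligned boxes given as [x y width height] can be compared as follows:

```matlab
% Intersection over Union (IoU) between a predicted box and a ground-truth box.
boxA = [50 50 100 80];            % e.g. predicted bounding box [x y w h]
boxB = [60 55 100 80];            % e.g. ground-truth bounding box [x y w h]
xOverlap = max(0, min(boxA(1) + boxA(3), boxB(1) + boxB(3)) - max(boxA(1), boxB(1)));
yOverlap = max(0, min(boxA(2) + boxA(4), boxB(2) + boxB(4)) - max(boxA(2), boxB(2)));
interArea = xOverlap * yOverlap;
unionArea = boxA(3) * boxA(4) + boxB(3) * boxB(4) - interArea;
iou = interArea / unionArea;
fprintf('IoU = %.3f\n', iou);     % a detection is usually counted as correct when IoU >= 0.5
```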
In terms of COCO's averaged AP metric, YOLOv3 is on par with the SSD variants but three times faster [13]. It still lags a little behind other models such as RetinaNet. However, when looking at the older detection metric of mAP at IoU = 0.5 ($AP_{50}$ in the graphs), YOLOv3 is a very strong detector that excels at producing decent boxes for objects. With the new multi-scale predictions, YOLOv3 also shows relatively high $AP_S$ (small-object) performance.
In our simulation under Linux, we observed that YOLOv3 offers considerable advantages over other detection systems in terms of accuracy versus speed on the $AP_{50}$ metric: it is both faster and more accurate.
As can be seen from Figure 4, the prediction is completed within 0.078041 seconds, and the confidence percentage of each detected object is reported in the Linux terminal.

Fig. 4. Prediction and detection of different objects in real-time using YOLOv3.

5 Conclusion and Future Works


In this paper, we studied the concept of autonomous and intelligent robots. In the first part of the work, we reviewed the localization and mapping techniques of autonomous robots and vehicles. The second part was devoted to studying the most efficient perception methods to be carried by the robot. The goal is to identify the objects around the autonomous robot to support safe autonomous navigation: information about the objects around the robot is a great asset in avoiding collisions, and the detection, classification, and tracking of these objects also help autonomous robots localize themselves better in the environment.
To make the robot intelligent and autonomous, we must collect and process information via specific sensors such as cameras and lidar. A robot must also plan and make decisions; that is why we studied simultaneous localization and mapping (SLAM) to build a map of the environment and locate the robot within it.
Using MATLAB and Python under Linux, we focused on simulating the algorithms studied and interpreting the results. Regarding SLAM, we validated a FastSLAM 2.0 system using sensory information from lidar together with loop closure detection, which provides pose graph optimization. This method shows excellent performance for the mapping and localization of the autonomous robot. We validated our tests using the YOLOv3 algorithm, a very powerful detector that is both fast and accurate.
Many interesting perspectives could be considered in future work. More efficient filters could be integrated into SLAM, with particular attention to computational complexity, data association, and the representation of the environment.

References
1. A. Georgiev and P. K. Allen, "Localization methods for a mobile robot in urban environ-
ments," in IEEE Transactions on Robotics, vol. 20, no. 5, pp. 851-864, Oct. 2004.
2. A. Chehri, H. T. Mouftah, “Localization for Vehicular Ad Hoc Network and Autonomous
Vehicles, Are We Done Yet?”, Connected and Autonomous Vehicles in Smart Cities, CRC
Press, Taylor & Francis Group, 2020.
3. A. Chehri and P. Fortier, "Autonomous Vehicles in Underground Mines, Where We Are,
Where We Are Going?" 2020 IEEE 91st Vehicular Technology Conference (VTC2020-
Spring), Antwerp, Belgium, 2020, pp. 1-5, 2020.
4. Chehri A., Quadar N., Saadane R. (2020) Communication and Localization Techniques in
VANET Network for Intelligent Traffic System in Smart Cities: A Review. In: Qu X., Zhen
L., Howlett R., Jain L. (eds) Smart Transportation Systems 2020. Smart Innovation, Systems
and Technologies, vol 185. Springer, Singapore.
5. Kong, Fantian, Chen, Youping, Xie, Jingming, et al. Mobile robot localization based on
extended kalman filter. WCICA. p. 9242-9246, 2006.
6. H. Durrant-Whyte and T. Bailey, "Simultaneous localization and mapping: part I," in IEEE
Robotics & Automation Magazine, vol. 13, no. 2, pp. 99-110, June 2006.
7. M. Bosse, et al., "Simultaneous localization and map building in large-scale cyclic environ-
ments using the Atlas framework", Int. J. Robot. Res., vol. 23, no. 12, pp. 1113-1140, 2004.
8. A. Chehri, P. Fortier, and P. M. Tardif, "Geo-Location with Wireless Sensor Networks using
Non-linear Optimization, " Int. J. of Comput. Sci. and Network Security, vol. 8, no. 1, pp.
145-154, Jan. 2008.
9. S. L. Bowman, N. Atanasov, K. Daniilidis and G. J. Pappas, "Probabilistic data association
for semantic SLAM," 2017 IEEE International Conference on Robotics and Automation
(ICRA), Singapore, 2017, pp. 1722-1729, doi: 10.1109/ICRA.2017.7989203.
10. M. Montemerlo, et al. "Fast-SLAM: A factored solution to the simultaneous localization and
mapping problem", Proc. AAAI Nat. Conf. Artif. Intell., pp. 593-598, 2002.
11. J. Redmon and A. Farhadi, "YOLO9000: Better, Faster, Stronger," 2016.
12. J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement" in arXiv:1804.02767,
2018, [online] Available: https://arxiv.org/abs/1804.02767.
13. Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, et al., "Microsoft COCO:
Common objects in context" in Computer Vision—ECCV, Cham, Switzerland:Springer, pp.
740-755, 2014.
