FYP - Autonomous Mapping and Exploring Robot
ISLAMABAD
Mudassar Hussain
Roll # Fa-2020/B.Sc-EE/2030-0142
Ahsan Mansha
Roll # Fa-2020/B.Sc-EE/2030-0123
Syed Mujahid Bin Nauman
Roll # Fa-2020/B.Sc-EE/2030-0068
Supervisor:
Dr. Abdul Khaliq
Co-Supervisor:
Engr. Safdar Munir
Declaration
We hereby declare that this project, neither as a whole nor in part, has been copied
from any source. We further declare that we developed this project and the
accompanying report entirely through our own efforts, under the sincere guidance of
our supervisor. No portion of the work presented in this report has been submitted in
support of any other degree or qualification at this or any other university or institute
of learning; if it is found otherwise, we shall stand responsible.
Signature:______________ Signature:______________
Name:_________________ Name:_________________
Signature:______________
Name:_________________
Fa-2020/B.Sc-EE/2030-0123
Ahsan Mansha
Supervisor:
Dr. Abdul Khaliq
Professor

Head,
Department of Electrical and Computer Engineering
Dedication
We dedicate our final year project, "Autonomous Exploring and Mapping Robot," to
our family and many friends. A special feeling of gratitude and respect goes to our
loving parents, whose words of encouragement and push for tenacity ring in our ears.
We also dedicate this project to our many friends, our supervisor, Co-supervisor,
advisor, and other faculty members who have supported us throughout this journey.
Your guidance, support, and belief in our abilities have been instrumental in making
this project a reality.
First and foremost, we extend our heartfelt thanks to our supervisor, Dr. Abdul
Khaliq, for his invaluable guidance, insightful feedback, and unwavering support
throughout this project. Your expertise and encouragement have been pivotal in
shaping our work.
We are profoundly grateful to our co-supervisor, Engr. Safdar Munir, whose advice
and direction have been instrumental in overcoming numerous challenges. Your
mentorship has been a cornerstone of our success.
We also wish to thank our families for their unending love, patience, and
encouragement. Your belief in us has been a constant source of motivation.
Finally, to our friends and peers, thank you for your camaraderie, assistance, and the
countless discussions that have enriched our understanding and made this journey
memorable.
Mudassar Hussain
Ahsan Mansha
The project's ultimate aim is to create a versatile and robust robotic platform that
can be deployed in various applications. Potential uses include search and rescue
operations, where the robot can quickly and safely explore hazardous areas;
environmental monitoring, where it can gather data in remote or dangerous
locations; and industrial automation, where it can navigate and operate within
dynamic and unpredictable settings. By achieving significant advancements in
autonomous navigation and environmental mapping, this project aspires to
contribute to the broader field of robotics, enhancing the capabilities and
applications of autonomous systems in real-world scenarios.
Mudassar Hussain
Ahsan Mansha
1. Introduction ......................................................................................................... 10
1.1. The Evolution of Robotics .............................................................................. 10
1.2. Defining Autonomous Exploring and Mapping Robots ................................. 10
1.3. Challenges and Future Directions ................................................................... 11
2. System Design .................................................................................................... 13
2.1. Hardware Components .................................................................................... 13
2.2. Benewake TF-Luna LiDAR ............................................................................ 14
2.3. Raspberry Pi 4 ................................................................................................. 16
2.4. Motor Controllers and Motors ........................................................................ 17
2.5. Simulation Environment ................................................................................. 17
3. Algorithm Implementation ................................................................................. 20
3.1. General Details of SLAM Algorithms ............................................................ 20
3.2. SLAM Algorithm Selection and Rationale ..................................................... 21
3.3. Efficiency Optimizations for GMapping ........................................................ 21
3.4. RRT Algorithm and Adaptations for Dynamic Obstacles .............................. 21
4. Experimental Setup ............................................................................................ 24
5. Results and Analysis .......................................................................................... 27
5.1. Quantitative Results ........................................................................................ 27
5.2. Discussion of Successes and Limitations ....................................................... 29
6. Conclusion .......................................................................................................... 32
6.1. Summary of Project Findings ......................................................................... 32
6.2. Potential Improvements and Future Research Directions .............................. 32
References .............................................................................................................. 34
List of Figures
1. Introduction
1.1 The Evolution of Robotics
Technical Challenges
Despite significant advancements, several technical challenges remain:
Navigation in Complex Environments: Developing algorithms that enable robots to
navigate cluttered and dynamic environments efficiently.
Sensor Limitations: Improving the accuracy and reliability of sensors, particularly in
adverse conditions like darkness, fog, or underwater.
Energy Efficiency: Enhancing battery life and energy management to extend
operational duration.
Future Directions
The future of autonomous exploring and mapping robots holds exciting possibilities:
Enhanced AI and Machine Learning: Continuous advancements in AI will enable
robots to learn from experience, improving their autonomy and decision-making
capabilities.
Integration with IoT: The integration of robots with the Internet of Things (IoT) will
facilitate real-time data sharing and coordination with other systems.
Miniaturization: Developing smaller, more compact robots capable of accessing
confined spaces and performing delicate tasks.
Chapter 2
2. System Design
The design of an autonomous exploring and mapping robot requires careful selection
of hardware components. Key components include the chassis, motors, sensors,
processing unit, and power supply. The chassis should be sturdy and capable of
housing all components securely. Motors must provide adequate torque and speed
control for precise movements. The selection of sensors is critical, as they enable the
robot to perceive its environment, detect obstacles, and map its surroundings.

Incorporating advanced sensors such as LIDAR, cameras, and ultrasonic sensors
enhances the robot's ability to navigate complex environments. The integration of
these sensors with the processing unit allows for real-time data acquisition and
processing. The power supply, typically a rechargeable battery, should provide
sufficient power for extended operation while ensuring safety and efficiency.

The processing unit, often a single-board computer like the Raspberry Pi or an
embedded system, handles sensor data processing, decision-making algorithms, and
communication with motor controllers. It is essential to balance processing power and
energy consumption to maintain efficiency. Additionally, the robot should have a
robust communication module for remote monitoring and control. Overall, the
hardware components must be selected and integrated to ensure reliability,
performance, and scalability; proper selection and configuration of these components
form the foundation for successful autonomous exploration and mapping.
Turtlebot3
Turtlebot3 is a low-cost, personal robot kit with open-source software, making it an
ideal platform for research, education, and product prototyping. It is designed to
support the Robot Operating System (ROS), which provides libraries and tools to help
software developers create robot applications. Key features of the Turtlebot3 include:
Modular Design: The Turtlebot3's modular design allows users to easily customize
and upgrade the robot with different sensors, actuators, and computing platforms.
Compact Size: Its small footprint makes it suitable for navigating through tight
spaces and performing tasks in indoor environments.
Open Source: Both the hardware and software of the Turtlebot3 are open source,
allowing for extensive customization and community-driven improvements.
Scalability: The platform supports various configurations, from basic models suitable
for beginners to more advanced setups for complex research projects.
Figure 1. Turtlebot3
2.2. Benewake TF-Luna LiDAR
LIDAR (Light Detection and Ranging) is a remote sensing technology that uses laser
light to measure distances to objects. It works by emitting laser pulses, which then
bounce back from objects to the sensor. By measuring the time it takes for the pulses
to return, the system calculates the distance to the object.
Working Principle
Emission: The LIDAR sensor emits a laser pulse towards the target.
Reflection: The laser pulse hits an object and reflects back to the sensor.
Detection: The sensor detects the reflected pulse.
Calculation: The system calculates the distance to the object based on the time it took
for the pulse to return, using the formula d = (c × t) / 2, where c is the speed of light
and t is the round-trip time.
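As a quick sanity check, the time-of-flight relation described above can be evaluated directly; the 66.7 ns round-trip time used here is an illustrative value, not a measurement from our sensor.

```python
# Time-of-flight distance calculation used by pulsed LiDAR sensors:
# the pulse travels out and back, so the one-way distance is c * t / 2.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """One-way distance to the target from the round-trip pulse time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after roughly 66.7 nanoseconds corresponds to ~10 m.
print(round(tof_distance(66.7e-9), 2))
```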
TF-Luna LiDAR
The TF Luna LiDAR is a compact and cost-effective distance sensor that provides
accurate and reliable distance measurements. It is widely used in robotics for
navigation, obstacle detection, and mapping. Key features of the TF Luna LiDAR
include:
Compact and Lightweight: The small size and low weight make it easy to integrate
into various robotic platforms.
Fast Response Time: It can provide up to 250 measurements per second, allowing
for real-time obstacle detection and avoidance.
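Because, as noted in the limitations discussed later in this report, no ready-made ROS driver was available for the TF-Luna, the sensor is typically read over its UART interface. The sketch below parses one 9-byte data frame following Benewake's commonly documented format (header 0x59 0x59, little-endian distance in centimetres, checksum equal to the low byte of the sum of the first eight bytes); this layout should be verified against the datasheet for your firmware version.

```python
# Parses one 9-byte TF-Luna UART frame. Frame layout assumed here:
# [0x59, 0x59, Dist_L, Dist_H, Strength_L, Strength_H, Temp_L, Temp_H, Checksum]
# where Checksum is the low byte of the sum of the first eight bytes.

def parse_tfluna_frame(frame: bytes):
    """Return (distance_cm, signal_strength), or None for a bad frame."""
    if len(frame) != 9 or frame[0] != 0x59 or frame[1] != 0x59:
        return None  # wrong length or missing header
    if (sum(frame[:8]) & 0xFF) != frame[8]:
        return None  # checksum mismatch, discard the frame
    distance_cm = frame[2] | (frame[3] << 8)  # little-endian 16-bit
    strength = frame[4] | (frame[5] << 8)
    return distance_cm, strength

# Example frame: distance = 0x0120 (288 cm), strength = 0x01F4 (500),
# with the checksum byte computed from the first eight bytes.
raw = bytes([0x59, 0x59, 0x20, 0x01, 0xF4, 0x01, 0x00, 0x00, 0x00])
raw = raw[:8] + bytes([sum(raw[:8]) & 0xFF])
print(parse_tfluna_frame(raw))
```

In a real driver this function would sit behind a loop reading from the serial port and resynchronising on the 0x59 0x59 header.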
2.3. Raspberry Pi 4
The Raspberry Pi 4 is a popular choice for the processing unit in many robotic
systems, including the Turtlebot3. It offers a good balance between performance,
power consumption, and cost. Key features of the Raspberry Pi 4 include:
Memory: It comes with multiple RAM options (2GB, 4GB, or 8GB), allowing users
to choose based on their performance requirements.
Expandability: It has multiple GPIO pins and interfaces (SPI, I2C, UART), enabling
easy integration with various sensors and modules.
2.4. Motor Controllers and Motors
Motor controllers are essential components that regulate the speed, direction, and
torque of the motors used in robotic systems. In the context of the Turtlebot3, motor
controllers play a critical role in ensuring smooth and precise movement. Key features
of the motor controllers include:
Speed Control: They provide precise control over the speed of the motors, which is
crucial for tasks such as navigation and path following.
Direction Control: Motor controllers allow for the easy reversal of motor direction,
enabling the robot to move forward, backward, and turn.
Torque Regulation: They help manage the torque delivered to the motors, which is
important for maintaining stability and handling various terrains.
Integration with ROS: The motor controllers used in Turtlebot3 are designed to
integrate seamlessly with the ROS framework, facilitating easy communication and
control through ROS nodes and topics.
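The speed and direction control described above ultimately reaches the wheels as individual wheel velocities. The sketch below shows the standard differential-drive conversion from a body velocity command (the same quantities a ROS geometry_msgs/Twist message on /cmd_vel carries) to left and right wheel speeds; the track width and wheel radius are illustrative values, not Turtlebot3 specifications.

```python
# Differential-drive kinematics: convert a body command (linear v in m/s,
# angular w in rad/s) into left/right wheel angular velocities.
TRACK_WIDTH = 0.16    # metres between the wheels (assumed value)
WHEEL_RADIUS = 0.033  # wheel radius in metres (assumed value)

def wheel_speeds(v: float, w: float):
    """Return (left, right) wheel angular velocities in rad/s."""
    v_left = v - w * TRACK_WIDTH / 2.0   # linear speed of the left wheel
    v_right = v + w * TRACK_WIDTH / 2.0  # linear speed of the right wheel
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS

# Pure forward motion: both wheels spin at the same rate.
left, right = wheel_speeds(0.1, 0.0)
# Pure rotation in place: the wheels spin in opposite directions.
l2, r2 = wheel_speeds(0.0, 1.0)
```

In a ROS node, a /cmd_vel subscriber would call such a function and forward the results to the motor controller.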
2.5. Simulation Environment
Gazebo
Gazebo is a powerful open-source robotics simulator that integrates with ROS to
provide a rich development environment for testing and developing algorithms,
designing robots, and performing regression testing using realistic scenarios. It
offers a high-fidelity physics engine, a rich library of robot models and
environments, and robust sensor simulation capabilities. Gazebo enables users to
simulate populations of robots in complex indoor and outdoor environments, with
accurate rendering and dynamic interactions. The ability to model the physical
properties of robots and environments, including friction, gravity, and lighting,
allows for detailed and realistic testing before deployment in real-world scenarios.
Rviz
Rviz, short for ROS visualization, is a 3D visualization tool for ROS applications. It
allows developers to visualize sensor data, state information, and the robot’s
environment in real-time. Rviz supports various types of data, including point
clouds, laser scans, occupancy grids, and transforms, making it an invaluable tool
for debugging and development. Users can interact with the visualization by adding,
removing, and configuring displays for different data types, which helps in
understanding the robot's perception and actions within the environment. Rviz's
flexibility and ease of use make it a crucial component in the development and
testing phases of robotic systems, aiding in the rapid identification and resolution of
issues.
Chapter 3
3. Algorithm Implementation
3.1. General Details of SLAM Algorithms
Localization: Estimating the robot's pose (position and orientation) relative to its
surroundings using sensor data such as GPS, IMU, or visual odometry.
Mapping: Building a consistent representation of the environment from that same
sensor data as the robot moves.
SLAM algorithms vary in their approaches to address the localization and mapping
challenges. They can be categorized into probabilistic methods (e.g., Kalman filters,
particle filters) and feature-based or direct methods (e.g., feature extraction, dense
mapping).
We've chosen GMapping for our SLAM implementation due to its robustness,
efficiency, and ease of integration with our sensor suite. GMapping utilizes a grid-
based map representation, which efficiently captures the environment's structure
while allowing for easy interpretation and navigation.
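The grid-based map representation used by GMapping is typically maintained in log-odds form, which turns each Bayesian cell update into a simple addition. The sketch below shows a single-cell occupancy update; the inverse-sensor-model probabilities (0.7 for a hit, 0.3 for a pass-through) are illustrative assumptions, not GMapping's actual parameters.

```python
import math

# Occupancy-grid cell update in log-odds form, the bookkeeping behind
# grid-based SLAM back-ends. Assumed inverse sensor model: p(occupied
# | beam endpoint in cell) = 0.7, p(occupied | beam passes through) = 0.3.
L_OCC = math.log(0.7 / 0.3)   # evidence added when a beam endpoint hits the cell
L_FREE = math.log(0.3 / 0.7)  # evidence added when a beam passes through the cell

def update_cell(log_odds: float, hit: bool) -> float:
    """Fuse one laser observation into a cell's log-odds value."""
    return log_odds + (L_OCC if hit else L_FREE)

def occupancy_probability(log_odds: float) -> float:
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

cell = 0.0  # log-odds 0 means unknown: p = 0.5
for _ in range(3):
    cell = update_cell(cell, hit=True)  # three consecutive endpoint hits
print(round(occupancy_probability(cell), 3))
```

Repeated consistent observations drive the cell towards 0 or 1, while contradictory ones cancel out, which is why the representation is robust to occasional noisy beams.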
Sensor Data Preprocessing: Preprocess sensor data to remove noise and outliers,
reducing the computational burden during mapping and localization.
Incremental Mapping: Update the map incrementally as new sensor data becomes
available, minimizing redundant computations and improving real-time performance.
Initialization: Start with an initial tree T containing the root node at the start position.
Iteration:
1. Sample a random point x_rand in the configuration space.
2. Find the node x_near in the tree T that is nearest to x_rand.
3. Generate a new node x_new by moving from x_near towards x_rand by a step size ϵ.
4. If x_new is in a valid configuration (i.e., not in collision with obstacles), add it to
the tree T.
Termination: Repeat the iteration until the tree reaches the goal region or the
maximum number of iterations is reached.
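The steps above can be sketched as a minimal 2-D RRT. The workspace bounds, step size, goal radius, and circular obstacle model are all illustrative assumptions, not the configuration used in our experiments.

```python
import math
import random

# Minimal 2-D RRT: sample, find nearest node, steer by a fixed step,
# keep the new node only if it is collision-free. Obstacles are circles
# given as ((cx, cy), radius); the 10x10 workspace is an assumption.

def rrt(start, goal, obstacles, step=0.5, max_iters=10_000, goal_radius=0.5):
    random.seed(0)  # fixed seed so repeated runs are reproducible
    nodes = [start]
    parents = {start: None}
    for _ in range(max_iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        near = min(nodes, key=lambda n: math.dist(n, sample))  # nearest node
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Steer from near towards the sample by one step.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if any(math.dist(new, c) <= r for c, r in obstacles):
            continue  # new node collides with an obstacle, discard it
        nodes.append(new)
        parents[new] = near
        if math.dist(new, goal) <= goal_radius:
            # Goal region reached: walk parent pointers back to the start.
            path = [new]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return list(reversed(path))
    return None  # tree never reached the goal region

path = rrt((0.0, 0.0), (9.0, 9.0), obstacles=[((5.0, 5.0), 1.5)])
```

Note that plain RRT only finds a feasible path, not an optimal one; that is the gap the RRT* variant discussed below addresses.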
1. Dynamic-Domain RRT
To deal with dynamic environments where obstacles can move, the dynamic-domain
RRT modifies the sampling domain to consider the changing environment.
2. RRT* with Continuous Re-planning
RRT* is an optimal variant of RRT that improves path quality by rewiring the tree. For
dynamic obstacles, continuous re-planning can be incorporated.
Real-Time Updates: The environment must be constantly monitored, and the tree
must be updated in real-time to reflect the latest positions of dynamic obstacles.
Collision Checking: Efficient collision checking mechanisms are necessary to ensure
that new nodes do not lead to collisions with moving obstacles.
Computational Efficiency: Adaptations must be computationally efficient to ensure
that the planning and re-planning processes are fast enough for real-time applications.
Adapting the RRT algorithm for dynamic obstacles involves various strategies that
consider the movement and velocities of obstacles. These adaptations ensure that the
algorithm can plan safe and efficient paths in environments where obstacles are not
static, making RRT a versatile and robust solution for dynamic path planning
challenges.
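A common ingredient of these adaptations is a predictive collision check: propagate each dynamic obstacle forward under an assumed motion model and test clearance at the time the robot would occupy a candidate node. The sketch below uses a constant-velocity assumption and illustrative values throughout.

```python
import math

# Predictive collision check against one moving obstacle: advance the
# obstacle under a constant-velocity model to time t, then require the
# candidate point to keep (radius + safety margin) of clearance.

def clear_of_moving_obstacle(point, t, obs_pos, obs_vel, obs_radius,
                             safety_margin=0.2):
    """True if `point`, reached at time `t`, stays clear of the obstacle."""
    predicted = (obs_pos[0] + obs_vel[0] * t,
                 obs_pos[1] + obs_vel[1] * t)
    return math.dist(point, predicted) > obs_radius + safety_margin

# Obstacle starts at (5, 0) moving along +y at 1 m/s with radius 0.5 m.
print(clear_of_moving_obstacle((5.0, 2.0), 0.0, (5.0, 0.0), (0.0, 1.0), 0.5))  # clear now -> True
print(clear_of_moving_obstacle((5.0, 2.0), 2.0, (5.0, 0.0), (0.0, 1.0), 0.5))  # obstacle arrives -> False
```

Inside a planner, this check would replace the static collision test for nodes whose arrival time overlaps the obstacle's predicted trajectory, with re-planning triggered whenever new sensor data invalidates the prediction.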
Chapter 4
4. Experimental Setup
Launch Files Setup: Details of the launch files used to start the simulation
environment, load the robot model, and initialize SLAM and RRT exploration
algorithms.
Dynamic Obstacle Handling: Assessment of the robot's ability to detect and avoid
dynamic obstacles during exploration and navigation, including metrics for collision
avoidance and obstacle clearance.
In the Turtlebot3 simulation in Gazebo and RViz using ROS, the Autonomous Exploring
and Mapping Robot showed impressive performance. The robot efficiently explored
the simulated environment, creating detailed and accurate maps. Its navigation
system was reliable, successfully avoiding obstacles and maneuvering smoothly
through the space. The Turtlebot3 operated continuously for an extended period,
demonstrating strong endurance and efficiency. These results highlight the robot's
capability and reliability in autonomous exploration and mapping tasks within the
simulation.
Successes
Simulation Achievements:
Hardware Development:
Hardware Selection: Chose cost-effective and widely available hardware
components, specifically the Raspberry Pi 4 and the TF Luna LiDAR sensor.
Cost Efficiency:
Budget-Friendly Approach: Opted for a low-cost hardware setup, making the project
accessible for replication and further development without requiring expensive
components.
Limitations
Driver Availability:
Lack of ROS Drivers for TF Luna LiDAR: One of the significant limitations encountered
was the absence of readily available ROS drivers for the TF Luna LiDAR sensor. This
lack of support necessitated the development of custom scripts and workarounds to
interface the sensor with ROS, which proved to be challenging and time-consuming.
Limited Field of View and Range: The TF Luna LiDAR has a relatively narrow field of
view and limited range compared to other sensors commonly used in SLAM
applications. This limitation affected the robot’s ability to detect obstacles and map
the environment comprehensively.
Computational Constraints:
Raspberry Pi Processing Power: Although the Raspberry Pi 4 is a powerful single-
board computer, running complex SLAM algorithms and handling real-time sensor
data processing put a significant strain on its computational resources. This resulted
in slower processing times and potential lag in map updates and navigation decisions.
Implementation Complexity:
Custom Software Development: The necessity to develop custom drivers and
integrate them into the ROS ecosystem increased the complexity of the
implementation. This added to the project timeline and required in-depth
knowledge of both hardware interfacing and software development.
6. Conclusion
6.1. Summary of Project Findings
Our project involving the Turtlebot3 simulation in Gazebo and RViz using ROS
focused on developing and evaluating an Autonomous Exploring and Mapping Robot.
The findings from this project demonstrate significant achievements in autonomous
exploration, mapping, and navigation. The robot efficiently explored the simulated
environment, producing highly detailed and accurate maps. Its navigation system
proved reliable and effective, successfully avoiding obstacles and maneuvering
through the space with ease. The robot operated continuously for an extended
period, showcasing its endurance and energy efficiency. These outcomes highlight
the robustness of the SLAM and RRT algorithms implemented, confirming the
system’s capability to perform complex autonomous tasks reliably. Overall, the
project validates the potential of the Turtlebot3 and the ROS framework in real-
world applications of autonomous exploration and mapping.
While the project demonstrated significant success, there are several areas for
potential improvements and future research directions to further enhance the
capabilities of the Autonomous Exploring and Mapping Robot.
Potential Improvements:
Enhanced Obstacle Detection and Avoidance: Integrating more advanced sensors
such as stereo cameras or depth sensors could improve the robot’s ability to detect
and navigate around smaller or more complex obstacles with greater accuracy.