
UNIT-4

Robotic localization: Bayesian statistics to locate a robot in space, sensor measurements to safely navigate an environment, Gaussian uncertainty, a histogram filter for robot localization in Python.

Robotic localization: Robot localization means helping a robot understand where it is in a room
or any area. It uses sensors like cameras, GPS, or LIDAR to find its position and direction.
This is very important because the robot needs to know where it is to move correctly, avoid
obstacles, and reach its goal.
Without localization, the robot can get lost, stuck, or go in the wrong direction. That’s why
localization is a key part of making robots work smartly and safely.

Example Explanation: This example shows how robots often estimate their position using
sensor data — but their guess may not be perfect. The goal of localization is to make this
estimation as accurate as possible.
• The robot is in a room with an obstacle.
• The red dot shows the robot’s actual position and orientation (where it really is).
• The green dot shows the estimated position, based on the robot’s sensors and
calculations.
• The green ellipse represents uncertainty, meaning the robot isn’t 100% sure of its exact
location.

What do robots use for localization?


Robots use various sensors and algorithms for localization. Common sensors include cameras,
LiDAR, ultrasonic sensors, and wheel encoders.
These sensors provide data that can be processed by algorithms such as Kalman filters, particle
filters, and Simultaneous Localization and Mapping (SLAM) to estimate the robot's position and
orientation.

What is self-localization in robotics?


Self-localization in robotics refers to a robot's ability to determine its own position and
orientation within its environment without relying on external references or assistance. This is
typically achieved through the use of onboard sensors and algorithms that process the sensor data
to estimate the robot's location.

What sensors are used for localization?


1. Cameras: Provide visual data that can be processed to identify landmarks and estimate the
robot's position relative to those landmarks.
2. LiDAR: Uses laser beams to measure distances to surrounding objects, creating a detailed map
of the environment that can be used for localization.
3. Ultrasonic sensors: Emit sound waves and measure the time it takes for the waves to bounce
back, providing distance measurements to nearby objects.
4. Wheel encoders: Measure the rotation of the robot's wheels, allowing the robot to estimate its
position based on its movement.

Step-by-Step Working of Robot Localization (As Shown in the Diagram)

1. Encoder
• This block gives information about how much the robot has moved.
• It reads movements like wheel rotation or distance traveled.
• This movement data is passed to the Prediction of Position block.

2. Prediction of Position (e.g., Odometry)


• Based on the encoder data (how far the robot thinks it has moved), the robot predicts its
new position.
• This is just a guess because errors (like wheel slip) can happen.
• The robot may also use some info from the Map Database to help with prediction.
Output: A “predicted position” of the robot.

3. Observation
• This block collects real-world data from the robot’s sensors like a camera, LIDAR, or
sonar.
• It looks at the robot’s surroundings and extracts useful features (e.g., walls, landmarks).
• This is part of Perception — the robot seeing its environment.
Output: Raw sensor data or extracted features.

4. Matching

• The robot now compares the predicted position with what it’s actually seeing through
its sensors.
• It checks if the sensor data “matches” the known map or expected surroundings.
• If the match is good (YES), the robot is likely in the right spot.
Input: Predicted position + Observation data
Output: Matched observations

5. Position Update (Estimation)


• After a successful match, the robot updates its position.
• This means it corrects its earlier prediction using real sensor data.
• This new, more accurate position is used in the next cycle.
Output: Updated and corrected robot position

6. Feedback Loop
• The updated position goes back to the Prediction of Position block.
• This loop repeats as the robot keeps moving and updating its location over time.

7. Map Database (Optional Input)


• A stored map of the environment.
• It may help during prediction or matching to compare current position with known
features.

All Labels from Diagram Explained

• Encoder: Measures how much the robot moved.
• Prediction of Position: Calculates where the robot might be (based on encoder + map).
• Observation: Gathers sensor data about the robot’s surroundings.
• Matching: Compares sensor data with predicted position or map.
• Position Update: Fixes the robot’s position using matched observations.
• Map Database: Optional map used for prediction or comparison.
• Perception: Process of sensing the world through camera/LIDAR etc.
• Predicted Position: The guessed location before checking with real data.
• Matched Observations: When sensor data agrees with predicted data, helping correct position.
• Position: Final corrected location of the robot, used in the next round of prediction.

In short, the robot moves, guesses its position, looks around, checks if its guess is correct,
and fixes its position. Then it repeats this again and again while it moves.
This way, the robot always has a good idea of where it is, even if its movements are not perfect.

Bayesian statistics to locate a robot in space

[Worked figures omitted; see "Bayesian Statistics for Robot Localization – Detailed Notes" below for the full treatment.]

Sensor Measurements to Safely Navigate an Environment: Sensor measurements refer to the real-time data collected by a robot or autonomous system's sensors. This data is used to perceive the environment, understand the surroundings, and make decisions that allow safe navigation through known or unknown environments.

Why Is It Important?
Without sensor measurements:
• Robots can't detect obstacles.
• They won’t know their position.
• They can't plan or follow safe paths.
• There is a high risk of collision or failure.

Complete Step-by-Step Process with Detailed Data Points


Step 1: Sensing – Data Collection
Purpose: Collect raw data from the environment using various sensors.
Common Sensors & Data Collected:
1. Ultrasonic Sensor
• What it does: Measures how far an object is using sound waves.
• Used for: Detecting nearby walls or obstacles.
• Data given: Distance in cm or meters.

2. LiDAR (Light Detection and Ranging)


• What it does: Shoots laser beams and measures how long they take to bounce back to
create a 2D or 3D map.
• Used for: Building detailed maps and spotting obstacles.
• Data given: Distance and angle as (x, y, z) coordinates.

3. Infrared (IR) Sensor


• What it does: Uses infrared light to detect if something is close.
• Used for: Short-range obstacle detection.
• Data given: Simple on/off (object is there or not) or distance.

4. Camera
• What it does: Takes pictures or video to “see” the environment.
• Used for: Detecting lanes, signs, people, and objects.
• Data given: Color images (RGB) or depth images.

5. IMU (Inertial Measurement Unit)


• What it does: Tracks motion—how fast the robot moves or turns.
• Used for: Balancing and knowing how the robot is moving.
• Data given: Acceleration in m/s² and rotation speed in rad/s.

6. GPS (Global Positioning System)


• What it does: Finds the robot’s location using satellites.
• Used for: Outdoor navigation and global positioning.
• Data given: Latitude, longitude, and altitude.

7. Encoders
• What they do: Count how many times the wheels have turned.
• Used for: Measuring distance moved or speed.
• Data given: Wheel turns in ticks or degrees.

Step 2: Preprocessing – Noise Reduction
Purpose: Clean and prepare raw sensor data for analysis.
Techniques:
1. Kalman Filter (for GPS, IMU): It’s like a smart guesser. It takes noisy or shaky data from
sensors and estimates the most likely position of the robot (see the code sketch at the end of this step).
Used for: Making GPS or IMU readings more accurate by mixing current data with past data.

2. Median/Mean Filter (for Camera or LiDAR): These filters help remove wrong or strange
values (like a sudden big jump) from the sensor data to make it smoother. Used for: Getting
cleaner and more reliable images or distance measurements.

3. Sensor Calibration: It’s like tuning a musical instrument — adjusting the sensor so it gives
correct readings. Used for: Fixing built-in errors in sensors like blurry vision in cameras or
wrong distances in LiDAR.
Output:
• Filtered distance values
• Smoothed image data
• Corrected position estimates
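To make the Kalman filter idea from this step concrete, here is a minimal one-dimensional sketch in Python. The noise values and variable names are illustrative assumptions, not tuned parameters:

# Minimal 1-D Kalman-style filter: fuse a noisy sensor stream into a
# smoother estimate. All numeric values are illustrative.
def kalman_1d(measurements, process_var=1e-3, sensor_var=0.25):
    estimate, variance = measurements[0], 1.0     # initial belief
    smoothed = []
    for z in measurements:
        variance += process_var                   # predict: uncertainty grows
        gain = variance / (variance + sensor_var) # Kalman gain
        estimate += gain * (z - estimate)         # update: blend in measurement
        variance *= (1 - gain)                    # confidence improves
        smoothed.append(estimate)
    return smoothed

noisy_gps = [10.2, 9.7, 10.5, 9.9, 10.1]          # noisy 1-D positions (meters)
print(kalman_1d(noisy_gps))

Mixing the prediction with each new reading in proportion to their uncertainties is exactly the "smart guesser" behavior described above.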

Step 3: Localization – Determine Robot's Position


Purpose:
Estimate the robot’s position and orientation (pose) within a map or space.
Techniques:

1. Dead Reckoning
The robot guesses its current position based on how much its wheels have moved (using
encoders) and how it’s tilted or turned (using IMU).

Problem: Small mistakes add up over time, so the robot’s location becomes less accurate the
more it moves.

2. GPS-based Localization (for outdoor navigation)


The robot uses GPS satellites to find its exact position outdoors.
Use: It tells the robot where it is in terms of latitude and longitude, like on a map.

3. SLAM (Simultaneous Localization and Mapping)


The robot makes a map of a new place while figuring out where it is inside that place.
Use: Very helpful when there's no GPS, like indoors or in unknown areas.

4. Bayesian Filters (e.g., Particle Filter, EKF)


These are smart guessing tools that help the robot estimate its position even when the sensor data
is uncertain or noisy.
Use: They mix past movements + current sensor data to guess the most likely location.
o Particle Filter: Uses many guesses (particles) to track location.
o EKF (Extended Kalman Filter): Uses math to make smoother and smarter guesses.

Data Points:
• Current pose (x, y, θ)
• Previous pose
• Odometry readings : Odometry readings tell us how far and in what direction a robot or
vehicle has moved, based on data from its wheels or motion sensors.
Example: If a robot's wheel turns enough to move forward by 2 meters and turns slightly to the
right, the odometry reading will show that the robot moved 2 meters forward and rotated a bit.
• GPS coordinates
• Sensor observations
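The odometry example above can be written as a small dead-reckoning update. This is a minimal sketch assuming a simple noise-free motion model; the function name is a placeholder:

import math

# Dead reckoning: update the pose (x, y, theta) from one odometry reading.
def dead_reckon(pose, distance, dtheta):
    x, y, theta = pose
    theta += dtheta                     # apply the measured rotation
    x += distance * math.cos(theta)     # move along the new heading
    y += distance * math.sin(theta)
    return (x, y, theta)

pose = (0.0, 0.0, 0.0)                  # start at the origin, facing +x
pose = dead_reckon(pose, 2.0, -0.1)     # 2 m forward with a slight right turn
print(pose)                             # encoder/IMU errors would accumulate here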

Step 4: Mapping – Build Environment Representation


Purpose:
Create a digital map for the robot to understand its surroundings.

Types of Maps:
• Occupancy Grid Map: A flat map that shows where you can walk and where things are
blocking the way.
• Point Cloud Map: A bunch of dots in 3D that show the shape of things around you.
• Topological Map: A map made of spots connected by paths, like a subway map.
• Visual Map: A map made from pictures with special points to help recognize places.

Input Data:
• Sensor measurements (LiDAR, Camera)
• Robot pose
• Landmark detection
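As a small illustration of the occupancy grid map listed above, the sketch below marks the cell hit by a single range reading. The grid size, cell size, and robot pose are made-up values:

import math

CELL = 0.1                                     # cell size in meters (assumption)
grid = [[0] * 50 for _ in range(50)]           # 0 = free/unknown, 1 = occupied

def mark_hit(grid, pose, dist, angle):
    x0, y0, theta = pose
    hx = x0 + dist * math.cos(theta + angle)   # convert range/bearing reading
    hy = y0 + dist * math.sin(theta + angle)   # to world coordinates
    grid[int(hy / CELL)][int(hx / CELL)] = 1   # mark that cell occupied

mark_hit(grid, pose=(2.5, 2.5, 0.0), dist=1.2, angle=0.3)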

Step 5: Obstacle Detection & Avoidance


Purpose: Identify obstacles in the robot’s path and avoid collisions.
Methods:
• Distance thresholding (stop if object within a certain range)
• Bounding box detection (vision or LiDAR-based)
Output:
• Obstacle coordinates
• Obstacle type or size (if vision-based)
• Danger zones or restricted areas
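The distance-thresholding method above is simple enough to show directly. A minimal sketch, assuming a hypothetical 0.5 m safety threshold and (angle, distance) range readings:

SAFE_DISTANCE = 0.5     # meters; illustrative threshold

def find_obstacles(scan):
    # scan: list of (angle_rad, distance_m) readings from a range sensor
    return [(a, d) for a, d in scan if d < SAFE_DISTANCE]

danger = find_obstacles([(0.0, 1.8), (0.3, 0.4), (-0.3, 2.1)])
if danger:
    print("STOP: obstacle inside safety range:", danger)   # the (0.3, 0.4) reading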

Step 6: Path Planning – Find a Safe Route



Purpose: Compute a safe and efficient path from current position to goal.
Path Planning Algorithm: Dijkstra's algorithm (see the code sketch at the end of this step)
Input:
• Current pose
• Goal location
• Map
• Obstacle positions
Output:
• Waypoints (x, y) for navigation
• Velocity commands for motors
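Here is a compact sketch of Dijkstra's algorithm on a toy graph. The node names and edge costs are made up for illustration; a real planner would run this over the map's grid cells or waypoints:

import heapq

graph = {                                  # illustrative waypoint graph
    "start": {"A": 1.0, "B": 4.0},
    "A": {"B": 2.0, "goal": 5.0},
    "B": {"goal": 1.0},
    "goal": {},
}

def dijkstra(graph, start, goal):
    frontier = [(0.0, start, [start])]     # (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path              # cheapest path reaches goal first
        if node in visited:
            continue
        visited.add(node)
        for nxt, step in graph[node].items():
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return float("inf"), []

print(dijkstra(graph, "start", "goal"))    # (4.0, ['start', 'A', 'B', 'goal'])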

Step 7: Motion Control – Execute the Movement


Purpose: Move the robot along the planned path while adjusting to sensor feedback.
Control Techniques:
• PID Controller: For precise turning and speed control
• Trajectory Following: Maintain orientation and speed while following path
• Reactive Control: Immediate response to new sensor data (e.g., sudden obstacle)
Data Required:
• Target waypoints
• Real-time sensor data
• Robot’s current position & orientation
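As a sketch of the PID controller mentioned above, here is a minimal implementation; the gains are illustrative, not tuned values:

# Minimal PID controller, e.g., for correcting heading error.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt                   # accumulated error (I)
        derivative = (error - self.prev_error) / dt   # rate of change (D)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

heading_pid = PID(kp=1.2, ki=0.01, kd=0.2)            # illustrative gains
print(heading_pid.update(error=0.15, dt=0.05))        # turn-rate command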

Sensor Fusion (Across All Steps)


Combining data from multiple sensors to make better decisions.
Examples:
• GPS + IMU → for accurate outdoor localization
• LiDAR + Camera → for object detection and depth estimation
• Encoder + IMU → for indoor motion tracking
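One common way to fuse two such estimates is to treat each as a Gaussian and combine them by inverse-variance weighting. A minimal sketch with made-up numbers:

# Fuse two Gaussian position estimates (e.g., GPS and odometry).
def fuse(mu1, var1, mu2, var2):
    fused_mu = (mu1 * var2 + mu2 * var1) / (var1 + var2)
    fused_var = (var1 * var2) / (var1 + var2)   # smaller than either input
    return fused_mu, fused_var

gps = (10.3, 4.0)        # (mean position in m, variance)
odometry = (10.0, 1.0)   # more precise here, so it gets more weight
print(fuse(*gps, *odometry))   # fused mean ~10.06, variance 0.8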

Important Considerations for Safe Navigation

• Sensor Noise: Can lead to incorrect measurements.
• Latency: Delays in data may cause wrong actions.
• Occlusion: Some sensors can't see through objects.
• Lighting/Weather: Affects camera and LiDAR performance.
• Redundancy: Backup sensors are needed for safety.
• Real-time Processing: Data must be processed quickly for quick reactions.

Data Flow
Sensors → Raw Data → Filtered Data → Localization → Mapping → Obstacle Detection →
Path Planning → Motion Control → Safe Movement

Gaussian Uncertainty
Gaussian Uncertainty refers to a way of modeling uncertainty in measurements, predictions, or
estimations using the Gaussian (Normal) distribution. It is one of the most commonly used
forms of uncertainty modeling in science, engineering, statistics, and machine learning because
of its mathematical simplicity and real-world applicability.

What is Uncertainty?
Uncertainty means how unsure we are about a certain value — it can be in:
• A sensor measurement (like a thermometer),
• A model prediction (like predicting a patient’s disease risk),
• Or any estimated quantity.
We usually describe this uncertainty using probability distributions.

What is a Gaussian Distribution?


A Gaussian distribution (also called the normal distribution) is a bell-shaped curve defined by:
• Mean (μ): The central value (expected value)
• Standard deviation (σ): How spread out the values are
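Written out, the probability density function is f(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²)).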

Gaussian Uncertainty in Detail


When we say uncertainty is Gaussian, we assume:
• The uncertain quantity (like an error or prediction) is distributed normally.
• Most likely value is the mean (μ), but it could be higher or lower with a certain
probability.
• The spread of uncertainty is determined by the standard deviation (σ).
Example:
If a sensor measures temperature as 25°C ± 2°C, and we model this uncertainty as Gaussian:
• The mean (μ) = 25°C
• Standard deviation (σ) = 2°C
• There is a 68% probability the actual value lies between 23°C and 27°C (i.e., μ ± σ)
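This example can be checked in Python with SciPy's normal distribution (assuming SciPy is available):

from scipy.stats import norm

mu, sigma = 25.0, 2.0                 # temperature reading modeled as Gaussian
p = norm.cdf(27, mu, sigma) - norm.cdf(23, mu, sigma)
print(round(p, 3))                    # ~0.683, i.e., about a 68% chance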

Uses of Gaussian Uncertainty

• Physics: Measurement errors (e.g., length, mass) are modeled as Gaussian noise.
• Machine Learning: Bayesian models and Gaussian Processes estimate prediction uncertainty.
• Healthcare: Uncertainty in diagnosis or biomarker readings (e.g., MRI signal variations).
• Robotics: Sensor fusion and motion estimation involve Gaussian noise modeling.

Visual Representation

• ~68% area lies within μ ± σ


• ~95% within μ ± 2σ
• ~99.7% within μ ± 3σ

Why Gaussian?
• It's easy to compute with: simple math for mean, variance, confidence intervals.
• Works well when data has a symmetric distribution and few outliers.

Limitations
• Assumes symmetry and no outliers
• May not fit non-Gaussian real-world uncertainties (e.g., skewed distributions)
• In deep learning, more advanced methods (like mixture density networks) may be used when the Gaussian assumption is weak.

Histogram Filter for Robot Localization:


Imagine a robot trying to figure out where it is inside a building or on a path. It doesn't have a
GPS, so it has to guess its position based on what it knows about:
• How it moves (it tries to move but might not be perfect),
• What it senses around (like nearby landmarks).

Since the robot is not exactly sure where it is, it keeps a list of possibilities, each with a certain
chance. This list is called a histogram — basically a set of bins, each showing how likely the
robot thinks it is in that place.

In short: The Histogram Filter is like a guessing game where the robot keeps track of all
possible places it might be, moves those guesses according to its movements, and then updates
the guesses based on what it senses. Over time, it becomes very good at guessing its actual
location!

How does it work?


The Histogram Filter works in two main steps, repeated over and over:
1. Prediction (Motion update):
When the robot moves, it updates its belief about where it could be. For example, if it
was probably in place 3, and it moved one step forward, now it thinks it’s probably near
place 4. But because movement can be noisy, it keeps some chance it stayed in place or
moved a little differently.
2. Correction (Measurement update):
The robot uses its sensor (like a camera or distance sensor) to see the environment. If it
senses a landmark near place 4, it increases the chance it’s actually near place 4. If the
sensor says it’s near a different landmark, it adjusts its belief accordingly.

What happens over time?


• The robot starts not knowing where it is — it thinks it could be anywhere (equal
chances).
• It moves and senses repeatedly.
• Each time, it updates its belief based on where it moved and what it sensed.
• Gradually, the robot’s belief narrows down, becoming more confident about its real
position.

Why is this useful?


• Robots can localize themselves even with noisy sensors and imperfect movement.
• The Histogram Filter is simple and works well for environments that can be divided into
clear positions (like rooms, grid cells, or steps).

A histogram filter graph of this process shows:


• Blue bars: Initial uniform belief.
• Orange bars: Belief after the robot moves (prediction step).
• Green bars: Belief after sensing a landmark near position 4 (correction step).

This visualization clearly shows how the robot updates its belief of where it might be as it moves
and senses the environment.
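Since this unit calls for a histogram filter in Python, here is a minimal one-dimensional version of the two steps described above. The world layout, motion noise, and sensor model are illustrative assumptions:

world = ['door', 'wall', 'wall', 'door', 'wall']    # landmark in each cell
belief = [0.2] * 5                                  # start: uniform belief

def predict(belief, p_move=0.8, p_stay=0.2):
    # Motion update: shift belief one cell right (cyclic world),
    # keeping some probability of having stayed put.
    n = len(belief)
    return [p_move * belief[(i - 1) % n] + p_stay * belief[i] for i in range(n)]

def correct(belief, measurement, p_hit=0.9, p_miss=0.1):
    # Measurement update: reweight each cell by sensor agreement, then normalize.
    weighted = [b * (p_hit if world[i] == measurement else p_miss)
                for i, b in enumerate(belief)]
    total = sum(weighted)
    return [w / total for w in weighted]

belief = predict(belief)                # the robot moves one step
belief = correct(belief, 'door')        # the robot senses a door
print([round(b, 3) for b in belief])    # belief peaks at the door cells

Repeating these two calls as the robot moves and senses narrows the belief down to the robot's true position, exactly the behavior the bars in the graph describe.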

Bayesian Statistics for Robot Localization – Detailed Notes


Why Use Bayesian Statistics?
Bayesian statistics provides a mathematical framework to manage uncertainty and update
beliefs in light of new evidence.
Key Ideas:
• Belief about robot's location is modeled as a probability distribution.
• This belief is updated with each new sensor measurement using Bayes' theorem.
• The process is repeated to continually refine the robot’s estimated location.
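In symbols: P(location | measurement) = P(measurement | location) × P(location) / P(measurement), i.e., posterior ∝ likelihood × prior.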

Prior Distribution
The prior represents what the robot already knows or assumes about its location before any
measurement.
• If the robot is turned on with no clue, it assumes uniform probability across all possible
positions (e.g., all are equally likely).
• If the robot knows it was at a certain place previously, it may have non-uniform priors.
Example: If the robot could be at any of five locations L1–L5 with no prior knowledge, the prior is uniform: each location gets probability 0.20 (as in the table below).

Likelihood Function:
The likelihood is the probability of getting a particular sensor measurement if the robot were
at a specific location.
It reflects how reliable the sensors are and how distinctive the locations are.

Numerical Example: Bayesian Update Table

Location   Prior   Likelihood   Unnormalized Posterior   Normalized Posterior
L1         0.20    0.6          0.12                     0.20
L2         0.20    0.5          0.10                     0.17
L3         0.20    0.7          0.14                     0.23
L4         0.20    0.4          0.08                     0.13
L5         0.20    0.8          0.16                     0.27
Total      1.00    —            0.60                     1.00

(Unnormalized posterior = prior × likelihood; dividing each value by the total, 0.60, gives the normalized posterior.)
L5 has the highest updated belief (0.27), so the robot thinks it's most likely at L5.
These values would keep updating as the robot moves or takes more measurements.
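The table's arithmetic fits in a few lines of Python (values taken directly from the table):

priors      = [0.20] * 5                 # uniform prior over L1..L5
likelihoods = [0.6, 0.5, 0.7, 0.4, 0.8]  # sensor model from the table

unnormalized = [p * l for p, l in zip(priors, likelihoods)]
total = sum(unnormalized)                # 0.60
posterior = [u / total for u in unnormalized]

for i, p in enumerate(posterior, start=1):
    print(f"L{i}: {p:.2f}")              # L5 comes out highest (0.27)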

Used Techniques
Histogram Filter:
• Grid-based approach (like our L1–L5 example).
• Maintains a discrete probability for each cell.

Kalman Filter:
• For continuous state space (like x, y in meters).
• Assumes Gaussian distributions.
• Works well with linear systems and Gaussian noise.

Particle Filter (Monte Carlo Localization):


• Uses random samples ("particles") to represent the belief.
• Works well with non-linear, non-Gaussian environments.
• Each particle has a weight based on how well it matches measurements.

Advantages of Bayesian Approach in Robotics


• Handles uncertainty effectively.
• Makes use of all available information.
• Can combine prior knowledge and real-time data.
• Easily updated with new information.
• Forms the foundation for SLAM (Simultaneous Localization and Mapping).
