Robotic localization: Robot localization means helping a robot understand where it is in a room
or any area. It uses sensors like cameras, GPS, or LIDAR to find its position and direction.
This is very important because the robot needs to know where it is to move correctly, avoid
obstacles, and reach its goal.
Without localization, the robot can get lost, stuck, or go in the wrong direction. That’s why
localization is a key part of making robots work smartly and safely.
Example Explanation: This example shows how robots often estimate their position using
sensor data — but their guess may not be perfect. The goal of localization is to make this
estimation as accurate as possible.
• The robot is in a room with an obstacle.
• The red dot shows the robot’s actual position and orientation (where it really is).
• The green dot shows the estimated position, based on the robot’s sensors and
calculations.
• The green ellipse represents uncertainty, meaning the robot isn’t 100% sure of its exact
location.
UNIT -4
1. Encoder
• This block gives information about how much the robot has moved.
• It reads movements like wheel rotation or distance traveled.
• This movement data is passed to the Prediction of Position block.
3. Observation
• This block collects real-world data from the robot’s sensors like a camera, LIDAR, or
sonar.
• It looks at the robot’s surroundings and extracts useful features (e.g., walls, landmarks).
• This is part of Perception — the robot seeing its environment.
Output: Raw sensor data or extracted features.
4. Matching
• The robot now compares the predicted position with what it’s actually seeing through
its sensors.
• It checks if the sensor data “matches” the known map or expected surroundings.
• If the match is good (YES), the robot is likely in the right spot.
Input: Predicted position + Observation data
Output: Matched observations
6. Feedback Loop
• The updated position goes back to the Prediction of Position block.
• This loop repeats as the robot keeps moving and updating its location over time.
Label Explanations:
• Prediction of Position: Calculates where the robot might be (based on encoder + map)
• Predicted Position: The guessed location before checking with real data
• Matched Observations: When sensor data agrees with predicted data, helping correct the position
In short, the robot moves, guesses its position, looks around, checks if its guess is correct,
and fixes its position. Then it repeats this again and again while it moves.
This way, the robot always has a good idea of where it is, even if its movements are not perfect.
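The move-predict-match-correct cycle described above can be sketched in a few lines of Python. Everything here is a toy illustration: the helper functions, the averaging correction, and the encoder/observation values are assumptions, not a real localization pipeline.

```python
# Hypothetical sketch of the predict-match-update localization loop.

def predict(pose, encoder_delta):
    # Move the previous pose by the displacement reported by the encoders.
    x, y = pose
    dx, dy = encoder_delta
    return (x + dx, y + dy)

def matches(predicted_pose, observation, tolerance=0.5):
    # Crude check: does the observation agree with the prediction?
    px, py = predicted_pose
    ox, oy = observation
    return abs(px - ox) <= tolerance and abs(py - oy) <= tolerance

def update(predicted_pose, observation):
    # Correct the prediction by averaging it with the observation.
    px, py = predicted_pose
    ox, oy = observation
    return ((px + ox) / 2, (py + oy) / 2)

pose = (0.0, 0.0)
for encoder_delta, observation in [((1.0, 0.0), (1.1, 0.1)),
                                   ((1.0, 0.0), (2.0, 0.0))]:
    pose = predict(pose, encoder_delta)
    if matches(pose, observation):
        pose = update(pose, observation)  # feedback: corrected pose is reused
```

Each pass through the loop is one turn of the feedback loop from the diagram: the corrected pose feeds the next prediction.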
Localization – Determine Robot's Position
Purpose:
Estimate the robot’s position and orientation (pose) within a map or space.
Techniques:
1. Dead Reckoning
The robot guesses its current position based on how much its wheels have moved (using
encoders) and how it’s tilted or turned (using IMU).
Problem: Small mistakes add up over time, so the robot’s location becomes less accurate the
more it moves.
Data Points:
• Current pose (x, y, θ)
• Previous pose
• Odometry readings: Odometry readings tell us how far and in what direction a robot or
vehicle has moved, based on data from its wheels or motion sensors.
Example: If a robot's wheel turns enough to move forward by 2 meters and turns slightly to the
right, the odometry reading will show that the robot moved 2 meters forward and rotated a bit.
• GPS coordinates
• Sensor observations
Sensor observations are real-time data collected by a robot or autonomous system's sensors.
This data is used to perceive the environment, understand the surroundings, and make
decisions that allow safe navigation through known or unknown environments.
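The dead-reckoning update described above can be sketched as a single pose-update function. This is a simplified illustration, not a standard API; the 2-metre move and small right turn mirror the odometry example in the notes.

```python
import math

def dead_reckon(pose, distance, delta_theta):
    """Update the pose (x, y, theta) from one odometry reading.

    distance    -- metres travelled, from wheel encoders
    delta_theta -- change in heading in radians, from encoders/IMU
    """
    x, y, theta = pose
    x += distance * math.cos(theta)   # advance along the current heading
    y += distance * math.sin(theta)
    theta += delta_theta              # then apply the measured turn
    return (x, y, theta)

# Move 2 m forward from the origin, then turn slightly right.
pose = dead_reckon((0.0, 0.0, 0.0), 2.0, -0.1)
```

Because each update builds on the previous pose, any small encoder error is carried forward, which is exactly why dead reckoning drifts over time.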
Why Is It Important?
Without sensor data:
• Robots can't detect obstacles.
• They won't know their position.
• They can't plan or follow safe paths.
• There is a high risk of collision or failure.
4. Camera
7. Encoders
• What it does: Counts how many times the wheels have turned.
• Used for: Measuring distance moved or speed.
• Data given: Wheel turns in ticks or degrees.
Step 2: Preprocessing – Noise Reduction: Clean and prepare raw sensor data for analysis.
Techniques:
1. Kalman Filter (for GPS, IMU): It’s like a smart guesser. It takes noisy or shaky data from
sensors and guesses the most likely accurate position of the robot.
Used for: Making GPS or IMU readings more accurate by mixing current data with past data.
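The "smart guesser" idea can be shown with a minimal one-dimensional Kalman filter. This is a sketch under simplifying assumptions: Gaussian noise, a scalar state, and illustrative variance values.

```python
def kalman_1d(mean, var, z, z_var, u=0.0, u_var=0.0):
    # Predict: shift the belief by the motion u, growing the uncertainty.
    mean, var = mean + u, var + u_var
    # Update: blend the prediction with measurement z, weighted by variances.
    k = var / (var + z_var)          # Kalman gain: how much to trust z
    mean = mean + k * (z - mean)
    var = (1 - k) * var
    return mean, var

# Noisy GPS-like readings scattered around the true position 5.0.
mean, var = 0.0, 1000.0              # start almost completely unsure
for z in [5.2, 4.9, 5.1]:
    mean, var = kalman_1d(mean, var, z, z_var=1.0)
```

After a few readings the estimate settles near 5.0 and the variance shrinks, which is the "mixing current data with past data" the notes describe.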
2. Median/Mean Filter (for Camera or LiDAR): These filters help remove wrong or strange
values (like a sudden big jump) from the sensor data to make it smoother. Used for: Getting
cleaner and more reliable images or distance measurements.
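A median filter over a sliding window can be sketched as follows; the window size and the example LiDAR readings are illustrative.

```python
def median_filter(readings, window=3):
    # Replace each reading with the median of its neighbourhood,
    # suppressing isolated spikes without blurring the rest much.
    half = window // 2
    smoothed = []
    for i in range(len(readings)):
        lo = max(0, i - half)
        hi = min(len(readings), i + half + 1)
        neighbourhood = sorted(readings[lo:hi])
        smoothed.append(neighbourhood[len(neighbourhood) // 2])
    return smoothed

# A LiDAR trace with one spurious spike at 9.0; the spike disappears.
smoothed = median_filter([2.0, 2.1, 9.0, 2.2, 2.1])
```

The "sudden big jump" at 9.0 never makes it into the output, because a single outlier can never be the median of its window.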
3. Sensor Calibration: It’s like tuning a musical instrument — adjusting the sensor so it gives
correct readings. Used for: Fixing built-in errors in sensors like blurry vision in cameras or
wrong distances in LiDAR.
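Calibration is often applied as a simple correction function. The scale and offset coefficients below are made-up assumptions; in practice they come from comparing the sensor's readings against known reference distances.

```python
def calibrate(raw, scale=1.02, offset=-0.05):
    # Hypothetical linear correction: the sensor reads slightly short,
    # so stretch the reading and remove a constant bias.
    return scale * raw + offset

corrected = calibrate(1.00)   # a raw range reading of 1.00 m
```

This is the "tuning the instrument" step: the same correction is applied to every reading once the coefficients are known.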
Output:
• Filtered distance values
• Smoothed image data
• Corrected position estimates
Types of Maps:
• Occupancy Grid Map: A flat map that shows where you can walk and where things are
blocking the way.
• Point Cloud Map: A bunch of dots in 3D that show the shape of things around you.
• Topological Map: A map made of spots connected by paths, like a subway map.
• Visual Map: A map made from pictures with special points to help recognize places.
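As a small illustration of the first map type, an occupancy grid can be stored as a 2D array of free and occupied cells; the grid values here are arbitrary.

```python
# A tiny occupancy grid: 0 = free space, 1 = obstacle.
grid = [
    [0, 0, 0, 1],
    [0, 1, 0, 1],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]

def is_free(grid, x, y):
    # A cell is walkable if it lies inside the map and is not occupied.
    return 0 <= y < len(grid) and 0 <= x < len(grid[0]) and grid[y][x] == 0
```

The same walkable/blocked query is what a path planner asks over and over while searching the map.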
Input Data:
• Sensor measurements (LiDAR, Camera)
• Robot pose
• Landmark detection
Purpose: Compute a safe and efficient path from current position to goal.
Path Planning Algorithms: Dijkstra's algorithm
Input:
• Current pose
• Goal location
• Map
• Obstacle positions
Output:
• Waypoints (x, y) for navigation
• Velocity commands for motors
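A compact sketch of Dijkstra's algorithm on an occupancy grid, producing the waypoint list mentioned above. The grid and the start/goal cells are illustrative; turning waypoints into velocity commands is a separate motion-control step not shown here.

```python
import heapq

def dijkstra(grid, start, goal):
    """Shortest 4-connected path on a grid (0 = free, 1 = obstacle).

    Returns the list of (x, y) waypoints from start to goal, or None.
    """
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        d, (x, y) = heapq.heappop(queue)
        if (x, y) == goal:
            # Reconstruct the waypoint list by walking back through prev.
            path = [(x, y)]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1]
        if d > dist[(x, y)]:
            continue  # stale queue entry
        for nx, ny in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                nd = d + 1  # every step costs 1
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    prev[(nx, ny)] = (x, y)
                    heapq.heappush(queue, (nd, (nx, ny)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = dijkstra(grid, (0, 0), (0, 2))   # must detour around the wall
```

Because every step has the same cost here, Dijkstra's behaves like breadth-first search; with varying terrain costs the same code finds the cheapest route rather than the shortest.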
Data Flow
Sensors → Raw Data → Filtered Data → Localization → Mapping → Obstacle Detection →
Path Planning → Motion Control → Safe Movement
Gaussian Uncertainty
Gaussian Uncertainty refers to a way of modeling uncertainty in measurements, predictions, or
estimations using the Gaussian (Normal) distribution. It is one of the most commonly used
forms of uncertainty modeling in science, engineering, statistics, and machine learning because
of its mathematical simplicity and real-world applicability.
What is Uncertainty?
Uncertainty means how unsure we are about a certain value — it can be in:
• A sensor measurement (like a thermometer),
• A model prediction (like predicting a patient’s disease risk),
• Or any estimated quantity.
We usually describe this uncertainty using probability distributions.
Where it is used:
• Physics: Measurement errors (e.g., length, mass) are modeled as Gaussian noise
• Machine Learning: Bayesian models and Gaussian Processes estimate prediction uncertainty
• Robotics: Sensor fusion and motion estimation involve Gaussian noise modeling
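A small sketch of Gaussian uncertainty modeling: simulate a noisy range sensor, estimate the mean and standard deviation, and form an approximate 95% confidence interval. The noise level and sample count are arbitrary choices for illustration.

```python
import math
import random

# Model a noisy range sensor as the true value plus Gaussian noise.
random.seed(0)
true_range = 5.0
readings = [true_range + random.gauss(0.0, 0.1) for _ in range(1000)]

mean = sum(readings) / len(readings)
var = sum((r - mean) ** 2 for r in readings) / len(readings)
std = math.sqrt(var)

# About 95% of Gaussian probability mass lies within mean +/- 1.96 * std.
low, high = mean - 1.96 * std, mean + 1.96 * std
```

The pair (mean, std) is the entire uncertainty model, which is exactly the "mathematical simplicity" that makes the Gaussian so popular.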
Why Gaussian?
• It's easy to compute with: simple math for mean, variance, confidence intervals.
• Works well when data has a symmetric distribution and moderate outliers.
Limitations
• Assumes symmetry and no outliers
• May not fit non-Gaussian real-world uncertainties (e.g., skewed distributions)
• In deep learning, more advanced methods (like mixture density networks) may be used
when Gaussian assumption is weak.
Since the robot is not exactly sure where it is, it keeps a list of possibilities, each with a certain
chance. This list is called a histogram — basically a set of bins, each showing how likely the
robot thinks it is in that place.
In short: The Histogram Filter is like a guessing game where the robot keeps track of all
possible places it might be, moves those guesses according to its movements, and then updates
the guesses based on what it senses. Over time, it becomes very good at guessing its actual
location!
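The sense-and-move guessing game above can be sketched as a discrete Bayes (histogram) filter over a handful of cells. The likelihood values and the cyclic world are assumptions made for illustration.

```python
def normalize(belief):
    total = sum(belief)
    return [b / total for b in belief]

def sense(belief, likelihoods):
    # Scale each cell's probability by how well it explains the measurement.
    return normalize([b * l for b, l in zip(belief, likelihoods)])

def move(belief, step):
    # Shift the whole belief histogram by `step` cells (cyclic world).
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

# Five cells, no idea where we are: uniform belief.
belief = [0.2] * 5
# A measurement that fits cells 2 and 4 best (hypothetical likelihoods).
belief = sense(belief, [0.1, 0.1, 0.6, 0.1, 0.6])
# The robot moves one cell to the right; the belief moves with it.
belief = move(belief, 1)
```

After sensing, the probability mass piles up on the matching cells; after moving, those piles shift along with the robot, which is the whole filter in two operations.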
This visualization clearly shows how the robot updates its belief of where it might be as it moves
and senses the environment.
Prior Distribution
The prior represents what the robot already knows or assumes about its location before any
measurement.
• If the robot is turned on with no clue, it assumes uniform probability across all possible
positions (e.g., all are equally likely).
• If the robot knows it was at a certain place previously, it may have non-uniform priors.
Likelihood Function:
The likelihood is the probability of getting a particular sensor measurement if the robot were
at a specific location.
It reflects how reliable the sensors are and how distinctive the locations are.
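Combining the prior with the likelihood is a single multiply-and-normalize step. The numbers below are illustrative assumptions chosen so that one location ends up clearly most likely, not values taken from a real sensor model.

```python
# Five candidate locations L1..L5 with a uniform prior (no idea where we are).
prior = [0.2, 0.2, 0.2, 0.2, 0.2]
# How well each location explains the current sensor reading (assumed values).
likelihood = [0.20, 0.20, 0.20, 0.19, 0.21]

# Bayes update: posterior is proportional to prior times likelihood.
unnormalized = [p * l for p, l in zip(prior, likelihood)]
total = sum(unnormalized)
posterior = [u / total for u in unnormalized]
```

With these assumed values the last location (L5) gets the highest updated belief, 0.21, and each new measurement would repeat the same update on the fresh posterior.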
L5 has the highest updated belief (0.21), so the robot thinks it's most likely at L5.
These values would keep updating as the robot moves or takes more measurements.
Used Techniques
Histogram Filter:
• Grid-based approach (like our L1–L5 example).
• Maintains a discrete probability for each cell.
Kalman Filter:
• For continuous state space (like x, y in meters).
• Assumes Gaussian distributions.
• Works well with linear systems and Gaussian noise.