H7122 Autonomous Vehicles Lecture Notes

Lecture 1: Introduction

Learning outcomes: the main concepts, challenges and techniques of autonomous vehicles, with a focus on analysis, modelling, simulation and control

What is an autonomous car?

 A car that uses automation to perform the main transportation functions of a conventional car
 Car automation combines mechatronics, AI and advanced control

What are the benefits?


 Reduces energy consumption through optimal routing and driving
 Improves safety in cornering, pedestrian detection and collision avoidance
 Enables techniques such as platooning to further reduce energy consumption
Sensors
 Internal – measure the internal values of the car, including motor speed, wheel load and battery information
 Environmental – gather information from the environment, including distances
 Passive – receive energy from the environment – webcams, microphones and temperature sensors
 Active – emit energy and measure the reflected radiation – LiDAR, SONAR, RADAR

LiDAR (Light Detection and Ranging)


 Used to measure distances
 A laser emits a pulse of light; molecules and surfaces scatter the light, and the detector measures the arrival time and intensity of the returned signal
IMU (Inertial Measurement Unit)
 Optical gyros are used as they are more accurate and, unlike mechanical gyros, have no moving mechanical parts
 A laser beam is sent around a closed cylindrical path; rotation makes the travelling path of the beam shorter or longer
 The resulting phase shift of the beam is proportional to the angular rate
GPS (Global Positioning System)
 Comprises 6 equally spaced orbits roughly 12,000 miles above the Earth, with 5 satellites in each orbit
 A ground receiver can see at most 12 satellites, and calculates its position as longitude, latitude and altitude
 A minimum of 4 satellites is needed
 Satellites continuously broadcast their time and orbital position
 Distance to each satellite: d = c·Δt
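The distance relation d = c·Δt above can be sketched numerically; the travel time used below is an illustrative value, not real satellite data:

```python
# Sketch: satellite distance from signal travel time, d = c * delta_t.
# The 67.4 ms travel time below is illustrative, not real satellite data.
C = 299_792_458.0  # speed of light in m/s

def pseudo_range(delta_t: float) -> float:
    """Distance to a satellite given the signal travel time in seconds."""
    return C * delta_t

# A satellite roughly 20,200 km up gives a travel time of about 67 ms:
d = pseudo_range(0.0674)  # about 2.02e7 m
```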
RADAR (Radio Detection And Ranging)
 A radio signal is transmitted and bounced off an object
 The reflected signal is received, and the direction and velocity of the target are detected using the Doppler effect:

Δλ = λ − λ₀ = (λ₀ v)/c

where
Δλ = wavelength shift
λ = measured wavelength
λ₀ = emitted wavelength
c = speed of light
v = line-of-sight velocity
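Rearranging the Doppler relation gives the target's line-of-sight velocity; a minimal sketch, with made-up wavelength values:

```python
# Sketch: line-of-sight velocity from the Doppler relation above,
# v = c * (lambda - lambda_0) / lambda_0. Wavelength values are made up.
C = 299_792_458.0  # speed of light in m/s

def los_velocity(lam_measured: float, lam_emitted: float) -> float:
    """Line-of-sight velocity; positive when the target is receding."""
    return C * (lam_measured - lam_emitted) / lam_emitted

# A tiny fractional shift of 1e-7 corresponds to roughly 30 m/s:
v = los_velocity(1.0000001, 1.0)
```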
Wheel speed sensor (encoder)
 Measure the position or speed of the wheels or steering – optical or magnetic encoders
 Resolution depends on the number of teeth – could also be used to detect damage
Actuators
Steering-by-wire

Throttle-by-wire

Brake-by-wire

Planning Algorithm

 Global routing – the fastest and safest route from the initial position to the destination
 Behaviour reasoning – decides the overall behaviour of the vehicle
Control algorithms
 Should be accurate for safety and should be robust – this requires optimal control and model-predictive control

Lecture 2: Kinematics and Dynamics of Unicycle Robots


 Continuous-time signal – varies continuously with time but does not have to be a continuous function of time; if the input and output signals are continuous-time signals, the system is a continuous-time system
 Discrete-time signal – x(k), defined only at a sequence of discrete time values

Linear:

T(a x₁ + b x₂) = a T(x₁) + b T(x₂) = a y₁ + b y₂


Time-invariant:

T(x(t − τ)) = y(t − τ)

Vehicle axis system


 Denoted by x-y-z and is fixed on the vehicle and moves with it
Earth-fixed Axis system
 Denoted by X-Y-Z and are defined in terms of a fixed reference point on the ground
Mechanics, kinematics and dynamics
 Mechanics – the action of forces on material objects with a mass (F = ma)
 Kinematics – the branch of mechanics describing the motion of objects (position, velocity and acceleration) while ignoring the forces involved (rω = v)
 Dynamics – the time-varying phenomena of the combined mechanics and kinematics

Modelling of QBot2

Moment:

M_ψ = (d/2)(F_R − F_L)

Speed of the left and right motors:

v_L = (r_ICC − d/2) θ̇_wc
v_R = (r_ICC + d/2) θ̇_wc

v_R + v_L = 2 r_ICC θ̇_wc
v_c = r_ICC θ̇
v_c = ½ (v_L + v_R)

Yaw:

ψ̇_R d = v_R
ψ̇_L d = v_L
ψ̇ = (1/d)(v_R − v_L)
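A minimal sketch of these differential-drive relations, v_c = (v_R + v_L)/2 and ψ̇ = (v_R − v_L)/d, integrated with simple Euler steps; the wheel separation d and the speeds below are illustrative values, not QBot2 specifications:

```python
import math

# Sketch: differential-drive (unicycle) kinematics from the relations
# above, integrated with a simple Euler step. Parameter values are
# illustrative only.

def step(x, y, psi, v_l, v_r, d, dt):
    """Advance the pose (x, y, psi) by one time step dt."""
    v_c = 0.5 * (v_r + v_l)      # forward speed of the chassis centre
    psi_dot = (v_r - v_l) / d    # yaw rate from the wheel-speed difference
    x += v_c * math.cos(psi) * dt
    y += v_c * math.sin(psi) * dt
    psi += psi_dot * dt
    return x, y, psi

# Equal wheel speeds give straight-line motion with no yaw:
pose = step(0.0, 0.0, 0.0, v_l=0.2, v_r=0.2, d=0.235, dt=1.0)
```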

Lecture 3: Mapping and localisation


Localisation

 Localisation is the problem of identifying the vehicle's pose (position and orientation) – global localisation normally uses GPS
 GPS accuracy is 10–30 m and is significantly affected by satellite signal conditions
 The measurements of different sensors are combined to improve the localisation accuracy to within 30 cm
 The problem with odometric localisation is that it relies on integration – the error builds up the longer you integrate
 Therefore, more sophisticated algorithms are needed to take this error into account

Mapping

 Mapping is the problem of the robot building a map of its environment – perceiving
 AVs use both off-line and continuously generated maps
 Local maps are generated by LiDAR or cameras
Simultaneous Localisation and Mapping (SLAM)
 Allows the AV to detect its pose
 Builds a map of the environment
 It uses probability theory

Probability Recap
P(A ∧ B) = P(A) P(B) if A and B are independent

P(A ∨ B) = P(A) + P(B) if A and B are mutually exclusive

Conditional probability – the probability of A given that B has already happened.
If a quantity is uncertain, it is modelled as a random variable

 The majority of uncertainties can be modelled as a ‘Normal distribution’


 The central limit theorem: the sum of a large number of random variables is approximately
normal
Bayes’ Rule
P(A|B) = P(B|A) P(A) / P(B)

P(A) is the prior – our belief before the new evidence
P(A|B) is the posterior – our updated belief after observing B
P(B|A) is the likelihood of the evidence
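A numeric sketch of the rule; the prior, likelihood and evidence values below are made up for illustration:

```python
# Numeric sketch of Bayes' rule, P(A|B) = P(B|A) * P(A) / P(B).
# The probability values below are made up.

def bayes(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """Posterior P(A|B) from the likelihood, prior and evidence."""
    return p_b_given_a * p_a / p_b

# Prior 1%, likelihood 90%, evidence 5% -> posterior 18%:
posterior = bayes(0.9, 0.01, 0.05)
```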

How are probability and Bayes’ rule related to mapping?

 In conditional probability, P(M|Z) is the probability of M occurring given that we have observed Z; in mapping this reads:
“The probability that the map M is correct given that we have a sensor reading Z”.
 So, we use Bayes’ rule to update the map based on our observations.

Algorithm for SLAM – Occupancy Grid Mapping

 Partition the area around the vehicle into a grid of fixed-size cells:
 0 – empty
 1 – occupied
 The cells then use probability: work out the probability of a generated map m that matches the positions of the robot x_{1:k} and the corresponding measurements z_{1:k}
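A common way to apply Bayes' rule per cell is the log-odds formulation; a minimal sketch, where the sensor-model probabilities (0.7 and 0.4) are assumed values for illustration:

```python
import math

# Sketch: per-cell Bayes update for an occupancy grid in log-odds form.
# The sensor-model probabilities below are assumed values.

def logit(p: float) -> float:
    return math.log(p / (1.0 - p))

def update_cell(l_prior: float, hit: bool) -> float:
    """Add the sensor's log-odds evidence to the cell's current log-odds."""
    p_hit = 0.7   # assumed P(occupied | beam endpoint landed in the cell)
    p_miss = 0.4  # assumed P(occupied | beam passed through the cell)
    return l_prior + logit(p_hit if hit else p_miss)

def probability(l: float) -> float:
    """Convert log-odds back into an occupancy probability in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Starting from an unknown cell (p = 0.5, log-odds 0), two hits raise
# the occupancy probability well above 0.5:
l = update_cell(update_cell(0.0, True), True)
p = probability(l)
```

Working in log-odds turns the per-observation Bayes update into a simple addition, which is why most occupancy-grid implementations use it.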

Lecture 4: Vision and Vision-based motion planning


Visualisation – perceiving the surrounding environment of the robot using inexpensive vision-depth cameras

Image processing – directly processing images to generate new ones or to extract attributes such as lines or colours

 Visualising is used for mapping and motion planning


Why do we need processing techniques?
 Image processing is used to create a clear judgement about the state of the robot and the
surrounding environment by only extracting the required information and removing
unnecessary information from the original image (e.g. lines are needed for lane tracking)
 The images are then used for path planning – next waypoint
Different techniques

Technique: Image thresholding
 Filters out the pixels with values beyond the thresholds (e.g. when only white or black is wanted – see the histogram)
 For more info – see the robotics lecture
 Transfer functions:

Binary thresholding:
T(f(x, y)) = 255 if th1 ≤ f(x, y) ≤ th2, otherwise 0

Threshold to zero:
T(f(x, y)) = f(x, y) if th1 ≤ f(x, y) ≤ th2, otherwise 0

Truncate to a value:
T(f(x, y)) = th if f(x, y) > th, otherwise f(x, y)
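The three transfer functions can be sketched on a toy greyscale image in plain NumPy (rather than any particular vision library); the pixel values and thresholds below are made up:

```python
import numpy as np

# Sketch of the three thresholding transfer functions above on a
# toy greyscale image. Pixel values and thresholds are made up.

def binary_threshold(img, th1, th2):
    """255 where th1 <= f(x, y) <= th2, otherwise 0."""
    return np.where((img >= th1) & (img <= th2), 255, 0)

def threshold_to_zero(img, th1, th2):
    """Keep f(x, y) inside the band, otherwise 0."""
    return np.where((img >= th1) & (img <= th2), img, 0)

def truncate(img, th):
    """Clip pixel values above th down to th."""
    return np.minimum(img, th)

img = np.array([[10, 120], [200, 250]])
mask = binary_threshold(img, 100, 220)  # [[0, 255], [255, 0]]
```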

Convolution
 A mathematical operation on two functions (f and g) that indicates how the shape of one function is changed by the other:
h(x, y) = f(x, y) ∗ g(x, y)
 This is used because edge and blob detection algorithms use linear filters:

(f ∗ g)(x) = Σ_{a = −∞}^{+∞} f(a) g(x − a)

 The filter is moved from minus infinity to plus infinity
 At each shift the two signals are multiplied together and summed
 Convolution in the time domain is equivalent to multiplication in the frequency domain (which is why many image processing operations are done in the frequency domain)
 Mirror g with respect to the y-axis and shift it from minus infinity towards f
 The convolution is the area under the intersected part, which increases and then stays constant
Edge detection
 Detects lines as connected pixels separating two regions
 It works by defining a filter function g(x, y) that is convolved with the image to reveal the edges
Sobel edge detection
 Two separate filters, one for the x-axis and one for the y-axis:

g_x(.) = [−1 0 1; −2 0 2; −1 0 1]
g_y(.) = [−1 −2 −1; 0 0 0; 1 2 1]

 To detect a vertical edge: if the LHS and RHS of the 3×3 window are the same, then when summed up they cancel out – NO EDGE DETECTED
 Choose 9 pixels and ignore the middle column; if the RHS sums to 0 but the LHS is (−1)(255) + (−2)(255) + (−1)(255), the result is a very negative number – an edge
 The filter responses are:
h_x(x, y) = f(x, y) ∗ g_x(.)
h_y(x, y) = f(x, y) ∗ g_y(.)
Then the overall gradient image of the original image is:
h(x, y) = √(h_x²(x, y) + h_y²(x, y))
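The Sobel gradient magnitude can be sketched with the two kernels above; the window sliding below is plain cross-correlation, which for these kernels flips the sign of the response relative to true convolution, but the magnitude h = √(h_x² + h_y²) is the same. The test image is made up:

```python
import numpy as np

# Sketch: Sobel gradient magnitude using the two kernels above,
# with a from-scratch 3x3 sliding window ('valid' output size).

GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
GY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def filter_valid(img, k):
    """Slide the 3x3 kernel over the image (cross-correlation)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def sobel_magnitude(img):
    """Overall gradient image h = sqrt(hx^2 + hy^2)."""
    hx = filter_valid(img, GX)
    hy = filter_valid(img, GY)
    return np.sqrt(hx ** 2 + hy ** 2)

# A vertical edge: dark left half (0), bright right half (255):
img = np.array([[0, 0, 255, 255]] * 3, dtype=float)
mag = sobel_magnitude(img)  # strong response along the edge
```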

Blob detection
 We want to find areas that have similar properties
 One of the main algorithms is the Laplacian of Gaussian (LoG)
 The image is first passed through a Gaussian filter (smoothing), which cancels out sudden changes (e.g. noise and abnormalities)
 Then take the second derivative (Laplacian) of the filtered image:

h(x, y) = ΔL (:= ∇·∇L) = ∂²L/∂x² + ∂²L/∂y²

 At some points you will then have a collection of points that are different from the others
 E.g. in the image, the blue points, when added up, have very similar values to each other and very different values from the area outside the boundary

Kalman Filter

x̂_k = K_k · z_k + (1 − K_k) · x̂_{k−1}

(current estimate = Kalman gain × measured value + (1 − Kalman gain) × previous estimate)

 We want to find x̂_k, the estimate of the signal x, for each consecutive k (at each time step)
 z_k is the measured value, but we do not know it perfectly; x̂_{k−1} is the estimate of the signal at the previous state
 The only unknown component is the Kalman gain, K_k, which is calculated for each consecutive state
 The Kalman filter works by finding the optimum averaging factor for each consecutive state, while also remembering information about previous states
Steps
1 – equations of a Kalman filter:

x_k = A x_{k−1} + B u_k + w_{k−1}    (1)
z_k = H x_k + v_k    (2)

 Each signal value x_k may be evaluated using the stochastic equation (1): any x_k is a linear combination of the previous value (A x_{k−1}) plus a control signal (B u_k) and a process noise (w_{k−1}); most of the time u_k doesn't exist
 Equation (2): any measurement value (with unknown accuracy) is a linear combination of the signal value and the measurement noise (both noises are considered to be Gaussian)
 A, B and H are matrices in the general form
 If we are sure our system fits the model (it normally does), then we have to estimate the mean and standard deviation of the noise functions w_{k−1} and v_k
 The better the estimation of the noise parameters, the better the estimates
2 – start the process

Time update (prediction):
x̂_k⁻ = A x̂_{k−1} + B u_k
P_k⁻ = A P_{k−1} Aᵀ + Q

Measurement update (correction):
K_k = P_k⁻ Hᵀ (H P_k⁻ Hᵀ + R)⁻¹
x̂_k = x̂_k⁻ + K_k (z_k − H x̂_k⁻)
P_k = (I − K_k H) P_k⁻

 We know A, B and H, but we need to determine R and Q
 To start the process, we need to know the estimates of x₀ and P₀
 We then iterate through the estimates, where the previous estimate is the input for the current state

Time Update (PREDICTION)
1. Project the next state ahead:
x̂_k⁻ = A x̂_{k−1} + B u_k
2. Project the error covariance ahead (a measure of uncertainty in the estimated state):
P_k⁻ = A P_{k−1} Aᵀ + Q

Measurement Update (CORRECTION)
1. Compute the Kalman gain:
K_k = P_k⁻ Hᵀ (H P_k⁻ Hᵀ + R)⁻¹
2. Update the estimate via z_k:
x̂_k = x̂_k⁻ + K_k (z_k − H x̂_k⁻)
3. Update the error covariance:
P_k = (I − K_k H) P_k⁻

Initial estimates are the input for k = 1; the outputs at k become the input for k + 1.

 x̂_k⁻ → the prior estimate, which is, in a way, the rough estimate before the measurement-update correction
 P_k⁻ → the prior error covariance, used in the measurement update equations

 In the measurement update stage we find x̂_k, the estimate of x at time k (THE THING WE NEED), and we find P_k, which is necessary for the estimate at k + 1 together with x̂_k
 The Kalman gain (K_k) is not needed for the next iteration step
 The values we evaluate at the measurement update stage are called posterior values
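The predict/correct cycle can be sketched as a minimal scalar filter, taking A = H = 1 and no control input (B u_k = 0); the Q, R and measurement values below are illustrative, not real data:

```python
# Minimal scalar Kalman filter sketch following the predict/correct
# equations above, with A = H = 1 and no control input (B*u_k = 0).
# Q, R and the measurement list are illustrative values, not real data.

def kalman_1d(measurements, x0=0.0, p0=1.0, q=1e-4, r=0.1):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Time update (prediction): x_k^- = x_{k-1}, P_k^- = P_{k-1} + Q
        x_prior, p_prior = x, p + q
        # Measurement update (correction)
        k = p_prior / (p_prior + r)        # Kalman gain
        x = x_prior + k * (z - x_prior)    # posterior estimate
        p = (1 - k) * p_prior              # posterior error covariance
        estimates.append(x)
    return estimates

# Noisy readings of a constant true value of 1.0 converge towards 1.0:
est = kalman_1d([1.1, 0.9, 1.05, 0.95, 1.0])
```

Note how only x and p are carried between iterations: the gain K_k is recomputed each step and discarded, matching the bullets above.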
Example
 You want to know the position of the car (y_k) but you only have the GPS reading and the throttle input (u_k)
 This can be written with a state equation; the state is measured directly, so C = 1 and x_k = [position]
 The GPS reading is noisy – v_k ~ N(0, R), a Gaussian distribution with zero mean and covariance R
 The noise takes values mostly around 0 and less often further away – Gaussian
 Since we have a single-output system, the covariance is scalar and equal to the variance of the measurement noise
 Wind and changes in car velocity make up the process noise w_k
 We can estimate the position using a mathematical model, but x_k is uncertain because of the process noise
 The Kalman filter combines y_k and x̂_k to find the best estimate of the car's position – by multiplying the prediction and measurement probability functions together and scaling the result
 The Kalman filter is a state observer, but it uses a stochastic algorithm