H7122 Autonomous Vehicles Lecture Notes
Lecture 1: Introduction
Learning outcomes: the main concepts, challenges and techniques of autonomous vehicles, with a focus
on analysis, modelling, simulation and control
An autonomous car uses automation to perform the main transportation capabilities of a conventional car
Car automation uses mechatronics, AI and advanced control
Throttle-by-wire
Brake-by-wire
Planning Algorithm
Global routing – fastest and safest way from the initial position to the destination
Behaviour reasoning – overall behaviour of the vehicle
Control algorithms
Should be accurate for safety and should be robust – this requires optimal control and model-predictive control
T(x(t − τ)) = y(t − τ)
Modelling of QBot2
Moment:
M_ψ = (d/2)(F_R − F_L)
Speed of left and right motor:
v_L = (r_ICC − d/2) θ̇(ω_c)
v_R = (r_ICC + d/2) θ̇(ω_c)
v_R + v_L = 2 r_ICC θ̇(ω_c)
v_c = r_ICC θ̇
v_c = (1/2)(v_L + v_R)
Yaw:
ψ̇ d = v_R − v_L
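The kinematic relations above can be sketched in code; the wheel separation d below is an illustrative value, not one given in the notes:

```python
import math

# Differential-drive kinematics sketch (d is a hypothetical wheel separation).
d = 0.235  # wheel separation in metres (illustrative value)

def body_velocities(v_L, v_R, d=d):
    """Wheel speeds -> chassis centre speed and yaw rate, from
    v_c = (v_L + v_R)/2 and psi_dot * d = v_R - v_L."""
    v_c = 0.5 * (v_L + v_R)
    psi_dot = (v_R - v_L) / d
    return v_c, psi_dot

def icc_radius(v_L, v_R, d=d):
    """Distance to the instantaneous centre of curvature (ICC), from
    v_L = (r_ICC - d/2) * theta_dot and v_R = (r_ICC + d/2) * theta_dot."""
    if v_R == v_L:
        return math.inf  # straight-line motion, no finite ICC
    return (d / 2) * (v_R + v_L) / (v_R - v_L)
```

For example, equal wheel speeds give zero yaw rate and an infinite ICC radius (straight-line motion).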
Localisation is the problem of identifying the vehicle's pose (position and orientation) ->
normally global localisation uses GPS
the accuracy of GPS is 10-30 m and is significantly affected by satellite signal conditions
The measurements of different sensors are combined to improve the localisation accuracy
to within 30 cm
The problem with odometric localisation is that it relies on integration – the error will
build up the longer you integrate
Therefore, more sophisticated algorithms are needed to take this error into account
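A quick sketch of why the integration error builds up: integrate a noisy speed measurement and compare it against the ground truth (the speed and noise level here are made-up numbers):

```python
import random

# Dead-reckoning sketch: odometry integrates speed, and therefore also
# integrates the measurement noise, so the position error grows over time.
random.seed(0)

dt, v_true = 0.1, 1.0          # time step (s) and true speed (m/s), hypothetical
x_true = x_odom = 0.0
for _ in range(1000):          # 100 s of driving
    v_meas = v_true + random.gauss(0.0, 0.05)  # noisy wheel-speed measurement
    x_true += v_true * dt
    x_odom += v_meas * dt      # the noise accumulates here
error = abs(x_odom - x_true)
print(f"position error after 100 s: {error:.3f} m")
```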
Mapping
Mapping is the problem of the robot figuring out a map of its environment – perceiving it
AVs use off-line and continuously generated maps
Local maps are generated by LiDAR or cameras
Simultaneous Localisation and Mapping (SLAM)
Allows the AV to estimate its pose
Builds a map of the environment
It uses probability theory
Probability Recap
P(A ∧ B) = P(A)·P(B) if A and B are independent
In conditional probability, P(M|Z) is the probability of M occurring given that we've observed Z, but in mapping:
“The probability that the map M is correct given that we have a sensor reading Z”.
So, we use Bayes' rule, P(M|Z) = P(Z|M)·P(M)/P(Z), to update the map based on our observations.
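A Bayes update for a single map cell might be sketched like this (the prior and the sensor-model probabilities are hypothetical values):

```python
# Bayes update for "cell M is occupied" given one sensor reading Z:
# P(M|Z) = P(Z|M) P(M) / P(Z), with P(Z) expanded by total probability.
def bayes_update(prior, p_z_given_m, p_z_given_not_m):
    evidence = p_z_given_m * prior + p_z_given_not_m * (1.0 - prior)
    return p_z_given_m * prior / evidence

p = 0.5                          # no prior knowledge about the cell
p = bayes_update(p, 0.9, 0.2)    # a reading that is likely when occupied
print(round(p, 3))               # belief that the cell is occupied increases
```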
Image processing – directly processing images to generate new ones, or to extract attributes
such as lines or colours
Technique: Image thresholding
Description: filters out the pixels with values beyond the thresholds (when we only want white or black, for example – histogram-based)
For more info – robo lecture
Transfer functions:
Binary thresholding:
T(f(x, y)) = 255 if th1 ≤ f(x, y) ≤ th2, otherwise 0
Threshold to zero (keep the pixel value):
T(f(x, y)) = f(x, y) if th1 ≤ f(x, y) ≤ th2, otherwise 0
Truncate to a value t:
T(f(x, y)) = t if th1 ≤ f(x, y) ≤ th2, otherwise 0
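The three transfer functions can be sketched with NumPy on a toy image (th1, th2 and the truncation value are hypothetical):

```python
import numpy as np

th1, th2 = 100, 200                     # hypothetical thresholds
f = np.array([[50, 120], [180, 240]])   # toy 2x2 "image"

in_band = (f >= th1) & (f <= th2)       # pixels inside [th1, th2]

binary   = np.where(in_band, 255, 0)    # binary thresholding
to_zero  = np.where(in_band, f, 0)      # keep in-band pixel values
truncate = np.where(in_band, th2, 0)    # truncate in-band pixels to a fixed value

print(binary.tolist())  # [[0, 255], [255, 0]]
```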
Convolution
Mathematical operation of two functions (f and g) that indicate how the shape of one function
is changed by the other function:
h ( x , y ) =f ( x , y )∗g ( x , y )
This is used because edge and blob detection algorithms use linear filters in frequency domains:
f(x) ∗ g(x) = Σ_{a=−∞}^{+∞} f(a)·g(x − a)
The filter is moved from minus infinity to plus infinity, multiplying the two signals together.
Convolution in the time domain is equivalent to multiplication in the frequency domain (as many image processing operations are done in the time domain).
Mirror g with respect to the y-axis and shift it from minus infinity towards f.
The convolution is the area under the intersected part, which increases and then remains constant.
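The discrete definition above translates directly into code (a naive sketch, not an optimised frequency-domain implementation):

```python
# Naive discrete 1-D convolution: (f * g)(x) = sum over a of f(a) * g(x - a).
def convolve(f, g):
    n = len(f) + len(g) - 1          # length of the full convolution
    h = [0.0] * n
    for x in range(n):
        for a in range(len(f)):
            if 0 <= x - a < len(g):  # g is zero outside its support
                h[x] += f[a] * g[x - a]
    return h

print(convolve([1, 2, 3], [0, 1, 0.5]))  # [0.0, 1.0, 2.5, 4.0, 1.5]
```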
Edge detection
Detects lines as connected pixels separating two regions
It works by defining a filter function g(x,y) that is convolved with the image to reveal the edges
Sobel edge detection
Two separate filters, one for the x-axis and one for the y-axis:
g_x(.) = [ −1 0 1 ; −2 0 2 ; −1 0 1 ]
g_y(.) = [ −1 −2 −1 ; 0 0 0 ; 1 2 1 ]
If you want to detect an edge in the image:
choose a 3×3 patch of 9 pixels; the middle column is ignored (zero weights)
if the LHS and RHS of the patch are the same, the weighted pixel values cancel out when summed – NO EDGE DETECTED
if instead the LHS is white (255) and the RHS is 0, the RHS contributes 0 but the LHS contributes (−1)(255) + (−2)(255) + (−1)(255) – a very NEGATIVE number
then take the integral (sum the weighted values up) to get the filter response
or:
h_x(x, y) = f(x, y) ∗ g_x(.)
h_y(x, y) = f(x, y) ∗ g_y(.)
Then, the overall gradient image of the original image is:
h(x, y) = √(h_x²(x, y) + h_y²(x, y))
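A sketch of the Sobel response at a single pixel (written in cross-correlation form; for the gradient magnitude, the kernel flip of true convolution only changes the sign):

```python
import math

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # g_x
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # g_y

def sobel_at(patch):
    """Gradient magnitude h = sqrt(h_x^2 + h_y^2) for one 3x3 patch."""
    hx = sum(GX[i][j] * patch[i][j] for i in range(3) for j in range(3))
    hy = sum(GY[i][j] * patch[i][j] for i in range(3) for j in range(3))
    return math.sqrt(hx * hx + hy * hy)

# Vertical edge: white (255) on the left, black (0) on the right.
edge = [[255, 255, 0]] * 3
flat = [[255, 255, 255]] * 3            # uniform patch: LHS and RHS cancel out
print(sobel_at(edge), sobel_at(flat))   # strong response vs 0.0
```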
Kalman Filter
Current estimation:
x̂_k = K_k·z_k + (1 − K_k)·x̂_{k−1}
We want to find x̂_k, the estimate of the signal x, for each consecutive k (at each time interval)
z_k is the measured value, which we don't know perfectly; x̂_{k−1} is the estimate of the signal at
the previous state
The only unknown component is the Kalman gain, K_k, which is calculated for each consecutive
state
The Kalman filter works by finding the optimal averaging factor for each consecutive state,
while also remembering information about previous states
Steps
1 – equations of a Kalman filter:
x_k = A·x_{k−1} + B·u_k + w_{k−1}   (1)
Each signal value x_k may be evaluated using the stochastic equation (1): any x_k is a linear
combination of its previous value (A·x_{k−1}) plus a control signal (B·u_k) and a process noise
(w_{k−1}); most of the time, u_k doesn't exist
Equation (2) – any measurement value (with unknown accuracy) is a linear combination of the
signal value and the measurement noise (both noises are considered to be Gaussian):
z_k = H·x_k + v_k   (2)
A, B and H are general-form matrices
If we are sure our system fits the model (it normally does), then we have to estimate the mean
and standard deviation of the noise functions w_{k−1} and v_k
The better the estimation of the noise parameters, the better the estimates
2 – start the process
Time update (prediction):
x̂⁻_k = A·x̂_{k−1} + B·u_k
P⁻_k = A·P_{k−1}·Aᵀ + Q
Measurement update (correction):
K_k = P⁻_k·Hᵀ·(H·P⁻_k·Hᵀ + R)⁻¹
x̂_k = x̂⁻_k + K_k·(z_k − H·x̂⁻_k)
P_k = (I − K_k·H)·P⁻_k
We know A, B and H, but we need to determine R and Q
To start the process, we need estimates of x₀ and P₀
We then iterate through the estimates, where the previous estimate is the input for the current
state
[Figure: the Kalman filter loop – the Time Update (Prediction) projects the state and the error covariance ahead (a measure of uncertainty in the estimated state), P⁻_k = A·P_{k−1}·Aᵀ + Q; the Measurement Update (Correction) then computes the gain, corrects the estimate, and updates the error covariance, P_k = (I − K_k·H)·P⁻_k]
x̂⁻_k → the prior estimate, which in a way is the rough estimate before the measurement-update
correction
P⁻_k → the prior error covariance, used in the measurement-update equations
Since we have a single-output system, the covariance is scalar and is equal to the variance of the
measurement noise
Wind and changes in car velocity contribute to the process noise w_k
We can estimate the position using a mathematical model, but x_k is uncertain because of the
process noise
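The whole loop can be sketched for a one-dimensional system (A = H = 1, no control input; Q, R and the measurement values below are illustrative, not taken from the notes):

```python
# Minimal scalar Kalman filter: estimate a roughly constant signal
# from noisy measurements z_k.
def kalman_1d(measurements, x0=0.0, p0=1.0, q=1e-5, r=0.1):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Time update (prediction): A = 1, so the projected state is x itself;
        # project the error covariance ahead.
        p = p + q
        # Measurement update (correction).
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # x_k = x_k^- + K_k (z_k - x_k^-)
        p = (1 - k) * p          # P_k = (1 - K_k) P_k^-
        estimates.append(x)
    return estimates

est = kalman_1d([0.9, 1.1, 1.0, 0.95, 1.05])
print(round(est[-1], 2))         # the estimate settles near 1
```

Note how each iteration uses the previous estimate as the input for the current state, exactly as described above.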