Ümit Özgüner
Tankut Acarman
Keith Redmill
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S. Library of Congress.
ISBN-13: 978-1-60807-192-0
All rights reserved. Printed and bound in the United States of America. No part
of this book may be reproduced or utilized in any form or by any means, elec-
tronic or mechanical, including photocopying, recording, or by any information
storage and retrieval system, without permission in writing from the publisher.
All terms mentioned in this book that are known to be trademarks or service
marks have been appropriately capitalized. Artech House cannot attest to the
accuracy of this information. Use of a term in this book should not be regarded
as affecting the validity of any trademark or service mark.
10 9 8 7 6 5 4 3 2 1
Contents
Preface ix
CHAPTER 1
Introduction 1
1.1 Background in Autonomy in Cars 1
1.2 Components of Autonomy 2
1.2.1 Sensors 2
1.2.2 Actuators 2
1.2.3 Communication 3
1.2.4 Intelligence 3
1.3 Notes on Historical Development 3
1.3.1 Research and Experiments on Autonomous Vehicles 3
1.3.2 Autonomous Driving Demonstrations 5
1.3.3 Recent Appearances in the Market 8
1.4 Contents of This Book 10
References 11
CHAPTER 2
The Role of Control in Autonomous Systems 13
2.1 Feedback 13
2.1.1 Speed Control Using Point Mass and Force Input 13
2.1.2 Stopping 15
2.1.3 Swerving 17
2.2 A First Look at Autonomous Control 18
2.2.1 Car Following and Advanced Cruise Control 18
2.2.2 Steering Control Using Point Mass Model: Open-Loop Commands 22
2.2.3 Steering Control Using Point Mass Model: Closed-Loop Commands 28
2.2.4 Polynomial Tracking 32
2.2.5 Continuous and Smooth Trajectory Establishment 33
2.2.6 The Need for Command Sequencing 34
References 35
CHAPTER 3
System Architecture and Hybrid System Modeling 37
3.1 System Architecture 37
3.1.1 Architectures Within Autonomous Vehicles 37
3.1.2 Task Hierarchies for Autonomous Vehicles 37
3.2 Hybrid System Formulation 43
3.2.1 Discrete Event Systems, Finite State Machines, and Hybrid Systems 43
3.2.2 Another Look at ACC 44
3.2.3 Application to Obstacle Avoidance 45
3.2.4 Another Example: Two Buses in a Single Lane 49
3.3 State Machines for Different Challenge Events 55
3.3.1 Macrostates: Highway, City, and Off-Road Driving 55
3.3.2 The Demo 97 State Machine 57
3.3.3 Grand Challenge 2 State Machine 61
3.3.4 The Urban Challenge State Machine 64
References 67
CHAPTER 4
Sensors, Estimation, and Sensor Fusion 69
4.1 Sensor Characteristics 69
4.2 Vehicle Internal State Sensing 70
4.2.1 OEM Vehicle Sensors 70
4.2.2 Global Positioning System (GPS) 71
4.2.3 Inertial Measurements 80
4.2.4 Magnetic Compass (Magnetometer) 81
4.3 External World Sensing 84
4.3.1 Radar 85
4.3.2 LIDAR 86
4.3.3 Image Processing Sensors 88
4.3.4 Cooperative Infrastructure Technologies 93
4.4 Estimation 95
4.4.1 An Introduction to the Kalman Filter 95
4.4.2 Example 97
4.4.3 Another Example of Kalman Filters: Vehicle Tracking for
Crash Avoidance 99
4.5 Sensor Fusion 101
4.5.1 Vehicle Localization (Position and Orientation) 101
4.5.2 External Environment Sensing 103
4.5.3 Occupancy Maps and an Off-Road Vehicle 106
4.5.4 Cluster Tracking and an On-Road Urban Vehicle 117
4.6 Situational Awareness 133
4.6.1 Structure of a Situation Analysis Module 134
4.6.2 Road and Lane Model Generation 136
4.6.3 Intersection Generation 140
4.6.4 Primitives 141
CHAPTER 5
Examples of Autonomy 149
5.1 Cruise Control 149
5.1.1 Background 150
5.1.2 Speed Control with an Engine Model 151
5.1.3 More Complex Systems 158
5.2 Antilock-Brake Systems 161
5.2.1 Background 161
5.2.2 Slip 162
5.2.3 An ABS System 165
5.3 Steering Control and Lane Following 167
5.3.1 Background 167
5.3.2 Steering Control 167
5.3.3 Lane Following 178
5.4 Parking 182
5.4.1 Local Coordinates 183
5.4.2 Parking Scenarios: General Parking Scenario and DARPA
Urban Challenge Autonomous Vehicle Parking Scenario 184
5.4.3 Simulation and Experimental Results 190
References 191
CHAPTER 6
Maps and Path Planning 193
6.1 Map Databases 193
6.1.1 Raster Map Data 194
6.1.2 Vector Map Data 195
6.1.3 Utilizing the Map Data 196
6.2 Path Planning 198
6.2.1 Path Planning in an Off-Road Environment 199
6.2.2 An Off-Road Grid-Based Path Planning Algorithm 201
6.2.3 Other Off-Road Path Planning Approaches 204
6.2.4 An On-Road Path Planning Algorithm 206
References 215
CHAPTER 7
Vehicle-to-Vehicle and Vehicle-to-Infrastructure Communication 217
7.1 Introduction 217
7.2 Vehicle-to-Vehicle Communication (V2V) 220
7.3 Vehicle-to-Infrastructure Communication (V2I) 223
7.4 Communication Technologies 224
7.4.1 Unidirectional Communication Through Broadcast Radio 224
7.4.2 Cellular/Broadband 224
CHAPTER 8
Conclusions 247
8.1 Some Related Problems 247
8.1.1 Fault Tolerance 247
8.1.2 Driver Modeling 249
8.2 And the Beat Goes On 251
References 255
Appendix 257
A.1 Two-Wheel Vehicle (Bicycle) Model 257
A.2 Full Vehicle Model Without Engine Dynamics 260
A.2.1 Lateral, Longitudinal, and Yaw Dynamics 260
A.2.2 Suspension Forces and Tire Dynamics 263
A.2.3 Tire Forces 264
Preface
This book is based on class notes for a senior/graduate-level course, Autonomy in
Vehicles, that I, Ümit Özgüner, have been teaching at The Ohio State University
for 8 years. Portions were also presented in lectures at short courses in different
international venues.
The course, and thus the book, focuses on the understanding of autonomous
vehicles and the technologies that aid in their development. Before we get to see
fully autonomous vehicles on the roadway, we will encounter many of the constitu-
ent technologies in new cars. We therefore present a fairly comprehensive over-
view of the technologies and techniques that contribute to so-called intelligent
vehicles as they go through the stages of having driver aids, semiautonomy, and
finally reach full autonomy.
The book content relies heavily on our own experiences in developing a series
of autonomous vehicles and participating in a series of international demonstration
or challenge events. Although the book explains the developments at The Ohio
State University in substantially more detail, the reader is provided with the basic
background and is encouraged to read and appreciate the work at other research
institutions participating in the demonstrations and challenges.
My coauthors and I would like to thank all the sponsors of the research and
development activity reported here. First and foremost is The Ohio State University
College of Engineering, directly and through the Transportation Research Endow-
ment Program (TREP) and the OSU Center for Automotive Research (CAR); the
OSU Electrical and Computer Engineering Department; and the Transportation
Research Center (TRC) testing facility in East Liberty, Ohio. We would like to
thank Honda R&D Americas, National Instruments, OKI, Oshkosh Truck Co.,
Denso, and, finally, NSF, which is presently supporting our work through the Cy-
ber Physical Systems (CPS) program.
A substantial portion of the work reported here is due to the efforts of students
working on projects. A few have to be mentioned by name: Scott Biddlestone, Dr.
Qi Chen, Dr. Lina Fu, Dr. Cem Hatipoglu, Arda Kurt, Dr. Jai Lei, Dr. Yiting Liu,
John Martin, Scott Schneider, Ashish B. Shah, Kevin Streib, and Dr. Hai Yu.
Finally, I need to thank my wife and colleague Professor Füsun Özgüner for all
her contributions.
CHAPTER 1
Introduction
We define autonomy in a car as the car making driving decisions without in-
tervention of a human. As such, autonomy exists in various aspects of a car to-
day: cruise control and antilock brake systems (ABS) are also prime examples of
autonomous behavior. Some systems already existing in a few models (advanced
cruise control, lane departure avoidance, obstacle avoidance systems) are all
autonomous. Near-term developments that we anticipate, with some first appearing
as warning devices, are intersection collision warning, lane change warning, backup
parking, parallel parking aids, and bus precision docking. These capabilities show
either autonomous behavior, or can be totally autonomous with the simple addition
of actuation. Finally, truck convoys and driverless buses in enclosed areas have seen
limited operation.
Studies in autonomous behavior for cars, concentrating on sensing, perception,
and control, have been going on for a number of years. One can list a number of
capabilities, beyond basic speed regulation, that are key to autonomous behavior.
These will all affect the cars of the future:
Car following/convoying;
Lane tracking/lane change;
Emergency stopping;
Obstacle avoidance.
In what follows in this book, we shall be investigating all of these basic opera-
tions in detail, and show how they are also part of more complex systems.
In each one of the above operations the car is expected to do self-sensing (ba-
sically speed and acceleration), sensing with respect to some absolute coordinate
system (usually using GPS, possibly with the help of a map database), and sensing
with respect to the immediate environment (with respect to lane markings, special
indicators on the road, obstacles, other vehicles, and so forth). We shall also be
reviewing the relevant technologies in this book.
1.2.1 Sensors
One of the key components of an autonomous vehicle is the sensors (see Figure 1.1).
The vehicle has to first have measurements related to its own state. Thus it needs
to sense its speed, possibly through its wheel sensors, and also understand its direc-
tion of motion. The vehicle could use rate gyros and steering angle measurements
as standard devices for this data. Data from GPS can also be used in this regard.
The second class of sensors is used by the car to find its position in relation to
other vehicles and obstacles. Vision systems, radars, lidars, and ultrasonic sensors
can be mentioned.
Finally, the vehicle needs to find its position with respect to the roadway. Dif-
ferent sensors have been used with or without infrastructure-based help. Vision
systems, radar, magnetometers, and RF ID tags can be mentioned. GPS data has
also been used with good digital maps.
1.2.2 Actuators
To be able to control the car in an autonomous fashion, it has to be interfaced to
a computer. The specific control loops are speed control (throttle control), steering
control, and brake control. Unless we are dealing with an automatic transmission, the
gear shift also has to be automated. A car in which all actuation in these loops is
electrical is called a drive-by-wire vehicle.
Figure 1.1 Generic concept of a vehicle integrated with sensors and a signal processor.
1.2.3 Communication
If a car has access to information through a wireless network, this too can help in
the automation of driving. Communication can provide general information as well
as real-time warnings to the driver, data that can replace measurements.
One obvious use of communication between two vehicles is in exchanging data
on relative location and relative speed. This can replace or augment data from
sensors.
1.2.4 Intelligence
The intelligence in vehicles is introduced by the rules and procedures provided and
programmed by humans. As such, as of the time of this writing, intelligent vehicles
emulate the driving behavior of humans. The so-called intelligence is supplied by
one or more processors on the vehicle. To this end we can classify three sets of
processors: those for handling sensor data, those for low-level control algorithms
and loop closure over the actuators, and finally those for establishing the autonomous
behavior. We shall investigate their tasks in subsequent chapters.
1.3 Notes on Historical Development
1.3.1 Research and Experiments on Autonomous Vehicles
In the United States, early research at The Ohio State University considered
controlling all cars in a network of highways, fully occupied with automated cars,
called intelligent vehicle highway systems [1].
In Japan, research on automated highway systems was started in the early
1960s in the Mechanical Engineering Laboratory (MEL) and the National Institute
of Advanced Industrial Science and Technology (AIST). The automated driving
system in the 1960s employed a combination of inductive cable embedded under a
roadway surface and a pair of pickup coils at the front bumper of a vehicle for lat-
eral control, and it was a cooperative system between a vehicle and infrastructure.
In addition to the free agent, a rear-end collision avoidance system between two
automated vehicles based on roadway vehicle sensors was developed. The automated
vehicle drove up to 100 km/h on a test track in 1967 [2]. A comprehensive test
system for vision-based application development was built on the production-type
vehicle in the late 1980s and stereo cameras were introduced to detect obstacles
along lanes with curvatures and crossings at 50 km/h speed (see [3, 4]).
In Germany, in the beginning of the 1980s Professor Dickmanns and his team
equipped a Mercedes-Benz van with cameras and other sensors. For safety reasons,
initial experiments in Bavaria took place on streets without traffic. In 1986 the
robot car VaMoRs from the same team managed to drive all by itself; by 1987
VaMoRs was able to travel at speeds up to 96 km/h, or roughly 60 mph. A number
of these capabilities were also highlighted in the European PROMETHEUS Project
(see, for example, [5]).
Starting with the PROMETHEUS Project, which involved more than 20 car
manufacturers in Europe, some Italian research centers began their activity in the
field of perception for vehicles. Among these, the CRF (FIAT Research Center) and
the University of Parma developed some early prototypes, which demonstrated the
viability of perception technologies like artificial vision.
In the United States, apart from the work at The Ohio State University,
development of autonomous vehicles was undertaken in programs at Carnegie Mellon
University, with links to robotics activity and vision-based expertise [6], and within
the California PATH Program [7].
The California Partners for Advanced Transit and Highways (California PATH)
program is dedicated to increasing highway capacity and safety and to reducing
traffic congestion, air pollution, and energy consumption by applying advanced
technology. It funds research projects selected from proposals submitted
throughout California and acts as a collaboration between the California
Department of Transportation (Caltrans), the University of California, other public
and private academic institutions, and private industry (see http://www.path.berkeley.edu/).
Among the earlier PATH research projects addressing autonomy in vehicles was the
platoon control demonstration in 1997 in San Diego, California, in which one
vehicle autonomously split from the platoon, performed a lane change, fell back,
and performed another lane change to rejoin the platoon. The advanced snowplow,
advanced rotary plow, automated heavy vehicle control, and Bayesian automated
taxi (concerning vision-guided automated intelligent vehicles) were other projects
involving control and decision autonomy. Projects on sensing and sensor technology
were directed towards inertial and GPS measurement on the vehicle for kinematic
positioning, and towards magnetic marker reference sensing in the road
infrastructure to measure the relative vehicle position with respect to the center of
the lane and to carry roadway characteristics encoded in binary form in the north
and south poles of those markers. Recent projects have concentrated on
transportation research for automated bus steering near bus stops in a city using
autonomous magnetic guidance technology, and on the deployment of a research
testbed to enable traveler communication through a wireless gateway. Choices
about the type of information to be transferred to the traveler and the driver's
handheld communication device are being determined by deploying a research
testbed in the San Francisco Bay area.
In Europe, a project called PReVENT was organized by the European automotive
industry and the European Commission to develop and demonstrate active safety
system applications and technologies [8]. The main goals were to cut road accidents
by half, render the European automotive industry more competitive, and
disseminate transport safety initiatives. A public exhibition was organized on
September 18-22, 2007, in Versailles, France, with many safety applications,
accompanied by safe speed, lateral support, intersection safety, collision mitigation,
advanced driver assistance systems, and electronic stability control demonstrations
(the PReVENT-IP project and its public deliverables are available at
http://www.prevent-ip.org). Another current research activity on autonomous
vehicles and demonstrations has been developing in Europe under the name Cybercar.
The Cybercar has been introduced as a road vehicle with fully automated driving
capabilities based on a sensor suite, transportation system management, and also
vehicle-to-vehicle and vehicle-to-infrastructure communication. This future concept
of public transportation, carrying passengers or goods door-to-door through the
network of roads, has been investigated since the early 1990s and was implemented
in 1997 to transport passengers at Schiphol Airport in the Netherlands. The
Cybercar concept has been developed and expanded under numerous European
projects such as CyberCars, CyberMove, EDICT, Netmobil, CyberC3, and
currently CyberCars-2 and CityMobil (interested readers may check the former
Cybercar Web page at http://www.cybercars.org/).
Through the years a number of developments have been introduced to the pub-
lic in national and international challenges. We will summarize those below. Apart
from those, two specific recent demonstrations, showing the activities of specific
groups, are worth mentioning: A team led by Professor Alberto Broggi demon-
strated a set of driverless vans as they went from Italy to China. The 8,000-mile
journey ended in Shanghai, verifying the robustness of a number of technologies
used for autonomous vehicles. A team supported by Google and led by Sebastian
Thrun demonstrated a fully autonomous car in fairly dense urban California
traffic, utilizing vision-based systems and Google Street View.
Figure 1.2 Two autonomous cars from The Ohio State University team in Demo 97 performing a
vehicle pass without infrastructure aids.
Figure 1.3 (a) TerraMax before GC04. (b) ION before GC05.
was developed. Various sensors, including multiple digital color cameras, LIDARs,
radar, GPS, and inertial navigation units were mounted. We will review the Grand
Challenge later in Chapter 4.
The 2007 DARPA Urban Challenge was performed in emulated city traffic and
saw a return to urban vehicles. Autonomous ground vehicles competed for the
Urban Challenge 2007 prize by completing an approximately 60-mile urban course
in less than 6 hours with moving traffic. Each autonomous vehicle had to be built
on a full-size, production-type chassis and required a documented safety report.
The autonomy capability was demonstrated in the form of a complex set of
behaviors: obeying all traffic regulations while negotiating other traffic and
obstacles and merging into traffic. The urban driving environment challenged
autonomous driving with physical limits such as narrow lanes and sharp turns, and
also with the difficulties of daily urban driving such as congested intersections,
obstacles, blocked streets, parked vehicles, pedestrians, and moving vehicles
(http://www.darpa.mil/grandchallenge/overview.asp). Figure 1.4 shows the OSU
car ACT that participated in the Urban Challenge,
which was won by the Carnegie Mellon team, with the Stanford entry placing second [12].
At the time of this writing, another challenge/demonstration was under prep-
aration in the Netherlands: the Grand Cooperative Driving Challenge (GCDC),
which plans to demonstrate the utilization of vehicle-to-vehicle (V2V) communica-
tion as it helps semiautonomous cars go through a series of collaborative exercises
emulating cooperative driving in urban and highway settings (http://www.gcdc.
net/). Team Mekar, which included Okan University, Istanbul Technical University,
and The Ohio State University, joined the challenge with a V2V communication
interface (Figure 1.5).
Figure 1.4 OSU-ACT (Autonomous City Transport) before the 2007 DARPA Urban Challenge.
Night vision systems assist the driver in bad visibility conditions, such as night
driving, by displaying an image as observed by a sensor that can cope with such
conditions. GM first introduced night vision in its Lincoln Navigator, employing
a far-infrared (FIR) spectrum. The Mercedes S-Class followed in 2005 with an
alternative near-infrared (NIR) concept that employs a CMOS camera and NIR
high-beam headlamps.
In 2002 Honda introduced a lane-keeping assist system called the Honda
intelligent driver support system (HIDS) in the Japanese market. It combined ACC
with lane-keeping support. Based on lane detection by a video sensor, an auxiliary
supportive steering moment is added to the driver's action. With the collision
mitigation system (CMS), Honda has followed an escalation strategy to mitigate
rear-end collisions. When the vehicle approaches another vehicle in a way that
requires driver action, initially a warning signal is activated; when the driver does
not react, moderate braking is activated, increasing to about 6 m/s² when a collision
is unavoidable. Should the driver react but his or her reaction be insufficient to
avoid the accident, the system enhances his or her action during all phases. Even
though the system cannot avoid all accidents, the support of the driver and the
speed reduction will reduce collision severity. Since 2005 an active brake assistance
system that supports the driver in avoiding or mitigating frontal collisions has been
available in the Mercedes S-Class.
At the time of this writing, there was extensive activity in using wireless
vehicle-to-vehicle communication technologies. We would expect these technolo-
gies to provide additional aid to the capabilities of autonomous vehicles.
1.4 Contents of This Book
The following chapters of this book will take the reader through the different
technologies and techniques employed in autonomous vehicle development.
Figure 1.5 Semiautonomous vehicle MEKAR, supported by Okan University, the Istanbul Technical
University, and The Ohio State University, before the Grand Cooperative Driving Challenge.
References
[1] Fenton, R. E., and R. J. Mayhan, "Automated Highway Studies at The Ohio State University - An Overview," IEEE Transactions on Vehicular Technology, Vol. 40, No. 1, 1991, pp. 100-113.
[2] Ohshima, Y., et al., "Control System for Automatic Driving," Proc. IFAC Tokyo Symposium on Systems Engineering for Control System Design, Tokyo, Japan, 1965.
[3] Tsugawa, S., "Vision-Based Vehicles in Japan: Machine Vision Systems and Driving Control Systems," IEEE Transactions on Industrial Electronics, Vol. 41, No. 4, 1994, pp. 398-405.
[4] Tsugawa, S., "A History of Automated Highway Systems in Japan and Future Issues," 2008 International Conference on Vehicular Electronics and Safety, Columbus, OH, 2008.
[5] Dickmanns, E. D., et al., "Recursive 3D Road and Relative Ego-State Recognition," IEEE Trans. PAMI, Vol. 14, No. 2, 1992, pp. 199-213.
[6] Thorpe, C., et al., Vision and Navigation: The Carnegie Mellon Navlab, Boston, MA: Kluwer Academic Publishers, 1990.
[7] Chang, K. S., et al., "Automated Highway System Experiments in the PATH Program," IVHS Journal, Vol. 1, No. 1, 1993, pp. 63-87.
[8] Schulze, M., "Contribution of PReVENT to Safe Cars of the Future," 13th ITS World Congress, London, U.K., 2006.
[9] Bishop, R., Intelligent Vehicle Technology and Trends, Norwood, MA: Artech House, 2005.
[10] Kato, S., et al., "Vehicle Control Algorithms for Cooperative Driving with Automated Vehicles and Intervehicle Communications," IEEE Transactions on Intelligent Transportation Systems, Vol. 3, No. 3, 2002, pp. 155-161.
[11] Özgüner, Ü., K. Redmill, and A. Broggi, "Team TerraMax and the DARPA Grand Challenge: A General Overview," 2004 IEEE Intelligent Vehicle Symposium, Parma, Italy, 2004.
[12] Montemerlo, M., et al., "Junior: The Stanford Entry in the Urban Challenge," Journal of Field Robotics, Vol. 25, No. 9, September 2008, pp. 569-597.
CHAPTER 2
The Role of Control in Autonomous Systems
Autonomy in vehicles requires control of motion with respect to desired objectives
and constraints. In this chapter we demonstrate basic lateral and longitudinal
vehicle control. A driving-and-stopping scenario is first presented to illustrate the
effect of braking input on fairly simple motion dynamics. Then collision and
obstacle avoidance ideas are briefly introduced, and steering is presented as
another control variable. The headway distance, or relative distance between the
follower and leader vehicles, is used to illustrate the introduction of feedback into
longitudinal vehicle control. The perception-reaction time and the simple vehicle
dynamics responses are simulated along the commanded sequences. Turning
around a corner and the effects of longitudinal speed on the lateral motion
responses are added just after the cruise control scenario to illustrate the possible
coordinated use of the inputs. Fully autonomous vehicles, path generation, and
tracking are briefly presented as an introduction to the chapters that follow.
2.1 Feedback
The longitudinal motion of a point-mass vehicle can be written as

m \ddot{x} + \beta \dot{x} = f_d - f_b    (2.1)

where x denotes the displacement of the vehicle, \dot{x} = dx/dt its velocity (the
first-order time derivative of the displacement), and \ddot{x} = d^2x/dt^2 its
acceleration/deceleration (the second-order time derivative of the displacement);
m denotes the mass of the vehicle, \beta is the viscous friction coefficient modeling
road friction, f_d represents the drive force, and f_b represents the brake force
applied to the vehicle model. Simple analysis can show that a fixed driving force
will provide a steady-state fixed velocity. Under the assumption that the initial
conditions for the states are zero, at steady-state conditions, that is, when the
vehicle model cruises at a constant speed with a constant driving force, the change
with respect to time is equal to 0, \ddot{x} = 0, and the point-mass vehicle model
dynamics become \beta \dot{x} = f_d, or \dot{x} = f_d / \beta. This simple analysis
clearly concludes that the velocity is proportional to the fixed driving force divided
by the viscous friction coefficient. Obviously, adjustment of the braking force f_b,
especially proportional to the velocity, will give us the option of slowing down and
stopping faster.
In the first simulation scenario, we consider the point-mass model with road
friction. Driving at constant speed followed by stopping with only road friction
and the mass of the vehicle model is simulated. The time responses of the model's
velocity are plotted in Figure 2.1. The brake force is not applied during the
stopping or decelerating time period; some amount of driving force is applied to
drive the model at constant speed in the initial stage of the simulation scenario.
The time responses of the displacement are plotted in Figure 2.2.
Figure 2.1 The time responses of the vehicle model's velocity (m/sec) versus time (sec):
driving with constant speed, then stopping with only friction and mass (no brakes). The
vehicle model is subject to road friction for stopping.
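The steady-state relation \dot{x} = f_d/\beta and the coast-down behavior above can be checked numerically; a minimal sketch, assuming illustrative mass, friction, and force values rather than those behind the book's figures:

```python
# Point-mass longitudinal model m*x'' + beta*x' = f_d - f_b,
# integrated with forward Euler. Parameter values are illustrative
# assumptions, not those used for Figures 2.1-2.2.
M = 1000.0     # vehicle mass (kg)
BETA = 400.0   # viscous road-friction coefficient (N*s/m)

def simulate_speed(f_d, f_b=0.0, v0=0.0, t_end=60.0, dt=0.01):
    """Integrate the force balance and return the final speed (m/s)."""
    v = v0
    for _ in range(int(t_end / dt)):
        a = (f_d - f_b - BETA * v) / M   # acceleration from (2.1)
        v += a * dt
    return v

# Constant drive force of 800 N: steady state is f_d / BETA = 2.0 m/s.
v_cruise = simulate_speed(f_d=800.0)
# Foot off the gas (f_d = 0): speed decays toward zero on friction alone.
v_coast = simulate_speed(f_d=0.0, v0=v_cruise, t_end=30.0)
print(round(v_cruise, 2), round(v_coast, 4))
```

With the time constant m/\beta = 2.5 seconds, 60 seconds of cruising is ample for the speed to settle, and 30 seconds of coasting brings it essentially to zero.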
Figure 2.2 The time responses of the vehicle model's displacement in the longitudinal
direction subject to constant drive force and road friction.
2.1.2 Stopping
After considering the time responses of the displacement and the speed of the
point-mass vehicle model subject to road friction, we may consider the stopping
time and distance needed to avoid hitting, or colliding with, the vehicle or obstacle
in front.
The braking force f_b may be changed to regulate the speed and the displacement
of the vehicle model, giving the option of slowing down and stopping faster. The
speed profile versus increasing braking force is plotted in Figure 2.3, and the
resulting displacement is given in Figure 2.4.
The presence of a potentially dangerous target can be determined by comparing
the measured time to collision against a minimum time for stopping the vehicle
safely. At any time t from the initial time t_0, the distance to the target can be
written as [1]:

x(t) = x(t_0) + (t - t_0) \dot{x}(t_0) + (t - t_0)^2 \ddot{x}(t_0)    (2.2)

When a collision occurs (i.e., for some time t > t_0), the distance x(t) = 0. Under
the assumption that during the braking maneuver over the time interval [t_0, t]
the speed and deceleration remain fixed at their initial values, denoted by
\dot{x}(t_0) and \ddot{x}(t_0), the time to collision (TTC) from the initial time
t_0 to the instantaneous time t can be estimated as:

TTC = \frac{-\dot{x}(t_0) + \sqrt{\dot{x}(t_0)^2 - 4\, x(t_0)\, \ddot{x}(t_0)}}{2\, \ddot{x}(t_0)}    (2.3)
Figure 2.3 The time responses of velocity subject to different braking forces, from zero
braking force upward.
Figure 2.4 The time responses of the vehicle model's displacement (m) versus time (sec)
subject to different braking forces, from zero braking force to maximum brake force.
The maximum deceleration a_max is taken as the constant value for emergency
maneuvering operations. With t_d denoting the delay time and J_max the maximum
jerk, the time t_b at which the maximum deceleration a_max is reached is

t_b = \frac{a_{max}}{J_{max}} + t_d

The minimum stopping time t_min satisfies

\dot{x}(t_d) + \int_{t_d}^{t_{min}} \ddot{x}(t)\, dt = 0    (2.4)

Hence,

\dot{x}(t_d) - \frac{1}{2} J_{max} (t_b - t_d)^2 - a_{max} (t_{min} - t_b) = 0

t_{min} = \frac{\dot{x}(t_d) - \frac{1}{2} J_{max} (t_b - t_d)^2}{a_{max}} + t_b

It may be assumed that service limits of 0.2g deceleration with a jerk of 0.2 g·sec^{-1}
are required for ride comfort. It is also assumed that the emergency deceleration is
about 0.4g. The United States standards on the ride comfort of passengers seated
in land vehicles were defined by the National Maglev Initiative Office [2]. However,
these values are not necessarily fixed, and they can be adjusted in accordance
with the required control system performance.
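The minimum stopping time above is straightforward to evaluate; a sketch with the quoted comfort/emergency values (0.2g jerk, 0.4g emergency deceleration) and an assumed delay t_d, valid when the initial speed is high enough that a_max is actually reached:

```python
G = 9.81  # gravitational acceleration (m/s^2)

# Minimum stopping time for a jerk-limited emergency brake profile:
# no braking until t_d, deceleration ramping at J_max until a_max is
# reached at t_b, then constant a_max until standstill. The 0.5 s
# delay t_d is an assumption, not a value from the text.
def min_stop_time(v0, a_max=0.4 * G, j_max=0.2 * G, t_d=0.5):
    t_b = a_max / j_max + t_d                       # instant a_max is reached
    v_at_tb = v0 - 0.5 * j_max * (t_b - t_d) ** 2   # speed left after the ramp
    assert v_at_tb > 0, "v0 too low: a_max is never reached"
    return v_at_tb / a_max + t_b                    # t_min from (2.4)

print(round(min_stop_time(20.0), 2))   # stopping from 20 m/s (72 km/h)
```

Note that for low initial speeds the vehicle stops during the jerk ramp, and the formula no longer applies (hence the guard).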
2.1.3 Swerving
The capability to swerve around an obstacle can help indicate items such as the
maximum safe turn rate and the required look-ahead distance to assure avoidance
without rollover, as illustrated in Figure 2.5 [3]. The look-ahead distance, denoted
by D, can be generated using simple physics:

D = \sqrt{R^2 - (R - y)^2} + \dot{x}\, t_{react} + B    (2.5)

where \dot{x} is the velocity in the longitudinal direction, y is the displacement the
vehicle has to travel in the lateral direction to arrive at the clearance point,
t_{react} is the reaction time, and B is a buffer distance. The look-ahead distance
is required to assure that the autonomous vehicle can accomplish the swerving
task, without rollover or sliding, before closely approaching the obstacle or
entering the buffer space. A safe turn maneuver is calculated in terms of the
maximum value of R_min, R_roll, or R_slide, given by [4]:
Figure 2.5 Swerving around an obstacle.
R_{roll} = \frac{2 h \dot{x}^2}{w g}

R_{slide} = \frac{\dot{x}^2}{\mu g}

These equations involve many critical variables because of the large number of
environmental and vehicle factors that affect the swerving abilities of a vehicle.
The variables used in these equations are shown in Table 2.1.
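A sketch combining (2.5) with the rollover and slide limits; the geometry values (CG height h, track width w, friction coefficient mu, minimum radius, buffer) are illustrative assumptions, not the entries of Table 2.1:

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def safe_turn_radius(v, h=0.6, w=1.6, mu=0.8, r_min=5.0):
    """Smallest turn radius avoiding both rollover and sliding at speed v."""
    r_roll = 2.0 * h * v ** 2 / (w * G)   # rollover limit R_roll
    r_slide = v ** 2 / (mu * G)           # slide limit R_slide
    return max(r_min, r_roll, r_slide)

def lookahead_distance(v, y, t_react, buffer_b):
    """Look-ahead distance D from (2.5), using the safe radius for speed v."""
    r = safe_turn_radius(v)
    return math.sqrt(r ** 2 - (r - y) ** 2) + v * t_react + buffer_b
```

At highway speeds the slide limit typically dominates the rollover limit for passenger-car geometry, so the safe radius grows with the square of the speed, and the look-ahead distance grows accordingly.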
Figure 2.6 Cruise control scenario: leading vehicle and the cruise-controlled follower vehicle.
the following vehicle has to maintain headway (a safe distance behind the leading
vehicle). To maintain headway, the following vehicle's driver, going faster, must
let off the gas pedal and decelerate the vehicle with road friction. After settling at
the desired headway, the driver can apply some drive force to maintain headway.
In Figure 2.7 the time responses of relative displacement are plotted; the headway,
or relative displacement, is decreased by the faster-going follower, and to prevent
collision, at t = 100 seconds the traction force f_d = C_2 is commanded to be 0
(i.e., the foot is taken off the gas pedal), and the point-mass motion of the follower
vehicle becomes m \ddot{x}_2 + \beta \dot{x}_2 = 0. Speed decreases with the
damping coefficient, or road friction. At t ≈ 103.5 seconds, to maintain the settled
headway, the same amount of traction force f_d = C_1 is applied to the following
vehicle. To illustrate in detail the relative speed and relative displacement change
during the "foot off" and constant headway behavior, Figures 2.8 and 2.9 zoom
in on this time interval.
What if we have brakes? In the second headway simulation scenario, we investigate the use of brakes. In the first scenario, we settled and maintained the headway distance at approximately 9 meters. If we use the brakes as friction forces additional to the road friction, we can get closer to the leading vehicle and then apply the brakes to avoid a collision and to increase the headway (relative displacement) again.

We use the same amount of traction or drive force as in the second simulation scenario, but we take the foot off the gas pedal at t = 116 seconds, when the displacement to the leading vehicle has decreased to 1.3 meters. As a human driver would, we first take our foot off the gas pedal, and then we start applying brake force at t = 117
Figure 2.8 The time responses of relative displacement zoomed in at the time interval of foot off
gas pedal and constant headway.
seconds, as plotted in Figure 2.10. A 1-second delay is added to model the driver's foot displacement from the gas pedal to the brake pedal. Applying brake force until
Figure 2.9 The time responses of relative displacement versus relative speed zoomed in at the time
interval of foot off gas pedal and constant headway.
Figure 2.10 The time responses of relative displacement versus relative speed. The brakes are ap-
plied during the time interval t = 117 seconds and t = 120 seconds.
Figure 2.11 The time responses of relative displacement versus relative speed zoomed in the time
interval of foot off gas pedal and when brakes are applied.
t = 120 seconds, the relative position is increased, as plotted in Figure 2.11. If the brake forces were kept on, the relative position would continue to increase. Therefore, to maintain the headway distance between the two vehicles, drive force is applied again to the subject vehicle at t = 121 seconds, in the same amount as applied to the leading vehicle. Once again, a one-second time delay is introduced to switch the driver's foot from the brake pedal to the gas pedal.
In Figure 2.12, the time responses of relative displacement are compared for the case of using additional friction brakes and the case of road friction alone to maintain headway. In the case of braking, the subject vehicle can get closer, because the applied friction brake forces decelerate the vehicle model faster than road friction alone.
In Figure 2.13, the time responses of relative displacement versus relative speed are plotted. The applied brake forces change the sign of the relative speed between the two vehicles. Approaching the leading vehicle, the follower, subject to a higher drive force, is decelerated by applying brake forces, and the relative position, or headway, is increased. To maintain or settle the headway (i.e., at 9 meters, as in both headway maintenance scenarios), the same amount of drive force is applied to stabilize the relative speed at zero (i.e., at the same speed).
Figure 2.12 Comparisons of the scenarios when the brakes are applied and when the vehicle model
motion dynamics are only affected by the road friction while maintaining larger headway.
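The headway scenario can be reproduced with a few lines of simulation. The sketch below integrates the point mass models of the leader and follower with Euler steps; the mass, damping coefficient, drive forces, and switching times are illustrative assumptions and are not calibrated to reproduce the 9-meter headway shown in the figures.

```python
def simulate_headway(m=1000.0, sigma=50.0, c1=1000.0, c2=1010.0,
                     dt=0.01, t_end=200.0):
    """Point mass car following, m*xdd + sigma*xd = f. The leader is
    driven by constant force c1; the follower by c2 > c1, so it slowly
    closes in. At t = 100 s the follower's drive force is cut to zero
    ("foot off the gas"), and at t = 103.5 s force c1 is applied to hold
    the headway. All parameter values are illustrative assumptions."""
    x1, v1 = 30.0, 0.0          # leader starts 30 m ahead
    x2, v2 = 0.0, 0.0           # follower
    gaps = []
    for i in range(int(t_end / dt)):
        t = i * dt
        f2 = c2 if t < 100.0 else (0.0 if t < 103.5 else c1)
        v1 += dt * (c1 - sigma * v1) / m
        v2 += dt * (f2 - sigma * v2) / m
        x1 += dt * v1
        x2 += dt * v2
        gaps.append(x1 - x2)    # relative displacement (headway)
    return gaps, (v1, v2)
```

Running this reproduces the qualitative behavior of the figures: the headway shrinks while the follower drives harder, grows during the coasting interval, and settles at a constant value once both vehicles receive the same drive force, with the relative speed stabilizing at zero.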
ẋ = Vs cos θ
ẏ = Vs sin θ    (2.6)
θ̇ = u

where x represents the position of the point mass vehicle model in the longitudinal direction, y represents the position of the point mass vehicle model in the lateral direction, θ denotes the angle between the heading of the model and the longitudinal direction, and u denotes the commanded steering wheel turn rate (rad/sec).
Note that this model assumes the speed to be constant. In fact, if the angle θ is zero, traveling along the x direction only, we end up with a first-order differential equation in x, contrary to (2.1).
We consider fixed speed (Vs = constant). We simulate a lane change maneuver in Figure 2.14. The wheel steering angle is applied at t = 5 seconds and the point mass vehicle model's motion is plotted. Changing lanes is accomplished when the position of the vehicle model is shifted from the center of the left lane to the center of the right lane. The lane width is chosen to be 3.25 meters; therefore the left lane center is placed at 1.625 meters and the center of the right lane at 4.875 meters, as illustrated in Figure 2.15.
The wheel steering angle, θ, and the wheel steering angular rate, u, needed to accomplish the lane change maneuver are plotted in Figure 2.16. A double lane change maneuver is accomplished by a right lane change operation followed by a left lane change operation (i.e., shifting the center of gravity of the vehicle model from the center of the left lane to the center of the right lane and vice versa). Displacement in both the longitudinal and the lateral directions is plotted in Figure 2.17. The wheel steering angle, θ, and wheel steering angular rate, u, needed to accomplish the double lane change maneuvering task are plotted in Figure 2.18.
Figure 2.14 The time responses of lane change maneuvering. Displacement in the longitudinal
direction is compared with displacement in the lateral direction.
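The lane change under the kinematic model (2.6) can be simulated with an open-loop steering-rate pulse pair: +u0 for T seconds, then −u0 for T seconds, which returns the heading to zero. As an illustrative assumption (not values from the text), the amplitude and duration below are chosen so that the small-angle lateral shift Vs·u0·T² equals one 3.25-meter lane width.

```python
import math

def lane_change(vs=10.0, u0=0.325, T=1.0, dt=0.001,
                t_start=5.0, t_end=10.0):
    """Open-loop lane change on model (2.6): a steering-rate pulse +u0
    for T seconds followed by -u0 for T seconds. For small angles the
    lateral shift is about vs*u0*T**2 (= 3.25 m for the assumed values);
    the exact shift is slightly less because of the sin(theta) term."""
    x = y = theta = 0.0
    n1 = int(round(t_start / dt))            # pulse start
    n2 = n1 + int(round(T / dt))             # sign flip
    n3 = n2 + int(round(T / dt))             # pulse end
    for i in range(int(round(t_end / dt))):
        u = u0 if n1 <= i < n2 else (-u0 if n2 <= i < n3 else 0.0)
        x += dt * vs * math.cos(theta)       # Euler integration of (2.6)
        y += dt * vs * math.sin(theta)
        theta += dt * u
    return x, y, theta
```

Because the two pulses cancel exactly, the final heading is zero; chaining a mirrored pulse pair gives the double lane change of Figures 2.17 and 2.18.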
Figure 2.15 The time responses of displacement in the longitudinal and lateral direction.
Figure 2.16 The time responses of steering wheel angle and steering wheel angle rate.
Both the lane change and the corner-turning operations can be conceived as open-loop, that is, preplanned operations. As in any system, the main distinction between open and closed loop is related to robustness. The open-loop examples provided work fine if certain assumptions are met exactly. For the lane change, these could be assumptions that our car is exactly in the middle of the lane, that the lane is a precise width that we know beforehand, that the car responds exactly as its model predicts, that no other inputs exist, and so forth. Otherwise, we shall have to resort to closed-loop commands; to be able to generate those closed-loop commands, we need to accomplish sensing and tracking of the roadway. A preplanned operation example is given next.
Figure 2.17 The time responses of displacement in both of the longitudinal and the lateral direc-
tion for double lane change maneuvering.
Figure 2.18 The time responses of steering wheel angle and steering wheel angle rate.
Using the same point mass model on the plane, tracking the infrastructure is simulated by using the open-loop steering wheel turn rate command or sequences of it. The tracking scenario is a road with a curvature starting at 30 meters in the longitudinal direction and turning through a π/4 (rad) angle. In the first scenario, the point mass model is steered to track the road at 30 meters in the longitudinal direction (i.e., at the moment the point mass displacement reaches 30 meters, the open-loop steering wheel turn rate, u, is applied). Before the corner, the steering wheel turn rate is kept constant at u = 0, leading to displacement in the longitudinal direction only.
The turning simulation scenario is repeated for a constant longitudinal speed of Vs = 10 m/sec and two different open-loop wheel turn rates. When the point mass model is steered with u = 1 rad/sec at the corner situated at 30 meters in the longitudinal direction, the displacement of the point mass model in the longitudinal and lateral directions is not adequate to track the turning road (see Figure 2.19: at the tracking instance, when the curvature starts and the 1 rad/sec turn rate is applied, the point mass model displacement in the longitudinal direction is 5 meters, and the fictive lane boundary on the right side is exceeded). The steering angle of the point mass model (the time integral of the turn rate) is applied over the time interval [3, 4] seconds and is plotted with a dash-dot line in Figure 2.20.

To assure open-loop tracking, the turn rate command value may be increased. In the second case, u = 2 rad/sec is applied at the corner for the time duration t = [3, 3.5] seconds (a half-second shorter than in the lower turn rate scenario); this steering angle waveform is plotted with a solid line in Figure 2.20. The turning radius of the point mass maneuver is reduced, but the tracking maneuver is still not successfully achieved, due to road lane departure on the turning side where the model is supposed to track the lane boundary. Therefore, additional command sequencing may be necessary to avoid lane departure after turning; modifying the turn rate with respect to time, lane departure after turning may be avoided by the input turn rate command sequence plotted with a dashed line in Figure 2.20.
Figure 2.19 The time responses of the road tracking maneuver. Position in the longitudinal direction versus position in the lateral direction is plotted for u = 1 rad/sec, u = 2 rad/sec, and the time-varying u.
Figure 2.20 The waveform of the steering angle (its time derivative gives the turn rate).
ẋ = Vs cos θ
ẏ = Vs sin θ    (2.7)
θ̇ = u
u = K (2.8)
Simulation to test such a lane tracking algorithm relies on a representation of the curved road. Such a representation can be a data file (a point-by-point representation of the curve), a polynomial, or another closed-form expression.
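A closed-loop version of the tracking task can be sketched on model (2.7). Since the error signal in (2.8) is not fully reproduced here, the sketch below assumes, as an illustration, a proportional law on the lateral offset plus a heading damping term; the gains are not from the text.

```python
import math

def track_lane(y_ref, k_y=0.5, k_theta=2.0, vs=10.0, dt=0.01, t_end=30.0):
    """Closed-loop steering on model (2.7): the turn rate is generated
    by feedback u = k_y*(y_ref - y) - k_theta*theta. This gain structure
    and the gain values are assumptions for illustration only."""
    x = y = theta = 0.0
    for _ in range(int(t_end / dt)):
        u = k_y * (y_ref - y) - k_theta * theta   # closed-loop command
        x += dt * vs * math.cos(theta)            # Euler step of (2.7)
        y += dt * vs * math.sin(theta)
        theta += dt * u
    return x, y, theta
```

Unlike the open-loop pulses above, this law converges to the lane center from any reasonable initial offset, which is exactly the robustness argument made for closed-loop commands.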
We shall now move on to a very specific curve, namely, a 90° corner. Turning a corner is an important maneuvering task. Reference establishment is shown in Figure 2.22. A two-step reference trajectory is proposed to accomplish the corner turn.

The point mass vehicle, coming to the crossroad from the left side, is going to turn right. Reference establishment for turning right perpendicularly is constituted in two steps:
Figure 2.22 Reference establishment for the corner turn: corner at the origin (0,0), lane width d, reference point (x0, y0).
1. Lane keeping: Coming up to the corner while maintaining the reference position in the lateral direction, denoted by y0, and increasing the reference position in the longitudinal direction, denoted by x0 in Figure 2.22; the reference is established to approach the corner:
x0 ≤ −d/2
y0 = −d/2    (2.9)
2. Approaching the center of the corner: Turning the corner, a new reference is established satisfying a fixed reference position in the longitudinal direction, denoted by x0 = −d/2, and decreasing the reference position in the lateral direction, denoted by y0, from −d/2 down to the final reference value yf:

x0 = −d/2
yf ≤ y0 ≤ −d/2    (2.10)
In Figure 2.23, the turning maneuver task is illustrated. In this plot, the initial reference for the lateral position is chosen constant and equal to the center of the lane, y0 = −1.625 meters. Displacement in the longitudinal direction, denoted by x, increases while approaching the corner. To achieve the turning task around the corner, a second step of reference generation is established as a constant longitudinal position, x0 = −1.625 meters. The new reference means a
Figure 2.23 Comparison of displacement during the corner maneuver. Displacement in the longitudinal direction versus the lateral direction is plotted for the case when speed is constant and when it is reduced.
Figure 2.24 The time responses of the wheel angle and rate for turning corner maneuvering. The
time responses of the speed are plotted for the case when speed is constant and when it is reduced
to improve corner tracking.
smooth reference generation, which connects the desired final point with the initial point subject to some turning constraints.
C = 1/R    (2.11)

C = C0 + (dC/dl)·l = C0 + C1·l    (2.12)

Thus with constant speed and steering wheel turn rates, ideally there will be no deviation from the preferred path [11]. It can be shown that C1 = 1/A² is piecewise constant, where A is the clothoid parameter. For small angular changes we can obtain the approximation,
Y = C0·l²/2 + C1·l³/6    (2.13)

BP(t) = At³ + Bt² + Ct + P1,  0 ≤ t ≤ 1    (2.14)

C = 3(P2 − P1)    (2.17)
Considering car maneuver control over a sequence of waypoints, the initial point P1 may be defined by the current car position, and the next point P2 is chosen at distance D1 in the direction of the heading from the current position P1. Toward the destination point P4, the intermediate point P3 is chosen at distance D2 while assuring an appropriate yaw angle. The offset distances D1 and D2, which may be tuned as the control points to generate a feasible trajectory, are shown in the illustrative path generation example in Figure 2.26.
Figure 2.26 The smooth and continuous path generation between the initial and desired goal point by using Bézier curve techniques.
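The control-point construction described above can be sketched directly. The cubic Bézier below is written in the Bernstein form, which is equivalent to the power form (2.14); P2 and P3 are offset by D1 and D2 along the start and goal headings, following the choice described in the text. The function and argument names are ours.

```python
import math

def bezier_path(p1, heading1, p4, heading4, d1, d2, n=50):
    """Cubic Bezier between two waypoints: P2 is offset a distance d1
    from P1 along the start heading, and P3 is offset d2 back from P4
    along the goal heading, so the curve leaves P1 and arrives at P4
    with the desired headings. Returns n+1 sampled (x, y) points."""
    p2 = (p1[0] + d1 * math.cos(heading1), p1[1] + d1 * math.sin(heading1))
    p3 = (p4[0] - d2 * math.cos(heading4), p4[1] - d2 * math.sin(heading4))
    path = []
    for i in range(n + 1):
        t = i / n
        w0 = (1.0 - t) ** 3                  # Bernstein weights
        w1 = 3.0 * (1.0 - t) ** 2 * t
        w2 = 3.0 * (1.0 - t) * t ** 2
        w3 = t ** 3
        path.append((w0 * p1[0] + w1 * p2[0] + w2 * p3[0] + w3 * p4[0],
                     w0 * p1[1] + w1 * p2[1] + w2 * p3[1] + w3 * p4[1]))
    return path
```

Tuning d1 and d2 trades off how long the path holds the initial and final headings against how sharply it must curve in between, which is the feasibility knob mentioned above.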
• Defining a finite set of states and transitioning through them. Each state leads to a set of feedback gains and/or reference signals. The model of the system is a combination of the vehicle dynamics jointly with a state machine, leading to a hybrid system.

• Defining a functional hierarchy, which under certain conditions again leads to a different set of feedback gains and reference signals. The hybrid system model is probably hidden, but still exists.
References

[1] Ioannou, P. A., and C. C. Chien, "Autonomous Intelligent Cruise Control," IEEE Transactions on Vehicular Technology, Vol. 42, No. 4, 1993, pp. 657–672.
[2] U.S. Department of Transportation, Compendium of Executive Summaries from the Maglev System Concept Definition Final Reports, DOT/FRA/NMI-93/02, pp. 49–81.
[3] Özgüner, Ü., C. Stiller, and K. Redmill, "Systems for Safety and Autonomous Behavior in Cars: The DARPA Grand Challenge Experience," Proceedings of the IEEE, Vol. 95, No. 2, 2007, pp. 397–412.
[4] Robotic Systems Technology, Demo III Experimental Unmanned Vehicle (XUV) Program: Autonomous Mobility Requirements Analysis, Revision I, 1998.
[5] Redmill, K., "A Simple Vision System for Lane Keeping," IEEE Intelligent Transportation Systems Conference, Boston, MA, November 1997.
[6] Zhang, W.-B., "National Automated Highway System Demonstration: A Platoon System," IEEE Intelligent Transportation Systems Conference, Boston, MA, November 1997.
[7] Tan, H.-S., R. Rajamani, and W.-B. Zhang, "Demonstration of an Automated Highway Platoon System," Proceedings of the American Control Conference, Philadelphia, PA, June 1998.
[8] Shladover, S., "PATH at 20: History and Major Milestones," IEEE Transactions on Intelligent Transportation Systems, Vol. 8, No. 4, December 2007, pp. 584–592.
[9] Redmill, K., and Ü. Özgüner, "The Ohio State University Automated Highway System Demonstration Vehicle," Journal of Passenger Cars, SP-1332, Society of Automotive Engineers, 1999.
[10] Farkas, D., et al., "Forward Looking Radar Navigation System for 1997 AHS Demonstration," IEEE Conference on Intelligent Transportation Systems, Boston, MA, November 1997.
[11] Dickmanns, E. D., and B. D. Mysliwetz, "Recursive 3-D Road and Relative Ego-State Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No. 2, February 1992, pp. 199–210.
[12] Prautzsch, H., "Curve and Surface Fitting: An Introduction," SIAM Review, Vol. 31, No. 1, 1989, pp. 155–157.
[13] Özgüner, Ü., and K. Redmill, "Sensing, Control, and System Integration for Autonomous Vehicles: A Series of Challenges," SICE Journal of Control, Measurement, and System Integration, Vol. 1, No. 2, 2008, pp. 129–136.
CHAPTER 3
System Architecture and Hybrid System Modeling
Figure 3.2 The architecture for the OSU Team car used in Demo 97.
3.1.2.2 Planning
Planning and path generation can take a number of forms depending on the required application. Between the starting and stopping locations, the vehicle was free to travel anywhere in the defined corridor, and there were no high-level decisions to be made. The desired path could be generated by fitting smooth functions through the established waypoints, and deviations from this desired path were generated as a reaction to local sensing information.
For the Urban Challenge, however, the behavior was defined as an ordered series of goal locations the vehicle was to attain, starting from any location, and the route had to be planned in real time over a map database defining a road network as well as parking lots (zones) and parking spaces. For this task, there were often multiple possible routes, and an optimal route had to be identified based on estimates of travel time. The planning software also required the capability to remember blocked or impassable roads so that, if an initial plan failed, a new plan could be identified.
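The replanning behavior described above can be sketched as a shortest-travel-time search that skips edges remembered as blocked. This is an illustrative Dijkstra implementation, not the team's actual planner; the graph encoding is our assumption.

```python
import heapq

def plan_route(graph, start, goal, blocked=frozenset()):
    """Dijkstra over a road network with estimated travel times as edge
    weights. graph: {node: [(neighbor, travel_time), ...]}. Edges in the
    'blocked' set are skipped, so replanning after a failed plan simply
    reruns the search with the blocked-road memory added."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:                     # reconstruct the route
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path))
        if d > dist.get(node, float("inf")):
            continue                         # stale heap entry
        for nxt, w in graph.get(node, []):
            if (node, nxt) in blocked:
                continue                     # remembered blockage
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    return None                              # no route exists
```

Calling the planner again with the failed edge added to `blocked` yields the detour route, mirroring the "remember blocked roads, then replan" requirement.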
Figure 3.3 The architecture of ACT, the OSU urban driving car used in the DARPA UC.
Vehicle Localization
A key element of autonomous vehicle technology is vehicle localization. All aspects
of the system, from sensor processing and fusion to navigation and behavioral deci-
sion making to low-level lateral and longitudinal control, require accurate vehicle
position, velocity, and vehicle heading, pitch, and roll information at a fairly high
update rate. Providing this information requires the use of multiple sensors: multiple Global Positioning System (GPS) receivers, augmented with wide-area differential corrections, for redundancy; inertial measurement units (IMU); and the dead reckoning sensors (wheel speeds, transmission gear and speeds, throttle, brake, and steering wheel position) provided on the vehicle. It also requires a validation system to eliminate sensor errors, especially GPS-related step-change events caused by changes in differential correction status or the visible satellite constellation. To account for sensor errors, noise, and the different update rates of each sensor, an extended Kalman filter is applied to generate the required state estimates.
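As a greatly simplified, single-axis stand-in for the extended Kalman filter described above, the following sketch fuses GPS-like position fixes with a constant-velocity prediction step; the full vehicle filter is nonlinear and multi-sensor, and the noise intensities here are assumptions.

```python
def kalman_1d(zs, dt=0.1, q=0.5, r=4.0):
    """One-axis constant-velocity Kalman filter: state [position,
    velocity], position fixes zs as measurements (H = [1, 0]).
    q and r are assumed process/measurement noise intensities."""
    x = [zs[0], 0.0]                       # state estimate
    p = [[r, 0.0], [0.0, 10.0]]            # estimate covariance
    out = []
    for z in zs:
        # predict with F = [[1, dt], [0, 1]]: x <- F x, P <- F P F' + Q
        x = [x[0] + dt * x[1], x[1]]
        p00 = p[0][0] + dt * (p[0][1] + p[1][0]) + dt * dt * p[1][1] + q
        p01 = p[0][1] + dt * p[1][1]
        p10 = p[1][0] + dt * p[1][1]
        p11 = p[1][1] + q
        # update with position measurement z
        s = p00 + r                        # innovation covariance
        k0, k1 = p00 / s, p10 / s          # Kalman gain
        innov = z - x[0]
        x = [x[0] + k0 * innov, x[1] + k1 * innov]
        p = [[(1.0 - k0) * p00, (1.0 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
        out.append(x[0])
    return out
```

The same predict/update cycle, with a nonlinear measurement model and additional states for heading, pitch, and roll, underlies the vehicle localization filter; validated dead reckoning data drives the prediction between GPS fixes.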
• For off-road applications, compensation for vibration and other vertical and rolling motions needs to be done in software or hardware, for example using the IMU and sensor data to specifically generate a ground plane that can be referenced while doing sensor validation and fusion. Sensor adjustments are also required to deal with dust, rain, and changing lighting conditions.

• For domains where there are many moving obstacles (i.e., urban applications), one may need to track individual obstacles at all times.

• Specific operations (parking, dealing with intersections, entering/exiting highways, and so forth) may use totally separate sensing and sensor architectures tailored to the task.
Situations may involve road blockage, in which case the vehicle might carefully perform a U-turn, park in a parking space, or deal with dangerous behavior from other vehicles. All these situations must be identified and evaluated, and the resulting conclusions transmitted to the high-level controller in order for the vehicle to operate properly.
The path planning software provides information related to the current opti-
mal path plan. Starting from the path, the situation analyzer can identify the loca-
tion of the road and build a road model constructed from polygons derived from
a spline curve fitting the waypoints defining the road shape. Such a road model
design is particularly suitable for both accuracy and implementation purposes. In
order to reduce computational costs and complexity, only the situations related to
the current metastate or substates, as provided by the high-level control software,
are checked.
Command Interface
In a two-level control hierarchy, as shown in Figure 3.4, the low-level control receives operational instructions from the high-level control module. These instructions take the forms described below.
Longitudinal Control
The interface and control of vehicle actuation is achieved by having a drive-by-wire
car. Our experience has been that a simple control algorithm, for example a set of
PID controllers, is adequate to generate a virtual torque command to achieve the
commanded speed, and a state machine is used to select between the use of throttle,
active braking, or engine idle braking. Speed commands are modified to constrain
the acceleration and jerk of the vehicle to preset comfortable limits. There may also
be emergency deceleration modes that are less comfortable.
Urban driving, in contrast to highway or off-road driving, requires the vehicle to execute a precise stop at predefined locations, for example the stop line of an intersection. To accomplish this, the low-level control determines the distance from the vehicle's current position to a line drawn through the specified stopping point and perpendicular to the vehicle's path of travel, taking into consideration the distance from the front bumper of the vehicle to its centroid. The speed of the vehicle is controlled to follow a specified, possibly nonlinear, deceleration trajectory.
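The two pieces of the longitudinal controller can be sketched as a PI speed loop producing a virtual torque command, paired with a speed command that tracks a constant-deceleration profile toward the stop line. The gains, limits, and comfort deceleration are illustrative assumptions, and the actual implementation also includes the throttle/brake/idle state machine described above.

```python
import math

class PISpeed:
    """PI speed controller producing a virtual torque command, with a
    simple anti-windup clamp. Gains and limits are illustrative."""

    def __init__(self, kp=120.0, ki=15.0, u_max=2000.0):
        self.kp, self.ki, self.u_max = kp, ki, u_max
        self.integral = 0.0

    def step(self, v_cmd, v_meas, dt):
        e = v_cmd - v_meas
        self.integral = max(-self.u_max,
                            min(self.u_max, self.integral + self.ki * e * dt))
        return max(-self.u_max, min(self.u_max, self.kp * e + self.integral))


def stop_speed_command(dist_to_stop, v_cruise, a_comfort=1.5):
    """Speed command for a precise stop: follow the constant-deceleration
    profile v = sqrt(2*a*d) toward the stop line, capped at cruise speed."""
    if dist_to_stop <= 0.0:
        return 0.0
    return min(v_cruise, math.sqrt(2.0 * a_comfort * dist_to_stop))
```

Feeding `stop_speed_command` into `PISpeed.step` each control cycle yields a smooth, bounded-deceleration approach that reaches zero speed exactly at the stop line.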
Lateral Control
The path that the vehicle is to follow is specified as a set of control points. The lat-
eral controller identifies both the current location of the vehicle and the look-ahead
point (a prespecified distance ahead of the vehicle along its lateral axis) and extracts
a subset of control points closest to each location. Constant radius circles are fitted
to the points in each subset and these circles are used to compute the vehicle offset
distances from the path and to estimate desired yaw rates. Each subset of points
also defines a desired yaw angle for the vehicle. The offset distances, yaw angle
error measurements, and desired yaw rates can be used to generate a feedback signal for the steering controller. There are a number of algorithms that can be used in this control loop, and a simple PID controller with fixed gains is not enough to cover all possible driving and path-shape scenarios; the required gains are speed dependent and turn-radius dependent.
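The circle-fitting step can be illustrated on three consecutive path points: the curvature of the circle through them, multiplied by the vehicle speed, gives a desired yaw rate. This is a sketch of the geometric idea only, not the vehicle's actual controller, which fits circles to larger subsets of control points.

```python
import math

def path_curvature(p0, p1, p2):
    """Signed curvature of the circle through three consecutive path
    points, computed as 2 * cross / (|p0p1| * |p1p2| * |p0p2|).
    Positive means a left (counterclockwise) turn; the desired yaw
    rate is then speed * curvature."""
    ax, ay = p1[0] - p0[0], p1[1] - p0[1]
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    cross = ax * by - ay * bx            # twice the signed triangle area
    a = math.hypot(ax, ay)
    b = math.hypot(bx, by)
    c = math.hypot(p2[0] - p0[0], p2[1] - p0[1])
    if a * b * c == 0.0:
        return 0.0                       # coincident points
    return 2.0 * cross / (a * b * c)
```

Collinear points give zero curvature (a straight segment), and the sign tells the steering controller which way the path bends at the look-ahead point.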
3.2.1 Discrete Event Systems, Finite State Machines, and Hybrid Systems
The high-level aspects of an intelligent vehicle can be modeled as a discrete event system. In this section we develop a modeling approach for the discrete event system (DES). We represent the DES with a finite state machine (FSM) and then couple the FSM with a continuous time dynamic system to create a hybrid system. One does not always need to go through the full formal development introduced here; indeed, in many cases it is quite possible to directly develop an FSM.
g : X → P(E)

which specifies which events are enabled at time t, where P(E) denotes the power set of E. The DES state transition function is given by a set of operators

fE : X → X

where E is a subset of the enabled events given by g. The transition function specifies the next state when the event(s) in E occur.

Alternatively, the state transitions can be shown on a graph, where the nodes represent the states and the directed arcs, labeled with the individual events e in E, point from x to fE(x).
Now let us consider the case where the events e can either be generated externally or be generated by a separate continuous time dynamic system. The interface from the continuous time system to the DES is described by one function, and the DES in turn affects the continuous time system through a second interface function (see Figure 3.4). Further details and examples in a nonvehicle context can be found in [5].
In the following sections, we shall first look at ACC, formulate it as a DES, and model it in terms of a finite state machine. We shall then consider first an obstacle avoidance operation and then a special bus service situation as hybrid systems.
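The coupling just described can be sketched as a simulation loop: integrate the continuous state, generate events through the interface function, and step the FSM through its transition table; the discrete state in turn selects the active dynamics. The names below are generic, not from the text.

```python
def run_hybrid(fsm, dynamics, interface, x0, q0, dt, steps):
    """Minimal hybrid-system loop. fsm: {(state, event): next_state};
    dynamics(q, x) returns xdot for the dynamics active in state q;
    interface(x) returns the events generated by the continuous state."""
    x, q = x0, q0
    trace = [(q, x)]
    for _ in range(steps):
        x = x + dt * dynamics(q, x)        # continuous-time Euler update
        for e in interface(x):             # interface: x -> events
            q = fsm.get((q, e), q)         # DES transition f_E
        trace.append((q, x))
    return trace
```

A toy ACC use: let x be the gap to the lead car, shrink it in a "cruise" state, and switch to a "follow" state (which holds the gap) once the interface reports the gap has dropped below a threshold. This is exactly the structure used for the obstacle avoidance and bus examples that follow.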
Figure: Finite state machines for cruise control and adaptive cruise control, with states Manual, Cruise, and Follow; transitions are triggered by events such as Set, Brake, Speed up, Return, and "distance right, speed matched."
Here we call an object an obstacle based on its speed on the road: if an object has a speed less than a certain threshold, vmin, our car will regard it as an obstacle and try to avoid it; otherwise, the car will just follow it.

A complete obstacle avoidance scenario is shown in Figure 3.7. As this figure illustrates, the whole obstacle avoidance procedure is divided into five stages as follows:
1. In this stage, we assume that there is no object ahead (within distance d0).
Thus, car 1 runs along the right lane of a two-lane road at speed v1. When-
ever an object is found within distance d0, it enters stage 2.
2. In this stage, car 1 checks the speed of the object to see whether it is an
obstacle or not. At the same time, car 1 still keeps running on the right lane,
but it may slow down little by little as it is approaching the object. When
the distance between car 1 and the object decreases down to d1, it will ei-
ther enter stage 3 if the detected object is considered as an obstacle, or just
follow the object ahead, which will lead car 1 into another stage that is not
shown in this scenario.
3. In this stage, car 1 turns left and changes to the left lane if it considers the
object ahead as an obstacle. When the left lane changing is finished, car 1
enters stage 4.
4. In this stage, car 1 runs on the left lane until it has totally passed the ob-
stacle. Then car 1 enters stage 5.
5. In this stage, car 1 turns right and changes back to the right lane. After that,
car 1 switches back to stage 1.
Based on the assumptions and analysis above, we can design the obstacle avoidance system as shown in Figure 3.8. The system is clearly a hybrid system. The continuous time system represents the acceleration system of car 1, which can switch among several dynamic models according to car 1's current state. These models include standard (normal) running, following, approaching (object observed), left lane changing, passing, and right lane changing. The switching among these models is governed by the discrete event part of the system.
Figure 3.7 The five stages of the obstacle avoidance scenario: car 1 approaches an obstacle, with detection distances d0 and d1.

Figure 3.8 The obstacle avoidance system: a continuous-time acceleration system coupled to the discrete event controller through an interface and signal processing.

Car 1 is assumed to be equipped with the following sensors:
1. A vision system based on cameras in the front to detect the lane markers, which is used to get the lateral position of the car on the road;
2. An obstacle detection system based on radar in the front to detect the objects ahead, which is used to get the distances and velocities of the objects ahead;
3. A side-looking radar system on each side of car 1, which is used to check if car 1 has passed the obstacle.
The outline of the algorithm is represented by the finite state machine in Figure 3.9. The states "normal," "object observed," "right lane change," "passing," and "left lane change" correspond to the stages outlined in Figure 3.7, respectively. The "follow" state is for the case when there is a low-speed vehicle ahead that is not considered an obstacle. In this case, car 1 is
Figure 3.9 Finite state machine for double lane change.
assumed to be controlled by its ACC system and following the slow car ahead.
Table 3.2 explains the events in the finite state machine, as well as the interface
conditions to generate the events. Table 3.3 lists the parameters and variables used
in the system.
As mentioned before, the interface consists of two functions: one generating events for the DES from the continuous time signals, and one mapping the DES state back to the continuous time system. The thresholds specifying the first have been listed in Table 3.2; the second is shown in Table 3.4.
Figure 3.10 The map for the bus: stops A, B, C, and D, with parallel segments BC1 and BC2 between B and C.
F = mẍ + σẋ
If we further design an automatic cruise control system for the bus, the dynamics of the closed-loop system are determined by the desired velocity vd only. The continuous time subsystem then simply operates in one of two modes.
Based on the idea above, the continuous time subsystem can be designed as in Figure 3.14.

The discrete time subsystem for each bus consists of two states: one for the position of the bus on the map and the other for the dynamic status of the bus, as shown in Tables 3.7–3.9. Each state evolves according to a finite state machine, as shown in Figures 3.15 and 3.16.
The events "stop" and "go" here are generalized events for collision avoidance, generated based on both the output of the continuous time system (x) and the output of the sensor system (Po, diro). In our problem these events are generated according to the truth table in Table 3.10. Only the conditions for Stopi to be true and Goi to be false are listed in this table; under all other conditions, Stopi is false and Goi is true.

Figure 3.14 The system diagram for the continuous time subsystem.
The whole bus-road system has been simulated in MATLAB. Some experimental results are shown in Figure 3.17.

In the following experiment, one bus starts from x = 0.2 mile (on AB), running toward D, and another starts from x = 13.0 miles (on CD), running toward A. The two buses then just move back and forth between stops A and D.

Several parameters in the experiment are chosen as follows:

M = 20 tons.
V0 = 80 mph (for bus 1) and 70 mph (for bus 2).
Figure 3.15 The finite state machine for state Pi.
K1 = 10,000 ton/hr.
The sampling frequency is 1 Hz.
The distances between AB, BC, and CD are all 5 miles.
The range of the sensor is 5.0 miles (in order to guarantee no collisions).
Table 3.10 The Truth Table for Events Stop and Go for the Train

Pi     diri    Po    diro    Stopi    Goi
BC1     1      CD    −1      True     False
BC1    −1      AB     1      True     False
BC2     1      CD    −1      True     False
BC2    −1      AB     1      True     False
Figure 3.17 Simulation results: the velocities of the two trains and the segments they occupy (AB = 1, BC1 = 2, BC2 = 3, CD = 4), plotted over time.
We can see, as shown in Figure 3.17, that with our control system the two
buses can run on the route without any collisions.
Figure 3.18 Two autonomous cars developed by OSU in Demo 97 following a radar-reflecting
stripe and undertaking a pass.
For Demo 97, a radar-reflecting stripe [6] was advocated for location information with respect to the lane, indicating the distance from the center of the roadway and the relative orientation of the car. We will mention other possible technologies later in this book. It has to be pointed out that precision GPS and maps were not commonly available at that time. Today, it is assumed that precision maps are available to the level of identifying individual lanes, and GPS reception provides precise location information in real time.
Demo 97, one of the early full-scale AHS demonstrations, was held on a 7.5-
mile segment of highway I-15 in San Diego. This segment was a segregated two-
lane highway normally used for rush-hour high-occupancy vehicle traffic. Traffic
flowed in the same direction in both lanes, and there were no intermediate entry
and exit points. The curvature of the highway lanes was benign and suited for high
speed (70 mph) driving, and other traffic was minimal to nonexistent. A general
AHS would presumably have merge and exit lanes, but the single entry-exit aspect
of Demo 97 made it a single activity: drive down the lane and possibly handle
simple interactions with other vehicles. We shall subsequently call this behavior
a metastate. Dealing with interchanges produced by entry and exit lanes would
require other metastates.
The DARPA Grand Challenges of 2004 and 2005 were both off-road races. As
such, the only behavior and thus the only metastate required would be path fol-
lowing with obstacle avoidance from point A to point B. However, since there is no
path or lane that can be discerned from a roadway, the only method of navigation is
to rely on GPS- and INS-based vehicle localization [7, 8] and a series of predefined
waypoints. Obstacle avoidance would be needed, as in an AHS, although in the
less structured off-road scenario greater freedom of movement and deviations from
the defined path are allowed. The Grand Challenge race rules ensured that there
were no moving obstacles and that different vehicles would not encounter each other in
motion. General off-road driving would of course not have this constraint.
Finally, fully autonomous urban driving would introduce a significant number
of metastates: situations where different behaviors occur and different classes of decisions
need to be made. The DARPA Urban Challenge, although quite complex, did have
fairly low speed limits, careful drivers, and no traffic lights. Visual lane markings
were unreliable, and thus true to life, and the terrain was fairly flat, although some
areas were unpaved, generating an unusual amount of dust and creating problems
for some sensors. See Figure 3.19 for different types of vehicles used in the Grand
and Urban Challenges.
We will make the claim that the basic problem definition will include a series
of waypoints and a concept of lanes. Although lanes are obvious in highway sys-
tems and urban routes, it is reasonable to assume that off-road environments also
present a set of constraints that indicate the drivability of different areas and thus
provide the possibility of defining lanes.
We indicated in Chapter 1 that we would assume a basic setup with waypoints
and lane boundaries. The off-road situation where the feasible path is understood
to be a lane provides the simplest illustration of metastates (see Figure 3.20). A
highway configuration would have multiple lanes all headed in the same direction,
with standard lanes of equal width.
On the other hand, both AHS and urban automated driving scenarios need
concepts/tasks related to changing lanes. One possible metastate configuration is
shown in Figure 3.21.
The urban environment requires a much more complex state machine to repre-
sent the situations and control transitions [9]. Figure 3.22 illustrates the metastates
used in OSU-ACT. It has to be pointed out that Figure 3.22 hides many more sub-
states underneath, as compared to the metastates of Figure 3.20 (see [10]).
Figure 3.22 Metastates for the 2007 DARPA Urban Challenge situation (Start, Road, Intersection,
Zone, U-turn, Finish).
a car ahead. Once following the car ahead, we can consider a number of different
reactions if it changes speed.
We shall now introduce the concept of personality to represent a set of be-
havior patterns. This set will affect the decisions of the car. At this initial stage we
shall consider four classes:
1. Grandparent;
2. Teenager;
3. Jack;
4. The homicidal driver.
With the single-lane world constraint, the first three personalities do not pro-
vide much variation. The teenager may be more aggressive in its pursuit of the car
ahead. Other combinations can easily be analyzed. With many cars on the single
lane, it is obvious that eventually all traffic would accumulate behind the grandpar-
ent unless there is a homicidal driver. Assuming such a driver could reverse direc-
tion, this creates many unmanageable situations.
When we expand our world to two lanes and allow passing, even with the first
three personalities, a number of interesting situations arise. We will make a set of
assumptions about the personalities.
The last item above allows us to consider manual drivers together with automated
vehicles. Indeed, during our runs in the 1997 Technology Demonstration we
assumed a manual grandparent, an automated teenager, and Jack. It was
assumed that the grandparent could speed up, slow down, and even stop. He or
she would not do a sharp stop, change lanes, or speed more than a limit (less than
the teenager and Jack). The analysis of the scenario and the hybrid system model
is provided next.
CSS. These events are processed in the CSS (with filters and observers) and are
passed to the DES side through an interface.
Let the set of low-level events and requests be denoted by the set E (see Table
3.11). There is one discrete state variable X. See Table 3.12 for a list of those states.
The discrete system state transition function shows the low-level events passed
on to the DES side. Lateral and longitudinal states are listed in Tables 3.13 and
3.14.
the major task of building the bridge between the continuous state system and the
discrete state system within the hybrid model. Based on the suggested control law,
a continuous input is selected from a finite set of possible inputs to fully
automate the motion of the vehicle. The overall layout of the OSU system
is given in Figure 3.23 [9].
We now consider the finite state machine that will control the scenario. The
states are defined in Table 3.15 and the state machine is also provided in Figure
3.24.
The Demo 97 scenario provides an interesting example of different behavior in
hybrid systems where the threshold values in the interface are selected differently.
The so-called boredom factor, which adjusts how long a car will continue following
before starting a lane change so as to pass, is set differently in the two cars. As
intended, this results in the teenager passing, but Jack not passing, the grandparent.
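The role of such a threshold can be illustrated with a small sketch. The names, speeds, and threshold values below are invented for illustration; this is not the OSU implementation:

```python
from dataclasses import dataclass

@dataclass
class Personality:
    name: str
    boredom_threshold_s: float  # seconds of blocked following tolerated

def driving_event(p, seconds_following, desired_speed, lead_speed):
    """Discrete event passed from the interface to the state machine."""
    blocked = lead_speed < desired_speed
    if blocked and seconds_following > p.boredom_threshold_s:
        return "START_LANE_CHANGE"   # begin the pass
    return "KEEP_FOLLOWING"

teenager = Personality("teenager", boredom_threshold_s=5.0)
jack = Personality("jack", boredom_threshold_s=60.0)

# After 20 s behind the 50-mph grandparent, only the teenager passes.
print(driving_event(teenager, 20.0, 70.0, 50.0))  # START_LANE_CHANGE
print(driving_event(jack, 20.0, 65.0, 50.0))      # KEEP_FOLLOWING
```

The same continuous dynamics thus produce qualitatively different discrete behavior purely through the interface thresholds.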
Our Demo 97 scenario did not include the possibility of another car approach-
ing from the left lane, a standard situation that may lead to unsafe situations in the
real world. Normally, the lane-changing car would check for oncoming vehicles in
the left lane.
road following, rollback, and robotic operations states work when the conditions
are met.
To prevent the FSM from being stuck forever in states other than path-point
keeping and obstacle avoidance, a watchdog is introduced. When the vehicle
keeps still, or the FSM stays in some unwanted state for a certain period of
time and no promising progress is expected, the FSM and some modules are reset.
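One possible watchdog sketch follows; the state names and the 30-second timeout are illustrative assumptions, not the values used in the actual vehicle:

```python
import time

class FsmWatchdog:
    """Trip a reset if the FSM lingers too long in a non-productive state."""

    PRODUCTIVE = {"PATH_POINT_KEEPING", "OBSTACLE_AVOIDANCE"}

    def __init__(self, timeout_s=30.0, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock              # injectable clock for testing
        self.state = None
        self.entered = self.clock()

    def update(self, state):
        if state != self.state:         # state changed: restart the timer
            self.state = state
            self.entered = self.clock()
        stuck = (state not in self.PRODUCTIVE
                 and self.clock() - self.entered > self.timeout_s)
        return "RESET_FSM" if stuck else "OK"
```

Calling `update` once per control cycle with the current FSM state yields `RESET_FSM` only after the vehicle has lingered in an unwanted state past the timeout.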
Figure 3.24 The state machine for both the Teenager and Jack.
Figure 3.25 The FSM for ION participating in the second Grand Challenge (states include road
following, obstacle avoidance, and alarm response).
References

[1] Özgüner, Ü., and K. Redmill, "Sensing, Control, and System Integration for Autonomous
Vehicles: A Series of Challenges," SICE Journal of Control, Measurement, and System
Integration, Vol. 1, No. 2, March 2008, pp. 129–136.
[2] Redmill, K., and Ü. Özgüner, "The Ohio State University Automated Highway System
Demonstration Vehicle," SAE Transactions 1997: Journal of Passenger Cars, SP-1332,
Society of Automotive Engineers, 1999.
[3] Özgüner, Ü., C. Stiller, and K. Redmill, "Systems for Safety and Autonomous Behavior in
Cars: The DARPA Grand Challenge Experience," Proceedings of the IEEE, Vol. 95, No. 2,
February 2007, pp. 397–412.
[4] Lygeros, J., D. N. Godbole, and S. Sastry, "A Verified Hybrid Controller for Automated
Vehicles," Proceedings of the 1996 Conference on Decision and Control, Kobe, Japan, 1996,
pp. 2289–2294.
[5] Passino, K., and Ü. Özgüner, "Modeling and Analysis of Hybrid Systems: Examples," Proc.
1991 IFAC Symp. on DIS & 1991 IEEE International Symp. on Intelligent Control,
Arlington, VA, Aug. 1991.
[6] Özgüner, Ü., et al., "The OSU Demo '97 Vehicle," IEEE Conference on Intelligent
Transportation Systems, Boston, MA, November 1997.
[7] Chen, Q., Ü. Özgüner, and K. Redmill, "Ohio State University at the 2004 DARPA Grand
Challenge: Developing a Completely Autonomous Vehicle," IEEE Intelligent Systems, Vol.
19, No. 5, September–October 2004, pp. 8–11.
[8] Chen, Q., and Ü. Özgüner, "Intelligent Off-Road Navigation Algorithms and Strategies
of Team Desert Buckeyes in the DARPA Grand Challenge '05," Journal of Field Robotics,
Vol. 23, No. 9, September 2006, pp. 729–743.
[9] Özgüner, Ü., C. Hatipoglu, and K. Redmill, "Autonomy in a Restricted World," Proc. of
the IEEE ITS Conference, Boston, MA, November 9–12, 1997, p. 283.
[10] Kurt, A., and Ü. Özgüner, "Hybrid State System Development for Autonomous Vehicle
Control in Urban Scenarios," in C. Myung Jin and M. Pradeep (eds.), Proceedings of the
17th World Congress, The International Federation of Automatic Control, Seoul, Korea,
2008, pp. 9540–9545.
CHAPTER 4
Sensors, Estimation, and Sensor Fusion
Sensors are applied at all levels of vehicle control and autonomy, ranging from
engine control, ABS braking, and stability enhancement systems, through passive
driver assistance systems such as navigation, infotainment, backup hazard warning,
and lane change assistance, and active safety systems such as lane maintenance and
crash avoidance, to full vehicle automation.
In broad terms, sensors can be grouped according to the function they provide.
Internal vehicle state sensors provide information about the current operation and
state of the vehicle, including lower-level functions such as engine operations and
higher-level states such as vehicle motion and position. External environment sen-
sors provide information about the world outside the vehicle, potentially including
road and lane information, the location and motion of other vehicles, and station-
ary physical objects in the world. Finally, driver state and intention sensors provide
information about the state or intentions of the driver. These sensors can include
seat occupancy and passenger weight (pressure or infrared sensors), audio sensors,
internal cameras, eye trackers, breath alcohol sensors, and haptic transducers.
In this chapter, we will review the general characteristics of sensors and sensor
performance. Then we will look at the individual sensors and technologies that are
generally applied for vehicle control and automation. Since it is often both advan-
tageous and necessary to combine the information from multiple sensors to provide
a full and error-free understanding of the current state of the vehicle and the world,
we end this chapter with a description of estimation and sensor fusion approaches.
Conceptually, any device or technology that provides information can be con-
sidered, and treated, as a sensor. In vehicle automation applications, common ex-
amples of this include map databases and wireless vehicle-to-vehicle and vehicle-to-
infrastructure communications, which are discussed in Chapters 6 and 7, and other
cooperative infrastructure technologies including visual signs, tags, or markers and
radar reflective surfaces, which are discussed in this chapter.
Sensors are fundamentally transducers in that they convert one physical property
or state to another. There are several general characteristics that are important
in describing and understanding the behavior of sensors and sensor technologies.
While we will not cover these in detail, it is worth bearing in mind how these factors
69
70
Sensors, Estimation, and Sensor Fusion
relate to the selection, interpretation, and fusion of sensors individually and when
used as a sensor suite.
Accuracy: The error between the true value and its measurement, which may
include noise levels and external interference rejection parameters;
Resolution: The minimum difference between two measurements (often
much less than the actual accuracy of the sensor);
Sensitivity: The smallest value that can be detected or measured;
Dynamic range: The minimum and maximum values that can be (accurately)
detected;
Perspective: Quantities such as the sensor range or its field of view;
Active versus passive: Whether the sensor emits energy or radiation that il-
luminates the environment or relies on ambient conditions;
Timescale: Quantities such as the update rate of the sensor output and the
frequency bandwidth of the measurement output over time;
Output or interface technology: For example, analog voltage or current, digi-
tal outputs, and serial or network data streams.
Autonomous vehicles use all the standard sensors available in a car for self-sensing.
Thus speed sensing is available, and variables that are measured in the engine and
powertrain or are related to the brakes can be accessed. Furthermore, sensors are
needed to measure steering wheel angle and gear shift, but these are fairly easy to
design or develop if not already present on the vehicle.
Wheel speed, usually measured by a Hall effect sensor, which produces a digi-
tal signal whose frequency is proportional to speed;
Vehicle dynamic state, possibly including yaw rate and lateral and longitudi-
nal acceleration;
Driver inputs, for example, steering wheel position, throttle and brake pedal
positions, turn signals, headlights, windshield wipers, and so forth;
Transmission gear and differential state;
Brake pressure, either at the master cylinder or for each wheel, usually mea-
sured by a diaphragm or silicon piezoelectric sensor;
Engine and exhaust variables, for example, coolant temperature, O2 and
NOX levels, RPM, and spark plug firing timing.
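The wheel-speed item above amounts to a simple frequency-to-velocity conversion. A sketch, where the pulse count per revolution and the tire radius are illustrative values rather than figures from any particular vehicle:

```python
import math

def wheel_speed_mps(pulse_freq_hz, pulses_per_rev=48, tire_radius_m=0.3):
    """Vehicle speed implied by a Hall-effect wheel-speed pulse train.

    The sensor emits a digital pulse train whose frequency is proportional
    to wheel speed; pulses_per_rev and tire_radius_m are assumptions.
    """
    wheel_rev_per_s = pulse_freq_hz / pulses_per_rev
    return wheel_rev_per_s * 2.0 * math.pi * tire_radius_m

# 480 pulses/s -> 10 rev/s -> about 18.8 m/s (roughly 68 km/h)
```

In practice the ABS controller performs this conversion per wheel, and tire radius itself varies with pressure and load, one reason such signals are often fused with GPS or inertial data.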
4.2 Vehicle Internal State Sensing 71
Since these sensors are designed by the vehicle OEM to serve a specific purpose
in the vehicle (for example, ABS, stability enhancement, or powertrain control),
and given the significant price pressure in the automotive market, these sensors
tend to be only as good as is required for their designed application. They are not
always of sufficient quality for vehicle automation.
inexpensive or embedded GPS receiver can achieve position accuracies on the order
of 5–15 meters. This is sufficient for providing navigation and routing instructions
for a human driver, but is insufficient for resolving which lane a vehicle is currently
occupying.
To achieve the next level of GPS position accuracy, correction signals are avail-
able from free and publicly available services, such as the Nationwide Differential
GPS (NDGPS) [1] service broadcasting in the long-wave band and satellite-based
augmentation systems (SBAS) such as the Wide Area Augmentation System (WAAS)
provided by the U.S. Federal Aviation Administration or the European Geostation-
ary Navigation Overlay Service (EGNOS). An appropriately capable GPS receiver,
using one of these basic differential correction data services broadcast over a large
area of the planet, can achieve position accuracies on the order of 1–2 meters. This
is sufficient for some safety applications, and can sometimes resolve lane identity,
but is insufficient for autonomous vehicle operations.
Locally computed differential corrections or commercial SBAS systems, for ex-
ample, Omnistar VBS and Firestar, can achieve submeter accuracies, which signifi-
cantly improves the performance of data collection systems and safety systems, and
in some cases is sufficient for longitudinal vehicle automation.
The more sophisticated commercial correction services such as Omnistar HP
can achieve position accuracies of 10 cm or better. High-precision locally computed
real-time kinematic correction systems can produce position accuracies of 1–2 cm.
Measurements at this level of accuracy are sufficient for full vehicle automation. Of
course, both the correction data and the GPS receiver hardware needed to achieve
these levels of accuracy are quite expensive.
The time information from the GPS is particularly useful in intervehicle com-
munication since it allows the precise synchronization of clocks across multiple
vehicles and infrastructure systems.
Navigation message:
50-bps data rate;
12.5-minute full message duration;
Subframe data:
UTC time and clock corrections;
Almanac;
Precise ephemeris data (repeated every 30 seconds);
Ionospheric propagation data.
Three points generally reduce the possible location of the object to two points, one
of which is, in the case of the GPS system, unfeasibly far from the surface of the
Earth. This is illustrated in Figure 4.3.
Thus, in order to compute the position of an object we need its distance to at
least three known points. The orbital location of a GPS satellite can be computed
fairly precisely using the almanac and ephemeris data transmitted by the satellites
themselves coupled with a mathematical model of orbital dynamics. Indeed one of
the primary functions of the GPS ground control system is to track the satellites
and regularly update the almanac and ephemeris information. What remains is to
determine the distance, or range, from each of at least three satellites to the antenna
of the GPS receiver.
As mentioned earlier, the distance travelled is a direct function of the elapsed
time between the transmission of a signal and its reception. GPS satellites carry
atomic clocks of very high precision, and the GPS ground control system ensures
that the clocks of all satellites are synchronized to within a few nanoseconds. The
data transmitted by each GPS satellite also contains information about when trans-
missions occur. All GPS satellites transmit on the same frequencies, so each satellite
transmits its data stream by modulating it with its own unique binary pseudo-
random noise (PRN) code sequence. On the L1 channel, the PRN codes are 1,023 bits long
and repeat every millisecond. Since the GPS receiver knows the PRN sequence of
each satellite, it can produce its own copy of that sequence synchronized to its in-
ternal clock and use correlation to determine the time offset of the received and in-
ternally generated signals for each visible satellite. This is illustrated in Figure 4.4.
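The correlation step can be sketched in a few lines. A random ±1 sequence stands in for the real C/A code (actual GPS PRNs are specific Gold codes, and real receivers correlate at RF sample rates, not one sample per chip):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in PRN: a random +/-1 sequence of 1,023 chips.
prn = rng.choice([-1.0, 1.0], size=1023)

true_offset = 317                       # unknown code phase to recover
received = np.roll(prn, true_offset) + 0.5 * rng.standard_normal(1023)

# Circular correlation of the local replica against the received signal:
# the peak appears at the code phase (time offset) of the satellite signal.
corr = np.array([np.dot(received, np.roll(prn, k)) for k in range(1023)])
estimated_offset = int(np.argmax(corr))
```

The sharp autocorrelation peak of the code is what lets the receiver recover the delay even when the signal is buried in noise.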
However, GPS receivers are typically low-cost devices using nothing more than
a crystal-controlled clock with considerable drift over time relative to the GPS
satellite clocks. Thus, it is necessary to align the receiver clock with the satellite
clocks. This introduces a fourth unknown into the problem, and therefore a full
GPS position fix actually requires four satellites. Since the clock bias (the difference
between the receiver's clock and the satellite clocks) is unknown, the distance or
range measured by the PRN correlation process is called a pseudorange.
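Given pseudoranges to at least four satellites, the receiver position and clock bias can be solved by iterative linearized least squares. The following sketch uses synthetic satellite geometry and an assumed clock bias; it is an illustration of the method, not a production solver:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iters=10):
    """Gauss-Newton solution of rho_i = |sat_i - p| + b for p and b.

    b is the receiver clock bias expressed in meters (c times seconds).
    Starting from the Earth's center (zeros) is the conventional choice.
    """
    est = np.zeros(4)                        # [x, y, z, b]
    for _ in range(iters):
        diff = sat_pos - est[:3]
        ranges = np.linalg.norm(diff, axis=1)
        predicted = ranges + est[3]
        # Jacobian: d(rho)/dp = -(unit vector to satellite), d(rho)/db = 1
        J = np.hstack([-diff / ranges[:, None], np.ones((len(ranges), 1))])
        est += np.linalg.lstsq(J, pseudoranges - predicted, rcond=None)[0]
    return est[:3], est[3]

# Synthetic check: four satellites at ~20,000-km scale, a known receiver
# position, and a 1-ms clock bias folded into the pseudoranges.
sats = np.array([[15e6, 0, 21e6], [-12e6, 9e6, 20e6],
                 [5e6, -18e6, 19e6], [1e6, 17e6, 22e6]])
truth = np.array([1.2e6, -0.3e6, 6.0e6])
bias_m = C * 1e-3
rho = np.linalg.norm(sats - truth, axis=1) + bias_m
pos, b = solve_position(sats, rho)
```

With noise-free data the iteration recovers the true position and bias to numerical precision; real receivers add weighting and error models on top of this core step.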
Expressed mathematically in Earth-centered, Earth-fixed coordinates, the measured
pseudorange ρ_i for each visible satellite i is

ρ_i = √((x_i − x)² + (y_i − y)² + (z_i − z)²) + c·Δt

where (x_i, y_i, z_i) is the known position of satellite i, (x, y, z) is the unknown
position of the receiver antenna, c is the speed of light, and Δt is the unknown
receiver clock bias.
Circular error probability (CEP): The radius of the circle that is expected to
contain 50% of the measurements.
Twice the distance root mean square (2dRMS): The radius of the circle that
is expected to contain 95% of the measurements.
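Both statistics can be computed directly from logged horizontal position errors. A sketch using synthetic, circularly distributed errors (a 2-m standard deviation per axis is assumed):

```python
import numpy as np

def cep_and_2drms(errors_xy):
    """Empirical CEP and 2dRMS from 2-D position-error samples (meters)."""
    r = np.linalg.norm(errors_xy, axis=1)       # horizontal error radii
    cep = np.median(r)                          # circle holding ~50% of fixes
    two_drms = 2.0 * np.sqrt(np.mean(r ** 2))   # circle holding ~95% of fixes
    return cep, two_drms

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 2.0, size=(10_000, 2))  # 2-m sigma per axis
cep, two_drms = cep_and_2drms(samples)
# For circular Gaussian errors, CEP is about 1.18*sigma and 2dRMS about
# 2*sqrt(2)*sigma, so roughly 2.35 m and 5.66 m here.
```

Note that 2dRMS contains 95% of fixes only for approximately circular Gaussian errors; elongated error ellipses change the containment fraction.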
estimates will degrade over relatively short time spans. It may also be possible to
continue autonomous vehicle operation using SLAM techniques or only local in-
formation, for example, lane marker sensing in the case of driving down a limited
access highway.
This continues to be an issue for autonomous vehicle implementations as well
as an area of active research, since a loss of accurate position and orientation
information adversely affects most other vehicle sensing and control systems.
providers use many base stations covering, for example, an entire continent to
model GPS observation errors and provide the data to GPS receivers often as satel-
lite broadcasts. Examples include the U.S. FAA's WAAS system and the European
EGNOS system, which can generally increase position accuracy to within 1–2 meters
of truth, as well as commercial providers such as Omnistar and Racal, which
offer varying levels of services down to the 10-cm range.
Another approach to improving GPS position accuracies involves measuring
the actual carrier phase of the received GPS satellite transmissions, as shown in
Figure 4.9, to provide higher accuracy range measurements. Very small changes in
carrier phase can be tracked over time (the L1 carrier wavelength is 19 cm). This is
most effective when the GPS receiver is capable of receiving both L1 and L2 trans-
missions, since the different propagation characteristics of the two frequencies can
be used to identify and quantify errors.
Some high-end GPS receivers can also measure Doppler shifts in order to di-
rectly compute the relative velocities of each satellite. Multiple GPS receivers can
also be combined with specialized algorithms to measure absolute yaw angle.
Finally, it is possible to store the raw code and carrier phase measurements
(and possibly the Doppler velocity measurements) logged by a remote GPS
receiver, and to obtain precise satellite ephemeris data available after the fact
(usually within 6–24 hours), in order to compute highly accurate position and
velocity estimates.
Figure 4.10 (a) Piezo and MEMS accelerometer and (b) ADXL202 micrograph. (Figure courtesy of
Analog Devices.)
be estimated and used to project the accelerations and angular rates into a fixed,
ground referenced coordinate system. An example of such a device is described in
Table 4.1.
Figure 4.11 (a, b) MEMS gyroscope micrograph. (Figure courtesy of Analog Devices.)
Figure 4.12 (a, b) Sagnac interferometers.
Figure 4.13 Commercial IMU hardware and performance specifications. (Figure courtesy of
Memsic.)
used to estimate the strength of the external magnetic field. The Hall effect sensor
is based on the principle that, as charges (electrons or holes) forming a current flow
through a conductor, they travel in generally a straight line (excluding collisions),
but in the presence of a magnetic field perpendicular to the direction of current flow
they travel in a curved path leading to an accumulation of charge on the sides of the
conductive material and thus producing an electric potential. Finally, one can use
anisotropic magnetoresistive alloys as one leg of a Wheatstone bridge. The resistance
changes in relation to the applied magnetic field, and this causes a voltage imbal-
ance in the bridge.
A number of different sensors have been developed for sensing the external envi-
ronment of an autonomous vehicle. Many have been developed initially for safety
warning or safety augmentation systems that are now being deployed on some high-
end vehicles. These include radar sensors and scanning laser range finders, known as
light detection and ranging (LIDAR) or sometimes laser detection and ranging (LADAR).
4.3 External World Sensing 85
4.3.1 Radar
Radar is a popular active sensing technology for road vehicles used in sensing both
near and far obstacles. A radar system tends to be designed based on the desired
safety or control function. For example, for a fixed, usually regulated output power,
there is a general trade-off between field of view and range. For applications like
imminent crash detection and mitigation, lane change, and backup safety systems, a
shorter range but wider field of view is desired, perhaps on the order of 30 meters of
range with a 65–70° field of view. For applications like advanced cruise con-
trol and crash avoidance, a longer range but narrower field of view is required, per-
haps on the order of a 120-meter range and a 10–15° field of view. Two examples
of commercial automotive radar systems are shown in Figure 4.15.
Radars are popular choices because they are robust mechanically and operate
effectively under a wide range of environmental conditions. They are generally un-
affected by ambient lighting or the presence of rain, snow, fog, or dust. They gen-
erally provide range and azimuth measurements as well as range rates. They also
tend to be available at a lower cost relative to other active sensors such as LIDAR,
although their measurements, in particular azimuth angle, are less precise. Their
price is decreasing steadily as they are being produced in larger quantities.
Radar sensors have been built to operate on a number of frequencies, but ve-
hicular applications appear to be standardizing on 24 GHz and 77 GHz, with a few
very short-range sensors operating at 5.8 GHz or 10.5 GHz. They may be based on
a pulsed Doppler or one of many continuous wave modulations. Early radar sys-
tems, for example the Delphi ACC radar system, employed a mechanically rotated
antenna to generate azimuth angle measurements, but this is difficult to manufac-
ture robustly and inexpensively. Most modern radars are based on a multielement
Figure 4.15 (a) MaCOM SRS 24-GHz UWB radar. (b) Tyco long-range 77-GHz radar. (Figures cour-
tesy of Cobham Sensors.)
4.3.2 LIDAR
A scanning laser range finder system, or LIDAR, is a popular system for obstacle
detection. A pulsed beam of light, usually from an infrared laser diode, is reflected
from a rotating mirror. Any nonabsorbing object or surface will reflect part of that
light back to the LIDAR, which can then measure the time of flight to produce
range distance measurements at multiple azimuth angles. The basic idea is shown
in Figure 4.16.
Figure 4.16 (a–c) LIDAR scans.
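A single planar sweep of range measurements converts readily into Cartesian points for obstacle mapping. A sketch, where the 180-degree sweep, 1-degree step, and 80-m maximum range are illustrative assumptions (real devices differ in resolution and in how they encode "no return"):

```python
import math

def scan_to_points(ranges_m, start_deg=-90.0, step_deg=1.0, max_range_m=80.0):
    """Convert one planar LIDAR sweep into (x, y) points in the sensor frame.

    x points ahead of the sensor and y to the left; readings at or beyond
    max_range_m are treated as "no return" and dropped.
    """
    points = []
    for i, r in enumerate(ranges_m):
        if r >= max_range_m:
            continue
        a = math.radians(start_deg + i * step_deg)
        points.append((r * math.cos(a), r * math.sin(a)))
    return points
```

Downstream modules typically cluster such points into obstacles or accumulate them into an occupancy grid.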
Figure 4.17 (a–c) LIDAR sensors (SICK LMS291, Ibeo Lux, and Velodyne HD-32E). [(a) Image cour-
tesy of SICK AG, (b) image courtesy of Ibeo Automotive Systems GmbH, and (c) image courtesy of
Velodyne Lidar Inc.]
rectified, bird's-eye view. A summary of the lane detection problem and other image
processing problems and solution techniques relevant to on-road vehicles can
be found in [5].
As a representative example of a sensor suitable for lane marker location, we
present a system for extracting lane marker information from image data in a form
suitable for use in the automated steering of moving vehicles developed at The
Ohio State University. The algorithm [3] was designed with speed and simplicity in
mind. It assumes a flat, dark roadway with light-colored lane markers, either solid
or dashed, painted on it. The system was initially developed in 1995, and has been
tested in various autonomous vehicle control research projects, including steering
control at highway speeds at the 1997 AHS Technical Feasibility Demonstration
in San Diego.
The basic algorithm is as follows. First, we implement an adaptive adjustment
of the black and white brightness levels in the frame grabber in order to maximize
the dynamic range (contrast) in the region of interest under varying lighting condi-
tions. Then, using the history of located lane markers in previous image frames,
information about lane markers that have already been located in this frame, and
geometric properties of the ground to image plane projection of lane markers (for
example the convergence of parallel lines at infinity and the known or estimat-
ed width of the lane), we identify regions of interest at a number of look ahead
distances.
For each region of interest we apply a matched filter, tuned to the average
width of a lane marker at the given distance ahead, to the pixel brightness values
across the image and store this information in a corresponding vector. We extract
from each vector those candidate lane marker points (bright spots) that pass a
number of statistical hypothesis tests based on the standard deviation and absolute
magnitude of the candidate points and the minimum overall standard deviation of
the nearby area of the image.
Assuming that the lane markers on the road are parallel, we can fit a low-order
polynomial to the computed location of the middle of the lane at all look ahead
distances for which lane markers were identified. This curve allows us to estimate
a lateral offset distance between the center of the lane and longitudinal axis of the
vehicle at any look ahead distance. We can also estimate the curvature of the lane.
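The fit-and-evaluate step can be sketched as follows. The lane-center measurements are hypothetical numbers, and a quadratic is just one choice of "low-order" polynomial:

```python
import numpy as np

# Hypothetical lane-center estimates: lateral offset y (m) of the lane
# center at several look-ahead distances x (m), in vehicle coordinates.
x = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
y = np.array([0.325, 0.500, 0.725, 1.000, 1.325])

c = np.polyfit(x, y, 2)          # low-order fit: y(x) = c0*x^2 + c1*x + c2
lane = np.poly1d(c)

offset_at_vehicle = lane(0.0)    # lateral offset at the vehicle
offset_ahead = lane(15.0)        # offset at a 15-m look-ahead distance
curvature = 2.0 * c[0]           # y'' of the fit; ~curvature for small slopes
```

The steering controller can then be fed the offset at whatever look-ahead distance suits the current speed, with the curvature term available for feedforward.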
By using curve fitting and looking well ahead, this algorithm can handle bro-
ken or dashed lines. Also, by adaptively estimating the width of the lanes or by
entering the lane width directly into the algorithm, the software can estimate the
position of the vehicle in the lane from only one lane marker.
It was originally implemented on a TI TMS320C30 DSP system using a low-
cost monochrome CCD camera. The camera was mounted at the lateral center of
the vehicle as high as possible in order to obtain the best viewing angle. An example
of its operation is shown in Figure 4.18. We estimate that it logged over 1,500
miles both on I-15 and on a demonstration site with tight (150-foot radius) curves
and, with the appropriate tuning of parameters and filter coefficients, was tested
and found to work well on a number of different road surfaces (new asphalt, worn
asphalt, and concrete) and under different lighting conditions.
Figure 4.20 Planar example of the recovery of depth using stereo images.
baseline length, and f is the focal length of the idealized (pinhole) camera. Then the
disparity is given as

d = x_r − x_l

and the depth is recovered as

Z = f·B/d
We note that the density of the resulting disparity or depth map is a function of
the number of correspondence points that can be identified and the distribution of
those points in the real world. We also note that depth Z is inversely proportional
to disparity, so that the resolution of the depth map decreases for objects further
from the camera due to the projection of a larger area onto a single pixel as dis-
tance increases.
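A quick numeric illustration of Z = f·B/d and its distance-dependent resolution, with an assumed focal length and baseline:

```python
# Depth from disparity, Z = f * B / d, with assumed camera parameters.
focal_px = 700.0      # focal length in pixels (illustrative)
baseline_m = 0.12     # stereo baseline in meters (illustrative)

def depth_m(disparity_px):
    return focal_px * baseline_m / disparity_px

# The inverse relation means depth resolution falls off with distance:
# near: d = 42 px -> 2.0 m;  d = 41 px -> ~2.05 m  (a 5-cm step)
# far:  d = 2 px  -> 42.0 m; d = 1 px  -> 84.0 m   (a 42-m step)
```

A one-pixel disparity error is thus negligible for nearby obstacles but dominates the depth estimate for distant ones, which is why long-range stereo requires wide baselines or subpixel matching.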
The following four steps comprise the general stereo vision algorithm:
We briefly discuss the radar reflective stripe technology developed by The Ohio
State University [6]. The radar sensor measures lateral position by sensing back-
scattered energy from a frequency selective surface constructed as lane striping and
mounted in the center of the lane. The conceptual basis for the radar lane tracking
system is shown in Figure 4.21. The radar reflective surface is designed such that
radar energy at a particular frequency is reflected back toward the transmitting
antenna at a specific elevation angle. Thus, by varying the frequency of the radar
signal we can vary the look ahead distance of the sensor.
The radar chirps between 10 and 11 GHz over a 5-millisecond period, trans-
mitting the radar signal from a centrally located antenna cone. Two antenna cones,
separated by approximately 14 inches, receive the reflected radar energy. The re-
ceived signal is downconverted into the audio range by mixing with the transmit
signal. The lateral offset of the vehicle is found as a function of the amplitude of the
downconverted left and right channel returns at a radar frequency corresponding
to a particular look ahead distance.
In addition, the peak energy in the downconverted signal appears at a frequen-
cy that is a function of the distance from the vehicle to an object ahead. It is thus
possible to extract the distance to an object ahead of the automated vehicle using
the radar hardware already in place for lateral sensing.
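Although the text does not state the range equation explicitly, for a linear FMCW chirp the beat frequency maps to range as f_b = (2R/c)(B/T). A sketch using the 10-11 GHz, 5-ms chirp parameters above (the 20-kHz example value is an assumption for illustration):

```python
C_LIGHT = 299_792_458.0   # speed of light (m/s)

def fmcw_range(beat_hz, sweep_bw_hz=1.0e9, sweep_time_s=5.0e-3):
    """Range to a reflector from the downconverted (beat) frequency.

    For a linear chirp of bandwidth B over time T, the round-trip delay
    2R/c offsets the received chirp so the mixer output sits at
    f_b = (2R/c) * (B/T), hence R = c * f_b / (2 * B / T).
    Defaults follow the text: a 10-11 GHz chirp (B = 1 GHz) over 5 ms.
    """
    slope = sweep_bw_hz / sweep_time_s    # chirp slope B/T in Hz/s
    return C_LIGHT * beat_hz / (2.0 * slope)

# A beat frequency of 20 kHz corresponds to a reflector roughly 15 m ahead:
r_ahead = fmcw_range(20_000.0)
```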
Sensors, Estimation, and Sensor Fusion
Figure 4.22 One autonomous car passing another at TRC proving grounds. The radar reflective
stripe can be seen on both lanes.
4.4 Estimation
Even under the best of circumstances sensors provide noisy output measurements.
In autonomous vehicles, measurements from sensors of varying reliability are used
to ascertain the location and velocity of the car both with respect to the roadway
or locations on a map and with respect to obstacles or other vehicles. In order to
use sensor measurements for control purposes the noise components may need to
be eliminated, measurements may need to be transformed to match variables, and
measurements from multiple sensors may need to be fused. Fusion of data may
be required because a single sensor or sensor reading is insufficient to provide the
needed information, for example the shape of an obstacle or open pathway, or it
may be needed to provide redundancy and reduce uncertainty.
The system state is assumed to evolve according to the linear stochastic difference equation

x_k = F_k x_{k−1} + B_k u_k + w_k

where F_k is the state transition matrix, B_k is the control-input matrix applied to the control vector u_k, and w_k is the process noise, assumed to be zero mean Gaussian with covariance Q_k:

w_k ~ N(0, Q_k)

At each time step an observation z_k of the state is made:

z_k = H_k x_k + v_k

where

H_k is the output matrix modeling the linear combination of states that are
measured;

v_k is the observation noise, also assumed to be zero mean Gaussian, with covariance R_k:

v_k ~ N(0, R_k)
The initial state and the noise vectors at each time instant are mutually independent.
The Kalman filter estimates the state of a system based only on the last estimate
and the most recent set of measurements available, and therefore is a recursive
filter.
The state of the filter is represented by two variables: the a posteriori state estimate x̂_{k|k} and the a posteriori error covariance matrix P_{k|k}.
The Kalman filter has two distinct phases. The prediction phase is where the
state is estimated for the next time instant based on the previous estimate, but with
no new measurements.
x̂_{k|k−1} = F_k x̂_{k−1|k−1} + B_k u_k

P_{k|k−1} = F_k P_{k−1|k−1} F_kᵀ + Q_k

Note that the predicted state is just the known state equations applied with no noise, while the predicted covariance grows by the process noise covariance Q_k.
The update phase is where the predicted state is corrected when a new set of
measurements is available:
x̂_{k|k} = x̂_{k|k−1} + K_k y_k

where the measurement residual (innovation) is

y_k = z_k − H_k x̂_{k|k−1}

the innovation covariance is

S_k = H_k P_{k|k−1} H_kᵀ + R_k

the Kalman gain is

K_k = P_{k|k−1} H_kᵀ S_k⁻¹

and the updated covariance is

P_{k|k} = (I − K_k H_k) P_{k|k−1}
Usually one assumes that the estimated distributions for x̂_{0|0} and P_{0|0} are initially known. Sometimes the value of x̂_{0|0} is known exactly, and then P_{0|0} = 0.
Although one generally conceives of the Kalman filter operating in a cycle of
predict, update, and repeat, it is possible to skip the update step if no new measure-
ments are available or to execute multiple update steps if multiple measurements
are available or if multiple H matrices are used because of the presence of different
sensors possibly updating at different rates.
We have only presented a brief summary of the Kalman filter in the linear systems case. There are a number of variants and extensions of the Kalman filter, the best known being the extended Kalman filter, in which the state transition and the output or measurement functions are nonlinear, and the covariance matrices and Kalman gains are computed using a linearization of the system, the Jacobian, evaluated at the current state estimate. The reader is referred to [7, 8] for a more thorough treatment of the Kalman filter and other estimation techniques.
4.4.2 Example
As a very simple example, consider that one might want to estimate the speed of a
target vehicle detected by some sensor that measures the position of the vehicle over
time. This could be a LIDAR sensor or an image processing sensor. For simplicity,
we assume the sensor is stationary, for example it could be mounted on the side of
the road or above the roadway. We also assume that the vehicles passing the sensor
are modeled with point mass dynamics, without friction, and that they are experi-
encing an unknown random acceleration (or deceleration). In this case, the F, H, R,
and Q matrices are constant and so we drop their time indices.
The model of the vehicle consists of two states, the position and velocity of the vehicle at a given time:

x_k = [x, ẋ]ᵀ
We assume that between each sampling time step, for example the (k−1)th and the kth, each vehicle has a random change in its acceleration a_k that is normally distributed with mean 0 and standard deviation σ_a, which is treated as a process noise input. The point mass dynamics with no friction is

x_k = F x_{k−1} + G a_k
where

F = [1  Δt; 0  1],   G = [Δt²/2, Δt]ᵀ

and the process noise covariance is Q = σ_a² G Gᵀ. The measurement model is
zk = Hxk + vk
where
H = [1 0]
and the measurement or sensor noise is also normally distributed, with mean 0 and standard deviation σ_z, that is:

R = E[v_k v_kᵀ] = σ_z²
The filter is initialized assuming the initial state is known exactly:

x̂_{0|0} = [0, 0]ᵀ,   P_{0|0} = [0 0; 0 0]
Having defined the model and the various parameters for the Kalman filter, we can now select an appropriate time step, possibly equal to the update rate of the sensor, and implement the Kalman filter computations to estimate the vehicle state, including the unknown speed.
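Under these assumptions the filter reduces to a few lines of scalar arithmetic. The following sketch implements the predict/update cycle for this two-state model; the numerical parameter values are illustrative, not taken from the text:

```python
def kalman_track(measurements, dt=0.1, sigma_a=5.0, sigma_z=1.0):
    """Kalman filter for the two-state constant-velocity model above.

    measurements: noisy position readings z_k, one per time step.
    Uses F = [1 dt; 0 1], G = [dt^2/2, dt], H = [1 0],
    Q = sigma_a^2 G G^T, R = sigma_z^2, initialized at the origin with
    zero covariance as in the text.
    Returns a list of (position, velocity) estimates after each update.
    """
    x, v = 0.0, 0.0                      # state estimate x_hat
    p11 = p12 = p21 = p22 = 0.0          # error covariance P
    g1, g2 = 0.5 * dt * dt, dt
    q11 = sigma_a ** 2 * g1 * g1
    q12 = sigma_a ** 2 * g1 * g2
    q22 = sigma_a ** 2 * g2 * g2
    r = sigma_z ** 2
    out = []
    for z in measurements:
        # Predict: x = F x,  P = F P F^T + Q.
        x = x + dt * v
        p11, p12, p21, p22 = (p11 + dt * (p12 + p21) + dt * dt * p22 + q11,
                              p12 + dt * p22 + q12,
                              p21 + dt * p22 + q12,
                              p22 + q22)
        # Update with the scalar position measurement (H = [1 0]).
        s = p11 + r                      # innovation covariance S
        k1, k2 = p11 / s, p21 / s        # Kalman gain K
        y = z - x                        # innovation
        x, v = x + k1 * y, v + k2 * y
        # P = (I - K H) P, using the pre-update values on the right.
        p11, p12, p21, p22 = ((1 - k1) * p11, (1 - k1) * p12,
                              p21 - k2 * p11, p22 - k2 * p12)
        out.append((x, v))
    return out

# Noiseless measurements of a target moving at a constant 10 m/s:
zs = [10.0 * 0.1 * (k + 1) for k in range(300)]
pos_est, vel_est = kalman_track(zs)[-1]
# The unknown speed is recovered from position-only measurements.
```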
As a second example, consider tracking another vehicle from the host vehicle using a sensor that returns range and bearing measurements. Modeling the target with constant-acceleration point mass dynamics driven by process noise, the system and measurement equations are
x_{k+1} = F x_k + D w_k

z_{r,k} = r_k + v_{r,k}

z_{θ,k} = θ_k + v_{θ,k}

where

w_k ~ N(0, Q_k),   v_k ~ N(0, R_k)
x_k = [x_pos, x_vel, x_acc, y_pos, y_vel, y_acc]ᵀ

D = [0 0; 1 0; 0 0; 0 0; 0 1; 0 0]

F = [1 T T²/2 0 0 0;
     0 1 T    0 0 0;
     0 0 1    0 0 0;
     0 0 0 1 T T²/2;
     0 0 0 0 1 T;
     0 0 0 0 0 1]

C = [1 0 0 0 0 0; 0 0 0 1 0 0]
State: x̂_{k+1|k} = F x̂_{k|k}

State covariance: P_{k+1|k} = F P_{k|k} Fᵀ + D Q_k Dᵀ

Measurement residual: y_k = [r_k cos(θ_k), r_k sin(θ_k)]ᵀ − C x̂_{k+1|k}

State: x̂_{k+1|k+1} = x̂_{k+1|k} + K_k y_k

State covariance: P_{k+1|k+1} = (I₆ − K_k C) P_{k+1|k}
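The distinctive step here is that the polar measurement is mapped to Cartesian coordinates before the residual is formed. A minimal sketch of that conversion, with hypothetical values:

```python
import math

def cartesian_residual(z_r, z_theta, x_est):
    """Residual y_k between a range/bearing measurement and the state.

    x_est: [x_pos, x_vel, x_acc, y_pos, y_vel, y_acc].
    The polar measurement (r_k, theta_k) is mapped to Cartesian
    coordinates and compared against the measured states C x_est,
    that is, (x_pos, y_pos).
    """
    zx = z_r * math.cos(z_theta)
    zy = z_r * math.sin(z_theta)
    return (zx - x_est[0], zy - x_est[3])

# Target estimated at (3, 4); the sensor reports the matching range
# r = 5 and bearing atan2(4, 3), so the residual vanishes:
resid = cartesian_residual(5.0, math.atan2(4.0, 3.0), [3.0, 0, 0, 4.0, 0, 0])
```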
Figure 4.25 shows results from an experimental test where the host vehicle
travels along the line y = x with a final velocity of 8.0619 m/s and tracks the other
vehicle that is traveling along the line y = x with a final velocity of 8.9762 m/s.
4.5 Sensor Fusion

In the previous sections we have looked at the various types of sensors that might
be used in an autonomous vehicle. However, as we shall see, usually a vehicle is
fitted with multiple sensors and technologies, and the outputs of these individual
sensors must be combined to produce a final overall view of the world. The primary
justifications for this approach are:
There are various techniques to deal with these conflicts, including tracking
and filtering (i.e., extended Kalman filter), confidence and hypothesis testing ap-
proaches, voting schemes, and evidence-based decision theory.
Figure 4.25 Sample tracking results: (a) trajectory of each vehicle; and (b) position estimate errors.
including one or more Global Positioning System (GPS) receivers augmented with
some correction service, for example, Omnistar HP wide-area differential correc-
tions, inertial measurement units (IMU), and dead reckoning sensors (wheel speeds,
transmission gear and speeds, throttle, brake, and steering wheel position) provided
on the vehicle, and a validation system to eliminate sensor errors, especially GPS-
related step-change events caused by changes in differential correction status or the
visible satellite constellation. To account for sensor errors, noise, and the different
update rates of each sensor, an extended Kalman filter is applied to generate the
required state measurements [9, 10].
The reasons for fusing these sensors are:
Accuracy: IMU integration can lead to the unbounded growth of position er-
ror, even with the smallest amount of error or bias in its measurements. This
gives rise to the need for an augmentation of the measurements by external
sources to periodically correct the errors. GPS can provide this, since it pro-
vides a bounded measurement error with accuracy estimates.
Data availability: GPS is a line-of-sight radio navigation system, and there-
fore GPS measurements are subject to signal outages, interference, and
jamming, whereas an IMU is a self-contained, nonjammable system that is
completely independent of the surrounding environment, and hence virtu-
ally immune to external disturbances. Therefore, an IMU can continuously
provide navigation information when GPS experiences short-term loss of its
signals. Similarly, dead reckoning sensors are internal to the vehicle.
Figure 4.26 shows one configuration of a vehicle localization sensor fusion sys-
tem. This example is often called a loosely coupled system because the final outputs
of the individual sensors are fused.
Several commercial manufacturers also provide tightly coupled systems, in
which low-level raw data, for example GPS pseudorange and Doppler measure-
ments, are directly fused with IMU and dead reckoning data.
but the cost of many of the sensors used on research vehicles would be prohibitive
for a commercial passenger vehicle application.
The level of autonomy with respect to the human driver is also a significant design issue in a passenger vehicle system. The degree to which the driver is to be part of the sensing and control loop is a design decision driven by both technical and nontechnical (i.e., marketing and legal) considerations. Driver attention and situation awareness, the human-machine interface and driver workload, and the driver's state must all be considered.
The availability of a priori data is also a significant issue, ranging from the terrain, digital elevation map, and satellite imagery datasets that might be considered useful for fully autonomous off-road route planning and navigation, to the road map datasets, traffic condition reports, and road maintenance activity schedules that would be useful for passenger vehicle automation or driving enhancements. Error correction and real-time updates of a priori data sets are obviously useful and necessary for future vehicle systems.
Finally, vehicle-to-vehicle and vehicle-to-infrastructure communication capabilities will almost certainly be involved in future vehicle systems, opening the potential of traffic cooperation and facilitating everything from navigation and routing systems to traffic control systems to driver warning and collision avoidance systems.
A general architecture for an autonomous vehicle sensor system is shown in
Figure 4.27. There are some distinctions in considering sensing requirements for
urban versus off-road applications. We list some noteworthy items here:
As will be discussed more fully in the following sections, the sensor coverage
must be tailored to the application. The primary consideration for off-road driv-
ing is obstacles in front of and immediately beside the vehicle. Inexpensive short
range ultrasonic sensors may be sufficient to allow for a small, slow backup ma-
neuver and to provide side sensing in very tight corridors. In an urban scenario,
where sensing is required to support sideways maneuvers into alternate lanes at
high speeds, u-turns, intersection negotiation, and merging into oncoming traffic,
sensing for a significant distance in all directions around the vehicle is required.
In general, we identify two approaches to sensor fusion and the representa-
tion of a world model: a grid (cell) or occupancy map approach [11] and a cluster
identification and tracking approach [12, 13]. For a grid map approach the sensing
4.5.3.1 Sensor Suite for an Example Off-Road Vehicle: The OSU ION
The Ohio State University participated in the 2005 DARPA Grand Challenge, an
off-road autonomous automated vehicle race held in the desert southwest (Nevada
and California) of the United States. The goal of the sensor suite and sensor fusion
module was to provide 360° sensor coverage around the vehicle while operating in
an entirely unknown environment with unreliable sensors attached to a moving ve-
hicle platform [15]. Budget constraints required that this be accomplished without
significantly expanding the existing sensing hardware available to the team. The
chosen sensor suite is shown in Figure 4.28. The effective range of each sensor is
also indicated.
Three SICK LMS 221-30206 180° scanning laser rangefinders (LIDARs) were
mounted at three different heights: the first at 60 cm above the ground and scan-
ning parallel to the vehicle body, the second at 1.1 meters above the ground and
scanning in a plane intersecting the ground approximately 30 meters ahead of the
vehicle, and the third at 1.68 meters above the ground with the scanning plane
intersecting the ground approximately 50 meters ahead of the vehicle. The use of
three LIDARs allowed a rough estimate of object height to be computed as the
(a)
(b)
Figure 4.28 (a, b) The OSU ION off-road vehicle sensing systems.
vehicle approached an obstacle. A fourth LIDAR, not shown in Figure 4.28(a), was
mounted at 1.68 meters above the ground and scanning in a vertical plane. This
LIDAR provided an estimate of the ground profile directly ahead of the vehicle,
which is crucial for eliminating ground clutter and the effects of vehicle pitching
and bouncing motions. An Eaton-Vorad EV300 automotive radar with a 12° scanning azimuth and an 80- to 100-meter range was also mounted parallel to the vehicle body alongside the lower LIDAR.
A stereo pair of monochrome Firewire cameras and an image processing sys-
tem, described below, was also installed on the vehicle. It was rigidly mounted in
solid housings and included devices for system ventilation and windscreen clean-
ing. The algorithmic structure of the video sensor platform for ION is depicted in
Figure 4.29 [16].
In a preprocessing step, the images acquired from the stereoscopic camera sys-
tem are warped to compensate for lens distortion and then rectified to compensate
for imperfect alignment and coplanarity of the two cameras. In order to achieve a
high degree of robustness against variable environmental conditions, a diversity of
features is exploited by the subsequent processing step. Disparity, color homogene-
ity, and orientation are the three primary features computed. The disparity feature
allows a fast and robust computation of the ground plane parameters. Similar to
estimation in Hough space, the v-disparity technique searches for a linear decreas-
ing disparity along the columns [17]. Disparity is a reliable clue for depth in well-
textured regions near the stereo camera. In contrast it is highly unreliable in regions
of homogeneous color. Humans possess the ability to interpolate over such regions.
We aim to mimic this capability by segmenting a monoscopic image into nonover-
lapping regions that include homogeneous colors. Hence, large deviations in color
may only occur across region boundaries.
Finally, eight Massa M-5000/95 ultrasonic rangefinders were mounted around
the vehicle to provide side sensing for narrow passages (including tunnels and
bridges) and rear sensing for the vehicle while driving in reverse. Two additional
ultrasonic rangefinders were mounted high on the vehicle and angled downward at approximately 45° to detect drop-offs and cliff faces near the left and right sides of the vehicle.
In addition to the sensors that monitor the external environment, localization
and orientation sensors, including a Novatel ProPak-LB-L1/L2 GPS using the Om-
nistar HP wide-area differential correction service, a Crossbow VG700A fiber-op-
tic based vertical gyroscope, a Honeywell digital compass, and wheel speed sensors
on the front and back wheels were installed, validated in real-time, and fused using
an extended Kalman filter to provide position, angular orientation, and speed in-
formation to both the sensor fusion and vehicle control modules [9, 10].
Then the pitch and roll of the ith LIDAR are determined from the terrain ori-
entation using
where (θ_i,offset, φ_i,offset) are the pitch and roll of the ith LIDAR with respect to the vehicle.
A normal vector in world coordinates for each horizontal LIDAR can be found
by using the gyroscope measurements and the yaw measurement from the GPS/INS
fusion algorithm as
where (θ_i, ψ_i, φ_i) are the pitch, yaw, and roll angles of the ith LIDAR.
The normal vectors N_{i=(1...3)} are used to define a plane for each LIDAR such that an approximation of the height for each measurement could be found by solving the following system of geometric plane equations (where · indicates the vector dot product)

N_{i=(1...3)} · (x_ilidar, y_ilidar, z_ilidar) = d,   N_{i=(1...3)} · (x_meas, y_meas, h) = d
for height h, where (xilidar, yilidar, zilidar) is the location of the ith LIDAR and (xmeas,
ymeas) is the measurement location. The resulting h is an approximate height be-
cause in actuality the (xmeas, ymeas) will be modified by the pitch and roll of the
vehicle.
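Solving the pair of plane equations for h is a one-line computation. A sketch using the 1.68-m mounting height from the text and a hypothetical 3° downward scan-plane pitch:

```python
import math

def measurement_height(normal, lidar_pos, meas_xy):
    """Approximate height h of a LIDAR return, from the plane equations.

    The plane constant d is fixed by requiring the LIDAR's own location
    to lie on the plane with the given normal N; the same plane equation
    is then solved for the unknown height at the measurement's (x, y).
    """
    nx, ny, nz = normal
    lx, ly, lz = lidar_pos
    d = nx * lx + ny * ly + nz * lz       # N . lidar_pos = d
    mx, my = meas_xy
    return (d - nx * mx - ny * my) / nz   # solve N . (mx, my, h) = d

# A level scan plane maps every return back to the sensor height:
h_level = measurement_height((0.0, 0.0, 1.0), (0.0, 0.0, 1.68), (30.0, 2.0))

# Pitched down 3 degrees, a return 30 m ahead and 2 m to the side sits
# near the ground: h = 1.68 - 30 * tan(3 deg).
pitch = math.radians(3.0)
n = (math.sin(pitch), 0.0, math.cos(pitch))
h = measurement_height(n, (0.0, 0.0, 1.68), (30.0, 2.0))
```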
However, in actual experiments the estimated height measurement did not
prove reliable. Errors in the pitch and roll angular measurements caused the object
height error to increase linearly with range. This may have been due to the fact that
the pitch and roll measurements were not synchronized with the sensor measure-
ments, and the fact that the Crossbow gyroscope was in a shock-mounted enclo-
sure that was not fixed rigidly with respect to the vehicle.
Another approach, which was deployed in the challenge vehicle, used a LIDAR
scanning vertically to find a ground profile, which can be used to cull measure-
ments that are close to the ground. Figure 4.30 shows a simulated vehicle with a
vertically mounted scanning LIDAR pointing towards a small hill with a box struc-
ture placed in front of the hill. The raw output of the simulated LIDAR is shown
in Figure 4.30(b). One can see that a scan of the vertical LIDAR finds a series of
measurements that represent points on the ground. Since each LIDAR scan occurs
at a rate of 30 Hz, synchronization with the ground profile LIDAR measurements
was not a major problem.
Figure 4.30 (a, b) Simulated vertically scanning LIDAR. In the raw output of (b), the returns corresponding to the sky (no return), the ground in front of the vehicle, and the box structure are distinguishable.
that is, whose estimated heights are near or below the estimated ground profile.
A stereo vision system, which produces a height above ground map, is similarly
processed and areas that are marked as indeterminate are removed from the data
stream. Most radar sensors deliver heavily processed results that do not contain
ground clutter. A series of carefully constructed coordinate transformations then places all the sensor data into a unified frame of reference while helping to ensure that the fusion results from multiple sensors do not distort or smear an object.
The fused LIDAR and vision data can be used both to detect objects and detect
empty space. When a LIDAR or radar sensor detects an object, a square region of map cells, sized according to the detection range and the sensor's angular resolution (with 0.20-meter cells), is identified, representing the space in the map potentially occupied by the detected object. Data from the stereo vision system is already quantized into spatial rectangles.
For each cell in which the presence of an object is to be marked, the confidence
values can be initialized as follows
In order to properly handle empty space detection and avoid eliminating pre-
viously detected objects, the height of previously detected objects as well as the
height of the scanning plane must be considered. The height of each cell is updated
only when the height of the LIDAR scan is greater than the current height recorded
in the map cell.
To process areas that are sensed to be free of objects, an efficient ray-tracing algorithm must be utilized, such as Bresenham's line algorithm [18]. Some conditions must be defined to determine when the values for a cell that is sensed as empty are to be updated; a specific rule of this form was chosen for the OSU ION vehicle.
When the test is successful, the confidence and height values are updated accordingly.

The vehicle-centered output map is generated from the internal, world-oriented map using

X_pos = Rnd( cos(ψ)(i − X_mid) − sin(ψ)(H_map − j − Y_mid) + X_v )

Y_pos = Rnd( sin(ψ)(i − X_mid) + cos(ψ)(H_map − j − Y_mid) + Y_v )

where (i, j) are locations in the vehicle-centered, body-fixed output map, (X_pos, Y_pos) are locations in the approximately vehicle-centered, world-oriented internal map, ψ is the vehicle yaw, (X_mid, Y_mid) indicate the location of the vehicle in the output map, and (X_v, Y_v) indicate the vehicle's location in the internal map. In this case, the interpolation is accomplished by rounding the result to the nearest integer.
While this operation does demand an interpolation, these interpolation errors
do not accumulate because the results of this interpolation are not fed back to the
sensor fusion map.
Figure 4.32 Sensor fusion results: (a) approaching a gate obstacle, (b) entering a tunnel, and (c) inside the tunnel (GPS blocked).
top right, and below that a visual image of the scene for comparison, for three ex-
amples selected from a 2005 DARPA Challenge qualification run at the California
Speedway. The location of the vehicle on the map is indicated by a small horizon-
tally centered box. The leftmost example shows the vehicle approaching a gate; the
gate and the traffic cones are visible in the map, as well as two chain link fences
separated by a walkway on the far right. Almost horizontal ground returns can be
seen in the LIDAR data. Notice also that the gate creates an unknown shadow re-
gion in the vision system data. The middle example shows the vehicle approaching
a tunnel. The tunnel, as well as jersey barriers behind it, can be seen in the map,
along with fences and machinery along the right side. The rightmost example shows
the vehicle inside the tunnel. GPS reception is unavailable, yet objects can be suc-
cessfully placed on the sensor map. This example also clearly shows the memory
capability of the sensor map, with objects behind the vehicle and occluded by the
tunnel remaining in the sensor map.
4.5.4.1 Sensor Suite for an Example On-Road Vehicle: The OSU ACT
Ohio State University participated in the 2007 DARPA Urban Challenge, an urban
autonomous vehicle competition held at the former George Air Force Base in Cali-
fornia. The overall behavior of the vehicle deployed in the competition is generated
using a nested finite state machine architecture implemented in the high-level con-
trol module as described in Chapter 3. Decisions and transitions in this system are
driven by trigger events and conditions extracted by the situation analysis module from a model of the external world built from an array of sensors by the sensor fusion module, as well as conditions extracted by the high-level control module from the vehicle's position, mission, and map databases. An overall block diagram of the
vehicle is shown in Figure 4.33.
The on-vehicle sensors must provide both information about the state of the
ego vehicle, including its position, velocity, and orientation, and coverage of all
features of interest in the external world. This information must be extracted and
provided to controllers in real time.
The current position, orientation, and velocity of the vehicle are estimated using a Kalman filter-based software module analyzing data from a number of sensors.
Figure 4.36(a), acquired while the vehicle was in a parking lot, shows other vehicles
as well as a building to the rear of the vehicle using outlines for the LIDAR returns.
Figure 4.36(b), acquired at a multilane intersection, shows other vehicles, including summary information from the track of one vehicle target, as well as ground clutter from curbs, small hills, and vegetation on the sides of the roads.
(a)
(b)
Figure 4.36 Sensor returns (a) in a parking lot and (b) at an intersection.
system that is fixed with respect to the world. Once the sensor measurements are in
a world-fixed coordinate framework, the sensor measurements can be placed into
clusters that form a group of returns (within 90 centimeters of one another in the
case of the algorithm deployed on the ACT vehicle). Various spatial descriptors can
then be extracted from each cluster. Linear features can, for example, be extracted
with a RANSAC-based algorithm. The geometry and extent of the cluster can be
recorded, the cluster centroid calculated, and the linear features used to estimate a
cluster center if the cluster is determined to represent another vehicle. Large clus-
ters, of a size that is unreasonable for a road vehicle, should not be tracked as
vehicles. Instead the large clusters can be reported as objects with zero velocity and
the linear features of the large clusters can be placed into a "clutter line" list that can terminate the tracks of any vehicles that venture too close.
After the sensor returns have been clustered, each cluster is classified according to whether it is occluded by another cluster. Clusters that are not occluded by another cluster are classified as "foreground," while clusters that are occluded on the right are "right occluded" and those occluded on the left are "left occluded."
The principal output of the sensor fusion algorithm is a list of tracks. Each of
the resulting tracks has a position and velocity. Also, the general size and shape
of the point cluster supporting the track is abstracted as a list of linear features.
One implementation strategy for each of these tasks is described in the rest of this
section.
where (θ, r) are sensor hits returned by the LIDAR and θ_offset = 116, X_offset = 1.93, Y_offset = 0.82 are the location and orientation values for the LIDAR, in this example the left rear SICK LIDAR on the ACT vehicle. These offset values can be
obtained with physical measurements as well as experimental calibration tests in-
volving the detection of objects in the overlapping regions of multiple sensors. For
instance, if one is confident about the orientation offset of one sensor, the offsets
of another sensor can be discovered by observing the same distinctive object within
some overlapping region.
Other sensor-to-vehicle coordinate transformations are similar to the transform described above, but they are specific to a certain sensor. A sensor's range measurement needs to be scaled to meters, and the angle measurement's direction may be reversed, or there may be an angle offset. One choice for the vehicle-based
coordinate system is the standard SAE coordinate system shown in Figure 4.38.
The positive x-axis passes through the front of the vehicle, and the positive y-axis
exits the right side. The positive z-axis points down and the yaw axis is positive
clockwise.
The transforms for moving to world coordinates from vehicle coordinates are
the following:
where (X_position, Y_position, φ) are the position and orientation of the vehicle as returned by the GPS/INS fusion algorithm.
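The clustering procedure discussed next can be sketched as follows (an illustrative reconstruction, not the deployed ACT code; the 0.9-m threshold follows the 90-cm grouping distance mentioned earlier, while the angular neighborhood size is a hypothetical parameter):

```python
import math

def cluster_points(points, dist_thresh=0.9, neighborhood=8):
    """Group sensor returns into clusters, comparing each point only
    against nearby entries in an angle-sorted list.

    points: list of (x, y) returns in a common coordinate frame.
    Returns a list of clusters, each a list of (x, y) points.
    """
    order = sorted(range(len(points)),
                   key=lambda i: math.atan2(points[i][1], points[i][0]))
    labels = [None] * len(points)
    next_label = 0
    for a, i in enumerate(order):
        # Compare only against points adjacent in the angle ordering.
        for b in range(max(0, a - neighborhood), a):
            j = order[b]
            if math.dist(points[i], points[j]) < dist_thresh:
                if labels[i] is None:
                    labels[i] = labels[j]
                elif labels[i] != labels[j]:
                    # Correction step: merge clusters found separately.
                    old = labels[j]
                    labels = [labels[i] if L == old else L for L in labels]
        if labels[i] is None:
            labels[i] = next_label
            next_label += 1
    groups = {}
    for idx, lab in enumerate(labels):
        groups.setdefault(lab, []).append(points[idx])
    return list(groups.values())

# Two well-separated groups of returns yield two clusters:
clusters = cluster_points([(5.0, 0.0), (5.3, 0.1), (0.0, 8.0), (0.2, 8.2)])
```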
In the case of sensor measurements, the above code performs reasonably well
while still minimizing computational time by prioritizing angular comparison.
Points that are adjacent in the angle axis of their polar representation are more
4.5 Sensor Fusion 125
likely to be in the same cluster than points that are not. This property allows the al-
gorithm to limit the comparisons between pi and pj to only those returns for which
pj is in the angular vicinity of pi. In order to achieve computational efficiencies, the
notion of a neighborhood can be defined in terms of the index into a list of points
sorted by angle.
However, the above pseudo-code doesn't guarantee a correct clustering. In pathological cases it is possible for multiple clusters to be derived from points that
should be within a single cluster. Thus, a correction step can be performed in which
all the clusters found with the above pseudo-code are checked to see if they are
closer than the threshold distance parameter value defined above. Clusters identi-
fied in this process can be merged. Since the number of clusters will be fairly small
in comparison to the number of sensor measurements, the correction step will not
be terribly time consuming. During experiments on the ACT vehicle, this correc-
tion was rarely required in practice.
After the above code returns a prominent line, the support points that are within some minimum distance of this line can be flagged as "used" and the above function called iteratively until the number of support points drops below some threshold. After a set of lines is built up in this way, each line may be broken into one or more line segments.
As mentioned above, the obvious choice for the cluster's position, the cluster centroid, presents problems for a tracking filter. Nevertheless, in some cases it may be the only available measure. We give one possible approach to handle this problem.

If no suitable linear features are found within the cluster, the tracker attempts to track the cluster centroid, which is given by
X_centroid = (1/n) Σ x_i

Y_centroid = (1/n) Σ y_i
If a small (<80 cm) linear feature is found in the cluster, such as might be a return from part of the front or back side of a vehicle, a center point is found by extending a line in the direction away from the ego vehicle for a distance of 2 meters, as shown in Figure 4.40(a). This type of center is called the line midpoint. The assumption behind the line midpoint center is that, while the cluster is probably a vehicle, the small linear feature, representing only a small part of the car, presents an unreliable orientation. If a linear feature is found that is long enough to represent the side of a car, the car side center case derives the vehicle's center location from the linear feature as shown in Figure 4.40(b). The car side center is located 1 meter
from the midpoint of the linear feature. Similarly, if a linear feature is found that is
long enough to represent a significant portion of the front or back side of a vehicle,
(a)
(b)
Figure 4.40 (a) Line midpoint center, and (b) car side center.
128
Sensors, Estimation, and Sensor Fusion
the car end center point case identifies the center point as located 2 meters from
the midpoint of the linear feature. Once a track has been classified as a vehicle, its
classification can be fixed regardless of later changes in observed features.
The target vehicle's orientation can be estimated by analyzing the direction of the track's velocity vector and the direction of the cluster's linear features. The track's velocity vector has the advantage of being accurate when the target is moving. However, when the vehicle is not moving, the orientation estimate must rely on the direction of the cluster's longer linear features. The cluster's linear features may represent the sides or ends of a vehicle. However, with a prior estimate for the vehicle orientation the forward/backward ambiguities of the linear features may be resolved. Pseudo-code for finding an orientation estimate from the geometry of the linear features is:
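A plausible sketch of such a procedure (our assumption, not the deployed ACT pseudo-code): take the direction of the longest linear feature and resolve its quarter-turn ambiguity (side versus end, forward versus backward) against the prior estimate:

```python
import math

def wrap(angle):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))

def orientation_from_features(features, prior):
    """Estimate target heading from cluster linear features.

    features: list of (length_m, direction_rad) line segments.
    prior: previous heading estimate (rad), used to resolve the
    side/end and forward/backward ambiguities.
    """
    if not features:
        return prior
    # The longest feature is the most reliable direction cue.
    _, direction = max(features)
    # A feature may be a side (0 or 180 deg from the heading) or an
    # end (90 or 270 deg); choose the candidate nearest the prior.
    candidates = [wrap(direction + k * math.pi / 2) for k in range(4)]
    return min(candidates, key=lambda c: abs(wrap(c - prior)))

# A 4-m feature nearly aligned with a prior heading of 0 rad wins out
# over its quarter-turn ambiguities:
est = orientation_from_features([(4.0, 0.1), (0.7, 1.6)], prior=0.0)
```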
Once a direction estimate has been extracted from the linear features, this lin-
ear feature direction estimate may be used to update the final orientation estimate
in situations where the direction estimate obtained by looking at the targets veloc-
ity is poor. Specifically, the orientation can be taken to be the direction of the veloc-
ity estimate whenever the speed is above a chosen speed (2 meters per second in
the deployed ACT algorithm) while the geometry-based direction is used at slower
speeds.
mapping that minimizes the total assignment cost drawn from a cost matrix A, where
element aij is the cost of associating cluster i with track j. The mapping can be
found using the linear assignment algorithm described in [21]. A simple choice for
the cost of associating cluster i with track j is the distance between the centroid of
cluster i and the estimated position of track j. However, the ad hoc association cost
used in the deployed ACT algorithm was:
aij = 100·Cdist + 100/Np + 60·|Sc − St|,  for Np > 0
aij = 100·Cdist + 100 + 60·|Sc − St|,  for Np = 0
where Cdist is the distance between cluster i and track j in meters, Np is the number
of points in track j, Sc is the extent of cluster i in meters, and St is the extent of
the last cluster that updated track j in meters. The motivation for the 100/Np term
is to make associating an old track with a cluster easier than associating a new
track. The motivation for the |Sc − St| term is to make it easier to associate tracks
and clusters of similar sizes. Note that, generally, there are not equal numbers of
clusters and tracks, so the matrix A is padded with entries of maximum cost value
such that it is square.
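The padding trick can be combined with any assignment solver. A small self-contained sketch follows; it uses brute-force enumeration rather than the shortest augmenting path method of [21], so it is only suitable for the handful of tracks typical of a single scene:

```python
from itertools import permutations

BIG = 1e9  # padding cost for dummy rows/columns

def pad_square(cost):
    """Pad a rectangular cost matrix with BIG entries so it is square."""
    n = max(len(cost), len(cost[0]))
    out = [[BIG] * n for _ in range(n)]
    for i, row in enumerate(cost):
        for j, c in enumerate(row):
            out[i][j] = c
    return out

def assign(cost):
    """Minimum-cost cluster-to-track assignment (O(n!) brute force).

    Because the padded matrix is square, every permutation pays for the
    same fixed number of dummy pairings, so minimizing the total still
    finds the best real pairing.
    """
    a = pad_square(cost)
    n = len(a)
    best = min(permutations(range(n)),
               key=lambda p: sum(a[i][p[i]] for i in range(n)))
    # Report only real (unpadded) cluster-track pairs.
    return [(i, j) for i, j in enumerate(best)
            if i < len(cost) and j < len(cost[0])]
```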
The tracking filter could involve instantiating a complete Kalman filter for each
track. However, a computationally simpler approach is to use a simple variable
gain Benedict-Bordner [22] filter, also known as an alpha-beta filter, to estimate the
current states of the track. These current track states are then used to match tracks
to the current set of measurement clusters. The coefficients in the filter's update
equations can be changed heuristically according to the changing classification
status of the cluster.
The Benedict-Bordner track update equations are:
v̂x = vx + h·(px_meas − px)/T
v̂y = vy + h·(py_meas − py)/T
p̂x = px + T·vx + g·(px_meas − px)
p̂y = py + T·vy + g·(py_meas − py)
where (vx, vy, px, py) are the current velocity and position of the target,
(v̂x, v̂y, p̂x, p̂y) are the updated velocity and position of the target, T is the sample
period, and (px_meas, py_meas) is the centroid of the updating cluster. The values g
and h are filter parameters, which can be heuristically modified during the evolution
of the track.
The point within the cluster that is being tracked can change as the track
evolves. However, all of the various types of tracking points described above at-
tempt to approximate the center of the target. The least desirable tracking point
is the cluster centroid, while more accurate tracking points attempt to use a line
segment representing the side or end of a vehicle. The g and h parameters control
the degree to which the filter will rely on internal states when updating. For the
algorithm deployed in ACT, the parameters for various target center classifications
are shown in Table 4.3. Note that if a vehicle has recently changed states, it may be
prudent to not update the velocity estimate at that time (essentially this is h = 0).
This allows the large artificial velocity that would be generated during center type
transitions to be ignored.
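The update equations can be collected into a small routine. The sketch below assumes the state layout shown and is not the deployed implementation:

```python
def bb_update(state, meas, T, g, h):
    """One Benedict-Bordner (alpha-beta) track update.

    state: (px, py, vx, vy), the current track position and velocity.
    meas:  (px_meas, py_meas), the centroid of the updating cluster.
    T: sample period; g, h: filter gains. Setting h = 0 freezes the
    velocity estimate, e.g. right after a center-type transition.
    """
    px, py, vx, vy = state
    pxm, pym = meas
    # Velocity correction from the position residual.
    vx_new = vx + h * (pxm - px) / T
    vy_new = vy + h * (pym - py) / T
    # Position prediction plus measurement correction.
    px_new = px + T * vx + g * (pxm - px)
    py_new = py + T * vy + g * (pym - py)
    return (px_new, py_new, vx_new, vy_new)
```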
All the bins in zbuffer are initialized to very large values prior to any calls
to add_point_to_z_buffer. After add_point_to_z_buffer has been called for every
returned LIDAR point, zbuffer can be used to find the occlusion status of the left and
right sides of each cluster with the following routine, described as pseudo-code:
proc foreground(current_cluster)
    find current_cluster's maximum angle as a zbuffer bin index; call it i
    find current_cluster's minimum angle as a zbuffer bin index; call it j
    check zbuffer from bin i to bin i+4 to see if any objects are closer;
        if so, mark as RIGHT_SIDE_OCCLUDED
    check zbuffer from bin j to bin j-4 to see if any objects are closer;
        if so, mark as LEFT_SIDE_OCCLUDED
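A hedged Python realization of this scheme, including the add_point_to_z_buffer routine the text refers to, might look as follows; the bin count and the skipping of the cluster's own bin are assumptions of this sketch:

```python
import math

NBINS = 720                      # assumed half-degree bins, not the ACT value
BIN = 2 * math.pi / NBINS

def add_point_to_z_buffer(zbuffer, x, y):
    """Keep the closest LIDAR return seen in each angular bin."""
    r = math.hypot(x, y)
    i = int((math.atan2(y, x) % (2 * math.pi)) / BIN)
    if r < zbuffer[i]:
        zbuffer[i] = r

def occlusion_status(zbuffer, cluster):
    """cluster: list of (x, y) points; returns (left_occluded, right_occluded).

    Follows the pseudo-code's convention that the maximum-angle edge is the
    'right' side. The edge bin itself is skipped so the cluster's own
    points do not mark it as occluded.
    """
    angles = [math.atan2(y, x) % (2 * math.pi) for x, y in cluster]
    rng = min(math.hypot(x, y) for x, y in cluster)
    i = int(max(angles) / BIN)
    j = int(min(angles) / BIN)
    right = any(zbuffer[(i + k) % NBINS] < rng for k in range(1, 5))
    left = any(zbuffer[(j - k) % NBINS] < rng for k in range(1, 5))
    return left, right
```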
The occlusion status of the cluster can then be used to inform track initialization
decisions and to modify the values of the g, h parameters for the tracking filter. For
instance, pseudo-code for the trackable_object method used to decide whether
to initialize a track for a cluster follows:
Tracks are interpreted as vehicles because the cluster that updated the track possessed
linear features of the appropriate length at some point in the track's evolution. For
instance, Figure 4.42 shows a vehicle, track ID 32, stopped at an intersection in
front of the ego vehicle.
The term situation is defined to be knowledge concerning the vehicle and/or the pre-
vailing scenario and surroundings. On the cognitive level, the achievement of situ-
ation awareness is considered an important open issue by many researchers [23].
Vehicular sensing should not be restricted to conventional metrology that acquires
a couple of parameters from the scene, such as position and speed of other vehicles.
Instead, a transition to consistent scene understanding is desirable. This requires a
consistent propagation of knowledge and measures for its confidence throughout
the perception chain as depicted in Figure 4.43. An ambitious goal is to expressively
formulate the ambiguity of knowledge at every level that might stem from sensor
noise, the signal interpretation process, or the ambiguity of previous levels. Thus,
safety measures can be assigned to any potential behavior at the control level con-
sidering all previous processing. In the reverse direction, selective information can
be required from previous processing units (e.g., sensor attention may be directed
towards the most relevant objects in the scene). The numerous closed loops in this
structure may motivate concerns and require a theory on perception stability.
For an off-road scenario, the situation is always path following with obstacle
and collision avoidance. The behavior and control algorithms are required to ana-
lyze the occupancy grid map, identify obstacles that may block the current path,
and adjust the planned path as needed.
For an on-road scenario, we are interested in all the targets in our path and
the targets in surrounding lanes or on roads intersecting our lane. We are not in-
terested in targets that do not affect the current situation and planned behavior.
While an autonomous vehicle is navigating through an urban environment, many
different situations may arise. The situations may vary if the vehicle is on a one-
lane road, on a two-lane road, at an intersection, and so on. Particularly critical
for an autonomous vehicle are those situations related to intersections. When a car
is approaching an intersection, it must give precedence to other vehicles already
stopped. If the intersection is not a four-way stop, the vehicle must cross or merge
safely in the presence of oncoming traffic. If other vehicles are stationary for a long
time, the car must decide whether those vehicles are showing indecisive behavior.
Other situations may involve road blockage in which the vehicle might carefully
perform a U-turn, park in parking spaces, and deal with dangerous behavior from
other vehicles. All these situations must be identified and evaluated and the result-
ing conclusions transmitted to the high-level controller in order for the vehicle to
operate properly.
From a practical viewpoint, situations and events are the switching conditions
among metastates and all the substates inside the high-level behavior control state-
machines. Thus, the aim of situation analysis is to provide the high-level control-
ler with all the switching conditions and events in a timely manner. The situation
analysis software must analyze the current vehicle state, the current and upcoming
required behavior for the route plan, the map database, and the sensor data to
identify specific situations and conditions that are relevant to the vehicle's immediate
and planned behavior. In order to reduce computational costs and complexity,
only the situations related to the current metastate or substates, as provided by the
high-level control software, should be checked.
Figure 4.44 shows the algorithm flow of a situation analysis (SA) software
module. In the case of the algorithm deployed on the ACT vehicle, the high-level
control is developed and implemented as a nested finite state machine with mul-
tiple metastates. The high-level control executes on a different processor. To de-
termine the conditions for changing states within the metastates, the high-level
control sends network UDP packets to the computer on which the sensor fusion
and situational analysis algorithms are executing requesting certain conditions and
providing information about the current state and the current metastate. As shown
in Figure 4.44, the first activity is to generate lane and intersection models as the
vehicle moves through the world. After this, requests from the high-level controller
are received and the requested tests are executed. Finally, a network UDP packet
is sent back to the high-level control algorithm containing a reply to the requested
conditions.
In many cases the output of the situation analysis algorithm is a series of Bool-
ean (true/false) response values. For example, if the vehicle is currently driving
down a road and about to perform a passing operation, the current state might be
"give turn signal and check passing lane." The situation analysis algorithm might
then be asked to evaluate three conditions: is the passing lane temporarily occupied,
is the passing lane permanently occupied, and is there sufficient space to pass
before an intersection?
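The request/reply exchange might be sketched as below. The condition names, thresholds, and JSON encoding are assumptions for this sketch, since the actual ACT message format is not given here:

```python
import json
import socket

def evaluate_conditions(request, world):
    """Answer each requested condition with a boolean.

    request: {"conditions": [name, ...]} as decoded from a UDP packet.
    world: current situational-analysis estimates (illustrative keys).
    """
    checks = {
        "passing_lane_temporarily_occupied":
            lambda w: w["passing_lane_clear_for_s"] < 10.0,
        "passing_lane_permanently_occupied":
            lambda w: w["passing_lane_blocked"],
        "space_to_pass_before_intersection":
            lambda w: w["dist_to_intersection_m"] > 80.0,
    }
    return {name: bool(checks[name](world)) for name in request["conditions"]}

def send_reply(sock: socket.socket, addr, request, world):
    """Encode the boolean answers and send them back to the controller."""
    sock.sendto(json.dumps(evaluate_conditions(request, world)).encode(), addr)
```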
Figure 4.45 GIS map with waypoints and actual path traveled.
and magnitude. However, the spline is not C2 continuous. The second derivative
is linearly interpolated within each segment causing the curvature to vary linearly
over the length of the segment. This is illustrated in Figure 4.46.
To determine the sharpness of a turn, three waypoints can be used to create
two vectors. The normalized dot product of these two vectors gives the cosine
of the angle between them. If the absolute value of the cosine is less than
some value (0.9 in the deployed ACT algorithm, representing an angle of greater
than 25°), a simple linear connection can be employed; otherwise a Catmull-Rom
spline can be used to connect the waypoints. The equation for the Catmull-Rom
spline requires four control points: one waypoint before the beginning of the turn,
one waypoint at the start of the curve, one waypoint at the end of the curve, and one
waypoint after that. Using these control points, the equation for the x component
of the spline is:
Xsample = (1/2)·[(−t + 2t² − t³)·Xwpi−1 + (2 − 5t² + 3t³)·Xwpi
          + (t + 4t² − 3t³)·Xwpi+1 + (−t² + t³)·Xwpi+2]
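The spline equation can be evaluated directly. A short sketch follows; note that at t = 0 it reproduces the waypoint at the start of the curve and at t = 1 the waypoint at its end, which is the defining property of the Catmull-Rom form:

```python
def catmull_rom_x(t, x0, x1, x2, x3):
    """X component of the Catmull-Rom spline for t in [0, 1].

    The curve runs from control point x1 (t = 0) to x2 (t = 1);
    x0 and x3 are the neighboring waypoints that shape the tangents.
    """
    return 0.5 * ((-t + 2 * t**2 - t**3) * x0
                  + (2 - 5 * t**2 + 3 * t**3) * x1
                  + (t + 4 * t**2 - 3 * t**3) * x2
                  + (-t**2 + t**3) * x3)
```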
Given two consecutive sample points (x1, y1) and (x2, y2) along the lane center,
the slope and intercept of the segment through them are

Slope = (y1 − y2)/(x1 − x2)
B = y1 − Slope·x1

and the perpendicular line through the segment midpoint, y = −x/Slope + Offset,
has intercept

Offset = (x1 + x2)/(2·Slope) + (y1 + y2)/2

Intersecting this perpendicular with a circle of radius 2 meters (half a nominal
4-meter lane width) centered at the midpoint yields a quadratic α·x² + β·x + γ = 0
with coefficients

α = 1 + 1/Slope²
β = −(x1 + x2) − 2·Offset/Slope + (y1 + y2)/Slope
γ = ((x1 + x2)/2)² + Offset² + ((y1 + y2)/2)² − Offset·(y1 + y2) − 4

so that one lane-edge point is

Xleft = (−β + √(β² − 4·α·γ))/(2·α)
Yleft = −Xleft/Slope + Offset
With the normal and the lane widths defined in the map database, four new
points can be created orthogonal to the samples and half a lane width away. A sample
result is shown in Figure 4.47. To compensate for the missed and overlapping
areas shown in Figure 4.47, the intersection of the projections of each polygon's left
and right sides can be calculated, thereby converting rectangles to trapezoids to
eliminate overlapping and to fill missed areas. This result is shown in Figure 4.48.
The individual polygons are called road stretches.
We also need to generate a model of any lanes that are adjacent to the current
lane, as they could be used as a passing lane at some point. To generate these lanes,
the closest waypoints next to the current lane waypoints are found and the lane is
sampled in the same fashion as the current lane. To create the polygons, however,
the current lane's stretches are extended and the intersection with the new lane
is found. This allows one to link the current lane stretches to the adjacent lane
Figure 4.47 Rectangles in road model generation with overlapping and missing regions.
Figure 4.48 Road model with trapezoid corrections.
stretches regardless of how the original map database waypoints that define that
lane were constructed. These adjacent lanes will be used in cases where passing or
U-turns are required. A sample result is shown in Figure 4.49. This strategy is pref-
erable to simply using the lane center waypoints for the adjacent lane, as they may
not align with those of the existing lane and various anomalies, such as overlapping
lane stretches or empty areas between lane stretches, may arise. Finally, one must
consider situations when passing is not allowed.
After these road stretches are created we can begin looking for relevant ob-
stacles in our lane. Generally, it is convenient to construct only stretches close to the
vehicle and to repeat the process as the vehicle travels down the road.
Generating a road as a set of polygons allows extension of the current imple-
mentation with information coming from different sources. This compares favor-
ably with many algorithms proposed in the literature to extract the edges of lanes
[4, 28, 29] with a prescribed accuracy.
4.6.4 Primitives
Some computational geometric primitives are applicable to the situation analysis
task. To deal with many situations, we must be able to determine whether an obstacle
is inside a lane. To handle this case, algorithms for finding points inside polygons have
been carefully considered. Many of these algorithms can be found in the
computer graphics literature [30–32]. Most of them are very simple to implement
and have a low computational cost, typically O(n log n) or O(n²), with n the number
of vertices in the polygon.
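One classic containment test is ray casting, which simply counts how many polygon edges a horizontal ray from the query point crosses; a sketch:

```python
def point_in_polygon(px, py, poly):
    """Ray-casting point-in-polygon test, O(n) in the number of vertices.

    poly: list of (x, y) vertices in order (closed implicitly).
    Casts a horizontal ray to the right of (px, py) and counts edge
    crossings; an odd count means the point is inside.
    """
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does this edge straddle the ray's y coordinate?
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside
```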
Figure 4.51 Road stretches (#1–#6) with stretch sample points, tracks #1 and #2, a track centroid, and track features.
Since we define the target cluster geometry with a set of lines, one may also want
to check whether at least one line is inside a polygon. One approach to this problem is to
determine whether any of the line segments composing the track intersect (cross)
the boundary of the polygon. We must also identify the case where an entire line segment
is within the polygon by checking for endpoints lying inside the polygon as
described above. As part of this operation, we can determine which side of the
polygon the track line crosses or is near, and the normal distance from the edge of
the polygon.
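These two checks, edge crossing plus endpoint containment, might be combined as in the sketch below (proper intersections only; touching or collinear cases would need extra care):

```python
def _ccw(ax, ay, bx, by, cx, cy):
    # Twice the signed area of triangle a-b-c; the sign gives orientation.
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

def _segments_cross(p1, p2, q1, q2):
    """Proper intersection test for segments p1-p2 and q1-q2."""
    d1 = _ccw(*q1, *q2, *p1)
    d2 = _ccw(*q1, *q2, *p2)
    d3 = _ccw(*p1, *p2, *q1)
    d4 = _ccw(*p1, *p2, *q2)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def _inside(pt, poly):
    """Ray-casting containment test for a single point."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and \
                x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def track_line_in_stretch(p1, p2, poly):
    """True if the track segment p1-p2 crosses the stretch polygon's
    boundary or lies entirely within it."""
    n = len(poly)
    crosses = any(_segments_cross(p1, p2, poly[i], poly[(i + 1) % n])
                  for i in range(n))
    return crosses or _inside(p1, poly) or _inside(p2, poly)
```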
Figure 4.52 (a–d) First left turn in area A tests.
To determine which tracks are on the road, one can first iterate through each track
and determine whether its centroid is within the stretch's polygon or within 4 meters
of the stretch's sample points (the two points at the center of the road). Four meters
was chosen in the deployed ACT algorithm since it is a typical car length and double
a stretch length; this means that a stretch will not associate with a track whose
centroid is more than two stretches away. After linking a relevant track to a stretch
number, one can determine whether the track is a blocking or nonblocking obstacle.
To do this, the situation analysis algorithm considers the track features and
lines. The simplest case is when the track contains no lines, only a centroid. In this
case, the normal to the right side of the stretch that passes through the centroid
can be calculated and the length of this normal taken as the offset distance.
In addition, the distance from the autonomous vehicle's current position to the
obstacle's position can be computed by iterating from the current stretch to the
obstacle's stretch, adding their lengths together, and finally adding the distance from
the obstacle's stretch to the obstacle's centroid.
Figure 4.53 (a–f) Second left turn in area A tests.
If the track has line segments, one can determine whether the lines intersect the left
or right edges of the stretch. If the track's line segments intersect both sides of a
stretch, the track is identified as a blocking obstacle with an offset distance equal
to the lane width, and the distance to the current vehicle position is calculated as
described above. If the track's line segments intersect one of the stretch's sides, one
can determine which side it crosses and whether that line crosses the front or back of
the stretch. If the track's lines intersect one side of the stretch but do not intersect
the front or back, then one can determine which endpoint is contained within
the stretch and calculate the normal distance from the side that is intersected to the
endpoint (similar to the no-line case). If the track's lines cross a side and the front
or back, the intersection with the front or back can be treated like the endpoint in the
previous case. Finally, if none of the track's lines intersect, one can determine whether
the lines are contained entirely within the stretch by testing whether both endpoints of
the line are contained within the stretch's polygon.
After the offset distance of a track and the distance to the object are calcu-
lated, the situation analysis algorithm will continue classifying the track in the next
stretch. A larger offset distance dominates. For example, in Figure 4.51, track #1
would first be classified in stretch 3 and later in stretch 4. However, the distance
to the obstacle would be recorded as the distance to the intersection in stretch 3
so that the high-level control algorithms can attempt to dodge the obstacle, if pos-
sible, and reach a safe offset distance well before the maximum offset distance of
the obstacle is reached.
References
[1] http://www.navcen.uscg.gov.
[2] http://www.faa.gov/about/office_org/headquarters_offices/ato/service_units/techops/
navservices/gnss/gps/.
[3] Redmill, K. A., "A Simple Vision System for Lane Keeping," Proc. 1997 IEEE Conference
on Intelligent Transportation Systems, Boston, MA, November 1997, pp. 212–217.
[4] Redmill, K. A., et al., "A Lane Tracking System for Intelligent Vehicle Applications," Proc.
2001 IEEE ITSC, Oakland, CA, August 25–29, 2001, pp. 273–279.
[5] Bertozzi, M., et al., "Artificial Vision in Road Vehicles," Proceedings of the IEEE, Vol. 90,
No. 7, July 2002, pp. 1258–1271.
[6] Farkas, D., et al., "Forward Looking Radar Navigation System for 1997 AHS Demonstration,"
Proc. 1997 IEEE Conference on Intelligent Transportation Systems, Boston, MA,
November 1997, pp. 672–675.
[7] Lewis, F. L., L. Xie, and D. Popa, Optimal and Robust Estimation: With an Introduction
to Stochastic Control Theory, Boca Raton, FL: CRC Press, 2008.
[8] Thrun, S., W. Burgard, and D. Fox, Probabilistic Robotics, Cambridge, MA: MIT Press,
2005.
[9] Redmill, K. A., T. Kitajima, and Ü. Özgüner, "DGPS/INS Integrated Positioning for Control
of Automated Vehicles," Proc. 2001 IEEE Intelligent Transportation Systems Conference,
August 25–29, 2001, pp. 172–178.
[10] Xiang, Z., and Ü. Özgüner, "A 3D Positioning System for Off-Road Autonomous Vehicles,"
Proc. 2005 IEEE Intelligent Vehicle Symposium, June 6–8, 2005, pp. 130–135.
[11] Martin, M. C., and H. P. Moravec, "Robot Evidence Grids," Carnegie Mellon Robotics
Institute Technical Report CMU-RI-TR-96-06, 1996.
[12] Hall, D. L., and J. Llinas, "An Introduction to Multisensor Data Fusion," Proceedings of
the IEEE, Vol. 85, No. 1, January 1997, pp. 6–23.
[13] Bar-Shalom, Y., and X. Li, Multitarget-Multisensor Tracking: Principles and Techniques,
Storrs, CT: University of Connecticut, 1995.
[14] Foresti, G. L., and C. S. Regazzoni, "Multisensor Data Fusion for Autonomous Vehicle
Navigation in Risky Environments," IEEE Trans. on Vehicular Technology, Vol. 51, No.
5, 2002, pp. 1165–1185.
[15] Redmill, K. A., J. Martin, and Ü. Özgüner, "Sensing and Sensor Fusion for the 2005 Desert
Buckeyes DARPA Grand Challenge Offroad Autonomous Vehicle," Proc. IEEE Intelligent
Vehicles Symposium, June 13–15, 2006, pp. 528–533.
[16] Hummel, B., et al., "Vision-Based Path-Planning in Unstructured Environments," Proc.
IEEE Intelligent Vehicle Symposium, June 2006, pp. 176–181.
[17] Dang, T., and C. Hoffman, "Fast Obstacle Hypothesis Generation Using 3D Position and
3D Motion," Proc. IEEE International Workshop on Machine Vision for Intelligent Vehicles,
June 2005.
[18] Abrash, M., Graphics Programming Black Book, Coriolis Group Books, 1997, http://
archive.gamedev.net/reference/articles/article1698.asp.
[19] Galler, B. A., and M. J. Fischer, "An Improved Equivalence Algorithm," Communications
of the ACM, Vol. 7, No. 5, May 1964, pp. 301–303.
[20] MacLachlan, R., "Tracking Moving Objects from a Moving Vehicle Using a Laser Scanner,"
Carnegie Mellon Technical Report CMU-RI-TR-05-07, 2005.
[21] Jonker, R., and A. Volgenant, "A Shortest Augmenting Path Algorithm for Dense and Sparse
Linear Assignment Problems," Computing, Vol. 38, 1987, pp. 325–340.
[22] Benedict, T. R., and G. W. Bordner, "Synthesis of an Optimal Set of Radar Track-While-
Scan Smoothing Equations," IRE Transactions on Automatic Control, Vol. AC-7, July
1962.
[23] Nagel, H. H., "Steps Toward a Cognitive Vision System," AI Magazine, Vol. 25, No. 2,
2004, pp. 31–50.
[24] Albus, J., et al., "Achieving Intelligent Performance in Autonomous Driving," National
Institute of Standards and Technology, Gaithersburg, MD, October 2003.
[25] Catmull, E. E., and R. J. Rom, "A Class of Local Interpolating Splines," in Computer Aided
Geometric Design, R. E. Barnhill and R. F. Riesenfeld, (eds.), Orlando, FL: Academic Press,
1974, pp. 317–326.
[26] Barry, P., and R. Goldman, "A Recursive Evaluation Algorithm for a Class of Catmull-
Rom Splines," Computer Graphics (SIGGRAPH '88), Vol. 22, No. 4, 1988, pp. 199–204.
[27] Wang, Y., D. Shen, and E. K. Teoh, "Lane Detection Using Catmull-Rom Spline," Proc.
1998 IEEE International Conference on Intelligent Vehicles, 1998.
[28] Schreiber, D., B. Alefs, and M. Clabian, "Single Camera Lane Detection and Tracking,"
Proc. 2005 IEEE Intelligent Transportation Systems, 2005.
[29] Kim, S., S. Park, and K. H. Choi, "Extracting Road Boundary for Autonomous Vehicles Via
Edge Analysis," Signal and Image Processing SIP 2006, Honolulu, HI, July 2006.
[30] Haines, E., "Point in Polygon Strategies," in Graphics Gems IV, P. Heckbert, (ed.), New
York: Academic Press, 1994, pp. 24–46.
[31] Preparata, F. P., and M. I. Shamos, Computational Geometry: An Introduction, New
York: Springer-Verlag, 1985.
[32] O'Rourke, J., Section 7.4, "Point in Polygon," in Computational Geometry in C, 2nd ed.,
New York: Cambridge University Press, 1998.
CHAPTER 5
Examples of Autonomy
5.1.1 Background
Cruise control is basically a throttle feedback loop, whereas ABS is a brake control
feedback loop. From the pure control viewpoint, the realization of an intelligent
vehicle would require two additional capabilities: the ability to jointly control mul-
tiple loops and the ability to close the third loop, steering.
In general, studies on the design of automated vehicle systems involve the solution
of two decoupled control problems: steering control (lane keeping) and
longitudinal (headway) control. Longitudinal control (i.e., speed control: cruising
at the selected speed, or keeping the relative speed and position of the controlled
vehicle with respect to the lead vehicle at a safe distance in highway traffic)
constitutes the basis for current and future advanced automated automotive
technologies. The development of a longitudinal controller for headway regulation
and a supervisory hybrid controller capable of switching among different control
actions was proposed by Hatipoglu, Özgüner, and Sommerville [1]. The longitudinal
car model was built by assembling submodels of the engine, torque converter,
transmission, brake, and vehicle body dynamics in the longitudinal direction. The
headway controller was presented by introducing the dynamics of the relative ve-
locity, the relative distance, and the highway safety distance. A controller with
constant acceleration (or deceleration), smooth acceleration (or deceleration), and
linear state feedback was presented, leading to an intelligent cruise control (ICC)
hybrid model. ICC hybrid modeling with decision rules and boundaries, and
phase-plane analysis of relative distance versus relative velocity in different control
regions, were introduced. A real-time Kalman filter and its update law were designed
to estimate the relative distance and velocity sensed by a laser rangefinder.
Possible discrete jumps due to the quantized sensor reading and the process
noise were eliminated by the designed filter. The control methodology, including a
collision region and conventional cruise control when no obstacle/vehicle is detected
in the sensing range, was validated by simulation and experimental results [1].
Intelligent cruise control theory and its applications have been investigated by
many researchers. Using the dynamics of the relative velocity and relative distance,
a PID controller with fixed gains, gain scheduling of the PID controller, and an
adaptive controller were considered for headway regulation by Ioannou and Xu [2].
Basically, the transfer function between the deviation of the vehicle speed and the
throttle angle was approximated around the steady-state operating point, and the
dynamics of the relative speed and distance of the leading and following vehicles
were driven to zero using the considered controllers. Feedback control for the braking
dynamics was proposed using feedback linearization, and the effects of brake
torque, static friction forces, rolling friction forces, and aerodynamic forces
were suppressed. Switching logic between the brake and throttle controllers was
presented by dividing the operating region of the vehicles into three possible situations.
Vehicle following in a single lane without passing the leading vehicle was studied
by Ioannou and Chien [3]. Replacement of the human driver by an autonomous
cruise controller was shown to enhance traffic flow. Different driver models
were considered for stand-alone cruise controller applications in which no information
was exchanged between vehicles but all of the vehicles were equipped with the
proposed cruise controllers. Cooperative adaptive cruise controller design, in which
intervehicle communication is added to the adaptive cruise controller, was investigated
by Lu and Hedrick [4]. Relative distance and velocity with respect to the lead vehi-
cle measured by radar were used as the variables to be regulated by the considered
dynamic sliding mode controllers.
The challenges of cruise control systems arise from the complicated and com-
plex mechanical, electromechanical, or electrical actuator systems. Vehicle actuator
systems are throttle and engine controller systems, brake systems, and transmission
control systems. These systems are in general nonlinear, involving nonsmooth
nonlinearities in the input-output behavior or in the derived analytical models. The
implementation and comparison of nonlinear controllers for the longitudinal regulation
of cars in platooning was presented by Lu, Tan, and Hedrick [5]. Real-time tests
were accomplished by restructuring the original Demo '97 code developed by the
PATH program with the considered nonlinear control methodologies. A comparison
of spacing and headway control methodologies for automated vehicles in a platoon
was examined by Swaroop et al. [6]. A three-state lumped-parameter longitudinal model
of a vehicle was presented before detailed analysis of constant spacing control and
constant headway control. Individual vehicle stability was shown to be established
straightforwardly, whereas string stability in a platoon was shown to require
intervehicle communication for constant spacing control. Constant headway control was
capable of offering string stability without intervehicle communication of the relative
position of each vehicle with respect to the lead vehicle. Shladover reviewed the state of
development of advanced vehicle control systems in [7]. The contributions of The
Ohio State University to both steering and longitudinal (spacing) control under
the sponsorship of the Ohio Department of Highways and the Federal Highway
Administration, the PATH (Partners for Advanced Transit and Highways) program
of the University of California, Berkeley, and the Personal Vehicle System (PVS) and
Super-Smart Vehicle Systems (SSVS) programs under the sponsorship of the Ministry
of International Trade and Industry, Japan, were described. Tsugawa et al. presented
an architecture for cooperative driving of automated vehicles using intervehicle
communications and intervehicle gap measurement. The proposed architecture and its
layers were investigated in detail, followed by real-time experiments [8].
Figure 5.1 (a) Point mass simplified vehicle model, and (b) longitudinal vehicle model for cruise
control [1].
variables.
The velocities in the longitudinal and lateral directions, which are given by
ẋ and ẏ in Chapter 2, are replaced by their equivalent symbols u and v, respectively
(i.e., ẋ = u, ẏ = v).
A linear longitudinal speed controller is derived by using the vehicle acceleration
according to Newton's law,

M·(du/dt) = F − Fload  (5.1)

where F denotes the net force generated by the engine model and transformed by
the transmission model following the switching strategies as a function of the actual
and requested speed values. Fload represents the aerodynamic drag force and the possible
load force created by the road inclination. It may be written as the sum of two load
forces resisting the force created by the engine:

Fload = ca·u² + M·g·sin(θ)  (5.2)

where ca is an aerodynamic drag coefficient and θ is the road inclination angle.
The force accelerating the vehicle may be converted into the generated engine
torque in terms of the transmission gain as F = (Ki/Re)·Te, where Ki, i = 1, 2, 3, is the
gain of the transmission, Re denotes the effective radius of the wheel, and Te denotes
the torque generated by the engine model. The vehicle speed is derived in
terms of engine torque, transmission gain, road, and aerodynamic loads,
du/dt = (Ki/(Re·M))·Te − (ca/M)·u² − g·sin(θ)  (5.3)
Choosing a PI control law for the engine torque, with a term canceling the aerodynamic load,

Te = Kp·(uref − u) + (Kp/TI)·∫0t (uref − u) dτ + (ca·Re/Ki)·u²  (5.4)

the closed-loop speed dynamics become

du/dt = (Ki/(Re·M))·[Kp·(uref − u) + (Kp/TI)·∫0t (uref − u) dτ] − g·sin(θ)  (5.5)
achieving zero steady-state error, e = uref − u, between the reference and actual
speed of the vehicle. The controller parameters Kp and TI are tuned to
obtain the desired transient performance. The steady-state error is guaranteed to
be zero through the integral term in the PI controller. The transient performance
exhibited during acceleration or deceleration maneuvers of the vehicle model
is important for ride comfort, driving performance, and fuel consumption
economy. The integral time constant TI may be tuned for a fast transient response
without exceeding the reference value. If a large value is chosen, a slow transient may
occur; if a small value is chosen, a fast transient with an overshoot followed by an
oscillatory response may appear in the time response of the vehicle speed. The
proportional gain Kp, on the other hand, is chosen to assure the desired change in
the time response of the vehicle speed while not exceeding the service limits of 0.2g
acceleration with a jerk of 0.2 g·s⁻¹ for ride comfort and fuel consumption economy.
A higher gain may cause higher acceleration or deceleration.
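The closed loop of (5.3) and (5.4) can be simulated to see this tuning trade-off. In the sketch below, all numerical values are illustrative assumptions for this example, not the parameters of [1]:

```python
import math

# Illustrative parameters (assumptions, not the values from [1]).
M, RE, KI = 1500.0, 0.3, 10.0    # mass [kg], wheel radius [m], gear gain
CA, G, THETA = 0.4, 9.81, 0.0    # drag coefficient, gravity, road grade [rad]
KP, TI = 900.0, 1.0              # PI gains
DT = 0.01                        # Euler integration step [s]

def simulate(u_ref, t_end, u0=0.0):
    """Euler simulation of the PI cruise loop of (5.3) and (5.4)."""
    u, integ = u0, 0.0
    for _ in range(int(t_end / DT)):
        e = u_ref - u
        integ += e * DT
        # Engine torque: PI action plus drag compensation, as in (5.4).
        te = KP * e + (KP / TI) * integ + (CA * RE / KI) * u * u
        # Longitudinal dynamics, as in (5.3).
        u += DT * ((KI / (RE * M)) * te - (CA / M) * u * u
                   - G * math.sin(THETA))
    return u
```

Raising Kp speeds up the initial response (at the cost of larger commanded acceleration), while shrinking TI strengthens the integral action and can introduce overshoot, matching the qualitative tuning guidance above.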
The generated engine torque accelerates the vehicle model. In order to decelerate the vehicle (i.e., when the driver's set point for vehicle speed is reduced), the vehicle speed is regulated to the new, lower reference speed value and the engine torque is replaced by braking torque. To achieve a smooth braking transient, different controller gains may be chosen, since no transmission or torque converter model exists between the brake and the vehicle model. The performance criteria remain the same: the service limits of 0.2g deceleration with 0.2 g/sec jerk for ride comfort may be achieved while the driver does not intervene in the longitudinal controller of the vehicle model.
154
Examples of Autonomy
ẇe = (1/Je) [Tc(α + αoffset, we) − Ta(we) − Tp(we, wt)]   (5.6)
where
α: throttle angle;
we: engine speed;
wt: torque converter turbine speed;
Je: engine inertia;
Tc: engine combustion torque;
Ta: engine accessory and load torque;
Tp: feedback from torque converter pump.
The nonlinear engine model may also be approximated by a static linear equation in terms of the throttle angle input and the engine angular speed, with different constants depending on the operating region where the torque equality is linearized (see Figure 5.2 for a static linear engine torque-speed equation parameterized by the throttle angle input).
An analytical equation for the linearized characteristics of engine combustion torque with respect to engine angular speed and throttle angle is given as

Tc(we, α) = C1 α − C2 we   (5.7)
where C1 and C2 denote the local slopes of engine torque-angular speed region with
respect to different throttle angle variations as illustrated in Figure 5.3 on the two-
dimensional plane.
Figure 5.2 Characteristics of engine combustion torque in terms of engine angular speed versus
throttle angle. (a) Torque-speed characteristics versus the pedal displacement. (b) Torque, speed,
and pedal displacement characteristics.
Figure 5.3 Linearized characteristics of engine combustion torque in terms of engine angular speed
versus throttle angle.
The torque converter couples the engine to the transmission, damping the torque pulses created by the individual cylinder firings in the engine, and smoothing the power transfer despite a varying load torque due to road variations such as bumps and potholes.
Two quadratic equations relating pump and turbine speed to pump and turbine
torque are used to model the torque converter [10]. These equations are linearized
at the operating pump and turbine speeds, we0 and wT0, respectively:

Tp = 2 m1 we0 we + m2 we0 wT + 2 m3 wT0 wT

TT = 2 n1 we0 we + n2 we0 wT + 2 n3 wT0 wT
where wT denotes turbine speed (rpm), TT denotes turbine torque (ft lbs), TP is
pump torque (ft lbs), and mi, ni are constant coefficients. The linearized torque converter model at the turbine and engine operating speeds can be used for the proposed linear longitudinal car model. The torque converter operates in three defined
modes depending on the ratio of turbine and engine speed. These modes are con-
verter, coupling, and overrun. The operation is determined by the speed ratio
SR = wT / we
The hybrid transmission model considers the different gains from the discrete
set {K1, K2, K3} and uses the continuous vehicle speed and throttle angle. When the
vehicle speed u crosses the upper or lower limits, represented in Figure 5.4 by the notations u0i+ and u0i−, respectively, a gear change, mathematically a discrete gain change Ki, i = 1, 2, 3, occurs. The switching logic is achieved by defining the switch sets

Ki,i+1 = {u ≥ u0i+},  u = (Re/Ki+1) we,  for i = 1, 2

Ki+1,i = {u ≤ u0i−},  u = (Re/Ki) we,  for i = 1, 2

where Re is the effective radius of the tire and Re we = u defines the transformation between the angular velocity and the longitudinal velocity of the vehicle.
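The switch sets above amount to hysteresis on the speed thresholds: the gear changes only when u crosses u0i+ (upshift) or u0i− (downshift). A sketch with assumed thresholds and gains:

```python
# Illustrative discrete gains {K1, K2, K3} and switching thresholds; not the
# book's values.
K = [3.5, 2.1, 1.2]          # transmission gains
u0_plus = [12.0, 22.0]       # upshift thresholds u01+, u02+ [m/s]
u0_minus = [9.0, 18.0]       # downshift thresholds u01-, u02- [m/s]

def next_gear(gear, u):
    """Return the 0-based gear index after applying the switch sets."""
    if gear < 2 and u >= u0_plus[gear]:       # K_{i,i+1} switch set (upshift)
        return gear + 1
    if gear > 0 and u <= u0_minus[gear - 1]:  # K_{i+1,i} switch set (downshift)
        return gear - 1
    return gear

gear = 0
for u in [5, 11, 13, 20, 23, 19, 17, 8]:
    gear = next_gear(gear, u)
print(gear)  # hysteresis keeps the gear until a threshold is crossed
```

Because the downshift threshold u0i− sits below the upshift threshold u0i+, small speed fluctuations near a threshold do not cause gear chattering.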
The longitudinal model involving all the different modes of the torque converter and the different gear gains of the transmission model is illustrated in Figure 5.5. The engine model and vehicle dynamics are assumed to operate at the linearized pump torque TP0 and turbine torque TT0, at the operating pump and vehicle speeds we0 and wv0, respectively.
Speed control with an engine model may be represented by a continuous vehicle dynamics model, in which the vehicle velocity and the engine angular velocity are continuous in time and described by the differential equations (5.1) and (5.6), and by discrete event systems, in which the dynamics of the torque converter and transmission models change discretely in time depending on the upper and lower limits of the vehicle and engine speeds. Speed control with an engine model is illustrated in Figure 5.5, where each model interacts with the other continuous or discrete models, and the overall system response is affected by the interaction and coupling of these different types of systems through the hybrid limits.
Figure 5.4 Simplified characteristics of automatic transmission model and switching logic present-
ing the hybrid operation in terms of vehicle speed versus throttle angle.
Figure 5.5 Overview of vehicle dynamics, engine model coupled with torque converter, and transmission
illustrated with different operating modes.
the leader be UM and UL, respectively. The measured headway d and the safety distance ds can be defined as

d = xL − xM

ds = h UL + d0   (5.8)

where xL and xM are the longitudinal positions of the leader and the follower, respectively, h stands for the headway time, and d0 provides an additional safety margin. The velocity difference U is given by:

U = UM − UL = −ḋ   (5.9)
Consider Figure 5.8. The strategy for regulation is as follows: The recommend-
ed velocity of the follower should be chosen in such a way that the velocity vector
of the solution trajectory in the (d, U) plane is directed to the (ds, 0) point at any
time. This choice enforces state trajectories toward the goal point on a straight line
whose slope is determined by the initial position of the system in the (d, U) plane
and guarantees that the velocities of the vehicles become equal when the desired
safety distance is achieved. The slope of the line on which the trajectory slides to
the goal point determines the convergence rate. We divide the (d, U) plane into
six regions. The collision region and the constant velocity region are also included
in Figure 5.8. In the constant velocity region, the follower keeps its current veloc-
ity until the distance between the vehicles becomes less than a user-defined critical
distance dc. Figure 5.8 also includes a relative acceleration curve that gives the least
possible value of d at which the follower should begin to decelerate at its maximum
rate to be able to reach the goal point for a given U, assuming a constant velocity
for the leader. The figure also includes a minimum convergence rate line (MCRL)
whose slope is chosen by considering the minimum admissible convergence rate.
In region 2 and region 5, it is physically impossible to enforce the trajectories
toward the goal point on a straight line. So, the controller should decelerate (accel-
erate) the follower at the maximum deceleration (acceleration) rate in region 2 (re-
gion 5). In region 3 and region 6, it is possible to steer the trajectories to the (ds, 0)
point through a straight line between the initial point and the goal point. However,
the convergence rate would be smaller than the minimum admissible convergence
rate because the slope of the line is less than the slope of the MCRL. So, in region 6
(region 3) we prefer first accelerating (decelerating) the follower toward the MCRL
at its maximum acceleration (deceleration) rate and then sliding the trajectories to
the goal point through this line. In region 1 and region 4, the desired velocity can
be calculated as follows:
m = tan(φ) = U̇ / ḋ = (aM − aL) / (−U)

mdes = U / (d − ds)   (5.10)

m = mdes  ⇒  aM = −U² / (d − ds) + aL
where m is the slope of the trajectory velocity vector, mdes is the desired slope, and
aM, aL are the accelerations of the follower and the leader, respectively.
Equation (5.10) gives the necessary acceleration for the follower that ensures
the exact convergence of the solution trajectory to the goal point on a straight
line. However, it may not always be possible to obtain this acceleration due to
the acceleration and jerk limits of the vehicle. The bounds on the acceleration are
determined by the physical capacity of the vehicle, whereas jerk limits are mainly
determined by riding comfort. In the other regions, the above argument also holds
except that aM is taken as amax(amin) in region 6 and region 5 (region 3 and region
4) instead of using the equation.
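The region 1/region 4 law of (5.10), with simple saturation standing in for the maximum-rate behavior prescribed in the other regions, can be sketched as (all numeric limits are illustrative):

```python
def follower_accel(d, du, ds, a_leader, a_min=-3.0, a_max=2.0):
    """du = U_M - U_L (closing speed), d = measured headway, ds = safety distance.

    Implements a_M = -(du^2)/(d - ds) + a_L from (5.10), clipped to assumed
    acceleration limits.
    """
    if abs(d - ds) < 1e-6:
        a = a_leader                           # already at the goal distance
    else:
        a = -(du ** 2) / (d - ds) + a_leader   # m = m_des condition of (5.10)
    return max(a_min, min(a_max, a))

# Closing at 2 m/s, 20 m beyond the 30-m safety distance: decelerate gently.
print(follower_accel(d=50.0, du=2.0, ds=30.0, a_leader=0.0))  # -0.2 m/s^2
```

A larger closing speed for the same distance margin quadratically increases the commanded deceleration, which is why the saturated regions of the (d, U) plane are needed.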
5.2.1 Background
Antilock brake systems (ABS) enhance vehicle safety, performance, and handling capabilities, maintaining directional stability and steerability under emergency maneuvering tasks on slippery road surfaces. With the development of digital electronics technology, Bosch, for example, has offered digitally controlled ABS to automotive producers since 1978 and to the commercial vehicle market since 1981 [14].
During emergency braking on a slippery road surface, the wheels of the vehicle may lock; this phenomenon occurs because the friction force on a locked wheel is usually considerably less than the braking force demanded by the driver. The vehicle slides on the locked wheels, increasing the stopping distance, and more importantly may lose directional stability and steerability under most driving conditions and maneuvering tasks. The friction coefficient between the tire patches and the road pavement cannot be known by the average driver, and this external parameter affecting the stopping distance and stopping time changes with weather conditions, road pavement, and the type and quality of the tires. Due to the strong nonlinearity, the time-varying parameters, and the uncertainties in vehicle brake systems, and without a priori knowledge of the tire, the road friction coefficient, or its condition, the design of ABS becomes a difficult control problem.
A discrete-time ABS controller was designed by using a linear control design
with integral feedback subject to the locally linearized brake and vehicle system
dynamics involving the lumped uncertainty parameter [15]. Linearization of highly
nonlinear systems, lumped uncertain parameter approach, and assumption of the
knowledge of vehicle velocity were the disadvantages of the design. Linear theory
may not be a solution to the design of ABS. Some hybrid controller designs were
addressed in [16]. The ABS hydraulic actuator has been considered to have dis-
crete states {brake pressure hold, brake pressure increase, brake pressure reduction}
and a finite state machine supervising ABS activation logic of feedforward and
feedback controllers among the choices of several modes subject to the estima-
tion of wheel angular speed [17]. Gain-scheduled ABS design was presented for
the scenarios when the vehicle speed was a slowly time varying parameter and the
brake dynamics were linearized around the nominal wheel slip [18]. To estimate
tire-road friction on a real-time basis, an algorithm based on a Kalman filter was
proposed in [19]. For automotive applications where uncertainties, discontinui-
ties and lack of measured variables are major controller design problems, sliding
mode control theory has been introduced to assure online optimization, estimation,
friction compensation, and traction control in automotive control problems [20].
Sliding mode controller methodology was proposed to track an unknown optimal
value even in the case that the value changes on a real-time basis. ABS control us-
ing the proposed optimal search algorithm assures reaching and operation at the
maximum friction force during emergency braking without a priori knowledge of
the optimal slip [21]. Time-delay effects caused by the hydraulic actuator dynamics
were considered by using the extremum seeking approach in [22]. Oscillations oc-
curring around the maximum point of the tire force versus slip characteristics were
attenuated by using a higher-order sliding mode controller in [23].
5.2.2 Slip
The tire force model between the tire patches and the road surface depends on the road surface, tire, weather, and many other conditions that may not be known a priori (see Figure 5.9 for parameter-dependent characteristics). The friction force affecting the rotational dynamics of an individual wheel during braking is a nonlinear function of the relative slip [i.e., Fi = Fi(ki)]. The slip is defined as the relative difference between the decelerated wheel's angular velocity multiplied by the effective radius of the tire, Rewi, and the velocity in the longitudinal direction, u:

ki = (Re wi − u) / u,   Re wi ≤ u   (5.11)

and as

ki = (Re wi − u) / (Re wi),   Re wi > u   (5.12)
for i = 1, 2, 3, 4. The values of the function Fi(ki) were obtained experimentally for
different types of surface conditions. Experiments showed that in the region ki > 0
the function has a single global maximum and in the region ki < 0 a global mini-
mum. The form of the function Fi for ki > 0 is plotted in Figure 5.9. For different
road types (i.e., dry, wet, or icy road conditions), a global maximum point of the
tire force model is encircled.
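Definitions (5.11) and (5.12) switch on whether the wheel surface speed Rewi is below the vehicle speed (braking) or above it (traction); a small helper with illustrative numeric cases:

```python
def slip(Re, wi, u):
    """Slip ratio per (5.11)-(5.12); Re*wi is the wheel surface speed [m/s]."""
    ws = Re * wi
    if ws <= u:                              # braking: (5.11)
        return (ws - u) / u if u > 0 else 0.0
    return (ws - u) / ws                     # traction: (5.12)

print(slip(Re=0.3, wi=50.0, u=20.0))   # braking, negative slip: -0.25
print(slip(Re=0.3, wi=100.0, u=24.0))  # traction, positive slip: 0.2
```

With this sign convention, braking produces ki in [-1, 0] (ki = -1 is a locked wheel) and traction produces ki in [0, 1].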
A more general tire model, which does not contradict the Pacejka model but
includes it as a particular case, has been considered in [24]. It is assumed that each
friction force Fi is a nonstationary function of the slip ki,
Fi = Fi(t, ki)

|∂Fi/∂ki| + |∂Fi/∂t| ≤ C0
Figure 5.9 The tire force characteristics: traction force [N] versus longitudinal slip as a function of road condition (dry road μ = 1, wet road μ = 0.5, icy road μ = 0.2), with the ABS performance region indicated.
ki Fi(t, ki) ≥ 0

For instance, a unique global maximum at the slip ratio ki* is plotted in Figure 5.10.
By assumption, Fi is a sufficiently smooth function of ki in the regions ki > 0 and ki < 0, and outside the δ-vicinity (δ > 0) of the extremal points ki* (for ki > 0) and k*i (for ki < 0) it satisfies the conditions

∂Fi(t, ki)/∂ki ≥ K0 for ki ≤ ki* − δ

and

∂Fi(t, ki)/∂ki ≤ −K0 for ki ≥ ki* + δ
The case in which Fi depends on the absolute velocity v has been included by allowing Fi to be nonstationary. It may be a function of the absolute slip s = Re wi − u, so that

F(t, k) = F(s) = F(v(t), k)
Qi = A1 cd1i √(2(Pp − Pi)/ρ) − A2 cd2i √(2(Pi − Plow)/ρ)   (5.13)
where Pp is the constant pump pressure, Plow is the constant reservoir pressure, A1 and A2 are constants representing the orifice areas, and ρ is the density of the fluid. Pi is the hydraulic pressure at the valves on the side of the ith wheel cylinder. Neglecting the inertial properties of the fluid and the resistance in the hydraulic line, it is assumed that the pressure in the wheel cylinder is also equal to Pi. The coefficients cd1i, cd2i are in fact the control inputs, which can take the values 0 or 1 depending on the corresponding valve being open or closed. If the nonlinearities and the temperature dependence are neglected, the brake torque Tbi is a linear function of the brake pressure Pi,

Tbi = Awc η BF rr (Pi − Pout)   (5.14)
where Pout is a push out pressure; Awc, , BF, rr are constants (wheel cylinder area,
mechanical efficiency, brake factor, and effective rotor radius, respectively).
The rotational dynamics of the ith wheel (i = 1, 2, 3, 4) are modeled as

J ẇi = −Tbi sign(wi) − Re Fi + Tdi   (5.15)

where J is the wheel moment of inertia and Tdi is a disturbance torque.
The linear dynamics of the vehicle are described as in the simple cruise control system (see Section 5.1.2). According to the Newtonian equation,

M u̇ = Σi=1..4 Fi − Fload
and the whole ABS system is constituted by the ninth-order nonlinear system involving the four wheel dynamics, the vehicle dynamics, and the hydraulic actuator dynamics:

J ẇi = −Tbi sign(wi) − Re Fi + Tdi   (5.16)

M u̇ = Σi=1..4 Fi − Fload   (5.17)

cf dPi/dt = A1 cd1i √(2(Pp − Pi)/ρ) − A2 cd2i √(2(Pi − Plow)/ρ)   (5.18)

where i = 1, 2, 3, 4. There are eight control inputs cd1i, cd2i, i = 1, 2, 3, 4, which can take the value 0 or 1 with the constraint cd1i · cd2i = 0.
The ABS control problem may be stated as follows: steer the slip at each wheel, ki, to its extremal value ki*(t), and track this extremum during the braking transient.
The main difficulties in applying well-known linear control methods to this problem are:

The functions Fi(t, ki) are not a priori known and are, in general, nonstationary.
Various parametric uncertainties and disturbances affect the ABS system.
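The three hydraulic actuator states (pressure increase, hold, reduction) map onto the binary inputs cd1i, cd2i. A bang-bang sketch of slip regulation toward an assumed target slip; note that in the sliding mode designs cited above the target k* is searched online rather than assumed known:

```python
def valve_command(k, k_star, band=0.02):
    """Return (cd1, cd2) for one wheel, respecting the constraint cd1 * cd2 = 0.

    k and k_star are negative during braking per (5.11); band is an assumed
    deadband around the target.
    """
    if k < k_star - band:    # slip too negative: wheel near locking, dump pressure
        return (0, 1)        # reduce
    if k > k_star + band:    # slip above target: build brake pressure
        return (1, 0)        # increase
    return (0, 0)            # hold

print(valve_command(-0.30, -0.15))  # (0, 1): reduce pressure
print(valve_command(-0.05, -0.15))  # (1, 0): increase pressure
print(valve_command(-0.15, -0.15))  # (0, 0): hold
```

The constraint cd1 · cd2 = 0 encodes that the inlet and outlet valves of a wheel cylinder are never opened simultaneously.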
5.3.1 Background
One of the key goals of an automated vehicle is the ability to perform automatic
steering control. Steering control is a nontrivial design problem and a fundamental
design challenge, and the approaches taken to obtain a stabilizing robust controller
design vary significantly based on the available set of sensors and the performance
of the actuators involved. A measure of the vehicle's orientation and position with
respect to the road must be available to the controller. Among the most commonly
used techniques are vision-based lane marker detection (preferred by many because
of its simplicity in terms of the required machinery and implementation conve-
nience), radar-based offset signal measurement (developed and used by OSU re-
searchers exclusively), and the magnetic nail-based local position sensing (used by
PATH researchers). Vision- and radar-based systems provide an offset signal at a
preview distance ahead of the vehicle that contains relative orientation information.
The vision system directly processes the image of the road and detects lane mark-
ers. Therefore, it does not require any modifications to current highway infrastruc-
tures. The radar system requires that an inexpensive passive frequency-selective
stripe (FSS) be installed in the middle of the lane, in which case the radar is capable
of providing preview information similar to a vision system. Most other sensor
technologies provide only local orientation and position information. It has been
pointed out that control of vehicles without preview distance measurements poses
a difficult control problem at high speeds. Indeed, the experience of researchers us-
ing look-down-only sensors is that road curvature information must be provided
to the lateral controller, usually by encoding it in the sensor components installed
on the road. Thus we see that sensors are an integral part of the design and that the
performance of the sensor system directly impacts the closed-loop system stability
and performance.
ẋ = Vs cos(θyaw + θsteer/2)   (5.20)

ẏ = Vs sin(θyaw + θsteer/2)   (5.21)

ω = Vs/R = (Vs/Lcar) tan(θsteer) = θ̇yaw   (5.22)

where a and b are parameters of the steering and speed dynamics, θsteer is the steering angle, θSteerCom is the steering command from the path following controller, Vs is the vehicle velocity, Lcar is the car length, ω is the yaw rate, θyaw is the vehicle yaw angle, and R is the turning radius, as drawn in Figure 5.12.
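Equations (5.20)-(5.22) integrate directly; a minimal Euler sketch with illustrative speed, steering angle, and wheelbase (the θsteer/2 term reflects the CG being taken midway along the wheelbase):

```python
import math

def step(x, y, yaw, v, steer, Lcar, dt):
    """One Euler step of the kinematic model (5.20)-(5.22)."""
    x += v * math.cos(yaw + steer / 2.0) * dt
    y += v * math.sin(yaw + steer / 2.0) * dt
    yaw += (v / Lcar) * math.tan(steer) * dt   # (5.22)
    return x, y, yaw

x = y = yaw = 0.0
for _ in range(1000):                          # 10 s at dt = 0.01 s
    x, y, yaw = step(x, y, yaw, v=5.0, steer=0.1, Lcar=2.5, dt=0.01)
print(round(yaw, 3))  # approx (v/Lcar) * tan(0.1) * 10 s = 2.007 rad
```

With a constant steering angle the yaw rate is constant, so the vehicle traces a circle of radius R = Lcar / tan(θsteer), consistent with (5.22).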
The proposed controller is constituted by feedforward and feedback loops. Since the controller is implemented on a digital signal processor with discrete update periods, the path curvature is predicted by the feedforward controller for the current and the coming update period. The curvature may be obtained from the sampled waypoints on the path of the subject vehicle, as illustrated in Figure 5.13. The feedforward control, denoted by θSteerFF, is derived from the sampled path points:
5.3 Steering Control and Lane Following 169
Figure 5.12 Geometry of the car model: CG position (x, y), steering angle θsteer, yaw angle θyaw, turning radius R, and car length Lcar.
mr = (YB − YA) / (XB − XA)

mt = (YC − YB) / (XC − XB)

Xff = [mr mt (YC − YA) + mr (XB + XC) − mt (XA + XB)] / [2(mr − mt)]   (5.24)

Yff = (YB + YC)/2 − (1/mt) [Xff − (XB + XC)/2]

Rff = √((XA − Xff)² + (YA − Yff)²)

θsteerFF = tan−1(Lcar / Rff)
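The construction of (5.24), fitting a circle through three sampled waypoints via the chord slopes mr and mt and converting its radius to a feedforward steering angle, can be sketched as follows (degenerate cases, vertical chords or mr = mt for collinear points, are not handled):

```python
import math

def steer_ff(A, B, C, Lcar=2.5):
    """Feedforward steering angle from three path points per (5.24)."""
    (XA, YA), (XB, YB), (XC, YC) = A, B, C
    mr = (YB - YA) / (XB - XA)               # slope of chord AB
    mt = (YC - YB) / (XC - XB)               # slope of chord BC
    Xff = (mr * mt * (YC - YA) + mr * (XB + XC)
           - mt * (XA + XB)) / (2.0 * (mr - mt))
    Yff = (YB + YC) / 2.0 - (Xff - (XB + XC) / 2.0) / mt
    Rff = math.hypot(XA - Xff, YA - Yff)     # circumradius
    return math.atan(Lcar / Rff)

# Three points on a circle of radius 10 centered at the origin:
deg = math.radians
pts = [(10 * math.cos(deg(t)), 10 * math.sin(deg(t))) for t in (80, 60, 40)]
print(round(steer_ff(*pts), 4))  # atan(2.5 / 10) = approx 0.245 rad
```

(Xff, Yff) is the circumcenter of the three points, obtained as the intersection of the perpendicular bisectors of the two chords.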
If there is no additional feedback for the steering control, the car keeps following a path with this offset.
The CLA predicts a future path based on the current steering angle instead of a straight look-ahead. According to the current steering angle, a current circular path, plotted with a solid line in Figure 5.15, is generated ahead of the vehicle; this is the path the vehicle will follow if the steering angle is held constant. Another circle, plotted with a dash-dot line and generated from the vehicle position and the two waypoints ahead of the vehicle, points A and B, is the desired path that the car should follow. Comparing the two circles, the difference between the desired path and the current path can be found. This difference is measured by the ahead offset: the distance between the intersections of the two circles with a circle of radius Dla centered at the vehicle center, plotted with a dashed line. Dla is also defined as the look-ahead distance. The ahead offset is the feedback signal of the steering control.
The equation of the desired path circle can be obtained in the same way as (5.24). The equation of the current path circle can be obtained as follows:

Rcp = Lcar / tan(θsteer)

Xcp = XC − (Lcar/2) sin(θyaw) − Rcp cos(θyaw)   (5.25)

Ycp = YC − (Lcar/2) cos(θyaw) + Rcp sin(θyaw)

(x − Xcp)² + (y − Ycp)² = Rcp²
The ahead offset Dos with a look-ahead distance equal to Dla can be expressed as:

d1 = √((Xcp − Xc)² + (Ycp − Yc)²)

a1 = (Rcp² − Dla² + d1²) / (2 d1)

h1 = √(Rcp² − a1²)

X1 = Xc + a1 (Xcp − Xc)/d1 − h1 (Ycp − Yc)/d1

Y1 = Yc + a1 (Ycp − Yc)/d1 + h1 (Xcp − Xc)/d1

d2 = √((Xdp − Xc)² + (Ydp − Yc)²)   (5.26)

a2 = (Rdp² − Dla² + d2²) / (2 d2)

h2 = √(Rdp² − a2²)

X2 = Xc + a2 (Xdp − Xc)/d2 − h2 (Ydp − Yc)/d2

Y2 = Yc + a2 (Ydp − Yc)/d2 + h2 (Xdp − Xc)/d2

Dos = √((X2 − X1)² + (Y2 − Y1)²)
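A sketch of the ahead-offset computation in the spirit of (5.26): intersect each path circle with the look-ahead circle centered on the vehicle and measure the distance between the two intersection points. The intersection-branch choice and the example geometry are assumptions:

```python
import math

def intersect(cx, cy, R, vx, vy, Dla):
    """One intersection point of circle (cx, cy, R) with the look-ahead
    circle of radius Dla centered at (vx, vy)."""
    d = math.hypot(cx - vx, cy - vy)
    a = (Dla ** 2 - R ** 2 + d ** 2) / (2.0 * d)  # along-line distance from vehicle
    h = math.sqrt(max(Dla ** 2 - a ** 2, 0.0))    # perpendicular distance
    ux, uy = (cx - vx) / d, (cy - vy) / d         # unit vector vehicle -> center
    return (vx + a * ux - h * uy, vy + a * uy + h * ux)

def ahead_offset(cp, Rcp, dp, Rdp, vehicle, Dla):
    x1, y1 = intersect(*cp, Rcp, *vehicle, Dla)   # current-path intersection
    x2, y2 = intersect(*dp, Rdp, *vehicle, Dla)   # desired-path intersection
    return math.hypot(x2 - x1, y2 - y1)

# Vehicle at the origin; current path radius 20, desired path radius 25,
# both circles passing through the vehicle position, Dla = 5 m.
print(round(ahead_offset((0, 20), 20, (0, 25), 25, (0, 0), 5.0), 3))  # 0.126
```

The offset is small here because the two circles nearly coincide near the vehicle; Dos grows with the curvature mismatch at the look-ahead distance.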
The feedback part of the steering control is a PID controller, which can be expressed as:

θsteerFB = K1 Dos + K2 Ḋos + K3 ∫ Dos dt

The commanded steering angle is tracked with approximately first-order dynamics; the response to a step command change can be written as

Δθsteer(t) = (1 − e−(b/a)t) (θSteerCom(t + Δt) − θSteerCom(t))   (5.27)
For situations when the steering angle change (θSteerCom(t + Δt) − θSteerCom(t)) is small, the rising time, the time needed to steer to θSteerCom(t + Δt), is almost a constant, 4.4a/b. In such a situation the steering system should always be able to turn the steering wheel to the desired angle within an update period if 4.4a/b < Tcycle. However, the steering motor has finite output power to actuate the steering mechanism. When the steering angle change is larger than the threshold (4.4a/b)(θ̇steer)max, where (θ̇steer)max is the maximum steering speed, the rising time becomes angle dependent and can be approximated as:
TNextAngle = (θSteerCom(t + Δt) − θSteerCom(t)) / (θ̇steer)max   (5.28)
In this situation the steering system may not be able to turn the steering wheel to the desired angle before the next update if the speed is too high. To make sure the steering angle can reach θSteerCom(t + Δt) by the next update, the time needed to reach the next vehicle position, where the controller will be updated based on the current speed, should be greater than TNextAngle. The distance to the next update point is VsΔt, where Vs is the current speed and Δt is the update period, equal to Tcycle. The maximum speed can be obtained:
vmax = (θ̇steer)max Vs Δt / (θSteerCom(t + Δt) − θSteerCom(t))   (5.29)
The upper bound of the speed depends on the current steering angle, the predicted next steering angle, the steering dynamics time response, and the actual velocity of the vehicle.
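A direct transcription of (5.29) with illustrative numbers:

```python
def max_speed(steer_now, steer_next, steer_rate_max, Vs, dt):
    """(5.29): v_max = steer_rate_max * Vs * dt / |commanded steering change|."""
    dsteer = abs(steer_next - steer_now)
    if dsteer == 0.0:
        return float("inf")   # no steering change: speed is not steering-limited
    return steer_rate_max * Vs * dt / dsteer

# A 0.2-rad steering step at 10 m/s, with a 1 rad/s steering motor and a
# 0.1-s update period (all values assumed):
print(max_speed(0.0, 0.2, 1.0, 10.0, 0.1))  # 5.0 m/s
```

Halving the commanded steering change doubles the admissible speed, so smoother planned paths permit faster driving under the same actuator limit.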
The path following algorithm is applied to a wheeled robot driving in a simulated urban environment, as shown in Figure 5.16. The difference between the robot and a vehicle is that the robot uses differential drive for turning instead of steered wheels. The wheeled robot is a Pioneer 3. Its location is determined by a camera on the ceiling based on its tag [see Figure 5.16(a)]. The location is forwarded like a GPS signal and sent to the robot wirelessly.
To simulate the robot as a vehicle with steered wheels, the following equation is used to transform the steering angle θsteer into the robot turn rate θ̇YawRobot:

θ̇YawRobot = (Vs / Lcar) tan(θsteer)   (5.30)
Figure 5.17 presents the experimental results from the wheeled robot. The
circles are the assigned checkpoints the robot should go through and the lines are
the robot path. The robot starts from the upper right of the figure and ends at the
lower right. There are five curves in the path. The first two and the last two curves
Figure 5.16 (a) The wheeled robot is regulated by the CLA. (b) Simulated urban environment to perform steering control.
are sharp curves, which require fast steering, and the third curve is a long curve, which is used to verify the path following accuracy on a curve. All of the curves have constant curvature.
Figure 5.17 The time responses of robot trajectories and speed. For the case when speed is low, robot trajectories and speed responses are plotted in (a) and (b), and
when speed is set to its maximum, robot trajectories and speed are plotted in (c) and (d).
During cornering maneuvers, the path following error does not increase, and the objective of the presented CLA steering control is validated experimentally. In the experimental scenario where the speed is low and kept constant, the robot can follow the path accurately [see Figure 5.17(a, b)]. Figure 5.17(c, d) presents the experimental results for the case when the speed is set to its maximum. The maximum wheel speed of the robot is 6 rad/sec, which corresponds to a maximum robot speed of 0.6 m/sec. Due to the differential turning employed by the robot, when the robot speed is high, the outer turning wheels may require a speed higher than 6 rad/sec while turning. When this occurs, the driving speed is reduced to assure that each wheel rotates at less than 6 rad/sec.
Beyond steering control, there are more general obstacle avoidance approaches. The potential field approach is one example of an obstacle avoidance methodology [25]. The idea is to have an attractive potential field function pull the car trajectory towards the waypoints to be followed, and an increasing repulsive potential function push the trajectory away from the obstacles. Using the attractive and repulsive force analogy, the elastic band method is used to modify or deform the car trajectory locally [26]. An elastic band, or bubble band, is created such that the car can travel without collision. Once an obstacle is sensed, the elastic band is smoothly deformed to avoid it. The external repulsive and internal attractive forces are generated to change the trajectory locally to avoid the obstacles and to keep the bubbles together, respectively. The potential field and elastic band obstacle avoidance approaches are illustrated in Figure 5.18(a, b), respectively.
The car may navigate autonomously in an environment where the obstacles may be nonstationary and where their locations may be unknown a priori. Therefore, there are various instances where the obstacles need to be checked to avoid possible collision scenarios on the generated global path, which starts at the initial point and ends at the desired fixed final point. In general, global motion planning needs to be combined with a reactive strategy that continuously plans the motion while avoiding the obstacles. For example, a sampling-based approach is performed prior to computation by using a multi-query method for the global motion planning, and a single-query-based computation on the fly for real-time obstacle avoidance. The reactive strategy, which computes obstacle locations on a real-time basis, relies heavily on the obstacle information provided by the sensor fusion module. The obstacles are clustered and tracked by fusing the data collected from all sensors. A collision-free gap between the vehicle location and the intermediate waypoint towards the final point may be obtained by scanning the distance between the boundary of the tracked obstacles and the vehicle position. The free gaps are calculated by detecting the discontinuities in the scans. In the sensing range, the obstacles are represented by the discontinuities and the free gaps are defined by the closed continuous intervals. The intervals are then compared with the vehicle's physical size in order to check for a safe passing stage in the candidate free gap. If a candidate interval connecting the vehicle location and the next local waypoint is computed, the car is driven towards the free gap, and the scanning and single-query-based computation is iteratively continued until reaching the goal point [27]. The multi-query global motion planning and the single-query-based computation on the fly are illustrated in Figure 5.18(c). Free-gap calculation is shown in Figure 5.18(d).
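A sketch of the free-gap calculation: gaps are maximal angular runs of scan ranges beyond a clearance threshold, kept only if their chord width at the clearance range exceeds the vehicle width. The scan layout and all thresholds are assumptions:

```python
import math

def free_gaps(ranges, angle_step, clearance, vehicle_width):
    """Return passable gaps as (start_index, end_index) pairs of scan beams."""
    gaps, start = [], None
    for i, r in enumerate(ranges):
        if r >= clearance and start is None:
            start = i                        # a gap opens
        elif r < clearance and start is not None:
            gaps.append((start, i - 1))      # an obstacle closes the gap
            start = None
    if start is not None:
        gaps.append((start, len(ranges) - 1))
    # Keep gaps whose chord width at the clearance range fits the vehicle.
    return [(s, e) for s, e in gaps
            if 2 * clearance * math.sin((e - s) * angle_step / 2) >= vehicle_width]

scan = [10, 10, 10, 2, 2, 10, 10, 10, 10, 10]   # an obstacle at beams 3-4
print(free_gaps(scan, angle_step=math.radians(10), clearance=8.0, vehicle_width=2.0))
```

Each obstacle appears as a run of short ranges, so the discontinuities at its edges bound the free intervals, matching the gap definition in the text.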
Figure 5.18 (a) The potential function energy surface representing the obstacles and the goal configuration. (b) Local motion planning using the elastic band approach. (c) Possible local goal selection using two different methods: the local goal r1Lgoal selected using the global path and r2Lgoal selected using the sampling-based path approach. (d) Illustration of the calculated free gaps by using a lidar scan. A free gap is defined by its end angles.
where z(t) is a dummy variable that is necessary to characterize the transient response of the offset signal precisely. In this context, o(t) is the measured offset from the lane center at the look-ahead point (positive to the left of the lane center) and is to be regulated to zero for all possible road curvature reference inputs ρ(t) (positive for clockwise turns); this defines the desired path to be followed using the front-wheel steering angle δ(t) (positive for clockwise turns). The linearized model is valid at the operating longitudinal velocity u (positive for forward motion), which is assumed to be kept constant by means of a decoupled longitudinal controller, and for small values of δ(t). The sensor delay to depends on the operating velocity and the preview distance d, and is given by to = d/u.
The other parameters of the vehicle model are determined from
a11 = −(kf + kr)/(mu),  a12 = −u − (akf − bkr)/(mu),  b1 = kf/m,  d1 = 1/m

a21 = −(akf − bkr)/(Iz u),  a22 = −(a²kf + b²kr)/(Iz u),  b2 = akf/Iz,  d2 = Iw/Iz
where kf > 0 and kr > 0 are the lateral tire stiffness coefficients of the front and
rear tires, respectively, m is the mass of the vehicle, and Iz is the moment of inertia
around the center of mass perpendicular to the plane in which the vehicle is located.
The remaining variables are as previously defined. Typical parameter values ap-
proximating those of the OSU vehicles are given in Table 5.1.
δbuf(t) = Kd dô(t)/dt + Ks ô(t) + Kθ θreset(t) + Kr (r(t) − rref(t)) + Ki sat(∫0t ô(τ) dτ) + Km ρ sign(deadzone(ô(t)))   (5.32)

δcom(t) = sat(δbuf(t))

where ô and dô/dt are the Kalman observer estimates of the offset signal and its derivative, Kd, Ks, Kθ, Kr, Ki, and Km are gains of appropriate dimensions and signs, and θreset(t) is defined as:
1. Only the yaw rate r and the steering angle δ are measured.
2. Vehicle parameters are known within a bounded neighborhood of some
nominal values.
3. The road curvature does not change significantly during the lane change
maneuver.
Studies have been performed to estimate the ideal lateral jerk, acceleration, velocity, and displacement signals that the vehicle's center of gravity should follow to perform a lane change maneuver while preserving passenger comfort. However, in
practice the only input to the vehicle is commanded steering angle. Therefore, these
results must ultimately be used to generate steering angle commands. This can be
accomplished by generating a reference yaw rate signal and applying a yaw rate
controller to generate steering angle commands.
Throughout this steering control section, the operating region of the lateral tire characteristics is assumed to be responsive to the steering inputs, so that handling in the lateral maneuvering tasks can be performed. Although nonlinear tire characteristics and handling properties are out of the scope of this book, the interested reader is advised to investigate handling and vehicle dynamics control issues. An alternative semiempirical tire model and vehicle dynamics control issues are given in [28, 29].
One of the factors affecting the motion and handling dynamics is obviously
the vehicle speed. During maneuvering, lateral tire force responses with respect to
the slip ratio and the sideslip angle are plotted in Figure 5.20. In the normal
operating region (i.e., at low values of the slip in the longitudinal direction), lateral
force generation is responsive to the sideslip angle increment. Estimation of the slip
ratio may be very complicated due to the unknown road-tire friction coefficient.
Therefore, active safety enforces maintaining a safe speed to limit sideslip and pos-
sibly prevent rollover hazards that may occur during maneuvering at high speed.
Decelerating to a lower speed before a sharp turn is a common driver's instinct to
maintain control authority during a lateral maneuver. A study on rollover
prevention for heavy trucks carrying liquid cargo is presented in [30]; differential
braking modulated by nonlinear control techniques is used to stabilize the lateral
dynamics.
5.4 Parking
Parking is essentially the task of displacing the vehicle from an arbitrary initial
condition to a specified final position. The parking process involves planning a path
for the vehicle at low speed with a desired heading angle while avoiding any
obstacles situated near the parking spot. From the viewpoint of motion planning,
the parking path is a circular arc to be tracked by the parking algorithm. In the
initial parking configuration, the yaw angle of the car model may differ from the
tangent angle of the path. To achieve parking, the tangent angle of the path needs
to be reached. Driving forward or backward while turning the front steering wheel
to its maximum allowable angle may be used to align the vehicle heading angle
with the desired parking path tangent angle [31]. Once the tangent of the path is
reached, the circular arc is tracked until the desired parking position is reached.
The vehicle model with Ackermann steering (Figure 5.21) is used,
δsteer = (δl + δr)/2
                                                                        (5.33)
R = Lfr / tan(δsteer)
where R is the vehicle turning radius, δsteer is the average steering angle, and δl and
δr denote the steering angles of the front left wheel and front right wheel, respec-
tively. Following the Ackermann steering theory, the vehicle model position is taken
at the center of the rear axle and the yaw angle is regulated to follow the arc path
direction.
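Equation (5.33) can be computed directly; the wheel angles and wheelbase below are hypothetical example values:

```python
import math

def ackermann_radius(delta_l, delta_r, L_fr):
    """Turning radius from (5.33): average the left and right front wheel
    steering angles, then R = L_fr / tan(delta_steer)."""
    delta_steer = 0.5 * (delta_l + delta_r)
    return L_fr / math.tan(delta_steer)

# Hypothetical wheel angles of 10 and 12 degrees, L_fr = 3.5 m
R = ackermann_radius(math.radians(10.0), math.radians(12.0), L_fr=3.5)
```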
constraints of maximum steering turn radius and without hitting any boundaries
or obstacles.
Rmin = Rturn − Wc/2

Rmax = √((Rturn + Wc/2)² + Lfr²)          (parking forward)
Rmax = √((Rturn + Wc/2)² + (Lc − Lfr)²)   (parking backward)

X0 = Wp/2                                                               (5.34)

Y0 = √(Rmax² − (Rmin + Wp/2)²)   when Rmax² − (Rmin + Wp/2)² ≥ 0
Y0 = √(Wc·Wp/2 − Wc²/4)          when Rmax² − (Rmin + Wp/2)² < 0
where Wp is the width of the parking spot, Wc is the car width, Lc is the car length,
Lfr is the distance from the front bumper to the rear axle, Rturn is the vehicle's mini-
mum turning radius measured at the vehicle center, and Rmin/Rmax are the radii of
the minimum/maximum circles the vehicle body sweeps through when turning at
Rturn. The minimum turning radius of the car model may be obtained from the
vehicle specification.
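A sketch of the swept-circle radii and initial point in the spirit of (5.34); the dimensions are hypothetical, and the expression used when the square-root argument becomes negative is an assumed reconstruction:

```python
import math

def parking_geometry(R_turn, Wc, Lc, L_fr, Wp, forward=True):
    """Swept-circle radii and initial point for the parking maneuver.
    The Y0 fallback branch (negative square-root argument) is an
    assumed reconstruction, not taken verbatim from the text."""
    R_min = R_turn - Wc / 2.0
    if forward:
        R_max = math.hypot(R_turn + Wc / 2.0, L_fr)        # parking forward
    else:
        R_max = math.hypot(R_turn + Wc / 2.0, Lc - L_fr)   # parking backward
    X0 = Wp / 2.0
    disc = R_max**2 - (R_min + Wp / 2.0)**2
    Y0 = math.sqrt(disc) if disc >= 0 else math.sqrt(Wc * Wp / 2.0 - Wc**2 / 4.0)
    return R_min, R_max, X0, Y0

# Hypothetical dimensions (meters)
R_min, R_max, X0, Y0 = parking_geometry(R_turn=5.0, Wc=1.8, Lc=4.5,
                                        L_fr=3.5, Wp=2.5)
```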
The goal point of the obstacle avoidance controller is the final position of
the parking spot. The obstacle avoidance controller hands over control authority
to the parking controller when the distance to the final point is smaller than 8
meters. In the 2007 Urban Challenge, DARPA stated that paths to a parking spot
would not be totally blocked; this maneuver ensures that parking starts from the
side that is obstacle free.
(Figure: geometry of the parking maneuver, showing the radii Rturn, Rmin, and
Rmax, the parking spot width Wp, the car dimensions Wc, Lc, and Lfr, the local
coordinates (x0, y0), and the final path, whose radius must be larger than the
minimum turning radius, leading to the final position.)
Pulling a car forward into the parking spot was required by the rules of the
DARPA Urban Challenge.
Figure 5.24 (a) First step, (b) second step, and (c) third step.
Figure 5.25 (a) Final path radius larger than minimum turning radius. (b) Find final path from
second maneuver.
Figure 5.26 (a) General situation. (b) Park from different initial position. (c) Initial position close to bound-
ary. (d) Initial position close to boundary. (e) Park at narrow road.
R = (x² + y²)/(2x)

θFinalPath = sin⁻¹(y/R) + π/2

θr = θFinalPath − θyaw
where R is the radius of the possible final path circle, the circle with the larger di-
ameter in Figure 5.24(c); x and y denote the vehicle position in the local
coordinates; θFinalPath is the tangent angle of the possible final path circle at the ve-
hicle position; and θyaw is the vehicle's yaw angle. The final path is achieved when
θFinalPath = θyaw.
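The final path circle test can be sketched as follows (the symbol names are assumed, and x > 0 in the local frame):

```python
import math

def final_path_circle(x, y):
    """Radius and tangent angle of the candidate final path circle through
    the vehicle position (x, y) in the local frame (x > 0 assumed)."""
    R = (x**2 + y**2) / (2.0 * x)
    theta_final_path = math.asin(y / R) + math.pi / 2.0
    return R, theta_final_path

def on_final_path(x, y, theta_yaw, tol=1e-2):
    """The final path is achieved when the tangent angle equals the yaw."""
    _, theta_final_path = final_path_circle(x, y)
    return abs(theta_final_path - theta_yaw) < tol
```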
This maneuver ends once the final path circle is achieved. However, the radius of
the final path circle may be smaller than the minimum turning radius, in which
case the path circle cannot be followed by the vehicle model due to its turning
limitation, as illustrated in Figure 5.25(a). The autonomous car then keeps going
forward to reduce the difference between the final path tangent angle and its yaw
angle, following the circle plotted with the dashed line in Figure 5.25(a). During
this course, an auxiliary circle, plotted with a dashed line, may be generated
towards a final path circle (plotted with a dash-dot line) with a radius larger than
the minimum turning radius, as shown in Figure 5.25(b).
Figure 5.27 Parking area in Urban Challenge 2007. Approach and backup.
Figure 5.28 (a) General situation. (b) Initial position close to boundary. (c) Park from different initial position.
and backing up to be back on the path). In the parking area, two other cars were
parked on the left and right sides of the assigned parking spot. The OSU-ACT was
successfully parked into the parking spot (zone) at each round.
Experimental results with different initial conditions and stationary obstacle
positioning are presented in Figure 5.28. The splines represent the vehicle trajectory,
the solid lines are the parking zone boundaries, and the scattered points are obstacles
detected by LIDAR scanning.
References
[1] Hatipoglu, C., Ü. Özgüner, and M. Sommerville, "Longitudinal Headway Control of Autonomous Vehicles," Proceedings of the IEEE International Conference on Control Applications, 1996, pp. 721–726.
[2] Ioannou, P. A., and Z. Xu, "Intelligent Cruise Control: Theory and Experiment," Proceedings of the 32nd IEEE Conference on Decision and Control, 1993, pp. 1885–1890.
[3] Ioannou, P. A., and C. C. Chien, "Autonomous Intelligent Cruise Control," IEEE Transactions on Vehicular Technology, Vol. 42, 1993, pp. 657–672.
[4] Lu, X. Y., and J. K. Hedrick, "ACC/CACC Control Design, Stability and Robust Performance," Proceedings of the American Control Conference, 2002, pp. 4327–4332.
[5] Lu, X. Y., H. S. Tan, and J. K. Hedrick, "Nonlinear Longitudinal Controller Implementation and Comparison for Automated Cars," Journal of Dynamic Systems, Measurement and Control, Vol. 123, 2001, pp. 161–167.
[6] Swaroop, D., et al., "A Comparison of Spacing and Headway Control Laws for Automatically Controlled Vehicles," Vehicle System Dynamics, Vol. 23, 1994, pp. 597–625.
[7] Shladover, S. E., "Review of the State of Development of Advanced Vehicle Control Systems," Vehicle System Dynamics, Vol. 24, 1995, pp. 551–595.
[8] Tsugawa, S., et al., "An Architecture for Cooperative Driving of Automated Vehicles," IEEE Intelligent Transportation Systems Conference Proceedings, Dearborn, MI, 2000, pp. 422–427.
[9] Cho, D., and J. K. Hedrick, "Automotive Powertrain Modeling for Control," Transactions of the ASME, Vol. 111, 1989, pp. 568–577.
[10] Kotwicki, A. J., "Dynamic Models for Torque Converter Equipped Vehicles," SAE paper no. 820393, 1982.
[11] Molineros, J., and R. Sharma, "Real-Time Tracking of Multiple Objects Using Fiducials for Augmented Reality," Real-Time Imaging, Vol. 7, No. 6, 2001, pp. 495–506.
[12] Orqueda, A. A., and R. Fierro, "Visual Tracking of Mobile Robots in Formation," Proceedings of the 2007 American Control Conference, New York, 2007, pp. 5940–5945.
[13] Gonzales, J. P., "Computer Vision Tools for Intelligent Vehicles: Tag Identification, Distance and Offset Measurement, Lane and Obstacle Detection," Master's Thesis, The Ohio State University, Electrical Engineering Dept., September 1999.
[14] Emig, R., H. Goebels, and H. J. Schramm, "Antilock Braking Systems (ABS) for Commercial Vehicles: Status 1990 and Future Prospects," Proceedings of the International Congress on Transportation Electronics, 1990, pp. 515–523.
[15] Tan, H. S., "Discrete-Time Controller Design for Robust Vehicle Traction," IEEE Control Systems Magazine, Vol. 10, No. 3, 1990, pp. 107–113.
[16] Johansen, T. A., et al., "Hybrid Control Strategies in ABS," Proceedings of the American Control Conference, 2001, pp. 1704–1705.
[17] Nouillant, C., et al., "Hybrid Control Architecture for Automobile ABS," IEEE International Workshop on Robot and Human Interactive Communication, 2001, pp. 493–498.
[18] Johansen, T. A., et al., "Gain Scheduled Wheel Slip Control in Automotive Brake Systems," IEEE Transactions on Control Systems Technology, Vol. 11, No. 6, 2003, pp. 799–811.
[19] Gustafsson, F., "Slip-Based Tire-Road Friction Estimation," Automatica, Vol. 33, No. 6, 1997, pp. 1087–1099.
[20] Haskara, I., C. Hatipoglu, and Ü. Özgüner, "Sliding Mode Compensation, Estimation and Optimization Methods in Automotive Control," Lecture Notes in Control and Information Sciences, Variable Structure Systems: Towards the 21st Century, Vol. 274, 2002, pp. 155–174.
[21] Drakunov, S., et al., "ABS Control Using Optimum Search Via Sliding Modes," IEEE Transactions on Control Systems Technology, Vol. 3, No. 1, 1995, pp. 79–85.
[22] Yu, H., and Ü. Özgüner, "Extremum-Seeking Control Strategy for ABS System with Time Delay," Proceedings of the American Control Conference, 2002, pp. 3753–3758.
[23] Yu, H., and Ü. Özgüner, "Smooth Extremum-Seeking Control Via Second Order Sliding Mode," Proceedings of the American Control Conference, 2003, pp. 3248–3253.
[24] Bakker, E., H. B. Pacejka, and L. Lidner, "A New Tire Model with an Application in Vehicle Dynamics Studies," SAE Technical Paper no. 870421, 1987.
[25] Khatib, O., "Real-Time Obstacle Avoidance for Manipulators and Mobile Robots," International Journal of Robotics Research, Vol. 5, No. 1, 1986, pp. 90–98.
[26] Quinlan, S., and O. Khatib, "Elastic Bands: Connecting Path Planning and Control," Proceedings of Robotics and Automation, Atlanta, GA, 1993, pp. 802–807.
[27] Shah, A. B., "An Obstacle Avoidance Strategy for the 2007 DARPA Urban Challenge," Master's Thesis, The Ohio State University, Electrical Engineering Dept., June 2008.
[28] Pacejka, H. B., Tyre and Vehicle Dynamics, Oxford, U.K.: Butterworth-Heinemann, 2002.
[29] Rajamani, R., Vehicle Dynamics and Control, New York: Springer, 2006.
[30] Acarman, T., and Ü. Özgüner, "Rollover Prevention for Heavy Trucks Using Frequency Shaped Sliding Mode Control," Vehicle System Dynamics, Vol. 44, No. 10, 2006, pp. 737–762.
[31] Hsieh, M. F., and Ü. Özgüner, "A Parking Algorithm for an Autonomous Vehicle," 2008 IEEE Intelligent Vehicles Symposium, Eindhoven, the Netherlands, 2008, pp. 1155–1160.
CHAPTER 6

Maps and Path Planning
Autonomy requires an understanding of the world. Local sensors can provide such
information in a neighborhood of the vehicle of radius 50100 meters. Perhaps
cooperative vehicle-to-vehicle communication can extend that region to a radius
of several hundred meters. In any case, the information available is of a local na-
ture, suitable for short-term vehicle control but not for long-term planning. A fully
autonomous vehicle will require planning behaviors over a much longer timescale.
Ideally, a fully autonomous vehicle would allow navigation from any origin to any
destination without direct human input.
This level of autonomy requires geographic information for an area and a plan-
ning algorithm that can use these maps to generate a plan for the vehicle path and
behavior, as well as mechanisms for replanning when actual current conditions do
not match the contents of the maps. The use of map databases is reasonable due to
the availability of GPS-based positioning.
In this chapter, we will review the common types of map data available for off-
road and on-road situations as well as path planning algorithms suitable for each
situation.
Generally speaking, the available map data falls into two categories: raster data
and vector data. Raster-based data divides an area into a collection of cells or grids,
often of uniform size, and each grid cell is assigned one or more values representing
the information being mapped. This approach is similar to the grid-based sensor
fusion algorithms that were introduced in Chapter 4. Generally speaking, raster
data consumes a large amount of memory but requires simpler data processing
techniques as the information is represented at a more direct level. Examples of
raster data include digital elevation maps and land cover or usage maps.
Vector data expresses information in terms of curves, connected line segments,
or even discrete points. Curves or line segments can also enclose and represent
areas. Vector data is a more complex and rich representation and thus generally re-
quires less storage but more complicated processing algorithms. Examples include
digital line graph road maps, constant altitude topographic maps, and real estate
parcel databases.
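The two representations can be contrasted with a toy sketch; all of the values below are made up for illustration:

```python
# Raster: a uniform grid of cells, each holding one value (elevation, m).
# Lookups are direct, but memory grows with area and resolution.
elevation = [
    [120, 121, 123],
    [119, 120, 122],
    [118, 118, 121],
]
CELL_M = 10.0   # uniform cell spacing in meters

def elevation_at(x_m, y_m):
    """A point query on raster data is a simple array lookup."""
    return elevation[int(y_m // CELL_M)][int(x_m // CELL_M)]

# Vector: the same road as a polyline of (x, y) vertices plus attributes.
# Compact, but point queries require geometric computation.
road = {"name": "centerline", "points": [(0.0, 0.0), (25.0, 5.0), (60.0, 5.0)]}

print(elevation_at(15.0, 25.0))  # → 118
```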
1. Digital elevation map (DEM) data at 10-m resolution are widely available
from the USGS NED database for most of the continental United States.
The resolution may vary; for example, in undeveloped desert areas the resolution
may only be 30m. This provides only a very coarse terrain descrip-
tion, obviously not meeting the resolution and accuracy requirements of
autonomous vehicle navigation. Furthermore, most of the USGS DEM data
is rather old, which may pose a problem in some developed areas.
2. Shuttle Radar Topography Mission (SRTM) data is available, but its
resolution is still a modest 30m. However, as the data was taken in 2000, it
is more recent than the NED datasets. The ASTER global digital elevation
map dataset has also recently been released, although it too has a 30-m
resolution.
3. USGS land cover data (metadata) is available for many areas.
4. USGS digital orthophoto quarter quadrangles (DOQQ) provide aerial im-
age data at 1-m resolution.
Figure 6.2 is an example of data plotted from a U.S. Geological Survey base
map called the 7.5-minute DEM, containing terrain elevation for ground positions
at regularly sampled (approximately 30m) horizontal intervals referenced to the
Universal Transverse Mercator (UTM) projection. This particular area is near
Sturgis, South Dakota. These datasets are available online at
http://edc.usgs.gov/products/elevation/dem.html and http://seamless.usgs.gov.
Land cover information can also be gathered from the National Landcover
Characterization Dataset (NLCD). Aerial photograph and digital orthophoto
quarter quadrangle (DOQQ) images are useful to visualize the path and verify
its accuracy. A digital orthophoto is a digital image of an aerial photograph in
which displacements caused by the camera and the terrain have been removed.
1. USGS digital line graphs (DLG) extracted from existing maps and other
sources, which include roads, railroads, pipelines, constant elevation, hy-
drological features, political boundaries, and a variety of point features.
It should be noted that, in general, currently available map data is neither suf-
ficiently accurate for autonomous control purposes nor sufficiently detailed for
complete behavior and path planning. For example, the streets may only be repre-
sented by a road centerline.
Dijkstra's algorithm computes the lowest-cost path between a given vertex and
every other vertex in the graph. The algorithm is as follows:
1. Given some initial node, mark its value as 0 and the value of all other nodes
as infinity.
2. Mark all nodes as unvisited.
3. Mark the given initial node as the current node.
4. For all unvisited neighbors of the current node:
a. Calculate a cost value as the sum of the value of the current node and
the connecting edge.
b. If the current value of the node is greater than the newly calculated cost,
set the value of the node equal to the newly calculated cost and store a pointer
to the current node.
5. Mark the current node as visited. Note that this finalizes the cost value of
the current node.
6. If any unvisited nodes remain, find the lowest cost unvisited node, set it as
the current node, and continue from step 4.
If the user is only interested in the lowest cost path from the initial node to a
given final node, the algorithm may be terminated once the final node is marked
as visited. One can then backtrack through the node pointers to find the route cor-
responding to the lowest cost.
Figure 6.4 illustrates a simple execution of Dijkstra's algorithm. Figure 6.4(a)
gives a sample graph network, consisting of five nodes and connections with
weights (or lengths) as shown. The origin is node 1 and the desired destination is
node 5. As we begin the process [Figure 6.4(b)], the current node, shown with a
double circle, is set to the initial node, its cost is set to 0, and the cost of all other
nodes is set to infinity. In the first pass [Figure 6.4(c)], the cost for all nodes con-
nected to the current node is set to the value of the current node, in this case 0, plus
the edge cost and a pointer (the dashed line) is set to point back to the current node.
In the second pass [Figure 6.4(d)], the current node is chosen as the lowest cost
unvisited node, in this case node 4, and node 1 is marked as visited (crossed out in
this example). The cost to node 3 is unchanged since the value of the current cost of
node 4 plus the cost of the edge to node 3 is higher than the current cost of node 3.
The cost for node 5 is set to the sum of the cost for node 4 and the cost of the edge
between node 4 and node 5. Since node 5 is the destination, the algorithm would
normally stop at this point, reporting the optimal value of the cost from node 1 to
node 5 as 4 and the optimal path as 1 → 4 → 5. Note however that the value of
the current cost of node 2 is not the optimal value. If we wished to complete the
calculation of the cost of node 2, we would need to take one more step, in which
the current node would be switched to node 3, node 4 would be marked as visited,
and the value of the cost of node 2 would become 5 instead of 6.
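The steps above, including the early-termination and backtracking behavior, can be sketched as follows. The five-node graph here is hypothetical; its weights are chosen to reproduce the kind of relaxation behavior described above, and it is not copied from Figure 6.4:

```python
import heapq

def dijkstra(graph, source, target=None):
    """Dijkstra's algorithm over an adjacency dict {node: {neighbor: weight}}.
    Returns (cost, predecessor) dicts; stops early once the target is final."""
    cost = {n: float("inf") for n in graph}   # step 1: all other nodes infinite
    cost[source] = 0
    pred = {source: None}
    visited = set()                            # step 2: all nodes unvisited
    heap = [(0, source)]
    while heap:
        c, u = heapq.heappop(heap)             # lowest-cost unvisited node
        if u in visited:
            continue
        visited.add(u)                         # step 5: cost of u is now final
        if u == target:
            break
        for v, w in graph[u].items():          # step 4: relax the neighbors
            if v not in visited and c + w < cost[v]:
                cost[v] = c + w
                pred[v] = u                    # pointer back to the current node
                heapq.heappush(heap, (cost[v], v))
    return cost, pred

def backtrack(pred, target):
    """Follow the stored pointers from the target back to the source."""
    path = []
    n = target
    while n is not None:
        path.append(n)
        n = pred[n]
    return path[::-1]

# Hypothetical five-node graph (undirected edges stored both ways)
graph = {
    1: {2: 6, 3: 2, 4: 1},
    2: {1: 6, 3: 3},
    3: {1: 2, 2: 3, 4: 2},
    4: {1: 1, 3: 2, 5: 3},
    5: {4: 3},
}
cost, pred = dijkstra(graph, 1, target=5)
print(cost[5], backtrack(pred, 5))  # → 4 [1, 4, 5]
```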
The Dijkstra algorithm is generally considered the basis for optimal path plan-
ning and routing algorithms. However, it is computationally expensive and so a
number of extensions, usually employing some heuristic methodology to obtain an
efficient though probably suboptimal solution, have been proposed, including the
A* algorithm, which will be discussed in Section 6.2.
Figure 6.4 (ad) Example of Dijkstras algorithm.
6.2 Path Planning

The role of a path planning software module is to provide the coordinates of a
traversable route connecting a given initial and desired final position. The following
section discusses some of the issues in the architecture of a path planning module
[1], as well as the heuristics used in the implementation of path planning software.
Typically, the path planning task can be divided into two parts:
Global path planning: This subtask deals with generating the best route to
reach the destination in the form of a series of road segments or points such
that the route is divided into dense evenly spaced segments.
Local path planning: This subtask performs the path planning for each seg-
ment generated by the global path planner. Hence, the local path planner
would be responsible for obstacle avoidance of unknown obstacles. It would
need to be equipped with capabilities such as perceiving objects, road follow-
ing, and maintaining continuous real-time motion.
Incomplete knowledge: In the real world the environment is not static, that
is, the position of the obstacles is continuously changing. As a result, algo-
rithms developed for path planning should accommodate these uncertainties.
One approach is to use sensors to update the map as a change occurs in it or as
problems with the currently planned route are identified. Another approach,
suitable mainly for off-road navigation, is to employ an obstacle avoidance
algorithm to circumvent obstacles that were not indicated on the map.
Robot dynamics: The global path planner should consider the vehicle dy-
namics such as turning radius, dimensions of the vehicle, and maximum ne-
gotiable elevation so as to generate a safe and efficient route. Properties like
inertia, response time to steering command, stopping distance, and maxi-
mum speed of motion are important for motion planning on the route gener-
ated by the global planner. These factors can be incorporated into the cost
function of the algorithm.
Terrain dynamics: It is important to know the behavior of the vehicle when
operating on different terrain types. For example, some vehicles may not be
equipped to handle soft soil and could sink into it. Extreme terrain such as
very rocky conditions could also pose a serious hazard, and a steep slope
increases load on the drive train and also raises the center of gravity, which
may destabilize the vehicle.
Global considerations: Knowledge about the global conditions such as dan-
gerous regions or resource limitations is yet another decisive factor for the
global planner. It should compute the route in such a way that the robot
achieves near-continuous motion. The goal of the global planner is to find an
optimal path, that is, a path that satisfies requirements such as cost, energy,
safety, or mechanical constraints for the given situation. The algorithm minimizes
a cost function that encodes the constraints mentioned above.
Considering the above factors, the implementation steps of the path planning
module can be broadly classified into:
problem can be solved using conventional off-line path planning techniques. On the
other hand, if there is no information available about the terrain then the vehicle
has to rely on its sensors completely for gathering information about the surround-
ings. This would make the process too slow and computationally expensive since
the sensor data has to be collected first and then processed before the global planner
can use this information. Considering both of these alternatives, it can be concluded
that some prior information about the surroundings, even if it is incomplete, is
highly advantageous for the path planning module [2].
Terrain information is typically described by:
Surface data is usually available in the form of an evenly spaced grid format
such as the digital elevation model (DEM), in which only the ground surface is
considered. Another topographic model of the Earth's surface is the digital surface
model (DSM). This format depicts the top of all surfaces whether bare-earth, non-
elevated manmade surfaces, elevated vegetative, or elevated manmade surfaces.
It is often necessary to segment this type of data into smaller areas to reduce the
computational processing and memory requirements.
The map databases give us information about elevation, roads, and land cover
for the designated terrain. The path planning software has to access this database
and use the information to optimize the route based on a prescribed cost function.
The following are some parameters that can be considered for inclusion in the op-
timization cost function:
Distance traversed.
Traveling on roads as far as possible even if the path becomes somewhat
longer (road database).
Avoiding paths with sudden changes in elevation. This factor is further influ-
enced by the maximum slope the vehicle can negotiate (elevation or terrain
database).
Avoiding frequent changes in direction.
Avoiding regions with forest cover or residential areas (land cover database).
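One way to fold these factors into a single per-segment cost is sketched below; all of the weights and penalty values are assumptions for illustration, not taken from the text:

```python
def route_edge_cost(dist_m, on_road, elev_change_m, heading_change_rad,
                    land_cover, w_offroad=3.0, w_slope=20.0, w_turn=5.0,
                    land_cover_penalty=None, max_slope=0.3):
    """Illustrative cost of one route segment combining the listed factors;
    all weights and penalties here are assumed values."""
    if land_cover_penalty is None:
        land_cover_penalty = {"forest": 50.0, "residential": 50.0}
    if abs(elev_change_m / dist_m) > max_slope:
        return float("inf")                  # slope the vehicle cannot negotiate
    cost = dist_m                            # distance traversed
    if not on_road:
        cost *= w_offroad                    # prefer roads, even if somewhat longer
    cost += w_slope * abs(elev_change_m)     # penalize sudden elevation changes
    cost += w_turn * abs(heading_change_rad) # penalize frequent direction changes
    cost += land_cover_penalty.get(land_cover, 0.0)
    return cost

c = route_edge_cost(100.0, on_road=True, elev_change_m=2.0,
                    heading_change_rad=0.1, land_cover="grass")
print(c)  # → 140.5
```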
In our experience, augmenting publicly available data with more detailed and
accurate information about routes that are easily traversed is advantageous and
will allow the vehicle to travel at higher speeds since we are more confident about
the precise latitude and longitude of the roads [1, 3].
It should also be noted that, in production applications, these datasets are
highly compressed and optimized for the needed tasks both to reduce storage and
memory space and to reduce computational complexity, as the software may oth-
erwise spend a considerable amount of time simply reading and parsing the data.
A* has a main loop that repeatedly obtains the vertex n with the lowest f(n) value
from the OPEN list. If n is the goal vertex, then the solution is obtained by back-
tracking from n to its parents. Otherwise, the vertex n is removed from the OPEN
list and added to the CLOSED list.
For each successor vertex n′ of n, if n′ is already in the OPEN or CLOSED list
and its cost is less than or equal to the newly computed f(n′) estimate, then the
newly generated n′ is disregarded. However, if the f(n′) estimate for the copy of n′
already in the OPEN or CLOSED list is greater than that of the newly generated
n′, then the pointer of n′ is updated to n and the new cost estimate of n′ is
determined by f(n′) = g(n′) + h(n′), where g(n′) is the cost accumulated from the
start vertex and h(n′) is the heuristic estimate of the remaining cost to the goal.
If vertex n′ does not appear on either list, then the same procedure is followed
to set a pointer to the parent vertex n and to calculate the f(n′) estimate of the cost.
Eventually n′ is added to the OPEN list and the algorithm returns to the beginning
of the main loop.
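A minimal sketch of this OPEN/CLOSED loop with f(n) = g(n) + h(n); the grid world, obstacle layout, and unit edge costs below are made up for illustration:

```python
import heapq, math

def astar(neighbors, start, goal, cost, h):
    """A* with an OPEN frontier and a CLOSED (finalized) set.
    `neighbors(n)` yields successors, `cost(n, m)` gives the edge cost,
    and `h(n)` is the heuristic estimate of the remaining cost."""
    g = {start: 0.0}
    parent = {start: None}
    open_heap = [(h(start), start)]           # ordered by f(n) = g(n) + h(n)
    closed = set()
    while open_heap:
        f, n = heapq.heappop(open_heap)       # vertex with the lowest f(n)
        if n == goal:                         # backtrack to recover the path
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
        if n in closed:
            continue
        closed.add(n)
        for m in neighbors(n):
            new_g = g[n] + cost(n, m)
            if m in g and g[m] <= new_g:      # the existing estimate is better
                continue
            g[m] = new_g                      # update the pointer and estimate
            parent[m] = n
            heapq.heappush(open_heap, (new_g + h(m), m))
    return None                               # goal unreachable

# 4-connected 4x4 grid with an obstacle wall, Euclidean heuristic
blocked = {(1, 0), (1, 1), (1, 2)}
def neighbors(p):
    x, y = p
    for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if q not in blocked and 0 <= q[0] < 4 and 0 <= q[1] < 4:
            yield q
goal = (3, 0)
h = lambda p: math.hypot(p[0] - goal[0], p[1] - goal[1])
path = astar(neighbors, (0, 0), goal, cost=lambda a, b: 1.0, h=h)
print(len(path) - 1)  # → 9
```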
The central idea behind A* search is that it favors vertices that are close to the
starting point and vertices close to the goal. The A* algorithm balances the two
costs g(n) and h(n) while moving towards the goal. The heuristic function gives the
algorithm an estimate of the minimum cost from the current vertex to the goal.
Hence, it is important to choose a heuristic function that can give the most accurate
estimate. If the heuristic function is lower than (or equal to) the cost of moving
from vertex n to the goal, then A* search guarantees the shortest path. However, if
the heuristic estimate is greater than the actual cost, then the resultant path is not
guaranteed to be the shortest, but the algorithm may execute faster. Some of the
popular heuristic functions used in the case of grid maps are listed in the following
subsections.
h(n) = max(|x − xgoal|, |y − ygoal|)                                    (6.3)

However, it can be modified to account for the difference in costs for moving
along the diagonal:

h_diagonal(n) = min(|x − xgoal|, |y − ygoal|)
h_straight(n) = |x − xgoal| + |y − ygoal|                               (6.4)

h(n) = √((x − xgoal)² + (y − ygoal)²)                                   (6.5)
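The heuristics of (6.3) through (6.5) can be written directly; the octile combination of the two components in (6.4) is one standard way to price diagonal steps and is an assumption here:

```python
import math

def h_chebyshev(x, y, xg, yg):
    # (6.3): diagonal movement allowed, every step costs the same
    return max(abs(x - xg), abs(y - yg))

def h_octile(x, y, xg, yg, d_straight=1.0, d_diag=math.sqrt(2)):
    # (6.4): diagonal steps priced differently from straight steps
    # (the weighted combination below is a standard choice, assumed here)
    diag = min(abs(x - xg), abs(y - yg))
    straight = abs(x - xg) + abs(y - yg)
    return d_diag * diag + d_straight * (straight - 2 * diag)

def h_euclidean(x, y, xg, yg):
    # (6.5): straight-line distance to the goal
    return math.hypot(x - xg, y - yg)

print(h_chebyshev(0, 0, 3, 1))   # → 3
print(h_euclidean(0, 0, 3, 4))   # → 5.0
```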
Then the cost [4] associated with a vertex can be determined using

Cost(n, n′) = f(d)·e^s                                                  (6.6)

where the cost of traveling from vertex n to n′ is a linear function f(d) of the distance
and increases exponentially depending on the slope s. The other constants appearing
in (6.6) are:
Figure 6.8 shows a MATLAB rendering of the above cost function for a sample
terrain map. Figure 6.9 illustrates a sample path planned over actual desert terrain
in southern California. A sample flowchart for the overall path planning algorithm
is shown in Figure 6.10.
Figure 6.8 MATLAB simulation of modified cost function and elevation map for terrain.
Figure 6.11 Sample obstacle map and the resulting Voronoi diagram.
If the Voronoi diagram is generated from a set of points or obstacles, a grid map, or any other representation of the data, then
a vehicle or robot following an edge path is as far away from objects as possible.
One must still plan a path through the resulting graph, using for example the A*
algorithm described above.
Another approach to navigation and path planning involves treating all objects
as if they have an electrical charge. The vehicle also has a charge of the same po-
larity as objects, and the goal is given a charge of opposite polarity. This scheme
produces a vector field with the goal location at the minimum of the field. Various
local or global optimization techniques can be used to find a path through the
valleys of the field. However, potential field methods can be complicated. For ex-
ample, many of the numerical optimization techniques may allow the solution to
be trapped in a local minimum, thus never reaching the final goal.
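A minimal sketch of this charge analogy: an attractive pull toward the goal plus repulsive pushes from obstacles within a finite influence radius, followed by fixed-step gradient descent. The gains, radii, and scenario below are assumed for illustration:

```python
import math

def field_gradient(pos, goal, obstacles, k_att=1.0, k_rep=1.0, rho0=2.0):
    """Negative gradient of an attractive goal potential plus repulsive
    obstacle potentials active within radius rho0 (all gains assumed)."""
    gx = k_att * (goal[0] - pos[0])
    gy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < rho0:
            mag = k_rep * (1.0 / d - 1.0 / rho0) / d**2   # classic repulsive form
            gx += mag * dx / d
            gy += mag * dy / d
    return gx, gy

def descend(start, goal, obstacles, step=0.05, iters=2000):
    """Fixed-step descent on the field; can stall in a local minimum."""
    x, y = start
    for _ in range(iters):
        if math.hypot(goal[0] - x, goal[1] - y) < step:
            break                                  # reached the goal basin
        gx, gy = field_gradient((x, y), goal, obstacles)
        n = math.hypot(gx, gy)
        if n < 1e-9:
            break                                  # stalled (local minimum)
        x, y = x + step * gx / n, y + step * gy / n
    return x, y

x, y = descend((0.0, 0.0), (10.0, 0.0), obstacles=[(5.0, 1.5)])
```

With the obstacle well off the straight line, the descent reaches the goal; placing it directly between start and goal can produce exactly the local-minimum trap described above.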
Figure 6.12 Graph model of a T-junction. Only one exit/entry pair (1, 2) is defined for the right
turn. A left turn onto the left lane on the vertical road is not allowed.
Figure 6.14 Graph model of a zone. Edges are established from zone entries to all nodes inside,
and from these nodes to zone exits.
Our graph definition accurately describes the intricate urban road environment
with consistency and flexibility, and therefore establishes a model for the route
planner to utilize. It also enables the route planner to achieve a more accurate es-
timation of traveling time costs for the edges. Moreover, the generated optimal
routes can be represented as a series of waypoints, which reduces the effort
expended in interpreting and executing the routes in the control module.
nearest approachable edge with respect to its kinematical capability and then pro-
duce a route from that edge to continue with the mission.
To find the nearest reachable edge for the vehicle we need to search the graph
with multiple criteria. We want to minimize the distance from the vehicle position
to the edge as well as the deviation of the vehicle heading from the direction of the
edge.
Consider an arbitrary edge Eij on the directed graph. As shown in Figure 6.16,
point P represents the position of the vehicle and the vector PN indicates its
heading. θ1 is the angle between Eij and PN. The angle between Eij and Pj is
denoted by θ2. We confine our search scope to edges with |θ1| < 90° and |θ2| < 90°.
To determine whether edge Eij is a good match for our vehicle, we need to consider
both how far away P is from Eij and how well the vehicle orientation is aligned
with the edge.
The distance from point P to edge Eij is defined as
d(P, Eij) = min{ |PQ| : Q ∈ Eij }                                       (6.7)
We define a distance threshold D such that all edges Eij with d(P, Eij) < D are
considered nearby edges. An optimization function is then designed as:
f(P, Eij) = (D² − d(P, Eij)²)/(2D²) + cos(θ1)/2                         (6.8)
ρmin = L / tan(φmax)                                                    (6.9)
where L is the distance between the front and rear axles of the vehicle and φmax is
the maximum steering angle. If the vehicle stands too close to the end node of an
Figure 6.16 Edges Eij and Ejk on the graph and vehicle position P with heading PN.
edge it cannot approach that node without backing up. However, reversing on the
road is not allowed. We can determine whether the node is reachable for the vehicle
by the following proposition.
Assume that a vehicle is standing at point P with an orientation PN and there
is a nearby edge Eij. The vehicle has a minimum turning radius of ρmin. Fix point
O1 such that PO1 is normal to PN and |PO1| = ρmin, and also fix point O2 such that
jO2 is normal to Eij and |jO2| = ρmin, as shown in Figure 6.17. If the angle between
PN and edge Eij is acute and |O1O2| ≥ 2ρmin, then the vehicle can arrive at node j
with a proper nonreversal maneuver.
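As a sketch of this reachability test in Python (with the caveat that Figure 6.17 fixes particular sides for O1 and O2, which are not recoverable here, so as a simplification all side combinations are checked; the function names are our own):

```python
import math

def turning_centers(p, heading, r):
    """Both candidate centers of the minimum-radius turning circle at p.
    heading is a unit vector; the centers lie at distance r along the two normals."""
    nx, ny = -heading[1], heading[0]          # left normal
    return [(p[0] + r * nx, p[1] + r * ny),   # left-turn center
            (p[0] - r * nx, p[1] - r * ny)]   # right-turn center

def node_reachable(p, heading, j, edge_dir, rho_min):
    """Node j is reachable without reversing if the angle between the heading
    and the edge is acute and a pair of turning circles (one tangent at P, one
    tangent to the edge at j) satisfies |O1O2| >= 2*rho_min."""
    if heading[0] * edge_dir[0] + heading[1] * edge_dir[1] <= 0.0:
        return False                           # angle between PN and Eij not acute
    for o1 in turning_centers(p, heading, rho_min):
        for o2 in turning_centers(j, edge_dir, rho_min):
            if math.hypot(o1[0] - o2[0], o1[1] - o2[1]) >= 2.0 * rho_min:
                return True
    return False
```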
We use this fact to decide whether edge Eij of the search results is approachable.
If not, we check the successors Ejk of that edge and choose the one with maximum
f(P, Ejk) as the final result. In short, the searching algorithm works as follows:

E1 = arg min { d(P, Eij) : Eij ∈ E, |θ1| < 90°, |θ2| < 90° }   (6.10)
Step 3: Check whether the edge E2 is reachable by the vehicle. If not choose
an edge E3 with the largest f(P, Ejk) from its successors. The edge found is the
best match for the vehicle.
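The distance (6.7) and score (6.8) translate directly into code. The following sketch uses our own helper names, assumes the heading is a unit vector, and for brevity enforces only the |θ1| constraint (the |θ2| test against node j would be added analogously):

```python
import math

def point_segment_distance(p, a, b):
    """d(P, Eij): the shortest distance from point p to segment ab, as in (6.7)."""
    ax, ay = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * ax + (p[1] - a[1]) * ay) / (ax * ax + ay * ay)
    t = max(0.0, min(1.0, t))                 # clamp projection onto the segment
    qx, qy = a[0] + t * ax, a[1] + t * ay
    return math.hypot(p[0] - qx, p[1] - qy)

def edge_score(p, heading, a, b, D):
    """f(P, Eij) of (6.8): rewards edges that are both close and well aligned.
    Returns None for edges that are not 'nearby' or violate the angle bound."""
    d = point_segment_distance(p, a, b)
    if d >= D:
        return None                           # not within the distance threshold
    ex, ey = b[0] - a[0], b[1] - a[1]
    cos_t1 = (heading[0] * ex + heading[1] * ey) / math.hypot(ex, ey)
    if cos_t1 <= 0.0:
        return None                           # |theta1| >= 90 degrees
    return (D * D - d * d) / (2 * D * D) + cos_t1 / 2.0
```

The best candidate among nearby edges is then simply the one maximizing `edge_score`.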
estimate of the cost of sticking to the original plan and thus remaining stationary
while waiting for the road to clear. This high cost leaves the original plan as the last
yet still possible option.
Second, new nodes and edges might need to be added to the graph to allow U-
turns on both sides of the blocked road segment. It is also possible that the destina-
tion might be unreachable since it is on the road segment past the road blockage.
As an alternative to simply waiting for the blockage to clear and continuing with
the original plan, the vehicle could navigate in order to reach the far road seg-
ment beyond the blockage and then reach the destination by a U-turn maneuver as
shown in Figure 6.18. To incorporate a U-turn edge into the defined graph on the
destination side of the barrier, the planner can identify the lane of the destination
and find a lane next to it with opposite direction. On this latter lane the planner
searches for a waypoint located closest to the destination, or defines it as a node if
a waypoint does not exist, and establishes an edge between this node and the check-
point corresponding to the U-turn.
It is reasonable to assume that the road blockages will be cleared after a cer-
tain length of time Tblk and the graph can update itself again for later planning.
The planner keeps records of all blocked segments with time stamps and releases a
record and undoes the corresponding modifications after Tblk seconds have passed
since the establishment of the record.
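This bookkeeping can be sketched as a small registry; the class and method names here are hypothetical:

```python
import time

class BlockageRegistry:
    """Remember blocked road segments with timestamps and undo the
    corresponding graph modifications after T_blk seconds have passed."""
    def __init__(self, t_blk, now=time.monotonic):
        self.t_blk = t_blk
        self.now = now
        self.records = {}            # edge id -> (timestamp, undo callback)

    def block(self, edge, undo):
        """Record a blockage together with a callback that reverts the
        graph changes (cost increase, added U-turn nodes and edges)."""
        self.records[edge] = (self.now(), undo)

    def release_expired(self):
        """Release records older than T_blk and undo their modifications."""
        expired = [e for e, (t, _) in self.records.items()
                   if self.now() - t >= self.t_blk]
        for e in expired:
            _, undo = self.records.pop(e)
            undo()
        return expired
```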
t(Eij) = L(Eij) / v(Eij) + td   (6.12)
where L(Eij) is the physical length of Eij and v(Eij) is the expected average driving
speed the vehicle uses for the edge. This can depend on a speed limit extracted from
the map database, some indication of the type of road, the length of the segment,
and the presence of intersections. The term td is the expected time delay due to stop
signs or U-turn maneuvers. It can be constant for simplicity or a function of the
vehicle speed to achieve higher accuracy.
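As a sketch, the edge cost (6.12) is a one-liner; allowing the delay term to be a callable of the expected speed, as suggested above, is our own generalization:

```python
def edge_travel_time(length_m, speed_mps, t_delay=0.0):
    """t(Eij) = L(Eij) / v(Eij) + t_d as in (6.12); t_delay may be a constant
    or a callable of the expected speed for higher accuracy."""
    t_d = t_delay(speed_mps) if callable(t_delay) else t_delay
    return length_m / speed_mps + t_d
```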
Figure 6.19 illustrates a minimum-time route on an example map. The vehicle
starts from node S and is to arrive at the destination node T within the shortest
time. Therefore the path planner would choose the highlighted route which runs
through nodes A, B, and C over the other candidate route with nodes D, E, F, G, H,
J, and C, which is shorter in distance but involves two more intersections.
A* search [8] is known as an efficient and effective algorithm for the short-
est path problem with a single source and single destination. The key
element of A* is the heuristic estimate h(n) for the so-called cost-to-
go, which is the cost from node n to the destination. To ensure that the optimal
solution is always found, h(n) has to be admissible and consistent [9]. When aim-
ing at a minimum-distance route, our h(n) is the straight-line distance from node
n to the destination. This distance heuristic h(n) fulfills the requirements of [9].
For optimality in time, we define our h(n) as the straight-line distance from node
n to the destination divided by the upper bound of maximum speed limits over
the network. As a scaling of the distance heuristic, this preserves the properties of
admissibility and consistency.
The implementation of an A* algorithm to determine a minimal cost route
from a start node s to an end node t follows the logic below. The module maintains
two sets of nodes, an open set P and a closed set Q. To estimate the lowest cost
of routes passing through node n, the function f(n) = g(n) + h(n) is defined for all
nodes n in the open set P. Here g(n) is the lowest cost of the routes from s to n with
the routes only consisting of nodes in set Q and h(n) is an estimate of the cost-to-
go. Initialize set P to contain only the start node s and Q to be empty. At each step,
move the node n with the smallest f(n) value from P to Q, add the successors of n
to set P if they are not already present, and update f(n) for all these successors in P.
Figure 6.20 (a, b) Sample routes in a simple road network. The dashed lines imply the center lines
of the available lanes, with waypoints marked by dots.
lane following, intersection processing, and so forth) that define the required ma-
neuvers and therefore the corresponding vehicle control strategies [10].
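The open-set/closed-set A* loop described above can be sketched as follows. This is a minimal version with our own data layout: graph[n] is a list of (successor, edge cost) pairs; for minimum-time routing, the edge costs would come from (6.12) and h(n) would be the straight-line distance divided by the maximum speed:

```python
import heapq

def a_star(graph, s, t, h):
    """A* over graph[n] = [(successor, edge_cost), ...] with an admissible,
    consistent heuristic h(n). Returns (route, cost) or (None, inf)."""
    g = {s: 0.0}                     # lowest known cost from s to each node
    parent = {s: None}
    open_heap = [(h(s), s)]          # open set P keyed by f(n) = g(n) + h(n)
    closed = set()                   # closed set Q
    while open_heap:
        _, n = heapq.heappop(open_heap)
        if n in closed:
            continue                 # stale entry for an already-settled node
        closed.add(n)
        if n == t:                   # destination settled: reconstruct route
            route = []
            while n is not None:
                route.append(n)
                n = parent[n]
            return list(reversed(route)), g[t]
        for m, c in graph.get(n, ()):
            if m in closed:
                continue
            cand = g[n] + c
            if cand < g.get(m, float("inf")):
                g[m] = cand
                parent[m] = n
                heapq.heappush(open_heap, (cand + h(m), m))
    return None, float("inf")
```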
6.2.4.6 Example
The example shown in Figure 6.20 demonstrates the capability of solving planning
problems in a dynamically varying route network with successful real-time replan-
ning. The road network consists of a traffic loop and two extended stubs with U-
turns permitted at the ends of the stubs. The vehicle at point P was heading for the
destination checkpoint Q when it found the blocking barrier in its way at point R.
An alternative route to the destination was calculated, which happened to be much
shorter thanks to the U-turn allowed at the blockage. Both the original route and
the new plan are shown in the figure.
References
[1] Toth, C., et al., Mapping Support for the OSU DARPA Grand Challenge Vehicle, Proc.
2006 IEEE Intelligent Transportation Systems Conference, Toronto, Canada, September
17–20, 2006, pp. 1580–1585.
[2] LaValle, S. M., and J. J. Kuffner, Randomized Kinodynamic Planning, International
Journal of Robotics Research, Vol. 20, No. 5, May 2001, pp. 378–400.
[3] Dilip, V., Path Planning for Terramax: the Grand Challenge, M.S. Thesis, The Ohio State
University, 2004.
[4] Marti, J., and C. Bunn, Automated Path Planning for Simulation, Proc. of the Confer-
ence on AI, Simulation and Planning (AIS94), 1994.
[5] Defense Advanced Research Projects Agency, Urban Challenge, http://www.darpa.mil/
grandchallenge.
[6] Fu, L., A. Yazici, and Ü. Özgüner, Route Planning for the OSU-ACT Autonomous Vehicle
in DARPA Urban Challenge, Proc. 2008 IEEE Intelligent Vehicles Symposium, Eind-
hoven, the Netherlands, June 4–6, 2008, pp. 781–786.
[7] Dubins, L. E., On Curves of Minimal Length with a Constraint on Average Curvature,
and with Prescribed Initial and Terminal Positions and Tangents, American Journal of
Mathematics, Vol. 79, July 1957, pp. 497–516.
[8] Hart, P. E., and N. J. Nilsson, A Formal Basis for the Heuristic Determination of Minimum
Cost Paths, IEEE Transactions on Systems Science and Cybernetics, Vol. 4, No. 2, July
1968, pp. 100–107.
[9] Russell, S. J., and P. Norvig, Artificial Intelligence: A Modern Approach, Upper Saddle
River, NJ: Prentice-Hall, 1995, pp. 97–104.
[10] Kurt, A., and Ü. Özgüner, Hybrid State System Development for Autonomous Vehicle
Control in Urban Scenarios, Proc. of the IFAC World Congress, Seoul, Korea, July 2008.
CHAPTER 7
Vehicle-to-Vehicle and Vehicle-to-Infrastructure Communication
7.1 Introduction
The VII consortium is also considering issues related to business models, legal is-
sues, security and technical feasibility, and acceptance among the various stake-
holder parties.
In the European Union, a number of research activities have been coordinated
under the eSafety Program [3, 4], including the Car2Car Consortium [5], COMe-
Safety [6], and the Cooperative Vehicle-Infrastructure Systems [7] program, among
many others [8, 9]. In Japan, various programs have used V2V communication of
some form, including the Advanced Cruise Assist Highway Safety Research Associ-
ation (AHSRA), the Active Safety Vehicle (ASV3) programs, the Advanced Vehicle
Control Systems (AVCS), and ITS Energy activities [10–12].
Single hop: Two vehicles are close enough to communicate directly with each
other (either broadcast or point to point) with low latency.
Multihop: Vehicles that cannot directly communicate may forward messages
through intermediate nodes.
Multihop communication has been the subject of much research [13], but no
standard has emerged, and in fact the technical difficulties of establishing routing
and acknowledgment protocols along with potentially high latency may limit its
use to very specific applications such as medium range emergency notification or
other sparse broadcast communication applications.
Many early experiments in V2V communication were carried out with stan-
dard wireless LAN technologies, for example IEEE 802.11b, operating in the
2.4-GHz ISM band, and some success was achieved at ranges of up to several hundred
meters. But the technical difficulties inherent in vehicle and traffic situations, in-
cluding the high relative velocities (Doppler effects), a safety-critical low-latency
requirement, operation in an urban environment (multipath), and spectrum com-
petition from other users in unlicensed frequency bands render this an unrealis-
tic solution for commercial deployment. The IEEE 802.11p/WAVE standards have
recently emerged as the current consensus for the implementation of V2V and local
V2I communications. They will be described later in this chapter.
One very important issue in the use of DSRC technology is the level of market
penetration. The V2V technology will not provide a benefit to those who have it
until a sufficient number of vehicles have been equipped with the sensors, communi-
cation technology, processing, and human interfaces required for the applications.
The V2I technology faces a similar hurdle, but in addition it requires government
7.2 Vehicle-to-Vehicle Communication (V2V)
Local multivehicle sensor fusion to provide better global maps and situation
awareness;
Energy-saving applications, for example, platooning and energy manage-
ment and control for HEVs in stop-and-go traffic.
A wide range of technologies have been considered, or in some cases deployed, for
V2V and V2I communications. This section will describe some of those that have
been actively implemented.
7.4.2 Cellular/Broadband
As we have previously mentioned, cellular telephone and broadband data services
have become ubiquitous. Pricing, at least for individual users, is still rather high,
but vehicle OEMs and other providers have negotiated pricing arrangements for par-
ticular services. Perhaps the best known is the GM OnStar service, which provides
driver information including directions, stolen vehicle location, crash detection and
emergency services notification, and other services. BMW Assist is a similar service.
To date, these services have been implemented by specific vehicle manufacturers
and are not available outside their vehicle brands.
(430 Kbps). Several test deployments were made, including a traffic management
application in Louisville, Kentucky. Research involving high spectral efficiency
modulation schemes and software-defined radio development was carried out at
Purdue University [16] and at The Ohio State University [17]. In December 2007,
the channels were still allocated to FHWA but the only known operational system
consisted of a Delaware DOT traffic monitoring application that was phased out in
2009 [18]. FHWA has since dropped support for the licenses. While the demonstra-
tions and deployments were successful, and there were RF propagation advantages
available in this lower frequency band, the extremely narrow channel bandwidth
has generally proved impractical for large-scale deployment.
Recently, there have been requests to reallocate certain channels within the
700-MHz band for use by both public safety and intelligent transportation system
applications to provide licensed high-speed broadband capability using a cellular-
like network.
Figure 7.5 shows a demonstration application developed at OSU in 2003 for
a long-distance information and warning system based on a 220-MHz software-
defined radio (SDR).
The United States, since 1999, and the European Union, since 2008, have allo-
cated spectrum in the 5.9-GHz band for wireless DSRC systems for vehicular safety
and intelligent transportation system applications. The standards for DSRC in the
United States are IEEE 802.11p, finalized in 2010, and WAVE (IEEE 1609), of
which 1609.3 and 1609.4 have been approved and 1609.1 and 1609.2 remained
in draft or trial use status at the time of this writing.
Figure 7.5 (a, b) The OSU demonstration of V2V networking in 2003.
The 802.11p standard is
derived from the 802.11a wireless LAN standard and thus provides a fairly easy
and inexpensive route to produce communication electronics and hardware. These
standards provide network and application support, as well as sufficient range and
data rates for both V2V and V2I communication applications.
7.5 802.11p/WAVE DSRC Architecture and U.S./EU Standards
IEEE 802.11p, which defines the physical and medium access layer;
IEEE 1609.1, which defines resource, system, and message management as
well as one possible interface to applications;
IEEE 1609.2, which defines security and encryption services;
IEEE 1609.3, which defines the network and transport layers and the new
WAVE short message (WSM);
IEEE 1609.4, which defines the channelization, which is unique to the WAVE
stack, and message prioritization occurring in the MAC layer.
The first task of the OFDM PLCP sublayer is to create a frame that can be eas-
ily demodulated at the receiver. This frame is called the PLCP protocol data unit
(PPDU), a breakdown of which is shown in Figure 7.7.
A PPDU is created by building a preamble of fixed length (for 802.11p, this is
32 μs), which is used for synchronization at the receiver. Following the preamble
is a PLCP header using binary phase shift keying (BPSK) and a fixed coding rate of
R = 1/2. This header provides the receiver with the coding rate, modulation type, and
length of the rest of the message. The receiver can use these values to begin de-
modulating the message. The next field is the service field; it contains the scrambler
seed that was used to randomize the message. This is important for descrambling
the message and undoing the circular convolution.
400 to 500 bytes and have a new EtherType (002) to allow the WAVE network
stack to handle them differently than the normal 802.2 frames. Service channels
can use either IPv6 protocols or WSMP. Both the CCH and SCHs use a priority
queuing scheme based on 802.11e EDCA.
Figure 7.9 shows the priority queues used on the channels. Each queue repre-
sents the access category index or ACI. Table 7.2 shows some of the parameters
used for the CCH prioritization (the SCH use similar values). These values are used
for determining which message will be sent next. This is useful if there are many
queued regular priority messages to be sent and a high priority emergency message
is created. The arbitration interframe space number (AIFSN) parameters indicate
the minimum number of short interframe space (SIFS) windows a packet of a given
priority will wait. This number will be added to a random number between 1 and
the CW range when a packet reaches the head of the queue. For example, a CCH
packet with a best effort priority could get assigned a wait time of 6 through 13.
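As an illustrative sketch of this wait-time computation (the exact bounds of the random draw are our reading of the text: a draw over 0..CW, so that AIFSN = 6 and CW = 7 give the quoted range of 6 through 13):

```python
import random

def edca_wait_slots(aifsn, cw_max):
    """Number of interframe-space slots a packet waits before transmission:
    the fixed AIFSN for its access category plus a random draw from the
    contention window."""
    return aifsn + random.randint(0, cw_max)
```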
The channel dwell intervals also require the 1609.4 section to query the physi-
cal layer (802.11p) regarding how much time will be required to send a selected
message on the current channel. If the time required exceeds the remaining time for
the current channel interval and thus would interfere with the guard interval, the
messages will be buffered in the MAC layer and transmitted later upon return to
the correct channel. The default channel dwell interval is 50 ms.
tested against the list of registered applications. If an application matches and has
a higher priority than the currently active service, the SCH specified in the WSIE is
set as the next active service channel.
Two types of WAVE devices are supported, the roadside unit (RSU) and the on-
board unit (OBU). The RSU would typically be a service provider deployed on the
roadside that would provide general information, for example, map data like the
geometric intersection data (GID) message or local speed limits. If it is connected to
a traffic light it could also provide signal phase and timing (SPaT) messages. Both
message types are defined in [22]. The OBU is responsible for receiving data from
RSUs and generating and receiving safety messages that relay information about
the vehicle's current state.
In this section we will develop a few examples. Two important operations in high-
way driving will be presented: platoons [24] and adaptive cruise control (ACC), and
the control of merging traffic. A theoretical analysis of an ACC policy is given to
demonstrate the performance improvement with intervehicle communication. We
simulate collision avoidance and merging scenarios to illustrate the effectiveness
of wireless communication in intelligent vehicle applications and to demonstrate
the technology. Merging followed by an emergency braking scenario is studied to
further show the effect of wireless communication on the improvement of possible
automated and cooperative driving [21].
For urban driving we present a stop-and-go control system with and without
wireless communication. In the stop-and-go scenario, decelerations and emergency
braking to avoid collisions is controlled by an automated driving system with a
given sensing range, while starting or acceleration is executed by the human driver.
In the simulations that follow we use a simplified first-order longitudinal model
of the vehicle with a time constant of 10 seconds and a unity gain. Dimensions
and acceleration limits for several classes of vehicles are shown in Table 7.3. We
will only evaluate the effect of the loop closure delay (caused by communications
delays) on safety. The details of how the delay is produced will not be considered.
Thus, there is no need to precisely model the physical layer or MAC layer of the
V2V communication devices.
Modeling driver behavior is a complex but necessary part of simulations used
for studying new technologies, since individual drivers perceive and react different-
ly. The driver's perception and reaction time plays a significant role in all problems
related to safety. A simple model, such as a parameterized first- or second-order
linear system, can be used to define a set of driver types, and instances of each type
can be distributed among the simulated vehicles. The driver model determines the
driver's acceleration, deceleration, and response to warnings. Thus different driv-
ers may take different actions when they face the same situations. For example,
when drivers notice a possible collision ahead, conservative drivers may perform
an emergency braking action while more aggressive drivers will only release the
throttle pedal.
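A parameterized driver model of this kind can be sketched as follows; this minimal form (a first-order lag preceded by a fixed perception/reaction delay, with tau and the delay defining the driver type) is our own illustration:

```python
class DriverModel:
    """First-order lag with a perception/reaction delay. A small tau and a
    short delay model an attentive, aggressive driver; large values model a
    sluggish, conservative one."""
    def __init__(self, tau, delay_steps):
        self.tau = tau
        self.pending = [0.0] * delay_steps  # commands not yet perceived
        self.a = 0.0                        # current acceleration output

    def step(self, command, dt):
        self.pending.append(command)
        perceived = self.pending.pop(0)     # command issued delay_steps ago
        self.a += dt * (perceived - self.a) / self.tau  # first-order response
        return self.a
```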
where u(t) is the commanded input from the driver to the ACC system. Then u(t)
could be selected as

u(t) = k1(xL − xF) + k2(ẋL − ẋF) + k3

where k1, k2, k3 represent PD and feedforward controller gains that might be picked
subconsciously by a driver or after analysis by the ACC designer. Another way of
formulating the same problem is

u(t) = −(1/h)(ε̇ + λδ)   (7.1)

where

ε = xF − xL + L,  δ = xF − xL + L + h ẋF
The three parameters (h, λ, L) are selected by the designer. It has been shown
that this control law can ensure string stability [25–27]. If we are interested in pla-
toon formation, this control can be applied to every vehicle in a platoon.
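A Python sketch of a constant time-headway law of this form follows; the sign conventions are our reconstruction of the garbled source, so treat it as illustrative rather than the authors' exact formulation:

```python
def acc_command(x_l, v_l, x_f, v_f, h=1.0, lam=0.2, L=5.0):
    """Constant time-headway ACC sketch: u = -(eps_dot + lam * delta) / h,
    where eps = x_f - x_l + L and delta = eps + h * v_f. Positive u
    accelerates the follower; at spacing L + h*v_f the command is zero."""
    eps_dot = v_f - v_l                     # relative velocity term
    delta = (x_f - x_l + L) + h * v_f       # spacing error with time headway
    return -(eps_dot + lam * delta) / h
```

For example, a follower 20m behind a leader at equal speed 20 m/s (desired spacing L + h·v = 25m) receives a small braking command to open the gap.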
The ACC policy in a highway scenario in which eight vehicles are grouped in
a platoon is simulated in the following examples. A typical ACC-controlled car-
following case is chosen [25–28] with a desired constant time headway of 1 second,
a constant desired spacing between vehicles in a platoon of 5m, and a convergence
rate of 0.2. The effect of intervehicle communication is simulated. Extensive studies
have been carried out by researchers at PATH with similar results [29].
Figure 7.12 (a, b) Position profiles of vehicles using ACC: distance (m) versus time (sec), with a stationary obstacle; in one panel collisions occur, in the other the safety distance is kept.
effect, in which small perturbations are amplified as they travel back through the
platoon leading to oscillations of increasing amplitude and eventually to instability
and collisions.
ẍi_des = −(1/h)(ε̇i + λδi)

ẍi_des = −(1/h)[ẋi − ẋi−1 + λ(xi − xi−1 + L + h ẋi)] = −(1/h)[Δv + λ(ΔS + h ẋi)]

where Δv = ẋi − ẋi−1 is the relative velocity change between vehicles in a certain
information updating time interval, ΔS = xi − xi−1 + L is the longitudinal position
difference between vehicles, and L is the safety distance required between vehicles.
Note that ΔS is usually a negative value and its absolute value cannot exceed L since
the ith vehicle follows the (i − 1)th vehicle.
Thus the relationships between velocity, potential velocity changes, and infor-
mation distance are given by

(−ha − Δv − λΔS) / (λh) ≤ ẋi ≤ (ha − Δv − λΔS) / (λh)   (7.2)
Figure 7.14 shows both the upper limit and the lower limit of (7.2). An auto-
mated vehicle has a certain time interval to exchange information and then take ac-
tion accordingly. The duration of this time interval depends on system parameters
such as onboard system data processing time, vehicle reaction time, and commu-
nication interval (if intervehicle communication is available). Then Δv represents
the relative velocity change between two vehicles during this time interval. If we
consider a platoon on the highway in which all the vehicles have reached steady
state (i.e., maintain the same speed as the lead vehicle), then in a given time interval
Δt the maximum relative velocity change is Δv = (amax − amin)Δt, which occurs in the
worst-case scenario of the leading vehicle performing an emergency deceleration
while the following vehicle is accelerating at its maximum rate. The information
distance is defined as the longitudinal position difference between these two ve-
hicles S at a given time stamp and can be viewed as the distance that is needed for
information exchanges.
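The controllability interval (7.2) can be sketched with the symbols defined above and default numbers roughly matching the simulation parameters (h = 1 s, λ = 0.2); since the equation itself is a reconstruction, treat the exact form as illustrative:

```python
def safe_velocity_range(delta_v, delta_s, h=1.0, lam=0.2, a=3.0):
    """Velocity interval within which the vehicle is safely controllable:
    (-h*a - dv - lam*dS)/(lam*h) <= v <= (h*a - dv - lam*dS)/(lam*h).
    a is the magnitude of the acceleration/deceleration limit in m/s^2."""
    lo = (-h * a - delta_v - lam * delta_s) / (lam * h)
    hi = (h * a - delta_v - lam * delta_s) / (lam * h)
    return lo, hi
```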
We simulated two cases in which t is 0.1 and 1.0 second, corresponding to
a Dv of 0.5 and 5.0 m/s, respectively. The results are plotted in Figure 7.14. The
area between the upper limit and lower limit lines is the area of safe controllability
for velocity according to the specific information distance. For example, in Figure
7.14(a), line BF indicates that when the information distance is 25.03m, the vehicle
is safely controllable when its velocity is between 0 m/s and 43.375 m/s. The area
above the upper limit is an uncontrollable region. Line AB, on the other hand, indi-
cates that a vehicle with the velocity of 43.375 m/s requires an information distance
of at least 25.03m and also that no action needs to be taken when the distance be-
tween two vehicles is greater than 100m. In both figures, area I represents the re-
quirement on the information distance for a given velocity while area III represents
the safely controllable range of vehicle velocity for the given information distance.
Comparisons between Figure 7.14(a) and Figure 7.14(b) show that the required
information distance can be shortened by reducing the time interval. However, it
is impractical to reduce the time interval below 0.1 second due to the equipment
constraints. On the other hand, increasing information distance by means of in-
tervehicle communication can ease the system requirement. As shown in Figure
7.14(b), line BC indicates that, even with a long update interval of 1 second, the
vehicle's velocity can reach 63.727 m/s with an information distance of 100m. The
results also show that the vehicle needs to take action only when it falls into the area
between the two lines.
This analysis shows that with intervehicle communications, the updating time
interval for the control of the vehicle can be larger; in other words, we do not need
a communication system with a short transmission interval to ensure the control-
lability of vehicles. This eases the requirement on both the onboard processing
system and the communication equipment bandwidth.
Figure 7.17 shows the detailed merging control algorithm. Upon entering the
system, the merging vehicle broadcasts its position, time stamp, speed, accelera-
tion rate, heading, and size [30]. Vehicles within communication range broadcast
their own position, speed, headway, and size to the merging vehicle. After receiving
the information, the merging vehicle virtually maps itself onto the main lanes and
looks forward to find the closest open spot from itself at the predicted time it will
reach the merging point. It also calculates whether there is any road contention
between itself and its closest follower. It will then broadcast the position of the clos-
est open spot, which is ahead of itself, and the closest follower's ID if there is a po-
tential road contention. Based on this information, the vehicles on the main lanes
compute whether they are affected and determine what to do. Those vehicles that
are ahead of the merging vehicle and behind the closest open spot will change lane
and/or speed up, depending on whether the open spot is in the rightmost lane. The
open spot is then effectively moved backward and matched to the position of the
merging vehicle. If there is a potential road contention between the closest follower
and the merging vehicle, the closest follower will slow down until the required
open spacing is achieved. It should be mentioned that the merging vehicle does
not look for an open spot behind itself, since this would create
a slowdown in the main lane. It also should be noted that all the affected leaders
Figure 7.17 A merging control algorithm with the aid of intervehicle communication.
will match their speeds to the merging vehicle until the merging vehicle completes
its lane change.
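One step of this algorithm, choosing the closest open spot ahead of the merging vehicle's virtually mapped position, can be sketched as follows (the function name and data layout are ours):

```python
def closest_open_spot_ahead(mapped_pos, open_spots):
    """Return the nearest open-spot position at or ahead of the merging
    vehicle's predicted position on the main lane. Spots behind the vehicle
    are ignored, so main-lane traffic is never slowed to create one."""
    ahead = [s for s in open_spots if s >= mapped_pos]
    return min(ahead, default=None)
```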
In the following simulations we assume a typical ACC-controlled car-following
case with a desired constant time headway of 1 second, constant spacing between
vehicles in a platoon of 5 meters, and a convergence rate of 0.2 on the main lane.
Figure 7.18 (a–c) Simulation profiles over time (sec) of the affected leader, affected follower, and merging vehicle: (b) velocity (m/s) and (c) longitudinal distance (m), with annotations "Not enough space for merging" and "Ready for merging."
longer time for vehicles to merge into the right lane, which results in delay. Simula-
tions show the effect of IVC on the delay due to the occupied merging point. Cases
with traffic throughput of 1,200, 1,800, 2,400, and 3,200 vehicles/hour in the right
lane are simulated. The delay of the merging vehicle is calculated by subtracting the
time required for vehicles to merge when there is no traffic in the main lane from
the time obtained by simulation with other traffic. For each throughput rate 100
merging vehicles are simulated. The average delays under various conditions are
plotted in Figure 7.19. The data shows that as traffic throughput increases from
1,200 to 3,200 vehicles/hour, the average delay is increased from 0.3 second to
1.5 seconds when no IVC is employed because the probability of vehicles arriving
at the merging point simultaneously increases. However, the delay is kept almost
constant, near zero, when IVC is employed.
Algorithms such as this one also assist in avoiding collisions due to unsafe
manual merging maneuvers.
Figure 7.19 Delay time improvements: average delay (sec) versus traffic flow (vehicles/hour).
may be appropriate when the traffic flow consists of both automated and manually
controlled vehicles [38].
The stop-and-go urban driving scenario is studied by simulating 12
vehicles, which are divided into two groups. The effect of intervehicle communica-
tion is simulated. A simple human driver model was assumed to represent the reac-
tions of the driver for starting and acceleration control. All the vehicles are initially
traveling with a velocity of 25 mph with time gaps of 1.5 seconds and a safety
distance L chosen as 5m, which is also the sensing range. A stationary obstacle is
placed into the lane of the oncoming vehicles.
hicle is large enough to safely stop and avoid the stationary obstacle during this
low-speed operation.
The speed profiles of vehicles without communications in this simulation sce-
nario are shown in Figure 7.20(b). The longitudinal velocities of the vehicles in the
groups decrease by following the leading vehicle's deceleration. Since there is no
communication between the vehicles or the groups, the speed profile of the vehicles
in the platoons is activated only by the leading vehicle's deceleration. Acceleration/
deceleration profiles of the vehicles are given in Figure 7.20(c). Since the time gap
in this example is large enough for stop-and-go or collision avoidance by emer-
gency braking, the emergency deceleration limit of 0.31g is sufficient to stop the
leader vehicle of the second group within its 5-meter sensing distance.
The speed profiles of the vehicles within both groups are shown in Figure
7.21(b) and the acceleration/deceleration profiles of the vehicles in both groups are
shown in Figure 7.21(c). Since the warning message is received by all the vehicles
in both groups, the vehicles start braking simultaneously in order to avoid possible
obstacles and collisions and maintain a safe following distance or assigned constant
time gap profiles; smooth deceleration and acceleration profiles are obtained.
References
[1] http://www.its.dot.gov.
[2] http://www.vehicle-infrastructure.org.
[3] http://www.esafetysupport.org/en/esafety_activities/.
[4] http://ec.europa.eu/transport/wcm/road_safety/erso/knowledge/Content/04_esave/esafety.
htm.
[5] Car2Car Communication Consortium Manifesto, http://www.car-to-car.org/fileadmin/
downloads/C2C-CC_manifesto_v1.1.pdf.
[6] http://www.comesafety.org.
[7] http://www.cvisproject.org.
[8] http://www.esafetysupport.org/en/esafety_activities/related_projects/index.html.
[9] http://www.ertico.com.
[10] Tsugawa, S., A History of Automated Highway Systems in Japan and Future Issues,
Proc. IEEE Conference on Vehicular Electronics and Safety, Columbus, OH, September
22–24, 2008.
[11] Tsugawa, S., Energy ITS: Another Application of Vehicular Communications, IEEE
Communications Magazine, November 2010, pp. 120–126.
[12] http://www.mlit.go.jp/road/ITS/.
[13] Korkmaz, G., E. Ekici, and F. Özgüner, An Efficient Ad-Hoc Multi-Hop Broadcast Proto-
col for Inter-Vehicle Communication Systems, IEEE International Conference on Com-
munications (ICC 2006), 2006.
[14] http://www.its.dot.gov/cicas/.
[15] http://www.vics.or.jp/english/vics/pdf/vics_pamph.pdf.
[16] Krogmeier, J. V., and N. B. Shroff, Final Report: Wireless Local Area Network for ITS
Communications Using the 220 MHz ITS Spectral Allocation, FHWA/IN/JTRP-99/12,
April 2000.
[17] Fitz, M. P., et al., A 220 MHz Narrowband Wireless Testbed for ITS Applications, The
Fourth International Symposium on Wireless Personal Multimedia Communications, Aal-
borg, Denmark, September 2001.
[18] http://www.ntia.doc.gov/osmhome/spectrumreform/Spectrum_Plans_2007/Transporta-
tion_Strategic_Spectrum_Plan_Nov2007.pdf.
[19] Tokuda, K., M. Akiyama, and H. Fujii, DOLPHIN for Inter-Vehicle Communications
System, Proceedings of the IEEE Intelligent Vehicle Symposium, 2000, pp. 504509.
[20] Shiraki, Y., et al., Development of an Inter-Vehicle Communications System, OKI Tech-
nical Review 187, Vol. 68, September 2001, pp. 1113, http://www.oki.com/en/otr/down-
loads/otr-187-05.pdf.
[21] Tsugawa, S., et al., A Cooperative Driving System with Automated Vehicles and Inter-
Vehicle Communications in Demo 2000, Proc. IEEE Conference on Intelligent Transpor-
tation Systems, Oakland, CA, 2001, pp. 918923.
[22] DRAFT SAE J2735 Dedicated Short Range Communications (DSRC) Message Set Diction-
ary Rev 29, SAE International, http://www.itsware.net/ITSschemas/DSRC/.
[23] http://www.calm.hu.
7.6 Potential Applications in an Autonomous Vehicle 245
[24] Shladover, S. E., et al., "Automated Vehicle Control Development in the PATH Program," IEEE Transactions on Vehicular Technology, Vol. 40, 1991, pp. 114–130.
[25] Swaroop, D., and K. R. Rajagopal, "Intelligent Cruise Control Systems and Traffic Flow Stability," Transportation Research Part C: Emerging Technologies, Vol. 7, 1999, pp. 329–352.
[26] Rajamani, R., and C. Zhu, "Semi-Autonomous Adaptive Cruise Control Systems," Proc. Conf. on American Control, San Diego, CA, 1999, pp. 1491–1495.
[27] Rajamani, R., and C. Zhu, "Semi-Autonomous Adaptive Cruise Control Systems," IEEE Transactions on Vehicular Technology, Vol. 51, 2002, pp. 1186–1192.
[28] Zhou, J., and H. Peng, "Range Policy of Adaptive Cruise Control Vehicles for Improved Flow Stability and String Stability," IEEE Transactions on Intelligent Transportation Systems, Vol. 6, 2005, pp. 229–237.
[29] Xu, Q., et al., "Vehicle-to-Vehicle Safety Messaging in DSRC," Proc. First ACM Workshop on Vehicular Ad Hoc Networks (VANET 2004), Philadelphia, PA, 2004, pp. 19–28.
[30] "Vehicle Safety Communications Project Task 3 Final Report: Identify Intelligent Vehicle Safety Applications Enabled by DSRC," U.S. DOT HS 809 859 (NHTSA), March 2005, http://www.nhtsa.gov/DOT/NHTSA/NRD/Multimedia/PDFs/Crash%20Avoidance/2005/CAMP3scr.pdf.
[31] Sakaguchi, T., A. Uno, and S. Tsugawa, "Inter-Vehicle Communications for Merging Control," Proc. IEEE Conf. on Vehicle Electronics, Piscataway, NJ, 1999, pp. 365–370.
[32] Uno, A., T. Sakaguchi, and S. Tsugawa, "A Merging Control Algorithm Based on Inter-Vehicle Communication," Proc. IEEE Conf. on Intelligent Transportation Systems, Tokyo, Japan, 1999, pp. 783–787.
[33] Fenton, R. E., and P. M. Chu, "On Vehicle Automatic Longitudinal Control," Transportation Science, Vol. 11, 1977, pp. 73–91.
[34] Fenton, R. E., "IVHS/AHS: Driving into the Future," IEEE Control Systems Magazine, Vol. 14, 1994, pp. 13–20.
[35] Ioannou, P. A., and M. Stefanovic, "Evaluation of ACC Vehicles in Mixed Traffic: Lane Change Effects and Sensitivity Analysis," IEEE Transactions on Intelligent Transportation Systems, Vol. 6, 2005, pp. 79–89.
[36] Drew, D. R., Traffic Flow Theory and Control, New York: McGraw-Hill, 1968.
[37] Takasaki, G. M., and R. E. Fenton, "On the Identification of Vehicle Longitudinal Dynamics," IEEE Transactions on Automatic Control, Vol. 22, 1977, pp. 610–615.
[38] Acarman, T., Y. Liu, and Ü. Özgüner, "Intelligent Cruise Control Stop and Go with and Without Communication," Proc. Conf. on American Control, Minneapolis, MN, 2006, pp. 4356–4361.
Selected Bibliography
ASTM E2158-01, Standard Specification for Dedicated Short Range Communication (DSRC) Physical Layer Using Microwave in the 902 to 928 MHz Band.
ETSI ES 202 663 (V1.1.1), Intelligent Transport Systems (ITS); European Profile Standard for the Physical and Medium Access Control Layer of Intelligent Transport Systems Operating in the 5 GHz Frequency Band, 2010.
ETSI EN 301 893 (V1.5.1), Broadband Radio Access Networks (BRAN); 5 GHz High Performance RLAN; Harmonized EN Covering the Essential Requirements of Article 3.2 of the R&TTE Directive.
246 Vehicle-to-Vehicle
and Vehicle-to-Infrastructure Communication
ETSI EN 302 571 (V1.1.1), Intelligent Transport Systems (ITS); Radiocommunications Equipment Operating in the 5 855 MHz to 5 925 MHz Frequency Band; Harmonized EN Covering the Essential Requirements of Article 3.2 of the R&TTE Directive.
ETSI EN 302 665 (V1.1.1), Intelligent Transport Systems (ITS); Communications Architecture, 2010.
ETSI TS 102 665 (V1.1.1), Intelligent Transport Systems (ITS); Vehicular Communications; Architecture.
ETSI TS 102 687 (V1.1.1), Intelligent Transport Systems (ITS); Transmitter Power Control Mechanism for Intelligent Transport Systems Operating in the 5 GHz Range.
The Institute of Electrical and Electronics Engineers (IEEE), Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, http://standards.ieee.org, ANSI/IEEE Std. 802.11, 1999 (also known as ISO/IEC 8802-11:1999(E), 2007).
The Institute of Electrical and Electronics Engineers (IEEE), IEEE 802.11e/D4.4, Supplement to Part 11: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications: Medium Access Control (MAC) Enhancements for Quality of Service (QoS), June 2003.
IEEE P802.11k: IEEE Standard for Information Technology – Telecommunications and Information Exchange Between Systems – Local and Metropolitan Area Networks – Specific Requirements; Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications; Amendment 1: Radio Resource Measurement of Wireless LANs, 2008.
IEEE P802.11p/D8.0: Draft Standard for Information Technology – Telecommunications and Information Exchange Between Systems – Local and Metropolitan Area Networks – Specific Requirements; Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications; Amendment 7: Wireless Access in Vehicular Environments, 2009.
IEEE Standard for Information Technology – Telecommunications and Information Exchange Between Systems – Local and Metropolitan Area Networks – Specific Requirements; Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, IEEE Std. 802.11g-2003 (Amendment to IEEE Std. 802.11, 1999 Edn. (Reaff 2003), as amended by IEEE Stds. 802.11a-1999, 802.11b-1999, 802.11b-1999/Cor 1-2001, and 802.11d-2001), 2003.
IEEE Trial-Use Standard for Wireless Access in Vehicular Environments (WAVE) – Resource Manager, IEEE Std. 1609.1-2006, 2006.
IEEE Trial-Use Standard for Wireless Access in Vehicular Environments – Security Services for Applications and Management Messages, IEEE Std. 1609.2-2006, 2006.
IEEE Trial-Use Standard for Wireless Access in Vehicular Environments (WAVE) – Multi-Channel Operation, IEEE Std. 1609.4-2006, 2006.
IEEE Trial-Use Standard for Wireless Access in Vehicular Environments (WAVE) – Networking Services, IEEE Std. 1609.3-2007, April 20, 2007.
ISO/IEC 7498-1, Information Technology – Open Systems Interconnection – Basic Reference Model: The Basic Model, 1994.
CHAPTER 8
Conclusions
In this concluding chapter of the book we want to mention two problems in the development of autonomous vehicles. We picked these two because they are still open for research and worth investigating, and because we view progress on them as an immediate contribution to fielded systems.
[Figure: The sensor fault controller scheme (SFCS) tiers: physical sensors feed the sensor tier, which passes data to the data-processing tier (data validation, fault detection, data fusion) and on to the control unit.]
Physical sensor data is passed through initial filtering and conditioning algorithms. These processes
look for outlying data points using rule bases. This filtered data is then transmitted
to the data processing tier where the data validation, fault detection and isolation,
and data fusion algorithms are located. The sensor data is packaged into logical
sensor packets for transmission to the control and computational tier at this level
as well. The highest tier in the SFCS framework, the control and computational tier, is the terminal point for data computation. This tier hosts the existing intelligent vehicle (IV) system control algorithms as well as the fault-tolerance decision module. Further analysis of the four tiers, their associated modules, and the specific data transmission protocols is given in [1, 2].
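The tiered data flow described above can be sketched as a simple pipeline. The rule, validation, and fusion callables below are placeholders standing in for the book's algorithms, not implementations of them.

```python
def sfcs_pipeline(raw, rules, validate, fuse):
    """Illustrative data flow through the SFCS tiers: rule-based
    outlier filtering at the sensor tier, validation at the
    data-processing tier, then fusion into a logical sensor packet
    handed to the control and computational tier."""
    filtered = [x for x in raw if all(rule(x) for rule in rules)]   # sensor tier
    validated = [x for x in filtered if validate(x)]                # processing tier
    return fuse(validated)                                          # logical sensor packet
```

For instance, filtering negative readings, rejecting values beyond a plausibility bound, and averaging the survivors reduces a raw stream such as [-5, 10, 20, 500] to a single fused value of 15.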
Certain guidelines have to be adopted to specify what is meant by ACC and LK for this approach, and likewise to dictate which sensor data is of interest and in what regions. ACC is concerned with the detection of all obstacles in the forward path of motion of the home vehicle, with relative speeds of ±10 m/s and within a look-ahead distance of 10 m to 100 m. The main specification for the LK system is the acceptable offset distance: how far the vehicle is allowed to drift from the center of the lane. The lateral offset of the vehicle should be regulated to zero; however, given the resolution of the lateral sensing devices, a 5-cm offset distance could be acceptable for this research. The offset rate is also of interest in the validation of sensor measurements and for control purposes.
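A minimal sketch of how these specifications translate into sensor-data gating; reading the relative-speed bound as symmetric (±10 m/s) is an assumption, and the function names are illustrative.

```python
def valid_acc_target(range_m, relative_speed):
    """Gate a forward detection to the ACC region of interest:
    10-100 m look-ahead, relative speed within +/-10 m/s
    (the symmetric bound is an assumed reading of the spec)."""
    return 10.0 <= range_m <= 100.0 and abs(relative_speed) <= 10.0

def lk_offset_ok(lateral_offset_m, tolerance=0.05):
    """Lane keeping: offsets within the 5-cm sensing resolution
    count as centered."""
    return abs(lateral_offset_m) <= tolerance
```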
We provide SFCS as an example of a possible approach. The readers are en-
couraged to develop other possible approaches [3].
Human driver models are also important for driver assistance systems design and development. The cognitive human driver model has attracted researchers' attention for several years. The essential quality of the cognitive human driver model is its focus on the human driver's psychological activities during driving. Cognitive models, in other words, can help to develop an understanding of driver behavior. The cognitive simulation model of the driver (COSMODRIVE) was developed at the French Institute for Transportation Research [7]. PATH researchers extended and organized the COSMODRIVE framework for the purpose of driver modeling in their SmartAHS [8–11]. The model allowed simultaneous simulation of vehicles controlled by drivers and by semiautomated systems for comparison. A driver's knowledge database and the cognitive process underlying the driving activity contribute to the cognitive approaches.
We first present a general architecture of the human driver model as shown in
Figure 8.2. This model is based on the structure of COSMODRIVE. The model
consists of seven modules, which can be divided into two groups: the external and internal views of the driver. The environment module forms the external group, while all other modules are in the internal group.
Additional sensors, such as a camera or LIDAR, may also provide the driver with necessary data on the environment. Once a driver-assistance system is available, its assistance messages also serve as a complement to the environment information.
Perception module: The perception module represents visual and audio
sensing. The data generated by the perception module includes estimation
of velocity, direction, distance between vehicles, and so forth in range and
angle. In a microscopic traffic simulator, when considering the changes of
visibility for the drivers, we can simply adjust the parameters of range and
angle according to the situation in the perception module. In the real world,
range and angle are based on the driving environment, for example, blizzard
weather leads to short range and small angle.
Task-planning module: The task-planning module provides the decision-making module with information on which direction the vehicle is going.
Driver characteristics module: The essential function of the driver characteristics module is to predict a human driver's psychological activities based on his/her knowledge, which contains both driving-knowledge-based and traffic-rule-based information, and his/her driving skill, which indicates his/her level of driving ability (novice/expert). It changes as the subject driver changes.
Decision-making module: The decision-making module is the most impor-
tant part of the driver model. It acts as a higher-level controller for the ve-
hicle. The decision is made based on the perceptive information, the itinerary from the task-planning module, the driver's own characteristics, and the vehicle's current state.
Implementation module: The implementation module is responsible for the
two-dimensional control of the vehicle based on the information it receives
from the decision-making module.
Emergency management module: The emergency management module deals
with unexpected/irregular emergencies, such as another vehicle's traffic violation, and with obstacle avoidance.
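One illustrative way to wire the modules above together in a single update step; all function and field names here are hypothetical, and the gap/steering logic is a toy stand-in for the actual decision-making, not the model from [7–11].

```python
def driver_model_step(percept, itinerary, skill, emergency=None):
    """One update of a COSMODRIVE-style pipeline: perception output,
    the itinerary from task planning, and a skill level in [0, 1]
    from driver characteristics feed decision making, whose output
    the implementation module would turn into actuation. The
    emergency-management module overrides normal decisions."""
    if emergency is not None:                       # emergency management
        return {"steer": emergency.get("steer", 0.0), "brake": 1.0}
    gap = percept["gap_m"]                          # from the perception module
    desired_gap = 30.0 - 10.0 * skill               # experts accept shorter gaps (assumed)
    brake = 1.0 if gap < desired_gap else 0.0       # decision making
    steer = {"left": -0.1, "straight": 0.0, "right": 0.1}[itinerary]
    return {"steer": steer, "brake": brake}
```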
Figure 8.3 A multilane intersection (with the intersection area and the conflict area marked).
During the year this book was being written and compiled, there were two
interesting demonstrations of advanced technologies:
Figure 8.6 Three trucks demonstrating energy savings due to tight convoying.
8.2 And the Beat Goes On
References
[1] Schneider, S., and . zgner, A Sensor Fault Controller Scheme to Achieve High Mea-
surement Fidelity for Intelligent Vehicles, with Applications to Headway Maintenance,
Proceedings of the Intelligent Transportation Society of America 8th Annual Meeting and
Exposition, Detroit, May 1998, pp. 113.
[2] Schneider, S., and . zgner, A Framework for Data Validation and Fusion, and Fault
Detection and Isolation for Intelligent Vehicle Systems, Proceedings of IV, Stuttgart, Ger-
many, October 1998.
[3] Lee, S. C., Sensor Value Validation Based on Systematic Exploration of the Sensor Redun-
dancy for Fault Diagnosis, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 24,
April 1994, No. 4, pp. 594605.
[4] Liu, Y., and . zgner, Human Driver Model and Driver Decision Making for Intersec-
tion Driving, Proc. IV2007, Istanbul, Turkey, pp. 642647.
[5] Acarman, T., et al., Test-Bed Formation for Human Driver Model Development and Deci-
sion Making, Proc. IEEE ITSC 2007, Seattle, WA, 2007, pp. 934939.
[6] Kurt, A., et al., Hybrid-State Driver/Vehicle Modelling, Estimation and Prediction, Proc.
of 2010 13th International IEEE Conference on Intelligent Transportation Systems (ITSC),
Madeira, Portugal, September 2010, pp. 806811.
[7] Tattegrain-Vest, H., et al., Computational Driver Model in Transport Engineering: COS-
MODRIVE, Journal of the Transportation Research Board, Vol. 1550, 1996, pp. 17.
[8] Delorme, D., and B. Song, Human Driver Model for SMARTAHS, Tech. Rep. UCB-ITS-
PRR-2001-12, California PATH Program, Institute of Transportation Studies, University
of California, Berkeley, April 2001.
[9] Burnham, G., J. Seo, and G. Bekey, Identification of Human Driver Models in Car Follow-
ing, IEEE Transactions on Automatic Control, Vol. AC-19, December 1974, pp.911916.
[10] Song, B., D. Delorme, and J. VanderWerf, Cognitive and Hybrid Model of Human Driver,
2000 IEEE Intelligent Transportation Systems Conference Proceedings, Dearborn, MI, Oc-
tober 2000, pp. 16.
[11] Cody, D., S. Tan, and A. Garcia, Human Driver Model Development, Tech. Rep. UCB-ITS-
PRR-2005-21, California PATH Program, Institute of Transportation Studies, University of
California, Berkeley, June 2005.
Appendix
A single-track vehicle, or bicycle, model is used to represent the simplified lateral dynamics of the vehicle. The model covers two degrees of freedom along the following variables: the lateral velocity and the angular velocity around the vertical axis, which is also called the yaw rate. The steer angle and the longitudinal force on the front tire of the model are the two inputs to the dynamical system.
The local coordinate system is fixed to the sprung-mass CG of the bicycle (single-track) vehicle model to describe the orientation and derive the motion dynamics of the vehicle mass. A global coordinate system fixed to the road is also defined to capture the rotation and displacement of the vehicle model from a stationary point.
The longitudinal force on the front tire is denoted by $T_f/R_f$, where $T_f$ is the traction or braking torque applied to the front axle and $R_f$ is the effective radius of the tire. The steer angle of the front tire is denoted by $\delta$. The forces generated in the lateral direction during turning are attached to the individual front and rear tires, denoted by $F_{yf}$ and $F_{yr}$, respectively. The physical meanings are presented in Table A.1.
The motion dynamics in the lateral direction and the rotational dynamics around the local vertical axis are derived by summing the forces and the moments using the single-track free-body diagram given in Figure A.1. The reference directions of the states, forces, and front steer angle are chosen as illustrated in the top view of the bicycle model. The motion dynamics are derived with respect to these given reference directions.
The two-degree-of-freedom (2DOF) bicycle model is derived to represent the simplified lateral motion dynamics by ignoring all possible tire-modeling complexities and the other possible degrees of freedom about the rotational roll and pitch axes and along the translational vertical and longitudinal axes. To maintain validity of the derived model, the operating point must be in the linear region of the tire force characteristics, and the small-steer-angle and constant-longitudinal-velocity assumptions must hold (i.e., the model may be valid in a driving scenario where the vehicle is driven at constant speed with low steering input):
[Figure A.1 Top view of the single-track (bicycle) model, showing the global axes, the CG, the velocities $u$, $v$, and yaw rate $r$, the axle distances $a$ and $b$, the lateral tire forces $F_{yf}$ and $F_{yr}$, and the front-tire longitudinal force $T_f/R_f$.]
The steer angle is small, so it is approximated as $\sin\delta \approx \delta$ and $\cos\delta \approx 1$.
The longitudinal velocity of the model is constant: $u = u_{\text{constant}}$.
Taking the sum of the forces in the lateral direction, which is denoted by $Y$:

$$m\left(\dot{v} + ur\right) = F_{yf} + F_{yr} + \frac{T_f}{R_f}\delta$$
and summing the moments about the local Z-axis, which is perpendicular to the plane of the model:

$$I_z \dot{r} = aF_{yf} - bF_{yr} + a\frac{T_f}{R_f}\delta$$
A.1 Two-Wheel Vehicle (Bicycle) Model
In the linear and normal operating region of the tire characteristics, the tire force dynamic equations in the lateral direction can be linearized at the steady state and approximated as:

$$F_{yf} = k_f \alpha_f$$
$$F_{yr} = k_r \alpha_r$$
where $\alpha_f$ and $\alpha_r$ denote the front and rear tire slip angles, respectively. The tire slip angle is simply defined as the angular difference between the tire's direction of motion and its orientation. For the determination of the slip angles on the front and rear tires we will be using

$$\alpha_f = \delta - \tan^{-1}\left(\frac{v + ar}{u}\right), \qquad \alpha_r = -\tan^{-1}\left(\frac{v - br}{u}\right)$$

which, under the small-angle assumption, become

$$\alpha_f = \delta - \frac{v + ar}{u}, \qquad \alpha_r = -\frac{v - br}{u}$$
Inserting the lateral forces into the lateral dynamics of the bicycle model, the motion dynamics can be obtained as:

$$\dot{v} = -ur + \frac{1}{m}\left[k_f\left(\delta - \frac{v + ar}{u}\right) + k_r\left(-\frac{v - br}{u}\right) + \frac{T_f}{R_f}\delta\right]$$

$$\dot{r} = \frac{1}{I_z}\left[a k_f\left(\delta - \frac{v + ar}{u}\right) - b k_r\left(-\frac{v - br}{u}\right) + a\frac{T_f}{R_f}\delta\right]$$
From these equations, the linearized and simplified lateral dynamics are:

$$\dot{v} = -\frac{k_f + k_r}{mu}v - \left(u + \frac{a k_f - b k_r}{mu}\right)r + \frac{1}{m}\left(k_f + \frac{T_f}{R_f}\right)\delta$$

$$\dot{r} = -\frac{a k_f - b k_r}{I_z u}v - \frac{a^2 k_f + b^2 k_r}{I_z u}r + \frac{a}{I_z}\left(k_f + \frac{T_f}{R_f}\right)\delta$$
Hence, the linear bicycle model can be written in the state-space form:
$$\dot{x} = Ax + Bu$$

where the state vector is constituted by the lateral velocity and the yaw rate, $x = [v \;\; r]^T$, and the input scalar is the steer angle, $u = \delta$ (here $u$ denotes the generic state-space input, not the longitudinal velocity $u$ appearing in the matrices):

$$\begin{bmatrix}\dot{v}\\ \dot{r}\end{bmatrix} =
\begin{bmatrix}
-\dfrac{k_f + k_r}{mu} & -\left(u + \dfrac{a k_f - b k_r}{mu}\right)\\[2ex]
-\dfrac{a k_f - b k_r}{I_z u} & -\dfrac{a^2 k_f + b^2 k_r}{I_z u}
\end{bmatrix}
\begin{bmatrix} v\\ r \end{bmatrix} +
\begin{bmatrix}
\dfrac{1}{m}\left(k_f + \dfrac{T_f}{R_f}\right)\\[2ex]
\dfrac{a}{I_z}\left(k_f + \dfrac{T_f}{R_f}\right)
\end{bmatrix}\delta$$
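The state-space model above can be exercised directly. The sketch below builds the A and B matrices and integrates the response to a constant steer angle with forward Euler; the numerical parameter values in the usage note are typical passenger-car figures assumed for illustration, not values from the book.

```python
def bicycle_ab(m, Iz, a, b, kf, kr, u, tf_over_rf=0.0):
    """Build the A and B matrices of the linear 2DOF bicycle model,
    with state x = [v, r] (lateral velocity, yaw rate) and the front
    steer angle delta as input."""
    A = [[-(kf + kr) / (m * u), -(u + (a * kf - b * kr) / (m * u))],
         [-(a * kf - b * kr) / (Iz * u),
          -(a * a * kf + b * b * kr) / (Iz * u)]]
    B = [(kf + tf_over_rf) / m, a * (kf + tf_over_rf) / Iz]
    return A, B

def simulate(A, B, delta, dt=0.01, steps=500):
    """Forward-Euler integration of xdot = A x + B delta from rest."""
    v, r = 0.0, 0.0
    for _ in range(steps):
        dv = A[0][0] * v + A[0][1] * r + B[0] * delta
        dr = A[1][0] * v + A[1][1] * r + B[1] * delta
        v, r = v + dt * dv, r + dt * dr
    return v, r
```

With m = 1500 kg, Iz = 2500 kg·m², a = 1.1 m, b = 1.6 m, kf = kr = 60 kN/rad, and u = 20 m/s, a constant steer angle of 0.02 rad settles to a positive steady-state yaw rate of roughly 0.088 rad/s and a small negative lateral velocity, consistent with a stable configuration.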
$F_{xi}$ and $F_{yi}$ for $i = 1, 2, 3, 4$ denote the individual longitudinal and lateral forces between the tire patches and the road, respectively. The distance from the tires to the sprung-mass center of gravity (CG) is denoted by $c$, and the distances from the front and rear axles to the CG are represented by $a$ and $b$, respectively. The steer angle of the front tires is represented by $\delta$, whereas the rear tires are assumed to have a zero steer angle. The summation of the forces in the X-direction yields the acceleration in the longitudinal direction,
$$\dot{u} = vr + \frac{1}{m}\left[\left(F_{x1} + F_{x2}\right)\cos\delta + F_{x3} + F_{x4} - \left(F_{y1} + F_{y2}\right)\sin\delta - A\,u^2\,\mathrm{sign}(u)\right] + g\sin(\theta_T)$$

where $m$ is the total mass of the vehicle including the unsprung masses, $A$ is the aerodynamic drag coefficient, and $\theta_T$ is the terrain (road inclination) angle affecting the longitudinal dynamics.
The acceleration in the lateral direction is derived by summing the forces in the Y-direction,

$$\dot{v} = -ur + \frac{1}{m}\left[\left(F_{y1} + F_{y2}\right)\cos\delta + F_{y3} + F_{y4} + \left(F_{x1} + F_{x2}\right)\sin\delta\right] + g\cos(\theta_T)\sin(\phi_T)$$

where $\phi_T$ is the terrain bank angle, and the yaw dynamics follow from summing the moments about the local Z-axis,

$$\dot{r} = \frac{1}{I_z}\Big[-b\left(F_{y3} + F_{y4}\right) + c\left(F_{x4} - F_{x3}\right) + a\big(\left(F_{x1} + F_{x2}\right)\sin\delta + \left(F_{y1} + F_{y2}\right)\cos\delta\big) + c\big(\left(F_{y2} - F_{y1}\right)\sin\delta + \left(F_{x1} - F_{x2}\right)\cos\delta\big)\Big]$$
where Iz is the moment of inertia of the total mass of the vehicle about the local
Z-axis.
The front view of the 3-D vehicle model is illustrated in Figure A.4. The sprung mass (the body of the vehicle) is connected to the unsprung masses (the front and rear axles and the individual tires) through the rocker arms.
[Figure A.3 Road inclination and terrain angles affecting the vehicle dynamics.]
[Figure A.4 Front view of the vehicle, showing the CG height $h$, the suspension forces $F_{s1,4}$ and $F_{s2,3}$, the rocker-arm distances $d$, $n$, and $k$, and the total lateral force $F_{ytotal}$.]
The individual suspension forces are denoted by $F_{si}$ for $i = 1, 2, 3, 4$, and the suspension mechanisms are drawn as rectangular boxes for simplicity.
The sprung mass CG height of the vehicle versus the ground plane reference is
given by h. The mechanical link between the wheel and the sprung mass is support-
ed by the rocker arm and suspension mechanism. The distance between the contact
point of the suspension rocker and the wheel is denoted by d, whereas the distance
between the contact point of the suspension rocker and the body is denoted by n.
The distance between the contact point of the suspension rocker and the CG of the
vehicle model is denoted by k.
The summation of the forces along the local Z-axis gives the acceleration in the vertical direction, denoted by $\ddot{Z}$,

$$\ddot{Z} = uq - vp + \frac{1}{m_s}\sum_{i=1}^{4}\frac{F_{si}}{R_{ri}} - g\cos(\theta_T)\cos(\phi_T)$$
where $m_s$ is the sprung mass of the vehicle model. The roll dynamics are obtained by summing the moments about the local X-axis,

$$\dot{p} = \frac{1}{I_{xs}}\left[\sum_{i=1}^{4} R_{ri} F_{si} + F_{ytotal}\,h + qr\left(I_{ys} - I_{zs}\right)\right]$$

where $I_{xs}$, $I_{ys}$, and $I_{zs}$ are the moments of inertia of the sprung mass of the vehicle about the local X-, Y-, and Z-axes, respectively, and $R_{ri}$ for $i = 1, 2, 3, 4$ is given in terms of the rocker-arm displacement versus the wheel contact point, the sprung-mass contact point, and the CG of the vehicle model:
A.2 Full Vehicle Model Without Engine Dynamics
$$R_{r1} = R_{r2} = R_{r3} = R_{r4} = k + \left(k - n\right)\frac{d}{d + n}$$
Taking the summation of the moments about the Y-axis gives the pitch dynamics,

$$\dot{q} = \frac{1}{I_{ys}}\left[F_{xtotal}\,h + a\left(\frac{F_{s1}}{R_{r1}} + \frac{F_{s2}}{R_{r2}}\right) - b\left(\frac{F_{s3}}{R_{r3}} + \frac{F_{s4}}{R_{r4}}\right) + pr\left(I_{zs} - I_{xs}\right)\right]$$
where the total longitudinal and lateral forces are

$$F_{xtotal} = \left(F_{x1} + F_{x2}\right)\cos\delta + F_{x3} + F_{x4} - \left(F_{y1} + F_{y2}\right)\sin\delta$$
$$F_{ytotal} = \left(F_{y1} + F_{y2}\right)\cos\delta + F_{y3} + F_{y4} + \left(F_{x1} + F_{x2}\right)\sin\delta$$

The individual suspension forces are modeled as a spring and a damper in parallel,

$$F_{si} = K_{si}\left(Z_{ui} - Z_{si} + L_{si}\right) + C_{si}\left(\dot{Z}_{ui} - \dot{Z}_{si}\right)$$

for $i = 1, 2, 3, 4$, where $Z_{si}$ and $Z_{ui}$ are the height of the $i$th corner and the vertical height of the $i$th unsprung mass above the ground reference, respectively, $L_{si}$ is the unweighted length of the $i$th suspension actuator, $K_{si}$ is the spring constant, and $C_{si}$ is the damper coefficient of the $i$th suspension actuator.
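The suspension-force equation translates directly into code; the variable names below are assumptions chosen for illustration.

```python
def suspension_force(z_u, z_s, zdot_u, zdot_s, k_s, c_s, l_s):
    """F_si = K_si (Z_ui - Z_si + L_si) + C_si (Zdot_ui - Zdot_si):
    spring force on the suspension deflection plus damper force on
    the relative vertical velocity of the corner."""
    return k_s * (z_u - z_s + l_s) + c_s * (zdot_u - zdot_s)
```

At the static equilibrium deflection the spring term vanishes, and any relative vertical motion adds a proportional damper force on top of the spring force.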
[Figure A.5 Suspension and tire models: the suspension spring-damper ($K_{si}$, $C_{si}$) acts between the sprung-mass corner at $Z_{si}$ and the unsprung mass $m_{ui}$ at $Z_{ui}$, and the tire spring-damper ($K_{ui}$, $C_{ui}$) acts between the unsprung mass and the ground.]
Similarly, the unsprung masses are modeled as a spring and a damper in parallel, and $Z_{ui}$ satisfies the following differential equation:

$$m_{ui}\ddot{Z}_{ui} = K_{ui}\left(R_{fi} - Z_{ui}\right) - C_{ui}\dot{Z}_{ui} - F_{wi} - g m_{ui}$$

where $R_{fi}$ is the radius of the $i$th unloaded tire, $K_{ui}$ and $C_{ui}$ are the spring and damper constants of the $i$th tire model, and $m_{ui}$ is the unsprung mass.
The corner heights of sprung masses need to be calculated in terms of the
known quantities. The heights of each corner at any time instant can be found by
adding the displacements due to roll and pitch to the initial height as follows:
where Zs is the vertical displacement of the CG and Zsiss is the steady-state height
of the ith corner of the related sprung mass.
The individual tire forces are computed from the combined-slip tire force model,

$$F_{xi} = k_{xi}\frac{s_i}{1 + s_i}f(\lambda_i)$$
$$F_{yi} = k_{yi}\frac{\tan(\alpha_i)}{1 + s_i}f(\lambda_i)$$

where $k_{xi}$ and $k_{yi}$ are the tire longitudinal stiffness and lateral cornering stiffness, respectively. The variable $\lambda_i$ is expressed in terms of the individual slip ratio $s_i$, the slip angle $\alpha_i$, the tire-road friction coefficient $\mu$, and the vertical force on the tire $F_{zi}$, as follows:

$$\lambda_i = \frac{\mu F_{zi}\left(1 + s_i\right)}{2\sqrt{\left(k_{xi} s_i\right)^2 + \left(k_{yi}\tan(\alpha_i)\right)^2}}$$

$$f(\lambda_i) = \begin{cases} \left(2 - \lambda_i\right)\lambda_i & \text{if } \lambda_i < 1 \\ 1 & \text{if } \lambda_i \ge 1 \end{cases}$$
The individual tire slip angles are

$$\alpha_1 = \delta - \tan^{-1}\left(\frac{v + ar}{u + cr}\right) \qquad \alpha_2 = \delta - \tan^{-1}\left(\frac{v + ar}{u - cr}\right)$$
$$\alpha_3 = -\tan^{-1}\left(\frac{v - br}{u - cr}\right) \qquad \alpha_4 = -\tan^{-1}\left(\frac{v - br}{u + cr}\right)$$
and the individual slip ratio is defined as

$$s_i = \begin{cases} \dfrac{R\omega_{wi} - u_{ti}}{R\omega_{wi}} & \text{if } R\omega_{wi} \ge u_{ti} \text{ (during acceleration)} \\[2ex] \dfrac{R\omega_{wi} - u_{ti}}{u_{ti}} & \text{if } R\omega_{wi} < u_{ti} \text{ (during braking)} \end{cases}$$

where $R$ is the tire effective radius, $\omega_{wi}$ is the angular velocity of the $i$th wheel, and $u_{ti}$ denotes the velocity in the rolling direction of the $i$th individual tire.
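The combined tire force model above (an HSRI/Dugoff-style formulation) can be sketched as follows; the function and variable names, and the zero-force treatment of the free-rolling case, are assumptions for illustration.

```python
import math

def hsri_tire_forces(s, alpha, Fz, kx, ky, mu):
    """Combined longitudinal/lateral tire forces: Fx and Fy from the
    slip ratio s, slip angle alpha (rad), vertical load Fz, tire
    stiffnesses kx and ky, and friction coefficient mu."""
    denom = 2.0 * math.sqrt((kx * s) ** 2 + (ky * math.tan(alpha)) ** 2)
    if denom == 0.0:
        return 0.0, 0.0                 # free rolling, no slip: no friction force
    lam = mu * Fz * (1.0 + s) / denom
    f = lam * (2.0 - lam) if lam < 1.0 else 1.0
    Fx = kx * s / (1.0 + s) * f         # longitudinal force
    Fy = ky * math.tan(alpha) / (1.0 + s) * f   # lateral force
    return Fx, Fy
```

For small slips the model is nearly linear in the slip ratio, while for large slips the factor f caps the resultant near the friction limit mu * Fz, which is the saturation behavior plotted in Figures A.6 and A.7.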
The vertical tire forces are given subject to the dynamic weight transfer excited by the pitch and roll motions,

$$F_{z1} = \frac{b}{2l}mg - \frac{m h_{cg}}{2l}\dot{u} + \frac{a m h_{cg}}{2el}\dot{v} + eC_{s1}\dot{p} + eK_{s1}p$$
$$F_{z2} = \frac{b}{2l}mg - \frac{m h_{cg}}{2l}\dot{u} + \frac{a m h_{cg}}{2el}\dot{v} - eC_{s2}\dot{p} - eK_{s2}p$$
$$F_{z3} = \frac{a}{2l}mg + \frac{m h_{cg}}{2l}\dot{u} - \frac{a m h_{cg}}{2el}\dot{v} + eC_{s3}\dot{p} + eK_{s3}p$$
$$F_{z4} = \frac{a}{2l}mg + \frac{m h_{cg}}{2l}\dot{u} - \frac{a m h_{cg}}{2el}\dot{v} - eC_{s4}\dot{p} - eK_{s4}p$$
where $l = a + b$ is the wheelbase, $p$ and $\dot{p}$ denote the roll angle and its time derivative, and $C_{si}$ and $K_{si}$ are the individual suspension damping and stiffness coefficients, respectively. The variables and parameters used in the modeling study are listed in Table A.2.
The nonlinear response of the combined longitudinal and lateral tire force model with respect to changes in the slip ratio and the side-slip angle is simulated in Figures A.6 and A.7, respectively.
Table A.2 List of Physical Meanings of the 3-D Nonlinear Vehicle Model
u  Longitudinal velocity of the vehicle
v  Lateral velocity of the vehicle
Z  Vertical velocity of the vehicle
r  Yaw rate of the vehicle
Ümit Özgüner received a Ph.D. from the University of Illinois and held positions at IBM, the University of Toronto, and Istanbul Technical University. He is a professor of electrical and computer engineering and holds the TRC Inc. chair on intelligent transportation systems (ITS) at The Ohio State University. He is a fellow of the IEEE. Professor Özgüner's areas of research interest are ITS, decentralized control, and autonomy in large systems, and he is the author of over 400 publications. He was the first president of the IEEE ITS Council in 1999 as it transformed into the IEEE ITS Society, and he has also been the ITS Society vice president for conferences. Professor Özgüner has also served the IEEE Control Systems Society in various positions. He participated in the organization of many conferences and was the program chair of the first IEEE ITS Conference and the general chair of the IEEE Control Systems Society 2002 CDC, ITS Society IV 2003, and ICVES 2008. Teams coordinated by Professor Özgüner participated successfully in the 1997 Automated Highway System Technology Demonstration, the DARPA 2004 and 2005 Grand Challenges, and the 2007 Urban Challenge.
About the Authors
Keith Redmill received a B.S.E.E. and a B.A. in mathematics from Duke University
in 1989 and an M.S. and a Ph.D. from The Ohio State University in 1991 and
1998, respectively. He has been with the Department of Electrical and Computer
Engineering since 1999, initially as a senior research associate and most recently as
a research scientist. Dr. Redmill has led or participated in a wide range of interdisci-
plinary projects including a series of self-driving automated passenger vehicles, au-
tonomous ground and aerial robotic vehicle development and experiments, sensing
and sensor fusion development projects involving computer vision, LADAR, radar,
GPS, IMU, and other sensing modalities, wireless vehicle-to-vehicle and vehicle-
to-infrastructure communication simulation and application development, traffic
monitoring and data collection, intelligent vehicle control and safety systems for
vehicles ranging from small ATVs to heavy duty commercial trucks, remote sensing
programs, embedded and electromechanical system design and prototyping, and
process control development. His areas of technical interest include control and
systems theory, intelligent transportation systems, autonomous vehicle and robotic
systems, real-time embedded systems, GPS and inertial positioning and navigation,
transit and traffic monitoring, image processing, wireless digital communication for
vehicles, sensor technologies, decentralized multiagent hierarchical and hybrid sys-
tems, and numerical analysis and scientific computing. He has extensive software
development, electronics development and testing, and embedded systems deploy-
ment experience. He is a member of the IEEE and SIAM.
Index
1609, 225, 227, 228, 229, 230, 246 Bus, 1, 5, 44, 49, 51, 52, 53, 55, 70, 118, 218,
802.11, 218, 220, 225, 226, 227, 228, 229, 231, 252
246
C
802.11p, 218, 220, 225, 226, 227, 228, 229,
231, 246 Camera, 4, 6, 8, 10, 47, 69, 85, 88, 89, 90, 91,
92, 93, 107, 108, 173, 194, 195, 251,
A 252
ABS, 1, 69, 71, 150, 158, 161, 162, 163, 165, Center of gravity (CG), 19, 179, 257, 258, 260,
166, 167 261, 262, 264, 266
Accelerometer, 80, 81, 84, 118 Challenge, 5, 7, 8, 11, 37, 38, 39, 41, 55, 56,
Ackerman steering, 183 57, 61, 64, 106, 110, 117, 136, 147,
Active, 3, 5, 10, 43, 69, 70, 78, 85, 95, 132, 151, 167, 184, 190, 199, 203, 206,
182, 219, 221, 223, 231, 252 208, 218, 253
Advanced cruise control, 1, 18, 44, 85, 158, Channel, 74, 93, 224, 225, 227, 228, 229, 230,
247, 252 231
AHS, 3, 37, 55, 56, 57, 90, 250 Classification, 83, 117, 128, 129, 130, 217, 220
Architecture, 11, 37, 40, 41, 104, 105, 106, Closed loop, 25, 28, 133, 151, 153, 167, 247
117, 151, 198, 225, 231, 250, 251 Clothoid, 32
Assistance, 5, 8, 10, 69, 181, 221, 250, 251 Cluster, 77, 78, 105, 106, 117, 120, 122, 123,
Authority, 34, 182 124, 125, 126, 127, 128, 129, 130,
Automate, 2, 3, 4, 5, 6, 11, 32, 49, 55, 57, 59, 132, 133, 134, 142, 147, 176
61, 67, 88, 90, 93, 106, 147, 149, 150, Collision, 1, 4, 5, 10, 13, 15, 18, 19, 41, 49, 52,
151, 167, 232, 236, 241, 242, 247 54, 55, 83, 88, 99, 104, 134, 147, 150,
Autonomy, 1, 4, 6, 8, 13, 37, 69, 104, 158, 193 160, 176, 219, 221, 222, 223, 231,
Awareness, 104, 117, 133, 136, 223, 232 232, 233, 234, 236, 238, 241, 242,
243, 244, 252
B
Communication, 3, 5, 6, 8, 10, 11, 47, 69, 70,
Bandwidth, 70, 115, 217, 225, 227, 237 71, 72, 77, 94, 104, 112, 115, 149,
Bayesian, 4 150, 151, 193, 217, 218, 219, 220,
Behavior, 1, 3, 8, 13, 19, 38, 39, 40, 41, 42, 56, 221, 222, 223, 224, 225, 226, 227,
58, 61, 67, 69, 101, 117, 119, 133, 231, 232, 234, 236, 237, 238, 239,
134, 180, 193, 196, 199, 214, 221, 240, 242, 243
232, 249, 250 Compass, 81, 84, 108
Bézier, 33 Congestion, 4, 149, 219, 223,
Bicycle model, 178, 257, 259, Constraint, 13, 31, 32, 38, 56, 57, 59, 106,
Brake, 1, 2, 7, 10, 13, 14, 16, 19, 20, 119, 166, 172, 184, 199, 237
22, 41, 70, 103, 149, 150, 151,
153, 161, 162, 165, 166, 260
271
Control, 1, 2, 3, 4, 5, 7, 8, 11, 13, 17, 18, 22, 28, 31, 33, 38, 40, 41, 42, 43, 44, 47, 49, 51, 55, 57, 59, 61, 69, 70, 71, 74, 78, 85, 89, 90, 94, 95, 101, 103, 104, 109, 115, 117, 119, 129, 130, 133, 134, 135, 136, 138, 146, 149, 150, 151, 157, 158, 161, 162, 165, 166, 167, 168, 170, 171, 172, 176, 179, 181, 182, 184, 193, 196, 208, 214, 215, 217, 219, 223, 228, 230, 231, 232, 233, 234, 237, 238, 239, 241, 242, 247, 249, 251, 252
Convoy, 1, 158, 252
Cooperative, 4, 6, 8, 69, 93, 94, 150, 151, 193, 219, 223, 232, 238, 243, 253
Coordinate, 1, 71, 74, 75, 80, 81, 99, 110, 112, 115, 118, 120, 122, 123, 124, 132, 183, 189, 198, 206, 209, 257, 260
Coordinate transform, 112, 123
Cost, 42, 74, 80, 85, 88, 90, 104, 106, 129, 134, 135, 196, 197, 199, 200, 201, 202, 203, 204, 208, 211, 212, 213
Cost function, 199, 200, 203, 204
Cruise control, 1, 8, 13, 18, 44, 51, 85, 149, 150, 151, 158, 166, 232, 233, 241, 247, 252
Curvature, 4, 26, 27, 32, 56, 90, 136, 137, 167, 168, 172, 174, 178, 179, 180, 181
Curve, 28, 33, 42, 90, 134, 138, 160, 167, 169, 173, 174, 193, 219

D
DARPA, 7, 8, 37, 38, 56, 64, 106, 117, 136, 147, 184, 185, 190, 203, 206
Data structure, 106, 136, 214
Delay, 16, 20, 22, 75, 162, 179, 212, 213, 217, 221, 223, 232, 234, 238, 241
DGPS, 6, 72, 78
Digital elevation model (DEM), 194, 200
Dijkstra, 196, 197, 201
Dimension, 88, 91, 92, 99, 154, 180, 199, 214, 232
Disparity, 92, 93, 108
Displacement, 14, 15, 17, 18, 19, 20, 22, 24, 26, 27, 30, 31, 93, 181, 190, 194, 257, 262, 264
Distance, 3, 8, 13, 15, 17, 19, 22, 28, 33, 43, 44, 46, 47, 54, 56, 59, 61, 64, 73, 74, 75, 77, 87, 88, 90, 91, 93, 99, 105, 109, 124, 125, 126, 127, 129, 138, 142, 144, 145, 146, 149, 150, 151, 158, 159, 160, 161, 167, 169, 171, 172, 173, 176, 178, 179, 181, 184, 199, 200, 201, 202, 203, 204, 210, 212, 213, 218, 220, 225, 233, 234, 236, 237, 238, 240, 242, 243, 244, 249, 251, 253, 258, 261, 262
Drive-by-wire, 2, 7, 43
Driver behavior, 232, 249, 250
Driver model, 150, 232, 242, 249, 250, 251
DSRC, 218, 219, 220, 223, 225, 231
Dubins, 168
Dynamics, 11, 13, 14, 34, 49, 51, 60, 74, 97, 150, 151, 153, 154, 157, 161, 162, 165, 166, 168, 169, 172, 173, 178, 181, 182, 199, 233, 257, 259, 260, 261, 262, 263

E
Edge, 31, 32, 88, 89, 105, 126, 140, 142, 145, 196, 197, 201, 204, 206, 207, 208, 209, 210, 211, 212, 213
Ego, 117, 119, 125, 127, 132, 133
Electronic toll, 223, 225
Elevation, 93, 104, 181, 193, 194, 195, 199, 200, 203, 204
Energy, 4, 70, 93, 199, 219, 223, 253
Environment, 1, 8, 18, 37, 41, 44, 57, 64, 69, 70, 84, 85, 101, 103, 106, 108, 109, 117, 120, 126, 130, 134, 173, 176, 199, 208, 220, 227, 231, 247, 250, 251
Estimation, 69, 95, 97, 108, 109, 162, 182, 208, 251

F
Fault, 101, 230, 247, 249
Fault tolerance, 247, 249
Features, 71, 78, 92, 93, 106, 108, 117, 122, 123, 126, 128, 130, 133, 143, 144, 149, 195, 199, 203, 204
Feedforward, 162, 168, 170, 172, 233
Fiber optic gyroscope, 80
Filter, 41, 60, 75, 80, 90, 95, 96, 97, 98, 99, 101, 103, 109, 112, 117, 126, 129, 130, 132, 150, 162, 249
Finite state machine, 38, 43, 44, 47, 48, 49, 52, 59, 61, 117, 135, 162, 251
Follower, 13, 19, 22, 158, 159, 160, 161, 233, 239
Friction, 13, 14, 15, 19, 22, 97, 150, 161, 162, 163, 166, 182, 264
Function, 3, 9, 39, 44, 47, 48, 60, 69, 74, 77, 84, 85, 88, 89, 93, 97, 126, 138, 150, 152, 162, 163, 164, 165, 167, 176, 179, 199, 200, 201, 202, 203, 204, 210, 213, 251, 264
Fusion, 11, 40, 41, 69, 70, 84, 95, 101, 103, 104, 105, 106, 109, 110, 112, 114, 115, 117, 120, 122, 123, 124, 134, 135, 176, 193, 223, 232, 247, 249

G
Geographic, 78, 193, 194, 196, 200, 206, 207
GIS, 137, 194
Goal, 3, 5, 34, 39, 65, 106, 133, 136, 160, 161, 167, 176, 184, 199, 201, 202, 206
GPS, 1, 2, 4, 6, 8, 37, 38, 40, 41, 56, 71, 72, 73, 74, 75, 77, 78, 79, 80, 95, 103, 108, 110, 117, 118, 120, 124, 173, 193, 209, 220, 222, 224, 228
Graph, 44, 106, 193, 195, 196, 197, 201, 204, 206, 207, 208, 209, 210, 211, 212, 214
Grid, 41, 105, 106, 134, 136, 193, 196, 200, 201, 202, 203, 204, 206
Guidance, 5
Gyroscope, 80, 82, 108, 109, 110, 118

H
Headway, 13, 18, 19, 20, 21, 22, 23, 57, 59, 149, 150, 151, 159, 234, 235, 239, 240
Heuristic, 129, 197, 198, 201, 202, 203, 213
Hierarchical, 3, 214
Hierarchy, 34, 38, 42, 64
High level, 11, 38, 39, 41, 42, 43, 103, 117, 134, 135, 136, 146, 214, 247
Highway, 3, 4, 5, 6, 8, 9, 13, 31, 32, 37, 41, 43, 55, 56, 57, 58, 78, 90, 103, 105, 149, 150, 151, 167, 180, 219, 221, 224, 225, 232, 234, 236, 238, 247
Hybrid, 11, 34, 37, 42, 43, 44, 46, 49, 59, 61, 150, 156, 157, 162

I
Image processing, 85, 88, 90, 91, 97, 107, 108, 119
IMU, 40, 41, 80, 83, 103, 105, 118
Inertial, 4, 8, 40, 80, 103, 165
Infrastructure, 2, 4, 5, 6, 11, 26, 31, 32, 37, 69, 72, 93, 94, 104, 167, 217, 218, 219, 220, 221, 223, 232
Intelligence, 3
Intelligent transportation systems, 219, 231
Intersection, 1, 5, 8, 34, 41, 43, 65, 73, 95, 99, 105, 117, 120, 130, 133, 134, 135, 136, 139, 140, 141, 146, 171, 196, 206, 207, 212, 213, 214, 215, 217, 218, 221, 222, 223, 231, 232, 249, 251, 252, 253
Intervehicle, 6, 72, 150, 151, 225, 232, 234, 236, 237, 238, 239, 240, 242, 243
ITS, 219, 231

J
J2735, 227

K
Kalman filter, 41, 80, 95, 96, 97, 98, 99, 101, 103, 109, 117, 129, 150, 162
Kinematic, 4, 71, 72, 80, 168, 210

L
Lane change, 1, 4, 8, 24, 25, 26, 34, 45, 47, 48, 49, 58, 59, 60, 61, 62, 64, 69, 85, 181, 207, 240
Lane departure warning (LDW), 9, 88, 252
Lane keeping, 10, 29, 59, 150, 179, 181, 247
Lane marker, 6, 37, 45, 47, 88, 89, 90, 91, 119, 167, 206
Lane tracking, 1, 3, 28, 34, 93
Latency, 217, 218, 220
Lateral, 4, 5, 13, 17, 22, 24, 25, 26, 27, 29, 30, 31, 40, 43, 49, 59, 60, 61, 70, 80, 89, 90, 93, 94, 101, 112, 119, 152, 158, 167, 178, 179, 180, 181, 182, 233, 249, 257, 258, 259, 260, 261, 263, 264, 266, 267
Layer, 60, 88, 151, 194, 204, 218, 220, 223, 227, 228, 229, 230, 231, 232
Leader, 13, 158, 159, 160, 161, 233, 234, 239, 242, 243
LIDAR, 2, 8, 37, 84, 85, 86, 87, 88, 89, 97, 99, 100, 106, 107, 109, 110, 111, 112, 114, 115, 117, 119, 120, 123, 124, 125, 126, 132, 143, 191, 251
Linear, 32, 80, 95, 97, 99, 108, 109, 110, 112, 118, 122, 123, 126, 127, 128, 129, 130, 133, 136, 137, 138, 150, 152, 154, 156, 161, 162, 165, 166, 178, 203, 232, 257, 259
Linear assignment, 129
Localization, 6, 38, 40, 56, 88, 101, 103, 104, 108
Longitudinal, 3, 8, 13, 15, 17, 18, 22, 24, 25, 26, 27, 29, 30, 31, 40, 43, 60, 61, 70, 72, 80, 90, 101, 149, 150, 151, 152, 153, 154, 156, 157, 158, 159, 162, 163, 164, 178, 179, 181, 182, 232, 234, 236, 243, 257, 258, 260, 261, 263, 264, 266, 267
Low level, 3, 40, 42, 43, 60, 101, 103

M
Magnetometer, 2, 81, 84
Magnet, 4, 5, 6, 32, 81, 83, 84, 93, 167
Map database, 39, 41, 69, 117, 134, 136, 139, 140, 193, 194, 199, 200, 201, 206, 207, 213, 222
Maps, 2, 11, 56, 89, 106, 193, 195, 196, 202, 223, 239
Marker, 4, 5, 6, 31, 32, 37, 45, 47, 69, 88, 89, 90, 91, 95, 119, 167, 206
MEMS, 80, 81, 82, 83
Merge, 56, 59, 125, 134, 147, 238, 241, 242
Merging, 6, 8, 105, 124, 218, 221, 232, 238, 239, 240, 241

N
Navigation, 8, 38, 40, 56, 69, 72, 75, 101, 103, 104, 134, 193, 194, 199, 206, 223
Network, 3, 4, 5, 39, 70, 71, 103, 120, 135, 136, 197, 206, 207, 208, 213, 214, 215, 217, 218, 220, 223, 225, 226, 227, 229, 230, 231
Node, 44, 196, 197, 201, 207, 208, 209, 210, 211, 212, 213, 220, 221
Nonlinear, 43, 75, 83, 97, 99, 151, 154, 161, 162, 165, 166, 179, 180, 181, 182, 258, 260, 264, 266

O
Objective, 13, 60, 103, 117, 147, 176, 210
Obstacle, 1, 2, 4, 6, 8, 13, 15, 17, 18, 19, 38, 41, 44, 45, 46, 47, 53, 56, 61, 62, 64, 71, 85, 86, 88, 91, 95, 105, 106, 107, 109, 112, 116, 134, 136, 140, 141, 143, 144, 145, 146, 149, 150, 176, 177, 182, 184, 191, 196, 198, 199, 206, 212, 221, 233, 234, 235, 242, 243, 244, 247, 249, 251
Obstacle avoidance, 1, 38, 44, 45, 46, 56, 61, 62, 64, 136, 176, 184, 196, 198, 199, 212, 221, 251
Occupancy grid, 41, 134, 136
Off-road, 6, 7, 31, 38, 41, 43, 55, 56, 57, 71, 84, 103, 104, 105, 106, 107, 109, 112, 134, 199, 201, 203, 204, 205
On-road, 84, 90, 103, 109, 117, 134, 136, 193, 195, 206, 208
Optimization, 162, 196, 200, 206, 210

P
Parking, 1, 39, 41, 42, 105, 120, 121, 134, 182, 183, 184, 185, 189, 190, 191, 206
Passive, 32, 69, 70, 88, 167, 263
Path planning, 11, 33, 42, 61, 115, 134, 136, 182, 193, 196, 197, 198, 199, 200, 201, 203, 204, 205, 206
Pedestrian, 8, 95, 103
Perception, 1, 4, 13, 133, 221, 232, 249, 251
Perspective, 70, 89, 91, 195
PID, 33, 43, 150, 169, 172
Pitch, 40, 83, 84, 88, 101, 107, 109, 110, 118, 257, 260, 263, 264, 265, 266
Platoon, 4, 6, 151, 218, 223, 232, 233, 234, 236, 240, 243
Point mass, 13, 14, 15, 19, 22, 24, 26, 27, 28, 97, 151, 152, 233
Pollution, 4, 247
Polynomial, 28, 31, 32, 90
Positioning, 4, 6, 40, 71, 103, 191, 193, 209
Potential field, 176, 196, 206
Pseudorange, 74, 75, 78, 103

R
Radar, 2, 6, 8, 32, 37, 47, 56, 69, 84, 85, 86, 93, 94, 107, 112, 119, 120, 151, 158, 167, 181, 194, 231
Ramp, 6, 238
Range, 19, 54, 70, 74, 79, 83, 84, 85, 86, 87, 88, 90, 93, 101, 105, 106, 107, 110, 119, 120, 123, 124, 126, 130, 150, 176, 179, 204, 218, 219, 220, 223, 224, 225, 226, 227, 229, 230, 231, 232, 234, 237, 238, 239, 241, 242, 251
RANSAC, 89, 122, 126
Raster, 115, 193, 194, 196
React, 10, 19, 39, 58, 166, 211, 221, 232
Real time, 3, 71, 72, 104, 108, 150, 151, 162, 176, 190, 198, 208, 215, 219, 224, 247
Reliability, 88, 95, 217, 218
Roll, 17, 40, 41, 83, 84, 88, 101, 105, 109, 110, 118, 150, 182, 257, 260, 262, 264, 265, 266
RTK, 80

S
Safety, 4, 5, 8, 69, 72, 84, 85, 88, 133, 149, 150, 158, 159, 160, 161, 180, 182, 199, 217, 219, 220, 221, 223, 225, 231, 232, 234, 235, 236, 238, 242, 247, 249, 252
SBAS, 72, 78
Security, 217, 219, 227
Sensing, 1, 3, 4, 8, 25, 31, 32, 37, 39, 41, 70, 78, 81, 84, 85, 93, 94, 103, 104, 105, 106, 107, 108, 119, 133, 150, 167, 176, 181, 232, 234, 238, 241, 242, 243, 249, 251
Sensor, 2, 3, 4, 5, 8, 9, 10, 11, 19, 28, 34, 37, 38, 40, 41, 47, 49, 52, 53, 54, 57, 59, 61, 69, 70, 71, 77, 80, 81, 83, 84, 85, 86, 88, 89, 90, 91, 93, 94, 95, 97, 98, 99, 101, 103, 104, 105, 106, 107, 108, 109, 110, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 128, 130, 131, 132, 133, 134, 135, 147, 150, 158, 167, 176, 178, 179, 181, 193, 196, 199, 200, 218, 220, 223, 232, 238, 247, 249, 251
Sensor fusion, 11, 40, 69, 84, 101, 103, 105, 106, 109, 112, 113, 114, 115, 116, 117, 120, 122, 123, 131, 134, 135, 176, 193, 223, 247
Simulation, 14, 18, 19, 27, 28, 150, 190, 204, 232, 238, 240, 241, 242, 243, 249, 250
Situation, 6, 8, 38, 41, 42, 44, 56, 57, 58, 59, 61, 64, 91, 104, 106, 109, 117, 120, 128, 133, 134, 135, 136, 140, 141, 144, 146, 147, 149, 150, 169, 173, 188, 190, 193, 194, 195, 199, 200, 207, 209, 220, 221, 223, 232, 241, 251
Situational awareness, 117, 133
Slip, 162, 163, 164, 165, 166, 168, 178, 182, 259, 264, 265, 266, 267
Spline, 33, 42, 134, 136, 137, 138, 191
Stability control, 5
Standard, 2, 17, 45, 46, 47, 49, 57, 61, 70, 71, 75, 78, 85, 90, 97, 98, 123, 158, 178, 202, 207, 218, 220, 224, 225, 226, 227, 230, 231
State machine, 34, 38, 41, 43, 44, 45, 47, 48, 49, 50, 52, 54, 55, 57, 59, 61, 63, 64, 66, 67, 117, 135, 162
Steering, 2, 3, 5, 7, 13, 22, 24, 25, 26, 27, 28, 31, 32, 41, 43, 70, 89, 90, 103, 109, 118, 150, 151, 166, 167, 168, 169, 170, 171, 172, 173, 174, 176, 178, 179, 180, 181, 182, 183, 184, 185, 199, 210, 257, 258, 260, 266
Stereo, 4, 37, 85, 88, 91, 92, 93, 107, 108, 112, 114
Stiffness, 179, 258, 264, 266
Stop-and-go, 6, 9, 223, 232, 241, 242, 243
Stripe, 6, 37, 56, 93, 94, 167
Suspension, 260, 261, 262, 263, 266
Vehicle model, 13, 14, 15, 16, 19, 22, 23, 24, 151, 152, 153, 169, 172, 179, 183, 189, 257, 258, 260, 261, 262, 264, 266
Vision, 2, 3, 4, 5, 10, 32, 37, 47, 88, 91, 93, 112, 114, 115, 117, 158, 167, 181
Vision system, 2, 10, 37, 47, 88, 91, 112, 114, 115, 117, 167
Voronoi, 196, 204, 206

W
WAAS, 72, 79
Wave, 78, 85, 218, 220, 225, 227, 228, 229, 230, 231
Wireless, 3, 5, 10, 69, 71, 78, 94, 173, 217, 218, 220, 223, 225, 226, 227, 231, 232
World model, 105, 106
WSMP, 228, 229

Y
Yaw, 33, 43, 70, 79, 81, 83, 84, 110, 115, 123, 168, 178, 180, 181, 182, 183, 184, 189, 257, 258, 260, 261, 266