Handbook of Human Factors
for
Automated, Connected,
and Intelligent Vehicles
Edited by
Donald L. Fisher, William J. Horrey,
John D. Lee, and Michael A. Regan
First edition published 2020
by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742
Reasonable efforts have been made to publish reliable data and information, but the author and
publisher cannot assume responsibility for the validity of all materials or the consequences of
their use. The authors and publishers have attempted to trace the copyright holders of all material
reproduced in this publication and apologize to copyright holders if permission to publish in this
form has not been obtained. If any copyright material has not been acknowledged please write and
let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information
storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.
com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA
01923, 978-750-8400. For works that are not available on CCC please contact mpkbookspermissions@
tandf.co.uk
Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation without intent to infringe.
Typeset in Times
by codeMantra
Contents

Preface
Editors
Contributors

Chapter 11 Driver State Monitoring for Decreased Fitness to Drive
Michael G. Lenné, Trey Roady, and Jonny Kuo

Chapter 17 Automated Vehicle Design for People with Disabilities
Rebecca A. Grier

Chapter 23 Techniques for Making Sense of Behavior in Complex Datasets
Linda Ng Boyle
Preface
and intelligent vehicles how to respond when the automated driving suite reaches
the boundary of its operating envelope. Regulatory agencies and policy makers will
presumably want to understand how best to license drivers to operate automated
vehicles and what guidelines original equipment manufacturers of driver–vehicle
interfaces could use in order for an interface not to be considered a potentially lethal
distraction. Internationally, low- and middle-income countries will need to make
hard decisions about what features of automated, connected, and intelligent vehicles
will save the most lives and still remain affordable.
In summary, the Handbook combines in one volume the information needed by
a diverse community of stakeholders, including human factors researchers, trans-
portation engineers, regulatory agencies, automobile manufacturers, fleet operators,
driving instructors, vulnerable road users, and special populations. The Handbook
will be particularly important for the engineers developing the new technologies.
Technological advances often outpace related, yet fundamental research efforts into
the safety and broader implications of newly developed systems. Technology devel-
opers are just now beginning to realize the importance of human factors in this
burgeoning space, making the Handbook a timely, valuable, and unique
resource for the various individuals who will need to design their systems with peo-
ple in mind and then evaluate those systems in ways that capture the impact of these
systems on safety.
The Handbook is an integrated volume, the chapters clearly referencing one
another. We hope that this helps readers navigate the Handbook. The Handbook con-
tains chapters written by some of the world’s leading experts that provide individuals
with the human factors knowledge that will determine in large measure whether the
potential of automated, connected, and intelligent vehicles is realized or unrealized, not just in developed countries but in low- and middle-income countries
as well. The Handbook addresses four major transportation challenges—crashes,
congestion, carbon emissions, and confinement—from a human factors perspec-
tive in the context of automated, connected, and intelligent vehicles. We hope that the
Handbook will help set the stage for future research in automated, connected, and
intelligent vehicles by defining the space of unexplored questions. Researchers will
not only have a place to turn for answers to questions but also a place to search for
the next generation of questions.
A final comment on the cover of the Handbook is in order. The history of auto-
mated vehicles has its ups and downs, like a road with many twists and turns. But
these diversions do not reflect any change in the hope that we, and the many others
involved in this history, have for such vehicles: vehicles that can address many of the
major challenges facing the world today, namely the economic, psychological, and
personal costs of automobile crashes, congestion, carbon emissions, and lack of access to
transportation. We hope, like the winding roads pictured on the cover (our interpretation
of the cover art), that progress is ever upward over time, towards a future where the
problems, like the roads, disappear.
Editors
Donald L. Fisher is a principal technical advisor at the Volpe National Transportation
Systems Center in Cambridge, MA, a professor emeritus and research professor
in the Department of Mechanical and Industrial Engineering at the University of
Massachusetts (UMass) Amherst, and the director of the Arbella Insurance Human
Performance Laboratory in the College of Engineering. He has published over 250
technical papers, including recent ones in the major journals in transportation, human
factors, and psychology. While at the Volpe Center he has worked extensively across
the modes on research designed to identify the unintended consequences of automation
and, when such are identified, to develop and evaluate countermeasures. Additionally,
he has developed a broad, interdisciplinary approach to understanding and remediat-
ing functional impairments in transportation, including distraction, fatigue, and alco-
hol. While at UMass Amherst, he served as a principal or co-principal investigator
on over 30 million dollars of research and training grants, including awards from the
National Science Foundation, the National Institutes of Health, the National Highway
Traffic Safety Administration, MassDOT, the Arbella Insurance Group Charitable
Foundation, the State Farm Mutual Automobile Insurance Company, and the New
England University Transportation Center. He is a former associate editor of Human
Factors and editor of both the recently published Handbook of Driving Simulation for
Engineering, Medicine and Psychology (2011) and the Handbook of Teen and Novice
Drivers (2016). He currently is a co-chair of the Transportation Research Board (TRB)
Committee on Simulation and Measurement of Vehicle and Operator Performance.
He has chaired or co-chaired a number of TRB workshops and served as a member
of the National Academy of Sciences Human Factors Committee, the TRB Younger
Driver Subcommittee, the joint National Research Council and Institute of Medicine
Committee on the Contributions from the Behavioral and Social Sciences in Reducing
and Preventing Teen Motor Crashes, and the State Farm® Mutual Automobile Insurance
Company and Children’s Hospital of Philadelphia Youthful Driver Initiative. Over the
past 25 years, Dr Fisher has made fundamental contributions to the understanding of
driving, including the identification of those factors that: determine how most safely
to transfer control from an automated vehicle to the driver; increase the crash risk of
novice and older drivers; impact the effectiveness of signs, signals, and pavement mark-
ings; improve the interface to in-vehicle equipment, such as forward collision warning
systems, back over collision warning systems, and music retrieval systems; and influ-
ence drivers’ understanding of advanced parking management systems, advanced trav-
eler information systems, and dynamic message signs. In addition, he has pioneered
the development of both PC-based hazard anticipation training and PC-based atten-
tion maintenance training programs, showing that novice drivers so trained actually
anticipate hazards more often and maintain attention better on the open road and in a
driving simulator. This program of research has been made possible by the acquisition
in 1994 of more than half a million dollars of equipment, supported in part by a grant
from the National Science Foundation. He has often spoken about his results.
William J. Horrey, PhD, is the traffic research group leader at the AAA
Foundation for Traffic Safety. Previously, he was a principal research scientist in the
Center for Behavioral Sciences at the Liberty Mutual Research Institute for Safety.
He earned his PhD in engineering psychology from the University of Illinois at
Urbana–Champaign in 2005. He has published over 50 papers on numerous topics
including visual (selective) and divided attention, automation, driver behavior, and
distractions from in-vehicle devices. He chairs the Transportation Research Board
Standing Committee on Vehicle User Characteristics (AND10) and the Publications
Division at the Human Factors and Ergonomics Society. He is an associate editor
of the Human Factors Journal and has served on several national and international
committees related to transportation safety and human factors.
John D. Lee, PhD, is the Emerson Electric Professor in the Department of Industrial
and Systems Engineering at the University of Wisconsin, Madison and director of
the Cognitive Systems Laboratory. Dr Lee’s research seeks to better integrate people
and technology in complex systems, such as cars, semi-autonomous systems, and
telemedicine. His research has led to over 400 publications and presentations, includ-
ing 13 books. He helped to edit The Oxford Handbook of Cognitive Engineering, the
Handbook of Driving Simulation for Engineering, Medicine, and Psychology, and
two books on distraction: Driver Distraction: Theory, Effects, and Mitigation and
Driver Distraction and Inattention. He is also the lead author of a popular textbook,
Designing for People: An Introduction to Human Factors Engineering.
Michael A. Regan is a professor of human factors with the Research Centre for
Integrated Transport Innovation at the University of New South Wales in Sydney,
Australia. He has BSc (Hons) and PhD degrees in engineering psychology from the
Australian National University and has designed and led more than 200 research
projects in transportation human factors and safety—spanning aircraft, motorcycles,
cars, trucks, buses, and trains. Mike is the author or co-author of around 200 peer-
reviewed publications, including three books on driver distraction and inattention
and on driver acceptance of new technologies. He was the 25th president of the Human
Factors and Ergonomics Society of Australia and is a Fellow of the Australasian
College of Road Safety.
Contributors
Liliana Alvarez
Occupational Therapy
University of Western Ontario
London, ON, Canada

Richard Bishop
Bishop Consulting
Highland, MD, USA

Ana Paula Bortoleto
Sanitation and Environment
University of Campinas, School of Civil Engineering, Architecture and Urban Planning
Campinas, Brazil

Birsen Donmez
Mechanical and Industrial Engineering
University of Toronto
Toronto, ON, Canada

Mica Endsley
SA Technologies
Gold Canyon, AZ, USA

Donald L. Fisher
Transportation Human Factors (V314)
Volpe National Transportation Systems Center
Cambridge, MA, USA
Paul M. Salmon
Centre for Human Factors and Sociotechnical Systems
University of the Sunshine Coast
Maroochydore, Australia

Trent Victor
Volvo Cars Safety Centre
Chalmers University of Technology
Göteborg, Sweden
William J. Horrey
AAA Foundation for Traffic Safety
John D. Lee
University of Wisconsin Madison
Michael A. Regan
University of New South Wales
CONTENTS

Key Points
1.1 Background
1.2 Definitions
1.2.1 Levels of Automation and Active Safety Systems
1.2.1.1 Levels of Automation
1.2.1.2 Active Safety Systems
1.2.2 Automated, Connected, and Intelligent Vehicles
1.2.2.1 Automated Vehicles
1.2.2.2 Connected Vehicles
1.2.2.3 Intelligent Vehicles
1.2.3 Operational Design Domain
1.3 The Handbook: A Quick Guide
1.3.1 The State of the Art: ACIVs (Chapter 2)
1.3.2 Issues in the Deployment of ACIVs (Problems)
1.3.2.1 Driver's Mental Model of Vehicle Automation (Chapter 3)
1.3.2.2 Driver Trust in ACIVs (Chapter 4)
1.3.2.3 Public Opinion about ACIVs (Chapter 5)
1.3.2.4 Workload, Distraction, and Automation (Chapter 6)
1.3.2.5 Situation Awareness in Driving (Chapter 7)
1.3.2.6 Allocation of Function to Humans and Automation and the Transfer of Control (Chapter 8)
1.3.2.7 Driver Fitness in the Resumption of Control (Chapter 9)
1.3.2.8 Driver Capabilities in the Resumption of Control (Chapter 10)
KEY POINTS
• Automated, connected, and intelligent vehicles hold great promise—
increasing safety for all and mobility for underserved populations while
decreasing congestion and carbon emissions.
• There may be unintended consequences of advances in automated technol-
ogies that affect the benefit that drivers can derive from these technologies,
potentially slowing the development of the technologies themselves.
• Many of these unintended consequences center around human factors
issues, issues between the driver and the vehicle, other road users, and the
larger transportation system.
• Human factors research can be used to identify and seek to explain the unin-
tended consequences, to develop and evaluate countermeasures, and to decrease,
if not entirely avoid, any delay in the deployment of these technologies.
1.1 BACKGROUND
We as humans cannot help but wonder what the future will hold and how it will
unfold in time. When it comes to the effect of advanced technologies on our behav-
iors and on the behavior of the vehicles that we drive, the public speculation has
been especially intense over the last ten years, starting in 2009 when Google (now Alphabet) began
its self-driving car program, now called Waymo. Such vehicles have the potential to
substantially reduce the number of crashes, the level of carbon emissions, the conges-
tion in our road systems, and the spread of wealth inequality, while at the same time
substantially increasing opportunities for those who are mobility impaired (National
Highway Traffic Safety Administration, 2017; Department of Transportation, 2019;
Chang, 2015). Although some individuals are skeptical about early presumptions
regarding the benefits of automated, connected, and intelligent vehicles (ACIVs)
(Noy, Shinar, & Horrey, 2017; Bonnefon, Shariff, & Rahwan, 2016), the introduction
of vehicles with advanced features continues to increase exponentially. As with the
advent of the smartphone, anticipating the long-term positive and negative conse-
quences of new technology is nearly impossible (e.g., Islam & Want, 2014; Twenge,
Martin, & Campbell, 2018). It may be some time before we actually know the real
benefits of such vehicles and features.
However, it is possible to take the bumps out of the road to full automation even
without knowing the long-term consequences. This Handbook will focus specifically
on the changes that will be wrought by advances in the autonomy, connectivity, and
intelligence of the vehicles that are being introduced into the fleet today and are likely
to be introduced over the next several years, and on the corresponding human factors
challenges that those advances raise. For readers relatively new to the discussion of
why human factors concerns might be relevant to advanced technologies in the auto-
mobile, a simple example from one of the editors’ and authors’ long list of examples
might help. This particular editor was driving 60 mph on a highway with two travel
lanes in each direction and for a brief second or two fell asleep (had what is techni-
cally referred to as a “microsleep”). He drifted into the adjacent lane, woke up, and
returned to his own lane. Had there been a large truck overtaking him in the adjacent
lane, he might not be here to tell the story. Others’ lives may have been destroyed as
well. But, fortunately, there was no truck and all was well. This speaks directly to
the lifesaving potential of technologies which, in this case, could have kept the car in
the lane and maintained speed adaptively. But it also points out just how beguiling
these technologies can be.
Most vehicles on the road today that keep the car centered and adjust the speed
require the driver to constantly monitor the driving environment (SAE International,
2018). Why? If we consider just automatic steering, there are many situations in
which it may unexpectedly deactivate. The driver really does need to be in the loop.
But, we also know that, perversely, automation can make it easier for the driver to
fall out of the loop and become disengaged (Endsley, 2017; Endsley, 1995). Are we
just trading off situations in which the technologies can be lifesaving for situations
in which the technologies actually create conditions that increase the likelihood that
a driver will crash if the technology cannot handle a particular scenario? This is the
fundamental paradox of automation. While it can provide unparalleled opportuni-
ties, it comes with its own set of challenges.
Perhaps this paradox is best exemplified by a recent study of drivers' trust in
automation. In a field study, drivers were asked to navigate a network
of roads on a closed course using a vehicle with both automatic steering and adap-
tive cruise control (ACC) (Victor, Tivesten, Gustafsson, Sangberg, & Aust, 2018).
The drivers were told that they needed to monitor the driving environment and were
warned by the vehicle driver state monitoring system if they did not comply. At the
end of the drive, either a car or a large garbage bag was placed in their path. Both
were easily crushable (e.g., a balloon car), but this was not obvious to the driver before
striking them. Drivers' trust in automation was measured after the drive on a scale
of 0 (no trust) to 7 (high trust). Fully 21 of 76 drivers crashed (28%). All of the driv-
ers who crashed had trust scores of 5 or higher (Victor, 2019). In short, the drivers
became so reliant on the technology that they assumed it would avoid obstacles even
when the technology encountered situations it was not designed to accommodate.
For readers familiar with the ongoing issues, you will find material in this
Handbook which we believe will help set the stage for a human focus on future
discussions about ACIVs. To date, vehicle manufacturers and major standards
organizations have defined automation primarily from a vehicle-centric point of
view: as the technology's capabilities increase, so too does the level of automation
of the vehicle (SAE International,
2018). But this can easily mislead drivers into believing that their role in the driv-
ing process decreases as the levels of automation increase, despite warnings to the
contrary, to the point where drivers actually feel that they can safely disengage
from driving for long periods of time. A more driver-centric viewpoint is essential
to extending automation safety benefits, one which defines and supports the new
roles that drivers face.
The goal of this Handbook is to identify the real gains that can be achieved by
identifying the various human factors, challenges, and opportunities that ACIVs
pose and then, whenever possible, to suggest countermeasures (and opportunities
for needed research), recognizing that the rapid advances in technology are chang-
ing both the challenges and the countermeasures. It is arguably the case that, even
if these challenges were not addressed, there would be a net benefit to incorporating
advanced technologies into new automobiles (Najm & daSilva, 2000), especially
active safety systems like automatic emergency braking (AEB) and electronic stabil-
ity control. But it is also arguably the case that, if these challenges are addressed,
the net benefits will only increase. Moreover, by addressing these challenges, one
reduces the real likelihood that the development of the technologies will be hobbled,
if not halted, by crashes such as the one that occurred in Phoenix, Arizona (National
Transportation Safety Board, 2018).
A scenario much like this, one that put temporary brakes on the development and
deployment of automated technologies, already seems to have unfolded, at least in
part. At the start of 2018, before the crash in Phoenix, it looked
like vehicles with advanced technologies were on the verge of becoming a wide-
spread reality. Uber prepared to launch a robo-taxi service. Waymo indicated that
individuals would be able to ride in a driverless car by the end of the year. General
Motors touted a demonstration it would undertake in New York City. None of these
(and several other similar initiatives) have come to pass, at least yet. In fact, the pub-
lic has become ever more skeptical about self-driving vehicles, with some 71% now
afraid to ride in such a vehicle compared to 61% before the crash (Edmonds, 2019).
1.2 DEFINITIONS
There is an understandable confusion around the terms used in the discussion of
automation, human factors, and driving. The terms do change with each passing
year, in part because the technologies change and in part because the field becomes
more expert and nuanced at understanding how best to differentiate clearly among
the various terms. As editors and authors, we have tried with each chapter to make
sure that the same terms are used in the same way and, if we deviate, to make it clear
how we are refocusing the definition of a term.
TABLE 1.1
Active Safety Systems and SAE Levels

| Active Safety System | Category | SAE Level | Definition | Driver Role: Manual Control | Driver Role: Active Monitoring | Driver Role: Take Over |
|---|---|---|---|---|---|---|
| AEB, PCW, FCW, ESC, BSW | Driver Support Features (DSF) | 0 | No AS, no ACC | Both steering and slowing/accelerating | Yes | N/A |
| | | 1 | AS or ACC, but not both | Steering or slowing/accelerating, but not both | Yes | Yes, limited warning* |
| | | 2 | Both AS and ACC engaged | Hands on wheel or eyes on road | Yes | Yes, limited warning* |
| A subset of the above | Automated Driving System (ADS) Features | 3 | AS and ACC | No inputs while at Level 3 | No | Yes, several seconds of warning |
| | | 4 | AS and ACC | No inputs while at Level 4 | No | Yes, minutes of warning (if driver chooses to drive the vehicle outside of the ODD) |
| | | 5 | AS and ACC | No inputs ever | No | No |

Note: AEB: automatic emergency braking; PCW: pedestrian collision warning; FCW: forward collision warning; ESC: electronic stability control; BSW: blind spot warning; AS: automatic steering; ACC: adaptive cruise control.
* Warnings may indicate that the system or systems are no longer functioning, and drivers are responsible for monitoring the traffic environment and taking control when needed.
complexity and range of possible automation types. As just one example, consider
a vehicle which has no steering wheel or pedals and is confined to operate only in a
circumscribed location. By definition it is a Level 4 vehicle (since it cannot operate
in all locations), but the driver is never asked to take over control since the vehicle
is confined to an area in which it is assumed to be totally capable.
There are at least four critical, related, details to understand about these defi-
nitions in order to keep clear the underlying concepts. First, the level of automa-
tion assigned refers to the features which currently are active, not to the vehicle
itself. Thus, a car that could operate at Level 3 may also be operating at times at
Level 0, being no different in any way in terms of driver inputs than a car with
no automated features. Second, the features associated with the first three lev-
els are referred to frequently as driver support features (DSF). They are called
DSFs because the driver still needs to be continuously monitoring the roadway.
Lateral and longitudinal control by themselves do not replace the driver at Levels
1 or 2; there are many other driving functions and tasks that still need to be per-
formed by the driver at these levels. Third, related to this, the features which need
to be added in order to achieve the three highest levels are now referred to as
Automated Driving System (ADS) features (SAE International, 2018; Department
of Transportation, 2019). Fourth, the term advanced driver assistance systems (or
ADAS) is now no longer used by many researchers because it has become so broad
that it is no longer clear to what features an individual is referring when they speak
about such systems. In many cases, the systems that are described elsewhere as
being ADAS would overlap with many of those portrayed in Table 1.1 under active
safety systems, DSF, or ADS.
system with which the driver needs to interact (directly or indirectly as part of
the vehicle and larger transportation system). So, for example, the driver is almost
never faced with a vehicle which has just automatic steering and no
active safety systems. The driver is instead immersed in a system with many dif-
ferent features, with the human factors challenges becoming correspondingly more
complex.
1.2.2.1 Automated Vehicles
By automated (or autonomous) vehicles we mean those vehicles that have automatic
steering, ACC (adaptive speed, braking, acceleration, and deceleration), or both.
Depending on the level, they can be activated separately, in concert, or not at all (e.g.,
if a driver elects to turn them off; Table 1.1). They almost certainly have active safety
systems, especially at higher levels, as such systems are an essential ingredient—
at least conceptually—to the proper functioning of the higher levels (e.g., AEB is
essential, though it may not exist as a separate safety system but as an integrated
component in the overall system).
1.2.2.2 Connected Vehicles
Connected vehicles as a term is typically used to refer to vehicles that can com-
municate with one another via vehicular ad hoc networks (VANET or inter-vehicle
connectivity; V2V). But more generally, connected vehicles as entities in and of them-
selves can have upwards of 200 sensors that need to communicate with each other
(intra-vehicular connectivity or vehicle-to-sensor, V2S) (Lu, Cheng, Zhang, Shen, &
Mark, 2014). Vehicles can also connect with the roadway infrastructure (V2R), to the
internet (V2I), and to the ubiquitous Internet of Things (IOT). Each of these different
types of connectivity creates its own human factors challenges.
Note that vehicles can be connected without having any lateral or longitudinal
control features. So, for example, intersection collision warning systems require
vehicles to communicate with one another and, by definition, warn the driver of
a potential collision, but they do not require that the vehicles have automated fea-
tures. Similarly, a vehicle can be automated without having any connectivity to other
vehicles or the changing infrastructure (e.g., signal status).
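To illustrate that connectivity and automation are independent, consider a hedged sketch of an intersection collision warning: the vehicles exchange position and speed over V2V, and the system only warns the driver, commanding no steering or braking. The message fields below are simplified stand-ins of our own devising, not the fields of any deployed standard.

```python
import math
from dataclasses import dataclass

@dataclass
class V2VMessage:
    # Simplified stand-in for a broadcast safety message (not a real standard).
    sender_id: str
    x_m: float          # position relative to the intersection center, meters
    y_m: float
    speed_mps: float

def time_to_intersection(msg: V2VMessage) -> float:
    """Seconds until the sender reaches the intersection center, if headed there."""
    distance = math.hypot(msg.x_m, msg.y_m)
    return distance / max(msg.speed_mps, 0.1)

def collision_warning(own: V2VMessage, other: V2VMessage, window_s: float = 2.0) -> bool:
    """Warn the driver when both vehicles would occupy the intersection together.

    Note: this only warns; neither vehicle needs any automated control features."""
    return abs(time_to_intersection(own) - time_to_intersection(other)) < window_s
```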
1.2.2.3 Intelligent Vehicles
We include the term intelligent vehicles to account for the additional features of
future vehicles that do not align with the features of automated and connected vehi-
cles. These include driver state monitoring algorithms that can tune the vehicle to the
2 Note that the systems are in constant flux. New systems may have different operational design domains.
3 Again, please be aware that the systems are in constant flux. New systems may have different operational design domains.
the ACC is also activated, the ACC could automatically disengage. Additionally, the
ACC will not detect or brake for children, pedestrians, animals, or other objects. It
may not detect a vehicle ahead on winding and hilly roads. The list goes on.
In summary, it is important for the driver to understand the ODD for all of the rea-
sons listed above. Whether the driver can actually do so is one of the many questions
we will address in the chapters in the Handbook, a brief discussion of which follows.
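As a schematic illustration of why ODD awareness matters at runtime, the sketch below checks current conditions against a hypothetical ODD and disengages the feature when the envelope is violated. The condition names and limits are invented; they do not describe any production system.

```python
from dataclasses import dataclass

@dataclass
class DrivingConditions:
    road_type: str          # e.g., "divided_highway", "winding_rural"
    speed_mph: float
    lane_markings_visible: bool
    weather: str            # e.g., "clear", "heavy_rain"

def within_odd(c: DrivingConditions) -> bool:
    """Invented ODD for a hypothetical ACC + lane-centering feature."""
    return (c.road_type == "divided_highway"
            and c.speed_mph <= 85
            and c.lane_markings_visible
            and c.weather == "clear")

def supervise(c: DrivingConditions, acc_active: bool) -> str:
    if acc_active and not within_odd(c):
        # Mirrors the behavior described in the text: the feature may
        # disengage automatically, and the driver must be ready to take over.
        return "DISENGAGE_AND_ALERT_DRIVER"
    return "CONTINUE"
```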
FIGURE 1.1 Engagement and disengagement and the factors which influence them.
1.3 THE HANDBOOK: A QUICK GUIDE

We have organized the Handbook into five sections. The first section, including the
Introduction, provides the reader with a basic understanding of the human factors issues
surrounding ACIVs along with a comprehensive discussion of the many ongoing and
future activities targeting the research, development, and deployment of such vehicles.
The second section focuses on developing an understanding of the unintended conse-
quences of automation for the human driver. These issues emerge from more funda-
mental human characteristics that can be used both to identify the source of potential
problems and to serve as the foundation for solutions. For example, the mental models
that drivers have of automation are fundamental characteristics that can not only lead to
problems but also point to how to train people and how to design better HMIs. The third
section focuses on the possible solutions to these unintended consequences. The fourth
section introduces additional topics that do not neatly fit into the above categories. And,
finally, we conclude with a section on the evaluation of the safety benefits of ACIVs.
the driver does, from deciding at what speed to travel to what route to take and, now,
what level of automation to engage, and whether the systems currently activated
are operating within the envelope of the ODD. In this chapter, the authors review
the various mental models of automation and driving, showing how fundamental
psychological biases contribute to their formation, especially when automation
is entered into the picture, and then explaining how those mental models govern
driver’s short (e.g., sudden braking), intermediate (e.g., level of automation), and
long-term (e.g., mode choice) behaviors.
not only the driver that needs to be situation aware as the driver is embedded in a
larger transportation system. In Chapter 13, the authors argue that considering the
situation awareness requirements of both human and non-human agents is critical,
as well as how they can exchange and connect their awareness with one another in
different road environments. To demonstrate their approach, the authors present an
overview of their distributed situation awareness model and an analysis of the recent
Uber–Volvo fatal collision in Tempe, Arizona.
4 The literature generally uses the terms HMI and Driver–Vehicle Interface (DVI) interchangeably; we will use HMI throughout this chapter but view HMI and DVI as synonymous for our purposes.
on more of the driving task, the HMI might need to become more complex to prop-
erly support the driver. A continuing challenge is to identify just what these changes
to complexity are amidst the changing and uncertain landscape of advanced vehicle
technology. Despite these challenges, the objective of this chapter is to summarize
what is known regarding HMI design principles for ACIVs. Many of these principles
are aimed at addressing the important issues raised in the preceding chapters.
1.3.3.3 Automated Vehicle Design for People with Disabilities (Chapter 17)
Impairments to a driver’s fitness reduce the driver’s normal range of safe operations,
but generally do not impact the driver’s ability to obtain a motor vehicle license.
However, drivers with disabilities who cannot obtain a driver’s license may be able
to obtain one when fully automated vehicles (SAE Levels 4/5) are available, poten-
tially providing them with much greater mobility and independence. In Chapter 17,
the author describes what is needed from a human factors standpoint in order for
this potential to be realized. In particular, it is argued that accessibility for per-
sons with various disabilities needs to be considered early in the design process.
Although accessibility has not traditionally been a design focus for passenger vehicle
manufacturers, there are lessons to be learned from other modes of transportation
that can be leveraged in the design of fully automated vehicles. The first part of the
chapter introduces readers to the social model of disabilities, which is a philosophy
that views disabilities as an individual difference similar to height and gender. The
second part of the chapter provides an overview of the universal design and its seven
principles for developing products and systems that are usable by all. The third and
final section of the chapter discusses what aspects of a highly automated vehicle are
new and unique and how to make these accessible to persons with disabilities.
the safety of ACIV systems. In Chapter 18, the authors summarize the general con-
cept of learner-centered HMI design for ACIVs and provide information on specific
training-related factors. Training is potentially useful for everything from learning
how to respond to the warnings, to learning how to monitor the dynamic driving task
when in Level 2 and above, and to re-familiarizing oneself with driving skills that
might have atrophied. The chapter serves as a foundation for driver training stake-
holders, technology developers, consumers, and legislatures to address the growing
need to include relevant and effective training for ACIV systems as these technolo-
gies are developed and deployed.
management and analysis of data that are derived from these evaluations, especially
issues centered on big data. In Chapter 23, the author describes the techniques that
can help the analyst make sense of these complex datasets. First, the author describes
methods for understanding complex datasets, including how to manipulate, visu-
alize, and examine the data. Second, the author describes the research questions
that can be answered about both the operator and the system with large datasets.
Finally, the author describes both the techniques and tools needed to make sense of
the behavior given the context, user, and the technology itself and the questions that
should be asked to check/validate the correctness of the outcomes.
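As a small taste of the manipulate-visualize-examine workflow described above, the sketch below summarizes a hypothetical naturalistic driving dataset with pandas; the columns, threshold, and data are invented for illustration.

```python
import pandas as pd

# Hypothetical naturalistic driving data: one row per one-second epoch.
df = pd.DataFrame({
    "driver_id":       [1, 1, 2, 2, 2, 3],
    "automation":      ["L0", "L2", "L2", "L2", "L0", "L2"],
    "eyes_off_road_s": [0.1, 1.8, 2.4, 0.3, 0.2, 1.1],
})

# Manipulate: long off-road glances are a common surrogate safety measure.
df["long_glance"] = df["eyes_off_road_s"] > 1.5

# Examine: does glance behavior differ by automation state?
summary = df.groupby("automation")["long_glance"].mean()
print(summary)  # proportion of epochs with a long off-road glance, per state
```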
1.4 CONCLUSION
This introduction and overview is not meant to distill the wisdom, experience, and knowl-
edge that are conveyed within the Handbook. Rather, our goal as editors has been to pro-
vide the reader with the motivation for the Handbook, which follows from the paradox
of automation and the basic terminology. We hope this handbook leaves the reader less
rather than more confused about the ACIV landscape as it is relevant to human factors.
We leave where we began, hopefully having provided substance to our initial remarks.
Automation has saved, does save, and will continue to save lives. A great majority of
the reasons for crashes have been traced back to the driver (National Highway Traffic
Safety Administration, 2008). Yet as the chapters make clear, the larger system of
which the driver is a part is almost never designed with the characteristics, strengths,
and weaknesses of the human front and center. And in those cases where the human
driver clearly is at fault, this does not mean that the automation that is being introduced
in today’s vehicles will necessarily produce the benefits everyone hopes it will. First,
the type of automation that has saved so many lives is largely of a different kind (active
safety features) than the type of automation which is working its way into Level 1 and 2
cars (automatic steering and ACC). Thus, the automation itself could introduce errors.
Second, the type of automation that is being introduced is being used to replace driver
operations in areas where the human driver is phenomenally successful, having only
1.25 fatalities per 100 million miles of vehicle travel. Thus, the automation needs to
meet an especially high threshold of safety. Third, increasingly autonomous systems do
not remove humans and their errors, but simply displace them. Future crashes might be
due to programmer, remote operator, or maintenance errors. This is perhaps the most
nefarious instance of the automation paradox: automation might eliminate the current
driver errors, but might create new opportunities for even more dangerous “human
errors.” This creates challenges for the driver and others in the sociotechnical system
of transportation that do not exist with vehicles in the recent past. Understanding these
challenges and providing insight into the potential countermeasures is the purpose of
the Handbook. It is a purpose in service of the goal sought by all concerned: achieving
the maximum benefits of ACIVs. There is so much promise.
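That "especially high threshold" can be given a rough number. The back-of-envelope sketch below, which uses the statistical rule of three and is our illustration rather than a calculation from the Handbook, estimates how many fatality-free miles an ADS fleet would need to log before one could claim, with roughly 95% confidence, that it matches the human baseline of 1.25 fatalities per 100 million vehicle miles.

```python
# Rule of three: after N trials with zero observed events, the 95% upper
# confidence bound on the event rate is approximately 3 / N.
human_fatality_rate = 1.25 / 100e6   # fatalities per vehicle mile

miles_needed = 3 / human_fatality_rate
print(f"{miles_needed:,.0f} fatality-free miles")  # roughly 240,000,000 miles
```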
ACKNOWLEDGMENTS
The editors would like to acknowledge the help of the publisher, Taylor & Francis,
and everyone with whom we interacted, including Cindy Carelli, Erin Harris, and
Assunta Petrone. They were always helpful and quick to respond. We are especially
grateful to the fantastic group of authors that contributed both their time and energy
in drafting their respective chapters. Without them, such a Handbook would not be
possible. We also would like to thank Daniela Barragan and Stanislaw Kolek for
their assistance in processing and proofing several chapters. Donald Fisher would
like to acknowledge the support of the Volpe National Transportation Systems
Center for portions of the preparation of this Handbook. William Horrey is grateful
for the support of David Yang and the AAA Foundation for Traffic Safety. The opin-
ions, findings, and conclusions expressed in this publication are those of the authors
and not necessarily those of the Department of Transportation, the John A. Volpe
National Transportation Systems Center, the AAA Foundation for Traffic Safety, or
the University of New South Wales.
REFERENCES
Bonnefon, J., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352, 1573–1576.
Cadillac. (2018). 2018 Cadillac CT6 owner's manual. Retrieved September 22, 2018, from www.cadillac.com/content/dam/cadillac/na/us/english/index/ownership/technology/super-cruise/pdfs/2018-cad-ct6-owners-manual.pdf
Chang, J. (2015). Estimated Benefits of Connected Vehicle Applications: Dynamic Mobility Applications, AERIS, V2I Safety, and Road Weather Management. Washington, DC: Department of Transportation.
Department of Transportation. (2019). Automated Vehicles 3.0: Preparing for the Future of Transportation. Washington, DC: Department of Transportation.
Edmonds, E. (2019, March 14). Three in four Americans remain afraid of fully self-driving vehicles (AAA). Retrieved April 9, 2019, from https://newsroom.aaa.com/2019/03/americans-fear-self-driving-cars-survey/
Endsley, M. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37, 32–64.
Endsley, M. (2017). Autonomous driving systems: A preliminary naturalistic study of the Tesla Model S. Journal of Cognitive Engineering and Decision Making, 11, 225–238.
Fridman, L. (2018). Human-Centered Autonomous Vehicle Systems: Principles of Effective Shared Autonomy. arXiv, Cornell University. Retrieved October 12, 2019, from https://arxiv.org/pdf/1810.01835.pdf
IIHS HLDI. (2018). GM Front Crash Prevention Systems Cut Police-Reported Crashes. Retrieved July 14, 2019, from https://www.iihs.org/news/detail/gm-front-crash-prevention-systems-cut-police-reported-crashes
Lu, N., Cheng, N., Zhang, N., Shen, X., & Mark, J. (2014). Connected vehicles: Solutions and challenges. IEEE Internet of Things Journal, 1, 289–299.
Najm, W., & daSilva, M. (2000). Benefits estimation methodology for intelligent vehicle safety systems based on encounters with critical driving conflicts. ITS America 10th Annual Meeting and Exposition, Boston, MA.
National Highway Traffic Safety Administration. (2008). National Motor Vehicle Crash Causation Survey (NMVCCS): Report to Congress. Washington, DC: Department of Transportation.
National Highway Traffic Safety Administration. (2017). Automated Driving Systems 2.0: A Vision for Safety. Washington, DC: U.S. Department of Transportation. Retrieved November 23, 2017, from www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/13069a-ads2.0_090617_v9a_tag.pdf
National Transportation Safety Board. (2018, May 24). Preliminary Report: Highway HWY18MH010. Retrieved from www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf
Noy, I., Shinar, D., & Horrey, W. (2017). Automated driving: Safety blind spots. Safety Science, 102, 68–78.
SAE International. (2018, December 11). SAE International Releases Updated Visual Chart for Its "Levels of Driving Automation" Standard for Self-Driving Vehicles. Retrieved from www.sae.org/news/press-room/2018/12/sae-international-releases-updated-visual-chart-for-its-%E2%80%9Clevels-of-driving-automation%E2%80%9D-standard-for-self-driving-vehicles
Toyota. (2017, November 14). Toyota Owners: Manuals & Warranty. Retrieved from www.toyota.com/owners/resources/owners-manuals?&srchid=semGOOGLETOO_Resources_Owner_Manuals+-+BMM%2Btoyota+%2Bmanual&gclid=CLvHnsW5v9cCFVCiswodtzMPmA&gclsrc=ds
Victor, T. (2019, January). Guiding the design of partially automated vehicles. Human Factors Workshop, Annual Meeting of the Transportation Research Board, Washington, DC.
Victor, T., Tivesten, E., Gustafsson, P. J., Sangberg, F., & Aust, M. (2018). Automation expectation mismatch: Incorrect prediction despite eyes on threat and hands on wheel. Human Factors, 60, 1095–1116.
2 Automated Driving: Decades of Research and Development Leading to Today's Commercial Systems
Richard Bishop
Bishop Consulting
CONTENTS

Key Points
2.1 Introduction
2.1.1 Automated Driving: From Vision to the Launch of a New Industry in 70 Years
2.1.2 Advent of Active Safety Systems
2.1.3 Addressing Safe Testing and Deployment of Automated Driving
2.1.4 Pursuit of Nascent "Holy Grails"
2.2 Distinctions within SAE Levels of Automation
2.3 Automated Driving: Technology Basis
2.3.1 Understanding the World to Make Proper Driving Decisions
2.3.2 Perception, Mapping, and Localization
2.3.3 Motion Planning and Control
2.3.4 Artificial Intelligence
2.3.5 Off-Board Information Sources
2.3.6 Driver Monitoring as Key to Safety Case
2.3.7 Behavioral Competencies and Remote Support
2.3.8 Design and Test Processes to Ensure Safety
2.3.8.1 Functional Safety
2.3.8.2 Cybersecurity
2.3.8.3 ADS Validation Processes
2.4 Automated Driving Commercial Development and Deployment
2.4.1 Automated Fleet Services for Freight and Parcels
2.4.2 Automated Fleet Services for People
2.4.3 Private Ownership: Automation Features in Mass-Market Passenger Cars
2.5 Regulatory Considerations
2.6 Going Forward: ADS Implications for Human Factors
References
KEY POINTS
• The modern era of automated driving has been underway worldwide since
the early 1990s, initially supported by public funding. Private sector fund-
ing began to dominate after Google entered the space early in the 2010
decade, greatly increasing the pace of development.
• Active safety systems that intervene via braking or steering to avoid a crash
are now available or standard on many passenger vehicles. We can expect
the crash rate for human-driven vehicles to begin a distinct downward trend
when the number of equipped vehicles reaches an inflection point. This
beneficial trend will be largely independent of the deployment pace of auto-
mated vehicles.
• The concept of Operational Design Domain (ODD) is essential to describ-
ing the capability of driver support features and automated driving sys-
tems (ADS). ODD defines the operating conditions under which the ADS is
designed to function and can include many factors such as road type, speed,
environmental conditions, and traffic conditions.
• Key factors in developing safe roadworthy Level 4 vehicles are functional
safety for overall system design, artificial intelligence for perception, simu-
lation to support validation, and robust cybersecurity.
• New approaches and standards are needed to communicate the intention
and awareness of driverless trucks and robo-taxis to other road users, i.e.,
“external human–machine interface (HMI).”
• Off-board information can augment situational awareness for active safety
systems, driver support features, and ADSs. This may include GPS, addi-
tional data from the cloud (roadworks ahead, weather changes), and low-
latency information that may be transmitted directly from the infrastructure
or other vehicles. Due to the possibility of wireless communications inter-
ruptions or lack of communication from some traffic participants, this data
can augment situational awareness but cannot be relied upon. Therefore,
while off-board information can enhance performance, it is not necessary
for implementing automated driving.
• An exception to the above point is truck platooning systems that rely on
vehicle-to-vehicle communications implemented as part of system design,
in a manner so that communications integrity is controlled. Level 1 truck
platooning is coming to market now, which will evolve to Level 4 driv-
erless follower trucks behind a human-driven lead truck in the coming
years.
• Level 4 robo-taxis for street operations and driverless trucks for highway
operations are rapidly moving toward commercialization, with on-road
testing now underway using safety drivers.
• Regulations governing the operation of driverless vehicles on public roads
are evolving and vary significantly around the world; currently the United
States is the most open environment.
2.1 INTRODUCTION
Before beginning a detailed discussion of the human factors issues associated with
automated, connected, and intelligent vehicles (ACIV), it is important to have some
understanding of the current and future technologies that are likely to be introduced
into future vehicles, along with the different classification schemes, key concepts,
and timelines for deployment. These technologies were described briefly in the pre-
vious chapter, and here they are described in more detail. Note: because commercial
activities in ACIV are evolving quickly, this chapter should be considered a snapshot
of the situation at the time of writing (2019).
The ideal road trip should be safe, smooth, uninterrupted, and expeditious. For
the vehicle occupants, the time should be productive, restful, and/or entertaining,
and they should be connected with and aware of the information they care about.
Accessing this capability needs to be affordable and, for most people, it is important
that the trip be “good for society as well as for me,” i.e., environmentally friendly.
Automated driving promises to bring us closer than ever to this ideal, particularly
when joined with vehicle connectivity that enables optimized traffic flow. Tech
developers are envisioning a driverless and crashless society and traffic engineers
are dreaming about the end of congestion—a tall order indeed: can it really be?
FIGURE 2.1 Demo '97 automated vehicles in platooning formation. (From California PATH Program, https://path.berkeley.edu/research/connected-and-automated-vehicles/national-automated-highway-systems-consortium/photo.)
language with substantial investments, joined by the broader tech industry. For
example, Toyota has invested hundreds of millions of dollars in Uber (Somerville,
2019). In early 2019, the German Association of the Automotive Industry (VDA)
car industry association estimated that Germany’s car industry alone will invest
18 billion euros in “digitization and connected and automated driving” by 2021
(Eckert, 2019). Independently, extensive robo-taxi public road testing is underway
by numerous startup companies. Though specific introduction dates have not been
announced, given the maturity of current systems, it is conceivable that driverless
mobility services will be available starting in the 2020 timeframe.
Applying automated driving to truck operations is another very active area,
driven by the potential for reduced fuel consumption and labor costs. Fuel use can
be reduced via Level 1 “platooning” enabled by vehicle-to-vehicle (V2V) communi-
cations. Fully driverless trucking, with the obvious benefits of reduced labor costs,
has attracted substantial investment, launching many startups. Truck OEMs have
recently become active here as well. For last-mile street-level operations, driverless
parcel delivery has also seen an upswing in activity.
The following sections discuss key automated driving concepts and underlying
technology, plus provide a snapshot of the current level of testing and deployment
activities. The chapter concludes with observations on the implications of these
developments for human factors research and development.
2.2 DISTINCTIONS WITHIN SAE LEVELS OF AUTOMATION

At the lower end of the scale (Levels 1–2), Driver Support Features (DSFs) perform
a portion of the driving task, controlling lateral or longitudinal control (Level 1) or
both (Level 2), with the human having ultimate responsibility to monitor the situ-
ation and intervene as needed. Examples are Adaptive Cruise Control and Lane
Centering. In Level 3, the ADS is performing the entire driving task within the
defined ODD, but a human driver is required to be available to take over control
when requested by the system. For Levels 4 and 5, the ADS takes full responsibility
for vehicle control; the vehicle, not the driver, is driving. For Level 4 this is condi-
tional, focused on a specified ODD, whereas for Level 5 it is unconditional—i.e.,
the vehicle can automatically handle all driving situations now handled by human
drivers. For the foreseeable future, deployment of highly automated systems will
be at Level 4. While Level 5 is useful as a logical endpoint of the scale, there may
not be a sufficient business case to actually deploy “anywhere, anytime” Level 5
systems; i.e., society and markets may not see the need for that last 0.0001% of
ADS capability.
The levels of automation are intended primarily to distinguish between various
degrees of human versus machine control. They do not address implementation
aspects such as reliance on connectivity, nor do they address some key aspects that
are important to the business case. For instance, whether a vehicle has a driver, or
has driver controls, is not addressed for Levels 4 and 5.
It should also be noted that a particular vehicle may operate at different levels of
automation depending on the operational environment and task at hand. A vehicle
designed for Level 4 operations within a single lane on the highway may revert to
Level 2 momentarily if the system requires that the driver approves/monitors a lane
change maneuver. The same vehicle may operate at Level 1, providing low-speed
Adaptive Cruise Control on suburban and city streets, and automatically execute a
Level 3 parking maneuver at the end of the trip.
High-definition digital maps include road geometry, direction of travel, lane con-
figurations, crosswalks, traffic lights, stop signs, curbs, and sidewalks, all registered
with high accuracy and in three dimensions. Maps are created in advance by the self-
driving vehicle’s sensors, as safety operators manually drive test vehicles throughout
a city with on-board sensors scanning the entire 3D space including roads, side-
walks, and buildings. The resulting base map is annotated with the relevant traf-
fic regulations and guidance, such as yielding relationships (Ford Motor Company,
2018). Processing of sensor data interacts with the maps; for instance, at a complex
signalized intersection with several traffic signal heads hanging above, the known
3D location of the specific traffic light relevant to the ADS vehicle assists the soft-
ware in detecting the signal status of the correct signal head and avoids allocating
processing resources to irrelevant information elsewhere in the scene.
Some ADS developers create their own HD maps, while others can access this infor-
mation from a diverse group of startups or traditional mapping providers. The maps
must constantly be refreshed as the world changes. Nevertheless, the ADS must robustly
adapt to recent changes in the real world not noted in the map. Any variance between the
stored map and the world as detected by on-board sensors is uploaded to the tech devel-
oper’s cloud. Efforts are underway to establish an industry standard for sharing HD Map
data from vehicles back to the cloud, which would enable broader sharing.
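The map-maintenance loop described above can be pictured as a diff between the stored map and what the sensors currently report. The sketch below is illustrative only; the types and the 0.5-meter tolerance are our assumptions, not any developer's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MapFeature:
    feature_id: str
    kind: str              # e.g., "stop_sign", "crosswalk", "lane_boundary"
    x_m: float             # map-frame position, meters
    y_m: float

def find_variances(stored: list[MapFeature],
                   detected: list[MapFeature],
                   tol_m: float = 0.5) -> list[MapFeature]:
    """Return detected features that are missing from, or displaced in, the map."""
    variances = []
    for d in detected:
        match = next((s for s in stored
                      if s.kind == d.kind
                      and abs(s.x_m - d.x_m) < tol_m
                      and abs(s.y_m - d.y_m) < tol_m), None)
        if match is None:
            variances.append(d)   # queue for upload to the developer's cloud
    return variances
```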
ADS developers implement localization techniques to determine the physical location of the vehicle as well as its position relative to nearby objects; this is generally accomplished by correlating digital map data with information coming from perception sensors. While the global positioning system (GPS) is useful for high-level navigation, automated driving must rely on this more direct localization approach. Tech developer Aurora notes that, using these means, it is able to “determine the vehicle’s position even in environments that deny or deceive GPS, localizing all six degrees of freedom to within 10 centimeters and 0.1 degree of accuracy” (Aurora, n.d.).
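A sketch of that accuracy target follows: accept a six-degree-of-freedom pose only if it agrees with the map-matched fix to within 10 cm of translation and 0.1 degree of rotation. The pose layout (x, y, z, roll, pitch, yaw, angles in degrees) is an assumption for illustration.

```python
# Minimal tolerance check on a 6-DOF pose against a map-matched reference.
import numpy as np

def pose_within_tolerance(estimated, map_matched,
                          max_trans_m=0.10, max_rot_deg=0.1):
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(map_matched, dtype=float)
    trans_err = np.linalg.norm(est[:3] - ref[:3])
    # Wrap each angular difference into [-180, 180) before comparing.
    rot_err = np.abs((est[3:] - ref[3:] + 180.0) % 360.0 - 180.0).max()
    return trans_err <= max_trans_m and rot_err <= max_rot_deg
```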
Waymo’s Voluntary Safety Self-Assessment Letter, provided to the U.S. National
Highway Traffic Safety Administration (NHTSA), offers some excellent examples of
how perception, mapping, and location come together to understand a traffic scene.
As shown in Figure 2.2, Waymo self-driving vehicle software has detected vehicles
(depicted by green and purple boxes), pedestrians (in yellow), cyclists (in red) at the
intersection, and a construction zone up ahead (Waymo, 2017).
FIGURE 2.2 (See color insert.) Understanding of a real-world traffic scene in virtual
space. (From Waymo 2018.)
FIGURE 2.3 (See color insert.) Assigning predictions to each object in the traffic scene.
(From Waymo 2018.)
Systems at lower levels of automation still rely on human judgment to take over driving when needed. Driver-facing cameras assess the driver’s head position and/or gaze to “monitor whether the driver is monitoring the technology.” If the driver is not attentive, warnings are activated that may lead to the system disabling support or achieving a Minimal Risk Condition (MRC) (see below).
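A hedged sketch of this "monitoring whether the driver is monitoring" logic is shown below: escalation from warning toward an MRC as gaze-off-road time grows. The thresholds are illustrative, not taken from any production system.

```python
# Escalating driver-monitoring response as inattention persists.
def monitoring_action(eyes_off_road_s: float) -> str:
    if eyes_off_road_s < 2.0:
        return "none"             # driver judged attentive
    if eyes_off_road_s < 4.0:
        return "visual_warning"   # cluster icon or message
    if eyes_off_road_s < 6.0:
        return "audible_warning"
    if eyes_off_road_s < 8.0:
        return "haptic_warning"   # e.g., seat vibration or brake tap
    return "achieve_mrc"          # disable support; slow and stop safely
```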
There is a key trade-off between having the ADS fulfill all behavioral competencies (incurring additional cost and development time) and taking another approach that relies on off-board resources. Remote system support, which brings a human into the loop, can be applied if a driverless vehicle becomes confused or lacks the data to keep driving (due to construction, weather, smoke, etc.) (Nissan Motor Company, n.d.). In this case, the vehicle stops in-lane and contacts a remote human operator, who views the situation and downloads a new path that the vehicle implements with its on-board ADS. At times this may involve the human authorizing an override of ADS rules, such as “never cross a double yellow line,” which could be necessary to proceed past a lane closed for maintenance. For robo-taxis in particular, the complexity of endpoints (pickup and drop-off situations) will likely require remote support to provide efficient service to riders.
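The division of labor in that flow can be sketched as follows; every name here is hypothetical. The essential point is that the human supplies a path and any rule exceptions, while the on-board ADS continues to execute the maneuver.

```python
# Sketch of the remote-support flow (all interfaces are assumptions).
from dataclasses import dataclass, field

@dataclass
class RemoteGuidance:
    waypoints: list                                   # path from the remote operator
    rule_overrides: set = field(default_factory=set)  # e.g., {"cross_double_yellow"}

def handle_confused_vehicle(vehicle, remote_center):
    vehicle.stop_in_lane()                   # reach a safe stop first
    scene = vehicle.upload_scene_snapshot()  # camera/lidar context for the operator
    guidance = remote_center.request_guidance(scene)
    # The on-board ADS still drives; the human only authorizes the path
    # and any explicit rule exceptions.
    vehicle.follow_path(guidance.waypoints,
                        allowed_exceptions=guidance.rule_overrides)
```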
Remote driving approaches are also being pursued. Here, a remote human driver fully operates a driverless vehicle, either in a service setting (such as driving tractor trailers from a warehouse to a transfer yard near the highway, where a driverless truck takes the load onwards) or to rescue a vehicle designed for driverless operation that has encountered a failure or a situation it cannot handle (such as a sudden snowstorm outside the system’s ODD) (Starsky Robotics, n.d.; Ford Motor Company, 2018).
Ford describes this approach further:

our resulting self-driving vehicle will fail safely or fail functionally depending on the type of the malfunction. Fail-functional systems include braking, steering, electrical low voltage power and the Virtual Driver System. In the event that driving conditions sufficiently change to violate the ODD’s conditions, our vehicles will implement the protocols defined by the Minimal Risk Condition.
(Ford Motor Company, 2018)
2.3.8.2 Cybersecurity
Cybersecurity presents another challenge that must be addressed in ADS design and health monitoring. Most ADSs under development are tethered to the manufacturer’s or technology developer’s cloud for system monitoring, data collection, and, in some cases, active control of the vehicle. This wireless connection creates
a “threat surface” that can be attacked by hackers. While news reports continue to
emerge about the latest hack of a car (Wired.com, 2017), automobile manufacturers
have invested extensive resources in developing new architectures, procedures, and
hardware/software to prevent a cyberattack. ADS developers must design cyberse-
cure systems as well. The Alliance of Automobile Makers (AAM) has noted that
“vehicles are highly complex with multiple layers of security, and remote access is
exceedingly difficult by design. New cars being launched now have a substantial
increase in cybersecurity compared to earlier years. Automakers are collaborating
in all areas possible, including hardware, software and knowledge sharing with sup-
pliers, government and the research community” (Kunkle, 2019). Collaboration is
key to monitoring the evolution of cyberattacks and securing critical vehicle sys-
tems. The Automotive Information Sharing and Analysis Center (Auto-ISAC) was
established by the auto industry for this purpose, which is open to ADS developers
as well. Both the Auto-ISAC (n.d.) and NHTSA (2016) have published reports on
automotive cybersecurity best practices.
Updates that result from this testing and simulation are pushed out to the on-road fleet for further validation, a process that may occur on a daily basis in some cases. This ongoing
feedback loop is core to agile and fast development and is a key reason why ADS
development has progressed so rapidly.
Another example is BMW’s Data-Driven Development process (BMW, 2019),
which is being applied to validation of the BMW iNEXT Level 3 system launching
in late 2021. To create a broad dataset, the development team is collecting 5 million
km (3.1 million miles) of public road driving data from 80 BMW 7 Series cars that
are in operation in the United States, Germany, Israel, and China. From this data,
2 million km (1.25 million miles) of highly relevant driving scenarios (including
environmental factors) are then extracted. When parameterized and expanded in software, these data provide the simulation system with an additional 240 million km (150 million miles), maximizing the diversity of road conditions against which the
ADS is tested. The dataset will grow substantially, as the data collection fleet is set
to grow to 140 vehicles by the end of 2019.
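As a back-of-envelope check of the scaling BMW describes, the kilometer figures below come from the text; the expansion factors are derived, not quoted.

```python
# Scaling of collected data into simulated exposure (figures from the text).
collected_km = 5_000_000      # public-road data from the 7 Series fleet
relevant_km = 2_000_000       # extracted highly relevant scenarios
simulated_km = 240_000_000    # after parameterization and expansion

print(simulated_km / relevant_km)    # 120.0: ~120 simulated km per relevant km
print(simulated_km / collected_km)   # 48.0: ~48x the raw collected mileage
```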
Because any ADS can be exposed to rare edge cases in the real world, ADS developers are discussing ways to share data on such cases, which would include development of standards for describing on-road scenarios. This process will take time, and meanwhile developers are working individually. The recently completed
German Pegasus project (Brooke, 2019), involving key automakers, created a data-
base of relevant scenarios that can be implemented in simulation to complement
on-road testing.
How well are current systems doing in real-world testing? California law requires ADS developers testing on public roads to report annually on “disengagements,” i.e., situations in which the human safety driver (testing a Level 4 system) took over driving from the ADS to maintain safety. This dataset does not provide a full picture of overall performance, because the complexity of the driving environment can vary dramatically; nevertheless, the data are at least indicative. For 2018, GM Cruise reported a takeover rate of approximately once every 5,000 miles, operating in the highly challenging traffic of downtown San Francisco (Ohnsman, 2019). Waymo reported a disengagement rate of approximately once every 11,000 miles, operating in the less complex environment of Silicon Valley suburbs (Waymo, 2019). It should be noted that several Level 4 ADS developers do the majority of their testing in states other than California, where disengagement reports are not required. Keeping in mind the caveat that disengagement data have limited value, from year to year they generally show a steady improvement in disengagement rates.
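For clarity, the metric behind these comparisons reduces to a simple ratio; in the sketch below the mileage and takeover counts are hypothetical, chosen so the result matches the ~5,000-mile figure quoted above.

```python
# Miles-per-disengagement as reported in California annual filings.
def miles_per_disengagement(miles_driven: float, disengagements: int) -> float:
    return miles_driven / max(disengagements, 1)

# e.g., a fleet logging 500,000 miles with 100 safety-driver takeovers:
print(miles_per_disengagement(500_000, 100))  # 5000.0 miles per takeover
```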
Several automakers initially planned to “leapfrog” Level 3 and offer Level 4 systems in which the driver is out of the control loop. However, given the long tail of Level 4 development (requiring new design approaches and extensive safety validation), Level 3 systems are now coming to market restricted to specific operating conditions, as described below.
As noted in the introduction, though initially highly skeptical about the robo-taxi use case, a substantial portion of the auto industry has embraced mobility services as key to its future. In particular, offering mobility via robo-taxi services provides the advantages of fleet operation rather than private ownership. This is crucial in bringing such a complex technology to the public: fleet vehicles are not subject to the same cost pressures as a retail car product, because return-on-investment is the key metric. Fleets have the advantage of hands-on, skilled staff to conduct software upgrades, system safety certification, maintenance, and similar tasks on a regular basis for every vehicle in the service operation. Their operating area can be limited geographically and mapped in detail, with the complexity of the operating environment matched to the capability of the ADS. For these reasons, higher-level automation will be deployed in fleets first; OEMs view fleet operations as incubators for mass-market system development.
Current commercial development activity is focused across the domains of pas-
senger cars, robo-taxi, public transit, parcel delivery, and heavy trucks. The avail-
ability of various levels of automation to the public will depend on both product
introduction dates and geography. Robo-taxi deployments in particular will begin in
highly constrained geographic environments and spread in extent gradually.
Truck ADS developers are already operating on public highways collecting performance data, with safety drivers in place to monitor the Level 4-intent systems.
Because ADS technology can replace a truck driver for part or all of the journey, the Level 4 business case for trucking is strong, and deployment is now in its beginning stages. In early 2019, Einride received permission from the Swedish government to begin operating its driverless vehicles on a 300-m section of non-highway public road (Einride, n.d.). Highway operations in the United States are seen as the strongest market, and truck ADSs are expected to be launched by other startups in the 2019–2020 timeframe in areas of lowest complexity in terms of traffic, topography, and weather.
Most tech developers are focused initially on deployment of a “ramp to ramp” system
that operates only on limited-access divided highways. Human drivers would bring
the load to a transfer yard adjacent to the highway, where the robo-truck would attach
to the trailer and then drive the entire highway portion, with the reverse happening at
the other end of the journey (Alistair, 2019). However, TuSimple (n.d.) plans to operate
from “dock to dock,” taking the load driverless from plants, warehouses, or distribu-
tion centers via the street network to reach the highway, providing driverless operations
for the entire trip. See Table 2.1 for a summary of commercial activity in Automated
Freight Movement; note that information in the table is not exhaustive.
Delivery of parcels by automated vehicles operating in street settings is an
intriguing use case, because the pressures that come with transporting human pas-
sengers (safety, user experience, etc.) are not present. This allows greater design
freedom while maintaining the attractive business case of fulfilling deliveries with-
out the need to pay a human driver. Working with retail partners, Ford has tested
an automated pizza delivery service (with safety driver) in Miami, with plans to
replicate this next in Washington, D.C., and at scale in 2021 in several cities (Ford
Motor Company, 2018). Daimler has conducted tests of automated V-Class vans in
the Stuttgart area for parcel delivery (Daimler, n.d.). Toyota is aiming to have their
e-Palette vehicles (configurable, all-purpose automated platforms) ready in time
for the 2020 Olympic and Paralympic Games in Tokyo, with testing in the United
States in the early 2020s (Toyota Corporation, n.d.). In 2018, Toyota announced the
e-Palette Alliance, which included retail partners such as Pizza Hut and Amazon.
Nuro (n.d.) and Udelv (n.d.) are two startups active in this space. See Table 2.2 for a
summary of commercial activity in Automated Parcel Delivery; note that informa-
tion in the table is not exhaustive.
TABLE 2.2
A Summary of Commercial Activity in Automated Parcel Delivery (Level 4 Fleet Operations; Not Exhaustive)

All of the activity below is focused on street (non-highway) operations.
• Nuro: last-mile grocery delivery; custom vehicle; public road testing 2019; partnership with Kroger.
• Udelv: last-mile grocery delivery; retrofitted production passenger van; public road testing 2019; partnership with Walmart.
• Ford: last-mile pizza delivery; retrofitted production passenger car; public road testing pre-2019; Miami, Florida (2018) and Washington, DC.
• Daimler: last-mile delivery; retrofitted production passenger car; public road testing pre-2019; Stuttgart (2018).
• Toyota: last-mile delivery, multiple functions, reconfigurable; custom vehicle (e-Palette multi-purpose platform); limited deployment 2020; US and Japan.
• Startups and OEMs: last-mile delivery; custom or production vehicles; limited deployment estimated 2020 in the least complex environments.
• Startups and OEMs: last-mile delivery; custom or production vehicles; deployment in diverse environments estimated 2025.
The first launch of robo-taxi services (true driverless, without a safety driver
or “minder”) is expected in the 2020 timeframe from Waymo through its flagship
product, Waymo One (Waymo, n.d.), and from General Motors through their Cruise
Automation group (Chokshi, 2018; Colias, 2018). Initial services will be highly lim-
ited geographically and expand based on customer demand and increasing technol-
ogy capability. Toyota and Uber plan to begin pilot-scale deployments on the Uber
ride-sharing network at an unspecified time. BMW will be fielding a fleet in late
2021 to test robo-taxi functionality in large-scale trials conducted in defined urban
environments (BMW, 2019). Looking longer term, Volvo Cars and Baidu have part-
nered to develop and mass manufacture “fully electric and autonomous vehicles,”
estimating 14.5 million units to be sold in China by 2040 for robo-taxi services.
These automated ride hailing services are currently focused only on street opera-
tions. While this may serve some markets (such as New York City) very well, many
deployment areas will see customer demand for the automated vehicles to also oper-
ate on the local highways to optimize the trip time. This has significant implications
for the technical approach but is not likely to occur within the first few years of
initial deployment.
In addition, high-capacity mobility services will be operating in public transit. For example, in 2019 five automated buses were slated to begin operation
across a 14-mile route including the Forth Road Bridge in Edinburgh, Scotland, on
public roads and exclusive busways (BBC, 2018). Initially, true driverless operations
will only occur within the bus depot for movements such as parking and moving to
the fueling station and bus wash. On road, the presence of a driver will be required
“as a back-up for passenger safety and to comply with UK legislation.”
See Table 2.3 for a summary of commercial activity in robo-taxi deployment, all
of which is focused on Level 4; note that information in the table is not exhaustive.
TABLE 2.3
A Summary of Commercial Activity in Robo-Taxi (Level 4 Fleet Operations; Not Exhaustive)

• May Mobility: street. Custom vehicle; public road testing 2019. Detroit, now in commercial operation (with on-board “minders”).
• Transport Scotland: highway and street. Modified production bus; public road testing 2019; deployment 2019 on a 12-mile road segment, Edinburgh (with safety driver as required by UK law).
• Waymo: street. Retrofitted production passenger car; public road testing pre-2019; deployment 2020 (est.). Waymo One, Chandler, AZ, announced for 2019.
• Mercedes, Bosch: street. Retrofitted production passenger car; public road testing pre-2019. San Jose, CA.
• Alliance: street. Retrofitted production passenger car; public road testing pre-2019; deployment 2020. 2020 Summer Olympics, Japan.
• Toyota: street. Retrofitted production passenger car; deployment 2020. 2020 Summer Olympics, Japan.
• Hyundai: street. Retrofitted production passenger car; public road testing pre-2019; 2021 deployment announced.
• Ford: street. Retrofitted production passenger car; public road testing pre-2019; 2021 deployment announced, USA. 100 self-driving vehicles on public roads in Miami, FL; Pittsburgh, PA; and Dearborn, MI.
• BMW: street. Retrofitted production passenger car; public road testing pre-2019; deployment 2021. China.
• Volkswagen: street. Retrofitted production passenger car; 2021 deployment announced.
• Multiple developers: street. Production robo-taxi; deployment 2021 (est.); operations confined to streets.
• Multiple developers: highway and street. Production robo-taxi; deployment 2023 (est.); natural evolution from street-only services to highways, depending on local road networks.
• BMW: street. Production robo-taxi; deployment 2023 announced.
• Mercedes: street. Production robo-taxi; deployment 2023 announced.
• Aptiv/nuTonomy: street. Retrofitted production passenger car; public road testing pre-2019. Singapore.
• GM/Cruise: street. Retrofitted production passenger car; public road testing pre-2019; 2019 deployment plan deferred to an unspecified later date, USA.
• Lyft, Aptiv: street. Retrofitted production passenger car; public road testing pre-2019. Las Vegas, Nevada.
• Pony.ai: street. Retrofitted production passenger car. China; no specific information available.
• Toyota, Uber: street. Production robo-taxi; no specific information available.
• Voyage: street. Retrofitted production passenger car; public road testing pre-2019. The Villages (Florida retirement community).
• WeRide.ai: street. Retrofitted production passenger car. Remote driving via 5G network, China; no specific information available.
Several automakers offer Level 2 lane changes executed only with the driver’s permission (examples are BMW, Mercedes, Volvo Cars, and Nissan). For instance,
BMW’s Automatic Lane Change on the 2019 X5 (BMW, 2018) can be used on high-
ways when the Lane-Keeping Assistant (lane centering) is active. A lane change is
activated by the driver holding the turn signal in the desired direction; if the sensors
detect that there is space in the adjacent lane and that no other vehicle is approaching
at high speed, the lane change is completed.
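The gating conditions just described can be sketched as a simple predicate; the conditions are paraphrased from the text, while the two numeric thresholds are illustrative assumptions, not BMW parameters.

```python
# Sketch of driver-triggered lane change gating (thresholds illustrative).
def lane_change_permitted(lane_centering_active: bool,
                          turn_signal_held: bool,
                          adjacent_gap_m: float,
                          closing_speed_mps: float) -> bool:
    MIN_GAP_M = 30.0        # free space required in the target lane
    MAX_CLOSING_MPS = 3.0   # no vehicle approaching quickly from behind
    return (lane_centering_active
            and turn_signal_held
            and adjacent_gap_m >= MIN_GAP_M
            and closing_speed_mps <= MAX_CLOSING_MPS)
```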
Driver monitoring is playing an increasingly important role. The 2019 Audi A8 provides lane centering and requires hands on the steering wheel. An escalating series of warnings is triggered if the vehicle detects an inattentive driver while the cruise control is activated; these include audible, visual, and physical interventions (brake taps). Lack of a driver response is interpreted as a driver emergency: the car slows to a complete stop in-lane with the hazard lights on and initiates an emergency call.
True hands-off production systems were pioneered by General Motors (n.d.),
which introduced Super Cruise™ on the model year 2018 Cadillac CT6. The ODD
of this SAE Level 2 system restricts use to limited-access divided highways based
on detailed mapping specifically created for this product. GM speaks of the driver
and Super Cruise™ acting as “partners in driving.” An infrared camera mounted on
the steering column enables head tracking to assess the driver’s attentiveness to the
road. If driver inattention is detected, visual alerts and haptic seat signals are pro-
duced. In the absence of proper driver response, the vehicle will achieve the MRC
of slowing down, stopping in the highway, and activating hazard flashers, with the
vehicle contacting Onstar Emergency Services (part of the GM telematics system)
(Cadillac, n.d.). BMW offers a hands-off Traffic Jam Assistant (Hyatt, 2018) as long
as the driver’s attention is clearly on the road ahead (using driver monitoring optical
cameras placed at the top of the instrument cluster); the system operates on limited-
access highways and surface streets at speeds less than 37 mph. The model year 2020
Nissan Skyline, to be available in the Japanese market in late 2019, will offer the
ProPilot 2.0 (Nissan Motor Company, 2019) system capable of “navigated” highway
driving with hands-off multi-lane driving capabilities. The navigation concept refers
to on-ramp to off-ramp highway driving, engaging with the vehicle’s navigation sys-
tem to help maneuver the car according to a predefined route on designated road-
ways. On the route set by the driver, the system assists with passing, lane diversions,
and lane exiting. Here, as in the other Level 2 cases, a monitoring system in the cabin
continually confirms that the driver’s attention remains on the road.
Tesla has stated (somewhat confusingly) that by the end of 2019 its vehicles will be able to operate without any driver control inputs on highways and streets but will still require the driver to pay attention to the road. Sometime during 2020, the company asserts, “the system will be significantly robust to permit users to not pay attention while using it” (Paukert & Hyatt, 2019), which implies a Level 3 or higher system. Similarly, Audi, BMW, Mercedes-Benz, and Volvo have stated they will offer
is the 2021 BMW Vision iNEXT (BMW Group, 2019), which will enable drivers to
“delegate the task of driving to the car for extended periods of time when driving on
the highway at speeds up to 130 km/h (81 mph).”
As product offerings enter Level 3, vehicle manufacturers are adding a new
component: pre-approval of roads to enable automated functionality. As noted
above, Cadillac only allows use of Super Cruise™ on limited-access divided high-
ways. Level 3 systems now being discussed will likely require specific roads (or road
segments) to be approved; in some cases this will be dynamic based on the prevail-
ing weather and traffic conditions. For example, Volvo Cars’ model year 2021 XC90
will offer a Highway Pilot system, which would “allow you to eat, sleep, work, watch
a movie, relax, do whatever” on “roads that the system was confident it could safely
navigate” (Shilling, 2018).
Initial mass-market Level 4 systems, in which the ADS is fully responsible for the driving task within the ODD, will most likely focus first on highway operations. Audi has announced that by 2021 it will introduce Level 4 “Highway Pilot”
for use on limited-access highways. Moreover, startup electric vehicle manufactur-
ers have typically asserted that their offerings will include high levels of automated
driving capability. For instance, Byton has announced that Level 4 automated driv-
ing is planned as part of their “K-Byte” sport utility vehicle (SUV) launch in 2021
(O’Kane, 2018).
In the past, a product introduction date was synonymous with availability of a
feature “everywhere.” As can be seen from the above discussion, timing is not every-
thing: it must be paired with the geography and road conditions in which a particu-
lar function is made available via the vehicle manufacturer’s cloud connection with
the vehicle.
2.5 REGULATORY CONSIDERATIONS
Regulations regarding vehicle equipment and usage of vehicles have been largely har-
monized with regard to human-driven vehicles (see also, this Handbook, Chapter 14).
The advent of automated vehicles presents a massive challenge to regulators, and
there are distinct differences in the processes to develop regulatory frameworks for
ADS in different parts of the world. The governing structure of countries such as
China and Singapore is considered “top down” such that officials can move quickly
to set up a regulatory structure. In the near term, Australia has an orderly process
underway within their existing vehicle certification framework to allow highly
automated cars, trucks, and robo-taxis to operate in the future via a safety assur-
ance system based on mandatory self-certification (Australia National Transport
Commission, 2018). Europe is allowing significant testing while enmeshed in a long-
term process working with international bodies to enable deployment of ADS. Based
on an informal poll conducted by the author, there is broad agreement among EU experts that Level 4 systems on mass-market vehicles will not be generally allowed
for sale or commercial operations in Europe before 2025. However, permits and
exemptions processes are expected to allow limited scale and limited time deploy-
ment of some fleet services, which could enable small-scale market launches earlier.
In the United States, three legal regimes define the ADS playing field: (1) the 1949
Geneva Convention on Road Traffic, (2) regulations enacted by NHTSA (mainly
focused on vehicle “equipment”) and the Federal Motor Carrier Safety Administration
(FMCSA) (mainly focused on commercial trucks), and (3) the vehicle codes of all
50 U.S. states (mainly focused on how vehicles are used, such as speed limits, mobile
phone usage, and seat belt usage). No current NHTSA regulations prohibit or impede
sale of automated vehicles. However, the situation in the United States for truckers
presents a special case, due to FMCSA regulations covering commercial truck driv-
ers that were written such that “the driver” (assumed to be human) had to perform
tasks such as a pre-trip inspection of the truck for any mechanical problems. Truck
ADS developers argued that these types of rules should not apply to an ADS truck.
In USDOT’s Federal Automated Vehicles Policy 3.0 (NHTSA, 2018), the Federal
government agreed, explicitly allowing Level 4 driverless trucks to operate on U.S.
highways. This pro-industry policy, based on self-certification by tech developers,
most likely puts the United States in the lead position worldwide regarding deploy-
ment of ADS trucks. Going forward, there may be a need to clarify the authority of
state regulators on this topic; some states are very supportive of driverless trucks
while others are being more cautious.
First-generation truck platooning represents a special regulatory case. Because it is a Level 1 system, with drivers fully attentive and able to adapt as needed to surrounding traffic, ADS-related regulations are not relevant. However, because trucks
follow each other more closely than would be safe without platooning technology,
state-level truck-following distance regulations come into play. Motivated by the fuel
economy benefits, truck ADS tech developers and early adopter fleets have worked
extensively to explain the platooning safety case to state officials. This has resulted
in full allowance of commercial truck platooning in 27 U.S. states (Boyd, 2019), with
ongoing efforts to amend regulations in many others.
State-level laws in the United States, written before automated driving was envi-
sioned, need to be updated to adapt to ADS operations. For instance, many state
regulations consider a vehicle in an active roadway with no human inside to be an
abandoned vehicle, which is prohibited—making a robo-taxi without a rider illegal
(National Academies of Sciences, Engineering, and Medicine, 2018)! At the local
level, the role of law enforcement and emergency response is important in terms of
education and developing appropriate practices and norms. For instance, emergency
responders need to have confidence that an ADS vehicle involved in a crash will
not attempt to start driving when their personnel are nearby (Waymo, 2017). These
issues are generally seen as tractable but need to be addressed for full deployment
of ADS.
Ultimately, widescale deployment of ADS will hinge on customer trust. As Ford puts it, “We don’t believe that the central challenge in the
development of self-driving vehicles is the technology. It’s trust. Trust in the safety, reli-
ability, and experience that the technology will enable” (Ford Motor Company, 2018).
Traditional human factors investigations apply strongly to Automation Levels 1–3,
whereas the emphasis in Level 4–5 systems shifts more toward trust and user experi-
ence. Level 1 systems have been available for many years in the form of Adaptive
Cruise Control. Truck platooning is a new application that presents interesting issues
for follower drivers in particular. In Level 1, the job of such drivers is to cede lon-
gitudinal control to the system and retain responsibility for steering, monitoring the
road environment, and adjusting to other traffic via lane changes or dissolving the
platoon. Will the driver find the follower role to be stressful, monotonous, or both?
How will the driver anticipate events ahead? Initial research results based on a field
operational test of truck platooning recently completed in Germany found that driv-
ers who were new to the idea of platooning were initially skeptical but very early in
the trial came to accept the system, with some preferring it over regular driving due
to the “team driving” feeling and high trust in the system. Furthermore, physiologi-
cal studies showed platoon driving to be no more stressful or monotonous than regu-
lar truck driving (Hochschule Fresenius, DB Schenker, & MAN Truck & Bus, n.d.).
As deployment of first-generation platooning systems proceeds, additional research
would be valuable to assess experiences of platoon drivers as well as those sharing
the roads with platoons.
Vehicles equipped for Level 2 and 3 automation are interesting for human fac-
tors investigations, because the safety case involves both the ADS and the human
driver. Can the driver be trusted to perform their assigned duties, or should they be
monitored (see e.g., this Handbook, Chapters 8, 11)? Product decisions about driver
monitoring relate to personal responsibility. Initially, the Tesla Autopilot relied on
drivers to monitor this system; now, hands-on-wheel detection is installed but there
is no attention monitoring. For other automakers, there is a strong trend to implement
driver monitoring in some form. Avoiding mode confusion between Level 2 and
Level 3 will be of top importance, particularly in a vehicle that may provide Level
3 capability on the highway but Level 2 support after exiting the highway (see also,
this Handbook Chapters 7, 15, 21). Or, given that some automakers will enable spe-
cific sections of highway for Level 3 support, the user must be aware that the vehicle
has proceeded out of the “Level 3 approved section” and is now operating in Level 2
such that the driver’s constant attention is required.
An ADS delivering “driverless” capability completely changes the human factors paradigm. For robo-taxis, the focus shifts from understanding driver performance to, possibly, monitoring the vehicle interior to ensure the service is operating properly and not being abused. Trust will be a key competitive discriminator: service quality will hinge on providing information from the vehicle (audible, visual, haptic) that reassures
riders the ADS knows what it is doing. Additionally, ADS will distinguish them-
selves competitively by having enough “street smarts” to understand complex street-
side dynamics to efficiently pick up and drop off passengers (Figure 2.4). Customer
service must encompass situations ranging from informing passengers during a mal-
function or crash event to more benign aspects such as allowing passengers to make
an unplanned stop.
FIGURE 2.4 Pickup and drop-off points may be challenging for robo-taxis. (Image source:
iStock. Photo credit: Anouchka. Lisbon, Portugal - May 2, 2017: People waiting in line to get
a taxi in Lisbon Portela Airport, Portugal.).
REFERENCES
Abdulkhaleq, A., Wagner, S., Lammering, D., Röder, J., Balbierer, N., Ramsauer, L., …
Beohmert, H. (2017). A systematic approach based on STPA for developing a
dependable architecture for fully automated driving vehicles. Procedia Engineering,
179, 51.
Alistair, C. (2019). These 7 Companies Are Making the Self-Driving Truck a Reality. Retrieved
from www.gearbrain.com/autonomous-truck-startup-companies-2587305809.html
Aurora. (n.d.). The New Era of Mobility. Retrieved from https://aurora.tech/vssa/index.html
Australia National Transport Commission. (2018). Safety Assurance for Automated Driving
Systems: Decision Regulation Impact Statement. Retrieved from www.ntc.gov.au
Automotive Information Sharing and Analysis Center. (n.d.). Best Practices. Retrieved from
www.automotiveisac.com/best-practices/
BBC. (2018). First Driverless Edinburgh to Fife Bus Trial Announced. Retrieved from www.bbc.com/news/uk-scotland-edinburgh-east-fife-46309121
Billington, J. (2018). The Prometheus Project: The Story Behind One of AV’s Greatest
Developments. Retrieved from www.autonomousvehicleinternational.com/features/
the-prometheus-project.html
Bishop, R. (2005). Intelligent Vehicle Technology and Trends. Norwood, MA: Artech
House.
Bishop, R. (2019). The Three Streams of Truck Platooning Deployment. Retrieved from https://
medium.com/@richard_32638/the-three-streams-of-truck-platooning- development-
a71edbb8c12a
Blanco, S. (2019). New Automotive Radars Take Tech from a Blip to a Boom. Retrieved from
www.sae.org/news/2019/03/new-av-radar-sensors
BMW Group. (2018). The All-New 2019 BMW X5 Sports Activity Vehicle. Retrieved from www.
press.bmwgroup.com/usa/article/detail/T0281821EN_US/the-all-new-2019-bmw-
x5-sports-activity-vehicle
BMW Group. (2019). The New BMW Group High Performance D3 platform. Data-Driven
Development for Autonomous Driving. Retrieved from www.press.bmwgroup.
com/global/article/detail/T0293764EN/the-new-bmw-group-high-performance-d3-
platform-data-driven-development-for-autonomous-driving
Boyd, S. (2019). Peloton Technology: Connected Automation for Freight Safety and
Efficiency. Retrieved from www.automatedvehiclessymposium.org/home
Brooke, L. (2019). Autonomy for the masses. Autonomous Vehicle Engineering, 2019(March),
8–12.
Brown, I. (1986). Functional requirements of driving. Paper Presented at the Berzelius
Symposium on Cars and Causalities. Stockholm, Sweden.
Cadillac. (2017). V2V Safety Technology Now Standard on Cadillac CTS Sedans. Retrieved
from https://media.cadillac.com/media/us/en/cadillac/news.detail.html/content/Pages/
news/us/en/2017/mar/0309-v2v.html
Cadillac. (n.d.). CT6 Super Cruise™ Convenience and Personalization Guide. Retrieved from
www.cadillac.com/content/dam/cadillac/na/us/english/index/ownership/technology/
supercruise/pdfs/2018-cad-ct6-supercruise-personalization.pdf
Chokshi, N. (2018). Mary Barra Says G.M. Is “On Track” to Roll Out Autonomous Vehicles
Next Year. Retrieved from www.nytimes.com/2018/11/01/business/dealbook/barra-
gm-autonomous-vehicles.html
Colias, M. (2018). GM President Dan Ammann to Take New Role as Head of Autonomous-
Car Business. Retrieved from www.wsj.com/articles/gm-president-dan-ammann-to-
take-new-role-as-head-of-autonomous-car-business-1543514469
Daimler. (n.d.). The Vision Van. Intelligent Delivery Vehicle of the Future. Retrieved from
www.daimler.com/innovation/case/electric/mercedes-benz-vision-van-2.html
Defense Advanced Research and Projects Agency (DARPA). (n.d.). The Grand Challenge.
Retrieved from www.darpa.mil/about-us/timeline/-grand-challenge-for-autonomous-
vehicles
Eckert, V. (2019). German Carmakers to Invest 60 Billion Euros in Electric Cars and
Automation: VDA. Retrieved from www.reuters.com/article/us-autoshow-geneva-
germany-vda/german-carmakers-to-invest-60-billion-euros-in-electric-cars-and-
automation-vda-idUSKCN1QJ0AU
Einride. (n.d.). World Premiere: First Cab-Less and Autonomous, Fully Electric Truck in
Commercial Operations on Public Road. Retrieved from www.einride.tech/news/
world-premiere-first-cab-less-and-autonomous-fully-electric-truck-in-commercial-
operations-on-public-road/
European Commission. (n.d.). Transport Research and Innovation Monitoring and
Information System (TRIMIS), Chauffeur II. Retrieved from https://trimis.ec.europa.
eu/project/promote-chauffeur-ii
Fitzgerald, M. (2019). Ford Claims It Will Have 100 Self-Driving Cars on the Road by the
End of the Year. Retrieved from www.cnbc.com/2019/04/25/ford-aims-for-100-self-
driving-cars-on-the-road-by-the-end-of-2019.html
Ford Motor Company. (2018). A Matter of Trust: Ford’s Approach to Developing Self-Driving
Vehicles. Retrieved from https://media.ford.com/content/dam/fordmedia/pdf/Ford_
AV_LLC_FINAL_HR_2.pdf
General Motors. (n.d.). Giving You the Freedom to Go Hands Free. Retrieved from www.
cadillac.com/world-of-cadillac/innovation/super-cruise
Griggs, T. & Wakabayashi, D. (2018). How a self-driving Uber killed a pedestrian in Arizona.
The New York Times. Retrieved from www.nytimes.com/interactive/2018/03/20/us/
self-driving-uber-pedestrian-killed.html
Hochschule Fresenius, DB Schenker, & MAN Truck & Bus. (n.d.). EDDI Electronic Drawbar –
Digital Innovation: Project Report – Presentation of the Results. Retrieved from www.
deutschebahn.com/resource/blob/4136372/d08df4c3b97b7f8794f91e47e86b71a3/
Platooning_EDDI_Project-report_10052019-data.pdf
Hyatt, K. (2018). BMW’s Extended Traffic Jam Assistant System Wants to Stare into Your
Eyes. Retrieved from www.cnet.com/roadshow/news/bmw-driver-monitor-camera-x5/
Insurance Institute for Highway Safety/Highway Loss Data Institute. (2019). Real-
World Benefits of Crash Avoidance Technologies. Retrieved from www.iihs.org/
media/259e5bbd-f859-42a7-bd54-3888f7a2d3ef/e9boUQ/Topics/ADVANCED%20
DRIVER%20ASSISTANCE/IIHS-real-world-CA-benefits.pdf
International Organization for Standardization. (2011). Road Vehicles – Functional Safety
(ISO 26262). Retrieved from www.iso.org/standard/43464.html
International Organization for Standardization. (2019). Road Vehicles—Safety of the Intended
Functionality (ISO/PAS 21448:2019). Retrieved from www.iso.org/standard/70939.
html
Japan Automotive Research Institute. (n.d.). Functional Safety (ISO26262). Retrieved from
www.jari.or.jp/tabid/223/Default.aspx
Kunkle, F. (2019). Auto Industry Says Cybersecurity Is a Significant Concern as Cars
Become More Automated. Retrieved from www.washingtonpost.com/transporta-
tion/2019/04/30/auto-industry-says-cybersecurity-is-significant-concern-cars-become-
more-automated/?wpisrc=nl_sb_smartbrief
Lee, T. B. (2019). Feds Investigate Why a Tesla Crashed into a Truck Friday, Killing Driver.
Retrieved from https://arstechnica.com/cars/2019/03/feds-investigating-deadly-friday-
tesla-crash-in-florida/
Locomation. (n.d.). Locomation: Driving the Future of Autonomous Trucking. Retrieved
from https://locomation.ai/
Michon, J. A. (1985). A critical view of driver behavior models: What do we know, what
should we do? In L. Evans & R. Schwing (Eds.), Human Behavior and Traffic Safety
(pp. 485–520). New York: Plenum Press.
MyCarDoesWhat.org (n.d.). All about Today’s Car Safety Features. Retrieved from https://
mycardoeswhat.org/
National Academies of Sciences, Engineering, and Medicine. (2018). Implications of
Connected and Automated Driving Systems, Vol. 3: Legal Modification Prioritization
and Harmonization Analysis. Washington, DC: The National Academies Press.
doi:10.17226/25293. Retrieved from www.trb.org/NCHRP/Blurbs/178300.aspx
National Automated Highway System Consortium. (1998). Technical Feasibility
Demonstration Final Report. Retrieved from https://path.berkeley.edu/sites/default/
files/part_1_ahs-demo-97.pdf
National Highway Traffic Safety Administration. (2016). Cybersecurity Best Practices for
Modern Vehicles (DOT HS 812 333). Washington, DC: National Highway Traffic
Safety Administration. Retrieved from www.nhtsa.gov/staticfiles/nvs/pdf/812333_
CybersecurityForModernVehicles.pdf
National Highway Traffic Safety Administration (NHTSA). (2017). Automated Driving
Systems 2.0: A Vision for Safety. Retrieved from www.nhtsa.gov/sites/nhtsa.dot.gov/
files/documents/13069a-ads2.0_090617_v9a_tag.pdf
National Highway Traffic Safety Administration. (2018). Preparing for the Future
of Transportation: Automated Vehicles 3.0. Retrieved from www.nhtsa.gov/
vehicle-manufacturers/automated-driving-systems
Nissan Motor Company. (2019). Nissan to Equip New Skyline with World’s First Next Gen
Driver Assistance System. Retrieved from https://newsroom.nissan-global.com/releases/
nissan-to-equip-new-skyline-with-worlds-first-next-gen-driver-assistance-system
Nissan Motor Company. (n.d.). Seamless Autonomous Mobility: The Ultimate Nissan
Intelligent Integration. Retrieved from www.nissan-global.com/EN/TECHNOLOGY/
OVERVIEW/sam.html
Nowakowski, C., Shladover, S. E., & Chan, C.Y. (2016). Determining the readiness of
automated driving systems for public operation: Development of behavioral compe-
tency requirements. Transportation Research Record: Journal of the Transportation
Research Board, 2559, 65–72.
Nuro. (n.d.). https://nuro.ai/
nuTonomy. (n.d.). nuTonomy to Test Its Self-Driving Cars on Specific Public Roads in Boston.
Retrieved from www.nutonomy.com/press-release/boston-trial-launch/
O’Kane, S. (2018). Byton Teases a Fully Autonomous Electric Sedan Due in 2021. Retrieved
from www.theverge.com/2018/6/12/17449996/byton-sedan-concept-k-byte-release-date
Ohnsman, A. (2019). Waymo Tops Self-Driving Car Disengagement Stats as GM Cruise Gains
and Tesla Is AWOL. Retrieved from www.forbes.com/sites/alanohnsman/2019/02/13/
waymo-tops-self-driving-car-disengagement-stats-as-gm-cruise-gains-and-tesla-is-
awol/#6a6f623931ec
Paukert, C. & Hyatt, K. (2019). Tesla Autonomy Investor Day: What We Learned,
What We Can Look Forward To. Retrieved from www.cnet.com/roadshow/news/
teslas-autonomy-investor-day-recap/
Peloton Technology. (n.d.). https://peloton-tech.com/
Perry, F. (2017). Overview of DSRC Messages and Performance Requirements. Retrieved
from www.transportation.institute.ufl.edu/wp-content/uploads/2017/04/HNTB-SAE-
Standards.pdf
SAE International. (2018). Taxonomy and Definitions for Terms Related to Driving Automation
Systems for On-Road Motor Vehicles (J3016_201806). Warrendale, PA: Society for
Automotive Engineers. Retrieved from www.sae.org/standards/content/j3016_201806/
Sage, A. & Lienert, P. (2017). GM Plans Large-Scale Launch of Self-Driving Cars in U.S.
Cities in 2019. Retrieved from www.reuters.com/article/us-gm-autonomous/gm-plans-
large-scale-launch-of-self-driving-cars-in-u-s-cities-in-2019-idUSKBN1DU2H0
Scania Group. (n.d.). Autonomous Transport Solutions. Retrieved from www.scania.com/
group/en/autonomous-transport-solutions/
Shilling, E. (2018). Volvo Plans Autonomous XC90 You Can “Eat, Sleep, Do Whatever” in
By 2021. Retrieved from https://jalopnik.com/volvo-plans-autonomous-xc90-you-can-
eat-sleep-do-what-1826997003
Shladover, S. E. (2012). Recent International Activity in Cooperative Vehicle–Highway
Automation Systems (FHWA-HRT-12–033). Washington, DC: Federal Highway
Administration.
Somerville, H. (2019). Uber’s Self-Driving Unit Valued at $7.25 Billion in New Investment. Retrieved from www.reuters.com/article/us-uber-softbank-group-selfdriving/ubers-self-driving-unit-valued-at-7-25-billion-in-new-investment-idUSKCN1RV01P
Starsky Robotics. (n.d.). Voluntary Safety Self-Assessment. Retrieved from https://uploads-ssl.
webflow.com/599d39e79f59ae00017e2107/5c1be1791a7332594e0ff28b_Voluntary%20
Safety%20Self-Assessment_Starsky%20Robotics.pdf
Tesla. (n.d.). Autopilot Section. Retrieved from www.tesla.com/presskit
Toyota Corporation. (n.d.). e-Palette Concept. Retrieved from www.toyota-global.com/pages/
contents/innovation/intelligent_transport_systems/world_congress/2018copenhagen/
pdf/e-Palette_CONCEPT.pdf
TuSimple. (n.d.). Retrieved from www.tusimple.com/
UC Berkeley Institute of Transport Studies. (n.d.). National Automated Highway Systems
Consortium. Retrieved from https://path.berkeley.edu/research/connected-and-automated-
vehicles/national-automated-highway-systems-consortium
Udelv. (n.d.). Retrieved from www.udelv.com/
Voelcker, J. (2007). Autonomous vehicles complete DARPA urban challenge. IEEE
Spectrum. Retrieved from https://spectrum.ieee.org/transportation/advanced-cars/
autonomous-vehicles-complete-darpa-urban-challenge
Waymo. (2017). On the Road to Fully Self-Driving (Waymo Safety Report). Retrieved from
https://waymo.com/safety/
Waymo. (2019). An Update on Waymo Disengagements in California. Retrieved from
https://medium.com/waymo/an-update-on-waymo-disengagements-in-california-
d671fd31c3e2
Waymo. (n.d.). We’re Building the World’s Most Experienced Driver. Retrieved from https://
waymo.com/
Wired.com. (2017). Car Hacking. Retrieved from www.wired.com/tag/car-hacking/
3 Driver’s Mental Model
of Vehicle Automation
Bobbie Seppelt
Massachusetts Institute of Technology
Trent Victor
Volvo Cars Safety Centre, and Chalmers
University of Technology
CONTENTS
Key Points ................................................................................................................ 55
3.1 Importance and Relevance of Mental Models in Driving and Automation .... 56
3.2 Defining Mental Models ................................................................................. 57
3.3 Mental Models under Uncertainty................................................................... 58
3.4 General and Applied Mental Models ............................................................. 60
3.5 Measurement of General and Applied Mental Models .................................. 61
3.6 Supporting Accurate and Complete Mental Models....................................... 62
3.7 Conclusion ...................................................................................................... 63
References................................................................................................................. 63
KEY POINTS
• An operator’s expectations of system behavior are guided by the complete-
ness and correctness of his/her mental model.
• The accuracy and completeness of mental models depend on the effec-
tiveness of system interfaces and on the variety of situations drivers
encounter.
• Drivers may use biased, uncertain, or incomplete knowledge when operat-
ing automation. Good automated system design should strive to account for
and mitigate these known human biases and processing limitations.
• Both “general” and “applied” mental models affect reliance action. A
mismatch between a driver’s general and applied mental models can nega-
tively affect trust and acceptance of vehicle automation.
• Multiple qualitative and quantitative measures can be used to assess the
impact of mental models on driver’s reliance on automation.
• To support accurate and complete mental model development, drivers
need information on the purpose, process, and performance of automated
systems.
Mental models are internal representations that capture the relationships within and among components and processes (Moray, 1999). Mental models allow people
to account for and to predict the behavior of physical systems (Gentner & Stevens,
1983). According to Johnson-Laird (1983), mental models enable people to draw
inferences and make predictions, to decide on their actions, and to control system
operation. In allowing an operator to predict and explain system behavior, and
to recognize and remember relationships between system components and events
(Wilson & Rutherford, 1989), mental models provide a source to guide people’s
expectations (Wickens, 1984).
Further elaborating on this role of mental models for generating predictions,
the predictive processing framework (Clark, 2013) has emerged as a powerful new
approach to understanding and modeling mental models (Engström et al., 2018).
Within the predictive processing framework, a mental model can be seen as a
hierarchical generative model. The hierarchical generative model is embodied in
the brain to generate predictions, after learning over time how states and events in
the world, or one's own body, generate sensory input. Predictive processing suggests
that frequent exposure to reliable statistical regularities in the driving environment
will lead to improvement of generative model predictions and increasingly automa-
tized performance. Further, failures may be understood as the result of limited exposure to a system’s functional limitations, that is, as a generative model that is inappropriately tuned for such situations.
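One way to make this concrete is a minimal prediction-error formulation; the notation below is an illustrative sketch of the general idea, not taken from Clark (2013) or Engström et al. (2018).

```latex
% Illustrative sketch of prediction-error minimization. The generative
% model predicts sensory input \hat{s}_t from hidden state x_t; the
% residual e_t drives both fast state updates and slow model tuning.
\begin{align}
  e_t          &= s_t - \hat{s}_t(x_t)  &&\text{prediction error}\\
  x_{t+1}      &= x_t + K\,e_t          &&\text{fast state update, gain } K\\
  \theta_{t+1} &= \theta_t + \eta\,\nabla_{\theta}\log p_{\theta}(s_t \mid x_t)
               &&\text{slow tuning of model parameters}
\end{align}
```

In this reading, frequent exposure to reliable regularities shrinks the prediction error over time, whereas limitations the driver rarely experiences leave the model parameters untuned for those situations.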
Consequently, for the driver’s generative model to become better at predicting
automation, it needs feedback (sensory input) on statistical regularities. Automation
can be designed to provide transparent feedback on the state of the automation, and such feedback has been shown to improve performance (Bennett & Flach, 2011; Flemisch, Winner,
Bruder, & Bengler, 2014; Seppelt & Lee, 2007; 2019; Vicente & Rasmussen, 1992).
Dynamic human–machine interface (HMI) feedback can provide statistical regu-
larities of sensory input that the model can tune to. If designed to continuously
communicate to drivers the evolving relationship between system performance
and operating limits, HMIs can support the development of a generative model to
properly predict what the automation can and cannot do (see e.g., this Handbook,
Chapter 15). It is in understanding both the proximity to and type of system limita-
tion (e.g., sensor range, braking capability, functional velocity range, etc.) that driv-
ers are able to initiate proactive intervention in anticipation of automation failure
(Seppelt & Lee, 2019).
A mental model need not mirror a system’s technical design, and there is no single conceptual form that can be defined as accurate and complete. The merit of a mental model, therefore, is not based on its technical depth and specificity, but on its ability to enable a user to accurately (or satisfactorily) predict the behavior of a system.
The means of organizing knowledge into structured patterns affords rapid
and flexible processing of information that translates into rapid responses such
as the decision to rely on the automation when operating an automated system
(Rumelhart & Ortony, 1977). Mental models are constructed from system-pro-
vided information, the environment, and instructions (Norman, 1986). They
depend on and are updated from the feedback provided to operators (e.g., as from
proprioceptive feedback or in information displays). The accuracy of mental mod-
els depends on the effectiveness of system interfaces (Norman, 1988) and the
variety of situations encountered. Inaccuracies in mental models become more likely as automation reliability decreases (Kazi et al., 2004). Mental model accuracy
may be improved with feedback that informs the operator of the automation’s
behavior in a variety of situations (Stanton & Young, 2005). For operators who
interact with automation that in turn interacts with the environment, such as driv-
ers, it is important that their mental models of the automated system include an
understanding of the environment’s effect on its behavior. For judgments made
under uncertainty, such as the decision to rely on automated systems that control
all or part of a complex, dynamic process, two types of cognitive mechanisms—
intuition and reasoning—are at work (Kahneman & Frederick, 2002; Stanovich &
West, 2002). Intuition (System 1) is characterized by fast, automatic, effortless,
and associative operations—those similar to features of perceptual processes.
Reasoning (System 2) is characterized by slow, serial, effortful, deliberately
controlled, and often relatively flexible and rule-governed operations. System
1 generates impressions that factor into judgments while System 2 is involved
directly in all judgments, whether they originate from impressions or from delib-
erate reasoning.
Consequently, under uncertainty or under time pressure, mental models are subject to cognitive biases: systematic patterns of deviation from norm or rationality in judgment (Haselton, Nettle, & Andrews, 2005). Although there is a large variety of cognitive biases, several are known to be particularly important for mental models of automation.
Thus, it can be expected that drivers may use biased, uncertain, or incomplete knowl-
edge when operating automation. Good automated system designs should strive to
minimize the effect of these known human biases and limitations, and to measure
the accuracy of mental models.
FIGURE 3.1 Conceptual model of the influence of a driver’s mental model on automation reliance (figure components: driver, cognitive function, automated system).
For example, a driver may hold a “general” mental model that ACC does not work effectively in rainy conditions, but may not have a well-
developed “applied” mental model to understand the combination of road condi-
tions and rain densities that result in unreliable detection of a lead vehicle. As a
driver gains experience using an automated system in a variety of situations, s/he
develops his/her “applied” mental model, which is conceptually consistent with the
idea of “situation models” connecting bottom-up environmental input with top-down
knowledge structures (Durso, Rawson, & Girotto, 2007). As depicted in Figure 3.1,
both “general” and “applied” mental models affect reliance action (also see in this
Handbook Chapter 4). A correct “general” mental model is important in the con-
struction of an adequate “applied” model when experiencing concrete situations on
the road and for selecting appropriate actions (Cotter & Mogilka, 2007; Seppelt &
Lee, 2007). In turn, experience updates the “general” mental model. A mismatch
between a driver’s general mental model and experience can negatively affect trust
and acceptance (Lee & See, 2004). To help explain drivers’ reliance decisions and
vehicle automation use behaviors in the short- and long term, it is necessary to mea-
sure mental models.
TABLE 3.1
Example Measures of General and Applied Mental Models

General Mental Model Measures
• Questionnaires on purpose, process, and performance (PPP)
• Automation status ratings (i.e., mental model accuracy)
  – Automated system mode(s)
  – Proximity of current automated system state to its capabilities/limits
  – Expected system response and behavior in specific conditions
• Trust ratings

Applied Mental Model Measures
• Monitoring of the driving environment
  – Sampling rate and length to the forward roadway
  – Sampling rate and length to designated safety-critical areas (e.g., to intersections and crosswalks)
• Trust-related behaviors during use of the automation (e.g., hands hovering near the steering wheel)
• Presence of safety-related behaviors prior to tactical maneuvers or hazard responses (e.g., glances to rear-view and side mirrors, turn signal use, and over-the-shoulder glances)
• Secondary task use
• Level of skill loss: manual performance after a period of automated driving compared with manual task performance during the same period of time
• Response time to unexpected events/hazards or system errors/failures
• Use of automation within vs. outside its operational design domain (ODD)
The set of measures listed in Table 3.1 offers a starting point for how to assess gen-
eral and applied mental models of vehicle automation. Further research is required
to assess whether and to what extent the above measures provide practical insight
into driver’s reliance behavior for the diversity of automated technologies and their
real-world use conditions. Further knowledge on how behavior changes in the longer
term, after extended use of automated systems, as a function of the type and com-
plexity of a driver’s mental model, is also important.
Recent studies have examined how drivers respond to the limits and failures of automation (e.g., Victor et al., 2018). Based on current system design practices,
there seems to be a fundamental disconnect between driver’s general and applied
mental models. For example, in Victor et al. (2018), drivers were trained prior to
use on the limits of highly reliable (but not perfect) automation. However, in prac-
tice, they experienced reliable system operation until the final moment of the study.
Regardless of the amount of initial training on system limitations, 30% of drivers
crashed into the object they “knew” was outside the detection capabilities of the sys-
tem. Without reinforcement of the general mental model by the dynamic experience,
information decayed or was dominated by dynamic learned trust. Consistent with
previous findings on the development of mental models of vehicle automation rela-
tive to initial information (Beggiato, Pereira, Petzoldt, & Krems, 2015), this study
found that system limitations initially described but not experienced tend to disap-
pear from a driver’s mental model. Research to date on mental models of vehicle
automation indicate a need to support and/or train driver’s understanding of in situ
vehicle limitations and capabilities through dynamic HMI information (Seppelt &
Lee, 2019; this Handbook, Chapters 15, 16, 18), routine driver training, and/or using
intelligent tutoring systems.
3.7 CONCLUSION
This chapter described the importance and relevance of mental models in driving
and automation, defined how fundamental psychological mechanisms contribute to
their formation, provided a new framing of general and applied mental models and
how to measure them, and concluded with a review of recent research on how to
support accurate and complete mental model development. Future research needs to
examine relationships between mental models, trust, and both short- and long-term
acceptance across types and combinations of vehicle automation.
REFERENCES
Abraham, H., Seppelt, B., Mehler, B., & Reimer, B. (2017). What's in a name: Vehicle technology branding & consumer expectations for automation. Proceedings of the 9th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications. New York: ACM.
Beggiato, M. & Krems, J. F. (2013). The evolution of mental model, trust and acceptance of
adaptive cruise control in relation to initial information. Transportation Research Part
F: Traffic Psychology and Behaviour, 18, 47–57.
Beggiato, M., Pereira, M., Petzoldt, T., & Krems, J. (2015). Learning and development
of trust, acceptance and the mental model of ACC. A longitudinal on-road study.
Transportation Research Part F: Traffic Psychology and Behaviour, 35, 75–84.
Bennett, K. B., & Flach, J. M. (2011). Display and Interface Design: Subtle Science, Exact Art. Boca Raton, FL: CRC Press.
Bhana, H. (2010). Trust but verify. AeroSafety World, 5(5), 13–14.
Boer, E. R. & Hoedemaeker, M. (1998). Modeling driver behavior with different degrees of automation: A hierarchical decision framework of interacting mental models. In Proceedings of the XVIIth European Annual Conference on Human Decision Making and Manual Control, 14–16 December, Valenciennes, France.
Casner, S. M., Geven, R. W., Recker, M. P., & Schooler, J. W. (2014). The retention of manual
flying skills in the automated cockpit. Human Factors, 56(8), 1506–1516.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of
cognitive science. Behavioral and Brain Sciences, 36(3), 181–204. doi:10.1017/
S0140525X12000477
Cotter, S. & Mogilka, A. (2007). Methodologies for the assessment of ITS in terms of driver
appropriation processes over time (HUMANIST Project Deliverable 6 of Task Force E).
Dekker, S. W. A. & Woods, D. D. (2002). MABA-MABA or Abracadabra? Progress on
human-automation co-ordination. Cognition, Technology and Work, 4, 240–244.
Durso, F. T., Rawson, K. A., & Girotto, S. (2007). Comprehension and situation awareness.
In F. T. Durso, R. S. Nickerson, S. T. Dumais, S. Lewandowsky, & T. J. Perfect (Eds.),
Handbook of Applied Cognition (2nd ed., pp. 163–193). Chichester, UK: John Wiley &
Sons.
Engström, J., Bärgman, J., Nilsson, D., Seppelt, B., Markkula, G., Piccinini, G. B., & Victor,
T. (2018). Great expectations: A predictive processing account of automobile driving.
Theoretical Issues in Ergonomics Science, 19(2), 156–194.
Flemisch, F., Winner, H., Bruder, R., & Bengler, K. (2014). Cooperative guidance, control and
automation. In H. Winner, S. Hakuli, F. Lotz, & C. Singer (Eds.), Handbook of Driver
Assistance Systems: Basic Information, Components and Systems for Active Safety
and Comfort (pp. 1471–1481). Berlin: Springer.
Gentner, D. & Stevens, A. L. (1983). Mental Models. Hillsdale, NJ: Lawrence Erlbaum
Associates.
Haselton, M. G., Nettle, D., & Andrews, P. W. (2005). The evolution of cognitive bias. In
D. M. Buss (Ed.), The Handbook of Evolutionary Psychology (pp. 724–746). Hoboken,
NJ: John Wiley & Sons.
Hilbert, M. (2012). Toward a synthesis of cognitive biases: How noisy information processing
can bias human decision-making. Psychological Bulletin, 138(2), 211–237.
Johnson-Laird, P. (1983). Mental Models. Cambridge, MA: Harvard University Press.
Kahneman, D. & Frederick, S. (2002). Representativeness revisited: Attribute substitution in
intuitive judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and
Biases (pp. 49–81). Cambridge: Cambridge University Press.
Kazi, T. A., Stanton, N. A., & Harrison, D. (2004). The interaction between drivers’ conceptual
models of automatic-cruise-control and level of trust in the system. 3rd International
Conference on Traffic & Transport Psychology (ICTTP). Nottingham, UK: ICTTP.
Kazi, T., Stanton, N. A., Walker, G. H., & Young, M. S. (2007). Designer driving: drivers’
conceptual models and level of trust in adaptive cruise control.
Kempton, W. (1986). Two theories of home heat control. Cognitive Science, 10(1), 75–90.
Larsson, A. F. L. (2012). Driver usage and understanding of adaptive cruise control. Applied
Ergonomics, 43, 501–506.
Lee, J. D. (2018). Perspectives on automotive automation and autonomy. Journal of Cognitive
Engineering and Decision Making, 12(1), 53–57.
Lee, J. D. & Moray, N. (1994). Trust, self-confidence, and operators’ adaptation to automa-
tion. International Journal of Human-Computer Studies, 40, 153–184.
Lee, J. D. & See, K. A. (2004). Trust in technology: Designing for appropriate reliance.
Human Factors, 46(1), 50–80.
Itoh, M. (2012). Toward overtrust-free advanced driver assistance systems. Cognition,
Technology & Work, 14(1), 51–60.
Moray, N. (1986). Monitoring behavior and supervisory control. In K. R. Boff, L. Kaufman,
and J. P. Thomas (Eds.), Handbook of Perception and Human Performance (Vol. 2,
Chapter 40). New York: Wiley.
Moray, N. (1999). Mental models in theory and practice. In D. Gopher & A. Koriat (Eds.), Attention and Performance XVII: Cognitive Regulation of Performance: Interaction of Theory and Application. Cambridge, MA: MIT Press.
Norman, D. A. (1983). Some observations on mental models. In D. Gentner & A. L. Stevens
(Eds.), Mental Models (pp. 7–14). Hillsdale, NJ: Lawrence Erlbaum Associates.
Norman, D. A. (1986). Cognitive engineering. In D. A. Norman & S. W. Draper (Eds.),
User Centered System Design: New Perspectives on Human-Computer Interaction
(pp. 31–61). Hillsdale, NJ: Lawrence Erlbaum Associates.
Norman, D. A. (1988). The Psychology of Everyday Things. New York: Basic Books.
Norman, D. A. (1990). The ‘problem’ with automation: Inappropriate feedback and interaction, not ‘over-automation’. Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, 327(1241), 585–593.
Onnasch, L., Wickens, C. D., Li, H., & Manzey, D. (2014). Human performance consequences
of stages and levels of automation: An integrated meta-analysis. Human Factors, 56(3),
476–488.
Oswald, M. E. & Grosjean, S. (2004). Confirmation bias. In R. F. Pohl (Ed.), Cognitive
Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement, and Memory
(pp. 79–96). Hove, UK: Psychology Press.
Parasuraman, R., Mouloua, M., & Molloy, R. (1994). Monitoring automation failures
in human-machine systems. In M. Mouloua and R. Parasuraman (Eds.), Human
Performance in Automated Systems: Current Research and Trends (pp. 45–49).
Hillsdale, NJ: Lawrence Erlbaum Associates.
Payne, S. J. (1991). A descriptive study of mental models. Behaviour & Information Technology, 10(1), 3–21.
Revell, K. M. & Stanton, N. A. (2017). Mental Models: Design of User Interaction and
Interfaces for Domestic Energy Systems. Boca Raton, FL: CRC Press.
Rouse, W. B. & Morris, N. M. (1986). On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin, 100(3), 349–363.
Rumelhart, D. E. & Norman, D. A. (1981). Analogical processes in learning. In J. R. Andersen
(Ed.), Cognitive Skills and Their Acquisition (pp. 335–359). New York: Psychology
Press.
Rumelhart, D. E. & Ortony, A. (1977). The representation of knowledge in memory. In R. C. Anderson & R. J. Spiro (Eds.), Schooling and the Acquisition of Knowledge (pp. 99–135). Hillsdale, NJ: Lawrence Erlbaum Associates.
SAE International. (2018). Taxonomy and Definitions for Terms Related to Driving
Automation Systems for On-Road Motor Vehicles (SAE Standard J3016). Warrendale,
PA: Society of Automotive Engineers.
Sarter, N. B. & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experi-
mental study of pilots’ model and awareness of the flight management system. The
International Journal of Aviation Psychology, 4(1), 1–28.
Sarter, N. B., Woods, D. D., & Billings, C. E. (1997). Automation surprises. In G. Salvendy
(Ed.), Handbook of Human Factors & Ergonomics, Second Edition. Hoboken, NJ:
Wiley.
Seppelt, B. D. (2009). Supporting Operator Reliance on Automation through Continuous
Feedback. Unpublished PhD dissertation, The University of Iowa, Iowa City.
Seppelt, B. D. & Lee, J. D. (2007). Making adaptive cruise control (ACC) limits visible.
International Journal of Human-Computer Studies, 65(3), 192–205.
Seppelt, B. D. & Lee, J. D. (2019). Keeping the driver in the loop: Enhanced feedback to sup-
port appropriate use of imperfect vehicle control automation. International Journal of
Human-Computer Studies, 125, 66–80.
Seppelt, B. D., Reimer, B., Angell L., & Seaman, S. (2017). Considering the human across
levels of automation: Implications for reliance. Proceedings of the Driving Assessment
Conference: 9th International Symposium on Human Factors in Driver Assessment,
Training, and Vehicle Design. Iowa City, IA: University of Iowa, Public Policy Center.
Shiffrin, R. M. & Schneider, W. (1977). Controlled and automatic human information process-
ing: II. Perceptual learning, automatic attending and a general theory. Psychological
Review, 84(2), 127–190.
Stanovich, K. E. & West, R. F. (2002). Individual differences in reasoning: Implications for the rationality debate. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and Biases (pp. 421–440). Cambridge: Cambridge University Press.
Stanton, N. A. & Young, M. S. (2005). Driver behavior with adaptive cruise control. Ergonomics, 48(10), 1294–1313.
Summala, H. (2007). Towards understanding motivational and emotional factors in driver behaviour: Comfort through satisficing. In P. C. Cacciabue (Ed.), Modelling Driver Behaviour in Automotive Environments. London: Springer.
Tversky, A. & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases.
Science, 185(4157), 1124–1131.
Vicente, K. J. & Rasmussen, J. (1992). Ecological interface design: Theoretical foundations. IEEE Transactions on Systems, Man, and Cybernetics, SMC-22(4), 589–606.
Victor, T. W., Tivesten, E., Gustavsson, P., Johansson, J., Sangberg, F., & Ljung Aust,
M. (2018). Automation expectation mismatch: Incorrect prediction despite eyes on
threat and hands on wheel. Human Factors, 60(8), 1095–1116.
Wickens, C. D. (1984). Engineering Psychology and Human Performance. Columbus, OH:
Merrill.
Wickens, C. D., Li, H., Santamaria, A., Sebok, A., & Sarter, N. B. (2010). Stages and lev-
els of automation: An integrated meta-analysis. Proceedings of the Human Factors
and Ergonomics Society Annual Meeting (pp. 389–393). Los Angeles, CA: Sage
Publications.
Wiener, E. L. (1989). Human Factors of Advanced Technology ("Glass Cockpit")
Transport Aircraft (NASA Contractor Report 177528). Mountain View, CA: NASA
Ames Research Center.
Wilson, J. R. & Rutherford, A. (1989). Mental models: Theory and application in human fac-
tors. Human Factors, 31, 617–634.
Zhang, Y., Lewis, M., Pellon, M., & Coleman, P. (2007). A preliminary research on modeling cognitive agents for social environments in multi-agent systems. AAAI Fall Symposium: Emergent Agents and Socialities (p. 116). Retrieved from: https://www.aaai.org/Papers/Symposia/Fall/2007/FS-07-04/FS07-04-017.pdf
4 Driver Trust in
Automated, Connected,
and Intelligent Vehicles
John D. Lee
University of Wisconsin-Madison
CONTENTS
Key Points ................................................................................................................ 67
4.1 Introduction..................................................................................................... 68
4.2 Trust and Types of Automation....................................................................... 68
4.3 Definition and Mechanisms Underlying Trust ............................................... 72
4.4 Promoting Appropriate Trust in Vehicle Automation .................................... 78
4.4.1 Calibration and Resolution of Trust.................................................... 78
4.4.2 Trustable Automation: Create Simple Use Situations and
Automation Structure.......................................................................... 80
4.4.3 Trustable Automation: Display Surface and Depth Indications
of Capability........................................................................................ 81
4.4.4 Trustable Automation: Enable Directable Automation and
Trust Repair ........................................................................................ 82
4.4.5 Trustworthy Automation and Goal Alignment................................... 84
4.5 Trust and Acceptance of Vehicle Technology................................................. 84
4.6 Ethical Considerations and the Teleology of Technology............................... 85
4.7 Conclusion....................................................................................................... 86
Acknowledgements .................................................................................................. 87
References ................................................................................................................ 88
KEY POINTS
• Trust complements the concepts of mental models and situation awareness
in explaining how and when people rely on automation.
• Trust is a multi-faceted term that operates at timescales of seconds to years
and mediates how people rely on, accept, and tolerate vehicle technology.
• Trust mediates everything from micro interactions concerning how people rely on automation to engage in non-driving tasks to macro interactions concerning how the public accepts new forms of transport.
• Trust depends on surface features, such as the look and feel of the human–
machine interface (HMI), as well as depth features, such as the reliability
of the automation and alignment of the person’s and automation’s goals.
• The imperfect goal alignment between primary users (e.g., riders in an
automated vehicle (AV)) and incidental users (e.g., pedestrians negotiating
intersections with AVs) might undermine the trust and tolerance of inciden-
tal users.
4.1 INTRODUCTION
In the previous chapters, we described the different types of automated, con-
nected, and intelligent vehicle technologies and the mental models that inform the
driver’s decisions and behaviors. The discussion of mental models (this Handbook,
Chapter 3) shows that it is one thing to develop technologies that can greatly benefit
the driver and society, but it is an entirely different matter for these technologies to
be used appropriately. Like mental models, trust is one factor that guides how driv-
ers use vehicle automation, as well as the public acceptance and tolerance of driver-
less vehicles. This chapter addresses how one goes about building, calibrating, and
repairing trust for different types of automation.
This chapter begins by describing different types of automation and different roles
of people from the perspective of trust. This is followed by a definition of trust, a
description of the cognitive mechanisms that underlie it, and the need to promote
appropriate trust. Design approaches to promote appropriate trust depend on various
relationships between people and vehicle technology, such as drivers monitoring fal-
lible self-driving vehicles, passengers riding in driverless vehicles, people relying on
algorithms to choose ride-sharing partners, and even pedestrians who must negoti-
ate intersections with automated vehicles (AVs). These human–technology relation-
ships are described in terms of general categories of vehicle technology—shared,
traded, and ceded control. These categories are relevant for trust because they reflect
increasing degrees to which people make themselves vulnerable to the technology.
Considering these categories, we discuss the roles of people, why trust is relevant, the
basis of trust, and the design choices that might promote appropriate trust and repair
trust after automation mishaps. The chapter concludes by briefly addressing the ethical
considerations associated with managing trust and creating trustworthy technology.
4.2 TRUST AND TYPES OF AUTOMATION
The most ambitious form of vehicle automation is the driverless vehicle, where people cede control entirely (e.g., SAE Level 5). Less ambitious are vehicles where drivers can trade control with automation—SAE Level 3 automation. Here, the vehicle might drive itself for part of the journey, but drivers would be able to take control and might be called upon by the vehicle to take control. Many vehicles contain still less ambitious automation where drivers share control with the automation. Here the driver remains completely responsible for driving, but the automa-
tion eases the demands of driving and might allow for short periods of less vigilant
attention to the roadway—SAE Levels 1 and 2 automation. Shared control includes
adaptive cruise control (ACC) and lane-centering systems that the driver deliberately
engages, as well as systems that engage automatically, such as electronic stability
control and automatic emergency braking, which activate only in rare and extreme
conditions. The terms ceded, traded, and shared are used because they reflect how
automation puts people in different positions of vulnerability and uncertainty, which
engages trust in guiding behavior (Mayer, Davis, & Schoorman, 1995). Whether in
the form of ceded, traded, or shared control, vehicle automation is changing the role
and responsibility of drivers, and the theoretical construct of trust will likely play
a critical role in mediating how people adapt to this new technology (see also, this
Handbook, Chapters 6, 8, 9).
The people for whom vehicle automation is designed—the drivers or riders—are the most obvious users whose trust in it will influence its success, but vehicle automation and people reside in a large interconnected network (see Figure 4.1; Lee, 2018). In this context, even driverless vehicles, where drivers have ceded control to the vehicle, are not actually autonomous. Driverless vehicles will depend on an array of people to maintain them, repair them, and remotely operate them when the on-board automation encounters situations that exceed its capacity. Here trust might influ-
ence whether a remote operator chooses to intervene and “drive” the vehicle, much
like trust influences those who manage remotely piloted drones (Guznov, Lyons,
Nelson, & Woolley, 2016). Another critical trust relation emerges when people share
rides in an AV. Similar to other situations in the sharing economy, trust can inhibit
or facilitate people’s willingness to accept the suggestion of an algorithm to share a
ride with a stranger (Ert, Fleischer, & Magen, 2016; Payyanadan & Lee, 2018). Other
nodes on this network that automation may affect are other road users, such as pedes-
trians and cyclists, as well as drivers of conventional vehicles. These other road users
are not the primary users of automation, where the automation has been designed
to achieve their goals, but are incidental or passive users, who did not choose to use
the automation and whose goals may conflict with those of the AVs and their riders
(Inbar & Tractinsky, 2009; Montague, Xu, & Chiou, 2014).
Changes to one node can ripple through the network and affect trust in unantici-
pated ways. For example, pedestrians who grow distrustful and intolerant of AVs might
bully or sabotage them, undermining their efficiency—early demonstrations of self-
driving vehicles have encountered hostile people who have vandalized them (Romero,
2018). The diminished efficiency might then undermine the trust of the riders when
they fail to arrive at their destinations on time. The many nodes of the network that
define the transportation ecosystem mean it is insufficient to consider only the rela-
tionship between the person in the vehicle and the vehicle technology (Woods, 2015).
Figure 4.1 shows many trust relationships linking the nodes of a multi-echelon
network. The echelons range from the network of sensors and computers that reside
in the car to the network of organizations and societal infrastructure that guides technology development, deployment, and regulation.
FIGURE 4.1 A multi-echelon network of trust relationships and vehicle technology. Filled circles indicate people and open circles represent automation elements. The lines indicate trust relationships, with solid lines indicating likely goal alignment and dashed lines indicating relationships where goal alignment is less likely.
The influence of trust in these
relationships depends on the type of vehicle technology and the role of the person.
The most commonly studied trust relationships are those between people and the
AV system, shown in Figure 4.1 as the “Driving Functions and Activities.” Other trust
relationships span levels, such as between a remote operator of self-driving vehicles
at the level of “Remote Infrastructure and Traffic” and the riders involved in the
traffic situation at the level of “Negotiated Road Situations.” Some trust relation-
ships even span the extremes, as the engineering community at the level of “Policy,
Standards, and Societal Infrastructure” and the behavior of sensors at the level of
“Vehicle Sensors and Control.” Generally, technology serves the goal of the per-
son directly interacting with it, but in this network the goals of some people might
be at odds with the automation—shown as dotted lines. For example, the goals of
pedestrians negotiating an intersection will often conflict with those of an AV that
is serving its rider’s goal of minimizing travel time. This figure shows only a few of
the many trust relationships that comprise the transportation ecology to highlight the
range of situations where trust in technology plays a role.
These trust relationships can suffer from either over-trust or under-trust. For
example, with shared control (SAE Levels 1 and 2), technology supports people
in their role as drivers (SAE, 2016). Here the danger is that people will over-trust
automation and rely on it to perform in ways the designers did not intend. As an
example, some drivers over-rely on the Tesla automation, leading to extended periods
of disengagement from driving and several fatal crashes (NTSB, 2016). For safety
drivers and remote operators who supervise AVs and intervene when the automation encounters unexpected difficulties, the situation is similar to shared control: the danger is that they will over-trust the technology and rely on it to perform safety-critical driving functions when they should not.
Over-trust with shared control contrasts with under-trust with ceded control (SAE
Level 5), where the person’s role might be that of a passenger who is unable to inter-
vene and take back the driving task. Here the danger is that people will under-trust
the automation, reject it, and fail to enjoy its benefits. For example, people might not
trust that self-driving vehicles can drive safely and opt to drive themselves. Trust is
needed not just for riding in an AV, but also for requesting an AV, entering and initi-
ating a trip, making trip changes while en route, and safely pulling over and exiting
at the destination (Weast, Yurdana, & Jordan, 2016). Figure 4.1 highlights one such
relationship, where riders will need to trust the vetting and matching algorithms
associated with making shared rides safe. Here people must develop trust in their
fellow passengers through the technology used to arrange the ride (Ert et al., 2016).
In contrast with riders and drivers who are primary users of technology, the goals
of incidental users, such as pedestrians, might not align with the automation, which
could undermine their trust of AVs even if they behave safely. For example, AVs
might fail to give way to pedestrians waiting to cross the street, forcing the pedestri-
ans to wait longer than they might with manually driven vehicles. Another class of
incidental users is the general public who must share its transportation infrastructure
with AVs. Public anger erupted when buses used to shuttle employees of large technology companies in Silicon Valley clogged public bus lanes (De Kosnik, 2014). This anger is an
indicator of potential distrust that might emerge if people see AVs as appropriating
public resources. The factors affecting trust in technology differ in each of these
situations, which can be revealed by considering the psychological mechanisms
underlying trust.
FIGURE 4.2 Trust-related actions, the trust-related information revealed by these actions, and the cognitive mechanisms underlying trust in the context of Neisser’s perceptual cycle. (Neisser, 1976.)
4.3 DEFINITION AND MECHANISMS UNDERLYING TRUST
Trust matters when people make themselves vulnerable to technology that might interfere with achieving their goals. Tolerance is critical in considering how pedestrians and other drivers might respond to AVs. The balance of this section expands on the cognitive mechanisms of trust, first relating trust to mental models and situation awareness.
Trust has close connections to other important concepts regarding driver interac-
tion with automation, such as mental models (see this Handbook, Chapter 3) and
situation awareness (see this Handbook, Chapter 7). One definition of mental models
is an explicit verbalizable mental representation of a system that allows people to
anticipate system behavior and to plan actions to achieve desired results (Norman,
1983; Rouse & Morris, 1986). Mental models support simulation of possible future
states of the system to guide attention and action. According to this conception of
mental models, trust complements them as an implicit, non-verbalizable, representa-
tion that guides behavior through an affective response to the system (Lee, 2006).
For example, a feeling of diminished trust might lead the driver to suspect that the lane-keeping system is not operating properly, and the driver’s mental model might help the driver recognize that rain is interfering with the sensors. Trust also dif-
fers from mental models in that it guides interactions with intentional agents, or with
agents whose complexity and autonomy make them appear intentional, whereas
mental models tend to guide interactions with inanimate systems—you might have a
mental model of a toaster but trust a voice-based assistant. A toaster requires a men-
tal model to guide specific interactions, but a voice-based assistant requires a belief
that it will act and reliably provide information.
Situation awareness can be thought of as the elements of the mental models that
are active in the moment, and so guides attention and behavior in a particular situ-
ation (Endsley, 2013). It also encompasses the perception, interpretation, and pro-
jection of the current system state (Endsley, 1995). All of these reflect the person’s
mental model and the assimilation of information regarding the situation. Situation
awareness, as defined as an explicit awareness of the system, is often critical for
guiding decision-making and effective behavior. However, much behavior is guided
by implicit knowledge and affective processes that produce advantageous deci-
sions without awareness of the cues that guide them (Bechara, Damasio, Tranel, &
Damasio, 1997; Kahneman, 2011).
Figure 4.3 builds on Figure 4.2 to show how trust complements the influence of
mental models and situation awareness to influence behavior. The cognitive state
of the person, defined by trust, mental model, and situation awareness, directs per-
ceptual exploration, specific actions, and, more generally, behavior. This behavior
samples the world and, more specifically, the operational domain and device, as well
as the specific situation. These samples modify the cognitive state—trust, mental
model, and situation awareness.
FIGURE 4.3 Links between situation awareness, mental models, and trust based on Neisser’s perceptual cycle. (Neisser, 1976.)
A person’s sense of agency, whether an action is experienced as performed by oneself (I agency), jointly (We agency), or by another (You agency), arises from the interpretation of the action and outcomes relative to the goal of the action (Haggard &
Tsakiris, 2009). Fluent control requires a direct connection between action and out-
come, precise response to movement, and immediate feedback (Sidarus, Vuorre,
Metcalfe, & Haggard, 2017). For example, people feel agency in controlling a con-
ventional vehicle because a vehicle responds immediately and smoothly to a driver’s
movement of the steering wheel. Agency is greatest when people’s actions clearly and
immediately reflect a goal that they actively choose (Haggard, 2017).
The I, We, and You agencies are relevant to how people trust and respond to vehicle
automation because agency influences the capacity to detect errors, accept respon-
sibility, and feel in control (van der Wel, 2015). Through predictive processing of
perceptual-motor actions, people are very sensitive to their own errors, in the case of
I agency; or their partners’ errors, in the case of We agency (Picard & Friston, 2014).
Furthermore, people can predict the goals of others before the action ends, provided
that the action conforms to the constraints of biological motion (Elsner, Falck-Ytter, &
Gredebäck, 2012). Fluent interactions with automation and a sense of We agency
depend on these predictive mechanisms that break down when biological motion
rules are violated and the automation lacks anthropomorphic characteristics (Sahaï,
Pacherie, Grynszpan, & Berberian, 2017). Observable and directable behavior that
includes anthropomorphic control patterns will likely engage the mirror neuron sys-
tem of the person and promote fluent control (Knoblich & Sebanz, 2008; Madhavan &
Wiegmann, 2007). For specific examples of how to create such anthropomorphic con-
trol patterns see Lasseter (1987). Removing fluent control tends to shift people from
I to We or You agency, resulting in diminished error detection, longer latency, and a
diminished sense of control and responsibility. Ideally, the I, We, and You agencies
should parallel the role of the person in shared, traded, and ceded control.
We agency is sometimes transformed into I agency when people develop a sense
that they are responsible for control that is actually generated by another. This vicari-
ous control emerges when people are deeply engaged by observation and actively pre-
dicting outcomes. Vicarious control tends to occur when instructions precede action
but not when the instructions follow the action (Wegner, Sparrow, & Winerman,
2004). In the driving context, this might have implications for the timing of turn-
by-turn navigation commands that are being followed by a car with traded control.
Instructions that precede the response of the maneuver are likely to induce a greater
degree of vicarious control and agency. The opposite of vicarious control—agency
collapse—can also occur, leading people who are controlling the vehicle to move
from a sense of I agency to You agency and ascribe agency and responsibility to the
vehicle. This seems to occur in some cases of unintended acceleration, where people
feel that the car is out of their control despite their continued depression of the accel-
erator pedal. Such agency collapse might become more prevalent with increasingly
autonomous automation (Schmidt, 1989; Schmidt & Young, 2012).
SAE Level 2 automation poses a problem in this regard because removing the
driver from the fluent, perceptual-motor, control might lead to a sense of You agency
where people disengage from driving, fail to detect automation errors, and lose a
sense that they are responsible to oversee the automation. A recent test-track study
highlighted this possibility by exposing people to obstructions (e.g., a stationary vehicle or a garbage bag) that required them to respond because the automation would not. Although people received instructions and had their eyes on the road, 28% failed
to respond (Victor et al., 2018). The authors explained the results in terms of an
expectation mismatch, which suggests a misunderstanding of system capability due
to poor situation awareness and an inaccurate mental model (Engström et al., 2017).
The concept of agency suggests a more fundamental mismatch where some people
may have developed expectations based on You agency rather than the I or We agency
appropriate for SAE Level 2 automation. Shifts of agency are likely driven, not so
much by the explicit understanding associated with mental models of automation,
but by developing trust based on experiencing the automation.
The research on agency and joint activity suggests that trust is rooted in
perceptual-motor experience of acting together (Seemann, 2009). Trust that devel-
ops with We agency emerges as an attitude toward the other that is expressed in the bodily state of the other (embodied) and in intentions that can be read from the other’s actions (enacted). Trust and We agency depend on a sense of joint control rooted in perceptual-motor coupling, which some types of automation can enhance by providing additional feedback through the steering wheel and pedals, but which other types can degrade or eliminate, leading to You agency (Abbink et al., 2017; Mulder, Abbink, & Boer, 2012). More generally, these findings suggest that assessing agency might be essential to define the nature of the trust
relationship (Caspar et al., 2015; Haggard, 2017). Are people trusting the automation
to drive—as in ceded control and You agency? Or are they trusting the automation to
work with them to drive—as in shared control and We agency?
4.4 PROMOTING APPROPRIATE TRUST IN VEHICLE AUTOMATION
4.4.1 Calibration and Resolution of Trust
FIGURE 4.4 Trust as a function of automation trustworthiness, with regions of overtrust and undertrust.
Much research examines how automation reliability affects trust and reliance,
where reliance is the proportion of time people have the automation engaged.
Reliability is often manipulated by inserting random faults in the automation that
produce some overall level of reliability, such as 75% of the trials producing error-
free performance (Hancock et al., 2011; Schaefer et al., 2016). This simple indi-
cation of performance often ignores the important effect of context (Bagheri &
Jamieson, 2004). The context or operational domain refers to features of the situa-
tion that might influence the automation performance. To the extent that automation
reliability depends on the context, the resolution of trust can be improved by high-
lighting situations where the automation performs well and situations where it does
not. Improving the resolution of trust is particularly critical for SAE Level 2 and 3
automation. Such automation might work perfectly within the Operational Design
Domain (ODD), which might cover 75% of the driving situations, and not elsewhere.
This is a very different situation from the automation operating properly 75% of the time across all driving situations. Designing an ODD that is easy for drivers to understand can greatly enhance the resolution of trust, so that drivers trust and rely on the automation when it is within the ODD but not when it is not.
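To make the distinction concrete, the following sketch (our illustration, not from the chapter) simulates two systems with the same 75% overall reliability: one fails at random in any context, while the other is perfect inside an ODD covering 75% of episodes and always fails outside it. Only the second supports well-calibrated reliance, because ODD status is an observable cue.

```python
import random

random.seed(0)
N = 100_000

# Case 1: uniform reliability -- each episode succeeds with p = 0.75
# regardless of context, so no observable cue separates good from bad episodes.
uniform_success = sum(random.random() < 0.75 for _ in range(N)) / N

# Case 2: context-dependent reliability -- perfect inside the ODD (75% of
# episodes), certain failure outside it. ODD status is observable.
in_odd = [random.random() < 0.75 for _ in range(N)]
context_success = sum(in_odd) / N  # succeeds exactly when inside the ODD

print(f"Case 1 overall reliability: {uniform_success:.3f}")   # ~0.750
print(f"Case 2 overall reliability: {context_success:.3f}")   # ~0.750
# A driver who relies only when inside the ODD experiences zero failures in
# Case 2; in Case 1, no context-based reliance policy can do better than 75%.
```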
Ideally, trust would be highly resolved and well-calibrated so that high levels of
trust correspond to high capability to achieve the driver’s goal, and low levels of trust
correspond to situations where the technology is generally incapable of achieving the
driver’s goal. Figure 4.4 points to two general approaches in achieving appropriate
trust: improve the capability of the automation—improve trustworthiness—to meet
people’s expectations or make the technology more trustable. Increasing trustwor-
thiness to match people’s trust with automation that requires drivers to remain able
to take back control—SAE Level 2 or 3—can fail because the better performing
automation might engender greater and equally inappropriate trust, leading drivers
to be even less likely to take control when needed. This automation conundrum is
well documented in other domains (Endsley, 2016; Sebok & Wickens, 2017; see also,
this Handbook, Chapter 21). A more promising approach is to enhance the calibra-
tion and resolution of trust by making automation more trustable, which is the focus
of the balance of this section.
4.4.2 Trustable Automation: Create Simple Use Situations and Automation Structure
Automation structure should match the task chunks that drivers naturally form. Designers who upgrade a lane-keeping system so that it can automatically change lanes should consider that drivers are likely to chunk such a task to include checking the blind spot, and so would expect the automation to include that subtask with the update. More generally, the automation can be made trustable through indications of its purpose, process, and performance:
Purpose
• Define the operational domain in concrete terms and provide a continuous
indicator of whether the vehicle is in the operational domain or not.
• Indicate impending changes in the driving situation relative to the opera-
tional domain.
• Monitor the driver’s behavior and provide feedback when the driver’s
behavior, such as attention to the road, deviates from that required by the
purpose of the automation.
• Match the goals of the automation to those of the person (e.g., speed or
energy efficiency).
Process
• Indicate what the automation “sees” and does not “see.”
• Communicate the state of the vehicle and explain its behavior. The vehicle
should indicate what events led the vehicle to suddenly brake or change lanes.
• Allow people to request more information to explain behavior and to reduce
the flow of information.
• Automation that mimics the process of human driving will produce fewer
surprises and be more trustable.
Performance
• Show proximity to control capacity to indicate when performance is likely
to suffer, such as in steering through a curve.
• Consider haptic and auditory cues to highlight situations where perfor-
mance is degraded.
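As one concrete, purely illustrative way of realizing the first two items of the Purpose guidance above, the sketch below maps ODD status to a continuous driver-facing indication, including a warning of an impending ODD exit. The class, function, and threshold are our own assumptions, not part of any production system.

```python
from dataclasses import dataclass

@dataclass
class OddStatus:
    in_odd: bool            # is the vehicle currently inside the ODD?
    seconds_to_exit: float  # estimated time until the ODD boundary

def hmi_message(status: OddStatus, warn_horizon_s: float = 20.0) -> str:
    """Map ODD status to a continuous driver-facing indication (illustrative)."""
    if not status.in_odd:
        return "OUTSIDE ODD: automation unavailable -- drive manually"
    if status.seconds_to_exit <= warn_horizon_s:
        return (f"IN ODD: exit expected in {status.seconds_to_exit:.0f} s -- "
                "prepare to take over")
    return "IN ODD: automation operating within its design domain"

# Example: approaching the end of a mapped highway segment
print(hmi_message(OddStatus(in_odd=True, seconds_to_exit=12.0)))
```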
4.4.3 Trustable Automation: Display Surface and Depth Indications of Capability
The purpose, process, and performance of automation can be thought of as depth features that define the capability of the automation, which are imperfectly revealed by the surface features of the interface. Sometimes the surface features of the interface themselves can strongly influence trust even though they have no direct connection
to the underlying capability of automation. For example, the scent of lavender leads
to greater interpersonal trust by inducing a calm, inclusive state (Sellaro, van Dijk,
Paccani, Hommel, & Colzato, 2014), and pastel colors enhance trust in cyberbanking
interfaces (Kim & Moon, 1998).
Sometimes the surface features of the automation conflict with the depth features
that define its true capabilities. One example of this is the label used for various
vehicle automation features. These labels can imply that the automation has greater
capabilities than it has, but often the labels are ambiguous and confusing (Abraham,
Seppelt, Mehler, & Reimer, 2017; Nees, 2018). Although the labeling and description of automation in the owner’s manual are an important basis of trust, real-time feedback from the HMI in the vehicle might be more influential (Li, Holthausen,
Stuck, & Walker, 2019; this Handbook, Chapter 15). Both visual and auditory dis-
plays of performance and process of the automation act as an externalized mental
model that helps drivers see its limits (Seppelt & Lee, 2007; 2019).
Moving beyond the HMI, the behavior of the automation itself can make it more trustable and promote appropriate trust. The variability of the lane position
of the vehicle can be a salient cue that might convey the depth features of auto-
mation. Because degraded control behavior of the vehicle engages drivers (Alsaid,
Lee, & Price, 2019), less precise lane-keeping can prompt drivers to look to the road
(Price, Lee, Dinparastdjadid, Toyoda, & Domeyer, 2019). Precise lane-keeping in
situations where the automation is less capable could promote over-trust. Similarly, acceleration cues of ACC helped redirect drivers’ attention to the road and anticipate conflicts with the vehicle ahead (Morando, Victor, & Dozza, 2016). Likewise, vehicles can announce their intent to change lanes through the roll of the vehicle’s body (Cramer & Klohr, 2019). In general, the salient surface features, such as vehicle
motion, should be mapped to important depth features of the automation to convey
how its capability changes with the evolving situation.
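A minimal sketch of one such mapping, under our own assumptions, lets lane-keeping precision degrade gracefully as sensor confidence drops, so that increased lane-position variability becomes an honest surface cue to diminished depth capability.

```python
def steering_correction(lane_offset_m: float, sensor_confidence: float,
                        base_gain: float = 1.0) -> float:
    """Illustrative only: scale the corrective steering gain by sensor
    confidence (0..1). Lower confidence yields looser lane-keeping, so the
    vehicle's own motion signals reduced automation capability."""
    gain = base_gain * max(0.0, min(1.0, sensor_confidence))
    return -gain * lane_offset_m

print(steering_correction(0.3, sensor_confidence=0.95))  # tight: -0.285
print(steering_correction(0.3, sensor_confidence=0.40))  # loose: about -0.12
```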
The trust-relevant characteristics of the automation’s purpose, process, and per-
formance are often inferred from direct experience with the system, but this might
not be possible with the introduction of dramatically new technology, such as self-
driving cars. Many people will not have any direct experience on which to base
their initial trust of these systems. In the absence of direct experience, people base
their trust on relational and societal bases (Lee & Kolodge, 2019). Relational bases
include experience with a brand (e.g., GM or Ford) or a type of technology (e.g.,
computers). Societal bases include the policy and regulatory structures that ensure
vehicle safety. Like labels, the relational and societal bases of trust can also lead to inappropriate trust (see also this Handbook, Chapter 5).
4.4.4 Trustable Automation: Enable Directable Automation and Trust Repair
A lane-centering system might effectively center the vehicle but make it difficult for the driver to adjust the vehicle’s position to accommodate other traffic or potholes. An alternate approach
could be for the automation to be more directable and accept steering input from
the driver and provide feedback through the steering wheel that indicates when the
driver approaches safety boundaries, such as the edge of the road (Abbink et al.,
2017; Abbink, Mulder, & Boer, 2012). Such feedback could also indicate to the driver
when sensors are providing less precise information. As in the previous discussion of
agency, control changes the way we perceive the world and letting the driver direct
the automation provides support for appropriate trust that is not easily achieved by
simply displaying trust-related information (Wen & Haggard, 2018). More specifi-
cally, like active touch and fluent perceptual-motor interaction, perception through
direction of automation will likely be more effective in helping drivers detect limita-
tions than perception through observation of automation (Flach, Bennett, Woods, &
Jagacinski, 2015; Gibson, 1962). Directable automation may provide a strategy to
circumvent one of the ironies of automation: better automation leads to less fre-
quent engagements and the occasional engagements tend to be more challenging
(Bainbridge, 1983). Automation that keeps the driver engaged and invites interaction
may prepare people for occasional instances where they must take control.
Making the automation directable engages the person in developing trust; trust
repair engages the automation in re-developing trust. The process of recovering people’s trust following an automation failure can be an active process of trust repair (de Visser, Pak, & Shaw, 2018; Kim & Cooper, 2009; Tomlinson & Mayer, 2009).
Trust repair describes the process of interacting with the person to recover trust
rather than simply relying on the baseline representation of the automation’s pur-
pose, process, and performance. Here, additional information is presented to demon-
strate that the system is trustworthy. One way to frame this information is in terms
of extrapolating past experiences, evaluating the present experience, and projecting
future outcomes—explain, show, and promise (Emirbayer & Mische, 1998). This
can include an explanation of past failings. It can also include showing, in the present, why the automation fails, and it can project to the future with a promise of why the automation will not fail (Kohn, Quinn, Pak, de Visser, &
Shaw, 2018). For example, if the automation requests the driver to take back control,
it might build trust by describing what about the situation—unusually heavy rain—
led to the need for the driver to intervene and why this would be unlikely to occur
in the future. More generally, the elements of trust recovery—explain, show, and
promise—can convey the locus of causality (e.g., an external event, such as heavy
rain versus an internal event, such as a software bug), degree of control over the situ-
ation (e.g., the degree to which the rain was unavoidable), and stability of the cause
(e.g., the unpredictable nature of intense rain) (Tomlinson & Mayer, 2009).
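The explain-show-promise structure, together with the locus, control, and stability attributions above, can be captured in a simple message template. The sketch below is hypothetical; the field names and example wording are ours.

```python
from dataclasses import dataclass

@dataclass
class TrustRepairMessage:
    explain: str    # extrapolate the past: what happened and why
    show: str       # evaluate the present: evidence about the current state
    promise: str    # project the future: why recurrence is unlikely
    locus: str      # external vs. internal cause
    control: str    # degree of control over the cause
    stability: str  # how stable or predictable the cause is

takeover_message = TrustRepairMessage(
    explain="Takeover was requested because unusually heavy rain degraded the camera.",
    show="Sensor confidence is displayed and has returned to normal levels.",
    promise="Rain of this intensity is rare; the system will warn earlier next time.",
    locus="external (weather)",
    control="unavoidable in the moment",
    stability="unstable and unpredictable",
)
print(takeover_message.explain)
```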
The aim of trust repair, like the other ways of engineering the relationship with
automation, is to promote highly resolved and well-calibrated trust, not to increase
trust. Following this logic, thought should be given to the complement of trust
repair—trust tempering. Trust tempering would actively monitor situations where the system performed well but failure was likely, and then explain why such success cannot be counted on; doing so can moderate trust and help people avoid relying on automation in such situations in the future.
4.5 TRUST AND ACCEPTANCE OF VEHICLE TECHNOLOGY
The technology acceptance model (TAM) describes acceptance in terms of perceived usefulness and perceived ease of use (Davis, 1989). One study applying this model to autonomous vehicles found that trust and perceived usefulness influenced the intention to use an autonomous vehicle to a similar extent and that perceived ease of use had a much smaller influence (Choi & Ji, 2015).
Unlike the typical application of TAM to information technology in the work-
place, driving involves considerable risk. Although autonomous vehicles promise
to greatly reduce the risk of driving, it is not the actual risk and safety that govern
trust and technology acceptance but the perceived risk. With perceived risk, people
focus on stories and feelings, not statistics and analysis, and so they might neglect
the many crashes autonomous vehicles avoided and focus on the few caused by
automation (Slovic, Finucane, Peters, & MacGregor, 2004). Consequently, people
may perceive autonomous vehicles as much riskier than they actually are. Perceived
risk can also deviate from actual risk in situations where the technology is not read-
ily observable, mishaps produce deadly consequences, and people feel they have
no control. Such situations produce dread risk (Slovic, 1987; Slovic et al., 2004).
An analysis of open-ended items from a large survey found people responding to
vehicle automation in terms of dread risk (Lee & Kolodge, 2019). If vehicle automa-
tion produces feelings of dread risk, then AVs might need to be 1,000 times safer
than conventional vehicles to be perceived as having the same risk (Lee, 2019;
Slovic, 1987).
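As a back-of-envelope illustration of that 1,000-fold figure (our arithmetic, assuming a conventional fatality rate of roughly 1.1 per 100 million vehicle-miles, a commonly cited U.S. value):

```latex
r_{\mathrm{AV}} \approx \frac{r_{\mathrm{conv}}}{1000}
              = \frac{1.1\ \text{fatalities} / 10^{8}\ \text{mi}}{1000}
              \approx 1.1\ \text{fatalities} / 10^{11}\ \text{mi}
```

That is, on the order of one fatality per 90 billion miles before perceived risk would match that of conventional driving.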
Trust plays a critical role in mediating risk perception (Slovic, 1993). In the con-
text of AVs, trust partially mediated how potential environmental benefits influenced risk acceptance, and fully mediated the effect of these benefits on people’s intention to use AVs (Liu, Ma, & Zuo, 2019). In other words, people are will-
ing to accept more risk for riding in an environmentally friendly vehicle if they
trust automation. Generally, trust is slow to develop and quick to lose (Muir, 1987;
Slovic, 1993). This may be even more pronounced with AVs where a failure might
lead people to see risk in terms of dread risk. Such a transition might lead to trust
collapse and a slow recovery as people monitor the behavior of automation with the
expectation of further violations of their trust (Earle, Siegrist, & Gutscher, 2010;
Slovic, 1993).
Although TAM suggests that the perceived usefulness of AVs will be a powerful force in their acceptance, the nature of driving makes it likely that risk will also play an important role. This role might be accentuated by the potential for people
to perceive risk associated with vehicle automation as dread risk. Importantly, trust
seems to have a strong influence on both perceived usefulness and risk, making it
critical to craft trustworthy and trustable automation.
4.6 ETHICAL CONSIDERATIONS AND THE TELEOLOGY OF TECHNOLOGY
Trustworthy vehicle automation implies not only that the vehicle can maintain its position in the lane and avoid obstacles, but also that it behaves politely with other road users and that it provides service and uses public resources equitably. Vehicle automation can provide mobility and greater agency to
those who are poorly served by today’s transportation options, such as people who are older, economically disadvantaged, or living with vision or mobility limitations (see also,
this Handbook, Chapters 10, 17). However, recent history has shown that the algo-
rithms that power social media and public policy tend to exacerbate existing biases
and inequities (O’Neil, 2016). To counter this tendency, there is an increasing need
for policies to ensure the trustworthiness and ethical implementation of technology
(Etzioni & Etzioni, 2016).
Some frame the ethical challenge of AVs in terms of the Trolley Problem, where
the algorithm designers face decisions about which people an algorithm causes to die
and which it saves in particular crash situations (Bonnefon, Shariff, & Rahwan, 2016;
Shariff, Bonnefon, & Rahwan, 2017). Such situations and the ethical dilemmas posed
are rarely realistic challenges for designers (Bauman, McGraw, Bartels, & Warren,
2014; De Freitas, Anthony, & Alvarez, 2019). A more realistic dilemma is the macro-
level “Trolley Problem” that companies and regulatory agencies face: should they pull the lever and enable self-driving cars on the road, knowing that the technology is imperfect and their action will be responsible for thousands of people dying, or should they wait and let thousands more die in conventional vehicles, as is happening today?
At a more pragmatic level, developers and policymakers must decide whether it is ethical for fully automated, self-driving vehicles to follow the letter of the law. Following
the speed limit rather than the speed of the traffic stream might increase the risk of
collision; however, over time such behavior might lead others to drive more slowly
and follow the speed limit, which might ultimately make driving safer. Chapter 14 describes some of the challenges governments face in responding to AVs.
As technology becomes more capable, we confront the threat that increasingly
autonomous technology will undermine human agency (Hancock, Nourbakhsh, &
Stewart, 2019; Lee, 2019). At the same time, autonomous systems can free us from
mundane tasks, such as navigating a congested highway, and give us agency to pur-
sue more meaningful activities. The diverse conceptualization of trust presented in
this chapter may provide the theoretical perspective needed to identify what level
of control people need to achieve their goals and feel enhanced agency rather than
diminished agency.
4.7 CONCLUSION
Trust complements the concepts of mental models and situation awareness in
explaining how and when people rely on automation. Trust is a multi-faceted term
that operates at timescales of seconds to years to describe whether people rely on,
accept, and tolerate vehicle technology. Trust mediates everything from micro interactions concerning how people rely on automation to engage in non-driving tasks to macro interactions concerning how the public accepts new forms of transport. Public acceptance
may depend on the trust of incidental users, such as pedestrians who must negotiate
with AVs at intersections, and drivers who must share the road with AVs. With such
incidental users, designers need to consider how to reconcile the goals of the riders
of the AVs and the goals of the other road users who must tolerate the AVs if they are
to succeed (Domeyer, Lee, & Toyoda, 2020). Unlike traditional users for which the
automation was primarily designed, the trust and tolerance of incidental users might play a critical role in the success of increasingly autonomous vehicles as they use public roadways and resources in ways that conventional vehicles might not. Chapter 5 addresses these
issues of public acceptance in more detail.
Harmonizing trust relationships in the transportation network represents a com-
plex and multi-faceted design challenge (this Handbook, Chapter 19). The specific
design considerations depend on the particular trust relationships (e.g., drivers interacting with SAE Level 2 automation or pedestrians interacting with driverless vehicles); however, the approaches described in this chapter (creating simple use situations, displaying surface and depth indications of capability, enabling directable automation, and supporting trust repair) offer specific guidance for promoting appropriate trust.
ACKNOWLEDGEMENTS
This work is partially supported by a grant from Toyota CSRC and the National
Science Foundation (NSF 18–548 FW-HTF). The chapter was greatly improved
with the comments of the editors, Erin Chiou, Josh Domeyer, and Mengyao Li,
and the other members of the Cognitive Systems Laboratory at the University of Wisconsin-Madison.
REFERENCES
Abbink, D. A., Carlson, T., Mulder, M., de Winter, J., Aminravan, F., Gibo, T., & Boer, E.
(2017). A topology of shared control systems–Finding common ground in diversity.
IEEE Transactions on Human-Machine Systems, 48(5), 509–525.
Abbink, D. A., Mulder, M., & Boer, E. R. (2012). Haptic shared control: Smoothly shift-
ing control authority? Cognition, Technology and Work, 14(1), 19–28. doi:10.1007/s10111-011-0192-5
Abraham, H., Seppelt, B., Mehler, B., & Reimer, B. (2017). What’s in a name: Vehicle tech-
nology branding and consumer expectations for automation. AutomotiveUI 2017–9th
International ACM Conference on Automotive User Interfaces and Interactive
Vehicular Applications, Proceedings, 226–234. doi:10.1145/3122986.3123018
Adolphs, R., Tranel, D., & Damasio, A. R. (1998). The human amygdala in social judgment.
Nature, 393, 470–474.
Alsaid, A., Lee, J. D., & Price, M. A. (2019). Moving into the loop: An investiga-
tion of drivers’ steering behavior in highly automated vehicles. Human Factors.
doi:10.1177/0018720819850283
Bagheri, N. & Jamieson, G. A. (2004). The impact of context-related reliability on automa-
tion failure detection and scanning behaviour. 2004 IEEE International Conference on
Systems, Man and Cybernetics, 1, 212–217.
Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775–779.
doi:10.1016/0005-1098(83)90046-8
Bansal, G., Nushi, B., Kamar, E., Weld, D., Lasecki, W., & Horvitz, E. (2019). A case for
backward compatibility for Human-AI teams. arXiv:1906.01148.
Bauman, C. W., McGraw, P. A., Bartels, D. M., & Warren, C. (2014). Revisiting external validity:
Concerns about trolley problems and other sacrificial dilemmas in moral psychology.
Social and Personality Psychology Compass, 8(9), 536–554. doi:10.1111/spc3.12131
Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1997). Deciding advantageously
before knowing the advantageous strategy. Science, 275(5304), 1293–1295.
Caspar, E. A., De Beir, A., Magalhaes De Saldanha Da Gama, P. A., Yernaux, F., Cleeremans,
A., & Vanderborght, B. (2015). New frontiers in the rubber hand experiment: When
a robotic hand becomes one’s own. Behavior Research Methods, 47(3), 744–755.
doi:10.3758/s13428-014-0498-3
Choi, J. K. & Ji, Y. G. (2015). Investigating the importance of trust on adopting an autonomous
vehicle. International Journal of Human-Computer Interaction, 31(10), 692–702. doi:
10.1080/10447318.2015.1070549
Cramer, S. & Klohr, J. (2019). Announcing automated lane changes: Active vehicle roll
motions as feedback for the driver. International Journal of Human-Computer
Interaction, 35(11), 980–995. doi:10.1080/10447318.2018.1561790
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of infor-
mation technology. MIS Quarterly, 13(3), 319–340.
De Freitas, J., Anthony, S. E., & Alvarez, G. A. (2019). Doubting driverless dilemmas. PsyArXiv, 1–5.
De Kosnik, A. (2014). Disrupting technological privilege: The 2013–2014 San Francisco
Google bus protests. Performance Research, 19(6), 99–107. doi:10.1080/13528165.2014.985117
de Visser, E. J., Monfort, S. S., Goodyear, K., Lu, L., O’Hara, M., Lee, M. R., … Krueger,
F. (2017). A little anthropomorphism goes a long way. Human Factors, 59(1), 116–133.
doi:10.1177/0018720816687205
de Visser, E. J., Pak, R., & Shaw, T. H. (2018). From ‘automation’ to ‘autonomy’: The impor-
tance of trust repair in human–machine interaction. Ergonomics, 0139, 1–19. doi:10.10
80/00140139.2018.1457725
Domeyer, J., Dinparastdjadid, A., Lee, J. D., & Douglas, G. (2019). Proxemics and kinesics
in automated vehicle-pedestrian communication: Representing ethnographic observa-
tions. Transportation Research Record. doi:10.1177/0361198119848413
Domeyer, J., Lee, J. D., & Toyoda, H. (2020). Vehicle automation-other road user communica-
tion and coordination: Theory and mechanisms. IEEE Access, 8, 19860–19872.
Driver, J., Davis, G., Ricciardelli, P., Kidd, P., Maxwell, E., & Baron-Cohen, S. (1999). Gaze
perception triggers reflexive visuospatial orienting. Visual Cognition, 6(5), 509–540.
Dzindolet, M. T., Pierce, L. G., Beck, H. P., Dawe, L. A., & Anderson, B. W. (2001).
Predicting misuse and disuse of combat identification systems. Military Psychology,
13(3), 147–164.
Earle, T. C., Siegrist, M., & Gutscher, H. (2010). Trust, risk perception and the TCC model of
cooperation. In M. Siegrist, T. C. Earle, & H. Gutscher (Eds.), Trust in Risk Management:
Uncertainty and Scepticism in the Public Mind (pp. 1–50). London: Earchscan.
Elsner, C., Falck-Ytter, T., & Gredebäck, G. (2012). Humans anticipate the goal of other peo-
ple’s point-light actions. Frontiers in Psychology, 3, 120. doi: 10.3389/fpsyg.2012.00120
Emirbayer, M. & Mische, A. (1998). What is agency? American Journal of Sociology, 103(4),
962–1023.
Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human
Factors, 37(1), 32–64. doi:10.1518/001872095779049543
Endsley, M. R. (2013). Situation awareness. In J. D. Lee & A. Kirlik (Eds.), The Oxford
Handbook of Cognitive Engineering (pp. 88–108). New York: Oxford University Press.
Endsley, M. R. (2016). From here to autonomy. Human Factors, 59(1), 5–27.
doi:10.1177/0018720816681350
Engström, J., Bärgman, J., Nilsson, D., Seppelt, B., Markkula, G., Piccinini, G. B., & Victor,
T. (2017). Great expectations: A predictive processing account of automobile driv-
ing. Theoretical Issues in Ergonomics Science, 19(2), 154–194. doi:10.1080/14639
22X.2017.1306148
Ert, E., Fleischer, A., & Magen, N. (2016). Trust and reputation in the sharing economy:
The role of personal photos in Airbnb. Tourism Management, 55, 62–73. doi:10.1016/j.
tourman.2016.01.013
Etzioni, A. & Etzioni, O. (2016). AI assisted ethics. Ethics and Information Technology,
18(2), 149–156. doi:10.1007/s10676-016-9400–6
Fehr, E. & Camerer, C. F. (2007). Social neuroeconomics: The neural circuitry of social pref-
erences. Trends in Cognitive Sciences, 11(10), 419–427. doi:10.1016/j.tics.2007.09.002
Flach, J. M., Bennett, K. B., Woods, D. D., & Jagacinski, R. J. (2015). Interface design: A control
theoretic context for a triadic meaning processing approach. The Cambridge Handbook
of Applied Perception Research, 647–668. doi:10.1017/CBO9780511973017.040
Gefen, D., Karahanna, E., & Straub, D. W. (2003). Trust and TAM in online shopping: An
integrated model. MIS Quarterly, 27(1), 51–90.
Ghazizadeh, M., Lee, J. D., & Boyle, L. N. (2012). Extending the technology acceptance
model to assess automation. Cognition, Technology & Work, 14(1), 39–49. doi:10.1007/
s10111-011-0194-3
Gibson, J. J. (1962). Observations on active touch. Psychological Review, 69(6), 477–491.
Guznov, S., Lyons, J., Nelson, A., & Woolley, M. (2016). The effects of automation
error types on operators’ trust and reliance. In S. Lackey & R. Shumaker (Eds.),
Lecture Notes in Computer Science (Vol. 9740, pp. 116–124). Berlin: Springer.
doi:10.1007/978-3-319–39907-2_11
Haggard, P. (2017). Sense of agency in the human brain. Nature Reviews Neuroscience, 18(4),
197–208. doi:10.1038/nrn.2017.14
Haggard, P. & Tsakiris, M. (2009). The experience of agency: The experience of agency
feelings, judgments, and responsibility. Current Directions in Psychological Science,
18(4), 242–246.
90 Human Factors for Automated Vehicles
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., &
Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot
interaction. Human Factors, 53(5), 517–527. doi:10.1177/0018720811417254
Hancock, P. A., Nourbakhsh, I., & Stewart, J. (2019). On the future of transportation in an
era of automated and autonomous vehicles. Proceedings of the National Academy of
Sciences, 116(16), 7684–7691. doi:10.1073/pnas.1805770115
Hayashi, N., Ostrom, E., Walker, J., & Yamagishi, T. (1999). Reciprocity, trust, and the sense
of control: A cross-societal study. Rationality and Society, 11(1), 27–46.
Hoff, K. A. & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors
that influence trust. Human Factors, 57(3), 407–434. doi:10.1177/0018720814547570
Hogarth, R. M., Lejarraga, T., & Soyer, E. (2015). The two settings of kind and wicked
learning environments. Current Directions in Psychological Science, 24(5), 379–385.
doi:10.1177/0963721415591878
Hu, P. & Young, J. (1999). Summary of Travel Trends: 1995 Nationwide Personal
Transportation Survey. Washington, DC: Federal Highway Administration.
Inbar, O. & Tractinsky, N. (2009). The incidental user. Interactions, 16(4), 56–59.
Jean-Francois, B., Azim, S., & Iyad, R. (2016). The Social dilemma of autonomous vehicles.
Science, 352(6293), 1573.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Macmillan.
Kahneman, D. & Klein, G. A. (2009). Conditions for intuitive expertise: A failure to disagree.
American Psychologist, 64(6), 515–526. doi:10.1037/a0016755
Kim, J. & Moon, J. Y. (1998). Designing towards emotional usability in customer interfaces—
Trustworthiness of cyber-banking system interfaces. Interacting with Computers,
10(1), 1–29.
Kim, P. H. & Cooper, C. D. (2009). The repair of trust: A dynamic bilateral perspective and
multilevel conceptualization. Academy of Management Review, 34(3), 401–422.
Klein, G. A., Woods, D. D., Bradshaw, J. M., Hoffman, R. R., Feltovich, P. J., Hoffman, R. R.,
… Ford, K. M. (2004). Ten challenges for making automation a “Team Player” in joint
human-agent activity. IEEE Intelligent Systems, 19(6), 91–95.
Knoblich, G. & Sebanz, N. (2008). Evolving intentions for social interaction: From entrain-
ment to joint action. Philosophical Transactions of the Royal Society B-Biological
Sciences, 363(1499), 2021–2031. doi:10.1098/rstb.2008.0006
Kohn, S. C., Quinn, D., Pak, R., de Visser, E. J., & Shaw, T. H. (2018). Trust repair strategies
with self-driving vehicles: An exploratory study. Proceedings of the Human Factors and
Ergonomics Society Annual Meeting, 62(1), 1108–1112. doi:10.1177/1541931218621254
Kosfeld, M., Heinrichs, M., Zak, P. J., Fishbacher, U., & Fehr, E. (2005). Oxytocin increases
trust in humans. Nature, 435, 673–676.
Langton, S. R. H. & Bruce, V. (1999). Reflexive visual orienting in response to social atten-
tion of others. Visual Cognition, 6(5), 541–567.
Lasseter, J. (1987). Principles of traditional animation applied to 3D computer animation.
ACM SIGGRAPH Computer Graphics, 21(4), 35–44. doi:10.1145/37402.37407
Lee, J. D. (2006). Affect, attention, and automation. In A. Kramer, D. Wiegmann, & A. Kirlik
(Eds.), Attention: From Theory to Practice (pp. 73–89). New York: Oxford University
Press.
Lee, J. D. (2018). Perspectives on automotive automation and autonomy. Journal of Cognitive
Engineering and Decision Making, 12(1), 53–57. doi:10.1177/1555343417726476
Lee, J. D. (2019). Trust and the teleology of technology. Ergonomics, 62(4), 500–501. doi:10.
1080/00140139.2019.1563332
Lee, J. D. & Kolodge, K. (2019). Exploring trust in self-driving vehicles with text analysis.
Human Factors. doi:10.1177/0018720819872672
Lee, J. D. & Moray, N. (1992). Trust, control strategies and allocation of function in human-
machine systems. Ergonomics, 35(10), 1243–1270. doi:10.1080/00140139208967392
Driver Trust in ACIVs 91
Lee, J. D. & See, K. A. (2004). Trust in automation: Designing for appropriate reliance.
Human Factors, 46(1), 50–80.
Lee, J. D. & Seppelt, B. D. (2012). Human factors and ergonomics in automation design. In
G. Salvendy (Ed.), Handbook of Human Factors and Ergonomics (pp. 1615–1642).
Hoboken, NJ: Wiley. doi:10.1002/9781118131350.ch59
Li, M., Holthausen, B. E., Stuck, R. E., & Walker, B. N. (2019). No risk no trust: Investigating
perceived risk in highly automated driving. In Proceedings of the 11th International
Conference on Automotive User Interfaces and Interactive Vehicular Applications
(pp. 177–185). New York: ACM.
Liu, P., Ma, Y., & Zuo, Y. (2019). Self-driving vehicles: Are people willing to trade risks
for environmental benefits? Transportation Research Part A: Policy and Practice,
125(March), 139–149. doi:10.1016/j.tra.2019.05.014
Madhavan, P. & Wiegmann, D. A. (2007). Similarities and differences between human–
human and human–automation trust: An integrative review. Theoretical Issues in
Ergonomics Science, 8(4), 277–301. doi:10.1080/14639220500337708
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational
trust. Academy of Management Review, 20(3), 709–734.
McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and validating trust mea-
sures for e-commerce: An integrative typology. Information Systems Research, 13(3),
334–359.
Meyer, J. & Lee, J. D. (2013). Trust, reliance, and compliance. In A. Kirlik & J. D. Lee (Eds.),
The Oxford Handbook of Cognitive Engineering (pp. 109–124). New York: Oxford
University Press.
Mikolajczak, M., Gross, J. J., Lane, A., Corneille, O., de Timary, P., & Luminet, O. (2010).
Oxytocin makes people trusting, not gullible. Psychological Science, 21(8), 1072–1074.
doi:10.1177/0956797610377343
Miller, H. J. (2013). Beyond sharing: Cultivating cooperative transportation systems through
geographic information science. Journal of Transport Geography, 31, 296–308.
Montague, E., Xu, J., & Chiou, E. (2014). Shared experiences of technology and trust: An
experimental study of physiological compliance between active and passive users
in technology-mediated collaborative encounters. IEEE Transactions on Human-
Machine Systems, 44(5), 614–624. doi:10.1109/THMS.2014.2325859
Morando, A., Victor, T., & Dozza, M. (2016). Drivers anticipate lead-vehicle conflicts dur-
ing automated longitudinal control: Sensory cues capture driver attention and pro-
mote appropriate and timely responses. Accident Analysis & Prevention, 97, 206–219.
doi:10.1016/j.aap.2016.08.025
Muir, B. M. (1987). Trust between humans and machines, and the design of decision aids.
International Journal of Man-Machine Studies, 27, 527–539.
Muir, B. M. & Moray, N. (1996). Trust in automation. Part II. Experimental studies of
trust and human intervention in a process control simulation. Ergonomics, 39(3),
429–460.
Mulder, M., Abbink, D. A., & Boer, E. R. (2012). Sharing control with haptics: Seamless
driver support from manual to automatic control. Human Factors, 54(2), 786–798.
Nass, C., Jonsson, I. M., Harris, H., Reaves, B., Endo, J., Brave, S., & Takayama, L. (2005).
Improving automotive safety by pairing driver emotion and care voice emotion.
Conference on Human Factors in Computing Systems (pp. 1973–1976). New York:
ACM.
Nass, C. & Moon, Y. (2000). Machines and mindlessness: Social responses to computers.
Journal of Social Issues, 56(1), 81–103. doi:10.1111/0022–4537.00153
Nave, G., Camerer, C., & McCullough, M. (2015). Does oxytocin increase trust in humans?
A critical review of research. Perspectives on Psychological Science, 10(6), 772–789.
doi:10.1177/1745691615600138
92 Human Factors for Automated Vehicles
Sebanz, N., Bekkering, H., & Knoblich, G. (2006). Joint action: Bodies and minds moving
together. Trends in Cognitive Sciences, 10(2), 70–76.
Sebok, A. & Wickens, C. D. (2017). Implementing lumberjacks and black swans into model-
based tools to support human-automation interaction. Human Factors, 59(2), 189–203.
doi:10.1177/0018720816665201
Seemann, A. (2009). Joint agency: Intersubjectivity, sense of control, and the feeling of trust.
Inquiry, 52(5), 500–515. doi:10.1080/00201740903302634
Sellaro, R., van Dijk, W. W., Paccani, C. R., Hommel, B., & Colzato, L. S. (2014). A ques-
tion of scent: Lavender aroma promotes interpersonal trust. Frontiers in Psychology,
5(OCT), 1–5. doi:10.3389/fpsyg.2014.01486
Seppelt, B. D. & Lee, J. D. (2007). Making adaptive cruise control (ACC) limits visible.
International Journal of Human-Computer Studies, 65(3), 192–205.
Seppelt, B. D. & Lee, J. D. (2019). Keeping the driver in the loop: Dynamic feedback to sup-
port appropriate use of imperfect vehicle control automation. International Journal of
Human-Computer Studies, 125, 66–80. doi:10.1016/J.IJHCS.2018.12.009
Shariff, A., Bonnefon, J. F., & Rahwan, I. (2017). Psychological roadblocks to the adop-
tion of self-driving vehicles. Nature Human Behaviour, 1(10), 694–696. doi:10.1038/
s41562-017-0202–6
Sheridan, T. B. & Ferrell, W. R. (1974). Man-Machine Systems: Information, Control, and
Decision Models of Human Performance. Cambridge, MA: MIT Press.
Sidarus, N., Vuorre, M., Metcalfe, J., & Haggard, P. (2017). Investigating the prospective
sense of agency: Effects of processing fluency, stimulus ambiguity, and response con-
flict. Frontiers in Psychology, 8, 1–15. doi:10.3389/fpsyg.2017.00545
Slovic, P. (1987). Perception of risk. Science, 236(4799), 280–285.
Slovic, P. (1993). Perceived risk, trust, and democracy. Risk Analysis, 13(6), 675–682.
Slovic, P. (1999). Trust, emotion, sex, politics, and science: Surveying the risk-assessment
battlefield. Risk Analysis, 19(4), 689–701.
Slovic, P., Finucane, M. L., Peters, E., & MacGregor, D. G. (2004). Risk as analysis and risk
as feelings: Some thoughts about affect, reason, risk, and rationality. Risk Analysis,
24(2), 311–322.
Tomlinson, E. C. & Mayer, R. C. (2009). The role of causal attribution dimensions in trust repair.
Academy of Management Review, 34(1), 85–104. doi:10.5465/AMR.2009.35713291
van der Wel, R. P. R. D. (2015). Me and we: Metacognition and performance evaluation of
joint actions. Cognition, 140, 49–59. doi:10.1016/j.cognition.2015.03.011
Verberne, F. M. F., Ham, J., & Midden, C. J. H. (2012). Trust in smart systems: Sharing driv-
ing goals and giving information to increase trustworthiness and acceptability of smart
systems in cars. Human Factors, 54(5), 799–810. doi:10.1177/0018720812443825
Victor, T. W., Tivesten, E., Gustavsson, P., Johansson, J., Sangberg, F., & Ljung Aust, M.
(2018). Automation expectation mismatch: Incorrect prediction despite eyes on threat
and hands on wheel. Human Factors, 60(8), 1095–1116. doi:10.1177/0018720818788164
Weast, J., Yurdana, M., & Jordan, A. (2016). A Matter of Trust: How Smart Design Can
Accelerate Automated Vehicle Adoption. Intel White Paper. Santa Clara, CA.
doi:10.1016/S1353–4858(19)30060–1
Wegner, D. M., Sparrow, B., & Winerman, L. (2004). Vicarious agency: Experiencing control
over the movements of others. Journal of Personality and Social Psychology, 86(6),
838–848. doi:10.1037/0022–3514.86.6.838
Wen, W. & Haggard, P. (2018). Control changes the way we look at the world. Journal of
Cognitive Neuroscience, 30(4), 603–619. doi:10.1162/jocn
Winston, J. S., Strange, B. A., O’Doherty, J., & Dolan, R. J. (2002). Automatic and intentional
brain responses during evaluation of trustworthiness of faces. Nature Neuroscience,
5(3), 277–283. doi:10.1038/nn816
94 Human Factors for Automated Vehicles
Woods, D. D. (2015). Four concepts for resilience and the implications for the future of resil-
ience engineering. Reliability Engineering and System Safety, 141, 5–9.
Wooldridge, M. & Jennings, N. R. (1995). Intelligent agents: Theory and practice. Knowledge
Engineering Review, 10(2), 115–152.
Wu, K., Zhao, Y., Zhu, Q., Tan, X., & Zheng, H. (2011). A meta-analysis of the impact of trust
on technology acceptance model: Investigation of moderating influence of subject and
context type. International Journal of Information Management, 31(6), 572–581.
Zuboff, S. (1988). In the Age of Smart Machines: The Future of Work, Technology and Power.
New York: Basic Books.
5 Public Opinion About
Automated and
Self-Driving Vehicles
An International Review
Mitchell L. Cunningham
The University of Sydney
Michael A. Regan
University of New South Wales
CONTENTS
Key Points................................................................................................................. 95
5.1 Introduction..................................................................................................... 96
5.2 Overall Acceptability...................................................................................... 97
5.2.1 Perceived Benefits ............................................................................... 97
5.2.2 Perceived Concerns............................................................................. 97
5.2.3 Activities When Riding in an AV........................................................ 98
5.3 Public Opinion Towards AVs as a Function of Sociodemographic
Characteristics ................................................................................................ 99
5.4 Country Differences in Public Opinion Towards AVs.................................. 100
5.5 WTP for AVs.................................................................................................. 101
5.6 Acceptance of AVs after Experiencing the Technology ............................... 103
5.7 Conclusion..................................................................................................... 104
Acknowledgement.................................................................................................. 105
References .............................................................................................................. 105
KEY POINTS
• The predicted benefits of AVs are unlikely to materialize unless there is
societal acceptability and acceptance of them.
• To this end, it is important to gauge and understand public opinion about
them, before and after use.
• Such knowledge can benefit governments in making future planning and
investment decisions; benefit industry in helping technology developers
design and refine their products; and benefit research organizations in iden-
tifying new directions for research and development.
• Despite some variation across countries, the general public appears largely
positive about the potential benefits that may be derived from AVs, although
there remain some significant concerns (e.g., in relation to safety) that may
hinder their uptake and use.
5.1 INTRODUCTION
Automated vehicles (AVs) are currently being trialed and deployed in many countries
around the world. While some predict that AVs capable of driving themselves in all
situations will be on our roads in the (relatively) near future (e.g., 2030; Underwood,
2014), others predict they will not be ubiquitous until closer to 2050 or afterwards
(Litman, 2018). Technologies that automate human driving functions are expected to
yield a variety of societal benefits. These include improved safety, improved mobility
(e.g., for the young, elderly, disabled, etc.), reduced congestion, improved productivity,
and reduced fuel consumption (see for review Fagnant & Kockelman, 2015). However,
these predicted benefits are at present largely speculative and yet to be proven.
In the lead up to the deployment of AVs, it is important to gauge public opinion
about them, even if the public has had little or no direct exposure to them (Regan,
Horberry, & Stevens, 2014). Gauging public opinion about them can help prepare
society for their rollout in a number of ways. For example, an understanding of pub-
lic opinion about AVs can provide benefits for governments (e.g., to inform future
planning and investment decisions), industry (e.g., to help automobile manufacturers
design and tailor their products to the perceived needs of end users), and academia
(e.g., to flag important directions for further research).
Ultimately, the predicted benefits of AVs will never be realized unless there is
societal acceptance of them (Regan et al., 2014). Without it, drivers may not use them,
may refuse to travel in them, or may use them in ways unintended by designers;
and, in doing so, negate some or even all of their predicted benefits (Regan et al.,
2014; see also this Handbook, Chapters 6, 9). In the context of this discussion, it is
important to distinguish between two terms that are sometimes used interchange-
ably: “acceptance” and “acceptability”. Acceptance of a technology is different from
a priori acceptability of the technology. The latter refers to the judgment, evaluation,
and behavioral reactions toward a technology before use, whereas the former relates
to those reactions towards a technology after use (Adell, Várhelyi, & Nilsson, 2014).
Since AVs (at Level 2 onwards according to SAE, 2014) are only starting to appear
around the world, the examination of public acceptability of these technologies is
an important research exercise at this point in time: the intention to use emerg-
ing technology, such as AVs, is likely to be predicted, to some degree, by users’ a
priori acceptability and associated opinions and attitudes towards them (see Payre,
Cestac, & Delhomme, 2014; also see Chapter 4 in this Handbook).
In this chapter we review what is known, internationally, about public opinion and
attitudes towards AVs.1 We review, first, empirical research that has examined facets
of public opinion towards this technology, including (but not limited to) benefits the
public agree/disagree the technology will bring, and concerns and risks the public
believe to be associated with the technology which may be curtailing trust and inten-
tion to use it. Second, we examine how these AV-related opinions and attitudes vary
as a function of key sociodemographic variables such as gender and age, to help us
build a profile of the individuals who may be most (or least) receptive to the tech-
nology. Third, we review a number of international studies which have investigated
cross-cultural differences in AV-related opinions and attitudes, to help us understand
which countries/regions may be most ready and prepared for the introduction of
AVs from a public acceptance point of view. Fourth, we examine willingness to pay
(WTP) for the technology and, specifically, what proportions of the public are likely
to pay for the technology; and precisely how much they are willing to pay for dif-
ferent levels of vehicle automation. Fifth, we review the relatively smaller body of
literature on AV acceptance to gain an understanding of how actually experiencing
AV technology influences opinions and attitudes towards the technology. Finally, we
conclude the chapter by noting some key findings and limitations that emerge from
the literature and suggest some areas for future research.
1 From this point on in the chapter, unless specified otherwise, “AV” will refer to fully automated, fully
self-driving vehicles.
5.2 OVERALL ACCEPTABILITY
5.2.1 Perceived Benefits
The emerging field of research highlights a variety of attitudes and opinions that
the public has towards AV technology. Of the many benefits AVs are
expected to bring about, the extant research suggests the public believes that some
are more likely to come to fruition than others.
In an earlier study by Schoettle and Sivak (2014a), for example, using an online
survey of 1,533 participants across the United States, United Kingdom, and Australia,
participants were asked “How likely do you think it is that the following benefits will
occur when using completely self-driving vehicles?” The study found that participants
were most likely to believe that AVs would bring about better fuel economy (72% said
“likely”), fewer crashes (70.4%), and reduced severity of crashes (71.7%), but were least
likely to believe that AVs would bring about shorter travel times (43.3%). Interestingly,
in contrast to these findings, Cunningham, Regan, Horberry, Weeratunga, and Dixit
(2019a) found that, among 6,133 survey respondents from Australia and New Zealand,
the predicted benefits of better fuel economy and enhanced safety were among the
least endorsed by participants. Specifically, the study found that less than half (42%) of
the respondents believed AVs would be safer than conventional cars, and only 38.9%
believed that they would consume less fuel than conventional cars. However, the study
found that the benefit participants most strongly believed AVs would bring about was
enhanced mobility for people with driving impairments or restrictions (76.7%).
sitting and observing the scenery, interacting with others in the vehicle, eating and
drinking, or doing nothing at all. Conversely, tasks such as doing work, grooming, or
sleeping when riding in an AV were among the least likely to be undertaken. These
findings are noteworthy: increased productivity during commutes is a commonly
purported benefit of AVs, yet drivers may not actually be inclined to engage
in work-related activities (for further discussion, see Singleton, 2019).
Understanding what activities people would like to undertake when riding in an
AV, or when supported by high levels of vehicle automation, has practical implica-
tions for vehicle design. Such findings can inform the design of AVs to optimize
acceptance and facilitate uptake. For example, given the research suggesting
the high desire to interact with other passengers when riding in an AV, vehicles could
be designed to include swivel seats so that front and backseat passengers may more
readily interact and/or reclinable seats to support comfort while enjoying the scenery
(Large, Burnett, Morris, Muthumani, & Matthias, 2017; Cunningham et al., 2019a).
Of course, such design features would also need to accommodate cases in which the
driver must re-engage in vehicle control as quickly and effectively as possible. On the other hand,
reviewed research suggests that using the time riding in an AV to do work may not be
a priority for the public at this point in time. Therefore, for designers, designing in-
vehicle human–machine interfaces (HMIs) in AVs to support work-related activities
may not currently be a priority. Recognizing the different activities that drivers are
likely to adopt during periods of automated driving—and incorporating these within
the design process as early as possible—may help to ensure the acceptance and suc-
cessful marketing of AVs (Cunningham et al., 2019b).
5.4 COUNTRY DIFFERENCES IN PUBLIC OPINION TOWARDS AVs
Only a handful of studies have specifically examined how attitudes and opinions
towards AVs differ across countries. The evidence reveals some differences in AV
acceptability across countries. Schoettle and Sivak (2014a), for
example, examined public opinion about AVs—among 1,533 respondents across the
United States, United Kingdom, and Australia—and found that the highest propor-
tions of respondents with a positive view of the technology were those from Australia
(61.9%), then the United States (56.3%), and then the United Kingdom (52.2%). In line
with this, when respondents were asked how concerned they were about riding in an
AV, the largest proportion of U.S. respondents reported being “very concerned” (35.9%),
whereas the largest proportions from the U.K. and Australia reported being “moderately
concerned” (31.1%) and “slightly concerned” (31.1%), respectively (Schoettle & Sivak,
2014a). Moreover, the study
found that different AV-related issues attracted different levels of concern across
the countries, particularly in the United States and United Kingdom. For example,
respondents from the United States (compared to U.K. respondents) were more
likely to express concern regarding issues such as (1) legal liability/responsibility
for drivers/owners, (2) data privacy, and (3) interactions with non-AVs. On the other
hand, respondents from the U.K. were less likely to be concerned about issues such
as system and vehicle security (from hackers), as well as interactions between the AV
and pedestrians/cyclists (Schoettle & Sivak, 2014a).
A follow-up study by Schoettle and Sivak (2014b) investigated differences in pub-
lic opinion towards AVs among 1,722 respondents across six countries/regions—
China, India, Japan, the United States, the United Kingdom, and Australia—which
exposed some further interesting cross-country differences. For example, some quite
large differences were found in the proportions of respondents that had overall posi-
tive opinions about AVs, with China showing the largest proportion (87.2%) and Japan
the lowest (49.8%). Moreover, Indian and Chinese respondents were most likely to
believe that AVs will bring about all of the potential AV-related benefits probed in the
survey (e.g., fewer crashes, reduced traffic congestion), while Japanese respondents
appeared to be the least likely to believe these would come to fruition. However,
interestingly, when asked about overall concern associated with riding in an AV,
Indian respondents were 3.3 times more likely to report being “very concerned” than
Chinese respondents, and Japanese respondents tended to be the least concerned
about all AV-related issues probed in the study (e.g., safety consequences of sys-
tem failure, liability, etc.) (Schoettle & Sivak, 2014b). This study builds upon earlier
work (i.e., Schoettle & Sivak, 2014a) in highlighting differences between countries/
regions in their attitudes and opinions towards AVs.
In their online survey, Kyriakidis et al. (2015) collected 5,000 responses spanning 109
countries, including 40 with at least 25 respondents, to allow an examination of cross-
national differences. Although specific countries were not identified in the publication
itself, a number of interesting findings were derived from a correlational analysis of
different aspects of AV acceptability and national statistics (e.g., traffic deaths, income,
education). The largest correlations were in relation to concern about data being trans-
mitted to different parties (e.g., surrounding vehicles, vehicle manufacturers, roadway
organizations). Specifically, respondents from more developed countries (with lower
accident death rates, and higher levels of education and income) were more likely to
express concern about data transmission to these different parties. Interestingly, these
national statistics showed only small to moderate correlations with levels of concern
with AV-related issues such as safety, legal liability/responsibility, and misuse.
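To make the form of that analysis concrete, the sketch below computes a country-level correlation of the kind Kyriakidis et al. (2015) report. All values and the helper function are invented for illustration; they are not the study's data (the countries were not identified in the publication).

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical country-level data (one row per country); illustrative only.
traffic_deaths_per_100k = [2.8, 4.1, 5.6, 9.2, 12.0, 16.6, 22.4]
concern_data_transmission = [4.1, 3.9, 3.8, 3.3, 3.1, 2.7, 2.5]  # 1-5 scale

r = pearson_r(traffic_deaths_per_100k, concern_data_transmission)
print(f"r = {r:.2f}")  # negative: countries with fewer deaths report more concern
```

With only a handful of aggregated data points, such correlations are of course fragile; the study itself drew on 40 countries with at least 25 respondents each.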
technology than for conventional (manually driven) vehicles. In their international sur-
vey, Schoettle and Sivak (2014a) asked participants how much more they would be will-
ing to pay for an AV (compared to a car they own or lease in the future). Results showed
that more than half (56.6%) of the participants were not willing to pay any extra (i.e.,
$0) for AV technology, with 43.4% reporting they would pay extra (i.e., >$0). In a recent
Australian study (Cunningham et al., 2019a), respondents (5,089 in total) were asked
about their WTP for both partially- and fully automated cars. In relation to the former,
Cunningham et al. (2019a) found that 34.5% of participants were willing to pay more
for a partially automated car than for a car without partial automation, and this propor-
tion increased to 42.4% when probed about a fully automated car. Interestingly, this
figure is in contrast to that found in the international survey conducted by Kyriakidis
et al. (2015), with 78% of their 5,000-respondent sample reporting they would pay extra
for a fully automated vehicle technology to be installed in their vehicle.
This discrepancy between the two studies may be due, in part, to sampling differ-
ences; for example, the median age (30 years) of respondents was considerably lower in
the Kyriakidis et al. (2015) study than in other studies (e.g., Cunningham et al., 2019a;
Mdn age = 44 years). As noted earlier in this chapter, younger people appear to be more
receptive to AV technology (and therefore may be more willing to pay for it).
In addition, a number of recent U.S. survey studies suggest that significant proportions
of people were not willing to pay any extra for a fully automated vehicle over the price
of a conventional vehicle (with no or minimal autonomous capabilities) (e.g., 59%; Bansal &
Kockelman, 2017). Together, this research highlights that, while there seemingly is a
considerable market for AV technology, there remain significant proportions of the
public reluctant to pay for the technology and the capabilities it may afford.
The next empirical question that derives from this line of research relates to pre-
cisely how much more people are willing to pay for AV technology. In their survey
study, Schoettle and Sivak (2014a) found that 25% of U.S. participants were willing
to pay at least $US 2,000 more for fully autonomous capabilities than for a con-
ventional car, with 10% willing to pay at least $US 5,800 more. Their findings also
showed that 10% of U.K. and Australian respondents were willing to pay at least
$US 5,130 and $US 9,400 extra, respectively (Schoettle & Sivak, 2014a). Bansal
and Kockelman (2017) surveyed 2,167 U.S. respondents and estimated the average
WTP to add full vehicle automation to their current vehicle was $US 5,857, which
increased considerably to $US 14,196 once respondents who were not willing to
pay anything for vehicle automation were excluded from their analysis. In an online
study of 347 Texans, Bansal, Kockelman, and Singh (2016) compared the WTP for
partial- and full vehicle automation, with findings suggesting that people perceive
the monetary value of full vehicle automation (WTP = $US 7,253) to be considerably
greater than partial automation (WTP = $US 3,300).
These findings are supported by a recent purchase discrete choice experiment
of 1,260 U.S. respondents, demonstrating that the average household is willing to
pay more for full vehicle automation ($US 4,900) than for partial automation ($US
3,500) (Daziano, Sarrias, & Leard, 2017). In their survey of 5,089 Australians,
Cunningham et al. (2019a) found respondents (who were willing to pay more for
vehicle automation) were willing to pay a median value of $AU 5,000 (~$US 3,870)
more for full vehicle automation than for their current vehicle. This value lies below
estimates of WTP derived from U.S. studies (e.g., $US 4,900–7,253; Daziano et al.,
2017; Bansal et al., 2016). Moreover, the mean WTP value found in Cunningham
et al. (2019a), $AU 14,919 (~$US 11,250), is roughly 21% less than the correspond-
ing estimate in a recent large U.S. study ($US 14,196; Bansal & Kockelman, 2017).
Although median and mean WTP values for full vehicle automation appear to be
lower than those deriving from U.S. studies, there appears to be a small proportion
of Australians willing to pay relatively large amounts for the luxury of full automa-
tion, with 9% of respondents willing to pay over $US 30,000 more than the price of
their current car. Interestingly, in a large international survey, only 5% of respondents
were willing to pay this amount (Kyriakidis et al., 2015).
Together, these findings suggest that there is a market for automated driving tech-
nologies across different countries but that there is heterogeneity in WTP estimates
for vehicle automation, with large groups of respondents that are not willing to pay
any more for vehicle automation.
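Much of the heterogeneity in reported WTP figures is a direct consequence of how zero-inflated, right-skewed responses are summarized. A minimal sketch with invented responses (not data from any study cited above) shows how the same sample yields very different headline numbers:

```python
import statistics

# Hypothetical WTP responses (USD): many zeros, a long right tail.
# Illustrative only -- not data from any study cited in this chapter.
wtp = [0] * 590 + [2000] * 200 + [5000] * 120 + [10000] * 60 + [30000] * 30

mean_all = statistics.mean(wtp)               # zero-WTP respondents included
positive = [x for x in wtp if x > 0]          # zero-WTP respondents excluded
mean_pos = statistics.mean(positive)
median_pos = statistics.median(positive)

print(f"Share willing to pay anything:  {len(positive) / len(wtp):.0%}")
print(f"Mean WTP, zeros included:   ${mean_all:,.0f}")
print(f"Mean WTP, zeros excluded:   ${mean_pos:,.0f}")    # much larger
print(f"Median WTP, zeros excluded: ${median_pos:,.0f}")  # below the mean (right skew)
```

This is one reason a mean computed over payers only (as in the $US 14,196 figure of Bansal & Kockelman, 2017) will generally sit far above means and medians computed over full samples.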
trust, perceived usefulness, safety, and intention to use the technology were gauged
through a questionnaire before and after the experience in the vehicle. The authors
found that levels of trust, perceived usefulness and perceived ease of use, but not the
intention to use the technology, increased significantly (although effects were mod-
est) after the ride (Xu et al., 2018). Like Nordhoff et al. (2018), Xu and colleagues
(2018) suggest that firsthand experience with the AV did not have a large effect on
trust, intention to use the technology, etc., due to riders likely holding unrealistically
high expectations of the technology.
5.7 CONCLUSION
The overarching aim of this chapter was to review what is known, internationally,
about public opinion and attitudes towards AVs. First, we highlighted some different
benefits and concerns the public has associated with the introduction and use of AVs,
demonstrating that, although the public tends to be largely accepting of AV technology
(and believes many AV-related benefits will come to fruition), it still has a number
of concerns about the technology (e.g., in relation to safety). Second, we illustrated
that AV acceptability may, in part, vary as a function of different sociodemographic
variables, such as gender and age. Research suggests that males, and younger indi-
viduals, are particularly receptive to the introduction of AVs. Third, in addition to
varying at the individual level, public opinion and attitudes towards
the technology differ somewhat across countries/regions. This latter information can
help us understand which countries/regions may be most ready and prepared for the
introduction of AVs from a public acceptance point of view. Fourth, we examined
literature on WTP for AV technology, which revealed that, although there appears
to be a market for the technology, there are still significant proportions of the public
which may be hesitant to pay extra for the technology. Moreover, actual WTP values
for the technology seem to vary widely. Finally, we reviewed the relatively smaller
body of literature on AV acceptance to gain an understanding of how actually expe-
riencing riding in a conditionally or highly automated vehicle may influence opin-
ions and attitudes towards the technology. This literature suggests that, with current
automated driving capabilities, experiences riding in such vehicles may have a posi-
tive effect on people’s opinions towards the technology, but in other situations may
have a negative effect (e.g., relating to issues of efficiency or usefulness compared to
current transport options). The findings from this emerging literature seem to vary
from study to study.
The findings reported in this chapter have two main limitations. First, much of
the literature reviewed here was survey-based, correlational, and cross-sectional,
preventing us from drawing conclusions about temporal associations between
variables or about causality (e.g., we cannot conclude that younger age causes higher
acceptability of AVs). Future research in this area would benefit greatly from
prospective designs that enable researchers to track how AV-related
opinions and attitudes change over time, and the variables that may influence
these changes (e.g., does acceptability of the technology change as people get
older, or as people become increasingly familiar with the technology?). Second, highly
and fully automated passenger vehicles (SAE Levels 3–5) are not yet commercially
available.
ACKNOWLEDGEMENT
Sections of this Chapter derive, in part, from Cunningham et al. (2019a; b).
REFERENCES
Abraham, H., Lee, C., Brady, S., Fitzgerald, C., Mehler, B., & Coughlin, J. F. (2017).
Autonomous vehicles and alternatives to driving: Trust, preferences, and effects of age.
Proceedings of the Transportation Research Board 96th Annual Meeting. Washington,
D.C.: Transportation Research Board.
Adell, E., Várhelyi, A., & Nilsson, L. (2014). The definition of acceptance and acceptabil-
ity. In M. A. Regan, T. Horberry, & A. Stevens (Eds.), Driver Acceptance of New
Technology: Theory, Measurement and Optimisation (pp. 11–21). Surrey, UK: Ashgate.
Bansal, P. & Kockelman, K. M. (2017). Forecasting Americans’ long-term adoption of con-
nected and autonomous vehicle technologies. Transportation Research Part A: Policy
and Practice, 95, 49–63.
Bansal, P., Kockelman, K. M., & Singh, A. (2016). Assessing public opinions of and interest
in new vehicle technologies: An Austin perspective. Transportation Research Part C:
Emerging Technologies, 67, 1–14.
Becker, F. & Axhausen, K. W. (2017). Literature review on surveys investigating the accep-
tance of automated vehicles. Transportation, 44, 1293–1306.
Cunningham, M. L., Regan, M. A., Horberry, T., Weeratunga, K., & Dixit, V. (2019a). Public
opinion about automated vehicles in Australia: Results from a large-scale national sur-
vey. Transportation Research Part A: Policy and Practice, 129, 1–18.
Cunningham, M. L., Regan, M. A., Ledger, S. A., & Bennett, J. M. (2019b). To buy or not
to buy? Predicting willingness to pay for automated vehicles based on public opinion.
Transportation Research Part F: Traffic Psychology and Behaviour, 65, 418–438.
Daziano, R. A., Sarrias, M., & Leard, B. (2017). Are consumers willing to pay to let cars drive
for them? Analyzing response to autonomous vehicles. Transportation Research Part C:
Emerging Technologies, 78, 150–164.
Dingus, T. A., Guo, F., Lee, S., Antin, J. F., Perez, M., Buchanan-King, M., & Hankey, J.
(2016). Driver crash risk factors and prevalence evaluation using naturalistic driving
data. Proceedings of the National Academy of Sciences, 113(10), 2636–2641.
Fagnant, D. J. & Kockelman, K. M. (2015). Preparing a nation for autonomous vehicles:
Opportunities, barriers and policy recommendations for capitalizing on self-driven
vehicles. Transportation Research Part A: Policy and Practice, 77, 1–20.
Hauk, N., Huffmeier, J., & Krumm, S. (2018). Ready to be a silver surfer? A meta-analysis on
the relationship between chronological age and technology acceptance. Computers in
Human Behavior, 84, 304–319.
Kyriakidis, M., Happee, R., & de Winter, J. C. (2015). Public opinion on automated driv-
ing: Results of an international questionnaire among 5000 respondents. Transportation
Research Part F: Traffic Psychology and Behaviour, 32, 127–140.
Large, D. R., Burnett, G., Morris, A., Muthumani, A., & Matthias, R. (2017). A longitu-
dinal simulator study to explore drivers’ behaviour during highly-automated driving.
International Conference on Applied Human Factors and Ergonomics (pp. 583–594).
Berlin: Springer.
Litman, T. (2018). Autonomous Vehicle Implementation Predictions: Implications for
Transport Planning. Victoria, BC: Victoria Transport Policy Institute.
Nordhoff, S., de Winter, J., Madigan, R., Merat, N., van Arem, B., & Happee, R. (2018).
User acceptance of automated shuttles in Berlin-Schöneberg: A questionnaire study.
Transportation Research Part F: Traffic Psychology and Behaviour, 58, 843–854.
Nordhoff, S., de Winter, J., Payre, W., van Arem, B., & Happee, R. (2019). What impressions
do users have after a ride in an automated shuttle? An interview study. Transportation
Research Part F: Traffic Psychology and Behaviour, 63, 252–269.
Payre, W., Cestac, J., & Delhomme, P. (2014). Intention to use a fully automated car: Attitudes
and a priori acceptability. Transportation Research Part F: Traffic Psychology and
Behaviour, 27, 252–263.
Regan, M. A., Horberry, T., & Stevens, A. (2014). Driver Acceptance of New Technology:
Theory, Measurement and Optimisation. Surrey, UK: Ashgate.
Schoettle, B. & Sivak, M. (2014a). A Survey of Public Opinion about Autonomous and Self-
Driving Vehicles in the US, the UK, and Australia. Ann Arbor, MI: University of
Michigan Transportation Research Institute.
Schoettle, B. & Sivak, M. (2014b). Public Opinion about Self-Driving Vehicles in China,
India, Japan, the US, the UK and Australia. Ann Arbor, MI: University of Michigan
Transportation Research Institute.
Singleton, P. A. (2019). Discussing the “positive utilities” of autonomous vehicles: Will travel-
lers really use their time productively? Transport Reviews, 39(1), 50–65.
Society of Automotive Engineers (SAE). (2014). Taxonomy and Definitions for Terms
Related to On-Road Motor Vehicle Automated Driving Systems. Warrendale, PA: SAE
International.
Underwood, S. (2014). Automated vehicle forecast: Vehicle Symposium Survey. Automated
Vehicles Symposium 2014. Burlingame, CA: AUVSI.
Vallet, M. (2013). Survey: Drivers Ready to Trust Robot Cars? Retrieved from https://www.
foxbusiness.com/features/survey-drivers-ready-to-trust-robot-cars
Xu, Z., Zhang, K., Min, H., Wang, Z., Zhao, X., & Liu, P. (2018). What drives people to accept
automated vehicles? Findings from a field experiment. Transportation Research Part
C: Emerging Technologies, 95, 320–334.
6 Workload, Distraction,
and Automation
John D. Lee
University of Wisconsin-Madison
Michael A. Regan
University of New South Wales
William J. Horrey
AAA Foundation for Traffic Safety
CONTENTS
Key Points .............................................................................................................. 108
6.1 Introduction................................................................................................... 108
6.2 Workload, Distraction, and Performance...................................................... 109
6.2.1 Workload........................................................................................... 109
6.2.1.1 Workload and the Yerkes–Dodson Law ............................ 110
6.2.1.2 Active Workload Management........................................... 112
6.2.2 Distraction......................................................................................... 112
6.2.2.1 Distraction and Types of Inattention.................................. 113
6.2.2.2 The Process of Driver Distraction...................................... 113
6.2.3 Driver Workload and Driver Distraction .......................................... 114
6.2.4 Summary .......................................................................................... 114
6.3 Types of Automation and Workload Implications......................................... 114
6.3.1 Effect of Different Levels of Automation on Workload ................... 114
6.3.2 The Interaction of Distraction, Workload, and Automation ............. 117
6.3.2.1 Automation Creating Distraction Directly......................... 118
6.3.2.2 Automation Creating Distraction Indirectly....................... 118
6.3.2.3 The Interaction of Other Mechanisms of Inattention,
Workload, and Automation ................................................ 120
6.3.3 Summary........................................................................................... 121
6.4 Managing Workload and Distraction in Automated Vehicles ...................... 121
6.5 Conclusion..................................................................................................... 122
References .............................................................................................................. 123
KEY POINTS
• Automation stands to change the fundamental aspects of the driving task as
subtasks are added or subtracted, such that the role and responsibilities of
the driver will be transformed in the coming years.
• Various types of automation promise to reduce workload by reducing the
number of tasks required of the driver; however, automation also changes
existing tasks and adds new ones, which can increase workload.
• Automation can be problematic or “clumsy” if it reduces workload in
already low workload environments, but increases workload during high
workload periods.
• Driver distraction, workload, and automation may interact with one another
in ways that can decrease safety. On the one hand, automation may demand
drivers’ attention at inopportune moments, creating a distraction that can
increase the driver’s workload and, in turn, increase risk. Alternatively,
automation may reduce drivers’ workload, inducing drivers to engage more
often and more deeply in non-driving tasks, thereby creating a distraction
and increasing risk.
• System design should consider: (1) how workload is created and re-distributed
according to changing driver tasks and roles; (2) avoiding abrupt workload
transitions and “clumsy” automation; (3) that monitoring automation is effort-
ful; and (4) support of strategic workload management.
6.1 INTRODUCTION
New technologies have entered the market, and more are emerging, that are capable of
supporting the driver in performing (or of automating) some or all of the functional
activities traditionally performed by human drivers (e.g., Galvani, 2019): route finding;
route following; velocity and steering control; crash avoidance; compliance with traffic
laws; and vehicle monitoring (Brown, 1986). These new technologies are described in
Chapters 1 and 2 of this book, and fall into two broad categories: those that support the
driver (Driver Support Features (DSF); Levels 1–2) and those that automate human
driving functions some of the time (Level 3) or all of the time (Levels 4 and 5)
(Automated Driving Features (ADF); SAE International, 2018).
In spite of the promise and excitement surrounding these technologies, many
important human factors considerations have been raised, inasmuch as these new
technologies stand to impact driver behavior, performance, and safety—as evidenced
by the many chapters of this Handbook. Indeed, automation stands to
change fundamental aspects of the driving task as subtasks are added or subtracted,
such that the role and responsibilities of the driver will be transformed in the com-
ing years (e.g., this Handbook, Chapter 8). This, in turn, will impact driver situation
awareness (e.g., this Handbook, Chapters 7, 21) as well as in-vehicle behaviors (e.g.,
this Handbook, Chapter 9). By extension—or even as a precursor to some of these
outcomes—automation will also impact driver workload. The extent of this impact
will vary by the type and/or level of automation as well as various situational and
environmental factors. For example, various types of automation promise to reduce
workload by reducing the number of tasks required of the driver; however, automa-
tion also changes existing tasks and adds new ones, which can increase workload
(Cook, Woods, McColligan, & Howie, 1990).
This chapter describes the concept of workload and its implications for performance,
drawing from existing studies and theories from the human factors literature. The role
and impact of driver distraction are also considered in the context of workload. In subse-
quent sections, the interaction between different levels of automation on the one hand
and driver workload and distraction on the other hand is described. Lastly, approaches
to manage workload and distraction are discussed in the context of automation.
1 In this chapter, we focus primarily on the mental and cognitive aspects of workload, although the
physical components are relevant in some sections (e.g., Table 6.2).
available to respond when demands increase. These diminished resources can under-
mine performance when they coincide with a high demand situation. For this and
other reasons, workload is a construct that is often associated with performance or,
perhaps more specifically, can be mapped onto an individual’s capacity or potential
to perform in a given context (point #2). This is described further in Section 6.2.1.1.
There are a number of approaches that can be used to assess or quantify the work-
load demands associated with a given task (e.g., Wierwille & Eggemeier, 1993). These
tools and/or approaches can also distinguish between different types of workload, such
as the cognitive, visual, and manual elements of a task (or along similar categoriza-
tions). Some tasks might incur high levels of manual workload, but generally require
little thought and so do not impose significant cognitive load (e.g., lifting a heavy pack-
age). Other tasks might have little or no physical output, but impose significant cogni-
tive load (e.g., performing mental rotations of 3-D objects). It is important to consider
these nuances, especially for tasks that might appear to be effortless. For example, when
automation relieves the burden of performing a task and requires the person to do nothing
but monitor it, one might think workload would be very low. And, indeed, the physical
workload would be low. However, the cognitive workload associated with monitoring
and vigilance tasks can be quite high (e.g., Warm, Parasuraman, & Matthews, 2008).
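Multidimensional assessment instruments of this kind typically rate each channel separately and only then combine them. The sketch below assumes a simple three-channel scheme with a weighted mean; the 0–10 scale, the weights, and the class itself are illustrative, not a reproduction of any published instrument:

```python
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    """Per-channel workload ratings on a 0-10 scale (hypothetical scheme)."""
    visual: float
    cognitive: float
    manual: float

    def overall(self, w_vis=1.0, w_cog=1.0, w_man=1.0) -> float:
        """Weighted mean across channels; weights reflect task context."""
        total_w = w_vis + w_cog + w_man
        return (w_vis * self.visual + w_cog * self.cognitive
                + w_man * self.manual) / total_w

# Lifting a heavy package: high manual demand, little thought required.
lifting = WorkloadProfile(visual=3, cognitive=2, manual=9)
# Monitoring automation: almost no manual demand, but high cognitive demand.
monitoring = WorkloadProfile(visual=7, cognitive=8, manual=1)

print(f"Lifting overall:    {lifting.overall():.1f}")
print(f"Monitoring overall: {monitoring.overall():.1f}")  # similar totals, very different profiles
```

The two example profiles echo the point above: a monitoring task scores near zero on the manual channel yet produces an overall load comparable to a physically demanding one.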
6.2.1.1 Workload and the Yerkes–Dodson Law
Over a century ago, Robert Yerkes and John Dodson described the relationship
between stimulus strength and task performance in mice (Yerkes & Dodson, 1908).
The Yerkes–Dodson law, as it came to be known, states that performance on a given
task increases with arousal up to a certain point, but decreases beyond this point (as
depicted in the inverted U functions shown in Figure 6.1). The precise shape of the
curve varies as a function of several factors, including task complexity or difficulty;
however, the key observation is that there are optimal and sub-optimal levels of
arousal when considering performance. Performance can suffer when arousal is too
high (i.e., overload), but it can likewise suffer when it is too low (i.e., under-load).
Although the original Yerkes–Dodson law related to arousal, the scientific literature
suggests that fluctuations in workload have a similar relationship with performance—
with performance being optimal when workload is neither too low nor too high,
albeit through mechanisms different from arousal.
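The qualitative content of the law is an inverted U whose peak shifts with task complexity, so any unimodal function can stand in for it. A minimal sketch, assuming a Gaussian shape and parameter values chosen purely for illustration (the law specifies the shape, not this equation):

```python
import math

def performance(arousal, optimum, width):
    """Inverted-U (Gaussian) performance curve: peaks at `optimum` and
    falls off on both sides; `width` controls tolerance to deviation."""
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

# Assumed parameters: complex tasks peak at lower arousal than simple tasks.
SIMPLE = dict(optimum=0.7, width=0.30)
COMPLEX = dict(optimum=0.4, width=0.20)

for arousal in (0.2, 0.4, 0.7, 0.9):
    print(f"arousal={arousal:.1f}  simple={performance(arousal, **SIMPLE):.2f}"
          f"  complex={performance(arousal, **COMPLEX):.2f}")
```

At high arousal the complex-task curve collapses much faster than the simple-task curve, which is the pattern Figure 6.1 depicts.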
In the context of Figure 6.1 and under the assumption that workload follows a similar
profile, performance is largely a reflection of the efficacy of an individual’s allocation
FIGURE 6.1 Depiction of the Yerkes–Dodson law for simple and complex tasks. (From
Lee et al., 2017.)
FIGURE 6.2 Performance-resource functions for a resource-limited task (Task A, solid line)
and a data-limited task (Task B, dashed line). (Adapted from Wickens & Hollands, 2000.)
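The distinction in Figure 6.2 can be sketched the same way: performance on a resource-limited task keeps improving as attentional resources are invested, whereas a data-limited task plateaus once the quality of the available input caps performance. The functional forms below are assumed for illustration only:

```python
def resource_limited(resources):
    """Task A: performance grows with invested resources over the whole range."""
    return resources  # linear over 0..1

def data_limited(resources, ceiling=0.6):
    """Task B: performance rises quickly, then plateaus at a data-imposed ceiling."""
    return min(resources / 0.3, 1.0) * ceiling

for r in (0.1, 0.3, 0.6, 1.0):
    print(f"resources={r:.1f}  A(resource-limited)={resource_limited(r):.2f}"
          f"  B(data-limited)={data_limited(r):.2f}")
```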
tasks. Switching between tasks demands effort that exceeds what might be expected
from the demands of either task considered individually (Trafton & Monk, 2007).
When multiple tasks are involved, people must adopt strategies to distribute their
limited resources to the tasks at hand. Vehicle automation, especially Level 3 auto-
mation, will likely magnify workload associated with task switching as automation
provides the opportunity to do other tasks, but also requires that drivers monitor
automation, and occasionally return to manually controlling the car as well. Active
workload management, described in the next section, is a process by which people
prioritize tasks in an attempt to balance attention, workload, and perfor-
mance. Importantly, these attempts are not always successful as sometimes certain
tasks can distract attention away from—in the current context—activities critical for
safe driving (discussed in Section 6.2.2).
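One way to see the implication is to book-keep demand with an explicit per-switch overhead. Trafton and Monk (2007) characterize the switching cost qualitatively; the demand units and cost below are assumed for illustration:

```python
def total_demand(task_demands, n_switches, switch_cost=0.5):
    """Summed task demand plus a per-switch overhead (hypothetical units)."""
    return sum(task_demands) + n_switches * switch_cost

driving, phone = 4.0, 2.0
print(total_demand([driving, phone], n_switches=0))  # 6.0: tasks in isolation
print(total_demand([driving, phone], n_switches=6))  # 9.0: frequent switching adds load
```

Under this accounting, automation that invites frequent alternation between driving and non-driving tasks can raise total demand even while it lowers the demand of the driving task itself.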
6.2.2 Distraction
Driver distraction has been defined as “the diversion of attention away from activities
critical for safe driving toward a competing activity” (Lee, Young, & Regan, 2009, p.
34). Chapter 9 discusses the topic of distraction as it relates to automated vehicles, to
engagement in non-driving activities, and to the impact of such engagement on the take-
over timing and quality. In this chapter, we build on that treatment of the topic and con-
template, more broadly, how driver distraction, and the way we conceptualize it, are likely
to change during the journey from partially to fully automated (self-driving) vehicles.
TABLE 6.1
Mechanisms of Inattention (from Regan et al., 2011)

Driver restricted attention: “Insufficient or no attention to activities critical for safe driving brought about by something that physically prevents (due to biological factors) the driver from detecting (and hence from attending to) information critical for safe driving” (p. 1775); e.g., fatigue or drowsiness.

Driver neglected attention: “Insufficient or no attention to activities critical for safe driving brought about by the driver neglecting to attend to activities critical for safe driving” (p. 1775); e.g., due to faulty expectations.

Driver misprioritized attention: “Insufficient or no attention to activities critical for safe driving brought about by the driver focusing attention on one aspect of driving to the exclusion of another, which is more critical for safe driving” (p. 1775); e.g., a driver who looks over their shoulder while merging and misses a lead vehicle braking.

Driver cursory attention: “Insufficient or no attention to activities critical for safe driving brought about by the driver giving cursory or hurried attention to activities critical for safe driving” (p. 1776); e.g., a driver on the entry ramp to a freeway who is in a hurry, does not complete a full head check when merging, and ends up colliding with a merging car.

Driver diverted attention: “The diversion of attention away from activities critical for safe driving toward a competing activity, which may result in insufficient or no attention to activities critical for safe driving” (synonymous with driver distraction; p. 1776); e.g., a driver reading a text message on a cell phone.
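For analysts coding crash or naturalistic-driving records against this taxonomy, the five mechanisms map naturally onto an enumeration. A minimal sketch (the type and field names are mine; the comments paraphrase Table 6.1):

```python
from enum import Enum, auto

class InattentionMechanism(Enum):
    """Mechanisms of driver inattention per Regan et al. (2011), Table 6.1."""
    RESTRICTED = auto()      # biological factors prevent detection (e.g., drowsiness)
    NEGLECTED = auto()       # driver neglects safety-critical activities
    MISPRIORITIZED = auto()  # attention on one aspect excludes a more critical one
    CURSORY = auto()         # hurried, cursory attention (e.g., incomplete head check)
    DIVERTED = auto()        # diverted to a competing activity: driver distraction

# Example: tagging a hypothetical incident record for later analysis.
incident = {"id": 1042,
            "mechanism": InattentionMechanism.DIVERTED,
            "secondary_task": "reading a text message"}
print(incident["mechanism"].name)
```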
give consideration to the relationship between vehicle automation and these high-
level processes, drawing in part on a previous review of the topic (Cunningham &
Regan, 2018), and building on some of the findings reported in Chapter 9.
6.2.4 Summary
The concepts of workload, task management, distraction, and performance have
been studied in many different domains and settings. Various theories in the human
factors literature shed some light on how these concepts interact with one another—
especially where performance and safety are concerned. In the following section, we
discuss how varying degrees of automation impact these relationships. It is important
to note that the implementation of automation will influence how performance is
conceived and operationalized (cf. this Handbook, Chapters 8, 9); driver performance
will not necessarily be reflected by the same measures used in manual conditions,
and joint-system performance (i.e., driver plus system performance)
might become more relevant. Different types of automation can dramatically affect
workload and distraction, and associated performance measures.
enter the vehicle. However, the basic laws underlying the relation between workload
and performance (i.e., the Yerkes–Dodson law, Figure 6.1, and performance resource
functions, Figure 6.2) should presumably remain the same. Understanding how auto-
mation moves driver workload away from the optimal level, in either direction, will
be critical. Clearly, this is most pressing in situations where drivers maintain some
degree of responsibility in the human–automation system (e.g., SAE Levels 1–3).
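To make the inverted-U intuition concrete, the following minimal sketch assumes a Gaussian-shaped curve; the optimum and width parameters are illustrative choices, not empirical estimates from Figure 6.1. It shows how moving workload away from the optimum in either direction, as automation may do, degrades predicted performance:

```python
import numpy as np

def predicted_performance(workload, optimum=5.0, width=2.5):
    """Inverted-U (Yerkes-Dodson-style) mapping from workload to performance.

    workload: rating on an arbitrary 0-10 scale.
    optimum, width: illustrative parameters only, not empirical estimates.
    Returns a relative performance score in (0, 1].
    """
    return float(np.exp(-((workload - optimum) ** 2) / (2.0 * width**2)))

# Automation that pushes workload below the optimum (underload) degrades
# predicted performance just as overload does.
for label, w in [("underload, e.g., prolonged monitoring", 1.5),
                 ("near-optimal manual driving", 5.0),
                 ("overload, e.g., takeover surprise", 9.0)]:
    print(f"{label:38s} workload={w:4.1f}  performance={predicted_performance(w):.2f}")
```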
Much of the variation in driver workload in light of automation will be a result of
changes in the underlying tasks as well as the redistribution of responsibility from
the driver to the system or vice versa (this Handbook, Chapter 8). For example, for
Level 1 and 2 automation, vehicle control (steering and/or speed control) might be
offloaded to the system, such that the driver can allocate those resources to other
activities, such as maintaining vigilance to monitor the traffic environment as well as
the system function (which becomes a more prominent task for drivers using such a
system). In some cases, the net impact on workload might vary: a reduction of work-
load associated with some of the driving subtasks can lead to increased workload on
other subtasks. The net impact on workload of changes in the level of automation can
also manifest itself by changes in workload across the stages of information process-
ing (e.g., Figure 6.3; Yamani & Horrey, 2018).
Building from the examples provided in Figure 6.3, Table 6.2 illustrates the
impact of a variety of automation technologies on specific driving subtasks. As
shown, the set of driving subtasks, here based on Brown (1986), incurs a certain
amount of workload in normal driving conditions. As new technologies are imple-
mented (e.g., DSFs, Levels 1–2), workload for each of the subtasks might increase,
FIGURE 6.3 Variations in driver workload and system responsibility across different levels
of automation (from Yamani & Horrey, 2018). The dark portion of each bar represents the
driver's contribution to the demands of the task at a particular stage of information processing;
the white portion represents the contribution of the system. ACC = adaptive cruise control.
© Inderscience (used with permission).
TABLE 6.2
Workload Variation across Different Driving Subtasks by Different Levels of
Automation. For Illustrative Purposes, Hypothetical Ratings of Workload Range
from 0 to 10 Are Provided in Each Cell. Arrows Denote Directional Changes
from Manual Driving Conditions

                                 Manual        Level 0: Blind Spot    Level 1:       Level 2:
Driving Task1                    Driving       Monitoring System      ACC            ACC + LKA
                                 Vis Cog Man   Vis Cog Man            Vis Cog Man    Vis Cog Man
Route finding                    6   5   2     6   5   2              6   5   2      6   5   2
Route following                  5   6   6     5   6   6              5   6   6      5   6   6
Steering and velocity control    5   3   8     5   3   8              ↓4  ↓2  ↓5     ↓2  ↓1  ↓1
Collision avoidance              8   6   7     ↓6  ↓4  ↓5             8   6   ↓5     8   6   ↓4
Rule compliance                  3   3   3     3   3   3              3   3   3      3   3   3
Vehicle/system monitoring        3   3   0     3   3   0              ↑5  ↑5  0      ↑8  ↑8  0

1 After Brown (1986). Vis = visual workload; Cog = cognitive workload; Man = manual workload;
ACC = adaptive cruise control; LKA = lane keeping assist.
decrease (in varying degrees), or remain unchanged. For example, the addition of a
blind spot monitoring system can reduce the visual, cognitive, and manual demands
associated with collision avoidance by providing drivers with additional information
regarding the proximity of nearby vehicles. In examining these relationships, it is
important to consider where decreases in workload on one subtask are associated
with increases on another task. For example, the implementation of adaptive cruise
control (ACC) and lane-keeping assist (LKA) is logically associated with reduced
demands for steering and velocity control, but with increased demands for vehicle
and system monitoring when these systems are in operation.
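The logic of Table 6.2 can be captured in a small data structure. The sketch below is hypothetical; it simply encodes a subset of the table's illustrative ratings and computes the directional change in visual, cognitive, and manual workload for a subtask relative to manual driving:

```python
# Illustrative ratings (vis, cog, man) transcribed from part of Table 6.2.
RATINGS = {
    "steering/velocity control": {"manual": (5, 3, 8), "blind spot": (5, 3, 8),
                                  "ACC": (4, 2, 5), "ACC+LKA": (2, 1, 1)},
    "collision avoidance":       {"manual": (8, 6, 7), "blind spot": (6, 4, 5),
                                  "ACC": (8, 6, 5), "ACC+LKA": (8, 6, 4)},
    "vehicle/system monitoring": {"manual": (3, 3, 0), "blind spot": (3, 3, 0),
                                  "ACC": (5, 5, 0), "ACC+LKA": (8, 8, 0)},
}

def workload_delta(subtask, config):
    """Directional change (vis, cog, man) relative to manual driving."""
    base, now = RATINGS[subtask]["manual"], RATINGS[subtask][config]
    return tuple(n - b for n, b in zip(now, base))

# ACC+LKA offloads vehicle control but adds monitoring demand:
print(workload_delta("steering/velocity control", "ACC+LKA"))   # (-3, -2, -7)
print(workload_delta("vehicle/system monitoring", "ACC+LKA"))   # (5, 5, 0)
```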
As noted, the information portrayed in Table 6.2 might be considered to reflect
routine operations. However, it is important to consider the impact of systems on
workload in other use cases, such as in takeover situations or emergency scenar-
ios. The steady-state distribution of workload might suggest that automation makes
routine tasks much easier, but a timeline analysis of emergency scenarios might
reveal substantially higher workload than drivers would experience without
the automation. That is, automation can make easy tasks easier and hard tasks
harder (Bainbridge, 1983; Cook et al., 1990). This timeline approach to quantify-
ing task demands according to different mental resources is described elsewhere
(e.g., Wickens, 2002).
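As a rough illustration of such a timeline analysis, the following sketch sums concurrent demand on each resource across a timeline and locates the peak. The task names, time windows, and demand values are invented for illustration and are not drawn from Wickens (2002):

```python
from collections import defaultdict

# Hypothetical tasks: (name, start_s, end_s, demand per resource on a 0-10 scale).
tasks = [
    ("monitor automation", 0, 60, {"visual": 4, "cognitive": 4}),
    ("read text message", 20, 30, {"visual": 7, "cognitive": 5, "manual": 3}),
    ("respond to takeover request", 28, 40,
     {"visual": 8, "cognitive": 8, "manual": 7}),
]

def demand_profile(tasks, horizon_s):
    """Summed demand per resource for each one-second slice of the timeline."""
    profile = [defaultdict(int) for _ in range(horizon_s)]
    for _name, start, end, demands in tasks:
        for t in range(start, min(end, horizon_s)):
            for resource, demand in demands.items():
                profile[t][resource] += demand
    return profile

profile = demand_profile(tasks, horizon_s=60)
peak_t = max(range(60), key=lambda t: sum(profile[t].values()))
print(peak_t, dict(profile[peak_t]))  # the spike where all three tasks overlap
```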
In considering the impact of different types and levels of automation on driver
workload, it is also important to consider interaction effects with various
individual and situational factors. As automation will not affect every driver in the
same manner, the same system might have differential impacts on driver work-
load depending on the specific traffic context. Automation can be problematic or
“clumsy” if it reduces workload in already low workload environments, but increases
it in demanding environments (Woods, 1996; Cook et al., 1990). For example, ACC,
when used on lightly trafficked highways, can lead to “passive fatigue” (which may
lead to reduced attention to activities critical for safe driving) and may encourage
drivers to engage in secondary activities. Saxby, Matthews, Warm, Hitchcock, and
Neubauer (2013) found that drivers exhibited higher levels of passive fatigue and
lower driving task engagement, alertness, and concentration while monitoring an
automated driving system. Further, Jamson, Merat, Carsten, and Lai (2013) found
that drivers were more likely to engage with in-vehicle entertainment systems when
driving with automated systems. It follows that engagement in these activities can
exacerbate the workload peaks caused by clumsy automation and impair the effective
response to takeover requests or circumstances. That is, Saxby et al. (2013) and oth-
ers have shown that drivers have slower responses in unexpected takeover situations
when monitoring automation compared with similar traffic situations encountered
while driving in manual conditions.
Clumsy automation can also increase workload during high workload periods.
A common example in aviation is when pilots are required to program flight man-
agement systems during landing preparation (see also, this Handbook, Chapter 21).
Similarly, automation surprises can lead to spikes in workload which may compro-
mise the performance of activities critical for safe driving. Automation surprises can
be manifest in situations where the system reaches its limitations (e.g., the boundar-
ies of its operational design domain) and the driver is either unprepared or unknow-
ing (see this Handbook, Chapters 3, 9, 21). Surprises can also occur when the user
misinterprets the system mode or fails to use the system properly, especially under
changing traffic conditions or driving goals. For example, a Level 2 system might
work perfectly well on a freeway with limited traffic (lower levels of workload)
but could make merging for an exit more challenging if the driver forgets to dis-
engage the system. In this case, the resulting increase in workload would be much
greater than would be encountered in a routine, non-automated, lane exit. Workload
spikes can undermine a driver’s effective allocation of resources and workload
management strategy.
Automation can both increase and decrease these different types of workload and
do so over both very short and long time horizons. The first question addressed in
this section was how the various types of workload will be influenced by the different
categories of automation and advanced driver assistance systems. In the following
section, a more in-depth discussion is provided concerning the interaction of driver
distraction, workload, and automation and the effect of that interaction on safety.
On the other hand, the reduction of workload gleaned from the offloading of some of
the driving responsibilities to automation can also lead to increased engagement in
non-driving related tasks. Such direct and indirect effects are explored in this section.
TABLE 6.3
Impact of Level 3 Automation on Different Mechanisms of Inattention
(following Regan et al., 2011)

Driver restricted attention: For vehicles equipped with Level 3 automation operating autonomously, the activity most critical for safe driving will be the requirement to take back control of the vehicle, whether requested by the automation or deemed necessary by the driver. Whilst there is, to our knowledge, no research on the impact of this mechanism of inattention per se on takeover ability, there is some research, reviewed in Chapter 9, on the impact of sleepiness on takeover ability.

Driver neglected attention: In the present context, one might imagine that, if the requirement to take back control of a Level 3 vehicle operating autonomously is rarely or never encountered, and hence not expected, drivers may over time become inattentive to this activity critical for safe driving.

Driver misprioritized attention: In a vehicle equipped with Level 3 automation operating autonomously there is only one activity critical for safe driving: the requirement for the human to take back control of the vehicle if necessary (and all the sub-activities associated with it). Hence, here, there would seem to be no scope for misprioritized attention.

Driver cursory attention: In a vehicle equipped with Level 3 automation operating autonomously, this form of inattention is likely to manifest itself in the driver giving only cursory attention to elements of the traffic environment that are critical in facilitating the timely and successful resumption of vehicle control.

Driver diverted attention: As noted, there is for this mechanism of inattention, unlike the other mechanisms, an accumulating body of research on its impact on the resumption of control in vehicles equipped with Level 3 automation (see also, this Handbook, Chapter 9).
Saxby et al., 2013) and, in turn, driver inattention (Saxby et al., 2013; Körber, Cingel,
Zimmermann, & Bengler, 2015). In this case, inattention is brought about not by
distraction per se, but by other mechanisms.
6.3.3 Summary
Section 6.3 describes many important considerations related to the intersection of
workload, distraction, and vehicle automation. The introduction of different lev-
els of automation can reduce workload for some driving-related tasks, but could
increase workload on other tasks or could introduce new tasks that were not pre-
viously required of drivers (e.g., Table 6.2); automation does not simply replace
the human role in performing tasks, but changes the nature of the tasks performed.
Even when automation succeeds in its aim of reducing driving demands, perfor-
mance and safety might not improve because low workload can undermine perfor-
mance just as high workload can. Distraction and inattention are other noteworthy
concerns when considering workload and the advent of automation, especially
where drivers fail to attend to activities critical for safe driving either because
automation demands the driver's attention at inopportune moments (thereby creating
a distraction which increases drivers' workload because they need to ignore the
request in order to pay attention to the safety-critical elements) or because automation
induces drivers to engage more often and more deeply in non-driving tasks (the
low workload encourages involvement in non-driving related tasks which, in turn,
increases distraction). In both cases, distraction, automation, and workload
interact with each other and influence a driver’s performance. The following sec-
tion is aimed at trying to understand how to address the workload and distraction
issues so that drivers are neither over-engaged nor under-engaged when using these
automation technologies.
6.5 CONCLUSION
Automation promises to reduce workload and make driving easier. As in other
domains, automation does not simply replace the person’s role of driving; it changes
the nature of driving and can make driving harder. Automation design should:
tasks performed by drivers will change, and this will change the repertoire of knowl-
edge, skills, and behaviors required by drivers to maintain safe driving performance
(see Chapter 18). Even now, a modern driver has a unique skill set compared with
drivers two or three decades ago; many drivers today might never have had to pump
their brakes on slippery roads, but they might need to understand the distinction
between a traction control system and a stability control system (Spulber, 2016).
Likewise, drivers of increasingly autonomous vehicles might need to develop work-
load management skills that are not necessary in less sophisticated vehicles. Design
of vehicle automation that considers the implications for workload and distraction
might reduce the need for such workload management skills.
REFERENCES
Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775–779.
Brown, I. (1986). Functional requirements of driving. Berzelius Symposium on Cars and
Casualties. Stockholm, Sweden.
Carsten, O. & Martens, M. H. (2019). How can humans understand their automated cars?
HMI principles, problems and solutions. Cognition, Technology & Work, 21(1), 3–20.
Carsten, O., Lai, F. C., Barnard, Y., Jamson, A. H., & Merat, N. (2012). Control task substi-
tution in semiautomated driving: Does it matter what aspects are automated? Human
Factors, 54(5), 747–761.
Casner, S. M. & Hutchins, E. L. (2019). What do we tell the drivers? Toward minimum driver
training standards for partially automated cars. Journal of Cognitive Engineering and
Decision Making, 13, 55–66.
Clark, H. & Feng, J. (2017). Age differences in the takeover of vehicle control and engage-
ment in non-driving-related activities in simulated driving with conditional automa-
tion. Accident Analysis & Prevention, 106, 468–479.
Cook, R. I., Woods, D. D., McColligan, E., & Howie, M. B. (1990). Cognitive consequences
of “clumsy” automation on high workload, high consequence human performance.
SOAR 90, Space Operations, Applications and Research Symposium. Houston, TX:
NASA Johnson Space Center.
Cunningham, M. L. & Regan, M. A. (2018). Driver distraction and inattention in the realm of
automated driving. IET Intelligent Transport Systems, 12, 407–413.
de Winter, J. C., Happee, R., Martens, M. H., & Stanton, N. A. (2014). Effects of adaptive
cruise control and highly automated driving on workload and situation awareness: A
review of the empirical evidence. Transportation Research Part F: Traffic Psychology
and Behaviour, 27, 196–217.
Desmond, P. A. & Hancock, P. A. (2001). Active and passive fatigue states. In P. A. Hancock &
P. A. Desmond (Eds.), Stress, Workload, and Fatigue (pp. 455–465). Hillsdale, NJ:
Lawrence Erlbaum.
Dingus, T. A., Guo, F., Lee, S., Antin, J. F., Perez, M., Buchanan-King, M., & Hankey, J.
(2016). Driver crash risk factors and prevalence evaluation using naturalistic driving
data. Proceedings of the National Academy of Sciences, 113(10), 2636–2641.
European Commission. (2015). Study on Good Practices for Reducing Road Safety Risks
Caused by Road User Distractions. Brussels, Belgium: European Commission.
Fuller, R. (2005). Towards a general theory of driver behaviour. Accident Analysis &
Prevention, 37(3), 461–472.
Galvani, M. (2019). History and future of driver assistance. IEEE Instrumentation &
Measurement Magazine, 22(1), 11–16.
CONTENTS
Key Points .............................................................................................................. 127
7.1 Introduction................................................................................................... 128
7.2 SA Requirements for Driving........................................................................ 128
7.3 SA Model ...................................................................................................... 133
7.3.1 Individual Factors.............................................................................. 134
7.3.1.1 Limited Attention............................................................... 134
7.3.1.2 Limited Working Memory.................................................. 134
7.3.1.3 Goal-Driven Processing Alternating with Data-Driven
Processing........................................................................... 135
7.3.1.4 Long-Term Memory Stores ................................................ 136
7.3.1.5 Expertise............................................................................. 137
7.3.1.6 Cognitive Automaticity ...................................................... 138
7.3.2 Vehicle and Driving Environment..................................................... 139
7.3.2.1 Information Salience .......................................................... 139
7.3.2.2 Complexity ......................................................................... 140
7.3.2.3 Workload, Fatigue, and Other Stressors............................. 140
7.3.2.4 Distraction and Technology................................................ 141
7.3.3 Automation and Vehicle Design........................................................ 142
7.4 Conclusions ................................................................................................... 146
References .............................................................................................................. 147
KEY POINTS
• Situation awareness (SA), as a driver’s integrated understanding of what
is happening in the driving environment, is critical for successful perfor-
mance, with poor SA being implicated as a significant cause of vehicle
crashes.
• A goal-directed task analysis (GDTA) was used to systematically determine
the requirements for driver SA, including perception, comprehension, and
projection elements, needed for key decisions.
• The many factors that affect driver SA in the dynamic road transporta-
tion environment are presented through the lens of a model of SA showing
the cognitive processes involved, including attention, memory, goal-driven
7.1 INTRODUCTION
Situation awareness (SA) forms the central organizing mechanism for a driver’s
understanding of the state of the vehicle and environment that forms the basis for
ongoing decision-making in the rapidly changing world of road transportation.
While much driving research examines aspects of driver performance in isolation
(e.g., distraction, expertise), SA research considers all of these factors and processes
in a more holistic fashion, shedding light on human performance strengths and limi-
tations within the context of the driving environment.
SA has been found to be central for successful performance in driving (Horswill &
McKenna, 2004; Ma & Kaber, 2005). Poor SA has been implicated as a signifi-
cant cause of vehicle crashes (Gugerty, 1997), with improper lookout and inatten-
tion as two salient examples (Treat et al., 1979). Distractions and recognition errors
(in which the driver “looks but does not see”) are also significant causes of vehicle
crashes that point to problems with SA (Sabey & Staughton, 1975). An analysis of
critical reasons for driver-related pre-crash events shows that over 40% of crashes
are related to poor Level 1 SA (perception), including inadequate surveillance, inter-
nal or external distractions, inattention, and other failures (National Highway Traffic
Safety Administration, 2008). Other problems cited, more indicative of failures of
Level 2 SA (comprehension) or Level 3 SA (projection), include false assumption of
other’s actions (4.5%) and misjudgment of gap or others’ speed (3.2%).
This chapter will examine what SA means from the standpoint of road transpor-
tation, based on a model of SA that describes the cognitive mechanisms involved
in driving, as well as external environmental and system factors that impact on SA,
including the advent of new technologies and vehicle autonomy. Based on this over-
view, directions for the design of vehicles and training to improve driver SA are
recommended.
many different domain areas including piloting, air traffic control, power system
operations, military command and control, space flight, and driving.
An important factor for its application to driving is a clear delineation of the spe-
cific perception (Level 1 SA), comprehension (Level 2 SA), and projection (Level 3
SA) requirements. This is typically accomplished via a goal-directed task analysis
(GDTA) that establishes the key goals for a given operational role in the domain
(e.g., driver, pedestrian, mechanic), the key decisions that need to be made to reach
each goal, and the SA requirements that are needed to accurately make each decision
(Endsley, 1993; Endsley & Jones, 2012). The GDTA provides a systematic method
for understanding the cognitive requirements associated with any job or task, includ-
ing performance under both routine and unusual conditions.
As an example, a hierarchical goal tree for an automobile driver is presented
in Figure 7.1. This shows the different goals and subgoals that are involved in the
task of driving. The overall goal is to maneuver a vehicle from the point of origin
to the point of destination in a safe, legal, and timely manner. There are a number
of associated main goals, including (1) ensuring that the vehicle is safe for driving,
(2) selecting the optimum path to the destination, (3) executing the chosen route in
a safe, legal, and timely manner, and (4) minimizing the impact of abnormal situ-
ations. Further, each of these goals breaks down into relevant subgoals as needed.
It should be noted that not all goals and subgoals are active at all times. For
example, the driver may ensure that the vehicle is safe for driving and determine
the best route to the destination at the beginning of a trip, and only reactivate those
goals later on during the trip if needed. The goal of minimizing the impact of abnor-
mal situations may only become active infrequently, when those abnormal situa-
tions occur. The majority of the time the driver will likely be focused on subgoals
associated with executing the chosen route, but may need to dynamically switch to
other goals in response to abnormal situations, or the need for a reroute, for example.
Further, many of these subgoals may interact with each other.
The GDTA then breaks out the detailed decisions and SA requirements for each
subgoal, an example of which is shown in Figure 7.2, including perception, com-
prehension, and projection elements. This provides a systematic analysis that deter-
mines not just the information that drivers need to be aware of but also how they
need to combine that data to form relevant situation comprehension and projections
of changes in the situation that are relevant to making decisions. Table 7.1 provides
a partial listing of the SA requirements for drivers, summarized across the various
goals, subgoals, and decisions contained in the full GDTA.
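A GDTA lends itself naturally to a hierarchical data representation. The sketch below is a hypothetical encoding; the class names are ours, and the content fragment is paraphrased from Figures 7.1 and 7.2. It illustrates how goals, decisions, and leveled SA requirements nest:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    question: str
    # SA requirements keyed by level: 1 = perception, 2 = comprehension,
    # 3 = projection.
    sa_requirements: dict = field(default_factory=dict)

@dataclass
class Goal:
    name: str
    subgoals: list = field(default_factory=list)
    decisions: list = field(default_factory=list)

# A fragment of the driving GDTA:
avoid_objects = Goal(
    name="3.3.1 Avoid objects in the roadway",
    decisions=[Decision(
        question="Will the vehicle collide with the object?",
        sa_requirements={
            1: ["position of object", "speed of object",
                "direction of movement of object"],
            2: ["predicted trajectory of the object",
                "predicted trajectory of the vehicle"],
            3: ["projected point of collision/miss distance"],
        },
    )],
)
driving = Goal(
    name="Maneuver vehicle from origin to destination safely, legally, and timely",
    subgoals=[avoid_objects],
)
```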
GDTAs in general strive to be technology free. That is, they describe what deci-
sions need to be made, and the information needed to make them, but not how
that information will be obtained, which could change dramatically with technol-
ogy. For example, vehicle navigation could be accomplished via a compass, map,
or Global Positioning System (GPS)-enabled navigation system. Nonetheless, the
same questions will be relevant and the same SA considerations present. As vehi-
cles become more automated, it may be easier to determine some of these SA
requirements. For example, computers can easily determine many distance and
time calculations such as distance to other vehicles, time until refueling, and pro-
jected time to destinations. Even with semi-autonomous vehicles, as long as the
FIGURE 7.1 Goal tree for driving (diagram not reproduced). Legible subgoals include: ensure the vehicle is set up properly; eliminate internal hazards; avoid traffic; avoid weather and poor visibility; maintain compliance with laws; maintain safe driving distances with traffic, obstacles, and pedestrians; modify vehicle controls for conditions; assess and respond to operator distress; and assess and respond to vehicle collision.
FIGURE 7.2 Example GDTA breakdown for Subgoal 3.3.1: Avoid objects in the roadway.

Decision: Will the vehicle collide with the object?
SA Requirements:
  Projected point of collision/miss distance
    Predicted trajectory of the object
      Position of object
      Speed of object
      Direction of movement of object
      Projected changes in speed/direction of the object
    Predicted trajectory of the vehicle
      Position of vehicle
      Speed of vehicle
      Direction of vehicle
      Projected changes in speed/direction of the vehicle

Decision: Does the object need to be avoided?
SA Requirements:
  Predicted damage to vehicle during collision
    Type of object
    Mass of object
    Speed of vehicle

Decision: Ability to avoid the object?
SA Requirements:
  Predicted collision with other vehicles/objects/pedestrians
    Distance to other vehicles/objects/pedestrians
    Vehicle/object/pedestrian locations
    Vehicle/object/pedestrian trajectories
  Braking time available
    Distance to object
    Maximum braking rate
  Roadside conditions/clearance
  Ability to execute avoidance maneuver
  Projected point of collision/miss distance
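Several of these SA requirements are simple kinematic projections. The sketch below shows how quantities such as time to collision and stopping distance can be derived from the perceived elements (distance to object, own speed, maximum braking rate); the 1.5-s reaction time and 7 m/s² braking rate are illustrative assumptions, not values from the GDTA:

```python
def time_to_collision(gap_m, closing_speed_ms):
    """Seconds until impact if neither party changes speed or direction."""
    return float("inf") if closing_speed_ms <= 0 else gap_m / closing_speed_ms

def stopping_distance(speed_ms, max_braking_ms2, reaction_s=1.5):
    """Distance covered during the reaction phase plus the braking phase."""
    return speed_ms * reaction_s + speed_ms**2 / (2.0 * max_braking_ms2)

# A stationary object 60 m ahead, own speed 25 m/s (90 km/h), braking 7 m/s^2:
print(f"time to collision: {time_to_collision(60, 25):.1f} s")   # 2.4 s
print(f"stopping distance: {stopping_distance(25, 7):.1f} m")    # ~82.1 m
# The stopping distance exceeds the gap, so braking alone cannot avoid the
# object; the driver must also assess an avoidance maneuver.
```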
driver remains responsible for the safe operation of the vehicle, he or she will need
to remain aware of this key information to ensure that the automation is operat-
ing properly, in accordance with the driver’s desires and taking into account the
many unforeseen circumstances that can occur. (This will be discussed in more
detail later.) Only if functions become fully automated will the driver’s SA needs
change.
Michon (1985) describes three types of driving behaviors: (1) strategic, which
focuses on high-level goal-related decisions such as navigation, (2) tactical, which
focuses on maneuvering, and (3) operational, which focuses on low-level actions such
as steering and braking. In comparing this taxonomy to the goal tree in Figure 7.1,
strategic behaviors would map to three of the major goals: ensure vehicle is safe
for driving, select optimum path to point of destination, and minimize the impact
of abnormal situations. Tactical behaviors would map primarily to executing the
chosen route. While operational behaviors like steering and braking require percep-
tion of situational information (such as brake lights, stoplights, and lane markings),
these low-level behaviors are subsumed under subgoals such as maintain safe driving
132 Human Factors for Automated Vehicles
TABLE 7.1
SA Requirements for Driving

Level 1 SA: Perception | Level 2 SA: Comprehension | Level 3 SA: Projection
Location of nearby objects (vehicles, pedestrians, cyclists, other objects) | Distance to other objects, vehicles, pedestrians, cyclists | Projected trajectory of own vehicle, other vehicles, objects, pedestrians, cyclists
Relative speed of traffic in adjacent lanes | Preferred lane for traffic avoidance/speed | Projected collision/miss distances
Open areas in adjacent lanes | Vehicle in blind spot | Projected effect of evasive maneuver/braking
Planned route | Compliance with planned route | Projected distance to turns/exits
Planned destination(s) location | Traffic lane needed for route execution | Projected distance & time remaining to destination
Traffic density along route(s) (crashes, construction, major events, time of day, road/exit closures) | Areas of congestion; alternate routes | Projected trip delay time; projected time to destination on alternate routes
Emergency vehicles | Avoidance of emergency vehicles | Projected traffic stops/slowdowns ahead
Emergency personnel | Compliance with safety personnel | –
Hazardous weather along route (rain, snow, icing, fog, high winds, areas of flooding) | Impact of weather on vehicle safety, systems, & route time | Projected changes in weather; projected safety of route(s)
Daylight/dusk/night | Visibility of road and vehicle | –
Road conditions along route (road size/paving, construction, frequency of stop signs/lights, security) | Impact of road conditions on route time; impact of road conditions on route safety | Projected time and distance to destination on route(s); projected cost/benefit of change in route
Speed limit; stoplight status; traffic control measures; lane markings; direction of traffic | Vehicle compliance with laws | Projected locations of police
Vehicle parameters (speed, gear, fuel level, position in lane, headlights, wipers) | Fuel sufficiency & usage; fuel to reach destination; road worthiness | Projected time until refueling is needed
Vehicle malfunctions | Vehicle safety | Projected ability of vehicle to make trip
Location of fuel stations | Distance to refueling stations | Projected refueling points
Location of restaurants | Distance to restaurants | Projected stop points
Parking place(s) (location, size) | Distance to vehicles, curbs | –
Driver status (fatigue, injury, hunger, thirst, intoxication) | Need for rest break, assistance, alternate driver | Projected time until stop is needed
distances with traffic, obstacles, and pedestrians, and maneuvering in traffic. These
low-level operational behaviors (often categorized as skill based) may be conducted
in a conscious, deliberate way, or may become cognitively automatized (to be dis-
cussed in more detail later). Even the execution of heavily practiced routes (e.g., the
route home from work) can become highly automatized.
The driving GDTA includes the need for all three levels of SA for making decisions
under each goal. Kaber, Sangeun, Zahabi, and Pankok (2016) found evidence for the
importance of all three levels of SA for the performance of tactical driving tasks (e.g.,
passing) and that Level 3 SA, as well as overall SA, is important for strategic perfor-
mance (e.g., arrival time). Other research also supports the importance of Level 3 SA for
strategic driving performance (Ma & Kaber, 2007; Matthews, Bryant, Webb, & Harbluk,
2001) and for tactical driving performance (Ma & Kaber, 2005). Kaber et al. (2016)
found that while operational performance was generally highly cognitively automatized
and not affected by the addition of a secondary task during normal operations, after
exposure to a hazard it became more conscious and correlated with Level 1 SA.
7.3 SA MODEL
A cognitive model of SA is shown in Figure 7.3 (Endsley, 1995) that can be used to
understand the factors that affect driver SA in the dynamic road transportation envi-
ronment, each of which will be discussed. Based on the factors in this model, the role
and impact of automation and vehicle design on SA are then considered.
As people gain experience, they are able to draw on long-term memory stores
(such as mental models) to significantly overcome working memory limitations,
which has been demonstrated in a number of studies (Endsley, 1990; 2015; Sohn &
Doane, 2004; Sulistayawati, Wickens, & Chui, 2011). Bolstad (2001), for example,
found no relationship between working memory and driver performance in her study
with highly experienced drivers. So both the experience level of the driver and the
presence of abnormal or unusual situations that require more conscious and focused
information processing affect the exact role of working memory in driving.
for avoiding other vehicles and maintaining lane following. While sometimes these
highly salient cues are non-driving related (e.g., distraction related to cell
phones, text messages, and passenger conversations), it should be pointed out that
in many cases they are relevant to the driving task. For example, drivers can be dis-
tracted by looking at pedestrians near a busy intersection and miss a sudden stop in
traffic ahead. In busy driving environments, it can be quite challenging to maintain
awareness of all the relevant information due to limited attention.
In addition to long-term memory in the form of mental models, there is also evi-
dence that people develop schema of prototypical states of the mental model. These
schema are patterns consisting of the state of each of the relevant elements of the
mental model (Endsley, 1995). By pattern matching between critical cues in the cur-
rent situation and known schema, people can instantly recognize many classes of
situations. Schema can be learned through direct experience or vicariously through
training or storytelling. For example, drivers may have typical schema for “traffic
7.3.1.5 Expertise
Data-driven processing by novice drivers is highly inefficient (Endsley, 2006).
Since novice drivers lack knowledge on which information is most important, their
scan patterns tend to be sporadic and non-optimal (Chapman & Underwood, 1998;
Underwood, 2007). They may neglect key information or over-sample information
unnecessarily. The novice driver will not know where to look for important informa-
tion. Novices also fail to direct their attention to as wide a range of information in the
environment as experts, possibly due to more impoverished mental models directing
their search (Underwood, Chapman, Bowden, & Crundall, 2002).
Properly understanding the significance of what is perceived may also pose a
problem, as novice drivers do not have the experience base that is needed for inter-
preting and prioritizing cues. Novice drivers often fail to recognize potentially
hazardous cues and to anticipate how hazards may develop (Borowsky, Shinar, &
Oron-Gilad, 2010; Borowsky & Oron-Gilad, 2013; Finn & Bragg, 1986; Parmet,
Borowsky, Yona, & Oron-Gilad, 2015). Lack of understanding of hidden hazards has
also been found to underlie the inferior speed management of novice drivers, both of
which are major factors associated with crashes in this group (Parmet et al., 2015).
Luckily, as drivers gain expertise through experience and training, they signifi-
cantly reduce these problems through a number of mechanisms, including mental
models, schema, and goal-driven processing. Mourant and Rockwell (1972) showed
that experts were better able to adjust their scan patterns to the road type. Expert
drivers have better search models regarding where to look for hazards (Underwood
et al., 2002) that can guide their scanning behaviors.
A number of researchers have examined the ability of drivers to project future
hazards in the driving environment (McKenna & Crick, 1991; 1994), a part of
Level 3 SA, showing that experienced drivers are significantly faster at detecting
and reacting to hazards. This superior level of hazard projection improves with both
driving time and advanced driver training (McKenna & Crick, 1991). Horswill and
McKenna (2004) showed improved hazard awareness to be significantly related to a
reduced likelihood of being involved in a crash.
Horswill and McKenna (2004) found that while experienced drivers are better at
anticipating hazards than less experienced drivers, they were even more negatively
affected by secondary tasks that tap into the central executive function of working
memory (such as discussions on a cell phone or mental calculations). This implies
that the projection of hazards in the driving environment (a part of Level 3 SA) is
cognitively demanding, requiring working memory, and is not automatic. It involves
effortful processes as experienced drivers consciously search for hazards using a
dynamic mental model of the driving environment.
Of significant concern is the role that vehicle automation will have on the devel-
opment of expertise in novice drivers. If many of the tasks of driving are handed off
to automation, will new drivers ever develop the deep knowledge (mental models,
schema and goal-directed processing) that is critical for SA? For example, as GPS
navigation devices have become common, the ability to read maps to navigate has
become a lost skill. Further, evidence suggests that even experienced drivers can lose
skills if they primarily rely on automation (Wiener & Curry, 1980). When vehicles
provide automated blind spot warnings, drivers may no longer look behind them,
becoming dependent on automation that may not always be reliable. Significant atten-
tion will need to be paid to how vehicle automation affects the development of exper-
tise in future drivers, as well as its effect on skill atrophy in experienced drivers.
moving displays, and audible tones or communications. Thus, the high salience of
these technologies can compete directly for attention with external information that
may be critical, but of low salience.
7.3.2.2 Complexity
As driving environments become more complex, there are more things to keep track
of, making Level 1 SA more difficult, and it is more difficult to obtain an accu-
rate mental model of that environment, making Level 2 and 3 SA more challeng-
ing. For example, National Highway Traffic Safety Administration (NHTSA) found
that 36% of crashes involve turning or crossing at intersections (National Highway
Traffic Safety Administration, 2008). Factors that increase environmental complex-
ity include intersections with the potential for crossing traffic; an increased number
of lanes of traffic; the presence of pedestrians and cyclists; heavy traffic; complex
roadways and interchanges (e.g., five-way intersections, exits in unexpected direc-
tions); and adverse weather creating more diverse driver behaviors. Vehicle automa-
tion presents a new form of complexity for drivers, compounded by the numbers of
features, modes, and logic patterns inherent in its design.
processes. Engström, Markkula, Victor, and Merat (2017) reviewed a number of stud-
ies to show that high cognitive workload tends to impair the performance of novel,
unpracticed, or highly variable driving tasks, but that well-automatized tasks are
often unaffected. Under high workload conditions, experienced drivers are far less
negatively affected than inexperienced drivers (Patten, Kircher, Ostlund, Nilsson, &
Svenson, 2006), owing to their increased ability to automatize certain tasks
and draw on long-term memory structures such as mental models and schema. This
apparently does not extend to tasks such as hazard awareness, however, which is not
automatized, but requires central executive control (Horswill & McKenna, 2004).
On the low workload side, drivers may suffer from performance deficits when placed
in low attention-demanding conditions that decrease vigilance (Davies & Parasuraman,
1980). Schmidt et al. (2009), for example, showed progressive increases in reaction
times and perceived subjective monotony, which were correlated with physiological
indicators of lowered arousal (as measured by changes in electroencephalography
(EEG) and heart rate) in drivers over each segment of a 2-hour trip during daytime
driving. Vigilance decrements are more likely to occur when the need for driver action
is low. For example, LaRue, Rakotonirainy, and Pettitt (2011) found both road design
(degree of curvature and changes in elevation) and lack of variability in roadside scen-
ery increased the probability of vigilance-related performance decrements.
The 23% of vehicle crashes that are attributed to driver inattention and inadequate
surveillance (National Highway Traffic Safety Administration, 2008) may in part be
due to these challenges with vigilance, workload, and stress, leading to attentional
narrowing and inefficiencies associated with information gathering.
New vehicle automation may affect driver workload in different ways. Similar
to automation in aviation, it can significantly reduce workload during low workload
periods leading to vigilance problems (e.g., driving on highways with little traffic),
but also increase it during high workload periods, such as maneuvering in heavy
traffic (Bainbridge, 1983; Endsley, 2017a; Ma & Kaber, 2005; Schmidt et al., 2009;
Wiener & Curry, 1980; this Handbook, Chapter 6).
can also be directed at a combination of all three, with the driver expected to act as
a supervisory controller (SAE Levels 1–3). The higher levels of automation (SAE
Levels 4 and 5) lean towards full automation in which human supervision is not
necessary. While SAE Level 4 is considered full automation when operating within
some prescribed set of circumstances (e.g., a highway, a certain section of a city, or
in certain weather conditions), it reverts to a lower level of automation if operated outside of
those conditions.
Based on an analysis of some 30 years’ worth of research, it is expected that
each of these SAE levels of automation will vary significantly in terms of driver SA and
performance (Endsley, in press). The majority of forms of SAE Level 0 automation,
which provide assistance on some tasks but leave the driver with dynamic driving
control, are expected to be beneficial. Automation directed at driver SA should be
particularly helpful for SA, as long as it is reliable and has a low false alarm rate.
Forward collision warning systems, for example, reduce crashes and do not lead
to increases in taking on secondary tasks (Muhrer, Reinprecht, & Vollrath, 2012).
Automation involving decision aids will be beneficial if it provides correct advice,
but will lead to decision biasing, and loss of SA when wrong. Automation involving
low-level manual tasks is also expected to be primarily positive.
However, SAE Level 1–3 automation that requires human oversight and interven-
tion is expected to significantly lower driver SA, and will form the remainder of the
focus in this chapter. Supervisory control automation can significantly lower SA,
increasing the likelihood of being out-of-the-loop when driver action is required
(Endsley, 2017b; Endsley & Kiris, 1995; Onnasch et al., 2014). This can be attributed
to three main factors (Endsley & Kiris, 1995):
lowered when the drivers also used cell phones, with the driver’s ability to project
future events (Level 3 SA) most affected.
Solving this challenge is not easy. While some vehicle manufacturers have
attempted to maintain driver engagement with alerts for drivers to keep their hands
on the wheel or their eyes on the road, these are likely to be inadequate. Drivers
using SAE Level 2 automation have been found to generally remove their hands from
the steering wheel during most driving and only return them due to system warnings
(Banks, Eriksson, Donoghue, & Stanton, 2018). However, lowered driver engage-
ment due to automation is primarily cognitive. Interventions such as making drivers
keep their hands on the wheel or eyes on the road have been found to be ineffective,
with 28% of drivers still crashing into unexpected objects (Victor et al., 2018). The
challenge is more of keeping one’s mind on the road.
Overall, a wide body of research shows that vehicle automation placing drivers
in the role of a supervisory controller (SAE Levels 1–3), with the expectation that
they should be able to monitor and intervene, will significantly lower driver SA due
to reduced cognitive engagement and vigilance, accompanied by an increased likeli-
hood of conducting secondary tasks, both of which will lead to a resultant increase in
automation-induced crashes. While some problems have been seen with SAE Level
1 automation, these problems will be greatly exacerbated with SAE Levels 2 and 3
automation that take on greater amounts of the driving task.
If and when vehicles achieve full automation (SAE Level 5), concerns about
driver SA may become moot. However, it will take significant gains in automation
software reliability and robustness to achieve acceptable levels of safety equal to or
exceeding that of average human drivers. Software capable of full automation will
likely take many years, and a number of constraining factors will act to reduce the
likelihood of its adoption, including issues of who will bear the costs, willingness of
manufacturers or consumers to take on the responsibility and liabilities of automated
system performance, lack of trust, and the realization of actual safety advantages and
disadvantages (as compared to current promises) (Endsley, 2019).
7.4 CONCLUSIONS
SA is fundamental to successful driving. People are extremely good at learning
and adapting to highly variable driving conditions, resulting in over 495,000 miles
between crashes and over 95 million miles between fatal crashes, a record that vehicle
automation is far from matching (Endsley, 2018). Thus, it will be many years before
driving becomes fully automated. In the meantime, finding ways to keep driver SA
high will be vital. A number of recommendations can be made towards this goal.
will provide a more effective approach than SAE Level 2/3 supervisory
control approaches (Endsley, in press).
3. Implementations of vehicle automation need to pay close attention to the
need to support driver SA. Guidelines for supporting SA on automation
functioning and transparency, allowing drivers to better understand and
project automation actions, can be used to improve driver-automation inter-
actions (Endsley, 2017a; Endsley & Jones, 2012).
4. Enhanced driver training programs will be needed to help drivers bet-
ter understand the behaviors and limitations of new automated features.
Because automation can learn and change over time, developers will need
to constantly update this training and deliver it to drivers at home as well as
when a vehicle is purchased. Given that even very knowledgeable automa-
tion researchers can fall prey to automation induced loss of SA, however, it
is unlikely that automation reliability training will be sufficient to compen-
sate for its inherent vigilance challenges (Endsley, 2017a).
5. Enhanced driver training focused on improving SA, including hazard
awareness, is highly recommended. This training would be very useful for
beginner drivers, helping them to more rapidly develop the mental models
that characterize expert driving (Endsley & Jones, 2012).
REFERENCES
Bacon, S. J. (1974). Arousal and the range of cue utilization. Journal of Experimental
Psychology, 102, 81–87.
Baddeley, A. D. (1972). Selective attention and performance in dangerous environments.
British Journal of Psychology, 63, 537–546.
Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775–779.
Banks, V. A., Eriksson, A., Donoghue, J. A., & Stanton, N. A. (2018). Is partially automated
driving a bad idea? Observations from an on-road study. Applied Ergonomics, 68,
138–145.
Biondi, F., Lohani, M., Hopman, R., Mills, S., Cooper, J. L., & Strayer, D. L. (2018). 80 MPH
and out-of-the-loop: Effects of real-world semi-automated driving on driver workload
and arousal. Proceedings of the Human Factors and Ergonomics Society Annual
Meeting (pp. 1878–1882). Santa Monica, CA: Human Factors and Ergonomics Society.
Bolstad, C. A. (2001). Situation awareness: Does it change with age? Proceedings of the
Human Factors and Ergonomics Society 45th Annual Meeting (pp. 272–276). Santa
Monica, CA: Human Factors and Ergonomics Society.
Borowsky, A. & Oron-Gilad, T. (2013). Exploring the effects of driving experience on hazard
awareness and risk perception via real-time hazard identification, hazard classification,
and rating tasks. Accident Analysis & Prevention, 59, 548–565.
Borowsky, A., Shinar, D., & Oron-Gilad, T. (2010). Age and skill differences in driving
related hazard perception. Accident Analysis & Prevention, 42, 1240–1249.
Briggs, G. F., Hole, G. J., & Land, M. F. (2011). Emotionally involving telephone conversa-
tions lead to driver error and visual tunnelling. Transportation Research Part F: Traffic
Psychology and Behaviour, 14, 313–323.
Caird, J. K., Simmons, S. M., Wiley, K., Johnston, K. A., & Horrey, W. J. (2018). Does talking
on a cell phone, with a passenger, or dialing affect driving performance? An updated
systematic review and meta-analysis of experimental studies. Human Factors, 60(1),
101–133.
Carsten, O., Lai, F. C. H., Barnard, Y., Jamson, A. H., & Merat, N. (2012). Control task sub-
stitution in semiautomated driving: Does it matter what aspects are automated? Human
Factors, 54(5), 747–761.
Casson, R. W. (1983). Schema in cognitive anthropology. Annual Review of Anthropology,
12, 429–462.
Chaparro, A., Groff, L., Tabor, K., Sifrit, K., & Gugerty, L. J. (1999). Maintaining situa-
tional awareness: The role of visual attention. Proceedings of the Human Factors and
Ergonomics Society 43rd Annual Meeting (pp. 1343–1347). Santa Monica, CA: Human
Factors and Ergonomics Society.
Chapman, P. R. & Underwood, G. (1998). Visual search of driving situations: Danger and
experience. Perception, 27, 951–964.
Charlton, S. G. & Starkey, N. J. (2011). Driving without awareness: The effects of practice
and automaticity on attention and driving. Transportation Research Part F: Traffic
Psychology and Behaviour, 14(6), 456–471.
Charlton, S. G. & Starkey, N. J. (2013). Driving on familiar roads: Automaticity and inatten-
tion blindness. Transportation Research Part F: Traffic Psychology and Behaviour,
19, 121–133.
Damos, D. & Wickens, C. D. (1980). The acquisition and transfer of time-sharing skills. Acta
Psychologica, 6, 569–577.
Davies, D. R. & Parasuraman, R. (1980). The Psychology of Vigilance. London: Academic
Press.
de Winter, J. C., Happee, R., Martens, M. H., & Stanton, N. A. (2014). Effects of adaptive
cruise control and highly automated driving on workload and situation awareness: A
review of empirical evidence. Transportation Research Part F: Traffic Psychology and
Behaviour, 27, 196–217.
Drews, F. A., Yazdani, H., Godfrey, C. N., Cooper, J. M., & Strayer, D. L. (2009). Text mes-
saging during simulated driving. Human Factors, 51(5), 762–770.
Endsley, M. R. (1988). Design and evaluation for situation awareness enhancement.
Proceedings of the Human Factors Society 32nd Annual Meeting (pp. 97–101). Santa
Monica, CA: Human Factors Society.
Endsley, M. R. (1990). A methodology for the objective measurement of situation aware-
ness. Situational Awareness in Aerospace Operations (AGARD-CP-478) (pp. 1/1–1/9).
Neuilly Sur Seine, France: NATO - AGARD.
Endsley, M. R. (1993). A survey of situation awareness requirements in air-to-air combat
fighters. International Journal of Aviation Psychology, 3(2), 157–168.
Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human
Factors, 37(1), 32–64.
Endsley, M. R. (2006). Expertise and situation awareness. In K. A. Ericsson, N. Charness,
P. J. Feltovich, & R. R. Hoffman (Eds.), The Cambridge Handbook of Expertise and
Expert Performance (pp. 633–651). New York: Cambridge University Press.
Endsley, M. R. (2015). Situation awareness misconceptions and misunderstandings. Journal
of Cognitive Engineering and Decision Making, 9(1), 4–32.
Endsley, M. R. (2017a). Autonomous driving systems: A preliminary naturalistic study of the
Tesla Model S. Journal of Cognitive Engineering and Decision Making, 11(3), 225–238.
Endsley, M. R. (2017b). From here to autonomy: Lessons learned from human-automation
research. Human Factors, 59(1), 5–27.
Endsley, M. R. (2018). Situation awareness in future autonomous vehicles: Beware the unex-
pected. Proceedings of the 20th Congress of the International Ergonomics Association
(pp. 303–309). Florence, Italy: Springer.
Endsley, M. R. (2019). The limits of highly autonomous vehicles: An uncertain future.
Ergonomics, 62(4), 496–499.
Janis, I. L. (1982). Decision making under stress. In L. Goldberger & S. Breznitz (Eds.),
Handbook of Stress: Theoretical and Clinical Aspects (pp. 69–87). New York: The
Free Press.
Jeon, M., Walker, G. N., & Gable, T. M. (2014). Anger effects on driver situation awareness
and driving performance. Presence, 23(1), 71–89.
Johannsdottir, K. R. & Herdman, C. M. (2010). The role of working memory in supporting
drivers’ situation awareness for surrounding traffic. Human Factors, 52(6), 663–673.
Kaber, D., Sangeun, J., Zahabi, M., & Pankok, C. (2016). The effect of driver cognitive abili-
ties and distractions on situation awareness and performance under hazard conditions.
Transportation Research Part F: Traffic Psychology and Behaviour, 42(1), 177–194.
Kaber, D. B. & Endsley, M. R. (2004). The effects of level of automation and adaptive auto-
mation on human performance, situation awareness and workload in a dynamic control
task. Theoretical Issues in Ergonomic Science, 5(2), 113–153.
Kass, S. J., Cole, K. S., & Stanny, C. J. (2007). Effects of distraction and experience on
situation awareness and simulated driving. Transportation Research Part F: Traffic
Psychology and Behaviour, 10, 321–329.
Keinan, G. & Friedland, N. (1987). Decision making under stress: Scanning of alternatives
under physical threat. Acta Psychologica, 64, 219–228.
Kircher, K. & Ahlstrom, C. (2017). Minimum required attention: A human-centered approach
to driver inattention. Human Factors, 59(3), 471–484.
Large, D. R., Burnett, G. E., Morris, A., Muthumani, A., & Matthias, R. (2017). Design
implications of drivers’ engagement with secondary activities during highly-automated
driving – A longitudinal simulator study. Proceedings of the Road Safety and Simulation
International Conference (RSS2017), The Hague, Netherlands: RSS.
LaRue, G. S., Rakotonirainy, A., & Pettitt, A. N. (2011). Driving performance impairments
due to hypovigilance on monotonous roads. Accident Analysis & Prevention, 43(6),
2037–2046.
Lee, J. D. & See, K. A. (2004). Trust in automation: Designing for appropriate reliance.
Human Factors, 46(1), 50–80.
Lee, Y. C., Lee, J. D., & Boyle, L. N. (2007). Visual attention in driving: The effects of cogni-
tive load and visual disruption. Human Factors, 49(4), 721–733.
Lewis, B. A., Eisert, J. L., & Baldwin, C. L. (2018). Validation of essential acoustic param-
eters for highly urgent in-vehicle collision warnings. Human Factors, 60(2), 248–261.
Logan, G. D. (1988). Automaticity, resources and memory: Theoretical controversies and
practical implications. Human Factors, 30(5), 583–598.
Ma, R. & Kaber, D. (2005). Situation awareness and workload in driving while using adap-
tive cruise control and a cell phone. International Journal of Industrial Ergonomics,
35, 939–953.
Ma, R. & Kaber, D. (2007). Situation awareness and driving performance in a simulated
navigation task. Ergonomics, 50(8), 1351–1364.
Ma, R., Sheik-Nainar, M. A., & Kaber, D. B. (2005). Situation awareness in driving while
using adaptive cruise control and a cell phone. Proceedings of the Human Factors and
Ergonomics Society 49th Annual Meeting (pp. 381–385). Santa Monica, CA: Human
Factors and Ergonomics Society.
Manzey, D., Reichenbach, J., & Onnasch, L. (2012). Human performance consequences of
automated decision aids: The impact of degree of automation and system experience.
Journal of Cognitive Engineering and Decision Making, 6, 57–87.
Matthews, M. L., Bryant, D. J., Webb, R. D., & Harbluk, J. L. (2001). Model of situation
awareness and driving. Transportation Research Record, 1779, 26–32.
McKenna, F. & Crick, J. L. (1991). Hazard Perception in Drivers: A Methodology for Testing
and Training. Reading, UK: University of Reading, Transport and Road Research
Laboratory.
Selkowitz, A. R., Lakhmani, S. G., & Chen, J. Y. C. (2017). Using agent transparency to
support situation awareness of the autonomous squad member. Cognitive Systems
Research, 46, 13–25.
Sethumadhavan, A. (2009). Effects of automation types on air traffic controller situation
awareness and performance. Proceedings of the Human Factors and Ergonomics
Society 53rd Annual Meeting (pp. 1–5). Santa Monica, CA: Human Factors and
Ergonomics Society.
Slamecka, N. J. & Graf, P. (1978). The generation effect: Delineation of a phenomenon.
Journal of Experimental Psychology: Human Learning and Memory, 4(6), 592–604.
Sohn, Y. W. & Doane, S. M. (2004). Memory processes of flight situation awareness:
Interactive roles of working memory capacity, long-term working memory and exper-
tise. Human Factors, 46(3), 461–475.
Strayer, D. L. & Fisher, D. L. (2016). SPIDER: A framework for understanding driver distrac-
tion. Human Factors, 58(1), 5–12.
Strayer, D. L., Drews, F. A., & Johnston, W. A. (2003). Cell phone induced failures of visual
attention during simulated driving. Journal of Experimental Psychology: Applied, 9(1), 23–32.
Sulistyawati, K., Wickens, C. D., & Chui, Y. P. (2011). Prediction in situation awareness:
Confidence bias and underlying cognitive abilities. International Journal of Aviation
Psychology, 21(2), 153–174.
Treat, J. R., Tumbas, N. S., McDonald, S. T., Shinar, D., Hume, R. D., Mayer, R. E., ... Castellan,
N. J. (1979). Tri-level Study of the Causes of Traffic Accidents: Final Report Volume
I: Causal Factor Tabulations and Assessments. Institute for Research in Public Safety
(DOT HS-805). Bloomington, IN: Indiana University.
Treisman, A. & Paterson, R. (1984). Emergent features, attention and object perception.
Journal of Experimental Psychology: Human Perception and Performance, 10(1),
12–31.
Underwood, G. (2007). Visual attention and the transition from novice to advanced driver.
Ergonomics, 50(8), 1235–1249.
Underwood, G., Chapman, P., Bowden, K., & Crundall, D. (2002). Visual search while driv-
ing: Skill and awareness during inspection of the scene. Transportation Research Part F: Traffic Psychology and Behaviour, 5(2), 87–97.
Victor, T. W., Tivesten, E., Gustavsson, P., Johansson, J., Sangberg, F., & Ljung Aust, M.
(2018). Automation expectation mismatch: Incorrect prediction despite eyes on threat
and hands on wheel. Human Factors, 60(8), 1095–1116.
Wickens, C. D. (1992). Engineering Psychology and Human Performance (2nd ed.).
New York: Harper Collins.
Wickens, C. D. & Dixon, S. R. (2007). The benefits of imperfect diagnostic automation: A
synthesis of the literature. Theoretical Issues in Ergonomics Science, 8, 201–212.
Wiener, E. L. & Curry, R. E. (1980). Flight deck automation: Promises and problems.
Ergonomics, 23(10), 995–1011.
Yanko, M. R. & Spalek, T. M. (2014). Driving with the wandering mind: The effect that mind-
wandering has on driving performance. Human Factors, 56(2), 260–269.
Young, M. S. & Stanton, N. A. (2007). Back to the future: Brake reaction times for manual and automated vehicles. Ergonomics, 50, 46–58.
8 Allocation of Function to Humans and Automation and the Transfer of Control
Natasha Merat and Tyron Louw
University of Leeds
CONTENTS
Key Points .............................................................................................................. 153
8.1 Introduction .................................................................................................. 154
8.2 Defining FA .................................................................................................. 154
8.2.1 Allocating Responsibility.................................................................. 155
8.2.2 Allocating Authority to Take Responsibility for a Function............. 157
8.3 Defining the Driving Task: How Automation Changes FA .......................... 158
8.4 The Can and Why of Allocating Functions .................................................. 160
8.5 The Consequences of Inappropriate FA........................................................ 163
8.6 Transfer of FA in AVs ................................................................................... 165
8.7 Summary and Conclusions ........................................................................... 166
References .............................................................................................................. 167
KEY POINTS
• Allocation of functions to machines has traditionally been carried out in static environments, an approach that is challenging to apply in the dynamic driving domain.
• As more functions are allocated to automated vehicles, it is important to understand the dynamic and fluid relationship that exists between humans and machines, especially in SAE Level 2–4 vehicles.
• For highly automated vehicles to deliver on the promise of reducing human error in road-related crashes, it is important that system designers are aware
of the limitations and expectations of human users, to minimize the unex-
pected consequences of inappropriate function allocation.
• Safe and successful transfer of control from automated vehicles to
humans requires better knowledge of how human attention and vigilance
can be maintained during prolonged periods of automation, and how fac-
tors such as fatigue, distraction and complacency can be mitigated and
managed.
8.1 INTRODUCTION
The motor vehicle has arguably undergone the most fundamental structural change in its history during the past 20 years, with the addition of many primary (active) and secondary (passive) safety systems, implemented largely to improve road safety and to reduce vehicle-related injuries and deaths (Peden
et al., 2004). Examples of primary (active) safety systems include electronic stabil-
ity control, automatic emergency braking, and antilock brakes, which help reduce
the likelihood of crashes (Scanlon, Sherony, & Gabler, 2017; Page, Hermitte, &
Cuny, 2011). Secondary (passive) safety systems include airbags, seatbelts, and more
advanced vehicle body engineering solutions, which mitigate the impact of crashes
(Richter, Pape, Otte, & Krettek, 2005; Frampton & Lenard, 2009).
In addition to these safety systems, today's vehicles incorporate features that provide drivers with higher levels of assistance in performing the driving task: guiding, warning, and informing the driver, as well as taking over particular driving functions, replacing the driver's physical control of the vehicle for certain periods. As more tasks are taken away from the driver and controlled by the vehicle's various systems, a number of human factors implications of this change of role need to be considered, to fully understand the effect of these additional systems on driver behavior and performance, and the effect of any consequent changes on driver and road safety (see, e.g., this Handbook, Chapters 1, 2).
To understand whether, and how, such allocation of function(s) to the vehicle’s
various systems is likely to influence the driving task, this chapter begins by provid-
ing a short overview of Function Allocation (FA) between humans and machines,
initially considered during human–machine interaction studies in other domains. We
then set the scene by outlining the functions required by humans in a conventional
driving task, using several well-established models in this context, before discussing
how, when, and why, different functions are allocated in an automated vehicle (AV).
Specifically, we examine the rationale used by designers and engineers to allocate
functions to each actor in an AV, followed by an overview of how, and when, drivers
are informed about this allocation. Furthermore, we discuss what the consequences
of such allocation of function might be on driver behavior and performance, how
these may be managed, as well as the broader effect such consequences might have
on road safety.
8.2 DEFINING FA
When considering the interaction and cooperation between humans and machines,
function (or task) allocation simply considers whether, why, and how a function/task, or a series of related functions/tasks, is allocated to, and must be managed by, the human, the machine, or a combination of the two, in order for a particular goal to be
achieved (Bouzekri, Canny, Martinie, Palanque, & Gris, 2018). According to Pankok
and Bass (2017), FA is “a process which examines a list of functions that the human–
machine system needs to execute in order to achieve operational requirements, and
determines whether the human, machine (i.e., automation), or some combination
should implement each function” (p. A-7). Historically, this allocation of function
has been fixed in nature, with the MABA-MABA ("Men Are Better At"—"Machines Are Better At") lists (Price, 1985) revealing the assumption that either the human or the machine would be superior for a particular function. As initially outlined by Fitts et al. (1951; see Table 8.1), this allocation is partly determined by the ability of each actor to successfully achieve the required task, with humans being generally better at tasks requiring judgment, reasoning, and improvisation, while machines are generally better at repetitive tasks, or those that require force, precision, and/or a quick response (de Winter & Hancock, 2015). However, although this type of FA continues to be considered (de Winter & Dodou, 2014), it has been widely criticized (Jordan, 1963; Fuld, 1993; Hancock & Scallen, 1996; Sheridan, 2000; Dekker & Woods, 2002), because it assumes that FA is static, and because it is acontextual, insensitive to the influence of environmental variables (Scallen & Hancock, 2001).

TABLE 8.1
The Original Fitts List

Humans appear to surpass present-day machines with respect to the following:
1. Ability to detect a small amount of visual or acoustic energy
2. Ability to perceive patterns of light or sound
3. Ability to improvise and use flexible procedures
4. Ability to store very large amounts of information for long periods and to recall relevant facts at the appropriate time
5. Ability to reason inductively
6. Ability to exercise judgment

Present-day machines appear to surpass humans with respect to the following:
1. Ability to respond quickly to control signals and to apply great force smoothly and precisely
2. Ability to perform repetitive, routine tasks
3. Ability to store information briefly and then to erase it completely
4. Ability to reason deductively, including computational ability
5. Ability to handle highly complex operations, i.e., to do many different things at once
This view solidified as we began to understand the nature of work better, including
its context and environment, while also appreciating that users of a system have
different needs and information processing capabilities. Today, human factors inves-
tigations demonstrate that, for tasks requiring cooperation between machines and
humans, although assigning the right function to the right actor is important, other
factors must also be considered, to ensure safe and efficient task completion. These
include the number of functions assigned, as well as the frequency and sequence of
allocation of these functions. Also, it is essential that each actor assumes the appro-
priate responsibility, and authority, for taking control of, or assigning, a function.
8.2.1 Allocating Responsibility

For the human, awareness of the responsibility allocated to them can be established by reference to appropriate information and training about the task, or by the relay of suitable messages from the system during task completion, for instance, via relevant Human–Machine Interfaces (HMI). Assigning and assuming the right degree, and
type, of responsibility is likely to reduce errors and confusion. The transfer of this
responsibility between actors must also be achieved in a timely manner, and under
the correct circumstances, in order to avoid or reduce task error. For example, safety
may be affected if the human resumes responsibility for a task unnecessarily, when
it is being well-controlled and managed by the system (Noy, Shinar, & Horrey, 2018).
Equally, passing responsibility back to the human by the system in a timely manner
is important, to ensure that adequate mental and physical resources are available to
assume such responsibility (Louw, Kountouriotis, Carsten, & Merat, 2015).
Therefore, it is important that system engineers consider this allocation of respon-
sibility to the system and human carefully, communicating this information clearly,
to ensure the user is aware of their role, versus that of the system. Any allocation of
responsibility to the human is also done under the assumption that the human honors
that responsibility, and that there is a minimal likelihood of misuse or abuse of the
system’s functionality, which would result in reductions of their own and others’
safety. However, the automation must also be designed with the human’s limitations
in mind, to ensure that any likely failures can be appropriately managed by the user.
Part of the designers’ challenge is ensuring that users have the correct mental
model of system functionality (Sarter & Woods, 1995; see also, this Handbook,
Chapter 3) so that their responsibility for every stage of task completion is clear.
Research shows that the ability to assume responsibility for a particular task, or
a series of related tasks, also relies on users’ expectation, training, and experi-
ence (Flemisch et al., 2012). This responsibility may also shift, or be interrupted or
neglected, if the user is engaged in other competing tasks, which may or may not
be related to the user’s primary goal outcome. For example, in a recent driving
simulator study, we investigated driver response to “silent” failures in SAE Level
2 and 3 automated driving, where automation failure during a simulator drive was
not preceded by a takeover warning (Louw et al., 2019). This is an example of
when automation hands responsibility for the function back to the driver, due to an
unexpected/unknown failure, perhaps because the system encounters a scenario not
anticipated by its designer, such as absence/obstruction of lane markings used for
keeping the vehicle in its lane. Drivers completed two drives in a counterbalanced
order. In one drive, they were required to monitor the road and driving environ-
ment during automation, where attention to the road was maintained by asking
them to read the words on a series of road-based Variable Message Signs (VMS,
Level 2). For the other condition (Level 3), drivers performed an additional non-
driving related task (NDRT) during automation, the visual search-based Arrows
task (see Jamson & Merat, 2005). This task was presented on a screen near the
gear shift, obliging drivers to look down and away from the road. The VMS task
was also required in this drive, which meant that the drivers divided their attention
between the road, the VMS, and the NDRT.
When considering performance, results showed that, after the silent failure, a sig-
nificantly higher number of lane excursions were observed during the NDRT drive
(Level 3), and participants took longer to take over control. Participants also had a
more erratic pattern of eye movements after silent automation failure in the NDRT
drive, presumably because they were attempting to gather the most useful visual
information from the road ahead and the dash-based HMI, which contained infor-
mation about automation status (on/off). This example of a fluid and unsignalized shift of task control between machine and human illustrates the detrimental effects
of unclear responsibilities, especially if the driver’s attention is directed away
from the forward road, and if the accompanying HMI is confusing, rather than
assisting, the driver.
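The measures reported in this study, takeover time and lane excursions after a silent failure, are straightforward to derive from time-stamped simulator logs. The following minimal Python sketch illustrates one way of computing them; the log field names, intervention thresholds, and lane geometry constants are assumptions made for illustration, not the definitions used by Louw et al. (2019).

# Illustrative sketch: deriving takeover time and lane excursions from
# a hypothetical simulator log (a list of per-sample dicts). Thresholds
# and field names are assumptions, not those of the cited study.

LANE_HALF_WIDTH_M = 1.825    # assumed half lane width
VEHICLE_HALF_WIDTH_M = 0.9   # assumed half vehicle width

def takeover_time(samples, failure_t):
    """Seconds from the silent failure to the driver's first steering
    or braking input; None if the driver never intervenes."""
    for s in samples:
        if s["t"] >= failure_t and (abs(s["steer_deg"]) > 2.0 or s["brake"] > 0.05):
            return s["t"] - failure_t
    return None

def lane_excursions(samples, failure_t):
    """Number of distinct post-failure events in which any part of
    the vehicle leaves the lane."""
    count, outside = 0, False
    for s in samples:
        if s["t"] < failure_t:
            continue
        out = abs(s["lane_offset_m"]) + VEHICLE_HALF_WIDTH_M > LANE_HALF_WIDTH_M
        if out and not outside:
            count += 1
        outside = out
    return count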
One main reason for striving towards clearer allocation of responsibility for each
actor is to ensure that the source of any possible errors during task engagement can
be rapidly identified. Therefore, the main human factors challenge here is not only
for system designers to assign the correct responsibility to each actor, but also for
humans to be aware, and capable, of honoring this responsibility, through to comple-
tion of the task. Of course, problems arise when this allocation is unreasonable, or
when sustained commitment by the human is not possible, for example, when fatigue
and distraction creep in.
A key challenge in this context is understanding how humans can remain vigilant
and engaged during prolonged periods of automation use, sustaining the ability to
successfully resume responsibility for the function (for instance, due to failures; see also, this Handbook, Chapter 6). A consideration of how the system can determine this vigilance and capability of humans is also important, as are the ethical consequences of incorrect FA, or authority to assume control. For example, should an
appropriately functioning system cede control to an impaired driver, if asked to do so?
Having provided a generic overview of FA and summarized the implications of
allocating responsibility and authority to each agent in complex scenarios involving
human–machine interactions, it is now important to understand how this general
overview relates specifically to the interaction of humans with higher levels of vehi-
cle automation. This is especially important when considering functions that are cur-
rently shared between the human and the vehicle, where responsibility and authority
are assigned to/assumed by either agent. Before considering this FA in AVs, the next
section provides a brief overview of models developed for describing the driver’s role
during manual control of driving and describes how these roles are likely to change
as a result of allocating functions to the vehicle.
FIGURE 8.1 Driver’s monitoring role in manual control of the driving task. (From Merat
et al., 2019; based on Michon’s model, 1985; © 2019 Springer. Reprinted with Permission of
Springer Publications.)
8.3 DEFINING THE DRIVING TASK: HOW AUTOMATION CHANGES FA

Allocating driving functions to automation is associated with a number of well-documented errors, including user complacency (Parasuraman & Riley, 1997; this Handbook, Chapter 4), loss of
skill (Hampton, 2016; this Handbook, Chapter 10), and degraded situation aware-
ness (Salmon, Walker, & Stanton, 2016; this Handbook, Chapters 7, 13). Some of
these errors are thought to be exacerbated by lack of suitable feedback from the
system via its HMI (Lee & Seppelt, 2009). Here, we distinguish between monitor-
ing of systems controlled by the human, where the perceptual-motor (physical) link
is still preserved in manual driving (as shown in Figure 8.1), compared to where an
automated system’s performance is monitored without this physical link. Indeed,
there is also a need to describe precisely what monitoring refers to in this context.
While “monitoring” is considered synonymous with “checking” and “observing” a
system’s performance, it is not simply a case of verifying that some level of physical/
perceptual-motor/cognitive engagement is maintained with the system (such as
establishing whether eyes are on the road and hands are on the steering wheel).
Instead, as Victor et al. (2018) highlight, an additional cognitive element (an element
beyond paying attention) may also be required as part of such monitoring, to ensure
that the user is capable of “understanding in the mind the need for action control.”
Therefore, as more functionality is taken over by the automated system, and
the role of the driver changes to that of a supervisor of these functions, there is
a need for additional aids in the vehicle, to help the human controller with their
altered role. These include interfaces that offer intuitive and accurate information
about the automated system’s functionality, informing the driver of likely changes
in this functionality. This information should also be timely, provided with ade-
quate notice, and should not surprise, distract, or overload the user (see also, this
Handbook, Chapter 15). Drivers may also need assistance in managing the func-
tion that is being transferred back from the automated system, since skill degra-
dation and reduced situation awareness are known to accompany such FA to the
vehicle (Endsley, 2017), especially after longer periods of system use (Trösterer
et al., 2017). Here, driver monitoring systems will be a useful addition to the vehicle
FIGURE 8.2 (See color insert.) A proposed model showing the changing position and role
of the human due to the introduction of automated functions in the driving task.
(this Handbook, Chapter 11), to ensure that drivers are vigilant, and capable of hon-
oring their responsibilities (this Handbook, Chapters 9, 10). Figure 8.2 illustrates
the effect of these changes on the driver’s role, altering the original models of driver
behavior, developed for conventional driving. As more and more functions are allo-
cated to systems, the driver’s physical control of the vehicle decreases. Depending
on the level of automation engaged, drivers’ reliance on good warning and commu-
nication from the different HMIs will increase. To ensure there is a suitable degree
of monitoring of this HMI, and that important information and warnings are not
missed by the driver, an informative and accurate driver monitoring system (DMS),
which manages the human–HMI–vehicle link, is required. In addition to acquiring
more accurate data about driver state for such DMS, future research must consider
opportunities for informative and intuitive HMI for highly AVs (Carsten & Martens,
2018). This knowledge can also be used to inform extensive, and regular, training
of the human driver, to ensure they have a good understanding and mental model of
system capabilities and limitations.
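As a rough illustration of how a DMS might operationalize a "suitable degree of monitoring," the Python sketch below implements a simple buffer-style rule, loosely inspired by buffer models of driver attention: a reserve drains during off-road glances, recovers during on-road glances, and triggers an HMI warning when exhausted. The constants, and the binary gaze classification itself, are hypothetical simplifications rather than a validated design.

# Illustrative DMS sketch: an attention reserve that drains while the
# driver's gaze is off the road and refills while it is on the road.
# All constants are hypothetical; a production DMS would be richer.

BUFFER_MAX_S = 2.0     # assumed tolerance for continuous off-road gaze
RECOVERY_RATE = 0.5    # reserve seconds regained per second on-road

class AttentionBuffer:
    def __init__(self):
        self.level = BUFFER_MAX_S

    def update(self, gaze_on_road, dt):
        """Advance the buffer by dt seconds; return 'warn' or 'ok'."""
        if gaze_on_road:
            self.level = min(BUFFER_MAX_S, self.level + RECOVERY_RATE * dt)
        else:
            self.level = max(0.0, self.level - dt)
        return "warn" if self.level == 0.0 else "ok"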
8.4 THE CAN AND WHY OF ALLOCATING FUNCTIONS

Deciding on an appropriate FA requires consideration of which functions are allocated, when it is suitable for this allocation to take place, and who should be responsible for this allocation. Some regard for how different functions interact with each other, and their effect on human performance, is also important, where it can be argued that the combined effect on performance is not equal to the sum of its parts.
As discussed above, when considering the allocation of functions between humans
and machines, it is important to establish what task is being allocated (i.e., the nature
of the task/function), as well as the degree of involvement in task management for
each agent (i.e., how responsibility is assumed or shared between agents). However,
as part of this discussion, it is also essential to establish whether or not a function can
be allocated (i.e., can the machine perform the function/task, at least as well as, or
better than, the human?). There are also safety, ethical, and moral issues when decid-
ing whether a function should be allocated, that go beyond the system’s technical
capabilities. This is also linked to how much authority a designer gives the system
for taking responsibility for the task. Here, it seems essential for engineers and system
designers to have a good appreciation of the unintended consequences of this FA on
humans, which, as outlined above (and further below), may lead to user confusion,
distraction, fatigue, loss of skill, or complacency. These unintended consequences
are also closely tied to system failures, which may occur due to unforeseen techno-
logical limitations (not yet known by designers), or an unintentionally lax testing
protocol, as well as user (mis)understanding of system capabilities.
In the case of vehicle automation, the manufacturer’s motivations, and rationale,
for FA is partly motivated by the challenges facing our congested and polluted cities,
and a desire to reduce transport-related emissions and increase throughput, while
also enhancing driver comfort and improving road safety. However, based on our
understanding of how automation affects human performance, the aspiration to increase road safety by removing "the human element" has become increasingly ironic.
For example, the “out-of-the-loop” problems associated with a lack of engagement
in the driving task (Merat et al., 2019; this Handbook, Chapter 21), especially when
sparse and uncomplicated road conditions provide drivers with a false sense of secu-
rity about system capabilities, are leading to real-world crashes (Vlasic & Boudette,
2016; Silverstein, 2019; Stewart, 2019).
Another important motivation for allocation of more functions to the system,
which in turn increases the level of vehicle automation, is the desire to release
humans from the monotonous aspects of the driving task, providing freedom for
engagement in other (more productive) tasks. This freedom to attend to other tasks is
linked to economic benefits, estimated to be worth tens of billions of dollars (Leech,
Whelan, Bhaiji, Hawes, & Scharring, 2015). Again, if not planned and implemented
well, this task substitution can lead to the same human factors problems outlined
above, in addition to an eventual loss of skill, with prolonged system engagement
(Carsten et al., 2012).
Different approaches have been used for categorizing the capabilities of auto-
mated driving systems. For example, the original categorizations of levels of automation, proposed by control engineers, typically account for the locus of control (human or
automation) and how information is presented to the human (cf. Sheridan & Verplank,
1978). There are also more driving-specific levels of automation, which describe, at
each level of automation, what aspects of the primary driving task are performed by
the human or the system (SAE, 2016a). Finally, when considered within a human
factors context, systems can also be classified based on their correspondence to mod-
els of human information processing, such as sensory perception, working memory,
decision-making, and response selection (Parasuraman, Sheridan, & Wickens, 2000;
this Handbook, Chapter 21).
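The differences between these categorization schemes can be made concrete with a small data structure. The Python sketch below encodes a simplified reading of the SAE levels: which agent performs sustained vehicle control, which performs object and event detection and response (OEDR), and which provides the fallback. It is an illustration only; SAE (2016a) remains the normative source.

# Simplified encoding of the SAE driving-automation levels; consult
# SAE (2016a) for the normative definitions.

SAE_LEVELS = {
    0: {"control": "human",  "oedr": "human",  "fallback": "human"},
    1: {"control": "shared", "oedr": "human",  "fallback": "human"},  # lateral OR longitudinal
    2: {"control": "system", "oedr": "human",  "fallback": "human"},
    3: {"control": "system", "oedr": "system", "fallback": "human, on request"},
    4: {"control": "system", "oedr": "system", "fallback": "system, within its ODD"},
    5: {"control": "system", "oedr": "system", "fallback": "system"},
}

def must_driver_monitor(level):
    """At Levels 0-2 the human retains the OEDR role and must
    monitor the environment continuously."""
    return SAE_LEVELS[level]["oedr"] == "human"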
While each of these approaches has a particular application, and implies some
pre-determined FA, the main failing of such categorizations is that, for the most part, they focus on the capabilities of systems, rather than those of their users. They also
fail to identify the influence of system and human on one another, when the two have
to work together, as is currently the case for vehicle automation. Therefore, while
simply considering a vehicle’s ability based on its stated levels of automation would
be a desirable solution, it does not represent the ideal approach for determining an
appropriate FA.
Here, it can be argued that the MABA-MABA type lists (Price, 1985; see Table 8.1) represent the skills and abilities of machines and humans in the best-
case scenarios. However, in reality, these abilities cannot be maintained in per-
petuity. For example, systems will have specified Operational Design Domains
(ODD), which will determine whether, where, and when they reach their limita-
tions. These limitations are unlikely to be resolved in the near future, even though substantial developments are being made daily, with automotive companies, Tier 1 suppliers, and newcomers large and small investing heavily in this area. However, widespread penetration of vehicles with Level 5 (SAE, 2016a)
automated driving capability, for all road types and environments, is not likely for
some decades, before which humans will still be involved in, and responsible for,
different aspects of the driving task.
The continuous nature of driving, and the constantly changing environment in
which it is performed, means that the moment-to-moment driving tasks and respon-
sibilities will also change. Consequently, the allocation of responsibility for some
functions/tasks will need to transfer between the AV and the human driver, depend-
ing on the capability of the system, and the particular driving environment. To illus-
trate, using an example from limited ODD, an AV may be able to operate at Level 2
in most areas, at Level 3 in some areas, and at Level 4 in only a few areas. If this
vehicle moves from an area where it can function at Level 4, to one where it can func-
tion at Level 2, the human is required to be aware of this change, and start monitor-
ing system and road environment for Level 2 functionality. However, if the vehicle
moves from a Level 4 to a Level 0 area, this change in ODD would require a funda-
mental shift in the human driver’s responsibilities, which will not only require moni-
toring the road and vehicle, but also resuming lateral and longitudinal control of the
vehicle. This transition may also involve the responsibility for obstacle detection and
avoidance. Therefore, functional capability alone is not adequate, with different (and
changing) environmental settings also playing a role in this relationship. Dynamic allocation of responsibility and authority between human and machine is therefore necessary for some functions, until the system can satisfactorily perform
in all possible driving conditions. Here, an ideal solution for system functionality is
its ability to recognize its own limitations, as the environment changes, informing
the human, in sufficient time (Figure 8.2). System functionality should therefore be available only in the correct environmental setting, and otherwise be inoperative.
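As a toy illustration of such ODD-gated allocation, the Python sketch below caps the available automation level by road segment and issues an HMI notice ahead of any downward transition. The segment categories, level limits, and 30-second notice horizon are invented for the example; a real system would derive them from detailed maps and live sensor confidence.

# Illustrative sketch of ODD-gated function allocation. Segment data
# and the notice horizon are assumptions for the example.

ODD_MAX_LEVEL = {"motorway": 4, "arterial": 2, "urban_core": 0}
NOTICE_HORIZON_S = 30.0  # assumed minimum notice before a downward transition

def allowed_level(segment, requested):
    """Cap the requested automation level at what the ODD supports."""
    return min(requested, ODD_MAX_LEVEL.get(segment, 0))

def transition_notice(current_level, next_segment, time_to_segment_s):
    """Return an HMI message when a lower-capability segment is near."""
    upcoming = ODD_MAX_LEVEL.get(next_segment, 0)
    if upcoming < current_level and time_to_segment_s <= NOTICE_HORIZON_S:
        return f"Takeover ahead: Level {upcoming} zone in {time_to_segment_s:.0f} s"
    return None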
Another consideration in this context is that the hardware and software utilized
in automated driving systems can change rapidly and frequently. For example, some
OEMs allow “over-the-air” download updates of automated driving software (Barry,
2018). This type of instant change may alter the vehicle's behavior in certain sce-
narios, also changing system capability, for example, by activating latent hardware,
which may enable new features. The nature of this update creates problems with
“type approval” of vehicles, as well as presenting significant human factors chal-
lenges. For example, this approach requires users to update their mental model of the
functionality of the system, which presents a higher risk of mode confusion. These
issues can be of concern for driver training, especially novices, since research on
training in other domains, such as aviation, has shown that some novice pilots are
biased towards trusting automation over their own judgment (Parasuraman & Riley,
1997). Casner and Hutchins (2019) argue that, for successful use of new automated
systems in the driving domain, a comparable level of consideration, to that used in
aviation, should be given to the training protocols developed for human drivers, to
ensure they are familiar with the capabilities of the system, appreciating their own
capabilities and limitations, as well as having a good understanding of the “human-
automation team.”
In sum, the key point to consider here is that, while a function may be allocated in
good faith for a particular system, or in a particular context, as soon as that context
changes, the reallocation of a function may actually cause more harm than good, if
it is not properly understood or implemented by its user. It is therefore important to
fully appreciate these aspects of the technology’s fallibility and propensity to change
rapidly, when deciding who does what, and when, and also how the authority for
resuming this responsibility is determined. At the moment, the rapid implementation
of automation in driving means that humans are left to do the tasks that machines
are not yet, or perhaps never will be, able to achieve, a concept known as the leftover
approach (Bailey, 1989). As Chapanis (1970) aptly argues, it is our job as human fac-
tors researchers/engineering psychologists to ensure that these tasks are manageable
within human capabilities.
Ward, 2000). Of course, these issues may also arise in compliant drivers (i.e., driv-
ers who are appropriately monitoring their Level 2 vehicle). Therefore, the primary
concern of inappropriate allocation of tasks is that errors and risks are not detected
and dealt with, either by the human or the system, compromising safety. Safety is
also affected if there is inappropriate intervention by the human, for instance, if the efficient operation of the system is unnecessarily interrupted by the human resuming control of steering, disturbing the safe trajectory of the vehicle and resulting in a collision. Such an intervention would also be of concern if the human is incapacitated (e.g., due to fatigue
or intoxication). Research needs to establish the risk of the above occurrence in the
context of when, and how, drivers will have to interact with the system.
REFERENCES
Bailey, R. W. (1989). Human Performance Engineering: Using Human Factors/Ergonomics
to Achieve Computer System Usability. Upper Saddle River, NJ: Prentice-Hall.
Bainbridge, L. (1983). Ironies of automation. In G. Johannsen & J. E. Rijnsdorp (Eds.), Analysis,
Design and Evaluation of Man–Machine Systems (pp. 129–135). Oxford: Pergamon.
Barry, K. (2018). Automakers Embrace Over-the-Air Updates, But Can We Trust Digital
Car Repair? Retrieved from www.consumerreports.org/automotive-technology/
automakers-embrace-over-the-air-updates-can-we-trust-digital-car-repair/
Bouzekri, E., Canny, A., Martinie, C., Palanque, P., & Gris, C. (2018). Using task descriptions
with explicit representation of allocation of functions, authority and responsibility to
design and assess automation. IFIP Working Conference on Human Work Interaction
Design (pp. 36–56). Berlin: Springer.
Brown, I. D. (1994). Driver fatigue. Human Factors, 36(2), 298–314.
Byrne, E. A. & Parasuraman, R. (1996). Psychophysiology and adaptive automation.
Biological Psychology, 42(3), 249–268.
Carsten, O., Lai, F. C., Barnard, Y., Jamson, A. H., & Merat, N. (2012). Control task substi-
tution in semiautomated driving: Does it matter what aspects are automated? Human
Factors, 54(5), 747–761.
Carsten, O. & Martens, M. H. (2018). How can humans understand their automated cars?
HMI principles, problems and solutions. Cognition, Technology & Work, 21(1), 3–20.
Casner, S. M. & Hutchins, E. L. (2019). What do we tell the drivers? Toward minimum driver
training standards for partially automated cars. Journal of Cognitive Engineering and
Decision Making, 13(2), 55–66. doi:10.1177/1555343419830901
Chapanis, A. (1970). Relevance of physiological and psychological criteria to man-machine
systems: The present state of the art. Ergonomics, 13(3), 337–346.
De Boer, R. & Dekker, S. (2017). Models of automation surprise: Results of a field survey in
aviation. Safety, 3(3), 20.
de Winter, J. C. F. & Dodou, D. (2014). Why the Fitts list has persisted throughout the history
of function allocation. Cognition, Technology & Work, 16(1), 1–11.
de Winter, J. C. F. & Hancock, P. A. (2015). Reflections on the 1951 Fitts list: Do humans
believe now that machines surpass them? Procedia Manufacturing, 3, 5334–5341.
Dekker, S. W. & Woods, D. D. (2002). MABA-MABA or abracadabra? Progress on human–
automation co-ordination. Cognition, Technology & Work, 4(4), 240–244.
Endsley, M. R. (2017). Autonomous driving systems: A preliminary naturalistic study of the
Tesla Model S. Journal of Cognitive Engineering and Decision Making, 11(3), 225–238.
Endsley, M. R. & Garland, D. J. (2000). Pilot situation awareness training in general avia-
tion. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 44,
357–360.
Federal Aviation Administration (2013). Safety Alert for Operators (No. 13002). Retrieved
from www.faa.gov/other_visit/aviation_industry/airline_operators/airline_safety/safo/
all_safos/media/2013/SAFO13002.pdf
Fitts, P. M., Viteles, M. S., Barr, N. L., Brimhall, D. R., Finch, G., Gardner, E. ,... Stevens,
S. S. (1951). Human Engineering for an Effective Air-Navigation and Traffic-Control
System. Columbus, OH: Ohio State University Research Foundation.
Flemisch, F., Heesen, M., Hesse, T., Kelsch, J., Schieben, A., & Beller, J. (2012). Towards
a dynamic balance between humans and automation: Authority, ability, responsibil-
ity and control in shared and cooperative control situations. Cognition, Technology &
Work, 14(1), 3–18.
Frampton, R. & Lenard, J. (2009). The potential for further development of passive safety.
Annals of Advances in Automotive Medicine/Annual Scientific Conference (Vol. 53,
p. 51). Chicago, IL: Association for the Advancement of Automotive Medicine.
Fuld, R. B. (1993). The fiction of function allocation. Ergonomics in Design, 1(1), 20–24.
Gary, C. S., Lakhiani, C., Defazio, M. V., Masden, D. L., & Song, D. H. (2018). Caution with
use: Smartphone-related distracted behaviors and implications for pedestrian trauma.
Plastic and Reconstructive Surgery, 142(3), 428e.
Gold, C., Damböck, D., Lorenz, L., & Bengler, K. (2013). “Take over!” How long does it take
to get the driver back into the loop? Proceedings of the Human Factors and Ergonomics
Society Annual Meeting, 57, 1938–1942.
Gonçalves, R., Louw, T., Madigan, R., & Merat, N. (2019). Using Markov chains to under-
stand the sequence of drivers’ gaze transitions during lane-changes in automated
driving. Proceedings of the 10th International Driving Symposium on Human Factors
in Driver Assessment, Training, and Vehicle Design (pp. 217–223). Santa Fe, NM: Iowa
Public Policy Center.
Goode, J. H. (2003). Are pilots at risk of accidents due to fatigue? Journal of Safety Research,
34(3), 309–313.
Hampton, M. E. (2016). Memorandum: Enhanced FAA Oversight Could Reduce Hazards
Associated with Increased Use of Flight Deck Automation. Washington, DC: U.S.
Dept. of Transportation.
Hancock, P. A. & Scallen, S. F. (1996). The future of function allocation. Ergonomics in
Design, 4(4), 24–29.
Hancock, P. A. & Warm, J. S. (1989). A dynamic model of stress and sustained attention.
Human Factors, 31, 519–537.
Hollnagel, E. & Bye, A. (2000). Principles for modelling function allocation. International
Journal of Human-Computer Studies, 52(2), 253–265.
Hollnagel, E. & Woods, D. D. (2005). Joint Cognitive Systems: Foundations of Cognitive
Systems Engineering. Boca Raton, FL: CRC Press.
Horberry, T., Anderson, J., Regan, M. A., Triggs, T. J., & Brown, J. (2006). Driver distraction:
The effects of concurrent in-vehicle tasks, road environment complexity and age on
driving performance. Accident Analysis & Prevention, 38(1), 185–191.
Jamson, A. H. & Merat, N. (2005). Surrogate in-vehicle information systems and driver behav-
iour: Effects of visual and cognitive load in simulated rural driving. Transportation
Research Part F: Traffic Psychology and Behaviour, 8(2), 79–96.
Jordan, N. (1963). Allocation of functions between man and machines in automated systems.
Journal of Applied Psychology, 47(3), 161.
Körber, M., Cingel, A., Zimmermann, M., & Bengler, K. (2015). Vigilance decrement and
passive fatigue caused by monotony in automated driving. Procedia Manufacturing, 3,
2403–2409.
Lee, J. D. & Seppelt, B. D. (2009). Human factors in automation design. In S. Nof (Ed.),
Springer Handbook of Automation (pp. 417–436). Berlin: Springer.
Lee, J. D., Wickens, C. D., Liu, Y., & Boyle, L. N. (2017). Designing for People: An
Introduction to Human Factors Engineering. Charleston, SC: CreateSpace.
Leech, J., Whelan, G., Bhaiji, M., Hawes, M., & Scharring, K. (2015). Connected and
Autonomous Vehicles - The UK Economic Opportunity. Amstelveen, The Netherlands: KPMG.
Loukopoulos, L. D., Dismukes, R. K., & Barshi, I. (2001). Cockpit interruptions and distrac-
tions: A line observation study. Proceedings of the 11th International Symposium on
Aviation Psychology (pp. 1–6). Columbus, OH: Ohio State University Press.
Louw, T., Kountouriotis, G., Carsten, O., & Merat, N. (2015). Driver inattention during vehicle
automation: How does driver engagement affect resumption of control? 4th International
Conference on Driver Distraction and Inattention. Sydney: ARRB Group.
Louw, T., Kuo, J., Romano, R., Radhakrishnan, V., Lenné, M., & Merat, N. (2019). Engaging
in NDRTs affects drivers’ responses and glance patterns after silent automation fail-
ures. Transportation Research Part F: Traffic Psychology and Behaviour, 62, 870–882.
Louw, T., Madigan, R., Carsten, O., & Merat, N. (2017b). Were they in the loop during auto-
mated driving? Links between visual attention and crash potential. Injury Prevention,
23(4), 281–286.
Louw, T., Markkula, G., Boer, E., Madigan, R., Carsten, O., & Merat, N. (2017a). Coming
back into the loop: Drivers’ perceptual-motor performance in critical events after auto-
mated driving. Accident Analysis & Prevention, 108, 9–18.
Louw, T., Merat, N., & Jamson, H. (2015). Engaging with highly automated driving: To be or
not to be in the loop? Proceedings of 8th International Driving Symposium on Human
Factors in Driver Assessment, Training and Vehicle Design (pp. 190–196). Snowbird,
UT: Iowa Public Policy Center.
Ma, R. & Kaber, D. B. (2005). Situation awareness and workload in driving while using adap-
tive cruise control and a cell phone. International Journal of Industrial Ergonomics,
35(10), 939–953.
Mackworth, N. H. (1948). The breakdown of vigilance during prolonged visual search.
Quarterly Journal of Experimental Psychology, 1(1), 6–21.
Madigan, R., Louw, T., & Merat, N. (2018). The effect of varying levels of vehicle automation
on drivers’ lane changing behaviour. PloS ONE, 13(2), e0192190.
McDonald, A. D., Alambeigi, H., Engström, J., Markkula, G., Vogelpohl, T., Dunne, J., & Yuma,
N. (2019). Toward computational simulations of behavior during automated driving take-
overs: A review of the empirical and modeling literatures. Human Factors, 61(4), 642–688.
Merat, N., Jamson, A. H., Lai, F. C., Daly, M., & Carsten, O. M. (2014). Transition to
manual: Driver behaviour when resuming control from a highly automated vehicle.
Transportation Research Part F: Traffic Psychology and Behaviour, 27, 274–282.
Merat, N., Seppelt, B., Louw, T., Engström, J., Lee, J. D., Johansson, E., ... McGehee, D.
(2019). The “out-of-the-loop” concept in automated driving: Proposed definition, mea-
sures and implications. Cognition, Technology & Work, 21(1), 87–98.
Michon, J. A. (1985). A critical view of driver behavior models: What do we know, what
should we do? In L. Evans & R. C. Schwing (Eds.), Human Behavior and Traffic Safety
(pp. 485–524). Boston, MA: Springer.
Mole, C., Lappi, O., Giles, O., Markkula, G., Mars, F., & Wilkie, R. (2019). Getting back into
the loop: The perceptual-motor determinants of successful transitions out of automated
driving. Human Factors, 61(7), 1037–1065.
Moray, N. (2003). Monitoring, complacency, scepticism and eutactic behaviour. International
Journal of Industrial Ergonomics, 31(3), 175–178.
Noy, I. Y., Shinar, D., & Horrey, W. J. (2018). Automated driving: Safety blind spots. Safety
Science, 102, 68–78.
Page, Y., Hermitte, T., & Cuny, S. (2011). How safe is vehicle safety? The contribution of
vehicle technologies to the reduction in road casualties in France from 2000 to 2010.
Annals of Advances in Automotive Medicine/Annual Scientific Conference (Vol. 55,
p. 101). Chicago, IL: Association for the Advancement of Automotive Medicine.
Pankok, C., Jr. & Bass, E. J. (2017). Appendix A – Function allocation literature review. In C. Pankok, Jr., E. J. Bass, P. J. Smith, J. Bridewell, I. Dolgov, J. Walker, ... A. Spencer (Authors),
A7—UAS Human Factors Control Station Design Standards (Plus Function Allocation,
Training, and Visual Observer). Washington, DC: Federal Aviation Administration.
Retrieved from https://rosap.ntl.bts.gov/view/dot/36213/dot_36213_DS1.pdf
Parasuraman, R. & Manzey, D. H. (2010). Complacency and bias in human use of automation:
An attentional integration. Human Factors, 52(3), 381–410.
Parasuraman, R. & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse.
Human Factors, 39(2), 230–253.
Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels
of human interaction with automation. IEEE Transactions on Systems, Man, and
Cybernetics-Part A: Systems and Humans, 30(3), 286–297.
Peden, M., Scurfield, R., Sleet, D., Mohan, D., Hyder, A. A., Jarawan, E., & Mathers, C.
(2004). World Report on Road Traffic Injury Prevention. Geneva, Switzerland: World
Health Organization.
Price, H. E. (1985). The allocation of functions in systems. Human Factors, 27(1), 33–45.
Richter, M., Pape, H. C., Otte, D., & Krettek, C. (2005). Improvements in passive car safety
led to decreased injury severity–A comparison between the 1970s and 1990s. Injury,
36(4), 484–488.
SAE. (2016a). Taxonomy and Definitions for Terms Related to Driving Automation Systems
for On-Road Motor Vehicles (J3016 201609). Warrendale, PA: Society of Automotive
Engineers.
SAE. (2016b). Human Factors Definitions for Automated Driving and Related Research
Topics (J3114 201612). Warrendale, PA: Society of Automotive Engineers.
Saffarian, M., de Winter, J. C., & Happee, R. (2012). Automated driving: Human-factors
issues and design solutions. Proceedings of the Human Factors and Ergonomics
Society Annual Meeting, 56, 2296–2300.
Salmon, P. M., Walker, G. H., & Stanton, N. A. (2016). Pilot error versus sociotechnical sys-
tems failure: A distributed situation awareness analysis of Air France 447. Theoretical
Issues in Ergonomics Science, 17(1), 64–79.
Sarter, N. B. & Woods, D. D. (1995). How in the world did we ever get into that mode? Mode
error and awareness in supervisory control. Human Factors, 37(1), 5–19.
Scallen, S. F. & Hancock, P. A. (2001). Implementing adaptive function allocation. The
International Journal of Aviation Psychology, 11(2), 197–221.
Scanlon, J. M., Sherony, R., & Gabler, H. C. (2017). Injury mitigation estimates for an inter-
section driver assistance system in straight crossing path crashes in the United States.
Traffic Injury Prevention, 18 (sup1), S9–S17.
Sheridan, T. B. (2000). Function allocation: Algorithm, alchemy or apostasy? International
Journal of Human-Computer Studies, 52(2), 203–216.
Sheridan, T. B. & Verplank, W. L. (1978). Human and Computer Control of Undersea
Teleoperators. Cambridge, MA: Massachusetts Institute of Technology, Man-Machine
Systems Lab.
Silverstein, J. (2019). Driver Says Tesla Car Gets "Confused" and Crashes on Highway.
Retrieved from www.cbsnews.com/news/tesla-autopilot-car-gets-confused-and-
crashes-on-highway/
Stewart, J. (2019). Tesla’s Self-Driving Autopilot Involved in Another Deadly Crash. Retrieved
from www.wired.com/story/tesla-autopilot-self-driving-crash-california/
Tefft, B. C. (2017). Rates of Motor Vehicle Crashes, Injuries and Deaths in Relation to Driver
Age, United States, 2014–2015. Washington, DC: AAA Foundation for Traffic Safety.
Treat, J. R., Tumbas, N. S., McDonald, S. T., Shinar, D., Hume, R. D., Mayer, R. E., …
Castellan, N. J. (1979). Tri-Level Study of the Causes of Traffic Accidents. Final
Report, Vol. I. Causal Factor Tabulations and Assessments. Bloomington, IN: Indiana
University Institute for Research in Public Safety.
Trösterer, S., Meschtscherjakov, A., Mirnig, A. G., Lupp, A., Gärtner, M., McGee, F., ...
Engel, T. (2017). What we can learn from pilots for handovers and (de) skilling in
semi-autonomous driving: An interview study. Proceedings of the 9th International
Conference on Automotive User Interfaces and Interactive Vehicular Applications
(pp. 173–182). New York: ACM.
Victor, T. W., Tivesten, E., Gustavsson, P., Johansson, J., Sangberg, F., & Ljung Aust, M.
(2018). Automation expectation mismatch: Incorrect prediction despite eyes on threat
and hands on wheel. Human Factors, 60(8), 1095–1116.
Vlasic, B. & Boudette, N. (2016). Self-Driving Tesla Was Involved in Fatal Crash, U.S. says.
Retrieved from www.nytimes.com/2016/07/01/business/self-driving-tesla-fatal-crash-
investigation.html
Ward, N. J. (2000). Task automation and skill development in a simplified driving task.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 44(3),
302–305.
Wickens, C. D., Clegg, B. A., Vieane, A. Z., & Sebok, A. L. (2015). Complacency and auto-
mation bias in the use of imperfect automation. Human Factors, 57(5), 728–739.
Zeeb, K., Buchner, A., & Schrauf, M. (2015). What determines the take-over time? An inte-
grated model approach of driver take-over after automated driving. Accident Analysis &
Prevention, 78, 212–221.
Zhang, B., de Winter, J., Varotto, S., Happee, R., & Martens, M. (2019). Determinants of
take-over time from automated driving: A meta-analysis of 129 studies. Transportation
Research Part F: Traffic Psychology and Behaviour, 64, 285–307.
9 Driver Fitness in the Resumption of Control
Dina Kanaan and Birsen Donmez
University of Toronto
Tara Kelley-Baker
AAA Foundation for Traffic Safety
Stephen Popkin
Volpe National Transportation Systems Center
Andy Lehrer
Changeis, Inc.
Donald L. Fisher
Volpe National Transportation Systems Center
CONTENTS
Key Points............................................................................................................... 174
9.1 Introduction................................................................................................... 175
9.2 Distraction .................................................................................................... 176
9.2.1 Definitions and Effects ..................................................................... 176
9.2.1.1 What Is Driver Distraction? ............................................... 177
9.2.1.2 Potential Sources of Distraction and Their Effects on
Non-Automated Driving .................................................... 178
9.2.1.3 Effects of Automation on Distraction ................................ 179
9.2.1.4 Effects of Distraction on Driver-Automation
Coordination ...................................................................... 179
9.2.2 Detection............................................................................................ 182
9.2.3 Remediation ...................................................................................... 183
9.3 Sleepiness ..................................................................................................... 186
9.3.1 Definitions and Effects ..................................................................... 186
9.3.1.1 What Is Sleepiness? ........................................................... 186
9.3.1.2 Potential Sources of Sleepiness and Their Effects on
Non-Automated Driving..................................................... 187
9.3.1.3 Effects of Automation on Sleepiness ................................. 188
9.3.1.4 Effects of Sleepiness on Driver-Automation Coordination... 188
KEY POINTS
• For non-automated driving, research is conclusive that distracting activities
that claim visual/manual resources are particularly detrimental to safety as
these resources are also required for vehicle control.
9.1 INTRODUCTION
Although automation can relieve drivers from performing lateral and longitudinal
vehicle control tasks, as currently implemented, even the most advanced driving auto-
mation technologies—namely, SAE Level 2, 3, or 4 automation (SAE International,
2018)—require drivers to take over, or resume, control when the automation is
unable to safely drive the vehicle. Given the limitations of these technologies, drivers
are expected either to constantly monitor both the environment and the automation, identifying and stepping in when there are situations that the automation is unable to handle (SAE Level 2); to take over control when the automation identifies a situation that it cannot handle and notifies the driver (SAE Level 3); or to perform scheduled takeovers (SAE Level 4). Therefore, even with state-of-the-art systems, drivers need to be
fit to perform these activities when needed. That is, the drivers need to be in a state
where they have (SAE Level 2) or can achieve (SAE Levels 3 and 4) adequate levels
of situation awareness (SA, Endsley, 1995; this Handbook, Chapter 7) that would
translate to proper takeover performance, as indicated by response time and quality.
The fitness of the driver to take over vehicle control may be degraded due to vari-
ous states that the driver may experience, including distraction, sleepiness, impair-
ment from alcohol and other drugs (AOD), and motion sickness. Drivers may recover
from some of these states more rapidly (e.g., distraction) and some states may be
more prolonged throughout a drive (e.g., motion sickness and impairment from
AODs). All of these states can degrade the driver’s information processing abili-
ties, including perception, cognition, and action, but manifest in different ways, at
different rates, and for different durations. The sources that lead to these impaired
states are also varied; for example, carried-in devices such as mobile phones may
be a source of distraction (e.g., Caird, Simmons, Wiley, Johnston, & Horrey, 2018),
while extended work hours and night and irregular shifts may be a source of sleepi-
ness (e.g., McCartt, Rohrbaugh, Hammer, & Fuller, 2000). The remedies for these
impairments may depend on the source, the nature of any interactions among impair-
ment types, as well as the individual driver. One source of impairment may even be
used to prevent another type of impairment. For example, a momentary impairment
such as distraction can be mitigated with an alert, whereas a more prolonged impair-
ment such as from alcohol would require prevention; sleepiness due to under-arousal
may be prevented by keeping the driver engaged in arousal-enhancing activities that
themselves could be distracting. Although detection is important for remediating all
these states, prediction of a degraded state would be an ideal strategy to more proac-
tively remediate certain states, such as sleepiness and motion sickness.
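The logic of matching the remediation strategy to the impairment, as outlined in this paragraph, can be summarized as a simple lookup. The states and actions in the Python sketch below are coarse illustrations of the distinctions drawn here, not a validated countermeasure design.

# Illustrative mapping from detected driver state to a remediation
# strategy: momentary states are mitigated in-drive, prolonged states
# call for prevention or proactive adaptation. Coarse simplification.

REMEDIATION = {
    "distraction":     "alert",           # momentary; recoverable during the drive
    "sleepiness":      "engage_or_rest",  # counteract under-arousal; plan rest
    "alcohol_drugs":   "prevent_drive",   # prolonged; prevention rather than alerts
    "motion_sickness": "predict_adapt",   # predict and adapt vehicle motion
}

def remediate(state):
    return REMEDIATION.get(state, "monitor")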
This chapter provides an overview of four major driver impairments to takeover
performance, namely, distraction, sleepiness, impairment due to AOD, and motion
sickness. Each impairment type is discussed in detail with regard to its sources,
effects, detection, and remediation. The focus is on the former two as Chapter 11 of
this handbook provides a more detailed review of driver state detection and reme-
diation of impairment. Figure 9.1 provides an overview of the chapter content by
presenting an information-processing-centric driver model as well as the potential
interactions between the driver and automation.
9.2 DISTRACTION
9.2.1 Definitions and Effects
Driver distraction has been recognized as a major traffic safety concern particularly
over the past two decades, starting with the proliferation of mobile phone technolo-
gies. The National Highway Traffic Safety Administration (NHTSA) reported that
in 2015, 391,000 people in the United States were injured in motor vehicle crashes
that involved distracted drivers, and 3,477 traffic fatalities (10% of all traffic fatali-
ties) were attributed to distraction (National Highway Traffic Safety Administration,
2017). The Traffic Injury Research Foundation (2018) reported that 25% of motor
vehicle fatalities in Canada in 2015 involved at least one distracted driver.
There is a large body of literature for non-automated vehicles on the effects of
driver distraction on driving performance and crash risk, as well as ways to miti-
gate the negative effects of distraction (Donmez, Boyle, & Lee, 2006; 2007; 2008b;
Regan, Lee, & Young, 2008). In the past few years, efforts have started to shift
(SAE Levels 0–4). However, for different SAE Levels, “activities critical for safe
driving” mean different things. For example, for lower SAE Levels (0 and 1), these
activities relate to the manual control of the vehicle; for SAE Level 2, they include
monitoring the automation, identifying the need for takeover, and performing the
takeover; and for higher SAE Levels (3 and 4), these activities include noticing and
acting upon takeover requests. Because of this changing notion of “activities critical
for safe driving” based on the level of driving automation, distraction may affect
the driving task differently at different automation levels, and thus may need to be
mitigated using different strategies. Further, as constant monitoring of the environ-
ment is not required in SAE Level 3 and above, it can be argued that non-driving
activities, usually referred to as “secondary tasks” in the literature, may no longer
be considered secondary to driving (see also this Handbook, Chapters 6, 8). In this
chapter, we adopt the term “non-driving tasks” rather than “secondary tasks” to refer
to non-driving activities, with the understanding that non-driving activities are still
secondary to driving in SAE Level 2 and below.
not require a driver’s full attention, allowing for spare capacity to engage in non-
driving activities. Another reason is that drivers are known to demonstrate adap-
tive and compensatory behaviors while distracted (e.g., Metz, Schömig, & Krüger,
2011; Oviedo-Trespalacios, Haque, King, & Washington, 2017; Platten, Milicic,
Schwalm, & Krems, 2013; Reimer et al., 2013).
Although drivers can have spare capacity to engage in distracting activities and can
to some extent adapt their engagement and driving behaviors, they may not always be
successful in doing so, as they may fall into attention traps (Lee, 2014). Further, driv-
ers may not always have control over their non-driving task engagement behaviors:
distraction can also be involuntary, with drivers’ attention automatically captured by
an external stimulus or internal thoughts (i.e., mind-wandering) (Chen, Hoekstra-
Atwood, & Donmez, 2018). Individual differences between drivers (e.g., age, driving
experience) play a significant role in their susceptibility to both voluntary and invol-
untary distractions (e.g., Chen & Donmez, 2016) and how their driving is affected by
distraction (e.g., Haque & Washington, 2014; He & Donmez, 2018). These individual
differences can inform personalized distraction mitigation strategies.
In summary, extensive research over the past two decades has investigated the sources and effects of distraction on non-automated driving. In comparison, relatively little research has been conducted to date on distraction in the context of automated driving, though this research is expanding.
For example, He and Donmez (2019) found that experienced drivers’ non-driving task engagement was less affected by automation as they had shorter and fewer long (>2 seconds) glances and a lower rate of manual interaction with the non-driving task than novice drivers. Further, Jamson, Merat, Carsten, and
Lai (2013) suggest that when given the option of interacting with non-driving tasks
or taking over vehicle control at any time, drivers of a highly automated vehicle pre-
ferred to keep the automation in control and continue to engage in non-driving tasks
in light traffic, but demonstrated higher attention to the road and the driving task
in heavier traffic. In contrast, Gold et al. (2016) observed a negative effect of traffic
density on reaction time as well as measures of takeover quality such as accelera-
tion, time to collision, and risk of crashes, when drivers were engaged in a verbal
non-driving task. It appears that although drivers may adapt their attention alloca-
tion based on roadway demands in automated vehicles as they do in non-automated
vehicles, they still may not be able to fully compensate for the degradation in their
monitoring performance when engaged in a non-driving task.
Patterns and types of non-driving task engagement are other contextual factors
that can affect driver behavior and, specifically, takeover performance. Wandtner,
Schömig, and Schmidt (2018b) found that when drivers were aware of an impending
takeover and were not already engaged in a non-driving task, they were less likely to
voluntarily engage in one and were able to take over safely; however, when they were
already engaged in a non-driving task, drivers tended to continue their interaction
despite the need for takeover. Yoon and Ji (2019) observed that the type of non-driving
task (searching for a new radio channel using the entertainment console, watching
a video on a smartphone, or playing a game on a smartphone) had varying effects
on different takeover performance measures: the video task resulted in the longest takeover time and the longest first road glance after a takeover request. Vogelpohl et al.
(2018) and Zeeb et al. (2016) also found differences in takeover performance based
on the type of non-driving task, e.g., an increased time to first road glance while
performing a gaming task compared with a reading task or not performing any task
(Vogelpohl et al., 2018) and longer time to deactivate the automation after a takeover
request while watching a video compared to while responding to an email, reading
the news, or not performing any non-driving task (Zeeb et al., 2016).
In line with our knowledge about the effects of different distraction types on non-
automated driving, Wandtner, Schömig, and Schmidt (2018a) found that an auditory–
vocal version of a non-driving task was least detrimental to takeover performance
compared with visual–vocal or visual–manual versions, with the slowest response
times being associated with the visual–manual version of the task. Roche, Somieski,
and Brandenburg (2019) also found similar results regarding the effects of visual
versus auditory non-driving tasks in terms of takeover performance and attention to
the roadway. Dogan, Honnêt, Masfrand, and Guillaume (2019) found that the type of
the automation failure (obstacle avoidance vs. lane-keeping failure) had more of an
effect on takeover performance than the type of non-driving task (watching videos
vs. writing emails). These results demonstrate that there is a need to further study
the differences in driver behaviors in automated driving based on individual charac-
teristics and driving and non-driving task demands. A recent meta-analysis (Zhang,
de Winter, Varotto, Happee, & Martens, 2019) suggests that urgency of the takeover
situation, the modality of the non-driving task and its medium of presentation (e.g.,
handheld device), and the modality of the takeover request are factors that can influ-
ence mean takeover time. McDonald, Alambeigi, et al. (2019) further identified time
budget, non-driving tasks performed using a handheld device, repeated exposure to
takeovers, and silent failures (i.e., failures without a warning) as factors that affect
takeover time. In addition to these factors, the authors identified the driving envi-
ronment, takeover request modality, level of automation, trust, fatigue, and alcohol
impairment to influence takeover quality, and stated a research need for examining
the interacting effects of these factors as well as the link between delayed takeover
time and degraded takeover quality.
Another direction for further research is investigating the potential of non-driving tasks to mitigate under-arousal during automated driving, how these effects may differ across levels of driving automation, and how non-
driving task interactions can be designed to keep drivers engaged in the driving task
without diverting their attention to an extent that compromises safety. Miller et al.
(2015) found that drivers who were engaged in a reading task or a video watching
task were less likely to experience drowsiness in SAE Level 3 automated driving,
and Naujoks, Höfling, Purucker, and Zeeb (2018) found that engaging in a variety of
non-driving tasks during a long drive (1–2 hours) in SAE Level 2 automated driving
kept drowsiness at relatively low levels. However, Saxby, Matthews, and Neubauer
(2017) found that a verbal conversation did not fully counteract the effects of under-
arousal resulting from SAE Level 3 driving automation as drivers demonstrated
slower responses to a critical event after they were handed back vehicle control.
9.2.2 Detection
Reviews of different driver state monitoring techniques, including distraction, can
be found in Young, Regan, and Lee (2008), Victor, Engström, and Harbluk (2008),
Dong, Hu, Uchimura, and Murayama (2011), Aghaei et al. (2016), Kircher and
Ahlstrom (2018), He, Risteska, Donmez, and Chen (in press), and McDonald, Ferris,
and Wiener (2019) (see also, this Handbook, Chapter 11). These techniques can uti-
lize measures that are vehicle-based (e.g., speed), physiological (e.g., heart rate), and
facial and body expression based (e.g., eyes-off-road time, as implemented in the Cadillac Super Cruise automated driving feature of the 2019 Cadillac CT6 (General Motors, 2018), or hands-off-wheel time, as implemented in Tesla Autopilot (e.g., Tesla, 2019)),
or more ideally a combination of these categories given that different measures
tend to have different limitations (Aghaei et al., 2016). Further, to assess readiness
for resumption of control, they can also leverage information about road demands.
Based on these measures, algorithms need to be developed, setting criteria for clas-
sifying the driver to be distracted, or even more ideally, not fit to resume control in
the context of automated driving.
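To illustrate how such measures and road-demand information might feed a classification criterion, the sketch below shows a minimal rule-based fusion of a gaze measure, a vehicle-based measure, a physiological measure, and a road-demand estimate. All thresholds, field names, and the simple rule structure are illustrative assumptions, not a published or validated algorithm.

```python
from dataclasses import dataclass

@dataclass
class DriverSample:
    """One time step of fused sensor data (all fields are illustrative)."""
    eyes_off_road_s: float  # continuous eyes-off-road time, in seconds
    hands_on_wheel: bool    # e.g., from a capacitive steering-wheel sensor
    heart_rate_bpm: float   # physiological channel
    road_demand: float      # 0 (low) to 1 (high), e.g., from map/traffic data

def fit_to_resume_control(s: DriverSample) -> bool:
    """Coarse rule-based classifier: the driver is treated as fit to resume
    control only if no single channel signals impairment, with the
    eyes-off-road tolerance tightened as road demand increases.
    Thresholds are placeholders, not validated values."""
    max_eyes_off = 2.0 * (1.0 - 0.5 * s.road_demand)  # 2 s at low demand, 1 s at high
    if s.eyes_off_road_s > max_eyes_off:
        return False
    if not s.hands_on_wheel and s.road_demand > 0.5:
        return False
    if s.heart_rate_bpm < 45:  # crude under-arousal proxy
        return False
    return True
```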
For non-automated driving, various types of algorithms and measures have been
explored for detecting distraction. Earlier algorithms were based on analytical meth-
ods that mainly focused on eye-tracking measures (e.g., eyes off forward roadway
(Klauer, Dingus, Neale, Sudweeks, & Ramsey, 2006), AttenD (Kircher, Kircher, &
Ahlström, 2009), risky visual scanning patterns (Donmez et al., 2007, 2008b), and
multi-distraction detection (Victor, 2010)). Recent efforts have shifted towards machine learning approaches (e.g., Braunagel, Geisler, Rosenstiel, & Kasneci, 2017; Kanaan, Ayas, Donmez, Risteska, & Chakraborty, 2019).
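As a concrete example of these analytical eye-tracking approaches, the sketch below renders a simplified version of an AttenD-style attention buffer (Kircher, Kircher, & Ahlström, 2009): a 2-second buffer that depletes while the eyes are off the road and refills, after a short latency, once they return. The sampling interval, uniform depletion/refill rates, and latency handling are simplifications of the published algorithm.

```python
def attention_buffer(gaze_on_road, dt=0.1, buffer_max=2.0, refill_delay=0.1):
    """Simplified AttenD-style buffer (after Kircher et al., 2009).

    gaze_on_road: iterable of booleans sampled every dt seconds.
    Returns the buffer trace; a value of 0 flags inattention. The 2-second
    capacity follows the published algorithm, but the uniform rates and
    latency handling here are illustrative simplifications."""
    buffer, trace, time_back_on_road = buffer_max, [], 0.0
    for on_road in gaze_on_road:
        if on_road:
            time_back_on_road += dt
            if time_back_on_road >= refill_delay:  # refill begins after a short latency
                buffer = min(buffer_max, buffer + dt)
        else:
            time_back_on_road = 0.0
            buffer = max(0.0, buffer - dt)  # deplete while eyes are off the road
        trace.append(buffer)
    return trace

# Example: 1 s on road followed by 2.5 s off road; the buffer reaches 0
# (an inattention flag) after 2 s of continuous off-road glancing.
trace = attention_buffer([True] * 10 + [False] * 25)
```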
9.2.3 Remediation
For non-automated driving, a number of distraction mitigation frameworks have been
proposed over the past two decades. For example, Donmez, Boyle and Lee (2003;
2006) proposed a taxonomy consisting of three dimensions: type of task (driving-
or non-driving-related), source of initiation (driver or system), and level of automa-
tion of the mitigation strategy (low, medium, or high). This taxonomy focused on
strategies that are implemented pre-drive or during a drive. However, feedback can
also be presented post-drive and cumulatively over time to help change behavior.
Therefore, Donmez, Boyle, and Lee (2008a) later considered feedback timing as
another dimension that can inform the design of distraction mitigation strategies,
and classified feedback timescales into concurrent (real-time), delayed (by a few sec-
onds), retrospective (post-drive), and cumulative.
Driving-related strategies are remediation strategies that target the driving task
and support the driver in the control of the vehicle, while non-driving-related strate-
gies target driver interactions with non-driving tasks. These mitigation strategies
can be either initiated by the system or the driver and may differ according to the
level of automation of the strategy. Under driving-related, system initiated strategies,
intervening (high automation) is the process whereby the system takes control of
the driving task when the driver is too distracted to react to critical events. Warning
(moderate automation) refers to alerts that are provided by the system to the driver
to take a needed action, while informing (low automation) refers to the display of
relevant information to the driver. Driving aids ranging from notifications to recommendations to automated driving capabilities (e.g., emergency braking) fall under
this category of driving-related, system-initiated strategies. Strategies similar to
these may also be initiated by the driver. Interaction with non-driving tasks may be
modulated by the system through locking the driver out of or interrupting their non-
driving tasks, automatically prioritizing and filtering the most important, urgent, or
relevant ones, or giving feedback to the driver (i.e., advising) about their degree of
interaction with non-driving tasks. Driver-initiated, non-driving related strategies
include the driver pre-setting task features, place keeping during engagement with
the non-driving task (e.g., using a bookmark), and choosing methods of interaction
with the non-driving task that impose lower cognitive or visual–manual demand.
As driving becomes a more automated task, traditional views of distraction mit-
igation need to be re-evaluated and modified. As automation takes over more of
the driving task, driving may come to be regarded as having the same level of importance as non-driving tasks, and may even be regarded as secondary to them, which may
necessitate a shift away from lockout strategies and towards strategies that support
time-sharing between non-driving and driving tasks. He, Kanaan, and Donmez
(2019) proposed a revised taxonomy of strategies (Table 9.1) for automated vehicles
that focuses on supporting time-sharing between the driving and non-driving tasks.
The updated taxonomy classifies strategies based on their timing into pre-drive/drive
strategies (by re-interpreting entries in the original taxonomy), post-drive (or retro-
spective) strategies, and cumulative strategies. The taxonomy classifies retrospective
strategies into driving-related (risk evaluation strategies) and non-driving-related
(engagement assessment): these strategies provide post-drive feedback to drivers
about their takeover performance and non-driving task engagement in a completed
drive. Cumulative strategies are also classified into driving and non-driving related
strategies, respectively named education and informing social norms. While education strategies target drivers’ awareness of automation capabilities and limitations so that they can learn when to pay more attention to the driving environment, strategies that inform social norms attempt to influence drivers’ non-driving task engagement behaviors through social norms interventions.
TABLE 9.1
Taxonomy of Time-Sharing Strategies

                                 Driving-Related                               Non-Driving-Related
                                 Automation-Initiated   Driver-Initiated       Automation-Initiated       Driver-Initiated
Pre-drive/drive
  High intervention              Intervening            Delegating             Locking & interrupting     Controls pre-setting
  Moderate intervention          Warning                Warning tailoring      Prioritizing & filtering   Place keeping
  Low intervention               Informing              Perception augmenting  Advising                   Demand minimizing
Post-drive (retrospective)       Risk evaluation                               Engagement assessment
Cumulative                       Education                                     Informing social norms
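For readers who prefer a programmatic view, Table 9.1 can also be encoded as a lookup structure, as in the sketch below; the contents mirror the published taxonomy (He et al., 2019), while the nested-dictionary layout itself is merely one possible encoding.

```python
# Table 9.1 (He, Kanaan, & Donmez, 2019) as a nested lookup:
# (timing, intervention level) -> (task relation, initiator) -> strategy.
TIME_SHARING_STRATEGIES = {
    ("pre-drive/drive", "high"): {
        ("driving", "automation"): "intervening",
        ("driving", "driver"): "delegating",
        ("non-driving", "automation"): "locking & interrupting",
        ("non-driving", "driver"): "controls pre-setting",
    },
    ("pre-drive/drive", "moderate"): {
        ("driving", "automation"): "warning",
        ("driving", "driver"): "warning tailoring",
        ("non-driving", "automation"): "prioritizing & filtering",
        ("non-driving", "driver"): "place keeping",
    },
    ("pre-drive/drive", "low"): {
        ("driving", "automation"): "informing",
        ("driving", "driver"): "perception augmenting",
        ("non-driving", "automation"): "advising",
        ("non-driving", "driver"): "demand minimizing",
    },
    # Post-drive and cumulative strategies are not split by initiator.
    ("post-drive", None): {
        ("driving", None): "risk evaluation",
        ("non-driving", None): "engagement assessment",
    },
    ("cumulative", None): {
        ("driving", None): "education",
        ("non-driving", None): "informing social norms",
    },
}
```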
The pre-drive/drive strategies are re-interpreted in the revised taxonomy (He et al.,
2019) as mechanisms that support time-sharing between driving and non-driving tasks,
rather than adopting the view that distraction should ideally be prevented, a view appro-
priate for non-automated vehicles or vehicles with lower levels of automation (SAE
Levels 0–2). For example, takeover requests (under the category of warning) can warn
the driver about an impending event or a possible need for taking over vehicle control,
while allowing the driver to engage in non-driving tasks in non-takeover situations.
In addition to discrete and relatively infrequent warnings, more continual information
about automation can also be provided to the drivers, which may guide their non-
driving task engagement in a more informed manner. For example, the driver may be
informed continually about the reliability of the automation (Stockert, Richardson, &
Lienkamp, 2015; Wulf, Rimini-Doring, Arnon, & Gauterin, 2015), which can help
drivers allocate their attention between non-driving tasks and monitoring of the auto-
mation. At a higher intervention level, the level of control authority between the driver
and the automation (Benloucif, Sentouh, Floris, Simon, & Popieul, 2017) or the state
of the automation (Cabrall, Janssen, & de Winter, 2018) can be dynamically changed
in a system-initiated manner based on driver distraction state: a re-interpretation of the
intervening strategy from the original taxonomy (see also, this Handbook, Chapter 16).
For example, a driver who is detected to be monitoring the automation improperly may be prevented from engaging it. However, such methods may make drivers susceptible to mode confusion errors (Sarter & Woods, 1995).
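A minimal sketch of such a system-initiated engagement gate is shown below. The monitoring score, threshold, and state names are hypothetical, and, as the mode confusion caveat above implies, any real implementation would need to make these transitions highly salient to the driver.

```python
from enum import Enum, auto

class AutomationState(Enum):
    MANUAL = auto()
    AUTOMATED = auto()

def update_authority(state, monitoring_score, wants_automation,
                     lockout_threshold=0.3):
    """Hypothetical adaptive-authority rule. The monitoring score
    (0 = not monitoring, 1 = fully attentive) and threshold are
    illustrative. A driver who monitors the automation improperly is
    denied (re-)engagement; Sarter and Woods (1995) caution that such
    state-dependent behavior risks mode confusion unless clearly signaled."""
    if wants_automation and monitoring_score >= lockout_threshold:
        return AutomationState.AUTOMATED  # engagement permitted
    if wants_automation and monitoring_score < lockout_threshold:
        return AutomationState.MANUAL     # engagement request denied (lockout)
    return state
```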
The taxonomy presented in He et al. (2019) highlights areas of future research.
Most research on supporting time-sharing in automated vehicles has focused mainly on driving-related, automation-initiated strategies, particularly the design of warning and informing displays (e.g., Gold, Damböck, Lorenz, & Bengler,
2013; Seppelt & Lee, 2019; Stockert et al., 2015). Research is still ongoing on the
effects of different design parameters (such as timing and modality) related to warning and informing strategies, with results suggesting the potential usefulness of such displays.
9.3 SLEEPINESS
9.3.1 Definitions and Effects
Despite consensus among scientists on the basic human biology underlying sleepiness and its safety impact (Dinges, 1995; Wesensten, Belenky, & Balkin, 2005), sleepiness remains a national safety and health issue; crash statistics of this magnitude would prompt a national focus if similar numbers were attributed to an illness or disease. Sleepiness is estimated to be a
factor in 100,000 police-reported crashes each year, including over 71,000 injuries
and 1,550 fatalities (National Safety Council, 2019). These are likely underestimates
given the difficulty in determining driver sleepiness at the time of crash.
FIGURE 9.2 The two-process model of sleep–wake regulation (Borbély, 1982; Borbély, Daan, Wirz-Justice, & Deboer, 2016), depicting the interaction between homeostatic sleep drive (Process S) and sleep-independent circadian arousal drive (Process C) to produce the combined S+C alertness level (solid line).
Sleep and wakefulness are governed by two largely independent processes. The onset, duration, and end of a sleep period are guided by both sleep and wake pressures, as described in Borbély’s (1982) two-process model
of sleep regulation (see Figure 9.2).
The model continuously pits homeostatic sleep pressure (Process S; time awake)
against sleep-independent circadian arousal drive (Process C; time of day) to pro-
duce a combined alertness level (solid line). Ideal opportunities for restorative sleep
are available once sleep pressure is sufficiently greater than arousal drive. Sleep pres-
sure builds as one remains awake and can be compounded through intentional sleep
deprivation. The circadian curve, however, operates outside of conscious awareness
and imparts a cyclical drive. Both circadian and homeostatic pressures directly and
continuously mediate sleepiness. Åkerstedt and Folkard’s (1995) “alertness nomogram” further validates the S and C model components.
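A numerical sketch of the model follows. The saturating-exponential form of Process S and sinusoidal form of Process C are standard (Borbély, 1982; Åkerstedt & Folkard, 1995), but the specific time constants, asymptotes, and amplitude used here are illustrative placeholders rather than the published parameter fits.

```python
import math

def alertness(hours_awake, clock_hour,
              s_high=14.0, s_low=2.4, tau=18.0,
              c_amp=2.5, c_peak_hour=17.0):
    """Illustrative two-process alertness estimate (higher = more alert).

    Process S decays exponentially with time awake; Process C is a 24-hour
    sinusoid peaking in the late afternoon. Parameter values are
    placeholders, not the published fits."""
    s = s_low + (s_high - s_low) * math.exp(-hours_awake / tau)
    c = c_amp * math.cos(2 * math.pi * (clock_hour - c_peak_hour) / 24.0)
    return s + c  # the combined S+C curve of Figure 9.2

# Example: 16 h awake at 03:00 (long wake + circadian trough) yields a much
# lower value than 2 h awake at 17:00 (short wake + circadian peak).
low = alertness(16, 3)
high = alertness(2, 17)
```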
Sleepiness can occur over extended periods of time without the driver falling
asleep. However, on occasion it precedes brief sleep periods, called microsleeps,
where the driver is asleep for up to several seconds before spontaneously reawaken-
ing. Finally, the driver can fully fall asleep at the wheel. Sleepiness is also conceptu-
ally distinct from moderating factors such as boredom: a state of low arousal due to low or no task demand or interest. Boredom can occur with vigilance-only driving under automated control, where the driver may over-rely on technology and lose interest in sustaining vigilance (see, e.g., this Handbook, Chapter 6).
Sleepiness can degrade driving performance at all three levels of Michon’s (1985) model of driving: strategic, tactical, and operational. At the stra-
tegic level, sleepiness contributes to cognitive impairment risk including lapses in
judgment and decision-making (e.g., see Dinges, 1995; Moore-Ede, Sulzman, &
Fuller, 1982; Wesensten et al., 2005). At the tactical level, as with distraction, sleepi-
ness can impair SA (Cuddy, Sol, Hailes, & Ruby, 2015; Dinges et al., 1997; Lim &
Dinges, 2008; also see this Handbook, Chapter 7), impeding capacity to scan, pre-
dict, identify, make decisions, and execute safe responses (Fisher & Strayer, 2014).
Finally, at the operational level, sleepiness relates to performance declines in simple
motor functions (Owens et al., 2013). Not surprisingly, driving performance deterio-
rates during microsleeps. Driving simulator studies indicate that, during a micro-
sleep, drivers show a significant deterioration in their vehicle control performance,
correlated with the duration of the microsleep, particularly on curved roads (Boyle,
Tippin, Paul, & Rizzo, 2008).
Yet, prophylactic napping is not recommended even in SAE Level 4, due to the significant risk of loss of situation awareness and sleep inertia effects should the driver be suddenly awoken to resume control. Abrupt awakening has been shown
to cause subjective grogginess and diminished motor skills (Tassi & Muzet, 2000)
as well as significantly impaired cognitive performance, which often recedes only
over a period of tens of minutes (Wertz, Ronda, Czeisler, & Wright, 2006). This poses a significant risk in dynamic driving environments that may require the driver to understand the situation and resume control immediately or in the near term. Thus, driver state monitoring systems, and the various countermeasures these monitoring systems are intended to activate, need to be tuned to environmental and individual variables (this Handbook, Chapter 11 and below), as preemptive driver napping en route is not recommended.
9.3.2 Detection
Detection technologies for all three states of driver sleepiness have been available
for some time. What is needed are prediction technologies that, ideally, can alert the
driver ahead of time when the driver is sleepy, when the driver will have a micro-
sleep, and when the driver will fall asleep (Jacobé De Naurois, Bourdin, Stratulat,
Diaz, & Vercher, 2019). It should be acknowledged up front that it is not always
possible to determine whether a reduced state of arousal is due to sleepiness or
rather boredom.
Detection and prediction of any of the above three states of sleepiness require
both sensors and algorithms to classify the real-time data being provided by the
sensors (see this Handbook, Chapter 11). The complexity of the algorithms can vary
greatly, as can the complexity of the sensors, focusing on operator physiology, driv-
ing behavior, or both. As for the sensors, they can be off-the-shelf technologies which
record steering wheel and yaw angles (Li, Chen, Peng, & Wu, 2017), more complex
sensors that record measures of eye behavior such as eyelid closure, or still more
sophisticated sensors such as electrocardiogram (EKG) capable heart rate monitors
(Watson & Zhou, 2016). As for algorithms, given that a driver’s operational state
involves a complex set of psychological, physiological, and physical parameters, it is
not surprising that some of the better performing algorithms integrate the data across
all three parameters (Jacobé De Naurois et al., 2019).
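As one concrete example of a sensor-plus-algorithm pipeline, the widely used PERCLOS measure (the proportion of time the eyes are at least 80% closed; see, e.g., Junaedi & Akbar, 2018) can be computed from eyelid-aperture samples as sketched below. The 80% closure criterion is conventional, while the window length and alarm threshold here are illustrative assumptions.

```python
def perclos(eyelid_aperture, closed_fraction=0.2, window=None):
    """PERCLOS: proportion of samples in which the eye is at least 80%
    closed (aperture <= 20% of fully open). eyelid_aperture holds values
    in [0, 1], where 1 means fully open; if window is given, only the
    most recent `window` samples are used."""
    samples = eyelid_aperture[-window:] if window else eyelid_aperture
    closed = sum(1 for a in samples if a <= closed_fraction)
    return closed / len(samples)

# Illustrative alarm rule: flag drowsiness if PERCLOS over the last
# 600 samples (1 minute at 10 Hz) exceeds 0.15 -- threshold assumed.
def drowsy(eyelid_aperture):
    return perclos(eyelid_aperture, window=600) > 0.15
```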
Prediction is the ultimate objective. A more recent study used artificial neural net-
works both to detect and predict driver sleepiness (Jacobé De Naurois et al., 2019).
The best models (those whose rates of successful detection or prediction are the
highest) used information about eyelid closure, gaze and head movements, and driv-
ing time. The performance of the model relative to prediction was promising, since
the model could predict to within 5 minutes when the driver’s performance would
become impaired (moderately sleepy). Interestingly, knowledge about the individual
participant (e.g., age, quality of sleep, caffeine consumption, driving exposure) did
not significantly improve the predictions. Thus, the algorithm can be used across
individuals (at least the individuals in the study), an important issue when scaling
is considered.
9.3.3 Remediation
Sleepiness research reveals that a driver need not be fully asleep—nor even in
microsleep—for sleepiness to negatively impact safety risk and outcomes (Jacobé
De Naurois et al., 2019); impairment to judgment and decision-making happens
before the eyes close. The challenge then is how to apply the current understanding
of sleep to innovative technologies and practices while addressing knowledge gaps
to further reduce risk.
9.3.3.1.2 Training
Education and training could leverage realistic, immersive simulation exercises as
part of a licensing and renewal curriculum to integrate sleepiness-specific challenges
within and across SAE levels. Such training has already demonstrated success with
nurses (Hamid, Samuel, Borowsky, Horrey, & Fisher, 2016). As well, both group
and individual differences are important to recognize and support through periodic
training as new technologies enter the driver-automation ecosystem.
For example, a driver could be allowed to resume braking control but not steering, given that suddenly awoken drivers could overcompensate on steering and even flip the vehicle when recovering from drifting over a small berm or curb.
Automation can calculate the precise combination of safest driver/automation con-
trol and adjust that control along a continuum as the driver becomes less sleepy
and thus more cognitively and physically fit to resume full control. The automation
would capture real-time sleepiness-related risk and continuously solve for optimal
proportions of driver/automation takeover in SAE Levels 2 and 3, for example, as
long as proportional control imparts less current risk than full driver control.
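A sketch of this kind of proportional authority blending appears below; the linear blending rule and the sleepiness score it consumes are illustrative assumptions, not a validated control law.

```python
def blended_command(driver_cmd, automation_cmd, sleepiness):
    """Blend driver and automation control commands (e.g., braking or
    steering) in proportion to estimated sleepiness, where 0 = fully
    alert and 1 = asleep. A linear blend is the simplest possible rule
    and is used purely for illustration."""
    w = min(max(sleepiness, 0.0), 1.0)  # driver authority shrinks as sleepiness grows
    return (1.0 - w) * driver_cmd + w * automation_cmd

# Example: a moderately sleepy driver (0.6) retains 40% of steering authority.
steering = blended_command(driver_cmd=0.3, automation_cmd=0.0, sleepiness=0.6)
```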
9.3.3.2.2 Big Data
To further understand gaps and reduce risk, a private sector initiative could collect
and analyze voluntary driver sleepiness data via dash-mounted sensors, for example,
from millions of drivers over time in a collaborative, transparent effort to better
understand sleepy-driving factors, risks, and outcomes. A key challenge remains
to understand and prevent sleepiness, microsleep, and falling asleep while driving, as
each has demonstrated a significant safety risk. The ability to capture events in real
time before, during, and after each aspect of sleepiness will substantially advance
the understanding of risk and mitigation. DRIVE-TRUST (Driver Real-time Integrated Vehicle Ecosystem) could be a name for such an initiative. Driver status
could be continuously tracked and mapped to risk algorithms, countermeasures, and
safety data. Data analysis could determine, for example, if driver sleepiness results in
more accidents, injuries, and fatalities than actually falling asleep. An expert panel
could further apply the data to create a tiered “sleepiness” rating scale with relative
risk and countermeasure assigned to each state.
9.3.3.2.3 Alternative Routing
Driver sleepiness status could activate several prevention strategies. Automated sys-
tems could sense sleepiness and either not start, operate at reduced speed in a desig-
nated slow lane, pull over, temporarily reroute to centrally located driver “safe-nap”
facilities, or alert the driver of being “intoxicated” by sleepiness. The latter is an
apt comparison—Dawson and Reid (1997) showed that after 24 hours of sustained
wakefulness, cognitive psychomotor performance deficits were similar to those
observed at a blood alcohol concentration (BAC) of roughly 0.10%, exceeding even the least stringent legal intoxication limit of 0.08% in the United States.
9.3.3.2.4 Additional Possibilities
Considering interaction effects with other impairment risks, there remains much to
be learned, assessed, and applied. Interestingly, ongoing studies of hyper-aroused
states will likely further inform sleepiness risk as an aspect of the same challenge
to allocate cognitive and physical resources to the demands of automated driving.
As well, attention shifting declines in older drivers with various types of age-related
dementia (Parasuraman & Nestor, 1991), suggesting the need to integrate aging with
other interacting effects, such as distraction, which can further moderate the impact
of sleepiness and vice versa. Sleepiness is also a common possible side effect of
medication and drugs (National Sleep Foundation, 2019). Excessive sleepiness can
9.4 ALCOHOL AND OTHER DRUGS (AOD)
9.4.1 Definitions and Effects
Studies examining reaction time, divided attention, lane-keeping, and speed control, among other driving-related functions, have found that individual drugs and drugs in combination
produce different impairing effects and are likely to increase driving risk (Couper &
Logan, 2014; Ogden & Moskowitz, 2004).
The most recent crash-risk study conducted in the United States found crash-involved
drivers to be significantly more likely to test positive for THC and sedatives, to have
used more than one class of drug, and to have used any type of drug compared with con-
trol drivers (Compton & Berning, 2015). THC was associated with a 25% elevated risk
of crashing. However, once adjusted for age, gender, and race/ethnicity, the increases
associated with risk were no longer significant (Compton & Berning, 2015). It should
be noted that the results of this study should not imply drugs do not increase crash risk,
but rather highlight the difficulties in obtaining drug measures that can accurately and
consistently assess prevalence, potential impairment, and subsequent crash risk.
9.4.2 Detection
9.4.2.1 Alcohol Breath Testers and Sensors, Alcohol-Ignition Interlocks, and DADSS
The first alcohol breath-testing device was invented in 1954. This technology detects the presence of alcohol and provides a quantitative result indicating the level of impairment (i.e., BAC). Akin to the alcohol breath-testing device, which requires a mouthpiece for collecting the air specimen, is the passive alcohol sensor (PAS). This device
activates a small electrical pump that pulls the expired air from the front of the
driver’s face and provides an indication of the presence of alcohol and its detected
level (Cammisa, Ferguson, & Wells, 1996; Lund & Jones, 1987).
The alcohol ignition interlock device also utilizes breath testing. The device is connected to the vehicle’s engine ignition system and requires the driver to blow into the mouthpiece of the device (like a breathalyzer) to start and operate the vehicle. If
the sample contains alcohol greater than the programmed BAC, the device prevents
the engine from starting. Further, the device is also programmed to require “rolling
re-tests” (i.e., random breath samples throughout the trip) to ensure drinking does
not occur after the car is started. Many interlocks are equipped with cameras to
ensure that the person taking the test is the driver. In the United States, these devices
are typically used as a sanction for Driving While Intoxicated (DWI). However, they
are being used in other countries as a preventative mechanism in professional and
commercial transportation (Magnusson, Jakobsson, & Hultman, 2011).
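The interlock protocol just described can be summarized as a simple control loop, sketched below. The BAC limit, number and timing of re-tests, and escalation behavior are illustrative placeholders, not any specific manufacturer's implementation.

```python
import random

def ignition_interlock(read_bac, limit=0.02, retest_window_s=(300, 1800)):
    """Schematic alcohol-ignition-interlock logic.

    read_bac: callable returning the BAC of a fresh breath sample.
    The engine starts only if the initial sample is under the limit;
    random 'rolling re-tests' are then requested during the trip.
    All numeric values are illustrative."""
    if read_bac() >= limit:
        return ["engine locked: initial breath test failed"]
    log = ["engine started"]
    for _ in range(3):  # a few rolling re-tests over the trip
        wait = random.randint(*retest_window_s)
        log.append(f"rolling re-test requested after {wait} s")
        if read_bac() >= limit:
            log.append("re-test failed: event recorded, horn/lights activated")
            break
    return log
```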
Today, the Driver Alcohol Detection System for Safety (DADSS) program is
working to automatically detect when a driver is intoxicated and prevent a vehicle
from moving (Ferguson, Traube, Zaouk, & Strassburger, 2009). The DADSS pro-
gram is working with two different technologies. The first is a breath-based system
and the second is a touch-based system. Similar to the PAS, the breath-based system
measures alcohol from the exhaled air of the driver (Ljungblad, Hök, Allalou, &
Pettersson, 2017). The touch-based system measures alcohol levels under the skin’s
surface by shining an infrared light through the fingertip of the driver (using dis-
crete semiconductor laser diodes; Ver Steeg et al., 2017). Integrated into the vehicle
controls through the start button or steering wheel, both technologies prevent the engine from starting should the driver’s alcohol level be at or above a set threshold. It will take time for
this technology to become available commercially, but it is thought that in the near
future vehicles will be deployed with these systems. As with advanced driver assis-
tance systems (ADAS), DADSS could be integrated into future fleets of vehicles
(automated at various levels) to aid with the detection, and eventually, remediation
of alcohol-impaired driving. One concern for programs like DADSS is the disabling
of the system by the driver, especially if it is regarded as a preventative mechanism.
An alcohol detection system such as DADSS is likely most effective in a vehicle
with Level 2 and 3 driving automation, provided it is not disabled. Here the vehicle
simply would not start, thus preventing the driver from driving alcohol impaired. A
similar detection system in a vehicle with Level 4 automation, where the driver still
has the option to control the vehicle, would need to be equipped with remediation
strategies should alcohol be detected and the driver demonstrate impaired behavior.
Wearable transdermal alcohol sensors have been employed for monitoring patient treatment outcomes (Swift, Martin,
Swette, Laconti, & Kackley, 1992), and results correlate well with breath alcohol
measurements (Leffingwell et al., 2013). More recent studies have examined their
utility for alcohol-use self-monitoring (Kim et al., 2016; Simons, Wills, Emery, &
Marks, 2015).
Wearable biosensor devices also can test for drugs other than alcohol. Some use
electrodermal activity, skin temperature, and locomotion (Wang, Fang, Carreiro,
Wang, & Boyer, 2017). Several studies have reported sweat as a reliable matrix for
detecting recent drug use in controlled studies (de la Torre & Pichini, 2004; Huestis
et al., 2008). Roadside testing for drugs using sweat has also demonstrated successful detection of amphetamine-type stimulant drugs, although improvements for cannabis and benzodiazepines may be needed. Finally, a system currently under
development that could be employed in automated vehicles uses an “Electronic Nose
System” that obtains body odor from the skin surface (Voss et al., 2014). Although
still in the early stages, these and similar detection devices could be equipped on
future automated vehicles to aid in drug use detection. Even the DADSS touch-based
system may be open to adaptation for eventually detecting drugs.
or to the side). Other indicators include high-risk behaviors such as speeding, tailgat-
ing, swerving, and issues in maintaining lane control. Driving automation at lower
levels can attempt to alert and even attempt to prevent these behaviors (high-speed
alert, lane warning and assist, crash avoidance, automatic emergency braking, etc.),
but it will be the higher levels of driving automation that will likely have the greatest
impact on safety when the driver is impaired.
Recently, Volvo announced that their newest vehicles will be able to slow and even stop cars being operated by alcohol-impaired and distracted drivers. The vehicles are said to be able to identify long periods without steering input, weaving across lanes, slow reaction times, and closed eyes via in-car driver-monitoring cameras.
9.4.3 Remediation
Detecting a driver who is impaired by AOD poses distinct challenges as impairment
thresholds and behaviors are not necessarily standard. Mitigating risk and interven-
ing with an AOD-impaired driver present an entirely different set of challenges both
programmatically and socially. AOD use can create a prolonged state of impairment
similar to motion sickness (e.g., dizziness and nausea) and some forms of sleepiness
(Roehrs & Roth, 2001). However, unlike other impaired states, even when alerted
and warned, the driver is not likely to be in a suitable condition to take control of the
vehicle (and/or may not relinquish the control voluntarily as many drugs, including
alcohol, alter judgment).
As noted in earlier sections, the prevalence of AODs by itself does not indicate
impairment. Therefore, in addition to identifying minimum thresholds for detecting
alcohol and drugs (i.e., screening levels), behavioral signs must accompany these detection levels (for example, testing positive for alcohol at a BAC of 0.04% and exhibiting a lack of lane control). Once an agreed standard has been established, a number of remediation
strategies have to be determined and evaluated. These will likely vary by driving
automation level.
At the highest levels, driving automation may provide the greatest opportunity
to prevent crashes. This type of automation may be the “designated driver” of the
future. However, should an AOD-impaired driver refuse to relinquish control or dis-
able the vehicle automation and take control, a mitigation plan becomes necessary.
Depending on the situation, several remediation strategies might be available at vari-
ous points in the driving experience/trip.
At a recent National Academies of Sciences Transportation Research Board meeting (January 2019), a committee of researchers and experts in both impaired driving and automation arrived at an example of what a staged AOD-impaired mitigation plan might look like. Though we are a long way from saying anything definitive, such a plan could entail the following. At the onset of the drive (pre-drive), the system (at
virtually any automation level) can be programmed to warn the driver that AOD
has been detected. This not only provides information and advises the driver about
their potential impaired state but also alerts the system to activate other detection
methods (i.e., behavioral signs) for impairment. Depending upon programmed AOD
screening levels, the vehicle could also “not start.” Much like today’s alcohol ignition
interlock, immediate detection of the drug at some set level prevents the driver from
operating the vehicle. Another remediation strategy might include notifying an
emergency contact that the driver is assuming control and may be impaired. When
the driver begins the trip (post-start), the system can not only detect the presence
of AOD but also activate other monitoring systems to detect driver behavioral cues
of impairment. The system can again provide information or warnings to the driver about impairment, and even attempt to assist with vehicle control.
Should the driver continue the trip and ignore the warnings and refuse the assis-
tance (disabling or attempting to disable the system), a number of potential strategies
might activate. For example, the system might slow down the vehicle until it reaches
a safe point to stop. Additionally, the system could activate the vehicle horn and
lights should the driver persist. This strategy parallels the alcohol ignition interlock’s response when a driver repeatedly fails to provide a re-test sample during the trip. A
more extreme remediation strategy might be rerouting the vehicle either back “home” or, safer still, to a hospital, fire station, or similar facility. Should the system not be able to stop the
vehicle or reroute the driver, notifying law enforcement might be an option.
The final mitigation stage would occur in the event of a crisis, such as an impend-
ing crash. Here, the system would need to be prepared to minimize collision severity and appropriately brace occupants for impact.
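The staged plan described above can be summarized as an escalation sequence, sketched schematically below. All triggers and actions are hypothetical paraphrases of the stages in the text, not an implemented system.

```python
def aod_mitigation_actions(stage, aod_detected, behavior_impaired,
                           warnings_ignored, crash_imminent):
    """Schematic escalation through the staged AOD mitigation plan.
    Inputs are booleans from hypothetical detection subsystems; the
    returned actions paraphrase the stages described in the text."""
    if crash_imminent:
        return ["minimize collision severity", "brace occupants for impact"]
    if stage == "pre-drive" and aod_detected:
        return ["warn driver of possible impairment",
                "activate behavioral monitoring",
                "optionally prevent engine start or notify an emergency contact"]
    if stage == "post-start" and aod_detected and behavior_impaired:
        return ["warn driver", "assist with vehicle control"]
    if stage == "post-start" and warnings_ignored:
        return ["slow the vehicle and stop at a safe point",
                "activate horn and lights",
                "reroute to a safe destination or notify law enforcement"]
    return ["continue monitoring"]
```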
Similar staged mitigation plans can be envisioned. However, significant programming and testing, in both the lab and the field, will be needed. Further, even if a suitable
plan can be developed and programmed into the system, getting the public on-board
may require considerable marketing and legislation.
9.5 MOTION SICKNESS
9.5.1 Definitions and Effects
With increasing levels of driving automation, motion sickness is expected to become
a significant issue for drivers. Motion sickness can induce a variety of symptoms and
can impair resumption of control. Automated vehicles themselves may also exacerbate motion sickness in passengers, as they can prompt the types of motions
that can induce motion sickness (Diels, 2014). For more detailed reviews about the
etiology, symptoms, and treatment of motion sickness in general, see Reason and
Brand (1975), Reason (1978), Benson (2002), and Lackner (2014).
9.5.2 Detection
9.5.2.1 Self-Report Measures
Subjective questionnaires, most prominently the Pensacola Motion Sickness
Questionnaire (Kennedy & Graybiel, 1965) and the Motion Sickness Susceptibility
Questionnaire (Reason & Brand, 1975), can be used to assess whether drivers are susceptible to motion sickness.
9.5.2.2 Other Measures
Physiological measures, such as skin conductance or skin temperature, can be used
in real time, as cold sweating is one of the symptoms associated with motion sickness
(e.g., Benson, 2002). Video analysis of the driver’s face can also be used to detect
whether the driver has turned pale, another possible symptom of motion sickness
(e.g., Benson, 2002). Measures of brain activity, such as EEG, have also been used to
predict the incidence of motion sickness (Lin, Tsai, & Ko, 2013). However, because
of the wide variety of motion sickness symptoms that can manifest in different indi-
viduals, it may be difficult to train detection algorithms to identify motion sickness
in drivers of automated vehicles; algorithms may have to be trained for the particular
individual. Stoner et al. (2011) state that, in the case of simulator sickness, relatively few studies have found a correlation between simulator sickness and physiological measures; thus, further research is needed to assess the efficacy of physiological measures for detecting motion sickness in automated vehicles.
Body movement measures, obtained using video cameras or motion sensors for
example, can be used to detect whether drivers are in positions that could induce motion
sickness. However, video cameras may raise privacy concerns and the signal quality
may degrade with ambient light. Physiological measures have similar issues in addi-
tion to the potential intrusiveness of the sensors. Further research is needed to evaluate
physiological and body movement measures of motion sickness, and to develop meth-
ods of obtaining such measures that are practical for use in commercial vehicles.
Vehicle-based measures can be used to detect, for example, levels of acceleration
or types of road geometry that are associated with inducing motion sickness and can
support motion sickness detection when used in combination with other measures.
For example, Jones et al. (2018) used a combination of vehicle-based, self-report, and
head tilt measures in an ongoing effort to develop a platform to quantify motion sick-
ness. However, further research is needed to evaluate the efficacy of vehicle kine-
matics in supporting other measures in detecting motion sickness.
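As a sketch of how vehicle kinematics might support other measures, the toy score below fuses a postural, a vehicle-based, and a physiological channel. The weights and decision threshold are invented for illustration, since, as noted above, validated fusion models do not yet exist.

```python
def motion_sickness_risk(head_tilt_deg, lateral_accel_ms2, skin_conductance_z):
    """Toy fused risk score for motion sickness onset, combining head tilt
    (postural), lateral acceleration (vehicle-based), and standardized
    skin conductance (physiological). Weights and the 1.0 decision
    threshold are illustrative assumptions."""
    score = (0.02 * abs(head_tilt_deg)
             + 0.15 * abs(lateral_accel_ms2)
             + 0.30 * max(0.0, skin_conductance_z))
    return score, score > 1.0

# Example: a 30-degree head tilt on a winding road with elevated sweating.
score, at_risk = motion_sickness_risk(30.0, 3.0, 1.5)
```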
9.5.3 Remediation
Although research is limited, some remediation strategies have been proposed for
motion sickness in automated vehicles. Such strategies can include behavioral and
postural adjustments such as avoiding certain head tilt angles or seating positions,
use of medication, as well as interface and vehicle adjustments. For example, it has
been shown that certain head tilt angles are associated with an increased likelihood
of motion sickness (e.g. Wada, Fujisawa, & Doi, 2018), thus drivers can be guided to
adjust their head tilt accordingly. Drivers who are susceptible to motion sickness can
also be encouraged to refrain from non-driving activities and avoid rearward facing
positions for vehicles that provide flexible seating. Drugs that can alleviate motion
sickness symptoms can also be used; however, such drugs might cause side effects
like drowsiness and thus can still impair performance of the automated driving task
(Diels & Bos, 2016).
Diels and Bos (2016) outline different techniques and design considerations that can help remediate the symptoms of motion sickness and increase drivers’ comfort.
Such techniques include enhancing visual cues of the driving environment and the
driver’s ability to anticipate the vehicle’s trajectory, for example, by increasing the
window surface area or by using augmented reality displays. Moreover, non-driving
tasks could be designed in a way that does not require drivers to move their heads to
positions that can induce or exacerbate motion sickness symptoms, for example, by
repositioning the non-driving task interface or presenting the task using head-up or
augmented reality displays that do not fully compromise the driver’s ability to view
the driving environment and vehicle’s trajectory. An alternative approach is to rede-
sign vehicle suspension systems to reduce jerk (e.g., Ekchian et al., 2016; Giovanardi
et al., 2018).
9.6 CONCLUSION
This chapter presented an overview of the four different impairments to automated
driving, namely, distraction, sleepiness, impairment from AODs, and motion sick-
ness. We explored the sources of these impairments (Figure 9.1) as well as their
effects on automated driving performance. We have also briefly discussed possi-
ble detection and remediation techniques; a more detailed consideration of driver
state monitoring approaches is presented in Chapter 11. In general, with increas-
ing advances in automated and intelligent vehicle technologies, further research is
needed in defining and quantifying the effects of different types of impairment,
including the interaction between them, as well as on developing and evaluating
techniques for detection and remediation.
A noteworthy consequence of increasingly automated driving environments is, ironically, periods of increased risk. This seems counterintuitive at first blush, given automation’s promise to protect safety by reducing risk. And automation does appear to do just that on the whole: overall, risk is reduced. But there exists a meaningful redistribution of risk that disproportionately increases safety risk during critical transition junctures. Especially relevant are relatively brief but safety-critical situations, such as a driver’s decision to resume control while supposedly fit to do so.
ACKNOWLEDGMENTS
The authors would like to thank the members of the Human Factors and Applied
Statistics (HFASt) laboratory at the University of Toronto, particularly Dengbo He,
Chelsea DeGuzman, and Mehdi Hoseinzadeh Nooshabadi, for their valuable feed-
back on this chapter.
REFERENCES
Aghaei, A. S., Donmez, B., Liu, C. C., He, D., Liu, G., Plataniotis, K. N., … Sojoudi, Z.
(2016). Smart driver monitoring: When signal processing meets human factors. IEEE
Signal Processing Magazine, 33(6), 35–48.
Åkerstedt, T. & Folkard, S. (1995). Validation of the S and C components of the three-process
model of alertness regulation. Sleep, 18(1), 1–6.
Altmann, E. M. & Trafton, J. G. (2002). Memory for goals: An activation-based model.
Cognitive Science, 26(1), 39–83.
Barr, A. & Mackinnon, D. P. (1998). Designated driving among college students. Journal of
Studies on Alcohol, 59(5), 549–554.
Beck, O., Sandqvist, S., Dubbelboer, I., & Franck, J. (2011). Detection of
Δ9-tetrahydrocannabinol in exhaled breath from cannabis users. Journal of Analytical
Toxicology, 35(8), 541–544.
Benloucif, M. A., Sentouh, C., Floris, J., Simon, P., & Popieul, J.-C. (2017). Online adaptation
of the level of haptic authority in a lane keeping system considering the driver’s state.
Transportation Research Part F: Traffic Psychology and Behaviour, 61, 107–119.
Benson, A. J. (2002). Motion sickness. In K. B. Pandolf & R. R. Burr (Eds.), Medical Aspects
of Harsh Environments (pp. 1048–1083). Washington, DC: U.S. Department of the
Army, Office of the Surgeon General.
Berning, A., Compton, R., & Wochinger, K. (2015). Results of the 2013–2014 National
Roadside Survey of Alcohol and Drug Use by Drivers (DOT HS 812 118). Washington,
DC: National Highway Traffic Safety Administration.
Blomberg, R. D., Peck, R. C., Moskowitz, H., Burns, M., & Fiorentino, D. (2005). Crash Risk
of Alcohol Involved Driving: A Case-Control Study. Stamford, CT: Dunlap.
Borbély, A. A. (1982). A two-process model of sleep regulation. Human Neurobiology, 1(3),
195–204.
Borbély, A. A., Daan, S., Wirz-Justice, A., & Deboer, T. (2016). The two-process model of
sleep regulation: A reappraisal. Journal of Sleep Research, 25(2), 131–143.
Borkenstein, R. F., Crowther, R. F., & Shumate, R. P. (1974). The role of the drinking driver in
traffic accidents (the Grand Rapids study). Blutalkohol, 11(Suppl), 1–131.
Borkenstein, R. F., Crowther, R. F., Shumate, R. P., Ziel, W. B., & Zylman, R. (1964). The
Role of the Drinking Driver in Traffic Accidents. Bloomington, IN: Indiana University.
Boyle, L. N., Tippin, J., Paul, A., & Rizzo, M. (2008). Driver performance in the moments
surrounding a microsleep. Transportation Research Part F: Traffic Psychology and
Behaviour, 11(2), 126–136.
Braunagel, C., Geisler, D., Rosenstiel, W., & Kasneci, E. (2017). Online recognition of
driver-activity based on visual scanpath classification. IEEE Intelligent Transportation
Systems Magazine, 9(2), 23–36.
Bugarski, V., Bačkalić, T., & Kuzmanov, U. (2013). Fuzzy decision support system for ship
lock control. Expert Systems with Applications, 40(10), 3953–3960.
Cabrall, C. D. D., Janssen, N. M., & de Winter, J. C. F. (2018). Adaptive automation:
Automatically (dis)engaging automation during visually distracted driving. PeerJ
Computer Science, 4, e166.
Caird, J. K., Johnston, K. A., Willness, C. R., Asbridge, M., & Steel, P. (2014). A meta-analysis
of the effects of texting on driving. Accident Analysis & Prevention, 71, 311–318.
Caird, J. K., Simmons, S. M., Wiley, K., Johnston, K. A., & Horrey, W. J. (2018). Does talking
on a cell phone, with a passenger, or dialing affect driving performance? An updated
systematic review and meta-analysis of experimental studies. Human Factors, 60(1),
101–133.
Cammisa, M., Ferguson, S., & Wells, J. (1996). Laboratory Evaluation of PAS III Sensor with
New Pump Design. Arlington, VA: Insurance Institute for Highway Safety.
Carsten, O., Lai, F. C. H., Barnard, Y., Jamson, A. H., & Merat, N. (2012). Control task sub-
stitution in semiautomated driving: Does it matter what aspects are automated? Human
Factors, 54(5), 747–761.
Chen, H. Y. W. & Donmez, B. (2016). What drives technology-based distractions? A struc-
tural equation model on social-psychological factors of technology-based driver dis-
traction engagement. Accident Analysis and Prevention, 91, 166–174.
Chen, H. Y. W., Hoekstra-Atwood, L., & Donmez, B. (2018). Voluntary- and involuntary-
distraction engagement: An exploratory study of individual differences. Human
Factors, 60(4), 575–588.
Compton, R. P. (2017). Marijuana-Impaired Driving - A Report to Congress (DOT HS 812
440). Washington, DC: National Highway Traffic Safety Administration.
Compton, R. P. & Berning, A. (2015). Drug and Alcohol Crash Risk (DOT HS 812 117).
Washington, DC: National Highway Traffic Safety Administration.
Costa, G., Gaffuri, E., Ghirlanda, G., Minors, D. S., & Waterhouse, J. M. (1995). Psychophysical
conditions and hormonal secretion in nurses on a rapidly rotating shift schedule and
exposed to bright light during night work. Work & Stress, 9(2–3), 148–157.
Couper, F. J. & Logan, B. K. (2014). Drugs and Human Performance Fact Sheets (DOT HS
809 725). Washington, DC: National Highway Traffic Safety Administration.
Cuddy, J. S., Sol, J. A., Hailes, W. S., & Ruby, B. C. (2015). Work patterns dictate energy
demands and thermal strain during wildland firefighting. Wilderness and Environmental
Medicine, 26, 221–226.
Cunningham, M. L. & Regan, M. A. (2018). Driver distraction and inattention in the realm of
automated driving. IET Intelligent Transport Systems, 12(6), 407–413.
Dawson, D. & Reid, K. (1997). Fatigue, alcohol and performance impairment. Nature,
388(6639), 235.
de la Torre, R. & Pichini, S. (2004). Usefulness of sweat testing for the detection of cannabis
smoke. Clinical Chemistry, 50(11), 1961–1962.
de Winter, J. C. F. F., Happee, R., Martens, M. H., & Stanton, N. A. (2014). Effects of adaptive
cruise control and highly automated driving on workload and situation awareness: A
review of the empirical evidence. Transportation Research Part F: Traffic Psychology
and Behaviour, 27(PB), 196–217.
Diels, C. (2014). Will autonomous vehicles make us sick? In S. Sharples & S. Shorrock (Eds.),
Contemporary Ergonomics and Human Factors (pp. 301–307). Boca Raton, FL: CRC
Press.
Diels, C. & Bos, J. E. (2016). Self-driving carsickness. Applied Ergonomics, 53, 374–382.
Diels, C., Bos, J. E., Hottelart, K., & Reilhac, P. (2016). Motion sickness in automated vehicles:
The elephant in the room. In G. Meyer & S. Beiker (Eds.), Road Vehicle Automation 3.
Lecture Notes in Mobility (pp. 121–129). Berlin: Springer.
Dijk, D.-J. & Czeisler, C. A. (1994). Paradoxical timing of the circadian rhythm of sleep pro-
pensity serves to consolidate sleep and wakefulness in humans. Neuroscience Letters,
166(1), 63–68.
Dijk, D.-J. & Larkin, W. (2004). Fatigue and performance models: General background and
commentary on the circadian alertness simulator for fatigue risk assessment in trans-
portation. Aviation, Space, and Environmental Medicine, 75(3, Suppl.), A119–A121.
Dinges, D. F. (1995). An overview of sleepiness and accidents. Journal of Sleep Research,
4(S2), 4–14.
Dinges, D. F., Pack, F., Williams, K., Gillen, K. A., Powell, J. W., Ott, G. E., … Pack, A. I.
(1997). Cumulative sleepiness, mood disturbance, and psychomotor vigilance perfor-
mance decrements during a week of sleep restricted to 4–5 hours per night. Sleep, 20(4),
267–277.
Dingus, T. A., Guo, F., Lee, S., Antin, J. F., Perez, M., Buchanan-King, M., & Hankey, J.
(2016). Driver crash risk factors and prevalence evaluation using naturalistic driving
data. Proceedings of the National Academy of Sciences, 113(10), 2636–2641.
Ditter, S. M., Elder, R. W., Shults, R. A., Sleet, D. A., Compton, R., & Nichols, J. L. (2005).
Effectiveness of designated driver programs for reducing alcohol-impaired driving: A
systematic review. American Journal of Preventive Medicine, 28(5S), 280–287.
Dogan, E., Honnêt, V., Masfrand, S., & Guillaume, A. (2019). Effects of non-driving-related
tasks on takeover performance in different takeover situations in conditionally auto-
mated driving. Transportation Research Part F: Traffic Psychology and Behaviour,
62, 494–504.
Dong, Y., Hu, Z., Uchimura, K., & Murayama, N. (2011). Driver inattention monitoring sys-
tem for intelligent vehicles: A review. IEEE Transactions on Intelligent Transportation
Systems, 12(2), 596–614.
Donmez, B., Boyle, L., & Lee, J. D. (2008a). Designing feedback to mitigate distraction. In
M. A. Regan, J. D. Lee, & K. L. Young (Eds.), Driver Distraction: Theory, Effects, and
Mitigation (1st ed., pp. 519–531). Boca Raton, FL: CRC Press.
Donmez, B., Boyle, L., & Lee, J. D. (2003). Taxonomy of mitigation strategies for driver dis-
traction. Proceedings of the Human Factors and Ergonomics Society Annual Meeting,
1865–1869.
Donmez, B., Boyle, L. N., & Lee, J. D. (2006). The impact of distraction mitigation strategies
on driving performance. Human Factors, 48(4), 785–804.
Donmez, B., Boyle, L. N., & Lee, J. D. (2007). Safety implications of providing real-time
feedback to distracted drivers. Accident Analysis and Prevention, 39(3), 581–590.
Donmez, B., Boyle, L. N., & Lee, J. D. (2008b). Mitigating driver distraction with retrospec-
tive and concurrent feedback. Accident Analysis & Prevention, 40(2), 776–786.
Ekchian, J., Graves, W., Anderson, Z., Giovanardi, M., Godwin, O., Kaplan, J., … DiZio,
P. (2016). A High-Bandwidth Active Suspension for Motion Sickness Mitigation in
Autonomous Vehicles. SAE Technical Paper Series (Vol. 1). Warrendale, PA: Society
for Automotive Engineers.
Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human
Factors, 37(1), 32–64.
Endsley, M. R. & Kiris, E. O. (1995). The out-of-the-loop performance problem and level of
control in automation. Human Factors, 37(2), 381–394.
Fawcett, J. M., Risko, E. F., & Kingstone, A. (2015). The Handbook of Attention. Cambridge,
MA: MIT Press.
Ferguson, S. A., Traube, E., Zaouk, A., & Strassburger, R. (2009). Driver Alcohol Detection
System For Safety (DADSS) – A non-regulatory approach in the development
and deployment of vehicle safety technology to reduce alcohol-impaired driving.
Proceedings of the 21st International Technical Conference on the Enhanced Safety of
Vehicles. Stuttgart, Germany: ESV.
Fisher, D. L. & Strayer, D. L. (2014). Modeling situation awareness and crash risk. Annals of
Advances in Automotive Medicine, 58, 33–39.
Gaspar, J. G., Schwarz, C., Kashef, O., Schmitt, R., & Shull, E. (2018). Using Driver State
Detection in Automated Vehicles. Iowa City, IA: SAFER-SIM University Transportation
Center.
General Motors. (2018). 2019 Cadillac CT6 Owner’s Manual. Detroit, MI: General Motors LLC.
Giovanardi, M., Graves, W., Ekchian, J., DiZio, P., Ventura, J., Lackner, J. R., … Anderson, Z.
(2018). An active suspension system for mitigating motion sickness and enabling read-
ing in a car. Aerospace Medicine and Human Performance, 89(9), 822–829.
Gold, C., Damböck, D., Lorenz, L., & Bengler, K. (2013). “Take over!” How long does it take
to get the driver back into the loop? Proceedings of the Human Factors and Ergonomics
Society Annual Meeting (pp. 1938–1942). SAGE Publications, Los Angeles, CA.
Gold, C., Körber, M., Lechner, D., & Bengler, K. (2016). Taking over control from highly
automated vehicles in complex traffic situations: The role of traffic density. Human
Factors, 58(4), 642–652.
Hamid, M., Samuel, S., Borowsky, A., Horrey, W. J., & Fisher, D. L. (2016). Evaluation of
training interventions to mitigate effects of fatigue and sleepiness on driving perfor-
mance. Transportation Research Record, 2584, 30–38.
Haque, M. M. & Washington, S. (2014). A parametric duration model of the reaction times of
drivers distracted by mobile phone conversations. Accident Analysis and Prevention,
62, 42–53.
He, D. & Donmez, B. (2018). The effect of distraction on anticipatory driving. Proceedings of
the Human Factors and Ergonomics Society Annual Meeting, 1960–1964.
He, D. & Donmez, B. (2019). Influence of driving experience on distraction engagement in
automated vehicles. Transportation Research Record, 2673(9), 142–151.
He, D., Kanaan, D., & Donmez, B. (2019). A taxonomy of strategies for supporting time-
sharing with non-driving tasks in automated driving. Proceedings of the Human
Factors and Ergonomics Society Annual Meeting (pp. 2088–2092). Santa Monica,
CA: HFES.
He, D., Risteska, M., Donmez, B., & Chen, K. (in press). Driver cognitive load classification
based on physiological data. In Introduction to Digital Signal Processing and Machine
Learning for Interactive Systems Developers (pp. 1–19). New York: ACM Press.
Heaton, K., Browning, S., & Anderson, D. (2008). Identifying variables that predict fall-
ing asleep at the wheel among long-haul truck drivers. American Association of
Occupational Health Nurses, 56(9), 379–385.
Horrey, W. J. & Wickens, C. D. (2006). Examining the impact of cell phone conversations on
driving using meta-analytic techniques. Human Factors, 48(1), 196–205.
Huestis, M. A., Scheidweiler, K. B., Saito, T., Fortner, N., Abraham, T., Gustafson, R. A., &
Smith, M. L. (2008). Excretion of Δ9-tetrahydrocannabinol in sweat. Forensic Science
International, 174(2–3), 173–177.
Jacobé De Naurois, C., Bourdin, C., Stratulat, A., Diaz, E., & Vercher, J.-L. (2019). Detection
and prediction of driver drowsiness using artificial neural network models. Accident
Analysis and Prevention, 126, 95–104.
Jamson, A. H., Merat, N., Carsten, O. M. J., & Lai, F. C. H. (2013). Behavioural changes
in drivers experiencing highly-automated vehicle control in varying traffic conditions.
Transportation Research Part C: Emerging Technologies, 30, 116–125.
Jones, M. L. H., Sienko, K., Ebert-Hamilton, S., Kinnaird, C., Miller, C., Lin, B., … Sayer,
J. (2018). Development of a Vehicle-Based Experimental Platform for Quantifying
Passenger Motion Sickness during Test Track Operations. SAE Technical Paper Series
(Vol. 1). Warrendale, PA: Society for Automotive Engineers.
Junaedi, S. & Akbar, H. (2018). Driver drowsiness detection based on face feature and
PERCLOS. Journal of Physics: Conference Series, 1090.
Kanaan, D., Ayas, S., Donmez, B., Risteska, M., & Chakraborty, J. (2019). Using naturalis-
tic vehicle-based data to predict distraction and environmental demand. International
Journal of Mobile Human Computer Interaction, 11(3), 59–70.
Kelley-Baker, T., Berning, A., Ramirez, A., Lacey, J. H., Carr, K., Waehrer, G., … Compton,
R. P. (2017). 2013–2014 National Roadside Study of Alcohol and Drug Use by Drivers:
Drug Results (DOT HS 812 411). Washington, DC: National Highway Traffic Safety
Administration.
Kelley-Baker, T., Waehrer, G., & Pollini, R. A. (2017). Prevalence of self-reported prescrip-
tion drug use in a national sample of U.S. drivers. Journal of Studies on Alcohol and
Drugs, 78(1), 30–38.
Kennedy, R. S. & Graybiel, A. (1965). The Dial Test: A Standardized Procedure for
the Experimental Production of Canal Sickness Symptomatology in a Rotating
Environment (NSAM-930). Pensacola, FL: Naval School of Aviation Medicine.
Kim, J., Jeerapan, I., Imani, S., Cho, T. N., Bandodkar, A., Cinti, S., … Wang, J. (2016).
Noninvasive alcohol monitoring using a wearable tattoo-based iontophoretic-biosensing
system. Sensors, 1(8), 1011–1019.
Kircher, K. & Ahlstrom, C. (2018). Evaluation of methods for the assessment of attention
while driving. Accident Analysis & Prevention, 114, 40–47.
Kircher, K., Kircher, A., & Ahlström, C. (2009). Results of a Field Study on a Driver
Distraction Warning System. Linköping, Sweden: Swedish National Road and
Transport Research Institute.
Klauer, S. G., Dingus, T. A., Neale, V. L., Sudweeks, J. D., & Ramsey, D. J. (2006). The
Impact of Driver Inattention on Near-Crash/Crash Risk: An Analysis Using the 100-
Car Naturalistic Driving Study Data (DOT HS 810 594). Washington, DC: National
Highway Traffic Safety Administration.
Köhn, T., Gottlieb, M., Schermann, M., & Krcmar, H. (2019). Improving take-over quality in
automated driving by interrupting non-driving tasks. 24th International Conference on
Intelligent User Interfaces (IUI’19) (pp. 510–517). New York: ACM.
Lacey, J. H., Kelley-Baker, T., Berning, A., Romano, E., Ramirez, A., Yao, J., … Compton,
R. (2016). Drug and Alcohol Crash Risk: A Case-Control Study (DOT HS 812 630).
Washington, DC: National Highway Traffic Safety Administration.
Lackner, J. R. (2014). Motion sickness: More than nausea and vomiting. Experimental Brain
Research, 232, 2493–2510.
Lee, J. D. (2014). Dynamics of driver distraction: The process of engaging and disengaging.
Annals of Advances in Automotive Medicine, 58, 24–32.
Lee, J. D., Young, K. L., & Regan, M. A. (2008). Defining driver distraction. In M. A. Regan,
J. D. Lee, & K. L. Young (Eds.), Driver Distraction: Theory, Effects and Mitigation
(pp. 31–40). Boca Raton, FL: CRC Press.
Leffingwell, T. R., Cooney, N. J., Murphy, J. G., Luczak, S., Rosen, G., Dougherty, D. M., &
Barnett, N. P. (2013). Continuous objective monitoring of alcohol use: Twenty-first cen-
tury measurement using transdermal sensors. Alcoholism: Clinical and Experimental
Research, 37(1), 16–22.
Lehrer, A. M. (2015). A systems-based framework to measure, predict, and manage fatigue.
Reviews of Human Factors and Ergonomics, 10(1), 194–252.
Lehrer, A. M. & Popkin, S. M. (2014). Current and Next Generation Fatigue Models (DOT-VNTSC-OST-14-01). Cambridge, MA: Volpe Center.
Levitin, D. J. (2014). The Organized Mind: Thinking Straight in the Age of Information
Overload. New York: Plume/Penguin Books.
Li, G., Brady, J. E., & Chen, Q. (2013). Drug use and fatal motor vehicle crashes: A case-
control study. Accident Analysis and Prevention, 60, 205–210.
Li, Z., Bao, S., Kolmanovsky, I. V., & Yin, X. (2018). Visual manual distraction detection
using driving performance indicators with naturalistic driving data. IEEE Transactions
on Intelligent Transportation Systems, 19(8), 2528–2535.
Li, Z., Chen, L., Peng, J., & Wu, Y. (2017). Automatic detection of driver fatigue using driving
operation information for transportation safety. Sensors, 17(6), 1212.
Liang, Y. & Lee, J. D. (2014). A hybrid Bayesian Network approach to detect driver cognitive
distraction. Transportation Research Part C, 38, 146–155.
Lim, J. & Dinges, D. F. (2008). Sleep deprivation and vigilant attention. Annals of the New
York Academy of Sciences, 1129(1), 305–322.
Lin, C. T., Tsai, S. F., & Ko, L. W. (2013). EEG-based learning system for online motion sick-
ness level estimation in a dynamic vehicle environment. IEEE Transactions on Neural
Networks and Learning Systems, 24(10), 1689–1700.
Ljungblad, J., Hök, B., Allalou, A., & Pettersson, H. (2017). Passive in-vehicle driver breath
alcohol detection using advanced sensor signal acquisition and fusion. Traffic Injury
Prevention, 18(Suppl. 1), S31–S36.
Louw, T., Markkula, G., Boer, E., Madigan, R., Carsten, O., & Merat, N. (2017). Coming back
into the loop: Drivers’ perceptual-motor performance in critical events after automated
driving. Accident Analysis & Prevention, 108, 9–18.
Louw, T. & Merat, N. (2017). Are you in the loop? Using gaze dispersion to understand driver
visual attention during vehicle automation. Transportation Research Part C: Emerging
Technologies, 76, 35–50.
Lund, A. K. & Jones, I. S. (1987). Detection of impaired drivers with a passive alcohol
sensor. Proceedings of the 10th Conference on Alcohol, Drugs, and Traffic Safety
(pp. 379–382). Amsterdam: ICADTS.
Magnusson, P., Jakobsson, L., & Hultman, S. (2011). Alcohol interlock systems in Sweden:
10 years of systematic work. American Journal of Preventative Medicine, 40(3),
378–379.
McCartt, A. T., Rohrbaugh, J. W., Hammer, M. C., & Fuller, S. Z. (2000). Factors associated
with falling asleep at the wheel among long-distance truck drivers. Accident Analysis
and Prevention, 32, 493–504.
McDonald, A. D., Alambeigi, H., Engström, J., Markkula, G., Vogelpohl, T., Dunne, J., &
Yuma, N. (2019). Towards computational simulations of behavior during automated
driving takeovers: A review of the empirical and modeling literatures. Human Factors,
61(4), 642–688.
McDonald, A. D., Ferris, T. K., & Wiener, T. A. (2019). Classification of driver distraction: A
comprehensive analysis of feature generation, machine learning, and input measures.
Human Factors.
McFarlane, D. & Latorella, K. A. (2002). The scope and importance of human interruption in
human-computer interaction design. Human-Computer Interaction, 17(1), 1–61.
Merat, N., Seppelt, B., Louw, T., Engström, J., Lee, J. D., Johansson, E., … Keinath, A. (2019).
The “Out-of-the-Loop” concept in automated driving: Proposed definition, measures
and implications. Cognition, Technology & Work, 21(1), 87–98.
Metz, B., Schömig, N., & Krüger, H.-P. (2011). Attention during visual secondary tasks in
driving: Adaptation to the demands of the driving task. Transportation Research Part
F: Traffic Psychology and Behaviour, 14(5), 369–380.
Michon, J. A. (1985). A critical view of driver behavior models: What do we know, what
should we do? In L. Evans & R. C. Schwing (Eds.), Human Behavior and Traffic Safety
(pp. 485–520). New York, NY: Plenum.
Miller, D., Sun, A., Johns, M., Ive, H., Sirkin, D., Aich, S., & Ju, W. (2015). Distraction
becomes engagement in automated driving. Proceedings of the Human Factors and
Ergonomics Society Annual Meeting, 59(1), 1676–1680.
Mitler, M. M., Carskadon, M. A., Czeisler, C. A., Dement, W. C., Dinges, D. F., & Graeber,
R. C. (1988). Catastrophes, sleep, and public policy: Consensus report. Sleep, 11(1),
100–109.
Moore-Ede, M. C., Sulzman, F. M., & Fuller, C. A. (1982). The Clocks that Time Us:
Physiology of the Circadian Timing System. Cambridge, MA: Harvard University
Press.
National Center for Statistics and Analysis. (2018). Alcohol-Impaired Driving: 2017 Data. Washington, DC: NCSA.
National Highway Traffic Safety Administration. (2013). Visual-manual NHTSA driver
distraction guidelines for in-vehicle electronic devices. Federal Register, 78(81),
24818–24890. (Washington, DC: National Highway Traffic Safety Administration.)
National Highway Traffic Safety Administration. (2016). Visual-manual NHTSA driver dis-
traction guidelines for portable and aftermarket devices. Federal Register, 81(233),
87656–87683. (Washington, DC: National Highway Traffic Safety Administration.)
National Highway Traffic Safety Administration. (2017). Research Note: Distracted Driving
2015. Washington, DC: National Highway Traffic Safety Administration.
National Highway Traffic Safety Administration. (2019). Research Note: Driver elec-
tronic device use in 2017. Washington, DC: National Highway Traffic Safety
Administration.
National Safety Council. (2019). Drowsy Driving Is Impaired Driving. Retrieved from www.
nsc.org/road-safety/safety-topics/fatigued-driving
National Sleep Foundation. (2019). Sleepiness, Medication & Drugs: Why Your OTC
Medications and Prescription Drugs Might Make You Tired. Retrieved from www.
sleepfoundation.org/excessive-sleepiness/causes/sleepiness-medication-drugs-why-
your-otc-medications-and-prescription
Naujoks, F., Höfling, S., Purucker, C., & Zeeb, K. (2018). From partial and high automation to
manual driving: Relationship between non-driving related tasks, drowsiness and take-
over performance. Accident Analysis & Prevention, 121, 28–42.
Naujoks, F., Wiedemann, K., & Schömig, N. (2017). The importance of interruption manage-
ment for usefulness and acceptance of automated driving. Proceedings of the 9th ACM
International Conference on Automotive User Interfaces and Interactive Vehicular
Applications (AutomotiveUI ’17) (pp. 254–263). New York: ACM.
Ogden, E. & Moskowitz, H. (2004). Effects of alcohol and other drugs on driver performance.
Traffic Injury Prevention, 5(3), 185–198.
Oviedo-Trespalacios, O., Haque, M. M., King, M., & Washington, S. (2017). Self-regulation
of driving speed among distracted drivers: An application of driver behavioral adapta-
tion theory. Traffic Injury Prevention, 18(6), 599–605.
Owens, J., Gruber, R., Brown, T., Corkum, P., Cortese, S., O’Brien, L., … Weiss, M. (2013).
Future research directions in sleep and ADHD: Report of a consensus working group.
Journal of Attention Disorders, 17(7), 550–564.
Parasuraman, R. & Nestor, P. G. (1991). Attention and driving skills in aging and Alzheimer’s
disease. Human Factors, 33(5), 539–557.
Pech, T., Enhuber, S., Wandtner, B., Schmidt, G., & Wanielik, G. (2018). Real time recognition
of non-driving related tasks in the context of highly automated driving. In J. Dubbert,
B. Müller, & G. Meyer (Eds.), Advanced Microsystems for Automotive Applications
2018. AMAA 2018. Lecture Notes in Mobility (pp. 43–55). Berlin: Springer.
Platten, F., Milicic, N., Schwalm, M., & Krems, J. (2013). Using an infotainment system while
driving - A continuous analysis of behavior adaptations. Transportation Research Part
F: Traffic Psychology and Behaviour, 21, 103–112.
Politis, I., Brewster, S., & Pollick, F. (2017). Using multimodal displays to signify critical
handovers of control to distracted autonomous car drivers. International Journal of
Mobile Human Computer Interaction, 9(3), 1–16.
Proctor, R. W. & Van Zandt, T. (2018). Human Factors in Simple and Complex Systems
(3rd ed.). Boca Raton, FL: CRC Press.
Ranney, T. A., Garrott, W. R., & Goodman, M. J. (2000). NHTSA Driver Distraction
Research: Past, Present, and Future (No. 2001–06–0177). Warrendale, PA: SAE.
Reason, J. T. (1978). Motion sickness adaptation: A neural mismatch model. Journal of the
Royal Society of Medicine, 71, 819–829.
Reason, J. T. & Brand, J. J. (1975). Motion Sickness. Oxford: Academic Press.
Regan, M. A., Hallett, C., & Gordon, C. P. (2011). Driver distraction and driver inattention:
Definition, relationship and taxonomy. Accident Analysis and Prevention, 43(5),
1771–1781.
Regan, M. A., Lee, J. D., & Young, K. (2008). Driver Distraction: Theory, Effects and
Mitigation. Boca Raton, FL: CRC Press.
Reimer, B., Donmez, B., Lavallière, M., Mehler, B., Coughlin, J. F., & Teasdale, N. (2013).
Impact of age and cognitive demand on lane choice and changing under actual highway
conditions. Accident Analysis and Prevention, 52, 125–132.
Roche, F., Somieski, A., & Brandenburg, S. (2019). Behavioral changes to repeated takeovers
in highly automated driving: Effects of the takeover-request design and the nondriving-
related task modality. Human Factors, 61(5), 839–849.
Roehrs, T. & Roth, T. (2001). Sleep, sleepiness, and alcohol use. Alcohol Research and
Health, 25(2), 101–109.
Romano, E. & Voas, R. B. (2011). Drug and alcohol involvement in four types of fatal crashes.
Journal of Studies on Alcohol and Drugs, 72(4), 567–576.
Rosa, R. R., Bonnet, M. H., Bootzin, R. R., Eastman, C. I., Monk, T., Penn, P. E., … Walsh,
J. K. (1990). Intervention factors for promoting adjustment to nightwork and shiftwork.
Occupational Medicine: State of the Art Reviews, 5(2), 391–415.
SAE International. (2018). Taxonomy and Definitions for Terms Related to Driving
Automation Systems for On-Road Motor Vehicles (J3016). Warrendale, PA: Society for
Automotive Engineers.
Sarter, N. B. & Woods, D. D. (1995). How in the world did we ever get into that mode? Mode
error and awareness in supervisory control. Human Factors, 37(1), 5–19.
Saxby, D. J., Matthews, G., & Neubauer, C. (2017). The relationship between cell phone use
and management of driver fatigue: It’s complicated. Journal of Safety Research, 61,
129–140.
Schömig, N., Hargutt, V., Neukum, A., Petermann-Stock, I., & Othersen, I. (2015). The inter-
action between highly automated driving and the development of drowsiness. Procedia
Manufacturing, 3, 6652–6659.
Schroeter, R., Oxtoby, J., & Johnson, D. (2014). AR and gamification concepts to reduce driver
boredom and risk taking behaviours. Proceedings of the 6th International Conference
on Automotive User Interfaces and Interactive Vehicular Applications - AutomotiveUI
’14 (pp. 1–8). Seattle, WA: ACM Press.
Schwarz, C., Brown, T., Lee, J., Gaspar, J., & Kang, J. (2016). The Detection of Visual
Distraction Using Vehicle and Driver Based Sensors. Warrendale, PA: SAE
International.
Seppelt, B. D. & Lee, J. D. (2019). Keeping the driver in the loop: Dynamic feedback to sup-
port appropriate use of imperfect vehicle control automation. International Journal of
Human-Computer Studies, 125, 66–80.
Shen, S. & Neyens, D. M. (2017). Assessing drivers’ response during automated driver sup-
port system failures with non-driving tasks. Journal of Safety Research, 61, 149–155.
Sibi, S., Ayaz, H., Kuhns, D. P., Sirkin, D. M., & Ju, W. (2016). Monitoring driver cognitive
load using functional near infrared spectroscopy in partially autonomous cars. IEEE
Intelligent Vehicles Symposium (IV) (pp. 419–425). Gothenburg, Sweden: IEEE.
Simons, J. S., Wills, T. A., Emery, N. N., & Marks, R. M. (2015). Quantifying alcohol
consumption: Self-report, transdermal assessment, and prediction of dependence
symptoms. Addictive Behaviors, 50, 205–212.
Steinberger, F., Schroeter, R., Foth, M., & Johnson, D. (2017). Designing gamified applica-
tions that make safe driving more engaging. CHI 2017: Conference on Human Factors
in Computing Systems, ACM SIGCHI (pp. 2826–2839). Denver, CO: ACM Press.
Stockert, S., Richardson, N. T., & Lienkamp, M. (2015). Driving in an increasingly auto-
mated world – Approaches to improve the driver-automation interaction. Procedia
Manufacturing, 3, 2889–2896.
Stoner, H. A., Fisher, D. L., & Mollenhauer, M., Jr. (2011). Simulator and scenario factors
influencing simulator sickness. In D. L. Fisher, M. Rizzo, J. K. Caird, & J. D. Lee
(Eds.), Handbook of Driving Simulation for Engineering, Medicine, and Psychology
(pp. 14:1–14:24). Boca Raton, FL: CRC Press.
Swift, R. M., Martin, C. S., Swette, L., Laconti, A., & Kackley, N. (1992). Studies on a wear-
able, electronic, transdermal alcohol sensor. Alcoholism Clinical and Experimental
Research, 16(4), 721–725.
Tassi, P. & Muzet, A. (2000). Sleep inertia. Sleep Medicine Reviews, 4(4), 341–353.
Tepas, D. I. & Carvalhais, A. B. (1990). Sleep patterns of shiftworkers. Occupational
Medicine, 5(2), 199–208.
Tesla. (2019). Model X Owner’s Manual. Palo Alto, CA: Tesla.
Thomas, G. R., Raslear, T. G., & Kuehn, G. I. (1997). The Effects of Work Schedule on Train
Handling Performance and Sleep of Locomotive Engineers: A Simulator Study (DOT/
FRA/ORD-97–09). Washington, DC: Federal Railroad Administration.
Tijerina, L., Gleckler, M., Stoltzfus, D., Johnston, S., Goodman, M. J., & Wierwille, W. W.
(1998). A Preliminary Assessment of Algorithms for Drowsy Driver and Inattentive
Driver Detection on the Road (HS-808 905). Washington, DC: National Highway
Traffic Safety Administration.
Traffic Injury Research Foundation. (2018). Distraction-Related Fatal Collisions, 2000–2015. Ottawa, Canada: TIRF.
Turner, M. & Griffin, M. J. (1999). Motion sickness in public road transport: The rela-
tive importance of motion, vision, and individual differences. British Journal of
Psychology, 90, 519–530.
Van Dongen, H. P. A. (2006). Shift work and inter-individual differences in sleep and sleepi-
ness. Chronobiology International, 23(6), 1139–1147.
Ver Steeg, B., Treese, D., Adelante, R., Kraintz, A., Laaksonen, B., Ridder, T., … Cox, D.
(2017). Development of a solid state, non-invasive, human touch based blood alcohol
sensor. Proceedings of the 25th International Technical Conference on the Enhanced
Safety of Vehicles. Detroit, MI: ESV.
Victor, T. W. (2010). The Victor and Larsson (2010) Distraction Detection Algorithm and
Warning Strategy. Gothenburg, Sweden: Volvo Technology.
Victor, T. W., Engström, J., & Harbluk, J. L. (2008). Distraction assessment methods based
on visual behaviour and event detection. In M. A. Regan, J. D. Lee, & K. L. Young
(Eds.), Driver Distraction: Theory, Effects and Mitigation (pp. 135–165). Boca Raton,
FL: CRC Press.
Vogelpohl, T., Kühn, M., Hummel, T., Gehlert, T., & Vollrath, M. (2018). Transitioning to
manual driving requires additional time after automation deactivation. Transportation
Research Part F: Traffic Psychology and Behaviour, 55, 464–482.
Voss, A., Witt, K., Kaschowitz, T., Poitz, W., Ebert, A., Roser, P., & Bär, K.-J. (2014). Detecting
cannabis use on the human skin surface via an electronic nose system. Sensors, 14(7),
13256–13272.
Wada, T., Fujisawa, S., & Doi, S. (2018). Analysis of driver’s head tilt using a mathematical
model of motion sickness. International Journal of Industrial Ergonomics, 63, 89–97.
Wandtner, B., Schömig, N., & Schmidt, G. (2018a). Effects of non-driving related task modal-
ities on takeover performance in highly automated driving. Human Factors, 60(6),
870–881.
Wandtner, B., Schömig, N., & Schmidt, G. (2018b). Secondary task engagement and disen-
gagement in the context of highly automated driving. Transportation Research Part F:
Traffic Psychology and Behaviour, 58, 253–263.
Wang, J., Fang, H., Carreiro, S., Wang, H., & Boyer, E. (2017). A new mining method to detect
real time substance use events from wearable biosensor data stream. 2017 International
Conference on Computing, Networking and Communications (ICNC) (pp. 465–470).
Silicon Valley, CA: ICNC.
Watson, A. & Zhou, G. (2016). Microsleep prediction using an EKG capable heart rate moni-
tor. 2016 IEEE First International Conference on Connected Health: Applications,
Systems and Engineering Technologies (CHASE) (pp. 328–329).
Wertz, A. T., Ronda, J. M., Czeisler, C. A., & Wright, K. P. (2006). Effects of sleep inertia on
cognition. Journal of the American Medical Association, 295(2), 163–164.
Wesensten, N. J., Belenky, G., & Balkin, T. J. (2005). Cognitive readiness in network-centric
operations. Parameters: US Army War College Quarterly, 35(1), 94–105.
Wickelgren, W. A. (1977). Speed-accuracy tradeoff and information processing dynamics.
Acta Psychologica, 41, 67–85.
Wickens, C. D. (2008). Multiple resources and mental workload. Human Factors, 50(3),
449–455.
Wierwille, W. W., Wreggit, S. S., Kirn, C. L., Ellsworth, L. A., & Fairbanks, R. J. (1994).
Research on Vehicle-Based Driver Status/Performance Monitoring; Development,
Validation, and Refinement of Algorithms for Detection of Driver Drowsiness.
Final Report (HS-808 247). Washington, DC: National Highway Traffic Safety
Administration.
Wulf, F., Rimini-Doring, M., Arnon, M., & Gauterin, F. (2015). Recommendations sup-
porting situation awareness in partially automated driver assistance systems. IEEE
Transactions on Intelligent Transportation Systems, 16(4), 2290–2296.
Xie, J. Y., Chen, H.-Y. W., & Donmez, B. (2016). Gaming to safety: Exploring feedback
gamification for mitigating driver distraction. Proceedings of the Human Factors and
Ergonomics Society Annual Meeting, 60(1) 1884–1888.
Yoon, S. H. & Ji, Y. G. (2019). Non-driving-related tasks, workload, and takeover perfor-
mance in highly automated driving contexts. Transportation Research Part F: Traffic
Psychology and Behaviour, 60, 620–631.
Young, K. L., Regan, M. A., & Lee, J. D. (2008). Measuring the effects of driver distraction:
Direct driving performance methods and measures. In M. A. Regan, J. D. Lee, & K. L.
Young (Eds.), Driver Distraction: Theory, Effects and Mitigation (pp. 85–106). Boca
Raton, FL: CRC Press.
Zador, P. L., Krawchuk, S. A., & Voas, R. B. (2000). Alcohol-related relative risk of driver
fatalities and driver involvement in fatal crashes in relation to driver age and gender: An
update using 1996 data. Journal of Studies on Alcohol, 61(3), 387–395.
Zeeb, K., Buchner, A., & Schrauf, M. (2015). What determines the take-over time? An inte-
grated model approach of driver take-over after automated driving. Accident Analysis
& Prevention, 78, 212–221.
Zeeb, K., Buchner, A., & Schrauf, M. (2016). Is take-over time all that matters? The impact of
visual-cognitive load on driver take-over quality after conditionally automated driving.
Accident Analysis & Prevention, 92, 230–239.
Zhang, B., de Winter, J., Varotto, S., Happee, R., & Martens, M. (2019). Determinants of
take-over time from automated driving: A meta-analysis of 129 studies. Transportation
Research Part F: Traffic Psychology and Behaviour, 64, 285–307.
Zijlstra, F. R. H., Cropley, M., & Rydstedt, L. W. (2014). From recovery to regulation: An
attempt to reconceptualize “recovery from work.” Stress and Health, 30, 244–252.
10 Driver Capabilities in the Resumption of Control
Sherrilene Classen
University of Florida
Liliana Alvarez
University of Western Ontario
CONTENTS
Key Points .............................................................................................................. 218
10.1 Introduction .................................................................................................. 218
10.2 Michon’s Model ............................................................................................ 219
10.3 Medically-at-Risk Conditions ....................................................................... 219
  10.3.1 Deskilling: Implications of the Aging Process ................................ 219
    10.3.1.1 Age-Related Deskilling ..................................................... 219
    10.3.1.2 Functional Performance Deficits ....................................... 220
    10.3.1.3 Effect on Driving Behaviors .............................................. 220
    10.3.1.4 AV Technology to Compensate for Functional Driving Impairments by SAE Level ................ 221
  10.3.2 Low Vision ........................................................................................ 222
    10.3.2.1 Cataracts ............................................................................. 223
    10.3.2.2 Age-Related Macular Degeneration ................................... 224
    10.3.2.3 Glaucoma ........................................................................... 226
    10.3.2.4 Diabetic Retinopathy (DR) ................................................ 228
    10.3.2.5 AV Technology to Compensate for Functional Driving Impairments by SAE Level ................ 229
    10.3.2.6 Case Study: Zane, an Older Adult with a Diagnosis of Glaucoma ............................................ 229
  10.3.3 Neurological and Neurodegenerative Disorders ............................... 231
    10.3.3.1 Introduction ........................................................................ 231
    10.3.3.2 ASD and ADHD ................................................................ 231
    10.3.3.3 Parkinson’s Disease ............................................................ 232
    10.3.3.4 Strategic, Tactical, and Operational Deficits for Those with ASD, ADHD, and/or PD ............. 233
    10.3.3.5 Case Study: Elizabeth, a Client with PD ........................... 238
10.4 Conclusion ..................................................................................................... 241
Acknowledgments .................................................................................................. 241
References .............................................................................................................. 241
KEY POINTS
• Medical conditions, and their associated characteristics, affect the func-
tional abilities (visual, cognitive, motor, and other sensory) and therefore
the performance of drivers.
• These functional abilities are critical to perform the strategic, tactical, and
operational tasks associated with driving.
• Driver support features such as in-vehicle information systems (any of the SAE levels, although sometimes identified synonymously with SAE Level 0), lane-centering assist, and braking/acceleration assist (SAE Levels 1 and 2) may provide benefits but also challenges for drivers who are medically-at-risk.
• Automated vehicle technologies hold great potential to facilitate fitness-to-drive abilities in drivers wanting to resume control of the vehicle.
• Empirical testing of the potential benefits of automated vehicle technologies is mission critical.
10.1 INTRODUCTION
Because it is outside the scope of this chapter to focus on all medical conditions that may impact fitness to drive, the authors describe selected medical conditions grouped into four distinct categories, i.e., deskilling, visual disorders, neurological disorders, and neurodegenerative disorders. From these categories, the authors indicate the core clinical characteristics and explicate how those characteristics relate to functional performance deficits. Driving behaviors, an indicator of fitness to drive, are discussed within the structure of Michon’s model (Michon, 1985), which classifies driving behaviors at the strategic, tactical, and operational levels.
Rehabilitation scientists and professionals are concerned with empowering clients to overcome deficient driving behaviors. Therefore, this chapter expounds on how automation is providing exciting possibilities to address this challenge. The authors demonstrate the benefits of automated
vehicle technologies (SAE Levels 0–2) to enable the driver to resume control of
his/her fitness-to-drive abilities (Society of Automotive Engineers International,
2016). SAE Level 3 of automation may yield more risks than benefits for the medically-at-risk driver and, as such, is not discussed further in this chapter.
Levels 4 and 5 of automation may yield multiple benefits related to transportation
equity, especially for the disadvantaged, medically-at-risk, and disabled popu-
lations. However, because the driver will not “resume control” when engaged
with these levels of automation, but rather be an “operator” (SAE Level 4) or
a “passenger” (SAE Level 5), the authors do not further discuss those levels of
automation.
The authors present two case studies, one addressing an aging adult with a visual disorder and the other an adult with a neurodegenerative condition, to tie the previously discussed concepts together.
10.2 MICHON’S MODEL
Michon’s model of driving behaviors (Michon, 1985) is widely used and accepted
in the driving literature, and it acts as a conduit to communicate driving behaviors
in a way that is understandable to transportation engineers, psychologists, road traf-
fic safety officials, and driver rehabilitation specialists. Michon’s model categorizes
aspects of the driver and the environment into three hierarchical levels. First, the
strategic level requires the highest cognitive processing for the driver, and involves
high-level cognitive skills such as decision-making, planning, and problem-solving
to discern where, with whom, how, and how much to drive. The strategic level also
incorporates discerning the level of risk, i.e., anticipating risks, such as skidding,
sliding, or being unable to come to a stop when encountering icy roads. Usually
these decisions are made prior to the driving task over a period of minutes, hours,
or days—depending on the complexity of the trip and the environment. The tac-
tical level requires intermittent behaviors when maneuvering the vehicle to travel
from one destination to another. These behaviors include, but are not limited to,
handling the vehicle, avoiding obstacles, accepting gaps, making a turn, backing
up, or overtaking another vehicle. Such behaviors occur during the driving task and
may last for minutes to hours, depending on the length of the route. The operational
level demands the driver’s skills related to motor coordination, reaction time, visual
scanning, and spatial perception and orientation. These behaviors are critical when
carrying out distinct tasks, such as braking in time when a child runs across the
road or swerving around a distracted pedestrian. Usually such behaviors occur in
seconds. Therefore, Michon’s model (1985) emphasizes how the driver interacts with the environment, taking into consideration the driver’s behaviors as influenced by attention, judgment, working memory, cognitive processing, and sensory-motor abilities.
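For readers who build driver-state or takeover logic around this hierarchy, the three levels can be encoded as a small taxonomy. The following is a minimal Python sketch; the class names, example behaviors, and time scales are illustrative paraphrases of the model rather than part of Michon’s (1985) formulation.

from dataclasses import dataclass
from enum import Enum

class MichonLevel(Enum):
    # Michon's (1985) three hierarchical levels of driving behavior
    STRATEGIC = "strategic"      # planning; minutes to days, largely before the trip
    TACTICAL = "tactical"        # maneuvering; seconds to minutes, during the drive
    OPERATIONAL = "operational"  # vehicle control; fractions of a second to seconds

@dataclass
class DrivingBehavior:
    description: str
    level: MichonLevel
    time_scale_s: tuple  # rough (min, max) duration in seconds; illustrative only

EXAMPLES = [
    DrivingBehavior("Plan the route and departure time to avoid icy roads",
                    MichonLevel.STRATEGIC, (60.0, 86400.0)),
    DrivingBehavior("Accept a gap and overtake a slower vehicle",
                    MichonLevel.TACTICAL, (5.0, 60.0)),
    DrivingBehavior("Brake for a child running across the road",
                    MichonLevel.OPERATIONAL, (0.5, 3.0)),
]

for behavior in EXAMPLES:
    print(f"{behavior.level.value:>11}: {behavior.description}")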
10.3 MEDICALLY-AT-RISK CONDITIONS
10.3.1 Deskilling: Implications of the Aging Process
Driving deskilling is a potential consequence of SAE Level 3 of automation, which
requires the driver to cede control of all safety-critical functions of the vehicle, under
certain conditions, but expects the driver to resume control if such conditions change
(Trösterer et al., 2016). As noted earlier in this chapter, SAE Level 3 of automation is
not further discussed, as its use may yield more risks than benefits for at-risk drivers.
However, changes in the functional skills of drivers can also result in deskilling. For
the remainder of this section, we will discuss the normal aging process as an exam-
ple of natural deskilling and its implications for vehicle automation technologies.
10.3.1.1 Age-Related Deskilling
In the United States alone, in 2016, there were approximately 46 million adults 65 years of age or older—15% of the entire population—and this number is expected to almost double by 2060, when older adults will make up a quarter of the American population.
Together, impairments at all three of these levels increase older adults’ involve-
ment in motor vehicle collisions, particularly due to difficulties with giving the
right of way, negotiating turns, driving backwards during parking maneuvers,
and navigating complex and unfamiliar roadways (Karthaus & Falkenstein,
2016). In fact, older adults over the age of 65 experience an increase in motor
vehicle collisions per mile driven (Davis, Casteel, Hamann, & Peek-Asa, 2018).
In addition, older adults can overestimate their driving skills. In a study con-
ducted by Ross and colleagues (2012), 85% of older adults (N = 350) rated them-
selves as either good or excellent drivers, in spite of previous crash or citation
rates. However, driving is the primary means of community mobility for older
adults, particularly in Western countries. Restrictions in community mobility
lead to social isolation, as well as cognitive and mental health decline (Fonda,
Wallace, & Herzog, 2001). Furthermore, driving cessation can increase the need
for long-term care in older adults compared with active drivers of the same age,
independent of health status (Freeman, Gange, Munoz, & West, 2006). Therefore,
autonomous vehicle (AV) technologies may provide opportunities for older adults
to remain on the road when possible, while compensating for specific functional
impairments.
but three states have a minimum best corrected visual acuity requirement of 20/40
(Steinkuller, 2010). Refractive errors are commonly corrected through the use of pre-
scription glasses, contact lenses, and refractive surgeries, and as such, global efforts
are improving access to such services and interventions (World Health Organization,
2013). These corrections allow individuals to achieve the visual acuity requirements
of their jurisdictions. Because correction is the goal of interventions for refractive
errors, and drivers who do not meet the required standard are otherwise restricted or
prevented from driving, the remainder of this section will focus on the subsequent
four leading causes of visual impairment.
10.3.2.1 Cataracts
A cataract is a visual impairment that results from a clouding or opacity of the lens
in the eye (National Eye Institute, 2013). The lens, located behind the iris and the
pupil, focuses the light onto the retina. When the proteins that make up the lens
accumulate, the lens becomes clouded, a phenomenon that can occur in one or both
eyes (National Eye Institute, 2013).
FIGURE 10.1 (See color insert.) Scene as viewed by a person with cataract. (Image from
the National Eye Institute, National Institutes of Health.)
FIGURE 10.2 (See color insert.) Scene as viewed by a person with AMD. (Image from the
National Eye Institute, National Institutes of Health.)
and may see dark spots blanking out their central vision (Canadian Association of
Optometrists, n.d.). AMD is broadly classified into two types. The dry form is the
most common and presents as a gradual degeneration of the tissue in the macula
with symptoms developing more slowly. In contrast, the wet form is more severe
and results from the bleeding of weakened vessels under the macula which causes
the symptoms to progress more rapidly (Mitchell, Liew, Gopinath, & Wong, 2018).
Figure 10.2 illustrates a scene as viewed by an individual with AMD.
10.3.2.3 Glaucoma
The term glaucoma refers to a group of diseases characterized by progressive dam-
age to the optic nerve. In the absence of early detection and treatment, glaucoma can
eventually lead to blindness (National Eye Institute, 2015b). In 2013, it was estimated
that approximately 64 million people around the world lived with glaucoma, a num-
ber that is expected to almost double by 2040 (Tham et al., 2014).
FIGURE 10.3 (See color insert.) Scene as viewed by a person with glaucoma. (Image
from the National Eye Institute, National Institutes of Health.)
can go undetected until the disease has advanced considerably (Cohen & Pasquale,
2014). Once present, performance impairments are most often characterized by a
loss of peripheral vision. As the disease progresses, it might compromise an indi-
vidual’s central vision and even cause blindness (National Eye Institute, 2015b).
FIGURE 10.4 (See color insert.) Scene as viewed by a person with DR. (Image from the
National Eye Institute, National Institutes of Health.)
Finally, drivers diagnosed with any of the above-mentioned visual disorders experience difficulty watching for hazards. Pedestrian detection systems (SAE Level 0) can alert these drivers if there is a vulnerable road user (e.g., pedestrian) in the vehicle’s path.
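Warning-only systems of this kind commonly compare an estimated time-to-collision (range divided by closing speed) against an alert threshold. The sketch below illustrates that logic in Python; the 2.5-second threshold and the function name are assumptions for illustration, not values from any specific production system.

def pedestrian_alert(range_m, closing_speed_mps, ttc_threshold_s=2.5):
    # Toy forward-path pedestrian alert (SAE Level 0: warning only).
    # Fires when time-to-collision (range / closing speed) drops below
    # an illustrative threshold.
    if closing_speed_mps <= 0:  # not closing on the pedestrian
        return False
    return (range_m / closing_speed_mps) < ttc_threshold_s

print(pedestrian_alert(range_m=20.0, closing_speed_mps=10.0))  # True: TTC = 2 s
print(pedestrian_alert(range_m=40.0, closing_speed_mps=10.0))  # False: TTC = 4 s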
TABLE 10.1
Strengths and Challenges

Strengths
1. Has insight and has implemented self-restrictions on his driving.
2. Drives in familiar environments and routes while avoiding hazardous climates and nighttime driving.
3. Has medical care and adequate follow-up from his circle of care.
4. Has no known comorbidities beside high blood pressure.
5. Has a family network of support.

Challenges
1. Visual impairments are increasing even after surgical intervention.
2. Has high blood pressure, which affects severity of his symptoms.
Recommendations. The CDRS recommends that Zane use the two-second rule, leaving at least 2 seconds between himself and the lead vehicle, and that he stop where he can see the tires of the lead vehicle. She also suggests an alternative route to the supermarket that avoids four-way stops completely, and proposes that Zane work with her during upcoming sessions to practice this new route and build confidence. She recommends the use of a lane change assist and a blind spot detector: these technologies will warn Zane when he is drifting from the lane, as well as provide a warning if there are objects in his blind spot or in a position where a lane change would be unsafe. Other technologies that could support Zane include lane-centering control, adaptive cruise control, and a pedestrian/bicycle collision warning system, so that when Zane turns his head away from the forward view, the forward roadway is still monitored by the AV technology to the extent possible to provide Level 2 assistance. However, given Zane’s age and the deskilling that is expected to emerge, integrating all of these technologies could result in increased cognitive load. As such, the CDRS prioritizes the lane change assist and the blind spot detector.
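The two-second rule translates directly into a minimum following distance that grows linearly with speed (distance = speed × time gap). The following is a minimal Python sketch of that arithmetic; the function name and the use of metric units are illustrative assumptions, not part of the CDRS guidance.

def two_second_gap_m(speed_kmh, gap_s=2.0):
    # Minimum following distance (in meters) implied by a time-gap rule:
    # distance = speed * gap, with speed converted from km/h to m/s.
    return (speed_kmh / 3.6) * gap_s

# At 50 km/h the rule implies ~27.8 m of headway; at 100 km/h, ~55.6 m.
for v in (50, 100):
    print(f"{v} km/h -> {two_second_gap_m(v):.1f} m")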
Before recommending the technologies, however, she explores Zane’s AV tech-
nology acceptance. Zane wants to be able to drive for as long as possible and is will-
ing to purchase technologies that might be helpful. He heard one of his daughters
talking about a new device in her car and he is actually excited to try one himself.
The CDRS conducts education sessions with a vendor before any purchase, so that
Zane can try the technology and use it appropriately.
Classen et al., 2013); and teens with ADHD/ASD did worse on measures of visual
performance (Classen, Monahan, & Brown, 2013; Classen et al., 2013).
For cognitive functions, teens with ASD have difficulty problem-solving during driving events such as approaching an emergency vehicle. During these driving maneuvers, when they experience increased demands on working memory, they make more steering and braking errors (Cox et al., 2016). Noticeably, as can be seen from Video 10.1, teens with ASD have difficulty carrying out the correct sequence when performing a turn.
Video 10.1: Teen with ASD Performing a Turn
As such, they may not effectively sequence the adjustment of speed and rota-
tion of the steering wheel to control a vehicle through a turn (Classen et al.,
2013). Moreover, these teens divert their attention away from complex roadway
situations that require increased cognitive demands (Reimer et al., 2013). Teens
with ASD and/or ADHD performed worse than neurotypical peers on measures
of cognition (Classen, Monahan, & Brown, 2013). Specifically, in the Classen
et al. study, they responded late or not at all to traffic lights, regulatory signs, or
pedestrians.
The reduced ability to estimate risk and the impulsive tendencies of teens with ADHD impair their judgment when driving (Barkley, 2004). Such impairments are evident in misjudging gaps in traffic and not adjusting speed for hazardous conditions.
Compared with neurotypical peers, teens with ADHD also demonstrate impaired
selective attention (Classen, Monahan, & Brown, 2013).
For motor functions, teens with ASD have difficulty with bilateral upper extremity
motor coordination for turning the wheel when negotiating a turn in a high-fidelity
driving simulator (Classen, Monahan, & Brown, 2013). Compared with neurotypical
peers, teens with ADHD also showed impaired motor coordination during a simula-
tor driving task (Classen, Monahan, Brown et al., 2013).
10.3.3.3 Parkinson’s Disease
In the United States, PD affects about 1 million Americans (Parkinson’s Foundation, 2018). PD is most commonly diagnosed in people over the age of 60, with only 5% of all cases diagnosed before that age. Men are 1.5 times more likely to be diagnosed with PD than women, and the incidence of PD increases with age (Parkinson’s Foundation, 2018).
behaviors on the strategic, tactical, and operational levels (Michon, 1985). Examples
of such impairments on each of the levels of driving behavior are indicated below:
10.3.3.4.1 Strategic Impairment
Strategic impairments may include decisions that are made a priori, before starting to drive, and may incorporate decision-making, problem-solving, reasoning, judgment, and/or initiation of strategies to overcome potential driving difficulties. Examples of impairments include experiencing:
• Challenges with decisions related to trip planning, prior to the actual drive,
which result in getting lost during travel, or being disoriented in time and space;
• Difficulty with judgment and decision-making, during the actual drive,
when the driver with a cognitive impairment needs to negotiate a complex
travel route; and
• Increased cognitive load when driving during peak traffic hours (vs. non-peak hours), which results in deterioration of driving performance.
10.3.3.4.2 Tactical Impairment
These impairments are evident during the driving task, with routine functions being deficient. Such functions include steering, braking, accelerating, stopping, or controlling the vehicle in the driving environment while adhering to the rules of the road and traffic regulations. Examples include:
10.3.3.4.3 Operational
Such impairments occur when a swift reaction of the driver is necessary to avoid a
potential obstacle or adverse condition. Examples include:
• Inability to swerve to avoid an obstacle in the road that may cause harmful
effects;
• Challenges to control the gas and/or brake pedals in an emergency situa-
tion, such as a child running across the road; and
• Limited ability to manipulate swift steering control when a potential
adverse event is unfolding, such as a driver in a parked car opening a car
door in front of one’s moving vehicle.
As such, because driving behaviors seem to be impaired at all levels (strategic, tactical, and operational) in drivers with ASD, ADHD, and/or PD, AV technology holds out the promise of mitigating at least some of the functional impairments.
TABLE 10.2
Functional Deficits of Drivers with Neurological or Neurodegenerative Disorders, the AV Technology by Type and SAE Level, as Well as the Potential to Offset the Functional Deficits Caused by the Disorder
(Columns: Source | Deficit | AV Technology)

Teens with ASD and/or ADHD

Vision | Teens with ASD do not scan the environment as effectively as neurotypical peers and may not notice potential hazards in their immediate path. | Pedestrian detection and avoidance system (SAE Level 0): Assists the driver to detect pedestrians in the path by alerting the driver to such stimuli, and braking the car, should the driver not adjust his/her response. Blind spot detection (SAE Level 0): See Section 10.3.4.4.

Visual perception | Teens with ADHD may misjudge gaps in traffic. | ACC (SAE Level 1): ACC automatically adjusts the vehicle speed to maintain a safe distance from vehicles ahead.

Cognition | Teens with ASD make more steering and braking errors when experiencing increased demands on working memory. | Intelligent speed adaptation (SAE Level 0–1): An on-board camera scans for signs and reads the speed limit while the GPS satellite passes road speed limits (Level 0) directly to the car, and the on-board computer warns the driver and slows (Level 1) the car to match the speed limit information received.

Cognition | Teens with ASD have difficulty carrying out the correct sequence of actions when performing a turn. As such, they may not effectively sequence the adjustment of speed and rotation of the steering wheel to control a vehicle through a turn. | Intersection assist (SAE Level 0 or 1): Monitors cross traffic in an intersection or road junction. When it detects a hazardous situation, it prompts the driver to start emergency braking by activating visual and/or auditory warnings or automatically engages the brakes (also see Section 10.3.4.4).

(Continued)
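The warn-then-slow behavior that Table 10.2 attributes to intelligent speed adaptation can be summarized in a few lines of logic. Below is a minimal Python sketch; the tolerance value and the field names are illustrative assumptions, not drawn from any production system.

def isa_response(current_speed_kmh, posted_limit_kmh, tolerance_kmh=2.0):
    # Toy intelligent speed adaptation (ISA) logic in the spirit of Table 10.2.
    # Level 0 behavior: warn when the posted limit is exceeded.
    # Level 1 behavior: additionally command a target speed at the limit.
    if current_speed_kmh - posted_limit_kmh <= tolerance_kmh:
        return {"warn": False, "target_speed_kmh": None}
    return {"warn": True,                          # SAE Level 0: alert the driver
            "target_speed_kmh": posted_limit_kmh}  # SAE Level 1: slow the car

print(isa_response(62, 50))  # warn and slow toward 50 km/h
print(isa_response(49, 50))  # within tolerance: no action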
TABLE 10.3
Strengths and Challenges for Elizabeth

Strengths
1. Regulating her driving
2. Avoiding potential hazards
3. Living alone—independent in Activities of Daily Living (ADLs) and possibly most Instrumental Activities of Daily Living (IADLs)
4. Having higher education
5. Wearing prescription lenses
6. Having a good driving record

Challenges
1. Neurodegenerative nature of PD
2. Chronic and degenerative visual conditions
3. Side effects of blood pressure medication (lightheadedness, dizziness)
4. Limited support network
5. Effects of the comorbidities, e.g., stiff joints, depression, potential for daytime sleepiness
TABLE 10.4
Use of AV Technology to Mitigate Impairments Related to Functional Declines
(Columns: Impairment in Driving Behaviors (Strategic/Tactical/Operational) | AV Technology to Offset the Driving Impairments | Benefits for Elizabeth)

Makes wide turns at curves and turns into the furthest lane (tactical) | Lane-keeping assist | Lane departure warning systems use visual/auditory or vibrational (haptic) stimuli to alert a driver that they are drifting out of the lane, and the lane-keeping assist centers the vehicle to its lane to overcome the notion of making wide turns.

Not scanning adequately to the left and right before crossing intersections (tactical) | Intersection assistant | Scans the environment for oncoming traffic and warns drivers of vehicles approaching from the sides at intersections, highway exits, or car parks to overcome the inadequate scanning behaviors of the client.

Not reacting appropriately to cars in her blind spot (tactical) | Blind spot detection | Alerts the driver through auditory and visual cues of approaching vehicles in the blind spot to overcome the impairment related to blind spot detection.

Not maintaining lateral lane position as she drifts to the left (tactical) | Lane departure warning and correction system | As described above.

Not using turn signal consistently (tactical) | Lane departure warning and correction system | As described above.
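The two-stage behavior described in Table 10.4 (warn on drift, then center the vehicle) can be sketched as a threshold scheme on lateral offset. A minimal Python sketch follows; the thresholds and proportional gain are illustrative assumptions rather than values from any production lane-keeping system.

def lane_keeping_assist(lateral_offset_m, warn_threshold_m=0.3,
                        assist_threshold_m=0.5, gain=0.8):
    # Toy lane-keeping logic in the spirit of Table 10.4.
    # lateral_offset_m: signed distance of vehicle center from lane center
    # (positive = drifting left). Thresholds and gain are illustrative.
    offset = abs(lateral_offset_m)
    if offset < warn_threshold_m:
        return {"warning": False, "steer_correction": 0.0}
    if offset < assist_threshold_m:
        # Stage 1: visual/auditory/haptic warning only (lane departure warning).
        return {"warning": True, "steer_correction": 0.0}
    # Stage 2: lane-keeping assist nudges the vehicle back toward lane center.
    return {"warning": True, "steer_correction": -gain * lateral_offset_m}

print(lane_keeping_assist(0.2))  # centered: no action
print(lane_keeping_assist(0.4))  # drifting: warn only
print(lane_keeping_assist(0.7))  # departing: warn and correct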
10.4 CONCLUSION
In this chapter, the authors discussed deskilling, visual disorders, and neurological and neurodegenerative disorders. Specifically, the authors focused on the core clinical characteristics, the associated functional performance deficits, the effects on driving behaviors, and the potential of automated vehicle technologies (SAE Levels 0–2) to mitigate the functional performance deficits and enable the driver to resume control. The content is tied together through the illustration of two case studies.
Although the empirical literature supports most of the sections in the chapter, the last section (i.e., deployment of automated vehicle technologies to resume control) represents a conceptual amalgamation of collective thinking on the possibilities of vehicle automation. This section is based on conjecture, informed by the authors’ collective best clinical reasoning, an understanding of the medically-at-risk driver’s functional performance deficits, and the potential that automated vehicle technologies hold for the driver to resume control.
The authors hope that this chapter lays the conceptual foundation for engineers, clinicians, rehabilitation scientists, and other transportation professionals to recognize the vulnerabilities of medically-at-risk drivers, as well as the opportunities that automation holds to enable these populations to be independent and safe in their driving tasks. The authors also hope that this chapter will prompt scientists to empirically test the assumptions related to automated vehicle technologies for at-risk drivers.
ACKNOWLEDGMENTS
The authors thank Sarah Krasniuk, PhD student in Rehabilitation Science, University of Western Ontario, London, Ontario, Canada, for formatting, coordination, and technical support.
REFERENCES
Almberg, M., Selander, H., Falkmer, M., Vaz, S., Ciccarelli, M., & Falkmer, T. (2015).
Experiences of facilitators or barriers in driving education from learner and nov-
ice drivers with ADHD or ASD and their driving instructors. Developmental
Neurorehabilitation, 20(2), 59–67. doi:10.3109/17518423.2015.1058299
Alvarez, L. & Classen, S. (2017). In-vehicle technology and driving simulation. In S. Classen
(Ed.), Driving Simulation for Assessment, Intervention, and Training: A Guide for
Occupational Therapy and Health Care Practitioners (pp. 265–278). Bethesda, MD:
AOTA Press.
American Psychiatric Association. (2000). Diagnostic and Statistical Manual of Mental Disorders: DSM-IV-TR. Washington, DC: American Psychiatric Association.
American Psychiatric Association. (2012). DSM-5 proposed criteria for autism spectrum
disorder designed to provide more accurate diagnosis and treatment. News Release.
Retrieved from DSM-5: The Future of Psychiatric Diagnosis website: www.dsm5.org/
Pages/Default.aspx
Amick, M. M., Grace, J., & Ott, B. R. (2007). Visual and cognitive predictors of driving safety
in Parkinson’s disease patients. Archives of Clinical Neuropsychology, 22(8), 957–967.
Anderson, D. R. (2011). Normal-tension glaucoma (low-tension glaucoma). Indian Journal of
Ophthalmology, 59(Suppl. 1), S97–S101. doi:10.4103/0301-4738.73695
Aquino, M. C. D., Tan, A. M., Loon, S. C., See, J., & Chew, P. T. (2011). A randomized
comparative study of the safety and efficacy of conventional versus micropulse
diode laser transscleral cyclophotocoagulation in refractory glaucoma. Investigative
Ophthalmology & Visual Science, 52(14), 2609–2609.
Barkley, R. A. (2004). Driving impairments in teens and adults with attention deficit hyper-
activity disorder. Psychiatric Clinics of North America, 27, 233–260. doi:10.1016/S0193-953X(03)00091-1
Boisgontier, M. P., Olivier, I., Chenu, O., & Nougier, V. (2012). Presbypropria: The effects of
physiological ageing on proprioceptive control. Age (Dordrecht, Netherlands), 34(5),
1179–1194. doi:10.1007/s11357-011-9300-y
Bron, A. M., Viswanathan, A. C., Thelen, U., de Natale, R., Ferreras, A., Gundgaard, J., … Buchholz, P. (2010). International vision requirements for driver licensing and disability
pensions: Using a milestone approach in characterization of progressive eye disease.
Clinical Ophthalmology, 4, 1361–1369. doi:10.2147/OPTH.S15359
Canadian Association of Optometrists. (n.d.). Age-Related Macular Degeneration. Retrieved
from https://opto.ca/health-library/amd%20and%20low%20vision
Canadian Medical Association. (2017). Determining Medical Fitness to Operate Motor
Vehicles: CMA Driver’s Guide. Toronto, ON: Joule Inc.
Centers for Disease Control and Prevention. (2017). Facts about ADHD. Retrieved from
www.cdc.gov/ncbddd/adhd/facts.html
Centers for Disease Control and Prevention. (2018). Prevalence of Autism Spectrum Disorder
Among Children Aged 8 Years—Autism and Developmental Disabilities Monitoring
Network, 11 Sites, United States, 2014. Morbidity and Mortality Weekly Report. Retrieved
from www.cdc.gov/mmwr/volumes/67/ss/ss6706a1.htm?s_cid=ss6706a1_w
Chen, K. B., Xu, X., Lin, J.-H., & Radwin, R. G. (2015). Evaluation of older driver head
functional range of motion using portable immersive virtual reality. Experimental
Gerontology, 70, 150–156. doi:10.1016/j.exger.2015.08.010
Clark, T., Feehan, C., Tinline, C., & Vostanis, P. (1999). Autistic symptoms in children with
attention deficit-hyperactivity disorder. European Child and Adolescent Psychiatry,
8(1), 50–55. doi:10.1007/s007870050083
Classen, S., Monahan, M., & Brown, K. (2013). Indicators of simulated driving performance
in teens with attention deficit hyperactivity disorder. The Open Journal of Occupational
Therapy, 2(1), Art. 3.
Classen, S., Monahan, M., Brown, K. E., & Hernandez, S. (2013). Driving indicators in teens
with attention deficit hyperactivity and/or autism spectrum disorder. Canadian Journal
of Occupational Therapy, 80(5), 274–283. doi:10.1177/0008417413501072
Classen, S., Witter, D. P., Lanford, D. N., Okun, M. S., Rodriguez, R. L., Romrell, J., …
Fernandez, H. H. (2011). The usefulness of screening tools for predicting driving per-
formance in people with Parkinson’s disease. The American Journal of Occupational
Therapy, 65(5), 579–588. doi:10.5014/ajot.2011.001073
Cohen, L. P. & Pasquale, L. R. (2014). Clinical characteristics and current treatment of glau-
coma. Cold Spring Harbor Perspectives in Medicine, 4(6), a017236. doi:10.1101/cshperspect.a017236
Cox, S. M., Cox, J. M., Kofler, M. J., Moncrief, M. A., Johnson, R. J., Lambert, A.
E., … Reeve, R. E. (2016). Driving simulator performance in novice drivers with
autism spectrum disorder: The role of executive functions and basic motor skills.
Journal of Autism and Developmental Disorders, 46, 1379–1391. doi:10.1007/
s10803-015-2677-1
Crizzle, A. M., Classen, S., Lanford, D. N., Malaty, I. I., Rodriguez, R. L., McFarland,
N. R., & Okun, M. S. (2013). Driving performance and behaviors: A comparison of
gender differences in drivers with Parkinson’s disease. Traffic Injury and Prevention,
14(4), 340–345. doi:10.1080/15389588.2012.717730
Crizzle, A. M., Classen, S., & Uc, E. Y. (2012). Parkinson Disease and driving: An evidence-
based review. Neurology, 79(20), 2067–2074.
Davis, J., Casteel, C., Hamann, C., & Peek-Asa, C. (2018). Risk of motor vehicle crash for
older adults after receiving a traffic charge: A case-crossover study. Traffic Injury
Prevention, 1–26. doi:10.1080/15389588.2018.1453608
Devos, H., Ranchet, M., Akinwuntan, A. E., & Uc, E. Y. (2015). Establishing an evidence-based framework for driving rehabilitation in Parkinson’s disease: A systematic review
of on-road driving studies. NeuroRehabilitation, 37(1), 35–52.
Devos, H., Ranchet, M., Bollinger, K., Conn, A., & Akinwuntan, A. E. (2018). Performance-
based visual field testing for drivers with glaucoma: A pilot study. Traffic Injury
Prevention, 1–7. doi:10.1080/15389588.2018.1508834
Dubinsky, R. M., Gray, C., Husted, D., Busenbark, K., Vetere-Overfield, B., Wiltfong, D., …
Koller, W. C. (1991). Driving in Parkinson’s disease. Neurology, 41, 517–520.
Fabiano, G. A., Hulme, K., Linke, S., Nelson-Tuttle, C., Pariseau, M., Gangloff, B., …
Buck, M. (2010). The Supporting a Teen’s Effective Entry to the Roadway (STEER)
program: Feasibility and preliminary support for a psychosocial intervention for
teenage drivers with ADHD. Cognitive and Behavioral Practice, 18(2), 267–280.
doi:1077–7229/10/267–280$1.00/0
Federal Highway Administration. (2015). Highway Statistics 2014. Washington, DC.
Retrieved from www.fhwa.dot.gov/policyinformation/statistics/2014/
Fonda, S., Wallace, R., & Herzog, A. (2001). Changes in driving patterns and worsen-
ing depressive symptoms among older adults. Journals of Gerontology. Series B:
Psychological Sciences and Social Sciences, 56B(6), S343–351.
Freeman, E. E., Gange, S. J., Munoz, B., & West, S. K. (2006). Driving status and risk of entry
into long-term care in older adults. American Journal of Public Health, 92(8), 1284–1289.
Glaucoma Research Foundation. (n.d.). Types of Glaucoma. Retrieved from www.glaucoma.
org/glaucoma/types-of-glaucoma.php
Haegerstrom-Portnoy, G., Schneck, M. E., & Brabyn, J. A. (1999). Seeing into old age: Vision
function beyond acuity. Optometry and Vision Science, 76(3), 141–158.
Huang, P., Kao, T., Curry, A., & Durbin, D. R. (2012). Factors associated with driving in teens
with autism spectrum disorders. Journal of Developmental Behavioral Pediatrics,
33(1), 1–5. doi:10.1097/DBP.0b013e31823a43b7
Jerome, L., Habinski, L., & Segal, A. (2006). Attention-deficit/hyperactivity disorder
(ADHD) and driving risk: A review of the literature and a methodological critique.
Current Psychiatry Reports, 8(5), 416–426.
Jerome, L., Segal, A., & Habinski, L. (2006). What we know about ADHD and driving risk:
A literature review, meta-analysis and critique. Journal of the Canadian Academy of
Child and Adolescent Psychiatry, 15(3), 105–125.
Karthaus, M. & Falkenstein, M. (2016). Functional changes and driving performance in older
drivers: Assessment and interventions. Geriatrics, 1(2), 12.
Lachenmayer, L. (2000). Parkinson’s disease and the ability to drive. Journal of Neurology,
247(Suppl 4), 28–30.
Lee, R., Wong, T. Y., & Sabanayagam, C. (2015). Epidemiology of diabetic retinopathy, dia-
betic macular edema and related vision loss. Eye and Vision, 2, 17–17. doi:10.1186/
s40662-015-0026–2
Lin, F. R., Thorpe, R., Gordon-Salant, S., & Ferrucci, L. (2011). Hearing loss prevalence and
risk factors among older adults in the United States. The Journals of Gerontology. Series
A, Biological Sciences and Medical Sciences, 66(5), 582–590. doi:10.1093/gerona/glr002
Lings, S. & Dupont, E. (1992). Driving with Parkinson’s disease: A controlled laboratory
investigation. Acta Neurologica Scandinavica, 86, 33–39.
Mather, M., Jacobsen, L., & Pollard, K. (2015). Aging in the United States. Population
Bulletin, 70(2), 1–17.
244 Human Factors for Automated Vehicles
McLay, P. (1989). The parkinsonian and driving. International Disability Studies, 11(1), 50–51.
Meindorfner, C., Korner, Y., Moller, J. C., Stiasny-Kolster, K., Oertel, W. H., & Kruger, H. P.
(2005). Driving in Parkinson’s disease: Mobility, accidents, and sudden onset of sleep
at the wheel. Movement Disorders, 20(7), 832–842. doi:10.1002/mds.20412
Michon, J. A. (1985). A critical view of driver behavior models: What do we know, what
should we do? In E. L. Evans & R. Schwing (Eds.), Human Behavior and Traffic Safety
(pp. 485–520). New York: Plenum.
Midena, E., Degli Angeli, C., Blarzino, M. C., Valenti, M., & Segato, T. (1997). Macular
function impairment in eyes with early age-related macular degeneration. Investigative
Ophthalmology & Visual Science, 38(2), 469–477.
Mitchell, P., Liew, G., Gopinath, B., & Wong, T. Y. (2018). Age-related macular degeneration.
The Lancet, 392(10153), 1147–1159. doi:10.1016/S0140–6736(18)31550–2
National Eye Institute. (2010, October, 2010). Facts about Refractive Errors. Refractive
Errors. Retrieved from https://nei.nih.gov/health/errors/errors
National Eye Institute. (2013, September, 2015). Facts about Cataract. Retrieved from https://
nei.nih.gov/health/cataract/cataract_facts
National Eye Institute. (2015a, September, 2015). Facts about Diabetic Eye Disease. Retrieved
from https://nei.nih.gov/health/diabetic/retinopathy
National Eye Institute. (2015b, September, 2015). Facts about Glaucoma. Retrieved from
https://nei.nih.gov/health/glaucoma/glaucoma_facts
National Institute of Mental Health. (2017). What Is Autism Spectrum Disorder (ASD)?
Retrieved from www.nimh.nih.gov/health/publications/a-parents-guide-to-autism-
spectrum-disorder/what-is-autism-spectrum-disorder-asd.shtml
Owsley, C. & McGwin, G., Jr. (2008). Driving and age-related macular degeneration. Journal
of Visual Impairment & Blindness, 102(10), 621–635.
Owsley, C., McGwin, G., Jr., Scilley, K., & Kallies, K. (2006). Development of a questionnaire
to assess vision problems under low luminance in age-related maculopathy. Investigative
Ophthalmology & Visual Science, 47(2), 528–535. doi:10.1167/iovs.05–1222
Owsley, C., Stalvey, B. T., Wells, J., & Sloane, M. E. (1999). Older drivers and cataract:
Driving habits and crash risk. Journal of Gerontology: Series A: Biological Sciences &
Medical Sciences, 54(4), M203–M211.
Papa, M., Boccardi, V., Prestano, R., Angellotti, E., Desiderio, M., Marano, L., … Paolisso,
G. (2014). Comorbidities and crash involvement among younger and older drivers.
PLoS One, 9(4), e94564. doi:10.1371/journal.pone.0094564
Papadopoulos, M., Loh, A., & Fenerty, C. (2015). Secondary glaucoma: Glaucoma associated
with acquired conditions. The Ophtalmic New and Education Network, Diease Reviews.
Parkinson’s Foundation. (2018). Retrieved from www.parkinson.org/
Pascolini, D. & Mariotti, S. P. (2012). Global estimates of visual impairment: 2010. British
Journal of Ophthalmology, 96(5), 614–618. doi:10.1136/bjophthalmol–2011–300539
Pennington, K. L. & DeAngelis, M. M. (2016). Epidemiology of age-related macular degen-
eration (AMD): Associations with cardiovascular disease phenotypes and lipid factors.
Eye and Vision, 3, 34–34. doi:10.1186/s40662-016-0063–5
Reimer, B., Fried, R., Mehler, B., Joshi, G., Bolfek, A., Godfrey, K. M., … Biederman, J. (2013).
Brief report: Examining driving behavior in young adults with high functioning autism
spectrum disorders: A pilot study using a driving simulation paradigm. Journal of Autism
and Developmental Disorders, 43, 2211–2217. doi:10.1007/s10803-013-1764-4
Riggeal, B. D., Crucian, G. P., Seignoural, P. S., Jacobson, C. E., Okun, M. S., Rodriguez, R.
L., & Fernandez, H. H. (2007). Cognitive decline tracks motor progression and not disease
duration in Parkinson disease. Neuropsychiatric Disease and Treatment, 3(6), 955–958.
Ross, L. A., Dodson, J. E., Edwards, J. D., Ackerman, M. L., & Ball, K. (2012). Self-rated
driving and driving safety in older adults. Accident Analysis Prevention, 48, 523–527.
doi:10.1016/j.aap.2012.02.015
Driver Capabilities: Resumption of Control 245
Sadana, R., Blas, E., Budhwani, S., Koller, T., & Paraje, G. (2016). Healthy ageing: Raising
awareness of inequalities, determinants, and what could be done to improve health
equity. The Gerontologist, 56(Suppl 2), S178–S193. doi:10.1093/geront/gnw034
See, J. L. S., Aquino, M. C. D., Aduan, J., & Chew, P. T. K. (2011). Management of angle
closure glaucoma. Indian Journal of Ophthalmology, 59(Suppl 1), S82–S87.
doi:10.4103/0301–4738.73690
Sheppard, E., Ropar, D., Underwood, G., & Van Loon, E. (2010). Brief report: Driving hazard
perception in autism. Journal of Autism and Developmental Disorders, 40, 504–508.
doi:10.1007/s10803-009-0890–5
Sheppard, E., Van Loon, E., Underwood, G., & Ropar, D. (2017). Attentional differences in
a driving hazard perception task in adults with autism spectrum disorders. Journal of
Autism and Developmental Disorders, 47, 405–414. doi:10.1007/s10803-016-2965–4
Shrestha, G. S. & Kaiti, R. (2014). Visual functions and disability in diabetic retinopathy
patients. Journal of Optometry, 7(1), 37–43. doi:10.1016/j.optom.2013.03.003
Singh, R., Pentland, B., Hunter, J., & Provan, F. (2007). Parkinson’s disease and driv-
ing ability. Journal of Neurology, Neurosurgery, and Psychiatry, 78(4), 363–366.
doi:jnnp.2006.103440 [pii] 10.1136/jnnp.2006.103440
Society of Automotive Engineers International. (2016). Taxonomy and Definitions for Terms
Related to On-Road Motor Vehicle Automated Driving Systems (J3016_201401).
Steinkuller, P. (2010). Legal vision requirements for drivers in the United States. Virtual
Mentor, 12, 938–940.
Szlyk, J. P., Mahler, C. L., Seiple, W., Edward, D. P., & Wilensky, J. T. (2005). Driving per-
formance of glaucoma patients correlates with peripheral visual field loss. Journal of
Glaucoma, 14(2), 145–150.
Szlyk, J. P., Mahler, C. L., Seiple, W., Vajaranant, T. S., Blair, N. P., & Shahidi, M. (2004).
Relationship of retinal structural and clinical vision parameters to driving performance
of diabetic retinopathy patients. Journal of Rehabilitation Research and Development,
41(3a), 347–358.
Szlyk, J. P., Pizzimenti, C. E., Fishman, G. A., Kelsch, R., Wetzel, L. C., Kagan, S., & Ho, K.
(1995). A comparison of driving in older subjects with and without age-related macular
degeneration. Archives of Ophthalmology, 113(8), 1033–1040.
Tanabe, S., Yuki, K., Ozeki, N., Shiba, D., Abe, T., Kouyama, K., & Tsubota, K. (2011).
The association between primary open-angle glaucoma and motor vehicle colli-
sions. Investigative Ophthalmology & Visual Science, 52(7), 4177–4181. doi:10.1167/
iovs.10–6264
Tham, Y.-C., Li, X., Wong, T. Y., Quigley, H. A., Aung, T., & Cheng, C.-Y. (2014). Global prev-
alence of glaucoma and projections of glaucoma burden through 2040. Ophthalmology,
121(11), 2081–2090. doi:10.1016/j.ophtha.2014.05.013
Trosterer, S., Gartner, M., Mirnig, A., Meschtscherjakov, A., McCall, R., Louveton, N., …
Engel, T. (2016). You Never forget how to drive: Driver skilling and deskilling in
the advent of autonomous vehicles. Paper Presented at the Proceedings of the 8th
International Conference on Automotive User Interfaces and Interactive Vehicular
Applications, Ann Arbor, MI. http://delivery.acm.org/10.1145/3010000/3005462/p209-
trosterer.pdf?ip=129.100.55.89&id=3005462&acc=ACTIVE%20SERVICE&key=FD0
067F557510FFB%2E26B1F5E6B598D80D%2E4D4702B0C3E38B35%2E4D4702B0
C3E38B35&__acm__=1541780895_6dd6dd198cde137adcc00eb8fd121e1d
Uc, E. Y., Rizzo, M., Anderson, S. W., Qian, S., Rodnitzky, R. L., & Dawson, J. D. (2005).
Visual dysfunction in Parkinson disease without dementia. Neurology, 65(12),
1907–1913.
Uc, E. Y., Rizzo, M., Anderson, S. W., Sparks, J., Rodnitzky, R. L., & Dawson, J. D. (2006a).
Impaired visual search in drivers with Parkinson’s disease. Annals of Neurology, 60(4),
407–413.
246 Human Factors for Automated Vehicles
Uc, E. Y., Rizzo, M., Anderson, S. W., Sparks, J. D., Rodnitzky, R. L., & Dawson, J.
D. (2006b). Driving with distraction in Parkinson disease. Neurology, 67(10),
1774–1780.
Uc, E. Y., Rizzo, M., Johnson, A. M., Dastrup, E., Anderson, S. W., & Dawson, J. D.
(2009). Road safety in drivers with Parkinson disease. Neurology, 73(24), 2112–2119.
doi:10.1212/WNL.0b013e3181c67b77
Uitti, R. J. (2009). Parkinson’s disease and issues related to driving. Parkinsonism and
Related Disorders, 15(3), A122–125.
Vernon, S. A., Bhagey, J., Boraik, M., & El-Defrawy, H. (2009). Long-term review of driv-
ing potential following bilateral panretinal photocoagulation for proliferative diabetic
retinopathy. Diabetic Medicine, 26(1), 97–99. doi:10.1111/j.1464–5491.2008.02623.x
Vieluf, S., Godde, B., Reuter, E. M., Temprado, J. J., & Voelcker-Rehage, C. (2015). Practice
effects in bimanual force control: Does age matter? Journal of Motor Behavior, 47(1),
57–72. doi:10.1080/00222895.2014.981499
Visser, S. N., Bitsko, R. H., Danielson, M. L., Perou, R., & Blumberg, S. J. (2010). Increasing
prevalence of parent-reported attention-deficit/hyperactivity disorder among children -
United States, 2003 and 2007. Morbidity & Mortality Weekly Report, 59(44), 1439–1443.
Willis, J. R., Jefferys, J. L., Vitale, S., & Ramulu, P. Y. (2012). Visual impairment, uncor-
rected refractive error, and accelerometer-defined physical activity in the United States.
Archives of Ophthalmology, 130(3), 329–335. doi:10.1001/archopthalmol.2011.1773
Wood, J. M., Black, A. A., Mallon, K., Thomas, R., & Owsley, C. (2016). Glaucoma and driv-
ing: On-road driving characteristics. PLoS One, 11(7), e0158318-e0158318. doi:10.1371/
journal.pone.0158318
Wood, J. M. & Carberry, T. P. (2004). Older drivers and cataracts: Measures of driving per-
formance before and after cataract surgery. Transportation Research Record, 1865,
7–13.
World Health Organization. (2001). International Classification of Functioning, Disability
and Health. Geneva: World Health Organization.
World Health Organization. (2013). Universal Eye Health: A Global Action Plan 2014–
2019. Retrieved from Geneva, Switzerland. Retrieved from www.who.int/blindness/
AP2014_19_English.pdf?ua=1
World Health Organization. (2018). Blindness and Vision Impairment. Retrieved from www.
who.int/en/news-room/fact-sheets/detail/blindness-and-visual-impairment
Worringham, C. J., Wood, J. M., Kerr, G. K., & Silburn, P. A. (2006). Predictors of driv-
ing assessment outcome in Parkinson’s disease. Movement Disorders, 21(2), 230–235.
doi:10.1002/mds.20709
11 Driver State Monitoring for Decreased Fitness to Drive
Michael G. Lenné, Trey Roady, and Jonny Kuo
Seeing Machines Ltd.
CONTENTS
Key Points .............................................................................................................. 247
11.1 Introduction–Why Driver State Monitoring Is Being Widely Promoted
as a Vehicle Safety Technology ....................................................................248
11.2 Current Approaches to DSM......................................................................... 249
11.2.1 Distraction and Engagement ............................................................. 250
11.2.2 Drowsiness ........................................................................................ 254
11.2.3 Automation-Driven Changes to Driver State.................................... 255
11.3 Future Directions and Applications in DSM................................................. 256
11.4 Human Factors and Safety Considerations ................................................... 257
11.4.1 Identifying Emerging Risks.............................................................. 258
11.4.2 Interfacing with the Driver ............................................................... 258
11.5 Conclusion .................................................................................................... 259
References .............................................................................................................. 260
KEY POINTS
• Governments and the automotive industry have come to recognize the need
for DSM as a central safety measure in the next wave of advanced driver
assistance systems (ADAS) technologies.
• There are a number of different underlying measurement approaches, such
as vehicle-based measures; however, most research and industry applica-
tions now pursue camera-based approaches focusing on eye, head, and
facial features.
• Driver drowsiness and distraction are two of the most studied driver states,
with some algorithms now published in the research literature.
• There are a number of human factors issues to work through, including
ensuring insights from the field inform development of future solutions and
ensuring that vehicle technologies interface with the driver in ways that
support safe performance.
They perhaps have most relevance as part of a composite monitoring system that
includes physiological measures. However, vehicle control measures are the most
established method of drowsiness monitoring fitted by automotive manufacturers as
original equipment. (p. 415)
While driver monitoring can potentially be done using different sensor inputs
(e.g., Lenné & Jacobs, 2016), there is a consensus that camera-based driver moni-
toring is the best approach. This is particularly true for automated vehicles where
traditional vehicle input measures are likely unavailable and vehicle metrics take
on new meaning. Notwithstanding, even when vehicle-based metrics are available, there is an argument that these are more diagnostic of the consequences of a driver state than of the impairment per se. For example, identifying that a lane departure has occurred is valuable from a safety viewpoint but provides limited insight into the driver state that may have led to that safety event.
The following section discusses some measures and measurement approaches
that are available in the literature for two of the more prominent driver states that
will emerge during automated driving—distraction and drowsiness.
TABLE 11.1
Engagement States and Types of Distraction

State                                    Manually   Visually   Cognitively
                                         Engaged    Engaged    Engaged
Fully Attentive                          Yes        Yes        Yes
“On the loop” (Merat et al., 2018)       No         Yes        Yes
Divided attention (Parasuraman, 2000)    Optional   No         Yes
“Out of the Loop” (Merat et al., 2018)   Optional   Yes        No
Completely disengaged                    Optional   No         No
Drivers are fully attentive when they direct sufficient effort to drive effectively and are completely disengaged when they fail to direct substantive attention to the task. Divided attention is the phenomenon in which attention is directed to two tasks simultaneously, resulting in visual time sharing (VTS). “On the loop” is a state of active awareness without direct mechanical control, which may result in slightly slowed takeover because motor skills must adapt to the task.
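Read as a decision rule, Table 11.1 maps three binary engagement channels onto five states. A minimal sketch in Python (the function and state labels are illustrative; “Optional” channels are simply ignored):

```python
def engagement_state(manual: bool, visual: bool, cognitive: bool) -> str:
    """Classify driver engagement per Table 11.1.

    Manual engagement only separates "fully attentive" from "on the
    loop"; for the remaining states it is marked Optional and ignored.
    """
    if visual and cognitive:
        return "fully attentive" if manual else "on the loop"
    if cognitive:                       # cognitive without visual
        return "divided attention"      # visual time sharing (VTS)
    if visual:                          # visual without cognitive
        return "out of the loop"
    return "completely disengaged"      # neither channel engaged
```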
For over a decade, human factors research has measured visual distraction with
eye behaviors in laboratory and field studies. Glance metrics are typically used to assess the impact of different in-vehicle display designs and of mobile phone use on driver distraction. Due to this widespread acceptance, many research groups have
developed statistical models or algorithms to assess or classify driver state in a more
sophisticated manner. Many gaze algorithms classify disengagement and represent
variations and modifications of several distinct concepts:
TABLE 11.2
Gaze Algorithm Feature Classification

Feature concepts: #1 Single Off-Road Glance; #2 Single Attentive Glance;
#3 Multi-Glance; #4 Overfocusing; #5 Speed Cutoffs; #6 Risk Regions;
#7 Head-Tracking; #8 Yaw Center

AttenD: Kircher and Ahlström (2009)      x x x x
AttenD: Seppelt et al. (2017)            x x x x
Multi-Distraction: Victor (2010)         x x x x
Multi-Distraction: Lee et al. (2013)     x x x x x x x
In the original AttenD algorithm (Kircher & Ahlström, 2009), glances to the forward roadway replenish an attention buffer up to a maximum of 2 seconds (after a 0.1 second latency), whereas glances to the rearview mirror and speedometer of less than 1 second are buffer-neutral, and glances elsewhere decrement the attention buffer at 1 unit per second.
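This buffer logic lends itself to a compact simulation. A minimal sketch, assuming a fixed 0.1 s time step and a simplified glance log; the region names and the handling of long mirror glances are illustrative choices rather than the published specification:

```python
def attend_buffer(glances, dt=0.1, buffer_max=2.0, latency=0.1):
    """Run a simplified AttenD-style attention buffer over a glance log.

    `glances` is a list of (region, duration_s) tuples. Forward-road
    glances refill the buffer at 1 unit/s once the latency has elapsed;
    mirror/speedometer glances under 1 s are buffer-neutral; all other
    glances drain the buffer at 1 unit/s. An empty buffer flags the
    driver as distracted.
    """
    buf, trace = buffer_max, []
    for region, duration in glances:
        for i in range(int(round(duration / dt))):
            if region == "road":
                if i * dt >= latency:              # refill begins after latency
                    buf = min(buffer_max, buf + dt)
            elif region in ("mirror", "speedometer") and duration < 1.0:
                pass                               # short check glances are neutral
            else:
                buf = max(0.0, buf - dt)           # decrement at 1 unit/s
        trace.append((region, round(buf, 2), buf <= 0.0))
    return trace
```

For example, `attend_buffer([("road", 5.0), ("phone", 2.5)])` ends with an empty buffer, flagging distraction after the 2.5 s off-road glance.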
Seppelt et al. (2017) made modifications to AttenD, and though the specifics
remain proprietary, three updates are noted: changes to the increment rate of atten-
tive glances (#2, Table 11.2), changes to latency delay effects prioritizing off-road
glance region durations, and addition of an Overfocus component to detect cognitive
distraction (#4, Table 11.2). Most notably, this suggests the importance of recogni-
tion of cognitive distraction, and that the time required to form a reliable model of
the roadway is not 2 seconds (though it remains unclear whether it is more or less).
Victor (2010, as cited in Lee et al., 2013) developed the Multi-Distraction algo-
rithm to identify both visual and cognitive distraction in real time through measure-
ment of Percent Road Centre (PRC; the proportion of time focusing on a 10° radius
circle, centered on the driver’s most frequent point of gaze on the forward roadway).
At speeds above 50 km/h, three major time windows are considered: a long single
glance, a shorter PRC below 60%, and a longer PRC exceeding 92% (Figure 11.1).
An additional PRC window is implemented to capture VTS, where drivers are classi-
fied distracted if PRC decreases to below 65% and subsequently increases above 75%
within a 4 second window. This approach does not account for different off-road
regions or differentiate between driving-related and non-driving-related glances.
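As described, the Multi-Distraction approach reduces to PRC computations over several time windows plus threshold checks. A minimal sketch, assuming gaze samples expressed as angular offsets from the road-center point (an illustrative input format; the VTS dip-and-recover check is omitted for brevity):

```python
import numpy as np

def percent_road_centre(gaze_offsets_deg, radius_deg=10.0):
    """PRC: proportion of gaze samples falling inside a circle of the
    given angular radius around the driver's modal forward gaze point.
    `gaze_offsets_deg` is an (N, 2) array of (yaw, pitch) offsets."""
    dist = np.linalg.norm(np.asarray(gaze_offsets_deg, dtype=float), axis=1)
    return float(np.mean(dist <= radius_deg))

def victor_flags(prc_short, prc_long, speed_kmh):
    """Threshold checks from the description above, as a sketch only:
    visual distraction when the shorter-window PRC drops below 60%,
    cognitive distraction ("overfocusing") when the longer-window PRC
    exceeds 92%, evaluated only above 50 km/h."""
    if speed_kmh <= 50.0:
        return {"visual": False, "cognitive": False}
    return {"visual": prc_short < 0.60, "cognitive": prc_long > 0.92}
```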
Building on the original Multi-Distraction, Lee et al. (2013) made four modifica-
tions in developing their implementation: (1) if sensor quality degrades, tracking
shifts from considering gaze, to head position, to posture; (2) the cognitive distrac-
tion PRC threshold is decreased to 83%; (3) speed thresholding is implemented to
limit tracking to speeds over 47 km/h, with imposed hysteresis to prevent rapid
switching across a single value; and (4) the center cone shifts based on vehicle yaw.
Lee et al. (2013) concluded that the modified Multi-Distraction algorithm demon-
strated a comparative advantage (True Positive: 90+% vs. 70%–90%; False Positive:
40%–60% vs. 10%–30%). However, this assessment predated Seppelt et al.’s (2017)
modifications. Also noteworthy are the six years of improvement in hardware and
eye tracking that have occurred since publication.
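Two of Lee et al.’s modifications are easy to illustrate in isolation: the sensor-quality fallback chain and the speed hysteresis. A minimal sketch; the hysteresis band of roughly ±2 km/h around the 47 km/h threshold is an assumed value for illustration:

```python
def tracking_source(gaze_ok: bool, head_ok: bool) -> str:
    """Fallback chain as sensor quality degrades: gaze, then head
    position, then posture."""
    if gaze_ok:
        return "gaze"
    return "head" if head_ok else "posture"

class SpeedGate:
    """Activation near 47 km/h with hysteresis, so classification does
    not rapidly toggle as speed hovers around a single value."""
    def __init__(self, on_kmh=49.0, off_kmh=45.0):
        self.on_kmh, self.off_kmh, self.active = on_kmh, off_kmh, False

    def update(self, speed_kmh: float) -> bool:
        if not self.active and speed_kmh >= self.on_kmh:
            self.active = True                  # engage above upper bound
        elif self.active and speed_kmh <= self.off_kmh:
            self.active = False                 # disengage below lower bound
        return self.active
```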
11.2.2 Drowsiness
Alongside distraction, driver drowsiness remains a significant contributing factor to road crashes worldwide. The National Highway Traffic Safety Administration (2008) estimated that there are 56,000 crashes each year in which drowsiness or fatigue was cited by police as a causal factor. These crashes lead to, on average, 40,000 non-fatal injuries and 1,550 fatalities per year.
driving remains a nascent field of research at the time of writing. However, drivers’
tendency toward disengagement and “out-of-the-loop” states under automated driv-
ing, as previously discussed, would suggest the continued prevalence of and critical
need to examine this form of impairment (see also, this Handbook, Chapter 9).
While electroencephalography (EEG) is often regarded as the gold standard for
objectively quantifying discrete stages of sleep, existing research on its applicability
to the transitional stage of drowsiness has yet to yield conclusive results. Significant
variability exists in both how participants are observed as well as which candidate
signals are proposed for analysis (Ahlström, Jansson, & Anund, 2017; Hung et al.,
2013; Perrier et al., 2016). In contrast, greater consistency has been reported from
research on ocular measures of drowsiness (e.g. Barua, Ahmed, Ahlström, & Begum,
2019). In practical terms, there is the additional advantage of using an unobtrusive,
camera-based system for measuring ocular metrics (a method that, as of yet, has not
been developed for measuring neural activity).
With respect to ocular measures, PERCLOS continues to be used as a drowsiness
indicator (Jackson et al., 2016; McDonald, Lee, Schwarz, & Brown, 2014), though
implementation and statistical approaches differ. PERCLOS is a measure of the pro-
portion of time that the eyes are closed, or nearly closed, over a given period of time,
typically between 1 and 20 minutes. The accepted use of the term PERCLOS refers to
the proportion of time eyes are more than 80% closed (i.e., based on the degree of eye-
lid closure; Wierwille, Ellsworth, Wreggit, Fairbanks, & Kirn, 1994), although how
this is determined does vary. Early studies used PERCLOS to assess drowsiness as
established by performance on the Psychomotor Vigilance Task (PVT) and reported
greater coherence in PERCLOS when longer time windows were used. The research
by Dinges, Mallis, Maislin, and Powell (1998) found that PERCLOS was a more reli-
able indicator of drowsiness when considered minute-by-minute over a 20-minute
window compared with shorter durations of 1 or 2 minutes, hence recommendations
were made to use a 20-minute window. In these studies, PERCLOS is calculated
through examination of video by manual annotators who make assessments on the
degree of eye closure compared with a set of reference images. As discussed below, more recent studies have examined PERCLOS in driving studies predominantly conducted using driving simulators, for example, using PERCLOS to establish a level of drowsiness in a given sample or condition over a period of time.
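In code, PERCLOS is a moving average of a thresholded eyelid-closure signal. A minimal sketch, assuming a per-frame closure fraction as input (the sampling format and helper name are illustrative):

```python
import numpy as np

def perclos(closure, hz, window_min=20.0, threshold=0.8):
    """Proportion of time the eyes are more than 80% closed, computed
    over a sliding window (20 minutes in the early recommendations).

    `closure` is a sequence of per-frame eyelid-closure fractions in
    [0, 1] sampled at `hz` Hz; returns one PERCLOS value per window
    position.
    """
    closed = (np.asarray(closure, dtype=float) > threshold).astype(float)
    n = int(window_min * 60 * hz)              # samples per window
    if closed.size < n:
        raise ValueError("signal shorter than one PERCLOS window")
    return np.convolve(closed, np.ones(n) / n, mode="valid")
```

Shrinking `window_min` toward the 1- to 2-minute values used in event-linked studies reproduces the trade-off discussed next: better temporal specificity, but noisier estimates.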
In recent studies, there is greater interest in the use of real-time assessments of drowsiness that can be linked with much more specific safety-related events, such as lane departure events, steering movements, microsleeps, and other eye and facial metrics.
By necessity, the 20-minute PERCLOS time window is reduced in these studies
to increase the confidence that the potential drowsiness level determined through
PERCLOS applies to a time at which the driving behavior of interest occurred.
When this approach is taken, the performance of PERCLOS is not as good as found
with longer time windows. For example, McDonald et al. (2014), in a driving simulator study, used a 2-minute window and found poorer performance of PERCLOS compared with both previous research and a novel steering-based algorithm for the detection of drowsiness associated with lane departure events.
provided for the poor PERCLOS performance is that the 2-minute PERCLOS time
window is indeed too short to accurately detect acute drowsiness associated with
lane excursion events.
Related to PERCLOS, blink duration is another metric that has received consider-
able attention in drowsy driving research. In a study of real-world driving, Hallvig,
Anund, Fors, Kecklund, and Akerstedt (2014) reported intra-individual mean blink
duration to be a significant predictor of unintentional lane departures. The importance
of warning latency cannot be overlooked in an operational real-world system and, in
comparison to existing implementations of PERCLOS, measures of blink duration are
generally able to perform closer to real time. An extension to this concept is the idea of
a pre-drive assessment of fitness to drive. Using a camera-based driver monitoring sys-
tem, Mulhall et al. (2018) demonstrated the significant predictive ability of mean blink
duration measured before a drive on subsequent, real-world lane departure events.
As a caveat to the studies described above, it is important to note the critical
distinction between DSM versus driver eye tracking. While the aforementioned met-
rics show significant promise in the real-time measurement of driver drowsiness,
unidimensional ocular features form but one component of the multi-dimensional
problem space that is driver state. A simple example is the contrast between a drowsy driver who, with the vehicle safely stopped at a red light, voluntarily rests his or her eyes and a drowsy driver who suffers a microsleep event while the vehicle is in motion.
These issues are further confounded by interactions between driver states, where
drivers have been shown to have an increased propensity toward distracting behav-
ior following sleep deprivation (Anderson & Horne, 2013; Kuo et al., 2019). DSM
systems based solely on canonical eye tracking metrics such as eyes-off-road time or
eye closure are unlikely to provide the insights necessary to infer high-level driver
state. This is especially pertinent in the context of autonomous driving, where there is a fundamental shift in both the nature of the driving task and the objective risks of behaviors typically associated with safety-critical events.
11.2.3 Automation-Driven Changes to Driver State
As vehicles take over more of the driving task, fewer direct behavioral indicators are available to determine drivers’ capability to intervene, and the indicators we do have may change. For instance, automated lane-keeping features change the requirement for drivers to visually monitor their lane position, and consequently change their scanning behavior. As driving shifts from motor control to monitoring, driver visual behavior will change to match the task, widening the visual range that drivers scan (Louw & Merat, 2017). Further, steering is only useful as a measure of driver state when the driver is in control.
While there are vastly differing opinions on the rate at which different levels of
automated driving features will be widely adopted on roads around the world, it is
apparent now that the jump to mass-available autonomous vehicles is not coming all
at once. Rather, it is likely that waves of vehicles with Level 1 to Level 5 automation
functionality will coexist on the road with widespread regional variation. It remains
to be seen if dense regional centers will have rapid autonomy adoption, mirroring
technologies such as cell phones and internet, as some anticipate (Litman, 2019;
Corwin, Jameson, Pankratz, & Willigmann, 2016), but this will likely be deter-
mined by the degree of dependence on infrastructure improvements. For instance,
GM Super Cruise automation has succeeded by implementing geofencing within
regions that have been Light Detection and Ranging (LIDAR)-mapped in advance,
an approach more beneficial for heavily traveled routes.
Currently, the average vehicle on U.S. roads is 12 years old, with 10% of vehicles older than 20 years (Federal Highway Administration, 2017); if this pattern holds, half of the vehicles on the roads of 2030 are already here. This mixed fleet means that vehicles with different levels of automation will have to interact with each other and with human drivers, who must anticipate behavior as old patterns change and are invalidated (see also, this Handbook, Chapter 19). New drivers will eventually lack familiarity with old modes of driving. Just as a 99.99% reliable anti-lock braking system
(ABS) will result in new drivers with no concept of “pumping the brake,” some
drivers may only be truly qualified to “drive” in geo-fenced areas. With increasing
automation, DSM must be able to identify a wide range of driver capabilities and
determine an appropriate level of engagement for the specific automation and driv-
ing context.
11.3 FUTURE DIRECTIONS AND APPLICATIONS IN DSM
Beyond distraction and drowsiness, future DSM applications may include the assessment of driver trust using camera-based technologies. Progress has already been made in that regard (Hergeth, Lorenz, Vilimek, & Krems, 2016).
Drivers experiencing medical distress will be identifiable by their vehicles.
Current detection of driver drowsiness and attention needs to expand to handle
cases of unresponsive drivers, such as those experiencing cardiac arrest. Modeling
advances may identify conditions such as obstructive sleep apnea, which is report-
edly undiagnosed in 80% of cases, is more frequent in commercial fleets, and can
cause excessive driver sleepiness (Bonsignore, 2017).
Emotionality is also of interest. While emotions do not cause action, they do
motivate pursuit and avoidance of certain actions and increase or decrease risk aver-
sion (Baumeister, Vohs, DeWall, & Zhang, 2007). Strong negative emotions (e.g.,
anger, sadness, frustration, etc.) require effort to manage and process, a form of
cognitive distraction that can impact driving performance (Jeon, 2016).
Emotion recognition could also mitigate and model driver risk on an individual
and aggregate level (e.g., identifying problem features within a city or vehicle fleet),
help build more effective coping countermeasures, and even provide useful data
fusion with existing gaze classification algorithms.
Emotion recognition draws on a variety of approaches: facial expression detection,
voice analysis, brain scanning, physiological measures, body posture, and combina-
tions thereof. Facial recognition is widely explored, with many applications of the
Facial Action Coding System (FACS), which maps small movements in the face to
Ekman et al.’s (1987) “six universal emotions” (disgust, sadness, happiness, fear,
anger, and surprise). Voice is effective for interpersonal communication but is less
relevant in driving. Brain scanning and physiological measures yield rich information about driver experience, but interpreting the importance of any specific event requires a mature understanding of context as well as skillful and timely processing. Further, many powerful measures are prohibitively invasive, making them useful only in research settings. Most of these methods are interesting alone, but the real value lies in aggregation. For instance, body posture’s strongest benefits
occur when coupled with facial recognition (Aviezer, Trope, & Todorov, 2012). For
a detailed methods review, we recommend Calvo & D’Mello (2010), Zeng, Pantic,
Roisman, and Huang (2009), and Mauss and Robinson (2009).
Emotion tracking has its challenges. First, Ekman’s six universal emotions vary
in relevance to driving (Jeon & Walker, 2011) (e.g., disgust versus anger), and the
claimed universality may be less reliable across cultures (Gendron, Roberson, van
der Vyver, & Barrett, 2014; Russell, 1994). Further, most recognition algorithms are trained on posed emotions in ideal conditions, not the messily framed, naturally occurring emotions that matter most in practice. Finally, to find social acceptance, emotion recognition must navigate difficult personal boundaries between observing others’ feelings and deciding when it is appropriate to mention them.
11.4 HUMAN FACTORS AND SAFETY CONSIDERATIONS
Current DSM technology focuses on measuring distraction and drowsiness. These driver states can be defined and operationalized in different ways, and the implications for effectiveness are of critical concern. By necessity, there is a tension between OEMs, who must account for cost and driver experience, and regulators, who must ensure public safety, with both priorities meriting consideration.
For example, a smart vehicle assistant could interact with the driver after a warning, assess their drowsiness, guide them to an appropriate resting place, and ensure that an effective mitigation strategy is adopted. This function would also minimize false positives, directly improving driver acceptance, a critical issue given that even lifesaving systems, such as lane departure warnings, may be deactivated by drivers due to unacceptable annoyance levels.
Similar findings have also been reported in the context of driver distraction.
Kujala, Karvonen, and Mäkelä (2016) tested the efficacy of a proactive smart phone-
based distraction warning system that adjusted warning thresholds according to the
expected visual demands of an upcoming driving situation, feeding information
back to the driver in real time. The system detected visually high-demanding driv-
ing scenarios in which visual distraction would be particularly dangerous (based on
factors such as the experience level of the driver and the proximity of intersections
and pedestrian crossings ahead) and aimed to identify when the driver was looking
at the phone.
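The core idea, scaling the tolerated off-road glance time by the anticipated demand of the upcoming road segment, can be sketched in a few lines; the factors and multipliers below are illustrative assumptions, not the tuning reported by Kujala et al. (2016):

```python
def offroad_glance_threshold(base_s=2.0, novice=False,
                             near_intersection=False, near_crossing=False):
    """Context-sensitive distraction threshold: the tolerated off-road
    glance time shrinks as the expected visual demand of the upcoming
    driving situation grows."""
    threshold = base_s
    if novice:
        threshold *= 0.75      # less spare capacity for inexperienced drivers
    if near_intersection:
        threshold *= 0.5       # high-demand situation ahead
    if near_crossing:
        threshold *= 0.5
    return threshold
```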
The closed-track study showed that the distraction warning system significantly increased glance time on the road while drivers multi-tasked with the mobile phone.
However, no effects on individual in-car glance durations were evident, although,
as noted by the authors, this may have reflected limitations associated with the gaze
tracking system that was used. These results further highlight the codependency
between accurate DSM and HMI in developing an effective system.
It seems inevitable that the future vehicle will modify its performance and/
or take over control when it senses driver impairment. If a driver is drowsy, the
vehicle could take many measures, such as alerting the driver and providing a time
window to find a rest stop before disabling the engine; limiting speed; increasing
the sensitivity of lane departure and other ADAS systems; communicating with
nearby vehicles to coordinate passing and following distances; enabling autono-
mous driving; engaging the driver in conversation; guiding them to a rest stop;
and so on.
11.5 CONCLUSION
Many regions of the world are striving toward the Vision Zero philosophy first adopted by the Swedish Parliament in 1997. Technologies such as driver monitoring are acknowledged in European Commission documents to hold promise for reducing road injury. The design and implementation of these technologies will determine whether their injury-reduction benefits are minimal or maximal.
There is much excitement, energy, and skepticism surrounding the autonomous future, and separating reality from fiction is difficult. What the introduction of autonomous vehicles has done is bring camera-based DSM into the vehicle to maintain driver safety when it operates in autonomous mode. Automation notwithstanding, this affords future benefits in non-automated driving by addressing longstanding risks of distraction and drowsiness. The issues noted herein must be carefully considered, however, if DSM technology is to achieve maximum benefits in reducing road injury.
REFERENCES
Ahlström, C., Jansson, S., & Anund, A. (2017). Local changes in wake electroencephalogram precedes lane departures. Journal of Sleep Research, 26(6), 816–819.
Anderson, C. & Horne, J. (2013). Driving drowsy also worsens driver distraction. Sleep Medicine, 14(5), 466–468.
Ahlström, C., Kircher, K., & Kircher, A. (2009). Considerations when calculating percent road
centre from eye movement data in driver distraction monitoring. Fifth International
Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle
Design (pp. 132–139). Iowa City, IA: University of Iowa.
Aviezer, H., Trope, Y., & Todorov, A. (2012). Body cues, not facial expressions, discriminate
between intense positive and negative emotions. Science, 338(6111), 1225. doi:10.1126/
science.1224313
Barua, S., Ahmed, M. U., Ahlström, C., & Begum, S. (2019). Automatic driver sleepi-
ness detection using EEG, EOG, and contextual information. Expert Systems with
Applications, 115, 121–135.
Baumeister, R. F., Vohs, K. D., DeWall, N. C., & Zhang, L. (2007). How emotion shapes behav-
ior: Feedback, anticipation, and reflection, rather than direct causation. Personality and
Social Psychology Review, 11(2), 167–203. doi:10.1177/1088868307301033
Bonsignore, M. (2017). Sleep apnea and its role in transportation safety. F1000 Research, 6, 2168.
Calvo, R. A. & D’Mello, S. (2010). Affect detection: An interdisciplinary review of mod-
els, methods, and their applications. IEEE Transactions on Affective Computing, 1(1),
18–37. doi:10.1109/T-AFFC.2010.1
Cicchino, J. (2018). Effects of lane departure warning on police-reported crash rates. Journal
of Safety Research, 66, 61–70.
Corwin, S., Jameson, N., Pankratz, D., & Willigmann, P. (2016). The Future of Mobility:
What’s Next? Tomorrow’s Mobility Ecosystem – and How to Succeed in It. New York:
Deloitte University Press.
Dinges, D. F., Mallis, M. M., Maislin, G., & Powell, J. W. (1998). Evaluation of Techniques
for Ocular Measurement as an Index of Fatigue and as the Basis for Alertness
Management (DOT-HS-808-762). Washington, DC: National Highway Traffic Safety
Administration.
Ekman, P., Friesen, W. V., O’Sullivan, M., Chan, A., Diacoyanni-Tarlatzis, I., Heider, K., &
Tzavaras, A. (1987). Universals and cultural differences in the judgments of facial
expressions of emotion. Journal of Personality and Social Psychology, 53(4), 712–717.
doi:10.1037/0022-3514.53.4.712
Euro NCAP. (2017). Euro NCAP 2025 Roadmap: In Pursuit of Vision Zero. Brussels, Belgium.
Retrieved 30 April, from www.euroncap.com/en/for-engineers/technical-papers/
Favarò, F. M., Nader, N., Eurich, S. O., Tripp, M., & Varadaraju, N. (2017). Examining
accident reports involving autonomous vehicles in California. PLoS ONE, 12(9).
doi:10.1371/journal.pone.0184952
Federal Highway Administration. (2017). National Household Travel Survey. Washington,
DC: FHWA.
Fitzharris, M., Liu, S., Stephens, A. N., & Lenné, M. G. (2017). The relative importance of
real-time in-cab and external feedback in managing fatigue in real-world commercial
transport operations. Traffic Injury Prevention, 81(S1), S71–78.
Gendron, M., Roberson, D., van der Vyver, J. M., & Barrett, L. F. (2014). Cultural relativ-
ity in perceiving emotion from vocalizations. Psychological Science, 25(4), 911–920.
doi:10.1177/0956797613517239
Hallvig, D., Anund, A., Fors, C., Kecklund, G., & Akerstedt, T. (2014). Real driving at night –
Predicting lane departures from physiological and subjective sleepiness. Biological
Psychology, 191, 18–23.
Driver State Monitoring 261
Hergeth, S., Lorenz, L., Vilimek, R., & Krems, J. F. (2016). Keep your scanners peeled: Gaze
behavior as a measure of automation trust during highly automated driving. Human
Factors, 58, 509–519. doi:10.1177/0018720815625744
Hung, C. S., Sarasso, S., Ferrarelli, F., Riedner, B., Ghilardi, F., Cirelli, C., & Tononi, G.
(2013). Local experience-dependent changes in the wake EEG after prolonged wake-
fulness. Sleep, 36(1), 59–72.
Hynd, D., McCarthy, M., Carroll, J., Seidl, M., Edwards, M., Visvikis, C., … Stevens, A.
(2015). Benefit and Feasibility of a Range of New Technologies and Unregulated
Measures in the Fields of Vehicle Occupant Safety and Protection of Vulnerable
Road Users: Final Report. Report Prepared for the European Commission.
Retrieved 30 April 2019, from https://publications.europa.eu/en/publication-detail/-/
publication/47beb77e-b33e-44c8-b5ed-505acd6e76c0
ISO 15007-1. (2014). Road Vehicles — Measurement of Driver Visual Behaviour with Respect
to Transport Information and Control Systems — Part 1: Definitions and Parameters.
Retrieved November 19, 2018, from www.iso.org/obp/ui/#iso:std:iso:15007:-1:ed-2:v1:en
Jackson, M. L., Kennedy, G. A., Clarke, C., Gullo, M., Swann, P., Downey, L. A., … Howard,
M. E. (2016). The utility of automated measures of ocular metrics for detecting driver
drowsiness during extended wakefulness. Accident Analysis & Prevention, 87, 127–
133. doi:10.1016/j.aap.2015.11.033
Jeon, M. (2016). Don’t cry while you’re driving: Sad driving is as bad as angry driving.
International Journal of Human–Computer Interaction, 32(10), 777–790. doi:10.1080
/10447318.2016.1198524
Jeon, M. & Walker, B. N. (2011). What to detect? Analyzing factor structures of affect
in driving contexts for an emotion detection and regulation system. Proceedings
of the Human Factors and Ergonomics Society Annual Meeting, 55, 1889–1893.
doi:10.1177/1071181311551393
Kircher, K. & Ahlström, C. (2009). Issues related to the driver distraction detection algo-
rithm AttenD. First International Conference on Driver Distraction and Inattention.
Gothenburg, Sweden: Chalmers.
Kircher, K., Ahlström, C., & Kircher, A. (2009). Comparison of two eye-gaze based real-
time driver distraction detection algorithms in a small-scale field operational test. Fifth
International Driving Symposium on Human Factors in Driver Assessment, Training
and Vehicle Design (pp. 16–23).
Kujala, T., Karvonen, H., & Mäkelä, J. (2016). Context-sensitive distraction warnings –
Effects on drivers’ visual behavior and acceptance. International Journal of Human-
Computer Studies, 90, 39–52. doi:10.1016/j.ijhcs.2016.03.003
Kuo, J., Lenné, M., Mulhall, M., Sletten, T., Anderson, C., Howard, M., … Collins, A. (2019). Continuous monitoring of visual distraction and drowsiness in shift-workers during naturalistic driving. Safety Science, 119, 112–116.
Lee, J., Moeckli, J., Brown, T., Roberts, S., Victor, T., Marshall, D., … Nadler, E. (2013).
Detection of driver distraction using vision-based algorithms. Proceedings of the 23rd
Enhanced Vehicle Safety Conference, Seoul, Korea: ESV.
Lenné, M. G. & Jacobs, E. M. (2016). Predicting drowsiness-related driving events: A review
of recent research methods and future opportunities. Theoretical Issues in Ergonomics
Science, 17, 533–553.
Litman, T. (2019). Autonomous Vehicle Implementation Predictions. Victoria, Canada:
Victoria Transport Policy Institute.
Louw, T. & Merat, N. (2017). Are you in the loop? Using gaze dispersion to understand driver
visual attention during vehicle automation. Transportation Research Part C: Emerging
Technologies, 76, 35–50. doi:10.1016/j.trc.2017.01.001
Mauss, I. B. & Robinson, M. D. (2009). Measures of emotion: A review. Cognition & Emotion,
23(2), 209–237. doi:10.1080/02699930802204677
262 Human Factors for Automated Vehicles
McDonald, A. D., Lee, J. D., Schwarz, C., & Brown, T. L. (2014). Steering in a random forest:
Ensemble learning for detecting drowsiness-related lane departures. Human Factors,
56(5), 986–998. doi:10.1177/0018720813515272
Merat, N., Seppelt, B., Louw, T., Engström, J., Lee, J. D., Johansson, E., … Keinath, A.
(2018). The “out-of-the-Loop” concept in automated driving: proposed definition,
measures and implications. Cognition, Technology & Work, 21(2), 87–98. doi:10.1007/
s10111-018-0525-8
Mulhall, M. D., Cori, J., Kuo, J., Magee, M., Collins, A., Anderson, C., … Howard, M. E.
(2018). Pre-drive ocular assessment predicts driving performance in shift workers: A
naturalistic driving study. Journal of Sleep Research, 27(S2).
National Highway Traffic Safety Administration. (2008). National Motor Vehicle Crash
Causation Survey (HS 811 059). Washington, DC: National Highway Traffic Safety
Administration.
National Transportation Safety Board. (2017). Safety Recommendation H-17–042.
Washington, DC: National Transportation Safety Board.
Parasuraman, R. (2000). The Attentive Brain. Cambridge, MA: MIT Press.
Perrier, J., Jongen, S., Vuurman, E., Bocca, M. I., Ramaekers, J. G., & Vermeeren, A. (2016).
Driving performance and EEG fluctuations during on-the-road driving following sleep
deprivation. Biological Psychology, 121, 1–11.
Regan, M. A., Lee, J. D., & Young, K. L. (2008). Driver Distraction: Theory, Effects and
Mitigation. Boca Raton, FL: CRC Press.
Russell, J. A. (1994). Is there universal recognition of emotion from facial expression?
A review of the cross-cultural studies. Psychological Bulletin, 115(1), 102–141.
doi:10.1037/0033–2909.115.1.102
Seppelt, B., Seaman, S., Lee, J., Angell, L., Mehler, B., & Reimer, B. (2017). Glass half-full:
On-road glance metrics differentiate crashes from near-crashes in the 100-Car data.
Accident Analysis and Prevention, 107, 48–62.
Vegega, M., Jones, B., & Monk, C. (2013). Understanding the Effects of Distracted Driving
and Developing Strategies to Reduce Resulting Deaths and Injuries: A Report to
Congress (DOT HS 812 053). Washington, DC: National Technical Information
Service.
Victor, T. W. (2010). The Victor and Larsson (2010) Distraction Detection Algorithm and
Warning Strategy. Gothenburg, Sweden: Volvo Technology.
Victor, T. W., Tivesten, E., Gustavsson, P., Johansson, J., Sangberg, F., & Ljung Aust, M.
(2018). Automation expectation mismatch: Incorrect prediction despite eyes on threat
and hands on wheel. Human Factors, 60(8), 1095–1116. doi:10.1177/0018720818788164
Wierwille, W. W., Ellsworth, L. A., Wreggit, S. S., Fairbanks, R. J., & Kirn, C. L. (1994).
Research on Vehicle-Based Driver Status/Performance Monitoring: Development,
Validation and Refinement of Algorithms for Detections of Driver Drowsiness (DOT
HS 808 247). Washington, DC: National Highway Traffic Safety Administration.
World Health Organization. (2018). Global Status Report on Road Safety 2018. Geneva:
World Health Organization.
Zeng, Z., Pantic, M., Roisman, G. I., & Huang, T. S. (2009). A survey of affect recognition
methods: Audio, visual, and spontaneous expressions. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 31(1), 39–58. doi:10.1109/TPAMI.2008.52
12 Behavioral Adaptation
John M. Sullivan
University of Michigan Transportation Research Institute
CONTENTS
Key Points .............................................................................................................. 263
12.1 Behavioral Adaptation and Automated, Connected, and
Intelligent Vehicles ....................................................................................... 264
12.1.1 Why Is BA a Concern? ..................................................................... 265
12.1.1.1 Negative BA ....................................................................... 265
12.1.1.2 Positive Adaptation ............................................................ 267
12.1.1.3 Discussion .......................................................................... 268
12.1.2 Modeling Behavioral Adaptation ..................................................... 269
12.1.2.1 How Might Behavior Change with Adaptation? ................ 272
12.1.2.2 What Specific Behaviors Might Change with Adaptation? ... 273
12.1.2.3 When Does Behavioral Adaptation Occur? ....................... 274
12.2 Behavioral Adaptation to Advanced Driving Assistance Systems ............... 275
12.2.1 Adaptation to Automation of Longitudinal and Lateral Control ...... 276
12.2.1.1 Automation of Longitudinal and Lateral Control .............. 276
12.2.1.2 Changes in Driver Event Detection Related to
Control Automation ............................................................ 277
12.2.1.3 Control Automation and Changes in Driving
Behavior—Control Management ................................................... 281
12.2.1.4 Control Automation and Driver Workload......................... 282
12.2.1.5 Control Automation and SA............................................... 283
12.2.2 Adaptation to ADAS Warning Systems............................................ 283
12.2.2.1 Adaptation to ISA............................................................... 284
12.2.2.2 Lane Departure Warning ................................................... 284
12.2.2.3 Forward Collision Warning................................................ 285
12.2.2.4 Combination Discrete Warning Systems ........................... 285
12.3 Conclusion .................................................................................................... 286
References .............................................................................................................. 287
KEY POINTS
• A driver’s behavior is malleable and may change (or adapt) in response to
alterations in the function and performance of their vehicle. This change
has generally been called behavioral adaptation (BA).
• BA can occur for a variety of reasons and be manifest in a variety of ways.
A major concern involves adaptations that diminish the expected safety
improvements after a driving enhancement is introduced.
This chapter will describe classic examples of BA, review more recent thoughts about the mechanisms that drive BA, and finally discuss how BA is influenced by various forms of vehicle automation, including the kind of part-task automation provided by advanced driver assistance systems (ADAS), as well as other connected vehicle applications.
12.1.1.1 Negative BA
Perhaps the chief concern about BA involves adaptations that diminish the expected
safety improvements after a driving enhancement is introduced. This is known as
negative BA. Classic examples of this include the observation made by Gibson and
Crooks (1938), whereby drivers equipped with more efficient brakes adjusted their
safety margins to account for the new stopping capability. Other early examples
include the introduction of studded tires (Rumar, Berggrund, Jernberg, & Ytterbom,
1976) to improve traction under icy conditions. In this study, drivers drove faster
in icy conditions when equipped with studded tires. Although a greater net safety
margin was observed, it was not as great as it would have been if the driver’s speed
behavior on icy roads had not changed. Another example is the introduction of
anti-lock brakes (ABS). Earlier studies of drivers equipped with ABS (e.g., Aschenbrenner, Biehl, & Wurm, 1988; OECD, 1990) seemed equivocal: no differences were observed in the number of accident involvements, although observers noted faster driving in ABS-equipped vehicles. Later work, however, found evidence that vehicles equipped with ABS are driven with significantly shorter headways and lower rates of seat-belt use (Sagberg, Fosser, & Sætermo, 1997).
In each case, the expected improvement in driver safety appears to be offset by
a change in behavior that increases crash risk. How much the behavioral change
offsets the reduction in risk likely varies, but some researchers suggest that driv-
ers adjust their driving to achieve an ideal target risk, completely negating the risk
reduction produced by the intervention. This has been called risk homeostasis
(Wilde, 1998). This is one of several explanations of BA that will be discussed in more detail later.
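Risk homeostasis can be pictured as a feedback loop in which behavior is adjusted until perceived risk matches the driver’s target. A toy sketch, with a linear update rule and all quantities assumed purely for illustration:

```python
def homeostasis_step(speed_kmh, perceived_risk, target_risk, gain=5.0):
    """One step of a toy risk-homeostasis loop (after Wilde, 1998): if
    perceived risk falls below the target (e.g., after better brakes are
    fitted), the driver speeds up until the gap closes. The linear update
    and gain are illustrative, not part of Wilde's formulation."""
    return speed_kmh + gain * (target_risk - perceived_risk)
```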
12.1.1.1.1 Misuse
Other forms of negative BA may involve the use of a new technology in circum-
stances for which it is not appropriate (Parasuraman & Riley, 1997). This problem is
specifically associated with different forms of vehicle automation in which a driver
is provided the capability of delegating part of the driving task to an automated sys-
tem, or supplementing his or her monitoring abilities with enhanced sensing capa-
bilities. Unlike the previous examples, in which the driver might increase crash risk
by altering his or her choice of speed, gap acceptance, or following distance, misuse
involves decisions about when, where, and how one elects to employ these new capa-
bilities. In the previous example, performance improvements in pre-existing vehicle
functions resulted in the modulation of pre-existing driving behaviors. Automation
introduces new tasks and capabilities to drivers, changing the repertoire of actions
drivers perform to manage their vehicles.
For example, adaptive cruise control (ACC) is designed to be used on limited
access roadways that are free of pedestrians, bicyclists, mopeds, and skateboarders.
This is because most ACC radar has difficulty detecting small, stationary, or low-
speed objects. ACC is also ignorant of traffic signage, signals, and other characteris-
tics of the roadway environment. A misuse of ACC would involve activating it in an
inappropriate environment like a surface street—a misuse of where it is activated.
ACC is also limited in its capabilities in fog, snow, and heavy rain. Use of ACC in
these conditions is also inappropriate—a misuse of when it is activated. ACC might
be used by some drivers to maintain relatively shorter headways than they would
during manual driving with the objective of reducing the incidence of forward cut-
ins. This form of misuse is associated with how it is used.
Because studies of BA in which drivers are assisted by automation are generally
focused on performance comparisons of behavior with and without the automation,
behavioral differences have also been attributed to adaptation effects. However, in
the early stages of use, it may be more accurate to attribute these behavioral dif-
ferences to a lack of understanding about the technology’s functions and limita-
tions. Drivers may modify their driving behavior based on their (sometimes flawed)
understanding of how a system operates, accepting the system to stand in for oneself
(see e.g., this Handbook, Chapter 3). Many automation systems invite drivers to
back out of the control role to allow the system to take over. After drivers oblige,
they may be slow to recognize the need to retake control, resulting in a delayed
intervention for a critical event (see also this Handbook, Chapter 7). Given the
limited awareness some drivers may have about their technologies (e.g., Beggiato,
Pereira, Petzoldt, & Krems, 2015; Dickie & Boyle, 2009; Jenness, Lerner, Mazor,
Osberg, & Tefft, 2008), compromises to safety may be rooted in ignorance of these
system limitations.
It is also important to recognize that use of these systems involves changes in
how the vehicle is managed. For example, it would be hard to argue that refraining
from use of a brake pedal during ACC activation is a result of BA—this is a behav-
ioral change that is mandated by the ACC system to allow it to function properly.
Stepping on the brake disengages ACC. Such a change should not be considered
an instance of BA—it is a requirement to allow the ACC to perform. Likewise,
other forms of ADAS may require some degree of display monitoring or performing
an activation sequence that may introduce new driving behaviors. While this may
differ from routine non-ADAS equipped behavior, it should not be considered an
instance of BA.
12.1.1.1.2 Disuse
Failure to realize safety benefits from automation technologies may also be a con-
sequence of drivers choosing to not employ automation when it may provide some
assistance. For example, lane departure warning (LDW) technologies alert drivers
when their vehicle is about to depart the lane, unless the lane change is signaled.
These camera-based systems work best on roadways with clearly marked lanes, and
less reliably in construction areas or when weather conditions obscure the lane lines.
While they have the clear potential to assist drivers (Scanlon, Kusano, Sherony, &
Gabler, 2015; Sternlund, Strandroth, Rizzi, Lie, & Tingvall, 2017), drivers often do
not elect to activate them (e.g., Eichelberger & McCartt, 2014; Reagan & McCartt,
2016). This may not be so much an instance of BA—by limiting exposure to the system, the driver eliminates the possibility of any change in driving behavior—but it does represent a failure to realize a potential safety benefit as a consequence of non-use.
12.1.1.2 Positive Adaptation
Positive BA occurs when drivers' behavioral changes improve safety in ways that exceed the benefits originally anticipated. There are generally fewer
examples of positive adaptations reported in the literature. Most often, the reported
behavioral change is a consequence of adopting driving behaviors demonstrated (or
encouraged) by the technology. For example, users of an LDW system that produced
an audible warning whenever an unsignaled lane change was executed reduced their frequency of unsignaled lane changes by 43% on freeways and 24% on surface roads (LeBlanc, Sayer, Winkler, Bogard, & Devonshire, 2007); a later study reported a similar effect, with a 66% reduction in unsignaled lane changes (Sayer et al., 2010). Another example was reported by Ervin et al. (2005)
in which ACC drivers executed 50% fewer passing maneuvers than they did when
driving without ACC.
Other forms of positive BA have been reported in studies that compare characteristics of driving performance while assisted by ADAS to unassisted performance.
For example, Bianchi Piccinini, Rodrigues, Leitão, and Simões (2014) report safer
overall time headways when drivers are permitted use of an ACC compared with
fully manual driving. In this study, the behavioral components of interest were headway and speed management during ACC-supported and unsupported segments of a drive. While identified as a BA outcome, the overall performance
improvement during ACC-assisted segments of the drive may actually be deter-
mined by the degree to which the driver delegates some of the longitudinal control
to the ACC system, which is less variable than manual control. We also note that the
performance measures (speed and headway management) in Bianchi Piccinini et al.
(2014) are focused exclusively on those parts of the driving task that are directly
related to the ACC function. Other characteristics of the driver’s behavior that were
not monitored in this study might not fare as well. For example, Rudin-Brown and
Parker (2004) reported a decline in responsiveness to safety-relevant brake light acti-
vation in a forward vehicle. It is important to keep in mind that myriad driving
behaviors might change in response to changes in the driving task.
12.1.1.3 Discussion
In the past, BA effects were generally categorized as positive, negative, or absent. Earlier perspectives on BA effects (e.g., Evans, 1985; OECD, 1990; Wilde,
1998) evaluated adaptation effects in terms of the projected benefits, based on an
assumption of no change in driver behavior. Thus, if the observed decline in crash
risk after the introduction of a technology (like ABS, for example) fell short of the
projected decline, it was believed that some degree of negative BA of the driver
was responsible for the shortfall. Early improvements in vehicle technologies were
largely associated with the performance of the vehicle (e.g., braking, handling,
steering, and acceleration) or with occupant protection (e.g., air bags, seat belts).
The driver’s basic task was generally unchanged throughout these improvements.
Within these constraints, BA was evaluated in terms of changes in overall crash
risk, operationalized as elevated travel speed, aggressive acceleration and brak-
ing, short following distances, small gap acceptances, and the execution of other
aggressive maneuvers.
The introduction of ADAS and other automation technologies into the driving
task has somewhat altered perspectives on driving such that the evaluation of BA
effects is less focused on overall assessment of crash risk. BA effects are now more
focused on component driving behaviors. Perhaps this is because ADAS technolo-
gies add new tasks for drivers to perform during driving: drivers must activate
and deactivate systems, decide when to delegate and retake control, monitor sys-
tem displays for status, heed warnings that might indicate critical conditions, and
take appropriate action in response to such warnings. Understanding BA now also
involves identifying when, where, and how a driver makes these choices. At the
same time, many of these systems relieve the driver of the tedium of perform-
ing various control tasks, assist in monitoring the roadway environment for
conflicts, and intervene to avoid imminent crashes or to stabilize control. Perhaps
because ADAS technologies are doing parts of the driving task (e.g., monitoring
for lane excursion, maintaining a safe headway, regulating speed), the driving task
is now conceived of as many component tasks. Some of these tasks can be under-
taken by technology (e.g., headway management), some remain with the driver
(e.g., tactical decisions), and some are new to the driver (i.e., activation and monitoring of the ADAS). In this new perspective, behavioral changes resulting
from ADAS use are evaluated with reference to a baseline driving condition in
which the ADAS technology is not available. Often this involves identifying how
much a specific activity that the driver would normally perform under manual
driving has changed with automation. Do drivers use their rear-view mirrors less
when blind zone detection is available? Do drivers glance less at the lane boundar-
ies with lane-keeping assistance? Do drivers monitor forward headway less with
ACC engaged?
Motivational models of BA, however, are limited in their ability to generate testable hypotheses or sufficient guidance to predict the character and kind of BA that might occur.
As touched on in the discussion of negative BA associated with automation tech-
nologies, the motivational models of BA do not explicitly identify details of the driv-
ing task that might be subject to BA effects. As automation technologies have been
introduced into the vehicle, models of the driving task have involved partitioning
it into those parts that can be supported with automation and those that are left to
the driver. For example, conventional cruise control (CCC) originally assisted with
longitudinal control by allowing the driver to set a fixed cruise speed that would
be maintained by automation. The driver remained responsible for managing forward headway safely. Thus, the components of longitudinal control involved speed
and headway management. As technology advanced, both speed and headway man-
agement could be supported by automation, and the driver’s task changed again.
Although motivational models of BA explained the impetus to change driving behav-
ior (e.g., to adjust perceived risk level or to adjust the level of skill challenge), later
models of BA attempt to identify particular driving behaviors that may be subject to
BA (e.g., forward road monitoring, mirror checks, turn signal use). Thus, driving is
conceived as a coordinated agglomeration of complex tasks involving several cogni-
tive, perceptual, and motor functions. Some of these functions can be supported by
automation, others cannot (see e.g., this Handbook, Chapter 8); the driver is now
responsible for understanding which parts of the task remain his or her responsibility and
which are handled by automation. Sometimes this can be a challenge—early ver-
sions of automated parking assist would steer, but leave the brake pedal application,
transmission operation, and accelerator pedal application to the driver.
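The division of longitudinal control between speed and headway management can be illustrated with a minimal control-law sketch. The constant-time-gap form below is a textbook idealization with placeholder gains and parameters; it is not the law used by any particular production system.

```python
# Minimal sketch: CCC regulates speed only; ACC adds a headway loop.
# All gains and parameters are illustrative placeholders.

def ccc_accel(set_speed, speed, k_speed=0.4):
    """Conventional cruise control: hold the set speed (m/s^2 command).
    Headway management remains entirely the driver's task."""
    return k_speed * (set_speed - speed)

def acc_accel(set_speed, speed, gap, closing_speed,
              time_gap=1.8, standstill_gap=2.0,
              k_gap=0.2, k_rel=0.5, k_speed=0.4):
    """Adaptive cruise control: hold the set speed unless the headway
    loop demands less acceleration (constant-time-gap policy).
    closing_speed = own speed minus lead speed (positive when approaching)."""
    desired_gap = standstill_gap + time_gap * speed           # meters
    a_headway = k_gap * (gap - desired_gap) - k_rel * closing_speed
    a_speed = k_speed * (set_speed - speed)
    return min(a_speed, a_headway)  # the more conservative command wins

# Closing on a lead vehicle at 5 m/s with a 30 m gap while set to 30 m/s:
print(acc_accel(set_speed=30.0, speed=25.0, gap=30.0, closing_speed=5.0))
# Negative output: the headway loop commands braking despite the speed deficit.
```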
Even before much automation began appearing on vehicles, Michon (1985) critiqued motivational models, suggesting that they were actually discussing
“…the products of cognitive functions (beliefs, emotions, intentions) rather than such
functions themselves.” Michon (1979) offered a “cognitive” framework as an alter-
native to the existing behavioral approaches (see also, Janssen, 1979). The cogni-
tive framework looked to production system architectures and human information
processing approaches (e.g., Anderson, 1993; Anderson & Lebiere, 1998; Lindsay
& Norman, 1977; Newell, 1990; Newell & Simon, 1972) that employed “cognitive”
procedures to account for driver behavior. These models involved a far more granu-
lar analysis than before, identifying explicit inputs, processes, and outputs to explain
driver behavior. They departed from traditional control-theoretic models (e.g., Reid,
1983) by incorporating processes like pattern matching, propositional logic, learning
mechanisms, and goal-directed behavioral hierarchies to explain driving behavior.
Thus, Michon cast the task of driving into a hierarchical framework that divided
driving into strategic, tactical (or maneuvering), and control (or operational) tasks
(Janssen, 1979; Michon, 1979). The strategic level formulates travel goals and plans,
the tactical level governs deliberate maneuvers (like passing), and the control level
covers automatic actions like lane tracking and speed control (see Figure 12.1). A
similar hierarchy was suggested by Rasmussen (1983) in his description of the per-
formance levels of skilled operators, dividing the operation levels into knowledge-,
rule-, and skill-based behavior (similar to Michon’s strategic, tactical, and control
levels, respectively). Ranney (1994) endorsed this hierarchical approach.
FIGURE 12.1 A hierarchical model of the task of driving. (From Michon, 1985.)
FIGURE 12.2 An early version of the qualitative model of behavioral adaptation. (Rudin-
Brown & Noy, 2002.)
The qualitative model of BA proposed by Rudin-Brown and Noy (2002; see Figure 12.2) comprises a driver component that describes internal operations that affect driver behavior; a behavioral component that categorizes overt actions, incorporating Michon's (1985) hierarchy; and an external world (Object) component that depicts the influence of the environment, roadway, and vehicle.
vehicle. On the driver side, components such as trust, the driver’s mental model (of
the ADAS and vehicle), as well as some personality factors such as locus-of-control
and sensation seeking contribute to triggering BA (see Figure 12.2). Although mod-
estly characterized as qualitative, the model has led to some testable hypotheses and
results (e.g., Rudin-Brown & Noy, 2002; Rudin-Brown & Parker, 2004) that link
personality factors such as locus-of-control and sensation seeking to BA effects. The
model was later revised to include other driver factors such as gender, driver state,
and age (Rudin-Brown, 2010).
Most ACC systems have limitations that could be worrisome for drivers. In par-
ticular, the sensor systems used for forward detection may not detect small forward
objects such as pedestrians, bicycles, animals, or motorcycles. On curvy segments of
road, forward objects may not be properly aligned with the radar, such that the radar mistakes an object in an adjacent lane for a forward object, or a true forward object falls outside the radar's field of view and goes undetected. Most high-speed ACC systems do not detect forward objects that are moving slowly or stopped. The stopping authority of most ACC systems is limited—many are incapable of braking above 0.3 g and will fail to avoid a forward collision when harder braking is required. Finally, ACC perfor-
mance deteriorates in snowy, rainy, or foggy weather, making the forward detection
unreliable. These limitations are a concern, particularly if drivers are unaware of
them and are responsible for intervening when they arise.
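The 0.3 g stopping-authority limit can be given a worked form. The sketch below uses illustrative values (the authority threshold varies by system); it computes the constant deceleration needed to stop before reaching a stationary object and flags when that exceeds a typical ACC authority, which is the point at which the driver must intervene.

```python
# Hedged illustration of limited ACC stopping authority.
G = 9.81  # m/s^2

def driver_braking_required(speed_mps, gap_m, authority_g=0.3):
    """True when stopping before a stationary object needs more
    deceleration than the ACC can command (from v^2 = 2 a d)."""
    required = speed_mps ** 2 / (2.0 * gap_m)
    return required > authority_g * G

# At 25 m/s (90 km/h) with 60 m of clearance, about 0.53 g is needed,
# well beyond 0.3 g: ACC alone cannot avoid the collision.
print(driver_braking_required(25.0, 60.0))  # True
```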
Lateral control. Active management of lane position is an outgrowth of LDW
technologies that rely on lane markings to detect when a vehicle is crossing a lane
or road boundary. Instead of simply warning the driver that a boundary is being
approached, systems can now steer the vehicle back into the lane. For convenience,
we will generically refer to them as active steering (AS) systems, although some
intervene when the vehicle nears the lane boundary and others continuously cen-
ter the vehicle in the lane. AS automation is another step toward fully autonomous
vehicle control. When paired with ACC, it has been variously branded as Super
Cruise (General Motors), Autopilot (Tesla), and Pilot Assist (Volvo). Most AS sys-
tems provide limited steering authority and cannot fully manage lateral position at
high speed on high-curvature roadways.
Like ACC, AS systems are limited by the degree to which they can detect lane
markings using video, laser, or infrared sensors. The system does one simple thing: it maintains the vehicle's position between two detected lane boundary lines. If lane markings are obscured by snow, road wear, or other debris, the AS system will fail, returning lateral control to the driver. AS systems will not detect obstacles in the center of the roadway—no attempt will be made to avoid debris. They
detect lane lines, not objects. When lane boundaries are distorted by extreme roadway
geometry (e.g., high curvature) or, if complicated line patterns are drawn in the road
(as in a construction zone), the system’s ability to maintain lane position deteriorates.
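A minimal sketch of this lane-centering behavior is given below, with hypothetical gains and thresholds rather than any vendor's algorithm: steering is feedback on lateral offset and heading error, and control is handed back to the driver when lane-marking confidence drops, mirroring the failure modes just described.

```python
# Illustrative active steering (AS) sketch; hypothetical names and gains.

def as_steer_command(lateral_offset_m, heading_error_rad, lane_confidence,
                     k_y=0.05, k_psi=0.8, min_confidence=0.6):
    """Return a steering correction, or None when lane markings are too
    uncertain (snow, wear, construction) and the driver must resume."""
    if lane_confidence < min_confidence:
        return None  # AS disengages; lateral control returns to the driver
    return -(k_y * lateral_offset_m + k_psi * heading_error_rad)

print(as_steer_command(0.4, 0.02, lane_confidence=0.9))  # small correction
print(as_steer_command(0.4, 0.02, lane_confidence=0.3))  # None: handback
```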
Most research on AS involves simulator studies. We include discussion of AS
systems along with ACC because the two systems are frequently paired together
in simulator studies that investigate progressively greater levels of automation (e.g.,
Carsten, Lai, Barnard, Jamson, & Merat, 2012; Jamson, Merat, Carsten, & Lai, 2001;
Jamson, Merat, Carsten, & Lai, 2013; Merat, Jamson, Lai, & Carsten, 2012; Merat
et al., 2014; Stanton & Young, 1998; Young & Stanton, 2002, 2007a, 2007b).
Much of this research examined whether drivers were prepared to take control of the vehicle when circumstances exceeded the ACC's performance boundaries. In mostly simulator-based studies, ACC-equipped drivers were slower than
manual drivers to react to critical traffic situations like the abrupt braking of a lead
vehicle, unexpected cut-ins, the sudden appearance of a stationary vehicle in the
travel path, or system failures (Bianchi Piccinini et al., 2015; de Winter et al., 2014;
Hoedemaeker & Brookhuis, 1998; Larsson, Kircher, & Andersson Hultgren, 2014;
Nilsson, 1995; Stanton et al., 1997; Stanton et al., 2001; Vollrath, Schleicher, & Gelau,
2011; Young & Stanton, 2007a). For example, Nilsson (1995) observed late braking
among ACC-equipped drivers when approaching a stationary queue compared with
manual drivers. Stanton et al. (1997) observed 4 of 12 drivers fail to retake control
of their vehicle when the ACC system abruptly accelerated into a forward vehicle.
Hoedemaeker and Brookhuis (1998) observed both larger brake force maximums
and smaller minimum headway times for drivers of vehicles equipped with ACC.
Larsson et al. (2014) observed longer brake reaction times (BRT) in response to
cut-ins when using ACC, compared with manual driving. A similar pattern was observed in a test-track study by Rudin-Brown and Parker (2004), in which drivers with ACC took about 0.6–0.8 seconds longer to react to a lead vehicle's brake lights than the 2.0-second average observed without ACC (i.e., roughly 2.6–2.8 seconds in total).
Comparable results were observed in studies of AS paired with ACC. Collectively,
the pairing is referred to as highly automated driving (HAD) (e.g., de Winter et al.,
2014; Merat et al., 2014). Strand, Nilsson, Karlsson, and Nilsson (2014) found more
hard braking and collisions among HAD drivers than ACC-only drivers under condi-
tions of automation failure. Merat and Jamson (2009) also found that drivers braked
about 1.5 seconds later to respond to a forward vehicle braking with HAD compared
with manual driving. In a meta-analysis, de Winter et al. (2014) report that most of the
evidence suggests that HAD and ACC evoke “…long response times and an elevated
rate of (near-) collisions in critical events as compared to manual driving” (p. 208).
It is unclear whether these effects are a consequence of BA, or simply the relative
ignorance about the functional characteristics of an unfamiliar ADAS. As discussed
earlier, BA is thought to stabilize after the learning and appropriation phase, dur-
ing an integration phase (Cacciabue & Saad, 2008). This suggests that BA may take time to develop; however, many of the above results were generated shortly after a driver was introduced to the system for the first time. That is, drivers are often relatively new to ACC and AS capabilities, and the response to the critical event used to evaluate adaptation occurs only after a brief period of exposure to these
systems. In the driving simulator studies of Stanton and Young, trials with differ-
ent levels of ACC lasted between 10 and 20 minutes and were preceded by about
5 minutes of practice (Stanton et al., 1997; 2001; Young & Stanton, 2007a); the simu-
lator trials in Hoedemaeker and Brookhuis (1998) each lasted about 15 minutes.
Although many later simulator studies use longer periods of ACC and AS exposure, they are not much longer. For example, simulator studies by University of Leeds
researchers employed about 45 minutes of simulator practice, followed by experi-
mental trials that lasted 45 minutes each (Carsten et al., 2012; Jamson et al., 2001;
Jamson et al., 2013; Merat et al., 2012; Merat et al., 2014). Based on track length
and travel speed, other simulator experimental trials appear to exceed 30 minutes
(Beggiato & Krems, 2013; Bianchi Piccinini et al., 2014; 2015; Vollrath et al., 2011).
Similar exposure levels were used in the test track study (Rudin-Brown & Parker,
2004), where exposure to the ACC system involved a briefing on ACC operation and
a 30-minute warm-up session on the track. There were two 30-minute experimental
trials with the ACC active.
If most studies of ACC and HAD involve drivers who are relatively unfamiliar
with these systems, apart from an initial briefing and test drive, is it surprising that
drivers are less prepared to respond to critical events that involve what could be
characterized as different forms of ADAS failures? Perhaps participants’ limited
exposure to the ADAS in these studies reveals more about their misunderstanding of the system's limitations than it does about ADAS BA effects. That is, should
behavior that may be the result of a poor understanding of an ADAS be considered
a BA effect? In the earlier theoretical discussion of BA mechanisms, the driver’s
mental model was identified as one of the several components that influence BA.
While it is important to first have a mental model of an ADAS, both the objective accuracy of that model and the driver's level of confidence or trust in it also play a role in adaptation.
12.2.1.2.2 Long-Term Changes
A driver’s understanding of the functioning of ADAS has been explored by study-
ing drivers with ACC experience (e.g., Bianchi Piccinini et al., 2015; Larsson et al.,
2014). Larsson et al. (2014) found that both experienced and novice ACC users had slower BRTs with ACC automation than with fully manual control, but that the effect was smaller for experienced users, who were also faster to respond than novices. These results suggest that experience with ACC can influence how quickly drivers respond to unpredictable events.
Two studies directly examined drivers' trust, acceptance, and mental models
of ACC. Beggiato and Krems (2013) conducted a simulator study involving three
separate drive sessions over a six-week period in which drivers were given different
briefings about the ACC’s functional capabilities. The correct group of drivers was
given accurate information about the ACC that included details related to the system’s
difficulty detecting small vehicles, functioning in adverse weather conditions, and its
management around narrow road bends. The incomplete group was provided with a
basic functional overview of ACC, but was not advised of the ACC problem areas.
An incorrect group was given the same information as the correct group, but was
also given erroneous information that the system had problems with large vehicles
and with white/silver cars. Mental models were assessed using questionnaires immediately following the ACC briefing and after each experimental trial. Over time,
the three groups’ mental models converged. Notably, non-occurring, non-experienced
failures originally called out in the descriptions (e.g., white/silver cars, large vehicles)
were forgotten, while unexpected experienced failures (incomplete group) led to quick
adjustments of the mental model toward the correct group. A follow-up on-road study
(Beggiato et al., 2015) was also conducted among drivers with no prior ACC experi-
ence. Participants drove an ACC-equipped test vehicle in ten drives over a two-month
period. They were initially given a complete description of the ACC function, which
included specific details about ACC problem areas (i.e., detection of small vehicles, functioning in adverse weather, and management of narrow road bends).
12.2.1.2.3 Driver Trust
Many studies of trust in automation have looked at the use of ACC. They suggest that
both experienced and novice users over-trust automation, that trust increases with
exposure, and that trust appears insensitive to failure (Itoh, 2012; Rudin-Brown &
Parker, 2004). Similar results have been reported for inaccurate LDW systems
(Rudin-Brown & Noy, 2002). When drivers are initially given many details about
ACC operational exceptions, some of which are incorrect, their rated trust starts low;
however, it increases over successive exposures to ACC (Beggiato & Krems, 2013).
If drivers are given incomplete information about ACC function, trust starts high
but declines with exposure to unanticipated boundary conditions—i.e., an unmis-
takable deviation from expectation that likely triggers an amendment to an internal
model (Engström et al., 2018). Trust increases over sessions according to a power law (Beggiato et al., 2015), leveling out by about the fifth exposure session, although this development may be affected by the variability in the predicted and observed system behavior (Engström et al., 2018). This suggests that trust in an
ADAS will quickly grow with exposure alone and may become inappropriately high
if the driver has limited or no experience with boundary conditions that reflect the
system’s performance variability over a wider operating range. When the operating
range is limited, gaps or inconsistencies may develop in the driver’s mental model.
It is possible that the brief exposure times in the early studies may be responsible for
the results, such that the effect is diminished with increased exposure.
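One hedged way to formalize this reported trend is a standard learning-curve power law; the form and parameter values below are illustrative placeholders, not estimates reported by Beggiato et al. (2015):

\[ T_n = T_\infty - (T_\infty - T_1)\, n^{-\beta}, \qquad \beta > 0, \]

where \(T_n\) is rated trust after session \(n\), \(T_1\) is initial trust, and \(T_\infty\) is the asymptote. With \(\beta \approx 1\), roughly 80% of the gap between initial and asymptotic trust is closed by the fifth session, consistent with the leveling out described above.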
Control automation may also affect a driver’s tactical decisions, although there are
few reports of this. Ervin et al. (2005) found that drivers using ACC linger behind
forward vehicles twice as often as they do when ACC is inactive. Similarly, Jamson
et al. (2001; 2013) report that in highly automated conditions, drivers refrain from
behaviors that would require temporarily retaking manual control of the vehicle.
Perhaps this occurs because of a disincentive to turn off automation once it is acti-
vated. For example, perhaps automation setup takes time and effort, or the workload
reduction accompanying automation is attractive.
Curve speed warnings (CSW) are issued when the vehicle approaches a curved road segment at a high rate of speed; LDWs are generated
when a lane boundary is crossed, using either a haptic pulse or a rumble-strip sound;
forward collision warnings (FCW) produce audible alerts when headway or time-
to-collision to a forward vehicle reaches an established limit; and intelligent speed
adaptation (ISA) systems can either provide warnings to the driver or intervene to
control speed to conform to the posted speed limits. As mentioned earlier, some
warning systems might be thought of as a form of connected vehicle application that
relays supplemental information to the driver that may not be directly observable.
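As one concrete illustration of these threshold-based warnings, the sketch below implements a time-to-collision (TTC) trigger of the kind FCW systems use; the 2.5-second threshold is a placeholder, since deployed systems tune this value.

```python
# Hedged FCW sketch: alert when time-to-collision drops below a limit.

def fcw_alert(gap_m, closing_speed_mps, ttc_threshold_s=2.5):
    """True when an audible forward collision warning should fire."""
    if closing_speed_mps <= 0:       # opening or steady gap: no conflict
        return False
    ttc = gap_m / closing_speed_mps  # seconds to contact at current rate
    return ttc < ttc_threshold_s

print(fcw_alert(gap_m=30.0, closing_speed_mps=15.0))  # TTC = 2.0 s -> True
print(fcw_alert(gap_m=60.0, closing_speed_mps=15.0))  # TTC = 4.0 s -> False
```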
In a field test of an integrated warning system, drivers who had departed the roadway reported the highest levels of trust in the LDW, suggesting that they may have developed a
greater reliance on the system to keep them informed than those who did not depart
the roadway.
A field test of another integrated warning system, called the Continuous Support
system (Várhelyi, Kaufmann, & Persson, 2015), included devices that warned drivers about speed limit exceedance, curve speed, forward collision risks, and vehicles in the blind zones. The on-road study involved two 45-minute drives along a
prescribed route: once with the system turned on and once with the system off. There
were few observed changes in driver behavior. No changes were found in speed or
headway management, although curve speeds appeared to be lower when the system
was active. Some negative adaptation outcomes were also observed—turn speeds
through intersections were higher, and drivers appeared to come dangerously close
to the sides of the road more frequently when the system was active.
12.3 CONCLUSION
This review of BA phenomena highlights several challenges that human factors researchers face when assessing how any change to the driving task will alter the way a driver manages the vehicle on the road. The challenges reviewed include identifying which of the many component driving behaviors might change with the introduction of automation. Not only might there be many task components, but they may also occur across different levels of the driv-
ing task hierarchy. For each component, it will be important to characterize how
the change is manifest, how long each takes to develop and stabilize, and what
the ultimate consequence of this change is for safety. In some situations, the BA
may occur at an executive level that has downstream effects. For example, if a driver changes the way spatial attention is allocated during driving, the driver may monitor the forward roadway less than before, perhaps because doing so no longer seems necessary while the automation covers it. This perceived reduction in monitoring demand may enable the driver to redirect attention to other tasks considered more important and easily managed. Some of these tasks may not be driving-related and ultimately result in
diminished SA (see also, this Handbook, Chapters 7, 9). This diminished SA (or a
distracted driver) might be considered an indirect byproduct of BA of attentional
distribution.
In our view, BA is likely instigated by reaching an adequate level of trust (or con-
fidence) in one’s ability to predict how automation will respond in a variety of situa-
tions. How long it might take to reach this level is likely driven by direct experience
using and observing the automation in action, as well as an individual’s personal
perception of what constitutes "adequate." Early studies of BA in vehicles equipped with airbags demonstrate that, for a BA effect to materialize, it is sometimes sufficient simply to know a capability is present, without ever witnessing its operation.
In other cases, reaching the “adequate” level of trust may require direct experience
of the system in operation over a longer period of time, especially if the system is
more complex. As suggested by others, if the system is observed in a relatively nar-
row operational domain, trust may develop quickly (Engström et al., 2018). However,
in this case, the driver may be left with little experience of the system performing at
the edges of its capabilities. With high confidence and limited experience, BA effects
may appear that look like misuse or over-reliance on the system.
REFERENCES
AAAFTS. (2008). Use of Advanced In-Vehicle Technology by Young and Older Early
Adopters. Washington, DC: AAA Foundation for Traffic Safety.
Anderson, J. R. (1993). Rules of the Mind. Hillsdale, NJ: Lawrence Erlbaum Associates.
Anderson, J. R. & Lebiere, C. (1998). The Atomic Components of Thought. Mahwah, NJ: Lawrence Erlbaum Associates.
Aschenbrenner, K. M., Biehl, B., & Wurm, G. W. (1988). Is Traffic Safety Improved Through
Better Engineering? Investigation of Risk Compensation with the Example of Antilock
Brake Systems [German]. Mannheim, Germany: BASt.
Ball, K., Owsley, C., Stalvey, B., Roenker, D. L., Sloane, M. E., & Graves, M. (1998).
Driving avoidance and functional impairment in older drivers. Accident Analysis and
Prevention, 30(3), 313–322.
Beggiato, M. & Krems, J. F. (2013). The evolution of mental model, trust and acceptance of
adaptive cruise control in relation to initial information. Transportation Research Part
F, 18, 47–57.
Beggiato, M., Pereira, M., Petzoldt, T., & Krems, J. (2015). Learning and development
of trust, acceptance and the mental model of ACC. A longitudinal on-road study.
Transportation Research Part F, 35, 75–84.
Bianchi Piccinini, G. F., Rodrigues, C. M., Leitão, M., & Simões, A. (2014). Driver’s behav-
ioral adaptation to adaptive cruise control (ACC): The case of speed and time headway.
Journal of Safety Research, 49, 77.e71–84.
Bianchi Piccinini, G. F., Rodrigues, C. M., Leitão, M., & Simões, A. (2015). Reaction to a
critical situation during driving with adaptive cruise control for users and non-users of
the system. Safety Science, 72, 116–126.
Brookhuis, K. & de Waard, D. (1999). Limiting speed, towards an intelligent speed adapter
(ISA). Transportation Research Part F: Traffic Psychology and Behaviour, 2(2), 81–90.
Cacciabue, P. C. & Saad, F. (2008). Behavioural adaptations to driver support systems: A
modelling and road safety perspective. Cognition, Technology & Work, 10(1), 31–39.
Carroll, J. M. & Olson, J. R. (1988). Mental models in human-computer interaction.
In M. Helander (Ed.), Handbook of Human-Computer Interaction (pp. 45–65).
Amsterdam: North-Holland.
Carsten, O. (2013). Early theories of behavioural adaptation. In C. M. Rudin-Brown &
S. L. Jamson (Eds.), Behavioural Adaptation and Road Safety - Theory, Evidence and
Action (pp. 23–34). Boca Raton, FL: CRC Press.
Carsten, O., Lai, F. C. H., Barnard, Y., Jamson, A. H., & Merat, N. (2012). Control task sub-
stitution in semiautomated driving: Does it matter what aspects are automated? Human
Factors, 54(5), 747–761.
Comte, S. L. (2000). New systems: New behaviour? Transportation Research Part F: Traffic
Psychology and Behaviour, 3(2), 95–111.
de Winter, J. C. F., Happee, R., Martens, M. H., & Stanton, N. A. (2014). Effects of adaptive
cruise control and highly automated driving on workload and situation awareness: A
review of the empirical evidence. Transportation Research Part F, 27, 196–217.
Dickie, D. A. & Boyle, L. N. (2009). Drivers’ understanding of adaptive cruise control limi-
tations. Proceedings of the Human Factors and Ergonomics Society Annual Meeting,
53(23), 1806–1810.
Eichelberger, A. H. & McCartt, A. T. (2014). Volvo drivers’ experiences with advanced crash
avoidance and related technologies. Traffic Injury Prevention, 15(2), 187–195.
Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human
Factors, 37(1), 32–64.
Engström, J., Bargman, J., Nilsson, D., Seppelt, B., Markkula, G., Piccinini, G. B., & Victor,
T. (2018). Great expectations: A predictive processing account of automobile driving.
Theoretical Issues in Ergonomics Science, 19(2), 156–194.
Ervin, R. D., Sayer, J., LeBlanc, D., Bogard, S., Mefford, M., Hagan, M., … Winkler, C.
(2005). Automotive Collision Avoidance System Field Operational Test Methodology
and Results, Volume 1: Technical Report (DOT HS 809 900). Washington, DC:
Department of Transportation.
Evans, L. (1985). Human behavior feedback and traffic safety. Human Factors, 27(5),
555–576.
Fuller, R. (1984). A conceptualization of driving behavior as threat avoidance. Ergonomics,
27(11), 1139–1155.
Fuller, R. (2000). The task-capability interface model of the driving process. Recherche -
Transports - Sécurité, 66, 47–57.
Fuller, R. (2005). Towards a general theory of driver behaviour. Accident Analysis &
Prevention, 37(3), 461–472.
Fuller, R. (2011). Driver control theory: From task difficulty homeostasis to Risk Allostasis.
In B. E. Porter (Ed.), Handbook of Traffic Psychology (pp. 13–26). San Diego, CA:
Academic Press.
Funke, G., Matthews, G., Warm, J. S., & Emo, A. K. (2007). Vehicle automation: A remedy
for driver stress? Ergonomics, 50(8), 1302–1323.
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive
Science, 7(2), 155–170.
Gibson, J. J. & Crooks, L. E. (1938). A theoretical field-analysis of automobile-driving.
American Journal of Psychology, 51, 453–471.
Gugerty, L. J. (2011). Situation awareness in driving. In J. Lee, M. Rizzo, D. L. Fisher, &
J. Caird (Eds.), Handbook for Driving Simulation in Engineering, Medicine and
Psychology. Boca Raton, FL: CRC Press.
Hart, S. G. & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results
of empirical and theoretical research. In P. A. Hancock & N. Meshkati (Eds.), Advances in Psychology: Human Mental Workload (pp. 139–183). Amsterdam: North-Holland.
Hoedemaeker, M. & Brookhuis, K. A. (1998). Behavioural adaptation to driving with an
adaptive cruise control (ACC). Transportation Research Part F, 1, 95–106.
Hoedemaeker, M. & Kopf, M. (2001). Visual sampling behaviour when driving with adap-
tive cruise control. Ninth International Conference on Vision in Vehicles (pp. 19–22).
Loughborough, UK: Applied Vision Research Centre.
Itoh, M. (2012). Toward overtrust-free advanced driver assistance systems. Cognition
Technology & Work, 14(1), 51–60.
Jamson, A. H., Merat, N., Carsten, O., & Lai, F. (2001). Fully-automated driving: The road
to future vehicles. 6th International Driving Symposium on Human Factors in Driver
Assessment, Training, and Vehicle Design. Lake Tahoe, CA.
Jamson, A. H., Merat, N., Carsten, O., & Lai, F. (2013). Behavioural changes in drivers expe-
riencing highly-automated vehicle control in varying traffic conditions. Transportation
Research Part C-Emerging Technologies, 30, 116–125.
Janssen, W. (1979). Routeplanning en geleiding: Een literatuurstudie [Dutch] (IZF 1979
C-13). Soesterberg, The Netherlands: Institute for Perception, TNO.
Jenness, J. W., Lerner, N. D., Mazor, S., Osberg, J. S., & Tefft, B. C. (2008). Use of Advanced
In-Vehicle Technology by Young and Older Early Adopters. Survey Results on Adaptive
Cruise Control Systems (DOT HS 810–917). Washington, DC: National Highway
Traffic Safety Administration.
Johnson-Laird, P. N. (1980). Mental models in cognitive science. Cognitive Science, 4(1),
71–115.
Kallberg, V.-P. (1993). Reflector posts - signs of danger? Transportation Research Record,
1403, 57–66.
Kinnear, N., Stradling, S., & McVey, C. (2008). Do we really drive by the seats of our pants? In L. Dorn (Ed.), Driver Behaviour and Training, Vol. III (pp. 349–365). Aldershot: Ashgate.
Lai, F. & Carsten, O. (2012). What benefit does Intelligent Speed Adaptation deliver: A
close examination of its effect on vehicle speeds. Accident Analysis and Prevention,
48, 4–9.
Lai, F., Hjälmdahl, M., Chorlton, K., & Wiklund, M. (2010). The long-term effect of intel-
ligent speed adaptation on driver behaviour. Applied Ergonomics, 41(2), 179–186.
Larsson, A. F. L. (2012). Driver usage and understanding of adaptive cruise control. Applied
Ergonomics, 43(3), 501–506.
Larsson, A. F. L., Kircher, K., & Andersson Hultgren, J. (2014). Learning from experi-
ence: Familiarity with ACC and responding to a cut-in situation in automated driving.
Transportation Research Part F: Traffic Psychology and Behaviour, 27, 229–237.
LeBlanc, D., Sayer, J., Winkler, C., Bogard, S., & Devonshire, J. (2007). Field test results
of a road departure crash warning system: driver utilization and safety implications.
Proceedings of the Fourth International Driving Symposium on Human Factors in
Driving Assessment, Training, and Vehicle Design. Stephenson, WA.
LeBlanc, D., Sayer, J., Winkler, C., Ervin, R., Bogard, S., Devonshire, J. … Gordon, T.
(2006). Road Departure Crash Warning System Field Operational Test: Methodology
and Results. Volume 1 (UMTRI-2006–9-1). Ann Arbor: University of Michigan
Transportation Research Institute.
Lee, J. D., McGehee, D. V., Brown, T. L., & Marshall, D. (2007). Effects of adaptive cruise
control and alert modality on driver performance. Transportation Research Record,
1980, 49–56.
Lewis-Evans, B., de Waard, D., & Brookhuis, K. (2013). Contemporary models of driver
adaptation. In C. M. Rudin-Brown & S. L. Jamson (Eds.), Behavioural Adaptation and
Road Safety (pp. 35–59). Boca Raton, FL: CRC Press.
Lewis-Evans, B. & Rothengatter, T. (2009). Task difficulty, risk, effort and comfort in a
simulated driving task—Implications for risk allostasis theory. Accident Analysis &
Prevention, 41(5), 1053–1063.
Lin, R., Ma, L., & Zhang, W. (2018). An interview study exploring Tesla drivers’ behavioural
adaptation. Applied Ergonomics, 72, 37–47.
Lindsay, P. H. & Norman, D. A. (1977). Human Information Processing: An Introduction to
Psychology (2d ed.). New York: Academic Press.
Llaneras, R. E., Salinger, J., & Green, C. A. (2013). Human factors issues associated with lim-
ited ability autonomous driving systems: Drivers’ allocation of visual attention to the
forward roadway. Proceedings of the 7th International Driving Symposium on Human
Factors in Driver Assessment, Training and Vehicle Design. Iowa City, IA: University
of Iowa.
Ma, R. & Kaber, D. B. (2005). Situation awareness and workload in driving while using adap-
tive cruise control and a cell phone. International Journal of Industrial Ergonomics,
35(10), 939–953.
Manser, M., Creaser, J., & Boyle, L. (2013). Behavioural adaptation: Methodological and mea-
surement issues. In C. M. Rudin-Brown & S. Jamson (Eds.), Behavioural Adaptation
and Road Safety (pp. 35–59). Boca Raton, FL: CRC Press.
Merat, N. & Jamson, A. H. (2009). How do drivers behave in a highly automated car? Fifth
International Driving Symposium on Human Factors in Driver Assessment, Training
and Vehicle Design. Iowa City, IA: University of Iowa.
Merat, N., Jamson, A. H., Lai, F. C. H., & Carsten, O. (2012). Highly automated driving,
secondary task performance, and driver state. Human Factors, 54(5), 762–771.
Merat, N., Jamson, A. H., Lai, F. C. H., Daly, M., & Carsten, O. M. J. (2014). Transition to
manual: Driver behaviour when resuming control from a highly automated vehicle.
Transportation Research Part F: Traffic Psychology and Behaviour, 27, 274–282.
Michon, J. A. (1979). Dealing with Danger (VK 79-01). Groningen, The Netherlands: Traffic
Research Center, University of Groningen.
Michon, J. A. (1985). A critical view of driver behavior models: What do we know, what
should we do? In L. Evans & R. C. Schwing (Eds.), Human Behavior and Traffic Safety
(pp. 485–520). New York: Plenum Press.
Muhrer, E., Reinprecht, K., & Vollrath, M. (2012). Driving with a partially autonomous for-
ward collision warning system: How do drivers react? Human Factors, 54(5), 698–708.
Näätänen, R. & Summala, H. (1974). A model for the role of motivational factors in drivers’
decision-making. Accident Analysis & Prevention, 6(3–4), 243–261.
National Highway Traffic Safety Administration. (2015). Critical Reasons for Crashes
Investigated in the National Motor Vehicle Crash Causation Survey (DOT HS 812
115). Washington, DC: National Highway Traffic Safety Administration.
Neubauer, C., Matthews, G., Langheim, L., & Saxby, D. (2012). Fatigue and voluntary utiliza-
tion of automation in simulated driving. Human Factors, 54(5), 734–746.
Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.
Newell, A. & Simon, H. A. (1972). Human Problem Solving. Englewood Cliffs, NJ:
Prentice-Hall.
Nilsson, L. (1995). Safety effects of adaptive cruise control in critical traffic situations.
Second World Congress on Intelligent Transport Systems: Vol. 3. Yokohama, Japan.
Nodine, E., Lam, A., Stevens, S., Razo, M., & Najm, W. G. (2011). Integrated Vehicle-Based
Safety Systems (IVBSS) Light Vehicle Field Operational Test: Independent Evaluation
(DOT HS 811 516). Washington, DC: National Highway Traffic Safety Administration.
OECD. (1990). Behavioural Adaptations to Changes in the Road Transport System (92-64-
13389-5). Paris: Organisation for Economic Co-operation and Development.
Parasuraman, R. & Riley, V. (1997). Humans and Automation: Use, misuse, disuse, abuse.
Human Factors, 39(2), 230–253.
Piccinini, G. F., Simões, A., Rodrigues, C. M., & Leitão, M. (2012). Assessing driver’s mental
representation of adaptive cruise control (ACC) and its possible effects on behavioural
adaptations. Work, 41, 4396–4401.
Ranney, T. A. (1994). Models of driving behavior: A review of their evolution. Accident
Analysis & Prevention, 26(6), 733–750.
Rasmussen, J. (1983). Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and
Cybernetics, 13(3), 257–266.
Reagan, I. J. & McCartt, A. T. (2016). Observed activation status of lane departure warning
and forward collision warning of Honda vehicles at dealership service centers. Traffic
Injury Prevention, 17(8), 827–832.
Reid, L. D. (1983). A survey of recent driver steering behavior models suited to accident stud-
ies. Accident Analysis and Prevention, 15(1), 23–40.
Rudin-Brown, C. M. (2010). ‘Intelligent’ in-vehicle intelligent transport systems: Limiting
behavioural adaptation through adaptive design. Intelligent Transport Systems, IET,
4(4), 252–261.
Rudin-Brown, C. M. & Noy, Y. I. (2002). Investigation of behavioral adaptation to lane depar-
ture warnings. Transportation Research Record, 1803, 30–37.
13 Distributed Situation
Awareness and
Vehicle Automation
Case Study Analysis and
Design Implications
Paul M. Salmon
University of the Sunshine Coast
Neville A. Stanton
University of Southampton
Guy H. Walker
Heriot-Watt University
CONTENTS
Key Points .............................................................................................................. 294
13.1 Introduction................................................................................................... 294
13.2 Situation Awareness ............................................................................... 296
13.2.1 Individual SA .................................................................................... 296
13.2.2 Team Models..................................................................................... 297
13.2.3 System Models................................................................................... 297
13.3 SA on the Road ............................................................................................. 299
13.4 SA and Automated Vehicles ......................................................................... 301
13.5 When DSA Breaks Down: Uber–Volvo Case Study..................................... 302
13.6 Uber–Volvo Incident Case Study .................................................................. 303
13.6.1 Analysis of DSA in the Events Leading Up to the Collision............ 304
13.6.1.1 Task Network...................................................................... 305
13.6.1.2 Social Network................................................................... 306
13.6.1.3 Information Network.......................................................... 308
13.7 Implications for Automated Vehicle Design................................................. 308
13.8 Conclusions.................................................................................................... 311
References .............................................................................................................. 315
KEY POINTS
• The nature of automated vehicle systems is such that there is a need to move
beyond simply considering the situation awareness (SA) needs of human
road users to also focus on the SA needs of automated vehicles, infrastruc-
ture, and indeed the overall road transport system.
• The Distributed Situation Awareness (DSA) perspective has important
ramifications for the design and implementation of automated vehicles and
the road systems in which they will operate.
• Automated vehicles will have their own SA; they will be required to exchange SA with vehicle operators; and human agent SA (e.g., drivers, vehicle operators) and non-human SA (e.g., automated vehicles, infrastructure) will have to connect to support transportation.
• The Event Analysis of Systemic Teamwork (EAST) framework provides an
integrated suite of methods for analyzing behavior, and specifically DSA,
in complex systems.
• DSA frameworks could be usefully applied to the design, testing, and
implementation of automated vehicles.
13.1 INTRODUCTION
Whilst all road collisions are caused by multiple interacting factors, one aspect is con-
stant across them—almost always at least one of the road users involved is momen-
tarily not aware of something important, be it other road users, the road conditions,
hazards in the environment, or the safest way to negotiate a particular road situation
(Salmon, Read, Walker, Lenne, & Stanton, 2018). Within human factors and safety
science, the concept that we use to study and optimize awareness in complex and
dynamic environments is known as “Situation Awareness” (SA; Endsley, 1995a; also
see Chapter 7 in this Handbook). Based on over 30 years of applied research there are
now various theoretical models and analysis methods that can be used to understand
how humans, teams, organizations, and even entire systems develop an appropriate
understanding of “what is going on” (Endsley, 1995a). SA has become an important
lens through which to view and understand behavior and has been applied to support
the design of tools, technologies, procedures, and environments aiming to optimize
human performance in many areas (Salmon & Stanton, 2013; Wickens, 2008).
Whilst it may seem obvious that SA requirements should be a critical consider-
ation during the design of automated and autonomous vehicles, and the road sys-
tems in which they operate, such consideration is not always undertaken. Indeed, in relation to automated vehicles specifically, there are concerns that SA is one in a long list of
human factors concepts that may not be receiving the attention it warrants (Banks &
Stanton, 2016; Hancock, 2019; Salmon et al., 2018). Despite the projected safety
benefits of automated vehicles, it has been argued that the period between now and
fully automated driving will be particularly problematic and could in fact lead to
an increase in road crashes (Banks, Plant, & Stanton, 2018; Hancock, 2017, 2019;
Salmon, 2019). There are various reasons for this (see Hancock, 2019), one of which
is a failure to fully consider the SA requirements of road users, vehicles, and the road system overall. In this chapter, we argue that the Distributed Situation Awareness (DSA) perspective has important ramifications for the design and implementation of automated vehicles and the road
systems in which they will operate. Accordingly, we provide an overview of the DSA
model and discuss the implications for automated vehicle design. To demonstrate
some of the core tenets of DSA we present an analysis of the recent Uber–Volvo col-
lision, in which an automated test vehicle collided with a vulnerable road user, and
close with a series of implications and future research requirements.
13.2 SITUATION AWARENESS
At its broadest level of description, SA refers to how agents, human or non-human,
develop and maintain an understanding of “what is going on” around them (Endsley,
1995a; this Handbook, Chapter 7). Depending on the theoretical and methodological
approaches employed, SA models and methods are used by researchers and practitioners to describe, measure, and design for awareness in complex and dynamic systems.
There are many definitions and models presented in the literature. These can be
broadly categorized as those relating to the SA held by individuals, teams, and socio-
technical systems (STS). For a detailed review and comparison of models, the reader
is referred to Salmon et al. (2008) and Stanton et al. (2017). A brief overview of
popular definitions and models is given below.
13.2.1 Individual SA
Early definitions and models of SA focused on individual operators (e.g., drivers,
pilots, control room operators) and the cognitive processes involved in developing
and maintaining the awareness required to complete relevant tasks (e.g., driving).
Mica Endsley, a pioneer in this area, introduced the most widely known and used
definition, which describes SA as “the perception of the elements in the environment
within a volume of time and space, the comprehension of their meaning and a projec-
tion of their status in the near future” (Endsley, 1988).
Endsley (1995a) outlined an information processing-based “three-level model”
of SA (also see Chapter 7). This describes SA as an individual’s understanding of
the ongoing situation that encompasses three levels: Level 1, perception of the ele-
ments in the environment; Level 2, comprehension of their meaning; and Level 3,
projection of future system states. The three-level model describes how SA is a
central component of information processing that underpins decision-making and
action. Within this model SA is influenced by various factors, including individual
(e.g., mental models, workload), task (e.g., difficulty and complexity), and system
factors (e.g., system complexity, interface design).
Level 1 SA involves perceiving the status, attributes, and dynamics of task-
related elements in the surrounding environment (Endsley 1995a). Level 2 SA
involves interpreting this data to understand its relevance in relation to one’s
goals. Level 3 SA involves anticipating the likely behavior of different elements
in the environment. Levels 1 and 2 SA are used along with mental models of similar
situations to forecast likely events. Endsley describes how mental models play a
critical role in SA, directing attention to pertinent elements in the environment
(Level 1 SA), facilitating the integration of elements to aid comprehension (Level
2 SA), and supporting the generation of future states and behaviors (Level 3 SA;
see also Chapter 7).
In the first article that initiated this paradigm shift, Neville Stanton and colleagues
defined distributed SA (DSA) as “activated knowledge for a specific task within a
system…. [and] the use of appropriate knowledge (held by individuals, captured by
devices, etc.) which relates to the state of the environment and changes as the situa-
tion develops” (Stanton et al., 2006, p. 1291).
Stanton et al. (2006) outlined a model of DSA, inspired by Hutchins' (1995a, b)
seminal work on distributed cognition, which argues that SA is an emergent property
that is held by the overall system and is built through interactions between “agents,”
both human (e.g., human operators) and non-human (e.g., tools, documents, displays).
Whilst Hutchins’ work on distributed cognition describes how information process-
ing generally can be undertaken at a systems level by groups of individual agents and
artifacts, Stanton et al.’s model focuses explicitly on SA, arguing similarly that SA
can transcend individuals and be held by a system.
Stanton et al. (2017) recently updated the core tenets of the DSA model.
TABLE 13.1
Automated Vehicle System Design SA Requirements

Human driver SA requirements: the information that the human driver needs to develop and maintain the SA required to safely drive the vehicle. Examples: current speed and speed limit; route and directions; location and actions of other road users; hazards.

Human operator SA requirements: the information that the human operator needs to supervise the vehicle in driverless mode and to understand when they are required to take over control of the vehicle. Examples: automation mode; requirement to take over control of the vehicle; location and actions of other road users; hazards.

Automated vehicle SA requirements: the information that the vehicle and its ADAS require to operate automatically in a safe and efficient manner. Examples: current speed and speed limit; route and directions; location and actions of other road users; hazards.

Other road user SA requirements: the information that other road users require to understand what mode the automated vehicle is in, what it is doing, what it is aware of, and what it will do next. Examples: automation mode; intended path; awareness of surrounding road users.

Other automated vehicle SA requirements: the information that other automated vehicles require to understand what mode the automated vehicle is in, what it is doing, what it is aware of, and what it will do next. Examples: automation mode; intended path; awareness of surrounding road users.

Infrastructure SA requirements: the information that intelligent road infrastructure requires to understand what mode the automated vehicle is in, what it is doing, what it is aware of, and what it will do next. Examples: automation mode; intended path; awareness of surrounding road users.
In the widely reported Tesla collision with a crossing truck, for example, both the vehicle and its driver were ostensibly not aware of the truck and the risk of colliding with it, and the
driver was seemingly not aware of the need to take over control of the vehicle. DSA
provides a useful approach to respond to issues such as this by allowing designers to
consider the SA needs of both the driver and the vehicle as well as how SA can be
transacted between the driver and their vehicle and between vehicles (e.g., the truck
and the Tesla).
More recently a Volvo fitted with Uber’s self-driving system struck and killed a
pedestrian in Tempe, Maricopa County, Arizona during operational testing (NTSB,
2018). Below we examine this incident through a DSA lens in order to articulate
some critical DSA design requirements for automated vehicles. Although the full
report was unavailable at the time of writing this chapter, there was sufficient infor-
mation within the preliminary report (NTSB, 2018) to undertake a preliminary
DSA-based analysis.
FIGURE 13.1 (See color insert.) Mill Avenue on approach to the scene of the Uber–Volvo collision. (Source: Google Maps.)
According to the preliminary report, the self-driving system did not initiate an emergency braking maneuver, and the test vehicle operator failed to intervene until it was too late, only seeing the pedestrian immediately prior to impact.
The test vehicle was equipped with Uber’s developmental self-driving sys-
tem which comprised forward- and side-facing cameras, radars, Light Detection
and Ranging (LIDAR), navigation sensors, and a computing and data storage unit
(NTSB, 2018). Within the vehicle, a monitor located on the center console presented
diagnostic information to the vehicle operator. The test vehicle had two control
modes: computer and manual. It was also equipped with various Volvo driver assis-
tance functions, including the City Safety™ system, which provides collision avoid-
ance and automatic emergency braking. These functions, however, were disabled because the vehicle was being driven in the computer control mode and to avoid an erratic ride, such as the vehicle braking when objects in its path were falsely detected.
At the time of the collision (approximately 9:58 p.m.), the vehicle had been under com-
puter control for around 19 minutes and was negotiating the second loop of an estab-
lished test route. According to the NTSB report, the Uber system first registered
radar and LIDAR observations of the pedestrian around 6 seconds prior to impact.
The self-driving system initially classified the pedestrian as an unknown object and
then as a bicycle, but could not identify an intended path. Around 1.3 seconds before
impact, the self-driving system determined that an emergency braking maneuver
was required in order to avoid a collision. Such a maneuver could not be initiated by
the vehicle under computer control due to the City Safety™ system being disabled
and so no braking action was initiated.
The vehicle operator only noticed the pedestrian and intervened less than a sec-
ond before impact. Initially she engaged the steering wheel, but did not brake until
after the vehicle hit the pedestrian. More recent accounts of the incident have sug-
gested that the vehicle operator was watching the Hulu streaming service on her
mobile phone (Stanton, Salmon, Walker, & Stanton, 2019a). The role of the vehicle
operator was to observe the vehicle and to note events of interest on a central tablet.
Vehicle operators were also supposed to monitor the environment for hazards and to
regain control of the vehicle in the event of an emergency.
EAST and its component methods have been used extensively to examine inci-
dents involving DSA failures (Griffin, Young, & Stanton, 2010; 2015; Salmon et al.,
2016). Using the NTSB preliminary reports, as well as other relevant documentation,
we constructed task, social, and information networks for the events leading up to
the Uber–Volvo collision. Specifically, the analysis focused on the behavior of the
vehicle operator, the test vehicle, and the pedestrian. Construction of the networks
involved working through the materials to identify tasks, interactions, and informa-
tion relating to the behavior of the three core agents during the events leading up
to the collision. This was undertaken initially by two of the authors (PS and NS), following which the third author (GW) reviewed the networks, which were then refined accordingly.
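As a rough illustration of what such networks look like as data structures (our own sketch, not the authors' tooling; node and agent names abbreviate those in Figures 13.3 through 13.5):

```python
# Task network: directed links between tasks (a small, assumed subset).
task_network = {
    "find safe place to cross": ["check road for traffic"],
    "check road for traffic": ["cross road"],
    "monitor road environment": ["detect obstacles"],
    "detect obstacles": ["provide warnings", "take over control"],
}

# Social network: SA transactions between agents (subset; one link failed).
social_network = [
    ("radar", "self-driving system"),
    ("LIDAR", "self-driving system"),
    ("self-driving system", "vehicle operator"),  # the alert that never came
]

# Information network: who held each piece of information, and whether the
# resulting DSA was adequate (flags reflect the analysis in the text).
information_network = {
    "obstacle (ped)": {"held_by": ["self-driving system"], "adequate": False},
    "classification": {"held_by": ["self-driving system"], "adequate": False},
    "automation mode": {"held_by": ["vehicle operator"], "adequate": True},
}

# Example query: which information nodes underpinned inadequate DSA?
print([n for n, v in information_network.items() if not v["adequate"]])
# -> ['obstacle (ped)', 'classification']
```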
13.6.1.1 Task Network
The task network is presented in Figure 13.3, where the square nodes represent the
tasks that were, and should have been, undertaken and the lines linking the nodes
represent their interrelations where tasks were either undertaken together, sequen-
tially, or influenced one another. Shading of the nodes is used to represent which
agent was associated with each task.
Seemingly 7 of the 15 tasks within the task network were either not undertaken at
all or were performed inadequately, leading to sub-optimal DSA and the subsequent
collision. For the pedestrian, these included finding a safe place to cross and check-
ing the road for traffic. For the vehicle operator, these included their monitoring of
the road environment, detection of obstacles, and takeover of control of the vehicle.
For the automated vehicle and its sub-systems, the inadequate tasks included the
detection of obstacles, provision of warnings, and monitoring of driver alertness.
Notably, many of the tasks that were not undertaken or were undertaken inadequately
represent information gathering tasks or information communications tasks that are
critical to DSA for this system. Further, the fact that the behaviors or omissions that
led to sub-optimal DSA were distributed across the vehicle, vehicle operator, and pedestrian/cyclist emphasizes the systemic nature of DSA. A final interesting feature of the task network is the “Watch TV via mobile phone” node: the TV show became an inappropriate part of DSA, overriding and degrading other important aspects of DSA.

FIGURE 13.3 Task network for the events leading up to the Uber–Volvo collision. Nodes represent tasks (find safe place to cross the road; check road for traffic; cross road; monitor instrument cluster; monitor road environment; monitor behaviour of vehicle; control vehicle; monitor Uber display; tag events of interest; listen for audible warnings; provide warnings; monitor driver alertness; watch TV via mobile phone), and shading indicates the associated agent: pedestrian, vehicle operator, autonomous vehicle, or vehicle operator and autonomous vehicle jointly.
13.6.1.2 Social Network
Whilst the task network includes the three primary agents of the vehicle, the vehi-
cle operator, and the pedestrian/cyclist, the social network presented in Figure 13.4
decomposes the system further to include 20 human and non-human agents. SA was
thus distributed across these agents and was further exchanged between them
throughout the incident. The lines connecting the agents show the communications
pathways along which SA was exchanged between agents.
As shown in Figure 13.4, the relevant agents include the human road users (the
vehicle operator and the pedestrian/cyclist), the test vehicle and associated sub-agents
(e.g., radars, LIDAR, automatic emergency braking system), and road and road infra-
structure agents (e.g., the road and road signage). According to the analysis, various
failures in the transaction of SA between agents played a role in the incident. For
example, whilst the vehicle’s radar detected the pedestrian/cyclist (as represented by
the link in Figure 13.4), the pedestrian/cyclist ostensibly did not detect the vehicle
itself. These failed SA transactions are described in Table 13.2.
TABLE 13.2
SA Transaction Failures

Transaction Failure Type | Transaction Required | Agents Involved
Absent transaction | Warning the pedestrian not to cross at chosen areas | Road signage, pedestrian
Absent transaction | Pedestrian to see approaching vehicle | Pedestrian, vehicle
Delayed transaction | Vehicle to detect and classify pedestrian | Pedestrian, vehicle
Absent transaction | Vehicle to determine pedestrian’s intended path | Pedestrian, vehicle
Delayed transaction | Test operator to detect pedestrian | Pedestrian, test operator
Absent transaction | Vehicle to alert test operator of obstacle | Vehicle, test operator
Absent transaction | Vehicle to monitor test operator state | Vehicle, test operator
Inappropriate transaction | Mobile phone showing TV show to test operator | Mobile phone, test operator
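The failure types in Table 13.2 lend themselves to a simple classification rule, given the transactions a system requires and the times at which they actually occurred. The sketch below is our own illustration; the timings follow the NTSB timeline (seconds relative to impact, negative meaning before impact), and the deadlines are assumptions.

```python
def classify_transaction(observed_at, needed_by):
    """Classify a required SA transaction.
    Times are seconds relative to impact (negative = before impact).
    'Inappropriate' transactions (e.g., the TV show) are a separate category
    for transactions that should not have occurred at all."""
    if observed_at is None:
        return "absent transaction"
    if observed_at > needed_by:
        return "delayed transaction"
    return "timely transaction"

# Vehicle alerting the operator: never occurred (braking need known at -1.3 s).
print(classify_transaction(observed_at=None, needed_by=-1.3))
# -> absent transaction

# Operator detecting the pedestrian: roughly -0.9 s, versus the system's
# first detection at roughly -6 s (our assumed deadline).
print(classify_transaction(observed_at=-0.9, needed_by=-6.0))
# -> delayed transaction
```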
13.6.1.3 Information Network
The information network is presented in Figure 13.5, where the nodes represent the
information that underpinned DSA during the incident. Nodes are shaded in gray
in the figure below (in red in the color insert) to show where DSA was ostensi-
bly inadequate. These include instances where one or multiple agents did not have
the information as required (e.g., vehicle operator not aware of the pedestrian as
shown by the shaded “Obstacle (ped)” node), where the information was understood
but was incorrect (e.g., the misclassification of the obstacle as shown by the shaded
“Classification” node), or where the information being used was inappropriate (e.g., the TV show The Voice, as shown by the shaded “The Voice” node). The other non-shaded
nodes within the network represent information that was available and understood
by the appropriate agent at the appropriate time.
Figure 13.5 shows that there were various DSA failures involved in the incident.
Notably, these failures span the three agents involved in the incident. The pedestrian/
cyclist was either unaware of the signage warning road users not to cross the road from
the center median strip crosswalk or chose to ignore it. The test vehicle was initially not
aware what type of road user the pedestrian/cyclist was and was then unable to iden-
tify an intended path. The vehicle operator was initially unaware that a pedestrian was
occupying the road ahead, and did not receive any warnings from the vehicle once it had
detected the pedestrian. Finally, the vehicle operator was allegedly watching a stream-
ing television show on their mobile phone, which represents inappropriate information
that was not required for tasks related to vehicle operation and monitoring. Notably, the
vehicle was not aware of the vehicle operator’s actions, resulting in a lack of SA.
FIGURE 13.5 (See color insert.) Information network. Nodes are shaded in gray to show where agents did not have the information as required, where the information was understood but was incorrect, or where the information being used was inappropriate.
TABLE 13.3
Examples of DSA Tenets within the Uber–Volvo Collision and Implications for Automated Vehicle Design

DSA Tenet: SA is an emergent property of STS
Example in Uber–Volvo collision: The SA required to ensure that there was a safe interaction between the vehicle operator, test vehicle, and pedestrian could only be achieved through interactions between 20 human and non-human agents (Social network, Figure 13.4).
Implications for automated vehicle design:
• Designers should consider SA at a systems level, and all human and non-human agents’ contributions to it.
• Designers need to consider how information and SA are best exchanged between agents.
• Designers need to consider what information should be exchanged between agents, when it needs to be exchanged, and in what format.

DSA Tenet: SA is distributed across human and non-human agents
Example in Uber–Volvo collision: SA was distributed across 20 human and non-human agents (Social network, Figure 13.4).
Implications for automated vehicle design:
• Designers should consider the SA needs of both human and non-human agents.

DSA Tenet: Systems have a dynamic network of information, on which each agent has their own view and to which each agent contributes
Example in Uber–Volvo collision: The vehicle operator, test vehicle, and pedestrian were each using different combinations of information during the events leading up to the incident (Information network, Figure 13.5).
Implications for automated vehicle design:
• Designers should develop predictive models of the overall system’s SA network and ensure that all of the information is available, in the right format, at appropriate times.

DSA Tenet: DSA is maintained via transactions between agents
Example in Uber–Volvo collision: Various transactions in SA were made between 20 human and non-human agents (Social network, Figure 13.4), and many of the tasks required involved transactions in SA between these agents (Task network, Figure 13.3). A primary contributory factor was a failure of the vehicle to alert the vehicle operator to the presence of the pedestrian.
Implications for automated vehicle design:
• Designers need to consider how SA is best transacted between agents, including what needs to be transacted, when, and in what format.
• Designers need to identify the SA requirements of all human and non-human agents.

DSA Tenet: Compatible SA is required for systems to function effectively
Example in Uber–Volvo collision: The SA held by each of the three primary agents (vehicle operator, test vehicle, and pedestrian) was incompatible (Task, Social, and Information networks, Figures 13.3–13.5).
Implications for automated vehicle design:
• Designers need to ensure that the automated vehicle’s SA is compatible with that of other road users, vehicles, and infrastructure in different road environments.
13.8 CONCLUSIONS
The aim of this chapter was to provide an overview of the DSA model of SA and
discuss some of the implications for automated vehicle design. In doing so, we pre-
sented a preliminary analysis of the recent Uber–Volvo collision and extracted relevant implications for automated vehicle design that relate to the core tenets of the DSA model.
Automated vehicles will be highly disruptive. The next decade or so, therefore,
represents a critical period for automated vehicles and road safety. Whilst there is an
opportunity to create the safest and most efficient road transport systems of all time,
there is also an opportunity to reverse decades of progress and create even more cha-
otic, unsafe, and congested road transport systems (Hancock, 2019; Salmon, 2019).
SA, and specifically DSA, can play a key role in ensuring that we achieve the former
and not the latter. A failure to consider DSA requirements and test DSA through-
out the design life cycle will lead to automated vehicles, infrastructure, and road
users that suffer losses of SA. In turn, this will create new forms of collision whereby
the road transport system’s DSA is insufficient to support safe interactions between
road users. On the contrary, appropriate consideration and testing of DSA will help
create efficient automated vehicles that are able to appropriately exchange SA with
their operators, other road users, other vehicles, and the road infrastructure. A DSA-
utopia on our roads is by no means out of reach. Of course, whether a DSA-utopia
will optimize safety and efficiency in road transport systems is a question that also
warrants further exploration.
How then can this be achieved? To conclude this chapter, we present a framework
to support DSA-based design in road transport (Salmon et al., 2018, see Figure 13.6).
This framework provides an overview of the kinds of analyses required to ensure
that DSA can be understood and catered for during automated vehicle design life
cycles. The framework includes the use of on-road naturalistic studies to examine
DSA and road user behavior in existing road environments, the use of systems analy-
sis methods such as Cognitive Work Analysis (CWA; Vicente, 1999) to identify key
design requirements, the use of STS theory and an STS-Design Toolkit (STS-DT,
Read, Beanland, Lenne, Stanton, & Salmon, 2017) to generate new design concepts,
and finally the use of various evaluation approaches to evaluate design concepts.
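Rendered as an ordered pipeline (our reading of the framework; the stage labels below are paraphrased assumptions, not quoted from Figure 13.6):

```python
# The DSA-based design framework as an ordered sequence of stages; each
# pairs a goal with the method the text suggests for it.
FRAMEWORK_STAGES = [
    ("Understand DSA in existing road environments",
     "on-road naturalistic studies"),
    ("Identify key design requirements",
     "Cognitive Work Analysis (CWA)"),
    ("Generate new design concepts",
     "STS theory and the STS Design Toolkit (STS-DT)"),
    ("Evaluate design concepts",
     "various evaluation approaches (e.g., modeling, on-road studies)"),
]

for step, (goal, method) in enumerate(FRAMEWORK_STAGES, start=1):
    print(f"{step}. {goal}: {method}")
```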
It is recommended that the framework be used to support consideration of DSA in
automated vehicle design. Although a significant body of driving automation research
exists, knowledge gaps create a pressing need for further work. Previous studies have
typically adopted one methodological approach (e.g., driving simulation) or have
focused on one issue in isolation (e.g., automation failure, handover of control) when
testing the safety risks associated with vehicle automation. For example, driving sim-
ulator projects have focused on the ability of drivers to regain manual control follow-
ing automation failures (e.g., Stanton, Young, & McCaulder, 1997). Many key areas
have been neglected, including the interaction of automated vehicles with vulnerable
road users (e.g., cyclists and pedestrians), the interaction of automated vehicles with
one another (e.g., vehicles designed by different manufacturers with differing algo-
rithms and control philosophies), and the levels of SA held by automated vehicles in
different scenarios. In addition, we do not know how vehicles with advanced levels of
automation will interact with drivers operating vehicles without automation (and vice
versa). This has prevented the full gamut of risks and emergent behaviors from being
identified, which in turn means we do not currently understand the full potential
impact of different levels of driving automation on behavior and safety.
It is our view that the framework presented in Figure 13.6 could be usefully
applied to the design, testing, and implementation of automated vehicles (see also,
this Handbook, Chapter 22). This could involve design and testing via modeling
(e.g., with EAST and CWA) or testing via on-road studies involving both automated
and non-automated vehicles. DSA-based research in this area will ensure that the
Distributed Situation Awareness and Automation
SA needs of road users and automated vehicles are considered, and that appropri-
ate analyses are undertaken to identify both design requirements and some of the
emergent issues that might arise. Appropriate testing and revision of design concepts
would then ensure that DSA requirements are met. Some future crashes could thus be prevented before they happen through more appropriate design. In the case of the recent
Uber–Volvo collision, the SA requirements of the vehicle as well as the vehicle
operator and other road users would be considered, meaning that the vehicle would
provide adequate warning to the operator regarding its detection of the pedestrian.
It is likely, for example, that a pro-active EAST modeling exercise would identify
instances where the automated vehicle may detect hazards in the road environment,
but where the vehicle operator may not. Based on this, appropriate design interven-
tions could be made.
It is important to acknowledge that the consideration of DSA requirements in
design should go beyond road users, automated vehicles, and the road environment.
It is recommended that further study consider a larger component of road transport systems. For example, Salmon et al. (2016) recently developed a complex control structure model for the road transport system in Queensland, Australia, which
included a description of all of the agents involved in road transportation, ranging
from drivers and vehicles all the way up to government and international agencies.
Each of these agents has DSA requirements relating to the safe operation of road transport systems. Indeed, the road transport system’s DSA incorporates the SA held
by the police, traffic management centers, the media, road safety agencies, licensing
authorities, insurance companies, government, etc. In particular, agents at the higher
levels of road transport systems (e.g., road safety authorities) have to constantly mon-
itor the state of the road transport system and use feedback on how it is operating to
inform the development of road safety strategy, programs, and interventions. This
system-level view could also be applied to other important cognitive mechanisms
such as cognitive workload, mental models, and trust.
Banks et al. (2018) recently moved toward this form of systemic DSA analysis by
applying EAST to examine DSA in future automated vehicle-based road transport
systems. EAST is more useful in this context as it explicitly considers DSA, whereas
Systems-Theoretic Accident Model and Processes (STAMP) examines control and
feedback mechanisms (Leveson, 2004). Banks et al. (2018) concluded that, in future
connected and autonomous vehicle (CAV)-based road transport systems, most of the
agents within the system, including vehicles and infrastructure, will be connected. As
a result, it is suggested that human factors issues will not be eliminated and, in fact,
they will become more important. Banks et al. (2018) recommended EAST as a frame-
work that can enable researchers to visualize the impact of automated vehicles on a
much larger scale and to implement STS design principles to optimize performance.
It is also important to note that taking a systems perspective on DSA will enable
wider system reforms that will contribute to the efficacy and safety of automated
vehicles. By analyzing and understanding DSA at the overall road transport system
level it will be possible to identify interventions beyond automated vehicle design.
For example, modifications to road safety strategy, road rules and regulations, vehicle
design guidelines and standards, enforcement tools and practices, and licensing and
registration will all be required. Accordingly, we recommend further studies of DSA
across current and future road transport systems. (Again, the reader is referred to
a complementary chapter on the importance of systems analysis in this Handbook,
Chapter 19).
Limitations of the DSA approach are worth noting. Attempting to identify the
DSA requirements of multiple agents within a system is both difficult and resource
intensive. In addition, EAST and the DSA model are not typically used to quantita-
tively assess the quality of DSA. As a result, other methods of assessing SA, such as
the SA Global Assessment Technique (Endsley, 1995b), should also be used as part
of a multi-methods approach.
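For instance, a freeze-probe SA query of the kind used by SAGAT can be scored by comparing participant answers against simulation ground truth. The sketch below is a minimal illustration only; the query names, values, and tolerances are our assumptions, not the SAGAT protocol itself.

```python
# Ground truth captured at the simulation freeze, and one participant's answers.
ground_truth = {"ego speed (km/h)": 62, "vehicles ahead": 2, "signal state": "red"}
responses    = {"ego speed (km/h)": 60, "vehicles ahead": 2, "signal state": "green"}
tolerance    = {"ego speed (km/h)": 5}  # numeric queries allow some error

def correct(query, answer):
    """A response scores as correct if it matches ground truth within tolerance."""
    truth = ground_truth[query]
    if isinstance(truth, (int, float)):
        return abs(answer - truth) <= tolerance.get(query, 0)
    return answer == truth

score = sum(correct(q, a) for q, a in responses.items())
print(f"SA score: {score}/{len(responses)}")  # -> SA score: 2/3
```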
REFERENCES
Banks, V. A., Plant, K. L., & Stanton, N. A. (2018). Driver error or designer error: Using the
perceptual cycle model to explore the circumstances surrounding the fatal Tesla crash
on 7th May 2016. Safety Science, 108, 278–285. doi:10.1016/j.ssci.2017.12.023
Banks, V. A. & Stanton, N. A. (2016). Keep the driver in control: Automating automobiles of
the future. Applied Ergonomics, 53, 389–395.
Endsley, M. R. (1988). Situation awareness global assessment technique (SAGAT).
Proceedings of the National Aerospace and Electronics Conference (NAECON)
(pp. 789–795). New York: IEEE.
Endsley, M. R. (1995a). Toward a theory of situation awareness in dynamic systems. Human
Factors, 37, 32–64.
Endsley, M. R. (1995b). Measurement of situation awareness in dynamic systems. Human
Factors, 37, 65–84.
Endsley, M. R. (2015). Situation awareness misconceptions and misunderstandings. Journal
of Cognitive Engineering and Decision Making, 9, 4–32.
Endsley, M. R., Bolte, B., & Jones, G. D. (2003). Designing for Situation Awareness: An
Approach to User-Centered Design. Boca Raton, FL: CRC Press.
Endsley, M. R. & Jones, W. M. (2001). A model of inter- and intra-team situation awareness:
Implications for design, training and measurement. In M. McNeese, E. Salas, & M. Endsley
(Eds.), New Trends in Cooperative Activities: Understanding System Dynamics in
Complex Environments. Santa Monica, CA: Human Factors and Ergonomics Society.
Griffin, T. G. C., Young, M. S., & Stanton, N. A. (2010). Investigating accident causation
through information network modelling. Ergonomics, 53, 198–210.
Griffin, T. G. C., Young, M. S., & Stanton, N. A. (2015). Human Factors Modelling in
Aviation Accident Analysis and Prevention. Aldershot, UK: Ashgate.
Hancock, P. A. (2017). Imposing limits on autonomous systems. Ergonomics, 60, 284–291.
Hancock, P. A. (2019). Some pitfalls in the promises of automated and autonomous vehicles.
Ergonomics, 62, 479–495.
Hutchins, E. (1995a). Cognition in the Wild. Cambridge, MA: MIT Press.
Hutchins, E. (1995b). How a cockpit remembers its speeds. Cognitive Science, 19, 265–288.
Leveson, N. G. (2004). A new accident model for engineering safer systems. Safety Science,
42(4), 237–270.
National Highway Traffic Safety Administration. (2016). Office of Defects Investigation: PE
16–007. Retrieved from https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF
National Transportation Safety Board. (2018). Preliminary Report Highway HWY18MH010.
Retrieved from www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-
prelim.pdf
Neisser, U. (1976). Cognition and Reality: Principles and Implications of Cognitive
Psychology. San Francisco, CA: Freeman.
316 Human Factors for Automated Vehicles
Read, G. J. M., Beanland, V., Lenne, M. G., Stanton, N. A., & Salmon, P. M. (2017).
Integrating Human Factors Methods and Systems Thinking for Transport Analysis and
Design. Boca Raton, FL: CRC Press.
Salas, E., Prince, C., Baker, D. P., & Shrestha, L. (1995). Situation awareness in team perfor-
mance: Implications for measurement and training. Human Factors, 37, 123–136.
Salmon, P. M. (2019). The horse has bolted! Why human factors and ergonomics has to catch
up with autonomous vehicles (and other advanced forms of automation). Ergonomics,
62, 502–504.
Salmon, P. M., Lenne, M. G., Walker, G. H., Stanton, N. A., & Filtness, A. (2014). Exploring
schema-driven differences in situation awareness across road users: An on-road study
of driver, cyclist and motorcyclist situation awareness. Ergonomics, 57, 191–209.
Salmon, P. M., Read, G. J. M., Walker, G. H., Lenne, M. G., & Stanton, N. A. (2018).
Distributed Situation Awareness in Road Transport: Theory, Measurement, and
Application to Intersection Design. Boca Raton, FL: CRC Press.
Salmon, P. M. & Stanton, N. A. (2013). Situation awareness and safety: Contribution or con-
fusion? Safety Science, 56, 1–5.
Salmon, P. M., Stanton, N. A., Walker, G. H., Baber, C., Jenkins, D. P., & McMaster, R.
(2008). What really is going on? Review of situation awareness models for individuals
and teams. Theoretical Issues in Ergonomics Science, 9, 297–323.
Salmon, P. M., Stanton, N. A., Walker, G. H., & Jenkins, D. P. (2009). Distributed Situation
Awareness: Advances in Theory, Measurement and Application to Teamwork.
Aldershot, UK: Ashgate.
Salmon, P. M., Walker, G. H., & Stanton, N. A. (2015). Broken components versus broken sys-
tems: Why it is systems not people that lose situation awareness. Cognition, Technology
and Work, 17, 179–183.
Salmon, P. M., Walker, G. H., & Stanton, N. A. (2016). Pilot error versus sociotechnical sys-
tems failure? A distributed situation awareness analysis of Air France 447. Theoretical
Issues in Ergonomics Science, 17, 64–79.
Seeley, T. D., Visscher, P. K., Schlegel, T., Hogan, P. M., Franks, N. R., & Marshall, J. A.
(2012). Stop signals provide cross inhibition in collective decision-making by honeybee
swarms. Science, 335(6064), 108–111.
Smith, K. & Hancock, P. A. (1995). Situation awareness is adaptive, externally directed
consciousness. Human Factors, 37, 137–148.
Sorensen, L. J. & Stanton, N. A. (2015). Exploring compatible and incompatible transactions
in teams. Cognition, Technology and Work, 17, 367–380.
Sorensen, L. J. & Stanton, N. A. (2016). Keeping it together: The role of transactional situa-
tion awareness in team performance. International Journal of Industrial Ergonomics,
53, 267–273.
Stanton, N. A., Salmon, P. M., & Walker, G. H. (2015). Let the reader decide: A paradigm shift
for situation awareness in sociotechnical systems. Journal of Cognitive Engineering
and Decision Making, 9, 44–50.
Stanton, N. A., Salmon, P. M., & Walker, G. H. (2018). Systems Thinking in Practice:
Applications of the Event Analysis of Systemic Teamwork Method. Boca Raton, FL:
CRC Press.
Stanton, N. A., Salmon, P. M., Walker, G. H., & Jenkins, D. P. (2009). Genotype and pheno-
type schema and their role in distributed situation awareness in collaborative systems.
Theoretical Issues in Ergonomics Science, 10, 43–68.
Stanton, N. A., Salmon, P. M., Walker, G. H., & Jenkins, D. P. (2010). Is situation awareness
all in the mind? Theoretical Issues in Ergonomics Science, 11, 29–40.
Stanton, N. A., Salmon, P. M., Walker, G. H., Salas, E., & Hancock, P. A. (2017). State-
of-science: Situation awareness in individuals, teams and systems. Ergonomics, 60,
449–466.
Distributed Situation Awareness and Automation 317
Stanton, N. A., Salmon, P. M., Walker, G. H., & Stanton, M. (2019a). Models and methods for
collision analysis: A comparison study based on the Uber collision with a pedestrian.
Safety Science, 120, 117–128.
Stanton, N. A., Stewart, R., Harris, D., Houghton, R. J., Baber, C., McMaster, R., … Green, D.
(2006). Distributed situation awareness in dynamic systems: Theoretical development
and application of an ergonomics methodology. Ergonomics, 49, 1288–1311.
Stanton, N. A., Young, M., & McCaulder, B. (1997). Drive-by-wire: The case of driver work-
load and reclaiming control with adaptive cruise control. Safety Science, 27, 149–159.
Vicente, K. J. (1999). Cognitive Work Analysis: Toward Safe, Productive, and Healthy
Computer-Based Work. Mahwah, NJ: Lawrence Erlbaum Associates.
Walker, G. H., Stanton, N. A., & Salmon, P. M. (2015). Human Factors in Automotive
Engineering and Technology. Aldershot, UK: Ashgate.
Wickens, C. D. (2008). Situation awareness: Review of Mica Endsley’s 1995 articles on situa-
tion awareness theory and measurement. Human Factors, 50, 397–403.
14 Human Factors Considerations in Preparing Policy and Regulation for Automated Vehicles

Marcus Burke
National Transport Commission, Australia
CONTENTS
Key Points............................................................................................................... 320
14.1 Introduction .................................................................................................. 320
14.1.1 Outline of the Chapter....................................................................... 320
14.1.2 Why Is This Topic Important?.......................................................... 321
14.1.3 How Do We Define Human Factors?................................................ 321
14.1.4 What Are Automated Vehicles?........................................................ 322
14.1.5 What Is the Goal of the Automated Driving System?....................... 323
14.2 Automated Driving Systems and Human Factors ........................................ 323
14.2.1 Human Factors and the Automated Driving System......................... 323
14.2.2 Specific Human Factor Risks That Could Impact Safety.................. 325
14.2.3 Which Parties Will Influence These Safety Risks? .......................... 327
14.2.4 Human Factors Safety Risks and Automated Vehicles .................... 327
14.3 Government, Transport, and Policy............................................................... 328
14.3.1 What Are the Roles of Government in Road Transport?.................. 328
14.3.2 Regulation and New Technology....................................................... 328
14.3.3 Quantifying the Safety Risks of Automated Vehicles ...................... 329
14.3.4 What Policy and Regulatory Changes Do Automated Vehicles Require? ............................................. 329
14.3.5 Approaches for Government to Address Regulatory Barriers, Gaps, and Opportunities ........................ 331
14.4 How Does Policy and Regulation Address Human Factors Safety Risks? ......332
14.4.1 Government Regulation of Human Factors and Automation in Other Modes ......................................... 332
14.4.2 What Is the Role of Prescriptive Versus Principles-Based Regulation? ................................................. 333
KEY POINTS
• Automated vehicles will create new human factors-related safety risks,
as these vehicles interact in new ways with humans in a variety of roles
(pedestrian, passenger, driver, etc.).
• The human factors-related safety risks of automated vehicles are not well
understood and cannot yet be fully quantified.
• Governments should seek to learn from other safety regimes that have
already dealt with human factors-related safety risks and automation.
• Governments should consider outcome- or principles-based approaches to
regulation.
14.1 INTRODUCTION
14.1.1 Outline of the Chapter
This chapter examines the human factors issues in automated vehicle regulation.
It seeks to answer the question “What human factors issues need to be considered
in preparing policy and regulation for the deployment of automated vehicles?” The
chapter is primarily aimed at those who are charged with developing policy for these
vehicles—lawmakers, senior government executives, and policy officers. However, it
will also likely be of interest to those who seek to influence that policy and to those
who are affected by it—technology manufacturers, insurers, road safety groups, and,
ultimately, the public.
I will first look at why this question is important and define automated vehicles,
automated driving systems (ADS), and human factors. I will then examine the sys-
tems features of ADS. I will attempt to identify the individuals that will interact
with automated vehicles and outline the safety risks in these interactions. I will go
on to look at the role of government in general, and specifically for transport, before
turning to different approaches to regulation. Finally, I will attempt to look at how
regulation and other tools available to government might address the human safety
risks of automated vehicles. Human factors issues relate ultimately to the design of
systems, so the question becomes how much policy makers need to specify system
design to account for human factors. I will largely use examples from Australian law
as this is the author’s major area of expertise; however, examples will also be drawn
from other jurisdictions.
This chapter will use the terminology and taxonomy from the SAE Standard
J3016 (SAE International, 2018). This includes terms such as “automated vehicle”
and “ADS” along with the levels of automation for automated vehicles. The chapter
will focus on regulation rather than broader policy (e.g., decisions around industry
investment or labor policy). I will also largely focus on safety, rather than broader
regulatory issues such as road pricing.
The focus in all these definitions is on how humans interact with systems. To under-
stand the human factors involved, therefore, we need to (1) identify the system and its
goal (or goals), (2) identify the various individuals who will interact with that system,
and (3) examine the specific safety risks of those interactions.
This could include vehicles based on existing models with automated functions, new vehicle types with automated functions, and aftermarket devices or software upgrades that add automated driving functions to existing vehicles.
The key feature of the automated vehicle is the ADS. The SAE Standard J3016
defines the ADS as:
The hardware and software that are collectively capable of performing the entire
[dynamic driving task] on a sustained basis, regardless of whether it is limited to a
specific operational design domain; this term is used specifically to describe a level 3,
4, or 5 driving automation system (SAE International, 2018, p. 3).
ADSs perform the entire dynamic driving task, though some will require a human
in the vehicle ready to take back control when requested (Level 3 or “conditional”
automation). These systems are no longer driver assistance systems—they are, in essence, the driver. Legally this creates a significant challenge—if a human is no longer doing the driving, then who is responsible for the driving?
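The taxonomy can be made concrete with a small encoding of the levels. The sketch below is an illustration of the definitions quoted above, not part of J3016 itself; the level names follow common usage.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def is_ads(level: SAELevel) -> bool:
    """J3016 reserves the term ADS for Levels 3-5."""
    return level >= SAELevel.CONDITIONAL_AUTOMATION

def needs_fallback_ready_user(level: SAELevel) -> bool:
    """At Level 3, a human must be ready to take back control when requested."""
    return level == SAELevel.CONDITIONAL_AUTOMATION

print(is_ads(SAELevel.PARTIAL_AUTOMATION))     # False: still driver assistance
print(needs_fallback_ready_user(SAELevel(3)))  # True: conditional automation
```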
The ADS is a new type of system designed to interact with humans in order to provide
them with a service (driving). ADSs will drive passenger vehicles as well as freight
vehicles.
ADSs will form part of a complex “system of systems,” in the vehicle, in control or monitoring centers, and potentially in the road infrastructure (see also, this Handbook, Chapter 19). This could include “remote drivers” or “teleoperators,” humans who exercise varying levels of control over the vehicle but sit in an operations center rather than in the vehicle itself. The ADS will thus likely sit within a wider network of systems and human roles, and much about how these systems will develop remains unknown, including:
• the exact limits of automation (e.g., will we ever have automated vehicles
on dirt roads?);
• the timing of commercial deployments (how far away are we?);
• the applications that will be commercially successful;
• the mix of technologies that will be used;
• user acceptance of these systems; and
• how they will change travel behaviors.
These unknowns limit the ability for governments to understand the interactions and
safety risks. The data to quantify these risks, in many cases, simply do not yet exist.
Vehicle automation is already used in other environments, from trains to mining
to aviation to ports. These applications, however, will generally not involve as wide
a network of systems, and do not operate in a shared, public environment with such
a range of other users. This complex operating environment creates a technology
challenge as well as human factors and regulatory challenges.
The latter list (#3) includes all the individuals who interact with human drivers today,
but will also include those new roles that will come into existence with automated
vehicles (such as remote drivers and fall-back ready users). Automated vehicles will also need to interact with other automated vehicles on the road.
Some of these roles could be divided further—for example, interactions with pedes-
trians may differ when the pedestrian is very old or very young or visually impaired.
If we extend the idea of human factors to other parties who interact with the
system, we could also include those who are programming, designing, and testing
these vehicles as part of companies who are developing this technology, along with
the senior executives of those companies. Testing phases will include test drivers
monitoring the operation of the ADS.
There will be a wide range of interactions between the systems and individ-
ual roles. These could include an automated vehicle interacting with a pedestrian
attempting to cross the road, an automated vehicle responding to a request from a
human driver to hand over control, or a remote driver taking over control to assist an
automated vehicle to negotiate a complex set of roadworks. These will be explored
in more detail below.
In summary, we can state that interactions with ADSs will vary widely across a
diverse range of users. These users will have varying levels of experience and under-
standing about how the technology works and what its limitations are. Many, if not
most, will have no specific training in how the technology works, but will have expe-
rience interacting with human-driven vehicles. The risks are significant—getting an
interaction wrong can result in injury or death. Major design problems could result
in many deaths.
The point of handover between a human driver and an ADS will be a key
safety risk. There may be different risks in handing control from the human
driver to the ADS as opposed to handing control over from the ADS to the
human driver. The latter may create greater risk.
2. Individuals outside the vehicle that influence the control and movement of
the vehicle
• Is there a way these users could send a vehicle to an unsafe location?
Alternatively, could the vehicle put a passenger in an unsafe scenario
(e.g., a vehicle that stops functioning in the middle of a freeway with a
small child as the only passenger)?
• Are there specific risks for remote drivers, given how removed they
will be from the road environment? Are there risks that could impact
how well these drivers will monitor the vehicles? Will they potentially
supervise more than one vehicle? How do they need to be trained and
licensed? What skills do they need to have? Will they have sufficient
information to understand the environment the vehicle is operating in
remotely?
• What are the risks for system operational managers who are monitoring
a network of vehicles? Will they react appropriately in the event of a
risk? This could be analogous to air traffic controllers or controllers of
complex facilities such as power stations.
3. Other individuals the ADS will encounter or interact with on public roads
• Will pedestrians change their behavior if they perceive vehicles to be
safer? Will they step straight out into the street in the way that people
routinely wave their arms in front of the closing door of an elevator
expecting that it will open for them?
• Will other human drivers drive more aggressively?
• Will other users seek to interfere with automated vehicles or to game
them in some way?
• Are there road user behaviors or interactions between road users that
are not regulated today (such as the example at the beginning of the
chapter on pedestrians making eye contact with a human driver, prior
to crossing) that could create risks?
• How will vehicles and other road users negotiate right of way in cir-
cumstances where they do so today? Will automated vehicles nudge out
into traffic to signal intent in the way human drivers do?
• Automated vehicles will drive more conservatively than many vehicles
do today—in terms of speed, spacing, merging, etc. Will this create
specific safety issues?
• How will travel behaviors change with the opportunities provided by
automated vehicles?
In addition to these questions, it is worth considering the human factors issues relating to automated vehicle manufacturers and technology providers. One of the key influences on safety will derive from the developers of the technology. As a result, it is important to understand the human factors issues that may influence decisions by
individuals in these companies. What are they motivated by? How will they make
decisions about what is acceptably safe? What will be the role of senior executives
and how might they influence outcomes? Again, in all these roles, there may be dif-
ferent risks for different individuals, depending on age, experience, training, impair-
ment, and understanding of the technology.
The parties that may influence these safety risks include:
• Vehicle manufacturers
• Entities responsible for the development and ongoing performance of the
ADS (where that is different to the vehicle manufacturer)
• Repair and maintenance providers
• Vehicle owners or operators
• Drivers
• Fall-back ready users
• Road managers
• Mapping providers
• Governments
An appropriate regulatory system will consider the safety risks but then go on to
consider which parties have the greatest influence on those risks, how those par-
ties are covered by existing regulation, and examine any gaps (National Transport
Commission, 2018). There may be gaps for those parties that are new to the transport
system. These include entities responsible for the development and running of the
ADS, remote drivers, and fall-back ready users.
The role of government in these areas will differ in different countries, as will the
roles of different levels of government (national, state, local). Governments also have
other tools available to encourage (or discourage) certain behaviors, including fund-
ing (e.g., for research) and public information campaigns.
potential risk; there is not yet clear research to understand and assess all of the risks;
and the technology is still in its infancy and is likely to evolve.
If these vehicles are allowed, the question becomes: should government place any
restrictions or conditions on them, such as requiring that the technology meets basic
safety requirements? Many countries take the view that regulation should only be
introduced when there is a demonstrated market failure. This is difficult to demon-
strate for new technologies where there are no historical data available.
Worldwide, over 1.2 million people die each year from road crashes. Driving is
a dangerous activity. Heavy objects (cars and trucks) move at high speed in areas
shared with other road users, including pedestrians. There is a clear argument for
laws to manage safety, as governments do today. Automated vehicles could reduce
current road tolls, but they could also introduce new road safety risks. Given the
risks and the evident existing danger of roads, there would appear to be a clear case
for regulation.
Governments currently regulate extensively in the area of road transport. This
regulation tends to be highly prescriptive—how fast we can drive; how we should
react to certain road signs and traffic lights; and to whom we should give way. Even
so, these rules do not cover every potential situation on the road. In some situations,
the rules may in fact be impossible to follow precisely.
These will not be the focus of this chapter, but will need to be considered by govern-
ments to ensure a comprehensive response. Governments could address some of these
issues after commercial deployment, once automated vehicles are an established part
of the vehicle fleet. But government will need to address safety issues up front.
We need to be aware of the limitations of these approaches and how much we can
expect people to change: “in designing a system, we have much more freedom in
how we specify the operating characteristics of the machine than in how we specify
the characteristics of the human operator. That is, we can redesign and improve the
machine components, but we shouldn’t expect to be able to (or be permitted to) rede-
sign and improve the operator” (Proctor & Van Zandt, 2008, p.10).
Governments can attempt to exert control over human behavior in their inter-
actions with technology through laws (e.g., laws to prevent drivers using mobile
phones). Governments can also attempt to train and educate users, either formally
(for example, requirements for driver licenses) or informally (such as through public
education campaigns).
Even in highly controlled workplace environments, “we can carefully screen and
extensively train our operators before placing them in our system but many limita-
tions that characterize human performance cannot be overcome” (Proctor & Van
Zandt, 2008, p. 10). In road transport, road users (drivers, pedestrians, cyclists, and
others) will have a range of understanding of systems (and the law), and it would seem
unrealistic to suggest that we can extensively screen and train all of them. Whilst
we may improve public education, screening and training for the entire population
(including children) appear completely unrealistic.
The alternative is to focus on influencing the design of the system. But before
we consider this in detail, we should examine what we can learn from other modes.
covered by a variety of existing laws, from mode-specific laws (rail and aviation) to
more general workplace health and safety laws.
To what degree do these other regulatory systems regulate human factors and
what can we learn from them? Aviation regulation has a strong focus on human
factors, including the risks of automation (see also, this Handbook, Chapter 21).
Regulators provide detailed guidance on these issues for the design and operation
of aircraft. However, airlines rely on a highly trained cohort of pilots to supervise
automated systems.
Rail regulators have dealt with driverless trains for decades. These generally fall
under a regime focused on broad safety duties for the rail operator, who must dem-
onstrate how they are managing the safety risks. There are currently no specific
regulations on automation in Australian rail law; the operation of these systems is
addressed through the broader safety duties on operators and other parties (National
Transport Commission, 2016a). That is, there is a focus on the overall safety of the
system, a principles-based approach.
Maritime law already considers human factors, for example through the
International Maritime Organization’s Sub-Committee on Human Element,
Training, and Watchkeeping. It is beginning to examine automation; however, it does
not currently regulate automated ships.
Other workplaces, such as ports, come under general workplace health and safety
regulation that provides overall obligations on employers to maintain a safe workplace. This, again, is a principles-based approach.
These schemes generally provide broad safety duties to cover the risks, whilst
allowing industry to develop the detail of systems and applications. More detailed
standards are being developed over time, either by government or industry, depend-
ing on the different regime.
• Start from the key safety risks—can we attempt to categorize these risks and begin to assess their likelihood and impact? Governments and researchers have begun to develop these classifications, with an initial focus on Level 2 vehicles (Russell et al., 2018). Significant additional research is required to better understand and quantify the safety risks (and safety benefits) of automated vehicles (a simple risk-scoring sketch follows this list).
• Examine the individuals involved—including their motivations and exist-
ing behaviors.
• Identify the parties that influence those safety risks—including both com-
panies and individuals.
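By way of illustration only, such a categorization might look like the sketch below; the risks listed and the likelihood/impact scores are invented placeholders, not research findings.

```python
# Each human factors risk is scored for likelihood and impact (1-5); the
# product gives a crude priority for further research or regulatory attention.
risks = [
    ("failed handover from ADS to human driver",          4, 5),
    ("remote driver loses awareness of road environment", 3, 4),
    ("pedestrian over-trusts vehicle and steps out",      3, 5),
]

for name, likelihood, impact in sorted(risks,
                                       key=lambda r: r[1] * r[2],
                                       reverse=True):
    print(f"priority {likelihood * impact:>2}: {name}")
```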
Policy makers will need to examine whether there is a need for regulation and
carefully consider the potential impacts on behavior, both positive and negative.
Responsibility needs to be clear—if we are focused on the design of the system,
then responsibility for system design needs to be clear, whether it is a single party or
several parties.
If regulation is required, policy makers should consider how to ensure that regu-
lation can be flexible and cover evolving technology. This could include: consider-
ation of what is put into legislation versus guidelines; the choice of principles-based
approaches or prescriptive rules; and consideration of how much discretion to pro-
vide to regulators or enforcement officers.
• Monitor, research, and review—As with humans, governments do not
always monitor systems as closely as they could. There will be an oppor-
tunity with ADSs for governments to better monitor road safety, due to the
data that these vehicles will provide. There will be a strong need for research
and monitoring by governments to identify new risks, understand their like-
lihood and severity, and react accordingly. This will need to examine not
just the human–machine interface between the vehicle and the passengers/
driver/fall-back ready user inside. It will also need to look at how other road
users interact with these vehicles, how their behavior evolves over time, and
what this might mean for safety.
Automated vehicles offer significant safety benefits. Government will need to ensure
that regulation maximizes these benefits while minimizing the risks.
ACKNOWLEDGMENTS
This chapter is based on the work as part of the author’s role at the National Transport
Commission. I would like to thank all the members of the team, past and present,
whose research, writing, and thinking contributed to the analysis in this chapter. The
views expressed in this chapter are those of the author and do not represent the views
of the National Transport Commission.
REFERENCES
Casner, S. M., Geven, R. W., Recker, M. P., & Schooler, J. W. (2014). The retention of manual
flying skills in the automated cockpit. Human Factors, 56(8), 1506–1516.
Channon, M., McCormick, L., & Noussia, K. (2019). The Law and Autonomous Vehicles.
Oxfordshire, UK: Informa Law Routledge.
Eastman, A. D. (2016). Self-driving vehicles: Can legal and regulatory change keep
up with new technology? American Journal of Business and Management, 5(2),
53–56.
Eggers, W. D. & Turley, M. (2018). The Future of Regulation: Principles for Regulating
Emerging Technologies. Deloitte Centre for Government Insights. Retrieved from
www2.deloitte.com/us/en/insights/industry/public-sector/future-of-regulation/
regulating-emerging-technology.html?id=us:2em:3pa:public-sector:eng:di:070718
Glassbrook, A. (2017). The Law of Driverless Cars: An Introduction. Somerset, UK: Law
Brief Publishing.
Hedlund, J. (2000). Risky business: Safety regulations, risk compensation, and individual
behavior. Injury Prevention, 6(2), 82–90.
Logan, D. B., Young, K., Allen, T., & Horberry, T. (2017). Safety Benefits of Cooperative ITS
and Automated Driving in Australia and New Zealand. Sydney, Australia: Austroads.
National Transport Commission. (2016a). Regulatory Barriers to More Automated Road and
Rail Vehicles - Issues Paper. Melbourne: National Transport Commission.
National Transport Commission. (2016b). Regulatory Options for Automated Vehicles
Discussion Paper. Melbourne: National Transport Commission.
National Transport Commission. (2018). Safety Assurance for Automated Driving Systems:
Decision Regulation Impact Statement. Melbourne: National Transport Commission.
Proctor, R. W. & Van Zandt, T. (2008). Human Factors in Simple and Complex Systems. Boca
Raton, FL: CRC Press.
Russell, S. M., Blanco, M., Atwood, J., Schaudt, W. A., Fitchett, V., & Tidwell, S. (2018).
Naturalistic Study of Level 2 Driving Automation Functions (DOT HS 812 642).
Washington, DC: National Highway Traffic Safety Administration.
SAE International. (2018). Taxonomy and Definitions for Terms Related to Driving
Automation Systems for On-Road Motor Vehicles (J3016). Warrendale, PA: Society of
Automotive Engineers.
336 Human Factors for Automated Vehicles
van Wees, K. & Brookhuis, K. (2005). Product liability for ADAS: Legal and human fac-
tors perspectives. European Journal of Transport and Infrastructure Research, 5,
357–372.
Wood, M., Robbel, P., Maass, M., Tebbens, R. D., Meijs, M., Harb, M., … Schlicht, P.
(2019). Safety First for Automated Driving. Retrieved from www.aptiv.com/docs/
default-source/white-papers/safety-first-for-automated-driving-aptiv-white-paper.pdf
World Health Organization. (2009). WHO Patient Safety Curriculum Guide for Medical
Schools. Geneva, Switzerland: World Health Organization.
15 HMI Design for Automated, Connected, and Intelligent Vehicles

John L. Campbell
Exponent, Inc.
CONTENTS
Key Points............................................................................................................... 338
15.1 Introduction .................................................................................................. 338
15.2 Automated Vehicle HMI Design .................................................................. 339
15.2.1 Communicating Information within a Given Mode.......................... 339
15.2.1.1 Communicating AV System Status and Mode................... 339
15.2.1.2 HMI Guidelines for AV Warnings ..................................... 340
15.2.2 Conveying Information about the Transfer between the Driver and the ADS .......................................... 343
15.2.2.1 HMI Design Issues for TOC to and from the Driver ......................................................... 344
15.2.2.2 HMI Solutions for SA Support: Improve SA for Both Normative and Time-Critical Driving Situations ....... 345
15.3 Connected Vehicle HMI Design ................................................................... 346
15.3.1 Vehicle-to-Vehicle (V2V) Information.............................................. 347
15.3.1.1 HMI Design for Status of CVs ........................................... 347
15.3.1.2 HMI Principles for Presenting Warnings in CVs .............. 348
15.3.2 Vehicle-to-“X” (V2X) and “X”-to-Vehicle (X2V) Information ....... 349
15.3.2.1 V2X Information................................................................ 350
15.3.2.2 X2V Information: Prioritizing, Filtering, and Scheduling..... 350
15.3.2.3 X2V Information: Multiple Displays.................................. 352
15.3.2.4 X2V Information: Message Content................................... 353
15.4 Intelligent Vehicle HMI Considerations........................................................ 353
15.5 Conclusions .................................................................................................... 354
References .............................................................................................................. 355
KEY POINTS
• The guidance that is available on the design of HMIs, plus various SAE and
ISO documents, generally reflects pre-2015 research conducted on driver
information/safety systems that provided little or no automated driving
capability or connectivity.
• Insofar as the existing guidance is relevant to ACIVs, the basic driver infor-
mation needs, HMI considerations for transitions of control alerts and warn-
ings, and high-level principles of message management are well understood
and have been documented in a variety of sources.
• The development of more comprehensive and effective HMI guidelines will
require a better understanding of the changing nature of driving and of the
implications of these changes for HMI design.
15.1 INTRODUCTION
A key design element in advanced vehicles is the human–machine interface (HMI).1
The HMI refers to displays that present information to the driver and controls that
facilitate the driver’s interactions with the vehicle as a whole and indicate the status
of various vehicle components and sub-systems (Campbell et al., 2016). In the con-
text of vehicle safety systems in particular, the HMI should effectively communicate
information while managing driver workload and minimizing distraction (Jerome,
Monk, & Campbell, 2015).
HMI design requirements for automated, connected, and intelligent vehicles
(ACIV) must be determined in the context of many considerations, including their
influence on safety, public perception and perceived value, the mix and behaviors
of legacy vs. connected vs. automated vehicles (AV) over time within the vehicle
fleet, and the degree and type of automation associated with the HMI (see also Noy,
Shinar, & Horrey, 2018). In general, safe and efficient operation of any motor vehicle
requires that the HMI be designed in a manner that is consistent with driver needs,
limitations, capabilities, and expectations—a continuing challenge is to identify just
what these are amidst the changing and uncertain landscape of advanced vehicle
technology.
Despite these challenges, our objective in this chapter is to summarize what we
do know (or at least, what we think we know) regarding HMI design principles
for ACIV. Many of these principles are aimed at the important issues raised in
the preceding chapters; i.e., how the design of the HMI can be used to increase
trust (Chapter 4), manage workload (Chapter 6), and improve situation awareness
(SA) in ACIV (Chapters 7 and 13). All these goals support the broader goal of
safety—the safety of the drivers and occupants of ACIV, as well as the safety of
all road users.
1 The literature generally uses the terms Human–Machine Interface (HMI) and Driver–Vehicle
Interface (DVI) interchangeably; we will use HMI throughout this paper but view HMI and DVI to be
synonymous for our purposes.
2 Jeong and Green (2013) as well as Campbell et al. (2016) provide extensive lists and summaries of
documents published by SAE and ISO that are relevant to HMI design.
3 Automated Driving Systems are used in this chapter to refer to automated vehicles with one or more
driver support features (SAE Levels 0–2) and automated driving features (SAE Levels 3–5).
TABLE 15.1
Principles for Presenting System Status Information in AV

Type of status information: System activation or on/off status
What information to provide: A display indicating which automation feature/function/mode is currently active.
Why information is provided: To support driver awareness of the current automation mode when the driver seeks this information.

Type of status information: Mode transition status
What information to provide: A display indicating that a TOC is occurring or that one will occur in the near future.
Why information is provided: Under normal operating conditions, this information is presented to help drivers maintain awareness of the driving tasks.

Type of status information: Confirmation of successful transfer from automated to manual control
What information to provide: A display or message confirming for the driver that control has been transferred to the driver as they would expect, or communication of a failed/incomplete TOC if the transfer is unsuccessful.
Why information is provided: To indicate a successful TOC from the automation system to the driver.

Type of status information: System fault or failure
What information to provide: A display or message indicating that part of the system has failed, is not functioning correctly, or that the system has reached some operational limit.
Why information is provided: To alert drivers that they must intervene and reclaim control of driving tasks that have previously been performed by automation, due to a system fault or failure.
the driver. Appropriate feedback about automation status and mode is important
for (1) maintaining driver’s SA, (2) communicating if the driver’s requests (e.g., a
request for a TOC) have been received by automation, (3) informing drivers if the
system’s actions are being performed properly, and (4) informing drivers if problems
are occurring (Toffetti et al., 2009).
Overall, vehicles should display the information that drivers need to maintain an
understanding of the current and impending automation status and modes. Table 15.1
(from Campbell et al., 2018) shows the types of status information that can be pro-
vided to the driver about the automation and design considerations for presenting
this information.
The HMI must also be designed to facilitate the TOC between the ADS and the
driver; note that there are two types of TOC: (1) from the driver to the system and (2)
from the system to the driver. A recent naturalistic driving study collected 454 hours
of autopilot use from Tesla drivers and found a total of 16,422 transfers of control
(Reimer, 2017). In the study, the number of transfers from the human driver to the
system was 8,211, and the number of transfers from the system to the human driver
was 8,253. Furthermore, transfers from the system to the human driver were split into two
categories: (1) transfers initiated by the human drivers (n = 8,211) and (2) transfers
initiated by the system (n = 42) (Reimer, 2017). Much of the literature in this area is
4 https://www.consumerreports.org/autonomous-driving/cadillac-super-cruise-may-lead-to-safe-
hands-free-driving/
concerned with a system-initiated TOC when the driver is unengaged or has low SA;
many relevant scenarios here involve a potential hazard. However, TOC is an issue
with AVs even when the driver is situation aware. Imagine that a driver is monitor-
ing the dynamic driving task with their hands on the wheel and feet on the pedals.
Automatic emergency braking could be too slow to avoid a child who suddenly runs
out into the street, but a steering maneuver initiated by the driver may successfully
avoid the hazard. Will the driver recognize this while in Level 2, as well as he or
she does in Level 0 or 1? And, if not, how can the HMI be used to support the cor-
rect decision? If the driver is unengaged when the TOC request is made, this poses
additional burdens on the HMI. Augmented reality head-up displays (HUDs) have
been proposed as one solution to highlight the areas that pose potential threats or are
of central concern to the immediate driving task (i.e., contain safety-critical traffic
information). In general, the HMI needs to be designed to support all these situa-
tions; some general guidance for TOCs is presented below.5
15.2.2.1 HMI Design Issues for TOC to and from the Driver
15.2.2.1.1 TOC from the Driver to the System
This reflects a series of operations through which the driver transfers responsi-
bility for performing part of or the entire driving task to the automated system.
Providing appropriate information through the HMI is important during these
transfers to maintain a driver’s trust in the system and to help drivers maintain
awareness of driving tasks and the broader driving situation. In general, the sys-
tem should aid the transition from manual to automated driving by acknowledging
a driver’s request to engage the automation and providing information about the
status of the TOC throughout the process. The following principles can be used to
support this goal:
• The current system status should always be provided (Toffetti et al., 2009).
• Automation engagement requests should be acknowledged upon receipt to
prevent duplicate or conflicting inputs from the driver and to prevent the
driver from releasing control of the vehicle without the automation being
activated.
• Feedback acknowledging a driver automation activation request should
be provided within 250 ms of the driver’s input (ISO 15005, 2002; AAM,
2006).
• If the transfer was successful, a notification should be provided to the driver
along with an updated automation status display.
• If the transfer was unsuccessful, the driver should be provided with a noti-
fication as to the failure of the automation to engage and the reason why
the automation did not engage (Tsao, Hall, & Shladover, 1993; Merat &
Jamson, 2009).
5 Research on this topic is limited. These principles are perhaps most relevant to transitions occurring
in lower levels of automation; i.e., Level 3 or below.
15.2.2.1.2 TOC from the System to the Driver
• The driver should be provided with information on when they need to take
control (Gold, Damböck, Lorenz, & Bengler, 2013; Blanco et al., 2015).
• The driver should be provided with information on how to take control if a
specific control input is required (Toffetti et al., 2009).
• The driver should be provided with information on why the driver needs
to take control. For time-critical situations, this may be a simplified “take
control” message. For less time-critical situations, more information (e.g.,
upcoming system limits; Naujoks, Forster, Wiedemann, & Neukum, 2017)
may be provided.6
• The current system status should always be provided, allowing the driver to
validate the disengagement (Sheridan & Parasuraman, 2005).
• Notifications and messages related to a “take control” message should be
multi-modal (Blanco et al., 2015; Toffetti et al., 2009; Brookhuis, van Driel,
Hof, van Arem, & Hoedemaeker, 2008).
6 The driver’s ability to take control may be faster and easier with lower levels of automation.
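By way of illustration, a minimal Python sketch of how the acknowledgment and notification logic in the lists above might be organized follows; the display channels, function names, class names, and demonstration values are hypothetical, and only the 250-ms acknowledgment deadline and the multi-modality principle are drawn from the guidance cited above.

    ACK_DEADLINE_S = 0.250  # principle: acknowledge an engagement request within 250 ms

    def show(channel, message):
        # Placeholder for a real multi-modal HMI output (visual/auditory/haptic).
        print(f"[{channel}] {message}")

    def handle_engagement_request(automation):
        """Driver-to-system TOC: acknowledge at once, then report the result."""
        # The acknowledgment below must reach the driver within ACK_DEADLINE_S.
        show("visual", "Request received: engaging automation...")
        if automation.engage():
            show("visual", "Automation engaged")  # updated status display
        else:
            # Report the failure and the reason, so the driver does not release
            # control of the vehicle without the automation being active.
            show("visual", f"Automation NOT engaged: {automation.failure_reason}")
            show("auditory", "Automation not engaged")

    def request_driver_takeover(reason, time_critical):
        """System-to-driver TOC request: convey when and why; multi-modal."""
        text = "TAKE CONTROL NOW" if time_critical else f"Take control soon: {reason}"
        for channel in ("visual", "auditory", "haptic"):
            show(channel, text)

    class DemoAutomation:  # hypothetical stand-in for a real ADS interface
        failure_reason = "lane markings not detected"
        def engage(self):
            return False  # demo: this engagement attempt fails

    handle_engagement_request(DemoAutomation())
    request_driver_takeover("approaching system limit", time_critical=False)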
15.2.2.2 HMI Solutions for SA Support: Improve SA for Both Normative and Time-Critical Driving Situations
AV designers may assume drivers would perform their allocated roles (e.g.,
monitoring the driving situation and the system) and maintain their SA in auto-
mated driving, but recent fatal crashes (e.g., the self-driving Uber accident in
Tempe, AZ) demonstrated the frailty of such assumptions (e.g., the driver watch-
ing a video instead of monitoring during automated driving). During the takeover
phase, the human driver may have to identify the current situation, surroundings,
and required actions in a very short period of time. If the human driver is totally
disengaged and does not pay attention to the roadway environment (e.g., engag-
ing in secondary tasks, sleeping, etc.) during automated operation, it will take a
relatively longer time to build up SA to an appropriate level for intervention. In
worst cases, drivers may not be able to take over within the available time or may
contribute to other errors.
Proactive HMIs could help driver’s attention management and takeover per-
formance by incorporating information about driver state (e.g., overall readiness,
glance history, secondary task activity, etc.; see also this Handbook, Chapter 11) and
immediate tactical demands into an alerting approach that helps direct the driver’s
attention to time-critical information. Future vehicle designs may be able to use
an advanced, proactive HMI that can help drivers re-engage with the driving task
through information that is tailored to their specific information needs. For exam-
ple, the HMI can be used to directly provide critical information to the driver or to
help direct the driver’s attention to critical information and/or information elements
in the roadway environment. This could include timely presentation of attentional
cues, alerts, or critical roadway information (e.g., missed guide signs or temporary
roadside messages; see also principles for designing to support SA developed by
Endsley, 2016). This could include design features such as an HUD with augmented
reality capabilities that could be integrated with a driver state monitoring system to
present such information. In the short term, the proactive HMIs can help drivers’
takeover performance and decrease takeover time, and in the long term, this could
improve driver trust and prevent potential misuse and disuse of AVs (Parasuraman &
Riley, 1997).
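As a rough illustration of this idea, the sketch below tailors an alerting action to an estimated driver state and the urgency of the immediate situation; all thresholds, field names, and actions are illustrative assumptions rather than validated values.

    from dataclasses import dataclass

    @dataclass
    class DriverState:
        eyes_off_road_s: float   # duration of the current off-road glance
        secondary_task: bool     # secondary-task activity detected
        readiness: float         # 0 (disengaged) .. 1 (fully ready)

    def proactive_alert(state: DriverState, hazard_time_s: float) -> str:
        """Pick an alerting action from driver state and immediate urgency."""
        if hazard_time_s < 3.0 and state.readiness < 0.5:
            # Time-critical and driver unengaged: direct attention immediately,
            # e.g., an augmented-reality HUD highlight on the hazard itself.
            return "AR-HUD hazard highlight + auditory alert"
        if state.eyes_off_road_s > 2.0 or state.secondary_task:
            # Not yet critical: a gentle cue to re-engage with the driving task.
            return "attentional cue: look to roadway"
        return "no action"

    print(proactive_alert(DriverState(4.0, True, 0.3), hazard_time_s=2.0))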
Status information can also be useful in the case of heavy vehicles, such as automated
truck platoons. The lead platoon vehicle communicates its movements to following
vehicles through V2V communication; these followers can choose to enter or leave
the platoon at will (Bergenhem, Hedin, & Skarin, 2012). HMIs in these vehicles may
need to communicate different information and allow for different driver control
inputs based on the position of the vehicle in the platoon (e.g., the leader, a follower
behind an exiting vehicle, etc.) and the planned path (e.g., entering or exiting). A
general consideration for heavy vehicle HMIs is to minimize the information units
communicated to the driver due to the already dense control and display layouts in
these vehicle interiors (Campbell et al., 2016).
Two primary design considerations for the provision of status information in CVs are
listed below:
• Consider staged warnings based on the purpose of the warning and the crit-
icality of the situation (Campbell et al., 2016, pp. 4–6 & 4–7). Single-stage
warnings are useful in alerting drivers of imminent threats and minimizing
the likelihood of false or nuisance alarms. Multi-stage warnings are useful
in situations that develop over time, where earlier, lower-urgency stages can
prepare the driver before an imminent-stage warning becomes necessary.
reliable to support the type of message being sent to the driver or other road user?). In
this regard, Hartman (2015) provides resources on V2X and X2V implementations
and system specifications. However, from the perspective of the HMI user (e.g., the
driver or pedestrian) it often does not matter whether this information is collected
via the roadway infrastructure, a pedestrian’s mobile phone, or hardware in a bicycle,
motorcycle, or other personal transportation device. Thus, many of the principles
presented above for V2V guidance are relevant to a range of X2V scenarios using
in-vehicle displays, but less relevant to HMI designs that involve displays that are not
located within a vehicle.
Messages from X2V-supported HMIs are often intended to augment tasks
already performed by the driver in order to make them easier, and to convey infor-
mation that a driver may not be able to obtain on their own—or obtain as promptly
as the vehicle can through the network.
driver depending on the criticality of the information and the situation. For exam-
ple, heavy vehicle drivers and bus drivers may have different information require-
ments than commuters, and a driver stopped at an intersection may have more
information bandwidth available than they will once they initiate a turn through
the intersection. Many X2V-specific issues and interface elements that have been
evaluated within the literature address the task of managing this influx of informa-
tion. Although the potential applications of X2V and V2X technologies are vast,
the ecosystem is still developing, and there are a limited number of studies that
have evaluated these HMIs (Lerner et al., 2014). The following sub-sections pro-
vide guidelines that apply to V2X/X2V systems and reflect issues unique to HMIs
within the CV environment.
15.3.2.1 V2X Information
Much of the currently available V2X literature reflects technical demonstrations
rather than formal HMI evaluations. An example of the demonstrations is proof-of-
concept studies7 for mobile phone applications that provide pedestrians or bicyclists
with supplementary traffic information to use while crossing the street (e.g., the sta-
tus of approaching vehicles). Another example is the design of warnings for motor-
cyclists. Song, McLaughlin, and Doerzaph (2017) evaluated rider acceptance of
multi-modal CV collision warning applications for motorcycles in an on-road study.
They found that haptic, auditory, and visual modalities were all viable for the colli-
sion warning message displays (as measured by rider acceptance), but that haptic and
auditory messages were preferred because they do not represent visual distractions.
Single-channel auditory messages were not recommended, since hearing tests are
not required to legally operate a motorcycle, and loud engine noise can mask audi-
tory messages. Beyond this evaluation, there has yet to be sufficient research on V2X
HMI evaluation to support definitive guidance.
15.3.2.2 X2V Information: Prioritizing, Filtering, and Scheduling
15.3.2.2.1 Priority
Safety-critical warnings should be coded with sufficient urgency to prioritize the
response (Ward et al., 2013). When managing simultaneous alerts, lower-priority
messages may be suppressed based on this value system (Olaverri-Monreal & Jizba,
2016). The urgency of the required driver response is suggested as a way of categorizing
alerts in X2V systems, and the highest levels of urgency may be reserved for situations
where time-to-event is 5 seconds or less (Lerner et al., 2014). Lerner et al. (2014) also
recommend limiting the number of warning categories so that drivers may easily dis-
criminate the salience levels reserved for the highest stages of warning (e.g., “high
threat, act now,” “caution, measured action,” and “no urgency, no action required”).
Olaverri-Monreal and Jizba (2016) suggest using a standard priority index (ISO/TS
16951, 2004) to rank warning messages. This index derives the priority value from
the response time for the driver to take an action and the potential resulting injuries
or damages that may occur if no action is taken (see ISO/TS 16951, 2004 for the
equation and categorization process).
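A simple, hypothetical illustration of this kind of ranking is sketched below. The combination rule is a stand-in, not the actual ISO/TS 16951 equation, and the urgency bands loosely follow the time-to-event categories discussed above; the message set is invented for the example.

    def priority(response_time_s, severity):
        """Higher value = higher priority. severity: 0 (none) .. 3 (injury)."""
        # Urgency bands loosely follow the time-to-event categories above.
        urgency = 3 if response_time_s <= 5 else (2 if response_time_s <= 10 else 1)
        return urgency * severity

    messages = [("Forward collision warning", priority(2, 3)),
                ("Work zone ahead", priority(30, 1)),
                ("Signal phase information", priority(60, 0))]
    for text, p in sorted(messages, key=lambda m: -m[1]):
        print(p, text)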
When designing messages in an X2V context it is important to distinguish the
format of non-safety-critical information from safety-critical information, and not
design low-priority messages in a way that implies the driver is required to give an
urgent response (Ward et al., 2013; Olaverri-Monreal & Jizba, 2016). This design
may be approached in different ways (see Campbell et al., 2016), but consistency
across X2V message design based on actual and perceived priority can facilitate
fewer instances of perceived false/nuisance alarms, distraction, unnecessary work-
load, and distrust (Lerner et al., 2014).
15.3.2.2.2 Filtering
While it may be possible to present multiple non-speech auditory and visual X2V
alerts concurrently without overloading the driver or negatively impacting perfor-
mance, more effective driver responses will be elicited if the warning display inter-
rupts and overrides all other messages (Lerner et al., 2014). Information lockouts
may also be managed by the driver. In a test-track study where messages of varying
relevance to the driving task could be presented, drivers tended to request that the
system suppress messages whose content aligned with what the National Highway
Traffic Safety Administration’s (NHTSA) visual-manual distraction guidelines
advise omitting (Holmes, Song, Neurauter, Doerzaph, & Britten, 2016; Olaverri-
Monreal & Jizba, 2016).
15.3.2.2.3 Scheduling
To avoid simultaneous safety-critical alerts, messages should be paced, if possible.
If the X2V system has predictive capabilities, the preferred approach is to suppress
non-safety-critical information within a time window preceding the onset of safety-
critical messages (Ward et al., 2013). A safety-critical warning should continue until
the driver responds appropriately, without provoking driver annoyance or suppress-
ing consecutive safety-critical warnings. Non-safety-critical warnings should endure
(unobtrusively) over a period of time that allows drivers to execute a self-paced
response (Ward et al., 2013).
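The sketch below illustrates these filtering and scheduling rules in simplified form; the 5-second suppression window, the function name, and the message set are illustrative assumptions rather than specified values.

    SUPPRESS_WINDOW_S = 5.0  # assumed lockout window preceding a critical event

    def schedule(messages, predicted_critical_time=None):
        """messages: (time_s, is_safety_critical, text) tuples; returns the playlist."""
        playlist = []
        for t, critical, text in sorted(messages):
            if critical:
                # Safety-critical warnings interrupt and override everything else.
                playlist.append((t, "[OVERRIDE] " + text))
            elif (predicted_critical_time is not None
                    and 0 <= predicted_critical_time - t <= SUPPRESS_WINDOW_S):
                continue  # suppress non-critical messages near a predicted critical one
            else:
                playlist.append((t, text))
        return playlist

    msgs = [(1.0, False, "Parking availability ahead"),
            (6.0, False, "Transit connection update"),   # suppressed: within window
            (8.0, True, "Intersection collision warning")]
    print(schedule(msgs, predicted_critical_time=8.0))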
15.3.2.3 X2V Information: Multiple Displays
CV X2V technology allows for information that is directly relevant to a particular
driver to be displayed in multiple ways within the vehicle as well as outside of the
vehicle. Generally, the position of X2V displays (including displays inside and outside
of the vehicle) should correspond directionally with key task-related external elements
to cue rapid information extraction (Hoekstra-Atwood, Richard, & Venkatraman,
2019a; b; Richard, Philips, Divekar, Bacon-Abdelmoteleb, & Jerome, 2015a).
For messages within the vehicle, visual warnings conveyed simultaneously
should only be presented on one physical display (Olaverri-Monreal & Jizba, 2016).
Driver responses may be better if messages (even separate messages from separate
devices) are presented on a single display rather than separate displays (Lerner &
Boyd, 2005).
Drivers’ visual attention should not be directed towards in-vehicle displays when
they need to be looking outside (Stevens, 2016; Svenson, Stevens, & Guglielmi, 2013;
Richard et al., 2015a). In-vehicle displays that present warnings should be within
the driver’s visual field (see earlier sections for location considerations). When non-
safety-critical information is presented inside the vehicle, it should be positioned
near the periphery of the driver’s field of view to be unobtrusive to the demands of
the immediate driving task (Olaverri-Monreal & Jizba, 2016). Detailed guidance on
HMI display location is provided in Campbell et al. (2016).
A display on roadway infrastructure, or a Driver-Infrastructure Interface (DII),
may be part of the X2V system if this type of display provides context and facili-
tates simplified messaging along with reduced visual workload. See Richard et al.
(2015b) for specific guidance on when DIIs may be appropriate. When positioning a
DII, designers should consider the driving task, situation, and the proximal roadway
environment (Hoekstra-Atwood, Richard, & Venkatraman, 2019a).
If a DVI and DII assess the same hazard, the information or instruction should be
consistent and coordinated (Hoekstra-Atwood et al., 2019a; Richard et al., 2015a).
However, safety-critical warning messages are not suitable for DIIs because they
can be readily seen by road users that are not the intended recipient of the mes-
sage. The ubiquitous visibility of infrastructure-based messages could have the unin-
tended consequence of warning the wrong drivers and eliciting unnecessary evasive
responses (Richard et al., 2015b). In this case, supplementing a cautionary DII
message with an imminent collision warning on the in-vehicle HMI may facilitate
the appropriate driver’s crash-avoidance responses (Hoekstra-Atwood, Richard, &
Venkatraman, 2019b).
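One way to picture these placement rules together is the small routing sketch below; the display names, the bearing convention, and the decision order are assumptions made purely for illustration.

    def route_message(safety_critical, hazard_bearing_deg, for_this_driver):
        """Pick a display location consistent with the placement principles above."""
        if safety_critical:
            if not for_this_driver:
                return None  # keep safety-critical warnings off shared DIIs
            # One in-vehicle display, positioned toward the hazard direction.
            return ("in-vehicle display, left" if hazard_bearing_deg < 0
                    else "in-vehicle display, right")
        if not for_this_driver:
            return "roadside DII"                   # shared, cautionary context only
        return "peripheral in-vehicle display"      # unobtrusive placement

    print(route_message(True, -20.0, True))   # -> in-vehicle display, left
    print(route_message(False, 0.0, False))   # -> roadside DII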
8 Information units are a measure of information load. An information unit refers to key nouns and
adjectives in the message that provide unique or clarifying information. For example, the phrase
“Vehicle ahead. Merge to the right.” contains four information units (“vehicle,” “ahead,” “merge,”
and “right”) (Campbell et al., 2016).
9 https://www.cadillac.com/world-of-cadillac/innovation/super-cruise
For example, L2 vehicles with intelligent features (e.g., Super Cruise) may monitor
driver engagement levels, and slow down and eventually stop the vehicle if the driver
remains disengaged or unresponsive. In this case, the IV’s reconfiguration of safety
maneuvers should not be disabled by drivers. At other times, the driver may have
access to additional safety-relevant information that invalidates the need for system
reconfiguration and should be able to provide control inputs.
IV HMIs are similar to proactive HMIs (discussed in the AV HMI design sec-
tion), except that they may also make changes to the function or level of auto-
mation, as well as to the HMI itself, based on estimations of driver state. Information
that could be presented to the driver includes descriptive information about the current
driver state (e.g., eyes are off the road, hands are off the wheel) and prescriptive
information to help the driver maintain the desired level of automation (e.g., maintain
eyes on the road, place hands on the wheel).
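A minimal sketch of this pairing of descriptive and prescriptive information might look as follows; the state labels and message wording are illustrative assumptions, not production strings.

    FEEDBACK = {
        "eyes_off_road": ("Your eyes are off the road.",
                          "Keep your eyes on the road to stay in assisted mode."),
        "hands_off_wheel": ("Your hands are off the wheel.",
                            "Place your hands on the wheel to stay in assisted mode."),
    }

    def state_feedback(detected_states):
        """Pair each detected state with descriptive + prescriptive messages."""
        for s in detected_states:
            descriptive, prescriptive = FEEDBACK[s]
            yield f"{descriptive} {prescriptive}"

    for line in state_feedback(["eyes_off_road", "hands_off_wheel"]):
        print(line)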
15.5 CONCLUSIONS
It should be clear from this chapter that there is much yet to learn about driver infor-
mation needs and subsequent HMI design requirements for ACIVs. The rapid pace
and changing nature of ACIV—combined with the relatively slow pace of research
to support design—only adds to the challenges. The guidance that is available (e.g.,
guidance published by the National Highway Traffic Safety Administration;
Campbell et al., 2016, 2018), plus various SAE and ISO documents, generally
reflects pre-2015 research conducted on driver information/safety systems that pro-
vided little or no automated driving capability or connectivity. Perhaps the best that
can be said about such guidance documents is that they provide provisionally useful
design principles for ACIV supported by high-quality research. Basic driver infor-
mation needs, HMI considerations for transitions of control alerts and warnings, and
high-level principles of message management are well understood and have been
documented in a variety of sources. The existing guidance can also serve as a road-
map for future research; i.e., holes or gaps in the topics covered by the available
guidance may reflect areas where more research is needed.
The development of more comprehensive and effective HMI guidelines will
require a better understanding of the changing nature of driving and of the implica-
tions of these changes for HMI design. Specifically, how will the range of ACIV
functionality impact driver information needs, given the concurrent requirements to
maintain driver trust, functional mental models, and SA? What new challenges are
introduced through automation for which the HMI could serve as a solution? How
could a broader focus on information management support driver engagement and
SA? Could a proactive, flexible, and dynamic HMI address some of these challenges
and, if so, how?
We are highly optimistic that these and similar questions will be
answered by the ACIV industry and broader research community. Even at this rela-
tively early stage in the conceptualization and development of ACIV, many recent
studies and analyses are serving to shed light on these and related topics and, we
hope, will serve as a foundation for future HMI guidance; these include the chang-
ing role of the driver in AVs (Noy, Shinar, & Horrey, 2018); the definition and
measurement of the “out-of-the-loop” concept (Merat et al., 2018; Biondi et al., 2018;
this Handbook, Chapters 7, 21); driver engagement and conflict intervention perfor-
mance (Victor et al., 2018); the challenges of partial automation (Endsley, 2017);
and strategies for attention management in AVs (Llaneras, Cannon, & Green, 2017).
REFERENCES
Alliance of Automobile Manufacturers. (2006). Statement of Principles, Criteria and
Verification Procedures on Driver Interactions with Advanced In-Vehicle Information
and Communication Systems, including 2006 Updated Sections [Report of the Driver
Focus-Telematics Working Group]. Retrieved from www.autoalliance.org/index.
cfm?objectid=D6819130-B985-11E1-9E4C000C296BA163.
Bazilinskyy, P., Petermeijer, S. M., Petrovych, V., Dodou, D., & de Winter, J. C. (2018). Take-
over requests in highly automated driving: A crowdsourcing survey on auditory, vibro-
tactile, and visual displays. Transportation Research Part F: Traffic Psychology and
Behaviour, 56, 82–98.
Bergenhem, C., Hedin, E., & Skarin, D. (2012). Vehicle-to-vehicle communication for a pla-
tooning system. Procedia-Social and Behavioral Sciences, 48, 1222–1233.
Biondi, F. N., Lohani, M., Hopman, R., Mills, S., Cooper, J. M., & Strayer, D. L. (2018).
80 MPH and out-of-the-loop: Effects of real-world semi-automated driving on driver
workload and arousal. Proceedings of the Human Factors and Ergonomics Society
Annual Meeting, 62(1), 1878–1882.
Blanco, M., Atwood, J., Vasquez, H. M., Trimble, T. E., Fitchett, V. L., Radlbeck, J., …
Morgan, J. F. (2015). Human Factors Evaluation of Level 2 and Level 3 Automated
Driving Concepts (DOT HS 812 182). Washington, DC: National Highway Traffic
Safety Administration.
Brookhuis, K. A., van Driel, C. J. G., Hof, T., van Arem, B., & Hoedemaeker, M. (2008).
Driving with a congestion assistant: Mental workload and acceptance. Applied
Ergonomics, 40, 1019–1025. doi:10.1016/j.apergo.2008.06.010
Campbell, J. L., Brown. J. L., Graving, J. S., Richard, C. M., Lichty, M. G., Sanquist, T., …
Morgan, J. L. (2016). Human Factors Design Guidance for Driver-Vehicle Interfaces
(Report No. DOT HS 812 360). Washington, DC: National Highway Traffic Safety
Administration.
Campbell, J. L., Brown, J. L., Graving, J. S., Richard, C. M., Lichty, M. G., Bacon, L. P., …
Sanquist, T. (2018). Human Factors Design Guidance for Level 2 and Level 3
Automated Driving Concepts (DOT HS 812 555). Washington, DC: National Highway
Traffic Safety Administration.
Campbell, J. L., Richard, C. M., Brown, J. L., & McCallum, M. (2007). Crash Warning
System Interfaces: Human Factors Insights and Lessons Learned (DOT HS 810 697).
Washington, DC: National Highway Traffic Safety Administration.
Chrysler, S. T., Finley, M. D., & Trout, N. (2018). Driver Information Needs for Wrong-Way
Driving Incident Information in a Connected-Vehicle Environment. Retrieved from
https://trid.trb.org/view/1494783
Deatherage, B. H. (1972). Auditory and other sensory forms of information presentation.
In H. P. Van Cott & R. G. Kinkade (Eds.), Human Engineering Guide to Equipment
Design (Rev. ed.) (pp. 123–160). Washington, DC: U. S. Government Printing Office.
Endsley, M. R. (2016). Designing for Situation Awareness: An Approach to User-Centered
Design. Boca Raton, FL: CRC Press.
Endsley, M. R. (2017). Autonomous driving systems: A preliminary naturalistic study of
the Tesla Model S. Journal of Cognitive Engineering and Decision Making, 11(3),
225–238. doi:10.1177/1555343417695197
Jerome, C., Monk, C., & Campbell, J. (2015). Driver vehicle interface design assistance for
vehicle-to-vehicle technology applications. 24th International Technical Conference
on the Enhanced Safety of Vehicles (ESV). Washington, D.C.: National Highway Traffic
Safety Administration.
Kiefer, R., LeBlanc, D., Palmer, M., Salinger, J., Deering, R., & Shulman, M. (1999).
Development and Validation of Functional Definitions and Evaluation Procedures for
Collision Warning/Avoidance Systems (DOT HS 808 964). Washington, DC: National
Highway Traffic Safety Administration.
Lee, J. D., Hoffman, J. D., & Hayes, E. (2004). Collision warning design to mitigate driver
distraction. Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems (pp. 65–72). Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.
1.1.77.2168&rep=rep1&type=pdf
Lerner, N. & Boyd, S. (2005). On-Road Study of Willingness to Engage in Distracting
Tasks (DOT HS 809 863). Washington, DC: National Highway Traffic Safety
Administration.
Lerner, N., Kotwal, B. M., Lyons, R. D., & Gardner-Bonneau, D. J. (1996). Preliminary
Human Factors Guidelines for Crash Avoidance Warning Devices (DOT HS 808 342).
Washington, DC: National Highway Traffic Safety Administration.
Lerner, N., Robinson, E., Singer, J., Jenness, J., Huey, R., Baldwin, C., & Fitch, G. (2014).
Human Factors for Connected Vehicles: Effective Warning Interface Research
Findings. Retrieved from www.nhtsa.gov/DOT/NHTSA/NVS/Crash Avoidance/
Technical Publications/2014/812068-HumanFactorsConnectedVehicles.pdf
Llaneras, R. E., Cannon, B. R., & Green, C. A. (2017). Strategies to assist drivers in remaining
attentive while under partially automated driving. Transportation Research Record,
2663, 20–26. doi:10.3141/2663-03
Mendoza, P. A., Angelelli, A., & Lindgren, A. (2011). Ecological interface design inspired
human machine interface for advanced driver assistance systems. IET Intelligent
Transport Systems, 5(1), 53–59.
Merat, N. & Jamson, A. H. (2009). How do drivers behave in a highly automated car?
Proceedings of the International Driving Symposium on Human Factors in Driver
Assessment, Training, and Vehicle Design, 5, 514–521.
Merat, N., Seppelt, B., Louw, T., Engström, J., Lee, J.D., Johansson, E., Green, C.A.,
Kitazaki, S., Monk, C., Itoh, M., McGehee, D., Sunda, T., Unoura, K., Victor, T.,
Schieben, A., & Keinath, A. (2018). The “Out-of-the-Loop” concept in automated
driving: Proposed definition, measures and implications. Cognition, Technology &
Work, 21(1), 87–98. doi:10.1007/s10111-018-0525-8
National Highway Traffic Safety Administration. (2017). Vehicle to Vehicle Communication.
Retrieved from www.nhtsa.gov/technology-innovation/vehicle-vehicle-communication
Naujoks, F., Forster, Y., Wiedemann, K., & Neukum, A. (2017). A human-machine inter-
face for cooperative highly automated driving. In N. A. Stanton, S. Landry, G.
Di Bucchianico, & A. Vallicelli, Advances in Human Aspects of Transportation
(pp. 585–595). Berlin: Springer.
Noy, I., Shinar, D., & Horrey, W. J. (2018). Automated driving: Safety blind spots. Safety
Science, 102, 68–78.
Olaverri-Monreal, C. & Jizba, T. (2016). Human factors in the design of human–machine
interaction: An overview emphasizing V2X communication. IEEE Transactions on
Intelligent Vehicles, 1(4), 302–313. doi:10.1109/TIV.2017.2695891
Parasuraman, R. & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse.
Human Factors, 39(2), 230–253.
Petermeijer, S. M., de Winter, J. C., & Bengler, K. J. (2016). Vibrotactile displays: A sur-
vey with a view on highly automated driving. IEEE Transactions on Intelligent
Transportation Systems, 17(4), 897–907.
Prewett, M. S., Elliott, L. R., Walvoord, A. G., & Coovert, M. D. (2012). A meta-analysis
of vibrotactile and visual information displays for improving task performance. IEEE
Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews),
42(1), 123–132.
Reimer, B. (2017). Human Centered Vehicle Automation. Presented at the European New
Car Assessment Programme, Antwerp, Belgium.
Richard, C. M., Philips, B. H., Divekar, G., Bacon-Abdelmoteleb, L. P., & Jerome, C., (2015a).
Driver responses to simultaneous V2V and V2I safety critical Information in left-turn
across path scenarios. Proceedings of the 2015 Annual Meeting of the Human Factors
and Ergonomics Society (pp. 1626–1630). Santa Monica, CA: HFES.
Richard, C. M., Morgan, J. F., Bacon, L. P., Graving, J. S., Divekar, G., & Lichty, M. G.
(2015b). Multiple Sources of Safety Information from V2V and V2I: Redundancy,
Decision Making, and Trust—Safety Message Design Report. Seattle, WA: Battelle.
SAE J2802. (2010). Blind Spot Monitoring System (BSMS): Operating Characteristics and
User Interface. Warrendale, PA: SAE International.
Sheridan, T. B. & Parasuraman, R. (2005). Human-automation interaction. Reviews of Human
Factors and Ergonomics, 1, 89–129.
Song, M., McLaughlin, S., & Doerzaph, Z. (2017). An on-road evaluation of connected
motorcycle crash warning interface with different motorcycle types. Transportation
Research Part C: Emerging Technologies, 74, 34–50.
Stevens, S. (2016). Driver Acceptance of Collision Warning Applications Based on Heavy-
Truck V2V Technology. Retrieved from www.nhtsa.gov//DOT/NHTSA/NVS/Crash
Avoidance/Technical Publications/2016/812336_HeavyTruckDriverClinicAnalysis.pdf
Svenson, A. L., Stevens, S., & Guglielmi, J. (2013). Evaluating driver acceptance of heavy truck
vehicle-to-vehicle safety applications. 23rd International Technical Conference on the
Enhanced Safety of Vehicles. Retrieved from www-esv.nhtsa.dot.gov/Proceedings/23/
isv7/main.htm
Tawari, A., Sivaraman, S., Trivedi, M. M., Shannon, T., & Tippelhofer, M. (2014). Looking-in
and looking-out vision for urban intelligent assistance: Estimation of driver atten-
tive state and dynamic surround for safe merging and braking. Intelligent Vehicles
Symposium Proceedings (pp. 115–120), IEEE.
Toffetti, A., Wilschut, E. S., Martens, M. H., Schieben, A., Rambaldini, A., Merat, N., &
Flemisch, F. (2009). CityMobil: Human factor issues regarding highly automated
vehicles on eLane. Transportation Research Record: Journal of the Transportation
Research Board, 2110, 1–8. doi:10.3141/2110-01
Tsao, H.-S. J., Hall, R. W., & Shladover, S. E. (1993). Design Options for Operating Fully
Automated Highway Systems. Berkeley, CA: University of California PATH Institute
of Transportation Studies.
Victor, T. W., Tivesten, E., Gustavsson, P., Johansson, J., Sangberg, F., & Ljung Aust, M.
(2018). Automation expectation mismatch: Incorrect prediction despite eyes on threat
and hands on wheel. Human Factors, 60(8), 1095–1116. doi:10.1177/0018720818788164
Ward, N., Velazquez, M., Mueller, J., & Ye, J. (2013). Response interference under near-
concurrent presentation of safety and non-safety information. Transportation Research
Part F: Traffic Psychology and Behaviour, 21, 253–266.
16 Human–Machine Interface Design for Fitness-Impaired Populations
John G. Gaspar
University of Iowa
CONTENTS
Key Points .............................................................................................................. 359
16.1 Introduction .................................................................................................. 360
16.2 Adaptive Automation .................................................................................... 361
16.2.1 When to Adapt? ................................................................................ 362
16.2.2 How to Adapt? .................................................................................. 363
16.2.3 Invocation Authority ......................................................................... 365
16.3 A Framework for AA for Impaired Drivers ................................................. 366
16.3.1 Distraction ........................................................................................ 368
16.3.2 Drowsiness ........................................................................................ 370
16.3.3 Alcohol and Other Drugs.................................................................. 372
16.4 Conclusions ................................................................................................... 372
Acknowledgments.................................................................................................. 373
References .............................................................................................................. 373
KEY POINTS
• The human–machine interface provides the link between driver state detec-
tion and the human operator
• Using driver state information, adaptive automated systems could be
designed to adjust their demands and/or the HMI based on the capacity of
the driver
• Adaptive automation requires decisions about if, how, and when the auto-
mation, including both the vehicle systems and HMI, should adapt, and
whether the automation or human has authority to invoke changes in the
system
• Adaptive automation applied for driver impairment needs to consider the
interaction between the state of the driver and the capability of automation
16.1 INTRODUCTION
The previous chapter (Chapter 15) discussed many important design considerations
for the human–machine interface (HMI) for automated and connected vehicles.
One additional and significant concern is how that design might be impacted by
the ability to monitor driver state. Automation may indeed increase the incidence
of drivers being unprepared or incapable of safely operating the vehicle (e.g., this
Handbook, Chapter 9). For instance, recent research suggests that partial automation
(i.e., Level 2) increases visual disengagement from driving, even in the absence of
secondary tasks (Gaspar & Carney, 2019; see also Russell et al., 2018). Similar
research demonstrates an increase in the likelihood of fatigue and drowsiness
with even moderately prolonged periods of automated driving (Vogelpohl, Kühn,
Hummel, & Vollrath, 2019). Recent crashes involving partially automated vehicles
highlight the potential consequences of driver impairment and disengagement from
the dynamic driving task (e.g., NTSB, 2017).
Driver monitoring is often presented as the remedy to driver impairment in par-
tially and highly automated vehicles (see e.g., this Handbook, Chapter 11). Indeed,
in their report following the investigation of the Williston Tesla crash, the National
Transportation Safety Board recommended that driver monitoring could provide
a safeguard against driver disengagement and impairment in automated vehicles
(NTSB, 2017). Previous chapters (Chapters 9, 11) discussed approaches to driver
monitoring and their application in automated vehicles. However, simply knowing
the state of the driver is not enough to improve safety. The vehicle must adapt in
some fashion to account for the reduced capacity of the driver. This could be through
modifying the HMI (e.g., providing feedback), adjusting the vehicle systems (e.g.,
tuning lane departure warnings), or some combination of the two.
This chapter builds on discussion of driver state monitoring by discussing how
driver state information can be considered in HMI design in automated vehicles.
Specifically, we consider how information about the state of the driver can be used
to dynamically tailor the automation to the driver’s capabilities on a moment-to-
moment basis. This dynamic, state-based adaptation by the HMI is referred to as
adaptive automation (AA). Unlike static automation, whose functionality remains
constant when engaged, AA flexibly adjusts the HMI and level of automation based
on information about the state of the human operator (Rouse, 1988). AA has been
applied in a variety of complex tasks involving control distribution between human
operators and automated aides, from monitoring air traffic control displays (Kaber,
Perry, Segall, McClernon, & Prinzel III, 2006) to controlling unmanned vehicles (de
Visser & Parasuraman, 2011).
This chapter is divided into two sections. First, we provide an overview of AA
and the important design decisions that should be considered in its application to
driving. We then present a framework for applying AA to driving, specifically driver
impairment. The framework considers both the capability of the automation and
capacity of the driver, as well as interactions between the two. This framework is
considered across common modes of impairment, specifically distraction, drowsi-
ness, and drugs and alcohol (also see this Handbook, Chapter 9).
16.2 ADAPTIVE AUTOMATION
AA refers to systems that dynamically adjust the level of automation or HMI based
on the state of the operator (Hancock, Chignell, & Lowenthal, 1985; Rouse, 1988).
This contrasts with static automation, which maintains the same level of automation
independent of operator or environmental state. The goal of an adaptive system is
to tailor the level of automation to meet the needs of the operator and maintain safe
operation (Parasuraman, Bahri, Deaton, Morrison, & Barnes, 1992). With impair-
ment (e.g., distraction, drowsiness, drugs) this involves dynamically adjusting the
HMI, function of the automated systems, or both, in order to mitigate the detrimental
effects of disengagement from the driving task. Note that, with respect to workload,
an adaptive system may shift more control to the human operator in low-workload
conditions, where automation complacency is likely, in order to increase arousal
(see also, this Handbook, Chapter 6).
Adaptive systems, as depicted in Figure 16.1, consist of two components, a state
monitor and task manager. The state monitor detects and classifies the state of the
human operator (and perhaps also the environment; this Handbook, Chapter 11).
Operator state information is then fed forward to a task manager, whose role it is
to adjust the allocation of tasks between the human operator and system automa-
tion (also see this Handbook, Chapter 8). The HMI serves as the link between the
operator and automated system. For instance, an HMI might provide feedback to a
distracted driver to return his gaze to the forward road. The outcome of behavior
(e.g., lane-keeping) and the updated state of the driver are then fed back into the
system.
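A minimal closed-loop sketch of this architecture, assuming hypothetical sensor fields, state labels, and allocation rules, might look as follows in Python.

    class StateMonitor:
        """Detects and classifies the operator's state from sensor data."""
        def classify(self, sensors):
            # e.g., glance behavior; a real monitor would fuse many channels.
            return "distracted" if sensors["eyes_off_road_s"] > 2.0 else "ready"

    class TaskManager:
        """Reallocates tasks between driver and automation given the state."""
        def allocate(self, driver_state):
            if driver_state == "distracted":
                return {"lane_keeping": "automation",
                        "hmi": "feedback: return eyes to the road"}
            return {"lane_keeping": "driver", "hmi": "status display only"}

    def control_cycle(sensors):
        state = StateMonitor().classify(sensors)
        allocation = TaskManager().allocate(state)
        # Behavioral outcome and updated driver state feed back into the loop.
        return state, allocation

    print(control_cycle({"eyes_off_road_s": 3.1}))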
A considerable body of research from different domains shows benefits of AA over
static (i.e., non-adaptive) automation (see Scerbo, 2008). For example, Parasuraman,
Mouloua, and Hilburn (1999) compared AA that provided adaptive aiding and adap-
tive task allocation against static automation in a simulated flight task. Adaptive
aiding consisted of the system controlling one dimension of aircraft position in high-
workload situations (i.e., takeoff and landing). Task control was also temporarily
shifted back to the human operator in lower-workload conditions (i.e., the middle
portion of the flight). Compared with a non-adaptive control group, AA improved
tracking performance and reduced subjective workload.
While AA offers advantages over static automation across a number of tasks,
HMI designers face several important questions in implementing AA in vehicles.
FIGURE 16.1 A framework for AA. The human–machine interface links the human
operator to the automated system.
These include when the automation should adapt, what form that adaptation should
take, and whether the human or automation has invocation authority. We next con-
sider these questions in the context of vehicle automation.
TABLE 16.1
Levels of Automated Control
useful in teaching drivers about the edge cases that define the operational design
domain of a system (see this Handbook, Chapter 18).
These degrees of automated control can be dynamically applied across different
components of an information processing framework consisting of four stages: infor-
mation acquisition, analysis, decision-making, and action execution (see Figure 16.2;
Parasuraman, Sheridan, and Wickens, 2000; see also, this Handbook, Chapter 6). A
distinction can be made between lower- and higher-order processes in this frame-
work based on the degree to which information must be cognitively manipulated.
Information acquisition and action execution are considered lower-order processes
and analysis and decision-making, requiring greater cognitive processing, are con-
sidered higher-order functions.
Kaber, Wright, Prinzel III, and Clamann (2005) considered the potential implica-
tions of applying AA to each stage of the information processing framework in an
air traffic control monitoring task. Participants were instructed to locate and “clear”
aircraft on a control display before they reached a certain location. Participants
could only view a portion of the display through a viewing portal and had to shift
the portal to track multiple aircraft. Operator state was evaluated via performance
on a secondary gauge monitoring task and used to trigger AA. Participants experi-
enced four automation conditions and a manual condition. Acquisition automation
controlled movement of the viewing portal. Analysis automation provided a table of
all active aircraft. Decision-making automation prioritized aircraft to clear. Action
implementation automation automatically cleared aircraft the operator had selected.
Kaber et al. (2005) found that AA applied to lower-order information process-
ing stages (acquisition and execution) improved performance relative to manual
control. However, applying AA to higher-order functions actually degraded perfor-
mance. Operators had greater difficulty returning to manual control in the analysis
and decision AA conditions. Kaber et al. (2005) suggest this effect may be due to
the transparency of automation or how easy it is for the operator to assess the reli-
ability of the AA (i.e., how well the automation is working at any point in time).
With lower-level functions, such as automatically clearing selected aircraft, it is easy
for operators to determine whether the automation is active and successful. With
higher-order AA, additional processing is necessary to evaluate the automation’s
decisions against the mental model of the operator. Furthermore, if decision-making
automation repeatedly makes and executes choices in a complex environment,
it may be difficult for operators to maintain a clear understanding of the situation
(Parasuraman et al., 2000). Similar costs of automation have been observed with
high levels of automated information processing, such as display cueing (Yeh,
Wickens, & Seagull, 1999).
FIGURE 16.2 Automation applied to different information processing stages. Dashed lines
represent the maximum degree of AA at each stage.
The dashed lines in Figure 16.2 represent the extent to which a processing stage
might be maximally automated, using the automation continuum from full manual
control to full automation (see Table 16.1). Both lower-order processes can be highly
automated (Levels 6.5–10), although, as noted earlier, insight into when and why
automation performs a specific function might be helpful in improving driver under-
standing and awareness of automation functioning. Automation applied to higher-
order processes, analysis and decision-making, is more likely to reduce situation
awareness and take the driver out of the control loop (Scerbo, 2008). Thus high
decision-making autonomy is only appropriate to the extent the driver can disen-
gage from the driving task (see also, this Handbook, Chapters 7, 21). If drivers must
remain aware of the driving situation (i.e., conditional automation), it is important
that drivers at least have insight into the functions of the automation (Level 6.5
and below).
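One way to encode this "dashed line" idea is sketched below; the numeric caps are an illustrative reading of Figure 16.2 and the automation continuum of Table 16.1, not published values.

    MAX_LEVEL = {                # 1 = full manual control .. 10 = full automation
        "acquisition": 10,       # lower-order: may be highly automated
        "analysis": 6.5,         # higher-order: keep the operator informed
        "decision_making": 6.5,  # higher-order: out-of-the-loop risk
        "action_execution": 10,  # lower-order: may be highly automated
    }

    def clamp_level(stage, requested_level, driver_may_disengage=False):
        """Cap automation per stage unless full disengagement is permitted."""
        cap = 10 if driver_may_disengage else MAX_LEVEL[stage]
        return min(requested_level, cap)

    print(clamp_level("decision_making", 9))                             # -> 6.5
    print(clamp_level("decision_making", 9, driver_may_disengage=True))  # -> 9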
Parasuraman et al. (2000) outlined several other important considerations in how
automation could be applied to the information processing framework. First, and
most importantly, the resulting state of the joint driver–vehicle system should be
safer with automation applied than if the driver was in full manual control. That is,
the addition (or adaptation) of automation should increase safety and decrease the
likelihood and severity of crashes. The goal of AA is to achieve a desired level of
operator workload, not to obviate the dynamic driving task from the human operator
in situations where doing so diminishes safety.
A designer must also consider the demands involved in a particular situation and
the costs associated with a failure. In time-critical situations with insufficient time
for the operator to respond, automated decision-making and action implementation
may be ideal (Scerbo, 2008). Automatic emergency braking (AEB) is an example of
such a situation, in that the vehicle can respond faster and with harder braking than a
human driver possibly could. In less time-constrained high-risk situations, the extent
of decision-making performed by the automation depends on the capability of the
system and whether the driver is expected to intervene.
The major limitation of adaptable automation is that operators often lack insight
into their own state. Humans are poor judges of their own mental and physical capac-
ity and may therefore choose to invoke automation (or remain in manual control)
at inappropriate times or fail to invoke automation when it is most needed, such as
under high workload (Horrey, Lesch, Mitsopoulos-Rubens, & Lee, 2015; Morris &
Rouse, 1986; Sarter & Woods, 1994). Indeed, Neubauer, Matthews, Langheim, and
Saxby (2012) found that voluntary invocation of automation failed to reduce fatigue
and stress in a sample of fatigued drivers. Humans are poor judges of the extent
to which impairment states might negatively impact performance and safety. For
example, Horrey, Lesch, and Garabet (2008) showed that drivers were poorly cali-
brated to the detrimental effects of distraction on closed-course driving. Therefore,
it seems advantageous that an adaptive vehicle interface assume invocation authority
in instances of driver impairment, particularly during safety-critical tasks.
Inagaki, Itoh, and Nagai (2007) proposed a situation-adaptive form of AA.
The idea is that in certain situations, particularly those with high degrees of time-
criticality where human operators may be incapable of responding fast enough, the
automation should make decisions about when and how to respond (if it is capable
of doing so). An important caveat of this idea is the importance of the automation
informing the driver of its intentions, if a level of joint control is expected (see also,
this Handbook, Chapter 8).
FIGURE 16.3 Relationship between capability, impairment, and vehicle adaptation, repre-
sented by the transition across shaded regions.
FIGURE 16.4 Relationship between human operator capacity, automation capability, and
demands of the dynamic driving task.
the human operator may be limited by impairment. In such situations, the automation
must intervene and control more of the driving task. If the automation fails or is inca-
pable of intervening, this leaves a safety gap, a portion of the dynamic driving task
not accounted for by either the automation or human operator. Yamani and Horrey
(2018) applied this framework of shared control to the individual information pro-
cessing stages (Figure 16.2). This model predicts how drivers might deploy atten-
tion across different levels of automation, considering varying levels of distributed
control.
The goal of AA in automated vehicles is then to prevent the driver from reaching
a level of impairment that exceeds the capability of the automated systems to control
the driving task.
Consider two examples with a drowsy driver, first a vehicle with no automation
and second a highly automated vehicle. In the first example, the vehicle is incapable
of subsuming any portion of the driving task. The state detection system must there-
fore monitor the driver, and the task manager should intervene before the driver
reaches a level of impairment that would leave a safety gap between human and auto-
mated capabilities. For example, take a drowsy driver who is considering starting
a drive. A vehicle with low automation might warn the driver before the drive begins,
because the automation is not capable of controlling the vehicle should the driver fall
asleep. A highly automated vehicle, on the other hand, may be capable of perform-
ing the entire driving task. Such a vehicle might therefore allow the drowsy driver to
disengage entirely (i.e., fall asleep), because the automation is capable of performing
all tasks without leaving a gap in safety. Human input may in fact be harmful in
such a situation, given how poorly drivers estimate their drowsiness levels (FHWA,
1998), and the responsibility of the AA would be to block the impaired driver from
retaking control.
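The scalar sketch below is one deliberately simplified way to express this logic; treating driver capacity, automation capability, and task demand as directly comparable numbers is an assumption made purely for illustration.

    def adapt(driver_capacity, automation_capability, task_demand):
        """Choose an adaptation so the joint system covers the task demand."""
        if driver_capacity >= task_demand:
            return "monitor only"
        if automation_capability >= task_demand:
            # Automation can cover the shortfall; it may also need to block
            # an impaired driver from retaking control.
            return "automation assumes control"
        if driver_capacity + automation_capability >= task_demand:
            return "shared control plus warnings to the driver"
        return "safety gap: escalate (warn, slow, minimal-risk maneuver)"

    print(adapt(driver_capacity=0.2, automation_capability=0.5, task_demand=1.0))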
In specifying these impairment thresholds for adapting automation, the design
must also strike a balance between safety and driver acceptance. Drivers must under-
stand the correspondence between the impairment thresholds and changes in per-
formance and safety. That is, they must trust the system to identify when driving
is no longer safe. Changes in the HMI in situations the driver does not perceive as
alarming are referred to as nuisance alerts (Kiefer et al., 1999). Nuisance warnings
have a direct negative impact on trust and subsequent willingness to use a system
(Bliss & Acton, 2003; Lee, Hoffman, & Hayes, 2004). Appropriate feedback about
the state and function of automation is also critical to engender operator trust in both
the state detection system and the automated vehicle control (Lee & See, 2004; this
Handbook, Chapter 15).
In situations of joint human–automation control (i.e., partial automation), as in
Figure 16.4, it is also important to consider the relationship between the expectations
of the driver and the automated vehicle. For example, Inagaki et al. (2007) found
that an action support system that automatically executed a response maneuver was
effective at avoiding collisions. However, the system was, in general, not accepted
by drivers. The authors posit that this was because the behaviors of the automation
differed from the ways the driver expected the automation to behave, suggesting that
the intentions of the adaptive system should match those of the driver. Similarly,
in situations where the human operator is unable to detect unsafe levels of impair-
ment or unwilling to alter unsafe behavior, automated intervention may be necessary
(Saito, Itoh, & Inagaki, 2016).
16.3.1 Distraction
Distraction can be defined as the diversion of attention away from the tasks nec-
essary for safe driving (Lee, Young, & Regan, 2008). Much research in the last
20 years has explored the effects of distraction on driver performance and safety.
In a naturalistic study of partially automated driving, Gaspar and Carney (2019) found a
significant percentage of individual glances and off-road interactions that exceeded
established thresholds for manual driving (i.e., 2 and 12 seconds, respectively).
Future research must establish how long is too long to look away with different
levels of automation capability.
16.3.2 Drowsiness
Unlike distraction, which represents a relatively discrete disengagement from driv-
ing, drowsiness is a continuous and progressive state of impairment (also see this
Handbook, Chapter 9). That is, over the course of a typical trip, drowsy drivers will
become progressively drowsier (though see Williamson et al., 2011). Drowsiness can
be simply defined as a state of reduced alertness associated with the inclination to
fall asleep (Wierwille, 1995). At the early stages of drowsiness, behavioral changes
such as increased reaction time manifest themselves (e.g., Kozak et al., 2005). In
later stages, drivers actually begin to momentarily fall asleep, a phenomenon known
as a microsleep event (Dinges, 1995). These events are marked by long (>500 ms)
eyelid closures, resulting in clearly degraded driving performance, particularly the
ability to maintain lateral vehicle control and heightened risk of roadway departures
(Boyle, Tippin, Paul, & Rizzo, 2008).
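As an illustration of how the >500 ms closure criterion might be operationalized, the sketch below counts microsleep events in a stream of eye-openness samples; the sampling rate and the openness threshold are assumptions for the example.

    SAMPLE_HZ = 60
    MICROSLEEP_S = 0.5       # eyelid closed for longer than 500 ms
    CLOSED_THRESHOLD = 0.2   # eye openness below this counts as closed

    def count_microsleeps(openness_samples):
        """openness_samples: per-frame eye openness values in [0, 1]."""
        events, run = 0, 0
        for o in openness_samples:
            run = run + 1 if o < CLOSED_THRESHOLD else 0
            if run == int(MICROSLEEP_S * SAMPLE_HZ):
                events += 1  # count once when the 500 ms criterion is met
        return events

    blink = [0.1] * 10       # ~167 ms closure: a normal blink, not counted
    microsleep = [0.1] * 45  # 750 ms closure: one microsleep event
    print(count_microsleeps([1.0] * 30 + blink + [1.0] * 30 + microsleep))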
Adaptive drowsiness countermeasures have mostly focused on providing
feedback to drivers based on either physical state or driving performance (see this
Handbook, Chapter 9, 11). Research suggests adaptive in-vehicle countermeasures
can be effective in either mitigating or compensating for performance decre-
ments related to drowsiness. In a simulator study, Gaspar et al. (2017) tested the
effectiveness of several simple state-based drowsiness countermeasures, which
were triggered based on the output of a steering-based drowsiness detection
algorithm (see also, Atchley & Chan, 2011; Berka et al., 2005). The countermeasures
consisted of either auditory-visual or haptic alerts and were either discrete (single
stage) or staged (multiple stages of increasing urgency). These feedback warnings,
and particularly warnings that escalated in severity with continued evidence of
drowsiness, reduced the frequency of drowsy lane departures compared to a control
group with no adaptive mitigation. The warnings in this study can be considered a
fairly straightforward type of feedback, similar to systems available in production
vehicles.
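A hypothetical sketch of the staged-alert logic described above follows, in Python: each new detection escalates the alert, and a long quiet interval resets the sequence. The stage names, reset window, and class structure are illustrative assumptions, not the design used by Gaspar et al. (2017).

```python
class StagedDrowsinessAlert:
    """Escalates through alert stages on repeated drowsiness detections;
    a long quiet interval resets the sequence to the lowest stage."""
    STAGES = ["visual_advisory", "auditory_warning", "haptic_urgent"]

    def __init__(self, reset_window_s=120.0):
        self.reset_window_s = reset_window_s
        self.stage = -1
        self.last_detection_s = None

    def on_drowsiness_detected(self, t_s):
        """Called each time the detection algorithm fires at time t_s;
        returns the alert modality to present."""
        if (self.last_detection_s is not None
                and t_s - self.last_detection_s > self.reset_window_s):
            self.stage = -1                    # driver recovered; start over
        self.stage = min(self.stage + 1, len(self.STAGES) - 1)
        self.last_detection_s = t_s
        return self.STAGES[self.stage]

alert = StagedDrowsinessAlert()
print([alert.on_drowsiness_detected(t) for t in (10.0, 40.0, 70.0, 400.0)])
# ['visual_advisory', 'auditory_warning', 'haptic_urgent', 'visual_advisory']
```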
Kozak and colleagues (2006) examined the effectiveness of different modalities
of lane departure warnings for drowsy drivers. Warnings included steering wheel
vibration, simulated rumble-strip sounds, and a heads-up display paired with steer-
ing wheel torque. Each of these warnings was effective for drowsy drivers, reducing
response time to lane departures and decreasing the magnitude of lane excursions
relative to baseline. May, Baldwin, and Parasuraman (2006) found that auditory
forward collision warnings could reduce the chance of a head-on crash for drivers
showing signs of active task-induced fatigue.
Higher levels of AA have also been employed successfully in the context of drowsy
driving. Saito et al. (2016) studied the effectiveness of an adaptive lane-keeping
assistance system with a dual control scheme for identifying driver drowsiness and
preventing lane departure accidents.
16.4 CONCLUSIONS
This chapter provides a framework for how adaptive automated systems can interact
with impaired driving populations, using the relations among the type and degree
of driver impairment, the information processing stage at which an intervention is
needed, and the capability of the vehicle technology to identify the specific type of
interaction that is needed. This framework considers the preceding body of research
on AA in a number of tasks. In addition to the capability of the automation itself,
designers of adaptive HMIs for automated vehicles need to consider how changes in
automation will be invoked, when adaptation will occur, and to what degree various
components of the driving task will be automated.
There are several key points that should be considered in this discussion as they
relate to driver impairment. First, it is important that research explore how
drivers will respond to these different interventions and how the joint human–vehicle
system will behave under different task conditions. As Parasuraman et al. (2000)
note, an automated system is only beneficial to the extent that the final human–
machine relationship improves task performance and safety.
Second, it is important to consider the degree to which drivers accept and trust
different adaptive interfaces and behaviors. This will also require under-
standing the impact of factors like feedback and automation transparency in the
design of adaptive interfaces. As this chapter shows, there are complex interactions
between a number of factors that must be considered by HMI designers. These HMI
design decisions have consequences for whether drivers will ultimately want to use
particular systems.
AA has the potential to leverage exciting new developments in driver monitor-
ing technology to make driving safer and more enjoyable for fitness-impaired popu-
lations. Yet driver state information is only useful to the extent that it can be used to
implement an HMI that draws on the capability of automated vehicle systems to
complement or compensate for driver capacity.
ACKNOWLEDGMENTS
The author would like to thank William Horrey and Donald Fisher for their insight-
ful and constructive feedback on earlier drafts of this chapter. The author would also
like to thank several colleagues for discussion that led to the ideas outlined in this
chapter, including Cher Carney for discussion of visual distraction in automation,
Timothy Brown and Chris Schwarz for considering the role of driver monitoring in
automation and investigating the efficacy of different countermeasures for drowsi-
ness, and Daniel McGehee for discussions regarding trust and driver acceptance of
driver monitoring technology in automated vehicles.
REFERENCES
Atchley, P. & Chan, M. (2011). Potential benefits and costs of concurrent task engagement
to maintain vigilance: A driving simulator investigation. Human Factors, 53(1), 3–12.
Bailey, N. R., Scerbo, M. W., Freeman, F. G., Mikulka, P. J., & Scott, L. A. (2006). Comparison
of a brain-based adaptive system and a manual adaptable system for invoking automa-
tion. Human Factors, 48(4), 693–709.
Baldwin, C. L., Roberts, D. M., Barragan, D., Lee, J. D., Lerner, N., & Higgins, J. S. (2017).
Detecting and quantifying mind wandering during simulated driving. Frontiers in
Human Neuroscience, 11, 406.
Berka, C., Levendowski, D., Westbrook, P., Davis, G., Lumicao, M. N., Ramsey, C., …
Olmstead, R. E. (2005). Implementation of a closed-loop real-time EEG-based drowsi-
ness detection system: Effects of feedback alarms on performance in a driving simula-
tor. 1st International Conference on Augmented Cognition (pp. 151–170), Las Vegas,
NV.
Bliss, J. P. & Acton, S. A. (2003). Alarm mistrust in automobiles: How collision alarm reli-
ability affects driving. Applied Ergonomics, 34(6), 499–509.
Boyle, L. N., Tippin, J., Paul, A., & Rizzo, M. (2008). Driver performance in the moments
surrounding a microsleep. Transportation Research Part F: Traffic Psychology and
Behaviour, 11(2), 126–136.
Brown, T. L., Milavetz, G., Gaffney, G., & Spurgin, A. (2018). Evaluating drugged driving:
Effects of exemplar pain and anxiety medications. Traffic Injury Prevention, 19(supp1),
S97–S103.
Compton, R. (2017). Marijuana-Impaired Driving - A Report to Congress (Report No. DOT
HS-812-440). Washington, DC: National Highway Traffic Safety Administration.
de Visser, E. & Parasuraman, R. (2011). Adaptive aiding of human-robot teaming: Effects
of imperfect automation on performance, trust, and workload. Journal of Cognitive
Engineering and Decision Making, 5(2), 209–231.
Dijksterhuis, C., Stuiver, A., Mulder, B., Brookhuis, K. A., & de Waard, D. (2012). An adap-
tive driver support system: User experiences and driving performance in a simulator.
Human Factors, 54(5), 772–785.
Dinges, D. F. (1995). An overview of sleepiness and accidents. Journal of Sleep Research,
4, 4–14.
Dingus, T. A., Guo, F., Lee, S., Antin, J. F., Perez, M., Buchanan-King, M., & Hankey, J.
(2016). Driver crash risk factors and prevalence evaluation using naturalistic driving
data. Proceedings of the National Academy of Sciences, 113(10), 2636–2641.
Donmez, B., Boyle, L. N., & Lee, J. D. (2007). Safety implications of providing real-time
feedback to distracted drivers. Accident Analysis & Prevention, 39(3), 581–590.
Donmez, B., Boyle, L. N., & Lee, J. D. (2008). Mitigating driver distraction with retrospective
and concurrent feedback. Accident Analysis & Prevention, 40(2), 776–786.
Federal Highway Administration. (1998). The Driver Fatigue and Alertness Study.
Washington, DC: Federal Highway Administration.
Freeman, F. G., Mikulka, P. J., Prinzel, L. J., & Scerbo, M. W. (1999). Evaluation of an adap-
tive automation system using three EEG indices with a visual tracking task. Biological
Psychology, 50(1), 61–76.
Gaspar, J. G., Brown, T. L., Schwarz, C. W., Lee, J. D., Kang, J., & Higgins, J. S. (2017).
Evaluating driver drowsiness countermeasures. Traffic Injury Prevention, 18(sup1),
S58–S63.
Gaspar, J. & Carney, C. (2019). The effect of partial automation on driver attention: A natural-
istic driving study. Human Factors, 61(8), 1261–1276. doi:10.1177/0018720819836310.
Hall, W. & Solowij, N. (1998). Adverse effects of cannabis. The Lancet, 352(9140), 1611–1616.
Hancock, P. A., Chignell, M. H., & Lowenthal, A. (1985). An adaptive human-machine
system. Proceedings of the IEEE Conference on Systems, Man and Cybernetics, 15,
627–629.
Horrey, W. J., Lesch, M. F., & Garabet, A. (2008). Assessing the awareness of performance
decrements in distracted drivers. Accident Analysis & Prevention, 40(2), 675–682.
Horrey, W. J., Lesch, M. F., Mitsopoulos-Rubens, E., & Lee, J. D. (2015). Calibration of skill
and judgment in driving: Development of a conceptual framework and the implications
for road safety. Accident Analysis & Prevention, 76, 25–33.
Inagaki, T. & Furukawa, H. (2004). Computer simulation for the design of authority in the
adaptive cruise control systems under possibility of driver’s over-trust in automation.
Proceedings of the IEEE International Conference on Systems, Man and Cybernetics,
4, 3932–3937.
Inagaki, T., Itoh, M., & Nagai, Y. (2007). Support by warning or by action: Which is appropri-
ate under mismatches between driver intent and traffic conditions? IEICE Transactions
on Fundamentals of Electronics, Communications and Computer Sciences, 90(11),
2540–2545.
Kaber, D. B., Perry, C. M., Segall, N., McClernon, C. K., & Prinzel III, L. J. (2006). Situation
awareness implications of adaptive automation for information processing in an air
traffic control-related task. International Journal of Industrial Ergonomics, 36(5),
447–462.
Kaber, D. B. & Riley, J. M. (1999). Adaptive automation of a dynamic control task based
on secondary task workload measurement. International Journal of Cognitive
Ergonomics, 3(3), 169–187.
Kaber, D. B., Wright, M. C., Prinzel III, L. J., & Clamann, M. P. (2005). Adaptive automation
of human-machine system information-processing functions. Human Factors, 47(4),
730–741.
Kiefer, R. J., LeBlanc, D., Palmer, M. D., Salinger, J., Deering, R. K., & Shulman, M. (1999).
Development and Validation of Functional Definitions and Evaluation Procedures for
Collision Warning/Avoidance Systems (No. DOT-HS-808-964). Washington, DC: US
Department of Transportation. National Highway Traffic Safety Administration.
Klauer, S. G., Dingus, T. A., Neale, V. L., Sudweeks, J. D., & Ramsey, D. J. (2006). The
Impact of Driver Inattention on Near-Crash/Crash Risk: An Analysis Using the 100-
Car Naturalistic Driving Study Data (Report No. DOT HS 810 594). Washington, DC:
National Highway Traffic Safety Administration.
Kozak, H., Artz, B., Blommer, M., Cathey, L., Curry, R., & Greenberg, J. (2005). Evaluation
of HMI for Lane departure Warning Systems for Drowsy Drivers: A VIRTTEX
Simulator Study. Dearborn, MI: Ford Motor Company.
Kozak, K., Pohl, J., Birk, W., Greenberg, J., Artz, B., Blommer, M., … Curry, R. (2006).
Evaluation of lane departure warnings for drowsy drivers. Proceedings of the Human
Factors and Ergonomics Society Annual Meeting, 50, 2400–2404.
Lee, J. D., Hoffman, J. D., & Hayes, E. (2004). Collision warning design to mitigate driver
distraction. Proceedings of the SIGCHI Conference on Human factors in Computing
Systems (pp. 65–72). New York: ACM.
Lee, J. D. & See, K. A. (2004). Trust in automation: Designing for appropriate reliance.
Human Factors, 46(1), 50–80.
Lee, J. D., Young, K. L., & Regan, M. A. (2008). Defining driver distraction. In M. Regan,
J.D. Lee, & K. Young (Eds.), Driver Distraction: Theory, Effects, and Mitigation. Boca
Raton, FL: CRC Press.
May, J. F., Baldwin, C. L., & Parasuraman, R. (2006). Prevention of rear-end crashes in driv-
ers with task-induced fatigue through the use of auditory collision avoidance warnings.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 50(22),
2409–2413. Los Angeles, CA: Sage Publications.
Morris, N. M. & Rouse, W. B. (1986). Adaptive Aiding for Human-Computer Control:
Experimental Studies of Dynamic Task Allocation (No. TR-3). Burlington, MA:
Alphatech Inc.
National Highway Traffic Safety Administration. (2012). Visual-Manual NHTSA Driver
Distraction Guidelines for In-Vehicle Electronic Devices. Washington, DC: National
Highway Traffic Safety Administration.
National Transportation Safety Board. (2017). Collision between a Car Operating with
Automated Vehicle Control Systems and a Tractor-Semitrailer Truck Near Williston,
Florida May 7, 2016 (Report No. NTSB/HAR-17/02). Washington, DC: National
Transportation Safety Board.
Neubauer, C., Matthews, G., Langheim, L., & Saxby, D. (2012). Fatigue and voluntary utiliza-
tion of automation in simulated driving. Human Factors, 54(5), 734–746.
Parasuraman, R., Bahri, T., Deaton, J., Morrison, J., & Barnes, M. (1992). Theory and Design
of Adaptive Automation in Aviation Systems (Report No. NAWCADWAR-92033-60).
Warminster, PA: Naval Air Warfare Center.
Parasuraman, R., Mouloua, M., & Hilburn, B. (1999). Adaptive aiding and adaptive task
allocation enhance human-machine interaction. In M.W. Scerbo & M. Mouloua (Eds.),
Automation Technology and Human Performance: Current Research and Trends
(pp. 119–123). Mahwah, NJ: Lawrence Erlbaum.
Parasuraman, R. & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse.
Human Factors, 39(2), 230–253.
Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels
of human interaction with automation. IEEE Transactions on Systems, Man, and
Cybernetics-Part A: Systems and Humans, 30(3), 286–297.
Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30(4),
431–443.
Russell, S., Blanco M., Atwood, J., Schaudt, W. A., Fitchett, V. L., & Tidwell, S. (2018).
Naturalistic Study of Level 2 Driving Automation Functions (Report DOT HS 812
642). Washington, DC: National Highway Traffic Safety Administration.
Saito, Y., Itoh, M., & Inagaki, T. (2016). Driver assistance system with a dual control scheme:
Effectiveness of identifying driver drowsiness and preventing lane departure accidents.
IEEE Transactions on Human-Machine Systems, 46(5), 660–671.
Sarter, N. B. & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experi-
mental study of pilots’ model and awareness of the flight management system. The
International Journal of Aviation Psychology, 4(1), 1–28.
Scerbo, M. W. (2008). Adaptive automation. In R. Parasuraman & M. Rizzo (Eds.),
Neuroergonomics: The Brain at Work (pp. 239–252). Oxford: Oxford University Press.
Schwarz, C., Brown, T. L., Gaspar, J., Marshall, D., Lee, J., Kitazaki, S., & Kang, J. (2015).
Mitigating drowsiness: Linking detection to mitigation. Proceedings of the 24th ESV
Conference, Gothenburg, Sweden.
Sheridan, T. B. (1992). Telerobotics, Automation, and Human Supervisory Control. MIT
Press.
Strayer, D. L., Watson, J. M., & Drews, F. A. (2011). Cognitive distraction while multitasking
in the automobile. Psychology of Learning and Motivation, 54, 29–58.
Victor, T., Dozza, M., Bärgman, J., Boda, C. N., Engström, J., Flannagan, C., … Markkula, G.
(2015). Analysis of Naturalistic Driving Study Data: Safer Glances, Driver Inattention,
and Crash Risk (SHRP 2 Report S2-S08A-RW-1). Washington, DC: National Academy
of Sciences.
Vogelpohl, T., Kühn, M., Hummel, T., & Vollrath, M. (2019). Asleep at the automated
wheel—Sleepiness and fatigue during highly automated driving. Accident Analysis &
Prevention, 126, 70–84.
Wierwille, W. W. (1995). Overview of research on driver drowsiness definition and driver
drowsiness detection. Proceedings: International Technical Conference on the
Enhanced Safety of Vehicles (Vol. 1995, pp. 462–468). Washington, D.C.: National
Highway Traffic Safety Administration.
Williamson, A., Lombardi, D. A., Folkard, S., Stutts, J., Courtney, T. K., & Connor, J. L.
(2011). The link between fatigue and safety. Accident Analysis & Prevention, 43(2),
498–515.
Yamani, Y. & Horrey, W. J. (2018). A theoretical model of human-automation interac-
tion grounded in resource allocation policy during automated driving. International
Journal of Human Factors and Ergonomics, 5(3), 225–239.
Yeh, M., Wickens, C. D., & Seagull, F. J. (1999). Target cuing in visual search: The effects of
conformality and display location on the allocation of visual attention. Human Factors,
41(4), 524–542.
Zaouk, A. K., Wills, M., Traube, E., & Strassburger, R. (2015). Driver alcohol detection sys-
tem for safety (DADSS)-A status update. 24th Enhanced Safety of Vehicles Conference.
Gothenburg, Sweden: ESV.
17 Automated Vehicle
Design for People
with Disabilities
Rebecca A. Grier
Independent
CONTENTS
Key Points .............................................................................................................. 378
17.1 Introduction ................................................................................................. 378
17.2 Medical Model of Disabilities ...................................................................... 379
17.3 Social Model of Disabilities ......................................................................... 379
17.3.1 Nature of the Task............................................................................. 379
17.3.2 Individual Differences ...................................................................... 380
17.3.3 Summary of the Social Model .......................................................... 381
17.4 Universal Design........................................................................................... 381
17.4.1 Equitable Use .................................................................................... 382
17.4.2 Flexibility in Use .............................................................................. 382
17.4.3 Simple & Intuitive Use ..................................................................... 382
17.4.4 Perceptible Information .................................................................... 383
17.4.5 Tolerance for Error ........................................................................... 383
17.4.6 Low Physical Effort .......................................................................... 384
17.4.7 Size and Space for Approach and/or Use ......................................... 384
17.4.8 Universal Design Summary.............................................................. 384
17.5 How Humans Will Interact with FAVs ......................................................... 384
17.5.1 Non-FAV Task Flow ......................................................................... 384
17.5.2 FAV Task Flow.................................................................................. 385
17.5.3 Negotiating Stopping Location for Pick-Up & Drop-Off ................. 386
17.5.3.1 People Who Use Wheelchairs............................ 388
17.5.3.2 Other Mobility Impairments .............................. 388
17.5.3.3 Visual Impairments ........................................... 389
17.5.3.4 Final Thoughts on Stopping Locations for Pick-Up
and Drop-Off .................................................... 389
17.5.4 Considerations in Unusual/Emergency Situations............................ 390
17.5.5 Cabin Design Considerations ........................................................... 391
17.6 Conclusion ................................................................................................... 391
References .............................................................................................................. 392
KEY POINTS
• Level 4/5 vehicles may remove the requirement of a driver’s license, poten-
tially allowing people with certain disabilities to travel in passenger vehicles
independently.
• The principles of universal design should be considered for automated vehi-
cles to potentially enhance the utility for people with disabilities.
• There are standards that exist for Human–Machine Interfaces (HMI) to be
accessible to people with certain disabilities.
• Negotiating pick-up and drop-off locations is a key factor in making auto-
mated vehicles user friendly for people with disabilities.
• Considerations of how and what to communicate in emergency situations is
an additional aspect to consider in enhancing automated vehicle accessibil-
ity to people with certain disabilities.
17.1 INTRODUCTION
Currently, to operate a vehicle, one must obtain a driver’s license. The specific
requirements for obtaining a license vary by jurisdiction. Generally speaking,
one must have a certain level of visual acuity, pass a knowledge test, and pass a
skills test. Due to these requirements, individuals with certain visual, cognitive,
or motor impairments are not able to obtain a driver’s license. As a result, they
are required to rely on others to travel between locations. This situation creates
an additional logistical burden for these individuals. This logistical burden makes
it challenging to hold a job, attend medical appointments, and generally partici-
pate in commerce and society (Bureau of Transportation Statistics, 2003; World
Health Organization, 2016). Fully Automated Vehicles (FAVs), in which a human
is not expected to take over lateral (i.e., steering) or longitudinal (i.e., accelera-
tion, speed, and braking) control of the vehicle at any time, have the potential of
eliminating the need for a driver’s license. For the purposes of this chapter, FAV
indicates both SAE Level 4 and 5 (SAE, 2016) vehicles that do not have driver
controls.
To be clear, it is likely that some SAE Level 4 and 5 vehicles for private owner-
ship will have driver controls. However, it is unlikely that people with disabilities
who cannot obtain a driver’s license will be able to operate vehicles with driver
controls. Similarly, it also appears that the first SAE Level 4 vehicles will be part
of a ride hailing fleet rather than available for purchase (Walker, 2019). As such,
this chapter focuses on the design of SAE Level 4 and 5 vehicles that do not have
driver controls.
This chapter describes some considerations in developing FAVs to enhance the
possibility that people with disabilities may be able to take advantage of the tech-
nology. Before discussing the specifics of vehicle design as it relates to people with
disabilities, an overview of disabilities can be helpful (see also, this Handbook,
Chapter 10). There are two distinct philosophical views of disabilities: the medical
model and the social model (Family Voices, 2015). These philosophies may affect
individuals with disabilities and the design of vehicles.
syndrome, and depression) and personal and environmental factors (e.g., negative
attitudes, inaccessible transportation and public buildings, and limited social sup-
ports).” This definition emphasizes the importance of both the nature of the task and
the specific medical diagnosis to the experience of the disability. Given the almost
infinite number of medical diagnoses and tasks as well as the always-evolving nature
of work and technology, there has not been an attempt to provide an exhaustive clas-
sification of disabilities. The classification schema for the Paralympics (International
Paralympic Committee, n.d.) is a useful example of how much work would be
required. There are two steps to classification: (1) impairment eligibility and (2) sport
class classification.
For the Paralympics, there are ten different categories of impairment including
impaired muscle power (e.g., paralysis), impaired passive range of movement, limb
deficiency (e.g., amputees), leg length differences, short stature, hypertonia, ataxia,
athetosis, and visual impairment. However, what defines an impairment depends on
the sport. For example, the maximum height to be considered short stature, or the
maximum amount of muscle power to be considered impaired, differs among the
different sports (e.g., athletics, swimming, ...).
After a para-athlete is deemed eligible to compete in a specific sport, a clas-
sification panel determines what sport class the individual will compete in. These
sport classes have been created to ensure that the events are competitive. Para-
athletics (i.e., track and field) has the largest number of sport classes, at 52. This rather
large number is because of the variety of activities. These sport classes may divide
individuals based on the different levels of impairments (e.g., single versus double
amputee for certain field events or races) or could combine different impairments
(e.g., paralysis and amputation in wheelchair racing) into one sport class. In sum-
mary, what is an impairment depends on the sport in which one wants to compete.
This is all to say that categorization of disabilities can be problematic if the to-be-
performed tasks are not considered. For this reason, the universal design principles
presented in Section 17.4 do not mention specific disabilities, but rather speak to the
goals to be accomplished with the system design. Furthermore, this is what moti-
vated the task analysis demonstrating the differences between traditional vehicles
and FAVs that are presented in Section 17.5.
gradually over the years is continuously learning new strategies to adapt to the wors-
ening vision. As such, although these three individuals may ultimately have the same
visual acuity, they interact with the world in very different ways. When conducting
research with users with disabilities, researchers should consider the substantial
between-subjects variability among people with the same disability.
than on general interactions. To that end, North Carolina State University (NCSU),
funded by the National Institute for Disability & Rehabilitation (NIDR) under the
U.S. Department of Education, developed seven principles of Universal Design in
1997 (Connell et al., 1997). These principles are as follows:
1. Equitable use,
2. Flexibility in use,
3. Simple & intuitive use,
4. Perceptible information,
5. Tolerance for error,
6. Low physical effort, and
7. Size and space for approach and use.
The next sections describe these principles in brief. Interested readers are encour-
aged to review the original documents. In the original publications, Connell et al.
(1997) describe several guidelines to be used in the design process for each prin-
ciple. What is presented next is merely a paraphrase of each principle and the asso-
ciated design guidelines. In addition, an example of a design choice to illustrate a
potential implementation of the principle is provided for some of the principles.
on the task, or language skills. In the words of Steve Krug (2000) “Don’t make me
think!” Towards this end, information should be arranged consistently and in accor-
dance with its importance. Prompts and feedback should be designed to maximize
effectiveness for the user throughout his or her interaction with the technology. The
interface should promote an accurate mental model of the technology without unnec-
essary complexity (see also, this Handbook, Chapter 3). In the words of the French
novelist, Antoine de Saint-Exupery, “A designer knows he has achieved perfection
not when there is nothing left to add, but when there is nothing left to take away.”
starts the engine. The human disengages the parking brake, if it was set. The human
shifts the vehicle out of park and into the appropriate gear. The human monitors
the environment while maintaining lateral and longitudinal control of the vehicle.
The human also is in charge of wayfinding with or without assistance from Global
Positioning System (GPS) navigation. The human identifies a parking space near his/
her destination and parks the vehicle. After the vehicle is parked, the human gathers
his/her personal items and exits the vehicle. The human then closes/locks the doors
of the vehicle and travels from the vehicle to his/her destination.
TABLE 17.1
Summary of the Differences between the Two Task Flows (i.e., Non-FAVs and
Potential FAVs)

Pre-Trip
  Non-FAVs: Human travels to where s/he parked vehicle.
  Potential FAVs: Human negotiates with vehicle where and when s/he will be
  picked up.

Trip Initiation
  Non-FAVs: Human starts engine, puts car into appropriate gear, and presses
  accelerator.
  Potential FAVs: Vehicle engine is already started. Human tells the vehicle s/he is
  ready to begin trip. Vehicle then safely pulls into traffic.

Traveling
  Non-FAVs: Human maintains longitudinal and/or lateral control over vehicle,
  monitors environment, and navigates with or without the aid of GPS.
  Potential FAVs: Human indicates purpose of ride and negotiates drop-off
  location.1 Vehicle controls longitudinal and lateral control to meet the human's
  purpose and arrive at the designated drop-off location. Vehicle has the ability to
  communicate route and progress to human.

Trip Ending
  Non-FAVs: Human parks vehicle, exits vehicle, and makes way to destination.
  Potential FAVs: Vehicle arrives at drop-off location and alerts human. Human
  exits vehicle. Human indicates the vehicle can depart or vehicle senses it can
  depart. Human makes way to destination.

1 This indication could occur at any time before this, but this is the latest stage at
which this could occur.
these sources, which can provide more information than this chapter regarding
selecting and designing HMIs that are usable by people with disabilities. Based on
the hypothesized task flow, HMI is critical to accessibility. Another critical design
consideration is related to vehicle entry and exit. Specifically, the human and the
vehicle may have to negotiate where this will occur.
because of a shared language and knowledge. However, without shared language and
knowledge, the process could potentially be challenging. For example, some people
with certain disabilities (e.g., communication disabilities) or those who do not speak
the same language as the driver might find coordinating a stopping location chal-
lenging with a human driver.
For designers of FAVs, there will need to be careful consideration of the task
flow and HMI for the negotiation of stopping location, as the quality of both are
necessary to ensuring shared knowledge and adequate communication between the
FAV and the passenger. If asked to describe where a person can expect an FAV to
stop, people may say as close as possible to the origin/destination. However, the
shortest distance between the origin/destination and the FAV may have barriers that
pose challenges to an individual with a disability. For this reason, accessible park-
ing spaces (i.e., parking spaces reserved for individuals with a disability permit)
within the United States are required to be located on the shortest accessible path
to an accessible entrance (United States Access Board, 2016). An accessible path is
one that is free of barriers, most notably stairs and curbs, but also poles, trees, and
other objects that can reduce the width of the path. The closest parking spaces to
the door may not have an accessible path. If there is no acceptable path between the
space and the entrance, then the spaces would not be considered accessible spaces.
Moreover, if there are several parking spaces on different accessible paths, those
that are on the shortest path are the ones that should be designated as accessible
parking (United States Access Board, 2016). Collectively, this is not meant to indi-
cate that FAVs should ONLY be able to pick-up and drop-off in accessible parking
spaces. Rather, it is meant to illustrate the importance of a barrier-free path between
the vehicle and origin/destination when selecting a stopping location (Disability
Right Education and Defense Fund, 2018). What constitutes a barrier-free path var-
ies by individual. The following sections describe different potential barriers for
different disabilities (summarized in Table 17.2).
TABLE 17.2
Parameters Related to Vehicle Stopping Location
Important to People with Disabilities
Distance from curb cut-outs
Distance from (accessible) entrance
Incline of road
Distance from curb
Space needed for lift/ramp
Space needed for wheelchair to approach/exit lift/ramp
Presence of physical obstacles (e.g., poles, vehicles, puddles, etc.)
Side of street of origin/destination
Bike lanes between vehicular traffic and sidewalks
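To illustrate how the parameters in Table 17.2 might enter a stopping-location decision, the following is a minimal Python sketch that screens candidate stops against one rider's stated needs and prefers the shortest barrier-free path to the entrance. All field names and thresholds are illustrative assumptions, not a validated accessibility model.

```python
from dataclasses import dataclass

@dataclass
class StopCandidate:
    curb_cutout_dist_m: float        # distance from nearest curb cut-out
    entrance_dist_m: float           # distance from (accessible) entrance
    road_incline_pct: float          # incline of road at the stop
    ramp_clearance_m: float          # space available to deploy a lift/ramp
    barrier_free: bool               # no poles, puddles, bike-lane conflicts, etc.
    same_side_as_destination: bool   # avoids a street crossing

def acceptable(c: StopCandidate, needs_ramp: bool) -> bool:
    if not (c.barrier_free and c.same_side_as_destination):
        return False
    if needs_ramp and c.ramp_clearance_m < 2.4:   # illustrative clearance
        return False
    return c.road_incline_pct <= 5.0              # illustrative incline limit

def best_stop(candidates, needs_ramp=False):
    """Among acceptable stops, prefer the shortest path to the entrance."""
    ok = [c for c in candidates if acceptable(c, needs_ramp)]
    return min(ok, key=lambda c: c.entrance_dist_m) if ok else None
```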
the vehicle. Some individuals may prefer stopping near a curb cut-out depending
upon its distance from the ramp or entrance (SAE, 2019).
If neither of these results in a location that is acceptable for the passenger to ingress/
egress, it may be helpful if the passenger had the option to request the vehicle wait/circle
for an acceptable space to open.
circumstances or who may potentially have the ability to monitor the FAV (Disability
Right Education and Defense Fund, 2018). Also, these emergency contacts could
potentially help to provide additional information to the first responders.
The second method is to allow individuals to voluntarily provide information
with the known intent of being shared with a first responder in an emergency. This
information could be useful for paramedics or for police. As an example, some
people with special needs (e.g., some transplant recipients) have been told that they
should only be treated at hospitals with certain specializations. It is important for
the first responders to be aware of this information to ensure the person is taken to
the appropriate hospital. Similarly, knowledge of disabilities, particularly in terms
of communication impairments, autism, and mental health issues, may be useful to
emergency responders (Autism Society, n.d.; Center for Development and Disability,
n.d.; Perry & Carter-Long, 2016). Some may believe that in terms of these unusual
scenarios, we might use the same methods used today. However, we need to consider
that individuals today who cannot obtain a driver’s license because of a disability
are often traveling with other individuals who can communicate this information. In
contrast, in an FAV these individuals may be alone. There have been incidents where
individuals with disabilities were injured, because first responders were unaware of
their special needs (Perry & Carter-Long, 2016).
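As a hypothetical illustration of the second method, the sketch below shows a rider-supplied profile an FAV might relay to first responders. Every field name is an assumption introduced for illustration; a real system would also need to address consent, privacy, and data security, which are beyond the scope of this sketch.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EmergencyRiderProfile:
    """Rider-supplied information to be relayed to first responders."""
    emergency_contacts: List[str] = field(default_factory=list)
    preferred_hospital: Optional[str] = None    # e.g., a transplant center
    communication_needs: Optional[str] = None   # e.g., "nonverbal; uses text"
    other_notes: Optional[str] = None

    def responder_summary(self) -> str:
        parts = []
        if self.preferred_hospital:
            parts.append(f"Transport to: {self.preferred_hospital}")
        if self.communication_needs:
            parts.append(f"Communication: {self.communication_needs}")
        if self.emergency_contacts:
            parts.append(f"Contacts: {', '.join(self.emergency_contacts)}")
        if self.other_notes:
            parts.append(self.other_notes)
        return "; ".join(parts) or "No rider-supplied information."
```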
17.6 CONCLUSION
FAVs have the potential of making travel easier for people with disabilities
(Chang & Gouse, 2017; see also, this Handbook, Chapter 10). However, designers
of FAVs should consider a wide variety of disabilities (at the intersection of medical
condition and task) and find solutions within the various constraints (Chang &
Gouse, 2017). To that end, this chapter presented the framework of universal design
and how it could potentially be applied to FAVs. The primary tenet of the universal
design framework is that the designers of technology should consider people with
disabilities as part of the user group just as the designers would consider other indi-
vidual differences. Four areas related to FAVs were discussed in this chapter: HMI,
on- and off-boarding, emergency situations, and cabin design. With regard to the
first, there are numerous design standards created for other technologies that may
potentially be leveraged to make the HMI within an FAV accessible to people with
disabilities (also see Chapters 15 and 16 in this Handbook). Second, the on- and
off-boarding process is one aspect that is potentially unique to FAVs. As such, con-
siderations for various classes of disabilities related to this process were presented.
Third, considerations for the development of responses to emergencies were dis-
cussed. Fourth, the cabin design should accommodate those who use wheelchairs,
assistive mobility devices such as canes or walkers, and service animals. Finally,
designers of FAVs are encouraged to consider disabilities as just another class of
individual differences like height, nationality, gender, or handedness.
REFERENCES
Abraham, H. (2011). Rockslide Causes Accident on Route 28. Retrieved from https://pitts-
burgh.cbslocal.com/2011/08/23/rockslide-causes-accident-on-route-28/
American Automobile Association. (2017). Prevent Road Debris. Retrieved from https://
exchange.aaa.com/prevent-road-debris/#.W8FA9LmWzIV
Autism Society. (n.d.). Autism Information for Law Enforcement and Other First
Responders. Retrieved from www.autism-society.org/wp-content/uploads/2014/04/
Law_Enforcement_and_Other_First_Responders.pdf
Bertocci, G., Hobson, D., & Digges, K. (2000). Development of a wheelchair occupant injury
risk assessment method and its application in the investigation of wheelchair secure-
ment point influence on frontal crash safety. IEEE Transactions on Rehabilitation
Engineering, 8(1), 126–139.
Brinkley, J., Posadas, B., Woodward, J., & Gilbert, J. (2017). Opinions and preferences of
blind and low vision consumers regarding self-driving vehicles: Results of focus group
discussions. Proceedings of the 19th International ACM SIGACCESS Conference on
Computers and Accessibility (pp. 290–299). New York: ACM.
Brown, S. (1999). The Curb Ramps of Kalamazoo: Discovering Our Unrecorded History.
Retrieved from www.independentliving.org/docs3/brown99a.html
Bureau of Transportation Statistics. (2003). Transportation Difficulties Keep over Half a Million
Disabled at Home. BTS Issue Brief No. 3. Retrieved from www.bts.gov/sites/bts.dot.gov/
files/legacy/publications/special_reports_and_issue_briefs/issue_briefs/number_
Center for Development and Disability. (n.d.). Tips for First Responders. Retrieved from
https://srcity.org/DocumentCenter/View/2218/Tips-for-First-Responders-PDF?bidId
Chang, A. & Gouse, W. (2017). Accessible Automated Driving System Dedicated Vehicles.
Retrieved from www.sae.org/standardsdev/news/mobility_benefits.htm
Chapman, L. (2018). What Do Self-Driving Vehicles Mean for Disabled Travelers. Retrieved
from www.disabled-world.com/disability/transport/autonomous-vehicles.php
Connell, B., Jones, M., Mace, R., Mueller, J., Mullick, A., Oostroff, E., … Vanderheiden, G.
(1997). The Principles of Universal Design Version 2.0. Retrieved from https://projects.
ncsu.edu/design/cud/about_ud/udprinciplestext.htm
Disability Right Education and Defense Fund. (2018). Fully Accessible Autonomous
Vehicles Checklist: Working Draft. Berkeley, CA: Disability Right Education and
Defense Fund.
Family Voices Kids As Self Advocates. (2015). Medical Model vs. Social Model. Retrieved
from www.fvkasa.org/resources/files/history-model.php
Grier, R. A. (2015). Situation awareness in command and control. In R. R. Hoffman, P. A.
Hancock, M. W. Scerbo, R. Parasuraman, & J. L. Szalma, The Cambridge Handbook
of Applied Perception Research (pp. 891–911). Cambridge, UK: Cambridge University
Press.
Huggett, E. J. (2009). Driving Safely. Retrieved from http://lowvision.preventblindness.
org/2009/08/25/driving-safely/
International Paralympic Committee. (n.d.). Classification. Retrieved from www.paralympic.
org/classification
Krug, S. (2000). Don’t Make Me Think! A Common Sense Approach to Web Usability.
Indianapolis, IN: New Riders Publishing.
Marshall, A. & Davies, A. (2018). Waymo’s Self-Driving Car Crash in Arizona
Revives Tough Questions. Retrieved from www.wired.com/story/waymo-crash-self-
driving-google-arizona/
National Center for Rural Road Safety. (2016). Will Technology Bring an End to Wildlife
Collisions? Retrieved from https://ruralsafetycenter.org/uncategorized/2016/will-
technology-bring-an-end-to-wildlife-collisions/
National Disability Authority. (2014). 3 Case Studies on UD. Retrieved from http://univer-
saldesign.ie/What-is-Universal-Design/Case-studies-and-examples/3-case-studies-on-
UD/#oxo
National Highway Traffic Safety Administration. (2015). Critical Reasons for
Crashes Investigated in the National Motor Vehicle Crash Causation Survey
(DOT HS 812 115). Retrieved from https://crashstats.nhtsa.dot.gov/Api/Public/
ViewPublication/812115
New York Times. (1995). Americans Too Tall or Short for Russian Space Program. Retrieved
from www.nytimes.com/1995/11/10/us/americans-too-tall-or-short-for-russian-space-
program.html
Niven, R. (2012). How People with Disabilities Inspire Cutting Edge Technology. Retrieved
from www.channel4.com/news/gadgets-inspired-by-people-with-disabilities
Perry, D. M. & Carter-Long, L. (2016). The Ruderman White Paper on Media Coverage
of Law Enforcement Use of Force and Disability. Boston, MA: Ruderman Family
Foundation.
Schaupp, G., Seeanner, J., Jenkins, C., Manganelli, J., Henessy, S., Truesdail, C., … Brooks, J.
(2016). Wheelchair Users’ Ingress/Egress Strategies While Transferring Into and Out
of a Vehicle. SAE. doi:10.4271/2016-01-1433
SAE. (1999a). Structural Modification for Personally Licensed Vehicles to Meet the
Transportation Needs of Persons with Disabilities (J1725_199911). Warrendale, PA:
SAE International.
SAE. (1999b). Design Considerations for Wheelchair Lifts for Entry to or Exit from a
Personally Licensed Vehicle (J2093). Warrendale, PA: SAE International.
SAE. (1999c). Wheelchair Tiedown and Occupant Restraint Systems for Use in Motor
Vehicles (J2249). Warrendale, PA: SAE International.
SAE. (2016). Taxonomy and Definitions for Terms Related to Driving Automation Systems
for On Road Motor Vehicles (J3016 201609). Warrendale, PA: SAE International.
SAE. (2019). Identifying Automated Driving Systems - Dedicated Vehicles (ADS-DV)
Passenger Issues for Persons with Disabilities (J3171). Warrendale, PA: SAE
International.
United States Access Board. (2016). Parking Spaces. Retrieved from www.access-
board.gov/guidelines-and-standards/buildings-and-sites/about-the-ada-standards/
guide-to-the-ada-standards/chapter-5-parking
Van Roosmalen, L., Ritchie Orton, N., & Schneider, L. (2013). Safety, usability, and inde-
pendence for wheelchair-seated drivers and front-row passengers of private vehicles: A
qualitative research study. Journal of Rehabilitation Research & Development, 50(2),
239–252.
Vautier, L. (2014). Universal design: What is it and why does it matter? LIANZA Conference.
Auckland, New Zealand. Retrieved from https://lianza.org.nz/sites/default/files/
Vautier_L_Universal_Design.pdf
Walker, J. (2019). The Self-Driving Car Timeline - Predictions from the Top 11 Global
Automakers. Retrieved from https://emerj.com/ai-adoption-timelines/self-driving-car-
timeline-themselves-top-11-automakers/
World Health Organization. (2016). Disability and Health. Retrieved from www.who.int/
mediacentre/factsheets/fs352/en/
18 Importance of Training for
Automated, Connected,
and Intelligent
Vehicle Systems
Alexandria M. Noble and Sheila Garness Klauer
Virginia Tech Transportation Institute
Michael P. Manser
Texas A&M Transportation Institute
CONTENTS
Key Points .............................................................................................................. 396
18.1 Introduction .................................................................................................. 396
18.2 Training Overview ........................................................................................ 398
18.2.1 Mental Models .................................................................................. 399
18.2.2 Consumer Protection ........................................................................ 400
18.2.3 Goals of Training for ACIV Systems ............................................... 400
18.2.3.1 Increased Improvement in Safety ...................................... 401
18.2.3.2 Appropriate Levels of Trust ............................................... 401
18.2.3.3 User Acceptance ................................................................ 402
18.3 Training Content for ACIV Systems............................................................. 402
18.3.1 System Purpose and Associated Risks of Use.................................. 403
18.3.2 Operational Design Domain ............................................................ 403
18.3.3 Monitoring the Road and the System ............................................... 404
18.4 Andragogical Considerations for ACIV Systems Training........................... 404
18.4.1 Driver-Related Factors ...................................................................... 405
18.4.1.1 Motivation .......................................................................... 405
18.4.1.2 Readiness to Learn ............................................................ 405
18.4.1.3 Affect—Anxiety Toward Technology .............................. 406
KEY POINTS
• Training is a critical component as the vehicle fleet begins its transi-
tion through the levels of automation and the driving task fundamentally
changes with each technological advancement.
• When designing driver training for ACIV systems, the learner and the sys-
tem must both be taken into consideration.
• Effective training must be supported by all stakeholders, including car
companies, state and federal governments, academics, and traffic safety
organizations.
• Intelligent HMI design and comprehensive and informative training should
work as part of a coordinated effort to fully realize the safety benefits of
ACIV systems.
18.1 INTRODUCTION
The mere idea of automated, connected, and intelligent vehicles (ACIVs) conjures
up visions in which our vehicles cater to our every transportation need. A person
walks out of their house and immediately steps into a waiting vehicle, indicates
their destination, and the vehicle quietly and effortlessly moves along free-
ways, suburban roadways, and local streets to deliver the person to work, a grocery
store, or any desired destination. In the process, the vehicle exchanges information
with other vehicles, the infrastructure, and the cloud, allowing for safe and efficient
transportation. In this situation, the vehicle is, in fact, the driver. The person is only
a passenger. A more realistic vision of ACIVs is one in which the vehicle and person
form a partnership. In this approach, either the person or vehicle can be the “driver”
that is essentially in control, but more commonly, both entities work together to
sense the environment, control the vehicle, and avoid crashes. Given that humans
are still an essential element of this current vision of ACIVs, it is easy to predict
that safety will continue to be a significant concern. It was estimated that in 2016,
approximately 94% of serious crashes were attributable to human error, including
errors related to distraction, impairment, or drowsiness (National Highway Traffic
Safety Administration, 2018). These statistics suggest that human-related factors
will continue to require attention as the partnership between human and vehicle
further develops. Indeed, the development and deployment of ACIV systems, such
as advanced driver assistance systems (ADAS; sometimes referred to as driver
support features—SAE Levels 0–2, automated driving features—Levels 3–5, and
active safety features, e.g., automatic emergency braking [AEB]) that help to control
vehicle acceleration, vehicle deceleration, and lane position, offer the potential to
improve safety by relieving the human driver of tasks, particularly those which they
are prone to performing with errors. It is in this situation of shared vehicle control
that safety needs to be addressed.
For the last several decades, there has been an increasing focus on the techno-
logical development of ACIVs (Barfield & Dingus, 1997; Fancher et al., 1998; Tellis
et al., 2016) in a shared control context to achieve the visions outlined earlier. Efforts
in this area include the Automated Highway System projects in the 1990s (Congress,
1994); connected vehicle communications—vehicle to vehicle and vehicle to
infrastructure—in the 2000s (Ahmed-Zaid et al., 2014; Bettisworth et al., 2015); and
partially and fully automated vehicle technologies more recently (Blanco et al., 2015;
Rau & Blanco, 2014; Tellis et al., 2016; Trimble, Bishop, Morgan, & Blanco, 2014). A
basic premise within the human factors profession is that human–machine interface
(HMI) systems should be intuitive and easy to use, both of which are a byproduct of
good HMI design and standardization. In the absence of these factors, there is a criti-
cal need to provide training to drivers. However, there has been relatively little work
examining the efficacy of training and learner-centered HMI design to positively
impact the safety of ACIV systems, which is likely a critical factor in promoting
a safe person/vehicle partnership (see Chapter 15 for a more complete discussion
of HMI design considerations for ACIV systems which are largely independent of
learner-centered concerns).
A generally accepted definition of training, particularly relevant within the context
of this work, is that it is a continuous and systematic process that teaches individuals
a new skill or behavior to accomplish a specific task (Salas, Wilson, Priest, & Guthrie,
2006). This definition is particularly relevant to the domain of ACIVs because driv-
ers must become proficient in a wide variety of tasks required to operate these tech-
nologies. However, the scope of the definition focuses solely on tasks and fails to
acknowledge that the use of ACIV systems requires proficiency in tasks that rely on
and interact with driving environments (e.g., lane markings that support lane-keeping
assist [LKA] systems). Therefore, a more recent training definition may be more
applicable to ACIVs. This definition describes training as the “systematic acquisi-
tion of knowledge, skills, and attitudes that together lead to improved performance
in a specific environment” (Grossman & Salas, 2011). A successful training method
will impart the necessary knowledge, skills, and attitudes related to the partnership
between ACIVs and humans to promote improvements in safe driving.
This chapter will address the learner-centered design of training for ACIVs. We
recognize the potential need for ACIV driver training across a wide range of top-
ics, including sensor operation/capabilities and the limitations of the operational
design domain (ODD); however, the state of driver training relative to ACIVs has
not been examined extensively, so we have instead focused on training approaches
which have received some research attention. This chapter summarizes the general
topic of learner-centered training for ACIVs and provides information on specific
training-related factors. The first section, Training Overview, will briefly discuss the
importance of training, including how a person–vehicle partnership can be enhanced
through training, while the second section addresses the critical question of what
content should be included in training protocols for ACIV systems. The third sec-
tion, titled Andragogical Considerations for ACIV Systems Training, will identify
both driver and non-driver related key factors that should be considered in the devel-
opment of training protocols. The fourth section provides a review of both current
and future protocols that could be employed to train people on the use of ACIV
systems. The chapter concludes with a series of recommendations for ACIV systems
training and training practices. This chapter will serve as a foundation for driver
training stakeholders, technology developers, consumers, and legislatures to address
the growing need to include relevant and effective training for ACIV systems as
these technologies are developed and deployed.
18.2 TRAINING OVERVIEW
The prevalence of new vehicles equipped with ADAS has steadily increased over
the last decade. Readers are directed to SAE J3016 (2018) for descriptions of ADAS
technologies' relationship to SAE automation Levels 0–5 (also, this Handbook,
Chapters 1, 2). Some ADASs assist drivers in the lateral and longitudinal control of the
vehicle within certain ODDs, while others provide warnings and information about the
surrounding road environment. ADAS assistance in these fundamental tasks can help
a driver by performing control functions and providing alerts about the presence of
another vehicle or object. Despite the potential benefits associated with their use, many
ADAS system capabilities and limitations are misunderstood by drivers (Larsson,
2012; McDonald, Carney, & McGehee, 2018; McDonald et al., 2015, 2016; McDonald,
Reyes, Roe, & McGehee, 2017; Rudin-Brown & Parker, 2004).
As we extend the consideration of the state of the art with ADAS to the future
with ACIV systems, we can see that many of these technologies share functional
similarities among manufacturers; however, there are subtle differences in capa-
bilities that are crucial for the users to understand. For example, a vehicle that has
an LKA system will control the vehicle by oscillating between the lane lines on
the right and left, through intermittent steering input, while a lane-centering sys-
tem will attempt to center the vehicle in the lane through continuous steering input.
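To make this distinction concrete, the following is a minimal Python sketch contrasting the two control behaviors; the gains, thresholds, and function names are illustrative assumptions rather than any manufacturer's implementation.

```python
def lka_steering(lateral_offset_m, half_lane_m=1.8, nudge_rad=0.05):
    """Lane-keeping assist: intermittent correction applied only when the
    vehicle drifts close to a lane line."""
    if abs(lateral_offset_m) < 0.8 * half_lane_m:
        return 0.0                                 # well inside lane: no input
    return -nudge_rad if lateral_offset_m > 0 else nudge_rad

def lane_centering_steering(lateral_offset_m, gain=0.1):
    """Lane centering: continuous proportional correction toward the center."""
    return -gain * lateral_offset_m
```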
Regardless of the fidelity of the lane-keeping system, or any Level 1 or 2 driver
support feature, the driver is responsible for knowing and understanding the sys-
tem’s ODD. However, users’ perceptions of the subtasks of the dynamic driving task
that they are required to perform versus those that the driving automation system
will perform are dependent on their perception and understanding of the system
that consumers will fill on their own, potentially with inaccurate information or
beliefs derived from limited exposure or observation.
Mental models can be carefully formed and structured through training
(Wickens, Hollands, Banbury, & Parasuraman, 2013). Using training combined with
intuitive design practices can decrease variability in mental models among users.
Furthermore, with increasing levels of automation and variability of system capa-
bilities, increased levels of training are required (Sarter, Woods, & Billings, 1997).
Additional training will improve users’ mental models to help ensure that vehicle
operators are fully aware of their role and breadth of responsibilities while operating
their vehicle on public roads.
Second, they should create appropriate levels of trust, and third, they should increase
user acceptance of the benefits of the technology. These three goals are discussed in
more detail below.
18.2.3.3 User Acceptance
A critical concept relative to user acceptance is that users’ attitudes and behavior
have a bidirectional, multi-level relationship. Stated plainly, attitudes influence use
and use influences attitudes (see also, this Handbook, Chapter 5). As drivers use
ACIV systems, driving performance will likely improve as long as the systems are
well-designed and the driver has a reasonable understanding of the system’s ODD
(see also, this Handbook, Chapter 12 for a discussion of behavioral adaptation to
technologies).
Davis (1989, 1993) identified the lack of user acceptance as an impediment to the
success of new information systems. The Technology Acceptance Model developed
by Davis specifies the causal relationship between system design features, perceived
usefulness, perceived ease of use, attitude toward usage, and actual usage behavior.
Ghazizadeh, Lee, and Boyle (2012) extended that model to create the Automation
Acceptance Model, which includes the explicit additions of trust and compatibility
as well as a feedback loop that continues to inform user trust, perceived ease of use,
compatibility, perceived usefulness, and behavioral intention to use. The Automation
Acceptance Model captures the importance of experience and trial-and-error learn-
ing. Understanding the benefits of ACIV systems shapes the social norms regarding
the use of these systems on the road. These norms may influence individual driver’s
perceptions of ACIVs and, ultimately, their use decisions (Ghazizadeh et al., 2012).
Given that these are safety-critical limitations, users of these driver support and
active safety features should have an understanding of when these systems can fail to
operate and conditions under which it would be prudent to temporarily suspend sys-
tem use. Better understanding could be achieved through the use of proper training.
The AAA Foundation for Traffic Safety in collaboration with the Massachusetts
Institute of Technology AgeLab has developed a data-driven system to review and
rate the effectiveness of new in-vehicle technologies that aim to improve safety
(Mehler et al., 2014). This review focuses on legacy systems, such as Electronic
Stability Control, and advanced features, such as Adaptive Cruise Control (ACC),
Adaptive Headlights, Back-Up Cameras, FCW, Forward Collision Mitigation, and
LDW. In addition to developing a rating system that considers the potential and dem-
onstrated benefits offered by these technologies, the research team stated that while
some systems require little or no familiarity with the technology to derive benefit,
others have a steep learning curve (Mehler et al., 2014).
18.4 ANDRAGOGICAL CONSIDERATIONS
FOR ACIV SYSTEMS TRAINING
One of the standard pedagogical models of education assigns full responsibility to
the instructor for making decisions about what will be learned, how it will be learned,
when it will be learned, and whether it has been learned. The learner in this peda-
gogical model is a passive participant in their own education. Pedagogical methods
are often improperly implemented for adult learners, whose intellectual aspirations
are least likely to be aroused by the uncompromising requirements of authorita-
tive, conventional institutions of learning (Lindeman, 1926). As individuals mature,
their need and ability to self-direct, leverage experience, identify readiness to learn,
and organize their learning around life problems increase (Knowles, Holton III, &
Swanson, 2005). Andragogical models bring into focus additional learner charac-
teristics that should be considered in the development of training for ACIV systems
(Knowles, 1979; Knowles et al., 2005). For vehicle automation, the training structure
should focus on both near- and long-term improvements in driving performance as
well as improved understanding of driver responsibilities and sustained understand-
ing of vehicle system capabilities and limitations.
18.4.1.1 Motivation
The motivational aspects of training design cover many theoretical concepts,
including attribution theory, equity theory, locus of control, expectancy theory,
need for achievement, and goal setting (Patrick, 1992). Trainee motivation can be
influenced by individual characteristics as well as the characteristics of the training
itself (Coultas, Grossman, & Salas, 2012). The temporal divisions of motivation
were described by Quiñones (2003) as having an effect on (1) whether an indi-
vidual decides to attend training in the first place, (2) the amount of effort exerted
during the training session, and (3) the application of skills after training. Zhang, Hajiseyedjavadi, Wang, Samuel, and Qu (2018) found that training transfer for hazard anticipation and attention maintenance occurred only in drivers who were considered careful (e.g., low in sensation seeking and aggressiveness). Due to its
multi-faceted nature and the temporal inconsistencies within and between trainees,
the consideration of factors affecting motivation requires significant attention when
designing training programs.
Trainees who are motivated to learn are also more likely to demonstrate positive training transfer (Alliger, Tannenbaum, Bennett Jr., Traver, & Shotland, 1997; Baumgartel, Reynolds, & Pathan, 1984; Rogers, 1951).
Ivancic and Hesketh (2000) identified another element of training content that influences performance when they investigated the effect of guided error training on driving skill and confidence in a driving simulator. In error training, learners are shown examples of errors that they themselves make in a test of their knowledge and skills, along with solutions for overcoming these errors. One group of participants was given feedback and training on their own errors while driving (error training); another group was shown examples of errors in general, not their own errors, while driving (guided error training). Error training led to better performance in both near and far transfer-of-training evaluations. Moreover, error training reduced drivers' self-confidence in their driving skill at the end of the training when compared with the group that received errorless training. This suggests, importantly, that training can improve performance without inflating confidence. However, error training may not be appropriate for all prospective users, especially those who already demonstrate low confidence in using vehicle automation.
18.5 TRAINING PROTOCOLS
Current training methods for in-vehicle technologies show that while many of the
strategies implemented by individual vehicle owners may suit some characteristics
of adult learners (e.g., motivated to seek knowledge, task-oriented, need for self-
direction), the current paradigm of consumer education does not provide sufficient
motivation for the learner to search for a deeper understanding of the material.
Several consumer-preferred methods for learning to use in-vehicle technologies were
identified by Abraham, Reimer, Seppelt, Fitzgerald, and Coughlin (2017). These
methods are discussed in the following sub-sections.
Generally, experience can improve performance, but learning through trial and
error can waste time, may lead to a less-than-ideal solution, or may never result in a
solution at all. Trial and error can also result in negative effects, such as increased
frustration, reduced learner motivation, discontinued use of the system, or mental
model recalibration through potentially dangerous experiences. Learning by doing is
not a homogenous process. A study by Pereira, Beggiato, and Petzoldt (2015) found that drivers took different lengths of time to master the use of ACC. Furthermore, Larsson (2012) conducted a survey of 130 ACC users. The results indicated that drivers need to be especially attentive in precisely those situations to which, during conventional driving, they would not need to attend; the system may not be self-explanatory enough for a strictly trial-and-error based approach. Larsson's survey results also indicated that as drivers gained experience using the ACC system, they became more aware of the system's limitations. However, other studies have shown that safety-critical misunderstandings of system limitations are resilient and can persist over time (Kyriakidis et al., 2015; Llaneras, 2006). Tversky and Kahneman (1971) argued that people tend to place undue confidence in the stability of observed patterns, resulting in misunderstandings that are not corrected when relying solely on trial-and-error learning methods.
18.5.3 Demonstration
Demonstration, also known as behavior modeling, produces a new pattern of behavior in a learner/trainee who observes a model (e.g., an experienced user or professional) performing the task to be learned. The trainee is encouraged to rehearse and practice the model's behavior, and feedback is provided as the trainee refines their behavior to more closely approximate that of the observed model. This training style is grounded in social and developmental psychology, centered on the research of Bandura (1977), who argued that by observing model behavior, people can develop a cognitive representation (a mental model), which can then be used to guide future behavior. In theory, observational learning is well suited to situations in which early errors are viewed as problematic and visual guidance could provide a means to reduce the frequency and severity of errors. This makes behavior modeling an ideal way to learn how to perform simple tasks, such as system activation, which may result in performance deficiencies if learned by trial and error.
One limitation of the demonstration method is that it does little to deepen knowledge of the system, and the learning it produces appears to be fairly superficial. Findings from McDonald et al. (2017) showed that ride-along demonstrations were ultimately no better than self-study of the owner's manual: knowledge gains occurred across all training types, with no statistically significant differences among them.
The behavioral model is also assumed to exhibit desirable behaviors and have
accurate information. When demonstration takes place in automotive dealerships,
sales people are assumed to have been trained on how to use ACIV systems and have
accurate knowledge about their use. Unfortunately, this assumption is not always
accurate. Abraham, McAnulty, Mehler, and Reimer (2017) conducted a study inves-
tigating sales employees from six vehicle dealerships in the Boston, MA area associ-
ated with six major vehicle manufacturers. They found that many sales people lacked
a strong understanding of ACIV systems. It was also revealed that the training the
employees received was meager, consisting mostly of web-based modules with very
little (if any) hands-on experience. Furthermore, two of the sixteen employees with
whom the researchers interacted gave explicitly wrong safety-critical information
regarding the systems.
The use of feedback for novice driver training has been shown to improve teen driving safety as well as to reduce the frequency of risky driving behaviors (Klauer et al., 2017; Peek-Asa, Hamann, Reyes, & McGehee, 2016). Personalized feedback coupled with active
practice was also shown to be superior to passive learning methods with no feed-
back when assessing older drivers' scanning behaviors at intersections (Romoser,
Pollatsek, Fisher, & Williams, 2013). Using real-time feedback for adults to assist
them in learning/understanding ACIV systems could be a useful technique; however,
additional research is needed on this topic.
If it can be determined that driver performance deficiencies are attributable
to a lack of skill or knowledge, then an immediate training intervention after the
first occurrence of the undesired behavior in situ may help to correct the behavior.
However, if undesirable safety-related performance deficiencies cannot be attributed
to a lack of skill or knowledge, then the solution does not lie in training, but in
the application of salient differential consequences (Boldovici, 1992; Mager & Pipe,
1970). One example is the deactivation of the Tesla Autosteer system for non-compliance with hands-on-wheel warnings (Tesla Inc., 2018).
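To make such differential consequences concrete, the sketch below implements a generic escalating hands-off-warning policy in Python. The stages, thresholds, and strike rule are hypothetical illustrations of the general approach described in owner's manuals, not the actual logic of Tesla's or any other production system.

# Hedged sketch of an escalating hands-on-wheel warning policy.
# Stages, thresholds, and the strike rule are hypothetical
# illustrations, not the logic of any production system.

def hands_off_response(seconds_hands_off: float, prior_strikes: int) -> str:
    """Return the system action for a continuous hands-off interval."""
    if prior_strikes >= 3:
        # Salient differential consequence for repeated non-compliance.
        return "feature locked out for the remainder of the drive"
    if seconds_hands_off < 15:
        return "no action"
    if seconds_hands_off < 30:
        return "visual warning"
    if seconds_hands_off < 45:
        return "visual + auditory warning"
    return "strike recorded; automation disengages"

for t in (10, 20, 40, 60):
    print(t, "->", hands_off_response(t, prior_strikes=0))
print("after 3 strikes ->", hands_off_response(10, prior_strikes=3))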
18.6 RECOMMENDATIONS
Training drivers is critical to the successful deployment of driver support features
from low to high levels of automation. As discussed in this chapter, the high vari-
ability of drivers on our roadways and the competing strengths and limitations of
current ACIV systems present challenges that transportation safety researchers
must address. Given these challenges, there are several recommendations listed
below.
First, the areas of training should remain dynamic as ACIV systems continue to
develop, new data become available, and new skills become necessary. For example,
new training requirements could arise from a driver's increased exposure to (and familiarity with) particular ACIV systems, software updates, or the behavioral adaptation
of non-system users.
Second, considerations for the training of system users should be included as a
key point in the design cycle of these new systems. Training programs should be
subject to proper evaluation and assessment to ensure that learning outcomes are
achieved and that no unintended consequences are introduced by the program.
Third, in-vehicle driver monitoring systems may be an important option to con-
sider for ACIV system training (see also, this Handbook, Chapter 11). Campbell et al.
(2018) discussed the use of driver monitoring systems to avoid out-of-the-loop prob-
lems. Salinger (2018) discussed a driver monitoring system that presented multi-modal
signals to capture drivers’ attention and return focus back to the control or monitoring
loop. Another approach to driver monitoring is to periodically provide a message to
the driver. The National Transportation Safety Board has recommended implementing
driver monitoring systems (National Transportation Safety Board, 2017b).
Fourth, traffic safety professionals need to develop effective training guidelines and
procedures for ACIV systems. Currently, the California Department of Motor Vehicles
requires that training programs for drivers who test such systems in public include
familiarization with the automated driving system technology; basic technical training
regarding the system concept, capabilities, and limitations; ride-along demonstrations
by an experienced test driver; and subsequent behind-the-wheel training (Nowakowski,
Shladover, Chan, & Tan, 2014). Perhaps another worthwhile endeavor in the near term
would be to add some measure of advanced vehicle system components to future itera-
tions of the basic knowledge test for the standard licensing requirement, similar to
what was implemented in at least twenty states and the District of Columbia for dis-
tracted driving as of 2013 (Governors Highway Safety Association, 2013).
Finally, amending statutory and regulatory definitions of applicable terms (e.g., driver, vehicle) and reviewing and adapting existing rules regarding vehicle operation may remain a persistent challenge until policy makers are well versed in the subject matter. Educating all entities on the need to accept and implement universal terms and definitions will itself be an implementation challenge (American Association of Motor Vehicle Administrators, 2018). This
is one reason why communication between researchers and legislators must be clear
and concise so that, in the event legislation is required, it is based on science and not
on other implicit or explicit biases. Furthermore, all stakeholders are encouraged to
communicate with one another on the most effective ways to train novice and expe-
rienced drivers on ACIV systems. Educational materials that are developed should
be proven effective and understood by the general motoring public.
ACKNOWLEDGMENTS
This chapter is based in part on a Safe-D UTC report by the authors, supported by the Safety through Disruption (Safe-D) National UTC, a grant from the U.S. Department of Transportation's University Transportation Centers Program (Federal Grant Number: 69A3551747115).
REFERENCES
Abraham, H., McAnulty, H., Mehler, B., & Reimer, B. (2017). Case study of today’s auto-
motive dealerships: Introduction and delivery of advanced driver assistance systems.
Transportation Research Record, 2660, 7–14. doi:10.3141/2660-02
Abraham, H., Reimer, B., Seppelt, B., Fitzgerald, C., & Coughlin, J. F. (2017). Consumer
Interest in Automation: Preliminary Observations Exploring a Year’s Change.
Cambridge, MA: MIT Agelab.
Abraham, H., Seppelt, B., Mehler, B., & Reimer, B. (2017). What’s in a name: Vehicle tech-
nology branding and consumer expectations for automation. Proceedings of the 9th
International Conference on Automotive User Interfaces and Interactive Vehicular
Applications - AutomotiveUI ’17. doi:10.1145/3122986.3123018
Ahmed-Zaid, F., Krishnan, H., Vladimerou, V., Brovold, S., Cunningham, A., Goudy,
R., … Viray, R. (2014). Vehicle-to-Vehicle Safety System and Vehicle Build for Safety
Pilot (V2V-SP) Final Report. Washington, DC: National Highway Traffic Safety
Administration.
Alhakami, A. S. & Slovic, P. (1994). A psychological study of the inverse relationship
between perceived risk and perceived benefit. Risk Analysis, 14(6), 1085–1096.
doi:10.1111/j.1539-6924.1994.tb00080.x
Alliger, G. M., Tannenbaum, S. I., Bennett, W., Jr., Traver, H., & Shotland, A. (1997). A meta-
analysis of the relations among training criteria. Personnel Psychology, 50(2), 341–358.
doi:10.1111/j.1744-6570.1997.tb00911.x
American Association of Motor Vehicle Administrators. (2018). Jurisdictional Guidelines
for the Safe Testing and Deployment of Highly Automated Vehicles. Arlington, VA:
AAMVA.
Anderson, J. M., Kalra, N., Stanley, K. D., Sorensen, P., Samaras, C., & Oluwatola, O. A.
(2014). Autonomous Vehicle Technology: A Guide for Policymakers. Santa Monica,
CA: Rand Corporation.
Anderson, J. R., Reder, L. M., & Simon, H. A. (1996). Situated learning and education.
Educational Researcher, 25(4), 5–11. doi:10.3102/0013189X025004005
Annett, J. (1961). The Role of Knowledge of Results in Learning: A Survey. Retrieved from
https://apps.dtic.mil/docs/citations/AD0262937
Bandura, A. (1977). Social Learning Theory (1st ed.). Englewood Cliffs, NJ: Prentice-Hall.
Banks, V. A., Eriksson, A., O'Donoghue, J., & Stanton, N. A. (2017). Is partially automated
driving a bad idea? Observations from an on-road study. Applied Ergonomics, 68,
138–145. doi:10.1016/j.apergo.2017.11.010
Barfield, W. & Dingus, T. A. (1997). Human Factors in Intelligent Transportation Systems.
New York: Psychology Press.
Baumgartel, H. J., Reynolds, J. I., & Pathan, R. Z. (1984). How personality and organisational
climate variables moderate the effectiveness of management development programmes:
A review and some recent research findings. Management & Labour Studies, 9(1), 1–16.
Benson, A. J., Tefft, B. C., Svancara, A. M., & Horrey, W. J. (2018). Potential Reductions
in Crashes, Injuries, and Deaths from Large-Scale Deployment of Advanced Driver
Assistance Systems (Research Brief). Washington, DC: AAA Foundation for Traffic
Safety.
Bettisworth, C., Burt, M., Chachich, A., Harrington, R., Hassol, J., Kim, A., … Ritter,
G. (2015). Status of the Dedicated Short-Range Communications Technology and
Applications: Report to Congress (FHWA-JPO-15–218). Washington, DC: United
States Department of Transportation.
Blanco, M., Atwood, J., Vasquez, H. M., Trimble, T. E., Fitchett, V. L., Radlbeck, J., …
Morgan, J. F. (2015). Human Factors Evaluation of Level 2 and Level 3 Automated
Driving Concepts (DOT HS 812 182). Washington, DC: National Highway Traffic
Safety Administration.
Boldovici, J. A. (1992). Toward a Theory of Adaptive Training. Alexandria, VA: U.S. Army
Research Institute for the Behavioral and Social Sciences.
Brockman, R. (1992). Writing Better Computer User Documentation: From Paper to Online
(2nd ed.). New York: John Wiley & Sons.
Burke, L. A. & Hutchins, H. M. (2007). Training transfer: An integrative literature review.
Human Resource Development Review, 6(3), 263–296. doi:10.1177/1534484307303035
Campbell, J. L., Brown, J. L., Graving, J. S., Richard, C. M., Lichty, M. G., Bacon, L. P., …
Sanquist, T. (2018). Human Factors Design Principles for Level 2 and Level 3
Automated Driving Concepts (DOT HS 812 555). Washington, DC: National Highway
Traffic Safety Administration.
Casner, S. M. & Hutchins, E. L. (2019). What do we tell the drivers? Toward minimum driver
training standards for partially automated cars. Journal of Cognitive Engineering and
Decision Making, 13(2), 55–66. doi:10.1177/1555343419830901
Cialdini, R. B. (2007). Influence - The Psychology of Persuasion. New York: Harper-Collins.
Cohen, A., Smith, M. J., & Anger, W. K. (1979). Self-protective measures against workplace
hazards. Journal of Safety Research, 11(3), 121–131.
Colquitt, J. A., Lepine, J. A., & Noe, R. A. (2000). Toward an integrative theory of training
motivation: A meta-analytic path analysis of 20 years of research. Journal of Applied
Psychology, 85(5), 678–707. doi:10.1037/0021-9010.85.5.678
Congress, N. (1994). The automated highway system: An idea whose time has come. Public
Roads, 58(1), 1–7.
Coultas, C. W., Grossman, R., & Salas, E. (2012). Design, delivery, evaluation, and transfer of
training systems. In G. Salvendy (Ed.), Handbook of Human Factors and Ergonomics
(4th ed., pp. 490–533). New York: John Wiley & Sons.
Council of State Governments. (2016). State Laws on Autonomous Vehicles. Retrieved from
http://knowledgecenter.csg.org/kc/system/files/CR_automomous.pdf
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of infor-
mation. MIS Quarterly, 13. Retrieved from www.jstor.org/stable/pdf/249008.pdf?refre
qid=excelsior%3A59df6413a1047bd694b4c69a7eaf559e
Davis, F. D. (1993). User acceptance of information technology: System characteristics, user
perceptions and behavioral impacts. International Journal of Man-Machine Studies,
38(3), 475–487. doi:10.1006/imms.1993.1022
Dewey, J. (1933). How We Think: A Restatement of the Relation of Reflective Thinking to the
Educative Process. Boston: DC Heath.
Driscoll, M. P. (2000). Psychology of Learning for Instruction. Needham Heights, MA:
Allyn & Bacon.
Eichelberger, A. H. & McCartt, A. T. (2016). Toyota drivers’ experiences with Dynamic
Radar Cruise Control, Pre-Collision System, and Lane-Keeping Assist. Journal of
Safety Research, 56, 67–73. https://doi.org/10.1016/j.jsr.2015.12.002
Fancher, P., Ervin, R., Sayer, J., Hagan, M. R., Bogard, S., Bareket, Z., … Haugen, J. (1998).
Intelligent Cruise Control Field Operational Test (DOT HS 808 849). Washington,
D.C.: National Highway Traffic Safety Administration.
Federal Aviation Administration. (2009). Aviation Instructor’s Handbook. Retrieved from
www.faa.gov/regulations_policies/handbooks_manuals/aviation/aviation_instructors_
handbook/media/faa-h-8083-9a.pdf
Federation of American Scientists. (2005). Harnessing the Power of Video Games for
Learning. Retrieved from https://fas.org/programs/ltp/policy_and_publications/summit/Summit on Educational Games.pdf
Finucane, M., Alhakami, A., Slovic, P., & Johnson, S. M. (2000). The affect heuristic in
judgements of risks and benefits. Journal of Behavioral Decision Making, 13, 1–17.
Fitts, P. M. (1962). Factors in complex skill training. In R. Glaser (Ed.), Training Research
and Education (pp. 177–197). New York: Dover Publications.
Gagné, R. M. (1965). The Conditions of Learning (3rd ed.). New York: Holt, Rinehart and Winston.
Gee, J. P. (2003). What video games have to teach us about learning and literacy. Computers
in Entertainment, 1(1), 20.
Ghazizadeh, M., Lee, J. D., & Boyle, L. N. (2012). Extending the technology acceptance
model to assess automation. Cognition, Technology & Work, 14(1), 39–49. doi:10.1007/s10111-011-0194-3
Gibson, J. J. (1977). The theory of affordances. In R. Shaw & J. Bransford (Eds.), Perceiving, Acting, and Knowing. Hillsdale, NJ: Lawrence Erlbaum.
Glancy, D. J., Peterson, R. W., & Graham, K. F. (2015). A Look at the Legal Environment for
Driverless Vehicles. Washington, DC: Transportation Research Board.
Godfrey, S. S., Allender, L., Laughery, K. R., & Smith, V. L. (1983). Warning messages:
Will the consumer bother to look? Proceedings of the Human Factors and Ergonomics
Society Annual Meeting, 27(11), 950–954.
Goodman, N. (1984). Of Mind and Other Matters. Cambridge, MA: Harvard University Press.
Governors Highway Safety Association. (2013). 2013 Distracted Driving: Survey of the
States. Washington, DC: GHSA.
Grossman, R. & Salas, E. (2011). The transfer of training: What really matters. International
Journal of Training and Development, 15(2), 103–120. doi:10.1111/j.1468-2419.2011.00373.x
Hendrickson, B. C., Biehler, A., & Mashayekh, Y. (2014). Connected and Autonomous
Vehicles 2040 Vision. Harrisburg, PA: Pennsylvania Department of Transportation.
Howard, D. & Dai, D. (2014). Public perceptions of self-driving cars: The case of Berkeley,
California. Proceedings of the Transportation Research Board Annual Meeting.
Washington, DC: Transportation Research Board.
Hughes, J. S., Rice, S., Trafimow, D., & Clayton, K. (2009). The automated cockpit: A com-
parison of attitudes towards human and automated pilots. Transportation Research Part
F: Traffic Psychology and Behaviour, 12(5), 428–439. doi:10.1016/j.trf.2009.08.004
Ivancic, K. & Hesketh, B. (2000). Learning from errors in a driving simulation: Effects on driving
skill and self-confidence. Ergonomics, 43(12), 1966–1984. doi:10.1080/00140130050201427
Jonassen, D. H. (1995). Operationalizing mental models: Strategies for assessing mental
models to support meaningful learning and design-supportive learning environments.
The First International Conference on Computer Support for Collaborative Learning -
CSCL ’95, 182–186. doi:10.3115/222020.222166
McDonald, A. B., Reyes, M., Roe, C., & McGehee, D. V. (2017). Driver understanding of
ADAS and evolving consumer education. 25th International Technical Conference
on the Enhanced Safety of Vehicles. Retrieved from http://indexsmart.mirasmart.
com/25esv/PDFfiles/25ESV-000373.pdf
Mehlenbacher, B., Wogalter, M. S., & Laughery, K. R. (2002). On the reading of product
owner’s manuals: Perceptions and product complexity. Proceedings of the Human
Factors and Ergonomics Society Annual Meeting, 46(6), 730–734.
Mehler, B., Reimer, B., Lavallière, M., Dobres, J., & Coughlin, J. F.
(2014). Evaluating Technologies Relevant to the Enhancement of Driver Safety.
Washington, DC: AAA Foundation for Traffic Safety.
Merrill, M. D. (2002). First principles of instruction. Educational Technology Research and
Development, 50(3), 43–59. doi:10.1007/BF02505024
Merritt, S. M. (2011). Affective processes in human−automation interactions. Human Factors,
53(4), 356–370. doi:10.1177/0018720811411912
Merritt, S. M., Heimbaugh, H., LaChapell, J., & Lee, D. (2013). I trust it, but I don’t know
why: Effects of implicit attitudes toward automation on trust in an automated system.
Human Factors, 55(3), 520–534. doi:10.1177/0018720812465081
Mosier, K. L., Skitka, L. J., Burdick, M. D., & Heers, S. T. (1996). Automation bias, account-
ability, and verification behaviors. Proceedings of the Human Factors and Ergonomics
Society Annual Meeting, 40(4), 204–208.
Mosier, K. L., Skitka, L. J., Heers, S., & Burdick, M. (1998). Automation bias: Decision
making and performance in high-tech cockpits. International Journal of Aviation
Psychology, 8(1), 47–63.
National Highway Traffic Safety Administration. (2018). Traffic Safety Facts - 2016 Data.
Washington, DC: National Highway Traffic Safety Administration.
National Transportation Safety Board. (2017a). Driver Errors, Overreliance on Automation,
Lack of Safeguards, Led to Fatal Tesla Crash. Retrieved from www.ntsb.gov/news/
press-releases/Pages/PR20170912.aspx
National Transportation Safety Board. (2017b). Highway Accident Report Collision Between
a Car Operating With Automated Vehicle Control Systems and a Tractor-Semitrailer
Truck. Retrieved from www.ntsb.gov/investigations/AccidentReports/Reports/
HAR1702.pdf
Noe, R. A. (1986). Trainees' attributes and attitudes: Neglected influences on training effec-
tiveness. The Academy of Management Review, 11(4), 736–749.
Norman, D. A. (2013). The Design of Everyday Things. New York: Basic Books.
Nowakowski, C., Shladover, S. E., Chan, C.-Y., & Tan, H.-S. (2014). Development of
California regulations to govern testing and operation of automated driving systems.
Proceedings of the Transportation Research Board Annual Meeting. Washington, DC:
Transportation Research Board.
Parasuraman, R. & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse.
Human Factors, 39(2), 230–253. doi:10.1518/001872097778543886
Patrick, J. (1992). Training: Research and Practice. London: Academic Press.
Peek-Asa, C., Hamann, C., Reyes, M., & McGehee, D. (2016). A randomised trial to
improve novice driving. Injury Prevention, 22(Suppl 2), A120.3-A121. doi:10.1136/
injuryprev-2016-042156.329
Pereira, M., Beggiato, M., & Petzoldt, T. (2015). Use of adaptive cruise control functions
on motorways and urban roads: Changes over time in an on-road study. Applied
Ergonomics, 50, 105–112. doi:10.1016/j.apergo.2015.03.002
Prensky, M. (2001). Digital game-based learning. Computers in Entertainment, 1(1), 21.
Quiñones, M. A. (1995). Pretraining context effects: Training assignment as feedback.
Journal of Applied Psychology, 80(2), 226–238.
Skitka, L. J., Mosier, K. L., & Burdick, M. D. (2000). Accountability and automation
bias. International Journal of Human-Computer Studies, 52, 701–717. doi:10.1006/
ijhc.1999.0349
Slovic, P., Finucane, M. L., Peters, E., & Macgregor, D. G. (2006). The affect heuristic.
European Journal of Operational Research, 177(3), 1333–1352.
Smith-Jentsch, K. A., Jentsch, F. G., Payne, S. C., & Salas, E. (1996). Can pretraining experi-
ences explain individual differences in learning? Journal of Applied Psychology, 81(1),
110–116. doi:10.1037/0021-9010.81.1.110
Spitzer, D. R. (1984). Why training fails. Performance & Instruction Journal & Instruction
Journal, 23(7), 6–10. doi:10.1002/pfi.4150230704
Tannenbaum, S. I., Mathieu, J. E., Salas, E., & Cannon-Bowers, J. A. (1991). Meeting train-
ees’ expectations: The influence of training fulfillment on the development of commit-
ment, self-efficacy, and motivation. Journal of Applied Psychology, 76(6), 759–769.
doi:10.1037/0021-9010.76.6.759
Tellis, L., Engelman, G., Christensen, A., Cunningham, A., Debouk, R., Egawa, K., … Kiger,
S. (2016). Automated Vehicle Research for Enhanced Safety (DTNH22–05-H-01277).
Washington, DC: National Highway Traffic Safety Administration.
Tesla Inc. (2018). Tesla Model S Owner's Manual. Retrieved from www.tesla.com/sites/
default/files/model_s_owners_manual_north_america_en_us.pdf
Thorndike, E. L. (1913). Educational Psychology, Vol 2: The Psychology of Learning.
New York: Teachers College.
Tough, A. (1971). The Adult’s Learning Projects. Retrieved from www.infed.org/thinkers/
et-knowl.htm
Trimble, T. E., Bishop, R., Morgan, J. F., & Blanco, M. (2014). Human Factors Evaluation of
Level 2 and Level 3 Automated Driving Concepts: Past Research, State of Automation
Technology, and Emerging System Concepts (DOT HS 812 043). Washington, DC:
National Highway Traffic Safety Administration.
Tversky, A. & Kahneman, D. (1971). Belief in the law of small numbers. Psychological
Bulletin, 76(2), 105–110.
U.S. Department of Transportation. (2016). Federal Automated Vehicles Policy. Washington,
DC: Department of Transportation.
van Merriënboer, J. J. G. & Kirschner, P. A. (2017). Ten Steps to Complex Learning (3rd ed.).
New York: Routledge.
Victor, T. W., Tivesten, E., Gustavsson, P., Johansson, J., Sangberg, F., & Ljung Aust, M.
(2018). Automation expectation mismatch: Incorrect prediction despite eyes on threat
and hands on wheel. Human Factors, 60(8), 1095–1116. doi:10.1177/0018720818788164
Webster, J. & Martocchio, J. J. (1993). Turning work into play: Implications for microcom-
puter software training. Journal of Management, 19(1), 127–146.
Wickens, C. D., Hollands, J. G., Banbury, S., & Parasuraman, R. (2013). Engineering
Psychology and Human Performance (4th ed.). New York: Psychology Press.
Wogalter, M. S., Brelsford, J. W., Desaulniers, D. R., & Laughery, K. R. (1991). Consumer
product warnings: The role of hazard perception. Journal of Safety Research, 22(2),
71–82. doi:10.1016/0022-4375(91)90015-N
Wright, P., Creighton, P., & Threlfall, S. (1982). Some factors determining when instructions
will be read. Ergonomics, 25(3), 225–237. doi:10.1080/00140138208924943
Zhang, T., Hajiseyedjavadi, F., Wang, Y., Samuel, S., & Qu, X. (2018). Training interventions
are only effective on careful drivers, not careless drivers. Transportation Research Part
F: Psychology and Behaviour, 58, 693–707. doi:10.1016/j.trf.2018.07.004
19 Connected Vehicles in
a Connected World
A Sociotechnical
Systems Perspective
Ian Y. Noy
Independent
CONTENTS
Key Points............................................................................................................... 421
19.1 Introduction ................................................................................................ 422
19.1.1 Cyber-Physical Systems ................................................................. 422
19.1.2 Internet of Things ........................................................................... 423
19.2 Benefits of CVs ........................................................................................... 423
19.3 Limitations of Current Approaches ............................................................ 425
19.4 The CV as an STS ..................................................................................... 428
19.5 Modeling the CV as an STS........................................................................ 429
19.6 Actors in AD Are STSs .............................................................................. 432
19.7 The RTS as a System of Systems ............................................................... 433
19.8 CVs in a Connected World ......................................................................... 436
19.9 Challenges and Opportunities .................................................................... 436
19.10 Conclusion .................................................................................................. 437
References............................................................................................................... 438
KEY POINTS
• Connected vehicles have a great potential to improve safety, mobility, and
sustainability
• Current engineering efforts focus on narrow cyber-physical and communi-
cation technologies
• Important human factors and sociotechnical issues are not being adequately
addressed
• The interdependencies of the various actors within a connected road trans-
portation system (such as all connected cars, traffic control centers, service
providers, original equipment manufacturers (OEMs), etc.) are critical to
emergent properties such as safety and performance
19.1 INTRODUCTION
We stand today at the cusp of an explosive surge of cyber-physical systems1 (CPS),
spurred by the anticipated capability of 5th generation wireless systems (5G) to deliver
astonishing communication speed and bandwidth. It is projected that by 2020, there
will be 30 billion connected devices (IDC, 2014). The next generation of the Internet
of Things (IoT) and the evolving vision of the connected world will undoubtedly
disrupt the traditional paradigm of the automobile as an independent agent, if not the
entire road transportation enterprise as we know it. Indeed, the exponential advance-
ment of robotic and communication technologies may result in a future transportation system that undergoes ever more frequent transformations, potentially revolutionizing the very social structure of society by altering why or how we travel.
This chapter considers connected vehicles (CV) within the broader context of
the connected world2 (aka, cyber-physical society) that is rapidly emerging. First,
we briefly describe the principal technological enablers of CV, namely CPS and IoT.
1 Devices that are controlled or monitored by computer-based algorithms, tightly integrated with the
Internet and its users.
2 The connected world, or cyber-physical society, refers to the totality of prevalent cyber-physical
systems.
We distinguish between CV, i.e., the physical platform for on-board CPS pro-
cesses (including communication hardware and software), and automated driving7
(AD), the activity of driving in which at least some aspects of the dynamic driving
task occur without driver input. Although the USDOT recently expanded its defini-
tion of “driver” to include a computer system, the term “driver” in this chapter will
refer exclusively to a human driver. In effect, AD involves the CV8 with occupants
(and driver, if there is one) going somewhere. Since AD connotes a broader involve-
ment of entities beyond the physical vehicles, we use the acronym AD to represent
the collective entities involved in carrying out the dynamic driving task unless we
specifically reference the vehicle itself. The impetus for this chapter is the belief that far more functionality and benefit can be realized if the CV of the future interacts effectively with other vehicles, both connected and traditional, as well as with other road users and the road system infrastructure; in short, if it integrates with all other entities that it encounters directly or remotely.
For the above to occur, it is not enough for CVs to exchange location and veloc-
ity vectors with other CVs, as seems to be the narrow focus of current automotive
advances. Rather, CVs should communicate intention, adhere to harmonized deci-
sion rules, and engage other entities in negotiating strategic and tactical interactions.
In addition, it seems paramount to broaden the engineering effort to take into account
the myriad of ways in which the vehicle will connect to non-driving-related systems in
the home, the workplace, the city and beyond. As we argue below, the problem space
should be broader than the vehicle. It should even be broader than the transportation
system per se. Ideally, AD should be aligned with broader societal goals because of
its enormous potential to impact future society. Amaldi and Smoker (2013) identify
the need to develop automation policy to address rationale, future plans and strategies
with a view towards defining the impacts on socioeconomic goals such as employ-
ment, land use, and social structures. In particular, they point out that currently we
lack the “methods or even methodological principles needed to gather and organize
knowledge [towards] the construction of [automation] policies” (p.2, italics added).
We explore briefly what developing automation policy might entail. To do so, we are
guided by systems theoretic considerations in which the system is the entire ecosystem
of which the vehicle is a part. Such an approach will hopefully result in increased sys-
tem resilience and safety through reduced potential for confusion, human–automation
error, unintended consequences, and foreseeable misuses. System approaches also facil-
itate analyses of security vulnerabilities. Given the paucity of engineering, psychologi-
cal, and sociological effort being directed at integrating the CV within the broader
society,9 it seems important to draw attention to the need to consider the bigger picture.
7 AD denotes the phenomenon whereby driving is accomplished with no or limited human involvement.
It enables smart mobility through the use of automation technologies, comprising the universe of CVs
and supporting infrastructure. Sometimes AD and CV are used interchangeably, but they are concep-
tually different (see this Handbook, Chapter 1).
8 Strictly speaking, AD does not necessarily include CV if automation is achieved completely through
on-board intelligence that does not entail V2V or V2X communication. However, a configuration that
does not involve some level of real-time communication to cooperate or coordinate with other entities
seems highly unlikely given current engineering trends.
9 To be sure, a number of papers have discussed ethical and broader societal issues raised by AD, but
they are few in number in relation to the literature devoted to advancing technical capability.
Some studies suggest that increased travel and vehicle capacity on some highways may actually increase costs associated with congestion, emissions, and sprawl. In another study, Wadud, MacKenzie, and Leiby (2016) found that automation reduced greenhouse gas (GHG) emissions and energy consumption at low levels of automation but significantly increased both at full automation. A related concern is the difficulty of managing
traffic from origin to destination. For example, if highway throughput is increased
through platooning and other means, large numbers of vehicles may converge on
common destination points and create bottlenecks and delays that may be significantly worse than those commonly experienced today.
A comprehensive review of the relevant literature that questions the espoused
benefits of AD is beyond the scope of this chapter. It is sufficient for our purpose to
point out that the benefits are, in many cases, speculative, if not exaggerated, and
largely based on a projection of future technology capabilities superimposed over
the current landscape of the transportation system. Studies that question the assump-
tions underlying optimistic projections of AD benefits raise important questions that
we ignore at our peril. The potential of AD to dramatically improve transportation
safety and productivity is not in question. The goal of this chapter is to help prevent
unintended consequences and maximize the positive benefits of AD by broadening
the problem space to the sociotechnical level. Experience with economic sectors that involve complex, high-hazard operations yet remarkably few failures, such as aerospace and the nuclear power industry, suggests that risk management must be based
on a comprehensive understanding of potential threats arising from system complex-
ity and dynamic behavior (Leveson, Dulac, Marais, & Carroll, 2009).
It is widely accepted that the future transportation system will likely undergo
frequent transformations or be in a constant state of flux. Yet, projections of future
benefits are incorrectly based on current functional models of the road transpor-
tation system (RTS). Consider, for example, the USDOT report on the benefits of
CVs (Chang et al., 2015), which derived its estimates from field demonstrations and
analytical methods of V2I applications developed in four USDOT CV research pro-
grams. These projections are based on current deployments but ignore the potential
effects of driver adaptation or altered future conditions and transportation models.
This is of concern because the rules of the road, physical road infrastructure, driver
behavioral norms, or even the role of transportation in society are likely to change
dramatically. Yet, these topics are not receiving the attention needed to integrate
CVs into the evolving intelligent transportation system. It seems clear that to exploit
the benefits afforded by the new technologies, we should not constrain innovation
by mimicking the existing system or functional models. But to do otherwise would
require establishing a common vision of the future and a shared understanding of
how to get there. We must begin by acknowledging that this is a daunting challenge,
given competing interests and approaches. Yet, planning for a future transportation
system characterized by uncertainty and a constantly evolving connected world is a
challenge we must address. Just as one would not design a building without contex-
tual reference to land use, available infrastructure services, soil and environmental
conditions, weather, and a host of other design-critical factors, it behooves us to think
more broadly about mobility in the context of the connected world. Clearly, the chal-
lenge is ever more daunting because the degrees of freedom increase dramatically
with increased uncertainty and system complexity.10 One approach to reducing com-
plexity is to deliberately reduce the degrees of freedom by imposing operational
constraints, as in the case of a railroad in which the tracks are fixed and the environ-
ment is controlled (e.g., people movers in airports). However, such an approach is
antithetical to the overall goal of AD, namely increased mobility, freedom, flexibility,
and connectivity.
Not only should developments in AD be more congruent with the broader context
of the connected world, they should be aligned with important societal needs. For
example, most of the current research and development related to AD targets healthy,
relatively ambulatory users. However, Hancock et al. (2020) point out that drivers
who stand to benefit most include teenage and elderly drivers. Moreover, perhaps the
greatest societal need may well be to extend mobility to the segment of the popula-
tion that is currently not well served by automobiles (e.g., the physically disabled,
visually impaired, elderly, young). The potential for AD to improve the quality of life
for underserved or vulnerable populations is exciting and offers promising new value
to society. Overviews of the challenges faced by disabled and older drivers have
been previously identified (TRB, 2004), but to date they have not been adequately
addressed. To realize the extraordinary potential of AD, however, would require
that far greater effort be directed towards overcoming existing barriers to mobility,
including addressing mundane practical problems such as accessibility to the vehicle
in different environments, egress/ingress, and driver–vehicle interaction, which are
currently not being adequately considered (Hancock et al., 2020).
AD should be considered within the broadest possible context, by which we mean a sociotechnical systems (STS) perspective. As Leveson (2011) points out, analyzing safety in complex systems using chain-of-events models or reductionist approaches fails to uncover human errors arising from social and organizational factors, the effects of adaptation in which systems migrate towards unsafe conditions, or system-level failures that are not associated with component failures. Examples of the latter abound in the safety literature. Berk (2009) provides
several examples of system failures in which no single component failed (e.g., a laser
targeting system and a municipal water treatment system). In these cases, the failures
were attributed to components that worked as designed but interacted in ways that
were not anticipated and, in the event, created unsafe conditions.
The crash of Pacific Western Airlines Flight 314 on February 11, 1978, which killed 42 people on board, is another example of an STS design that failed to address plausible unsafe conditions. The pilot attempted to abort the landing when he saw a snowplow on the runway after the plane had touched down. The plane had sufficient time to take off again and avoid the snowplow, but the failsafe mechanism prevented retraction of the thrust reversers and the plane crashed moments later. The failsafe on the thrust reverser system worked precisely as designed, preventing retraction while the gear was on the tarmac. The ensuing accident investigation determined that, among other factors, a calculation error on the part of air traffic control (ATC) contributed to the accident.
10 While there are more formal definitions of complexity, the analogy offered by Michael Lewis in
Flash Boys is memorable (Lewis, 2017). “A car key is simple, a car is complicated, a car in traffic is
complex.”
CVs that operate under certain highway or weather conditions, such as those with Traffic Jam Pilot, will mature in time. At the current state of the art, however, it is difficult to conceive of a fully automated vehicle operating all of the time in the natural environment, given the complexities and uncertainties of existing road networks and use patterns.
Nevertheless, there is a widespread misunderstanding of the critical distinction
between full automation that is active part of the time and full automation that is
active all of the time (also see, this Handbook, Chapter 2). Most often, terms such as
“autonomous vehicles” and “self-driving vehicles” conjure visions of vehicles that
operate fully automatically all of the time, but this vision does not correspond to the
reality of the foreseeable future. The incongruence between the idyllic future state
(fully automated, all of the time) and the intermediate state (partially automated,
part of the time, or fully automated, part of the time) serves to mask consideration of
important risks that will inevitably arise as technology evolves towards greater auto-
mation. The distinction between part of the time and all of the time is central to the
need for the application of STS approaches. In the former instance, there is a defined
role for a human driver, while in the latter instance there is no need for a driver at all.
The distinction is critical because the need for a driver, even as a hands-off process
supervisor, raises human factors considerations around topics such as the adequacy
of situational awareness (also see, this Handbook, Chapter 7), driver expectations
(which are informed by a driver’s mental model; this Handbook, Chapter 3), driver
alertness (Chapters 6, 9), transfer of control (Chapters 7–10), and skill acquisition
(Chapter 18). Thus, the presence of a human in a partially automated system raises
concerns about interactions between social and technical components of dynamic,
complex CVs.11 During the transition to full automation, which may take several
decades, there will be additional related complications due to the mixture of auto-
mated and traditional vehicles in the RTS. Indeed, some experts question whether
there will ever be a time when it would be safe for drivers to relinquish control to
CVs (Nunes, Reimer, & Coughlin, 2018).
The human element will thus be a factor for several decades to come and repre-
sent the core challenge for both existing and emerging configurations of AD. Fully
autonomous vehicles in which there is no possibility whatsoever for human interven-
tion may pose significant STS challenges that are beyond the scope of this chapter.
11 Of course, it makes no sense to speak of STS in the context of a fully automated vehicle that operates
with no human interaction.
The STS model depicted in Figure 19.2 comprises the universe of human and machine elements that work in concert to carry out a particular mission or set of related functions.
While the STS subsumes all elements of the system, the element that is most vari-
able and least understood is the social sub-system (involving human–machine and
human–human interactions of operators, consumers, managers, or policy makers)
and its interface with the technical sub-system (comprising cyber-physical and IoT
technologies). It should be noted that while the sociotechnical literature mentions
other sub-systems such as organizational sub-systems and economic sub-systems,
for simplicity, we regard the social sub-system to include macrolevel social influ-
ences such as organizational structure, culture, processes, and management.
For further clarity, we regard a CV that is fully automated to be essentially a
CPS. If, however, there is a driver present, it becomes an STS due to influences
that arise from the human element and her attendant interactions with the other
system elements.
STS concepts have evolved since the early 1950s to advance our understanding
of the interactions and interdependencies among people and machines in order to
overcome the limitations of reductionist approaches (i.e., approaches based on delin-
eating the system into its component parts and analyzing individual components
to isolate and remove root vulnerabilities). Reductionist approaches fail to identify
systemic issues that do not directly result from component failure modes. STS theory
provides the hypothesis-oriented basis for identifying vulnerabilities associated with
nonlinear interactions that occur across time and space among the various com-
ponents, which is characteristic of complex adaptive systems. It establishes meth-
odologies for joint optimization in which the social and technical sub-systems are
considered concurrently to resolve design tradeoffs and establish the highest achiev-
able system performance and human well-being.
Like other complex adaptive systems, AD derives three key attributes from com-
plexity theory: (1) it is non-deterministic, (2) it cannot be easily decomposed into func-
tional components, and (3) it has emergent properties that reflect self-organization
and dynamic interactions (Pavard & Dugdale, 2006). For example, workplace safety
is a system attribute that emerges from interactive and interdependent activities that
occur within the sociotechnical context of the work system and that cannot be local-
ized within the system structure (Carayon, Hancock, Leveson, Noy, Sznelwar, & Van
Hootegem, 2015). For a more complete overview of STS and safety, refer to Carayon
et al. (2015) and Noy et al. (2015).
Although Figure 19.2 depicts an STS as comprising both social and technical
sub-systems, a more analytic depiction of STS identifies the relevant hierarchical
levels of the system, each level having both social and technical components, start-
ing at the top with public policy and culminating in individual processes. Each level
exerts control over the level directly below, all of which ultimately impinge on the
sharp end of system design (Rasmussen & Svedung, 2000). Figure 19.3, based on
the work of Rasmussen and Svedung (2000), illustrates the hierarchy of STS levels
(layers) associated with CV, from government regulation to successively lower social
levels and finally to the process level (the vehicle platform with embedded electron-
ics). Information from a given layer in the hierarchy is transmitted upwards to the
next level and that information informs appropriate responses in the form of direc-
tion, corrective actions, or procedures. For example, OEM experience with exist-
ing vehicles helps inform further connectivity research and development. Similarly,
the CV safety experience and crash reports may be used by regulators to revise or
establish new regulations. At the top layer, public opinion helps establish legisla-
tive agendas and priorities. The layers are thus connected through feedback control
loops in which each loop behaves as an adaptive control element with input from
the adjacent layer operating both as feedback signal and as parameter attunement
(Flach, Carroll, Dainoff, & Hamilton, 2015). Information about operational expe-
rience is fed upwards and control actions, such as goals, policies, constraints, and
commands are transmitted downward. Ideally, actions at each level are informed by
diverse disciplines (as indicated in Figure 19.3), which contribute unique insights and
collaborative value. Taken together, feedback control loops along the hierarchy influ-
ence the overall functioning of the system. Deficiencies in the design or operation of
the system can arise from suboptimal control at any point in the hierarchy that can
in turn lead to unsafe behavior of individual components and interactions among the
various components.
FIGURE 19.3 A simplified view of a CV as an STS. Shown are the hierarchy of layers, research disciplines that provide unique insights, and examples of control loop outcomes.
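The layered feedback structure just described lends itself to a compact sketch. The Python below is purely illustrative: the layer names loosely follow Figure 19.3, and the messages are invented examples of feedback and control actions.

# Minimal sketch of hierarchical control in an STS: operational
# feedback flows upward; control actions flow downward. Layer names
# and messages are illustrative assumptions.

layers = ["regulators", "OEMs", "fleet operations", "vehicle process"]  # top to bottom

def feedback_up(observation: str) -> list:
    # The sharp end (last layer) reports; each higher layer receives in turn.
    return [f"{layer} receives: {observation}" for layer in reversed(layers[:-1])]

def control_down(directive: str) -> list:
    # The top layer directs; each lower layer enacts the control action.
    return [f"{layer} enacts: {directive}" for layer in layers[1:]]

for msg in feedback_up("crash reports implicate mode confusion"):
    print(msg)
for msg in control_down("revise interface requirements for mode indication"):
    print(msg)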
The hierarchical model of STS was further developed by Leveson (2011) for prac-
tical application in analyzing safety in system development as well as system opera-
tions. She realized that the control loops that are in play during the development of
the STS differ somewhat from the control loops that exist during the operational life
of the STS once it is deployed. Leveson (2013) applied systems theory to develop the Systems-Theoretic Accident Model and Processes (STAMP) as a tool for identifying system-level safety failures, along with its derivative hazard analysis technique, Systems-Theoretic Process Analysis (STPA). STPA is used in accident investiga-
tions to identify design-induced errors that fail to take account of human cognition,
social, organizational, and management factors. Its core value is in depicting safety
as a dynamic control problem rather than a component failure problem. That is,
an error could arise at a given instance in time from the confluence of component
conditions and performance vectors rather than from a failure of any one or more
individual components. Because it identifies interdependencies among components
that are amenable to further analyses and re-engineering, STPA has general applica-
tion beyond accident investigation.
13 A traffic signal per se is not an actor since it has no independent mission and does not include a
social element. It may, however, be a component in the traffic control STS if there are operators
involved.
FIGURE 19.4 (See color insert.) STS Star Model of an RTS. (Adapted from Noy et al., 2018.)
In this model, the central node exercises executive power in making decisions,
coordinating activities, and managing communications. For a centralized model to
function properly and realize the full safety potential of AD, manufacturers, gov-
ernments and research organizations need to collaborate in developing a common
human-centric STS architecture, at a pre-competitive level.14 The architecture needs
to be more detailed than a topology15 in that it should prescribe and harmonize/
coordinate compatible decision algorithms, priorities, communication protocols,
contingencies, etc. This means that industry and government should jointly develop
an underlying strategy to ensure the consistency, reliability and functional interoper-
ability of CVs.
Another possible topology is a decentralized network, for example a bus network,
in which an actor can hook onto the common linear link without having to subscribe
to a particular architecture provided it embodies a compatible connection protocol
and adheres to certain principles, for example, that it does not disturb other actors.
This is a self-organizing system (since actors can join and alter overall
system performance) in that it can adapt to changing circumstances and has suf-
ficient flexibility to expand and transform the network to reflect new technologies.
This topology too requires the establishment of rules of engagement, but there is no
centralized oversight.
Figure 19.5 is an example of six actors connected to the bus network representing
the RTS. In general, the RTS comprises a set of actors organized in hierarchical levels
that include both social and technical sub-systems. The generic individual actors (or
STSs) can represent CVs, service providers, traffic control centers, and other STSs
14 Pre-competitive refers to fundamental principles of operation to which all OEMs can subscribe.
Competition among OEMs occurs with selection and bundling functions as well as human-machine
interface features.
15 Schematic description of how connecting lines and nodes are arranged in a network. Common net-
works include bus, ring, star, tree, and mesh topologies.
identify the key elements. Flach et al. (2015) discuss two examples (nuclear power
and the limited service food industry) to illustrate how communications, defined as
integration of information, and decisions, defined as controls to correct deviations
from safety goals, impact the safety of STS. Both examples (one, a relatively simple
system; the other, highly complex) can be described in terms of dynamical systems
frameworks that provide useful approaches to assessing the fit between organiza-
tional structures and work demands. Methods for studying the interdependencies
between system components might employ modeling and simulation to help iden-
tify unanticipated consequences of system design decisions, etc. (Hettinger, Kirlik,
Goh, & Buckle, 2015).
Since the human element is an important source of variation in system func-
tion, a concerted effort should be directed towards the social sub-system to address
traditional human–system integration considerations and interactions across multi-
layered STS control loops as well as across external interactions with interconnected
actors. An approach to delineate the role of humans in complex dynamic systems
was created at MIT (France, 2017) to generate and analyze unsafe control actions
(UCAs). Four categories of UCAs were used to demonstrate the application of the
method to an Automated Park Assist system and its utility in identifying potential
failure modes.
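The four categories are not reproduced above, but STPA conventionally frames unsafe control actions in four ways (Leveson, 2011). The following is a minimal sketch assuming that standard formulation rather than France's (2017) exact wording; the Park Assist failure mode shown is hypothetical:

```python
from enum import Enum

class UnsafeControlAction(Enum):
    """The four canonical UCA categories of STPA (after Leveson, 2011)."""
    NOT_PROVIDED = "a required control action is not provided, causing a hazard"
    PROVIDED = "a control action is provided when it creates a hazard"
    WRONG_TIMING_OR_ORDER = "a control action is provided too early, too late, or out of sequence"
    WRONG_DURATION = "a control action is stopped too soon or applied too long"

# Hypothetical Automated Park Assist failure mode, tagged with its UCA category:
failure_mode = ("steering command issued after the parking gap has been passed",
                UnsafeControlAction.WRONG_TIMING_OR_ORDER)
```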
Efforts to extend sociotechnical analytic approaches are a step in the right direction,
but a great deal more needs to be done. For example, DeKort (2018) advocates for the
creation of a top-down scenario matrix to be used in simulation tests and algorithm
development. The matrix must include every object (moving, fixed, environmental,
laws/social cues) as well as degraded versions of the objects. The number of scenarios
involving variations of interacting objects quickly becomes too great to test on the
road. Moreover, deep learning systems need to encounter crashes, or near crashes,
to learn avoidance strategies, which makes on-the-road testing inherently unethical.
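A toy calculation shows how quickly such a matrix explodes; the object classes and variant counts below are illustrative assumptions, not figures from DeKort (2018):

```python
from math import prod

# Illustrative object classes and counts of variants (including degraded versions).
variants_per_object = {
    "pedestrian": 12, "cyclist": 8, "other_vehicle": 20,
    "road_marking": 6, "weather": 10, "traffic_law_cue": 5,
}

# Fixing one variant per object class, the scenario count is the product of the counts:
scenarios = prod(variants_per_object.values())
print(f"{scenarios:,}")  # 576,000 -- before allowing multiple objects per class to interact
```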
While conducting exhaustive analyses of STS safety is not yet possible for lack of
appropriate methodology, the perspectives presented in this chapter might at the very
least provide a useful framework for identifying the most problematic aspects of
component integration so that they can be addressed early in design and/or
implementation activities.
19.10 CONCLUSION
The STS approach focuses on the RTS as a whole system, not merely its parts. It
assumes that important attributes of system safety and performance can only be
treated adequately in their entirety, taking into account all facets relating the social
to the technical aspects. These system-level properties derive from the interdepen-
dencies and interactions among the parts of systems (Leveson, 2011). While the STS
approach has heretofore been primarily applied to large organizations involving
many people (industry, health care, military), STS-theoretical concepts can be read-
ily applied to AD in view of the fact that it is a complex adaptive system that can be
affected by complex nonlinear dynamic interactions across time and space among
a large number of components within the entirety of the system. Thus, a systems-
theoretic approach is more likely to address mobility goals than the engineering
design of any given component or connective technology.
The main contribution of the STS approach to AD is that it views mobility and
safety as system-level outcomes, and it can establish an analytic framework for eval-
uating overall benefits that can inform design choices. Often, system failures occur
because of incorrect assumptions on the part of one or more actors. For example, a
failure of a traffic signal can cause a CV to run what might have been a red light.
Understanding the decision logic and assumptions that may be implicit on the part
of an actor can help create safeguards or establish common parameter definitions
to avoid such misunderstanding. In well-publicized catastrophic accidents such as
Deepwater Horizon or the Columbia Space Shuttle, a common contributing failure is
the lack of credence given to unsafe conditions by top decision-makers. An STS
analysis can help establish information-sharing pathways to avoid message filtering.
Incident reporting anywhere in the system can help guard against system creep
toward unsafe conditions and maintain resilience.
We posit that meaningful advances in the safety of CVs require (1) a shift in
the unit of analysis to the STS level, (2) that the RTS be viewed as a system of STSs,
(3) that methodologies be developed to facilitate identification of important interde-
pendencies and interactions among social and technical components within and
across system actors, and (4) that methodologies be developed for joint optimization to
engender positive emergent properties such as system safety, resilience, and overall
effectiveness. Underlying these efforts is recognition of the central role of human
factors considerations in advancing the safety and utility of AD. There is much
critical work that is needed to guide responsible development of CVs in the con-
nected world.
REFERENCES
Amaldi, P. & Smoker, A. (2013). An organizational study into the concept of “automa-
tion policy” in a safety critical socio-technical system. International Journal of
Sociotechnology and Knowledge Development, 5(2), 1.
Banks, V. A., Eriksson, A., O'Donoghue, J., & Stanton, N. A. (2018a). Is partially automated
driving a bad idea? Observations from an on-road study. Applied Ergonomics, 68,
138–145.
Banks, V. A., Stanton, N. A., Burnett, G., & Hermawati, S. (2018b). Distributed cogni-
tion on the road: Using EAST to explore future road transportation systems. Applied
Ergonomics, 68, 258–266.
Berk, J. (2009). System Failure Analysis. Materials Park, OH: ASM International.
Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehi-
cles. Science, 352(6293), 1573–1576.
Carayon, P., Hancock, P., Leveson, N., Noy, I., Sznelwar, L., & Van Hootegem, G. (2015).
Advancing a sociotechnical systems approach to workplace safety–developing the con-
ceptual framework. Ergonomics, 58(4), 548–564.
Chang, J., Hatcher, G., Hicks, D., Schneeberger, J., Staples, B., Sundarajan, S., … Wunderlich,
K. (2015). Estimated Benefits of Connected Vehicle Applications: Dynamic Mobility
Applications, AERIS, V2I Safety, and Road Weather Management Applications (No.
FHWA-JPO-15–255). Washington, DC: US Department of Transportation.
DeKort, M. (2018). Corner or edge cases are not most complex or accident scenarios. LinkedIn
post. www.linkedin.com/pulse/corner-edge-cases-most-complex-accident-scenarios-
michael-dekort/
Dingus, T. A., Guo, F., Lee, S., Antin, J. F., Perez, M., Buchanan-King, M., & Hankey, J.
(2016). Driver crash risk factors and prevalence evaluation using naturalistic driving
data. Proceedings of the National Academy of Sciences, 113(10), 2636–2641.
Flach, J. M., Carroll, J. S., Dainoff, M. J., & Hamilton, W. I. (2015). Striving for safety:
Communicating and deciding in sociotechnical systems. Ergonomics, 58(4),
615–634.
France, M. E. (2017). Engineering for Humans: A New Extension to STPA (Doctoral disserta-
tion). Cambridge, MA: Massachusetts Institute of Technology.
Gubbi, J., Buyya, R., Marusic, S., & Palaniswami, M. (2013). Internet of Things (IoT): A
vision, architectural elements, and future directions. Future Generation Computer
Systems, 29(7), 1645–1660.
Hancock, P. A., Kajaks, T., Caird, J. K., Chignell, M. H., Mizobuchi, S., Burns, P. C., …
Vrkljan, B. H. (2020). Challenges to human drivers in increasingly automated vehi-
cles. Human Factors, 62(2), 310–328.
Hettinger, L. J., Kirlik, A., Goh, Y. M., & Buckle, P. (2015). Modelling and simulation of
complex sociotechnical systems: Envisioning and analysing work environments.
Ergonomics, 58(4), 600–614.
IDC. (2014). Worldwide Internet of Things 2014–2020 Forecast. Framingham, MA:
International Data Corporation.
Kitchin, R. (2018). The realtimeness of smart cities. TECNOSCIENZA: Italian Journal of
Science & Technology Studies, 8(2), 19–42.
Lee, J. D. & See, K. A. (2004). Trust in automation: Designing for appropriate reliance.
Human Factors, 46(1), 50–80.
Leveson, N. (2015). A systems approach to risk management through leading safety indica-
tors. Reliability Engineering & System Safety, 136, 17–34.
Leveson, N., Dulac, N., Marais, K., & Carroll, J. (2009). Moving beyond normal accidents
and high reliability organizations: A systems approach to safety in complex systems.
Organization Studies, 30(2–3), 227–249.
Leveson, N. G. (2011). Engineering a Safer World: Systems Thinking Applied to Safety.
Cambridge, MA: MIT Press.
Leveson, N. G. (2013). An STPA Primer. Retrieved from http://psas.scripts.mit.edu/home/
wp-content/uploads/2015/06/STPA-Primer-v1.pdf
Lewis, M. (2017). Flash Boys. New York: W.W. Norton & Company.
Noy, Y. I., Hettinger, L. J., Dainoff, M. J., Carayon, P., Leveson, N. G., Robertson, M. M., &
Courtney, T. K. (2015). Emerging issues in sociotechnical systems thinking and work-
place safety. Ergonomics, 58(4), 543–547.
Noy, Y. I., Shinar, D., & Horrey, W. J. (2018). Automated driving: Safety blind spots. Safety
Science, 102, 68–78.
Nunes, A., Reimer, B., & Coughlin, J. (2018). People must retain control of autonomous vehi-
cles. Nature, 556, 169–171.
Parasuraman, R. & Wickens, C. D. (2008). Humans: Still vital after all these years of automa-
tion. Human Factors, 50(3), 511–520.
Pavard, B. & Dugdale, J. (2006). The contribution of complexity theory to the study of socio-
technical cooperative systems. In A.A. Minai & Y. Bar-Yam (Eds.), Unifying Themes
in Complex Systems (pp. 39–48). Berlin: Springer.
Preuk, K., Stemmler, E., Schießl, C., & Jipp, M. (2016). Does assisted driving behavior lead
to safety-critical encounters with unequipped vehicles’ drivers? Accident Analysis &
Prevention, 95, 149–156.
Rasmussen, J. & Svedung, P. (2000). Proactive Risk Management in a Dynamic Society.
Stockholm: Swedish Rescue Services Agency.
Sharples, S. (2009). Automation and technology in 21st century work and life. In P. Bust
(Ed.), Contemporary Ergonomics (pp. 208–217). Boca Raton, FL: CRC Press.
Sivak, M. S. & Schoettle, B. (2015). Road Safety with Self-Driving Vehicles: General
Limitations and Road Sharing with Conventional Vehicles. Ann Arbor, MI: University
of Michigan, Transportation Research Institute.
Smith, B. W. (2012). Managing autonomous transportation demand. Santa Clara Law Review,
52, 1401.
Smith, B. W. (2014). A legal perspective on three misconceptions in vehicle automation. In
G. Meyer & S. Beiker (Eds.), Road Vehicle Automation (pp. 85–91). Berlin: Springer.
Stankovic, J. A. (2014). Research directions for the internet of things. IEEE Internet of Things
Journal, 1(1), 3–9.
TRB. (2004). Transportation in an Aging Society: A Decade of Experience. Washington, DC:
Transportation Research Board.
Wadud, Z., MacKenzie, D., & Leiby, P. (2016). Help or hindrance? The travel, energy and
carbon impacts of highly automated vehicles. Transportation Research Part A: Policy
and Practice, 86, 1–18.
20 Congestion and
Carbon Emissions
Konstantinos V. Katsikopoulos
University of Southampton
CONTENTS
Key Points .............................................................................................................. 441
20.1 Introduction .................................................................................................. 442
20.2 Reducing Congestion: Driver Behavior ....................................................... 443
20.2.1 Effects of Vehicle Intelligence on Driver Behavior .......................... 443
20.2.2 Effects of Vehicle Connectedness on Driver Behavior .................... 446
20.2.3 Effects of Vehicle Automation on Driver Behavior .......................... 448
20.3 Reducing Carbon Emissions: Vehicle Capacity and Driver
Readiness to Use .......................................................................................... 449
20.3.1 Vehicle Capacity to Reduce Carbon Emissions................................ 449
20.3.2 Driver Readiness to Use Carbon-Emissions-Reducing Vehicles ..... 449
20.3.3 Eco-Driving....................................................................................... 450
20.4 Conclusion ................................................................................................... 451
References .............................................................................................................. 451
KEY POINTS
• Unless people completely surrender driving control to ACIVs, researchers
must understand the effects of vehicle intelligence, connectedness, and
automation on driver behavior.
• Interventions that aim at behavior modification only—such as congestion
pricing or techniques of nudging—without boosting underlying processes
and competencies, might fail to promote pro-environmental driving-related
behaviors.
• Reasoning and decision-making, regarding one’s own or others’ driving,
seem to be predicted by simple rules of thumb.
• The use of simple rules of thumb for parking can increase system efficiency,
compared with standard game-theoretic proposals.
• Driving-related moral and social dilemmas, induced by automation, have
been investigated, but more work remains to be done.
• Life cycle assessment of automated vehicles has found that they could
decrease greenhouse gas emissions by 9%, but this does not take into
account possible rebound effects such as increased driving.
• Pro-environmental driving-related behaviors are shaped by personal as
well as contextual factors: being motivated to decrease carbon emissions
is not enough to undertake a pro-environmental behavior.
• Carbon-emission information may influence drivers' decisions when it is
provided clearly, but other criteria (e.g., fuel price, safety, time) must also
be satisfied.
20.1 INTRODUCTION
According to the U.S. Bureau of Transportation Statistics (2007), the road transpor-
tation sector accounts for approximately one-third of U.S. carbon emissions from the
use of energy. Previous studies have shown that congestion wastes time and money,
and it also increases emissions of greenhouse gases and localized pollutants such as
particulate matter (Barth & Boriboonsomsin, 2008). Can roads be de-congested, and
emissions be reduced, by changing the transportation infrastructure?
One might expect that building more roads should relieve traffic congestion.
Think again. Duranton and Turner (2011) conclude that increased provision of
interstate highways and major urban roads is unlikely to relieve congestion. These
researchers used the economic theory of supply and demand coupled with the sta-
tistics of logistic regression to estimate the elasticity of vehicle-kilometers traveled
with respect to the lane kilometers in U.S. metropolitan areas between 1983 and
2003. This elasticity was estimated to be 1.03, which means that there is more driv-
ing when there is more road to drive on, with the increase in traffic being 3% in
excess of the corresponding increase in road.
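Written out, the elasticity estimate says (a restatement of the figure just cited, with VKT standing for vehicle-kilometers traveled):

```latex
\varepsilon \;=\; \frac{\%\,\Delta\,\mathrm{VKT}}{\%\,\Delta\,\text{lane-km}} \;=\; 1.03
```

so a 10% expansion of lane kilometers is associated with roughly a 10.3% increase in vehicle-kilometers traveled.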
Decades ago, such results were expressed under the umbrella term of a funda-
mental law of road congestion (Downs, 1962). With the benefit of hindsight, it is
not very surprising that people increase the consumption of road when more road is
made available to them—such behavioral adaptations are ubiquitous; haven’t we all
had the experience of eating more just because more was served to us? This suggests
that before looking for physical interventions for decreasing congestion, one might
want to look for psychological ones.
Since the publication of Thaler and Sunstein’s (2008) Nudge, psychological inter-
ventions are often identified with behavioral ones. However, these two types of inter-
vention are not the same. The distinction we have in mind is that in the latter, an
effort is made to directly change behavior without necessarily enhancing the under-
lying psychological processes and their associated competencies. For example, indi-
viduals might end up eating fruits and vegetables if those are exhibited at eye level
in their work cafeteria, without learning and understanding that eating fruits and
vegetables (in general) enables the body’s healthier function. From a human-factors
perspective, this is a tricky issue as any gains from purely behavioral interventions
may be transient, fail to generalize to other contexts, or could create dissonance and
disappointment because it is not clear that the receivers of a nudge actually want to
make the choices they are nudged toward (Sugden, 2017).
1 The University of Massachusetts at Amherst driving simulator (see Figure 20.1) was used, which,
at the time, consisted of a car (a 1995 Saturn SL1) connected to an Onyx Infinite Reality Engine 2
and an Indy computer (both manufactured by Silicon Graphics, Inc.). The images on the screen
subtended 60° horizontally and 30° vertically, could be artificial or natural, and were developed
using Designer’s Workbench (by Centric). The movement of other cars on the road was controlled by
Real Drive Scenario Builder (Monterey Technologies, Inc.). The system was assembled by Illusion
Technologies.
FIGURE 20.1 Driving simulator used in the route and parking choice experiments described
in the text (located at the University of Massachusetts at Amherst).
FIGURE 20.2 Sign with travel time information, to guide route choice.
120 − 70 = 50 minutes. By crossing three levels of average travel time (95, 100, 105)
with five levels of range (20, 30, 40, 50, 60), 15 route choice scenarios were generated.
The results showed that a risk-averse driver is less likely to divert to the alterna-
tive route as the range increases while the average remains the same. For example,
a risk-averse driver was less likely to choose Route 28 in the example in Figure 20.2
when the travel time ranged from 70 to 120 minutes than when the travel time ranged
from 80 to 110 minutes (the average, in both cases, equals 95 minutes). On the other
hand, a risk-seeking driver was more likely to choose the 70-to-120-minute route.
An early claim in economics was that people are risk averse. The psychological
literature, however, suggests that people are risk averse when the choice belongs to
the domain of gains but risk seeking when the choice belongs to the domain of losses
(Kahneman & Tversky, 1979). In route choice, scenarios in which the alternative has
a shorter average time than the default belong to the domain of gains. The scenario
in Figure 20.2 where the alternative route ranges from 70 to 120 minutes is in the
domain of gains because the average equals 95 minutes, which is less than the default
of 100 minutes. If the alternative route ranged from 90 to 120 minutes, the choice
would be in the domain of losses because the average would equal 105 minutes, and
that is more than the default of 100 minutes.
FIGURE 20.3 Number of participants diverting to the alternative route as a function of its
range of travel time. On the x-axis, 1 means a range of 20 minutes, 2 means 30 minutes, etc.,
and 5 means 60 minutes.
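The classification just described can be stated compactly. A minimal sketch, assuming, as in these scenarios, that the perceived average of the alternative route is the midpoint of its posted range and that the default route averages 100 minutes:

```python
def choice_domain(t_min, t_max, default_avg=100.0):
    """Classify a route-choice scenario relative to the default route's average time."""
    avg = (t_min + t_max) / 2
    if avg < default_avg:
        return "gains"   # alternative faster on average; risk aversion expected
    if avg > default_avg:
        return "losses"  # alternative slower on average; risk seeking expected
    return "neutral"

print(choice_domain(70, 120))  # gains  (average 95)
print(choice_domain(90, 120))  # losses (average 105)
```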
Past research on route choice had only tested scenarios in the domain of gains.
Katsikopoulos et al. (2000) tested route choices framed as losses. So what happened
in the experiment? Figure 20.3 shows the number of participants (out of 30) that
diverted as a function of the range of travel time of the alternative route (on the x axis,
1 means a range of 20 minutes, 2 means 30 minutes, etc. and 5 means 60 minutes).
There is a decreasing trend for gains indicating risk aversion, a roughly flat line for
the case where the alternative route has the default average (indicating risk neutral-
ity), and an increasing trend for losses indicating risk seeking.
Expected utility theory (von Neumann & Morgenstern, 1947) and its modifica-
tion, prospect theory (Kahneman & Tversky, 1979), are often used to model human
choice in economics and psychology. According to such theories, people compute
the “worth” of a decision option by summing up its possible outcomes, where each
outcome is weighted by its probability. Then, people choose the option with the
maximum worth. If outcomes and probabilities are taken at face value, this is the
expected value theory. In expected utility theory and prospect theory, outcomes and
probabilities may be transformed by multi-parameter mathematical functions. A prob-
lem with such theories is that they are probably too complicated to be describing the
underlying cognitive processes; rather, they are meant to be as-if representations of
behavior (Katsikopoulos & Gigerenzer, 2008). In as-if theories, the claim is not that
people really translate outcomes and probabilities and then sum and weight them,
but rather that they make decisions “as-if” they did so.
In an alternative theory, the driver takes a single sample to estimate the travel
time along the alternative route (perceiving that travel time as following a normal
probability distribution over the interval of possible travel times), and chooses
to divert if this estimate is less than the refer-
ence travel time. This modeling incorporates two themes of behavioral theory
(Katsikopoulos & Gigerenzer, 2008): (1) the driver is tuned to the probabilistic
nature of decision-making and (2) the driver uses one piece of information to make
a decision (one sample of travel time). This simple rule of thumb accounts well for
a host of effects on route choice, including effects that could also be modeled by
prospect theory, such as the effects of gains and losses, and effects that could not
be modeled by prospect theory, such as the effects of an uncertain reference point
(Katsikopoulos et al., 2002).
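A minimal simulation shows why the single-sample rule reproduces the trends in Figure 20.3; treating the posted interval as roughly plus or minus two standard deviations of the perceived distribution is our assumption for illustration, not a parameter from the original papers:

```python
from statistics import NormalDist

def p_divert(t_min, t_max, reference=100.0):
    """Probability of diverting under the single-sample rule of thumb."""
    mean = (t_min + t_max) / 2   # perceived average of the alternative route
    sd = (t_max - t_min) / 4     # interval treated as roughly +/- 2 SDs (assumption)
    return NormalDist(mean, sd).cdf(reference)

for rng in (20, 30, 40, 50, 60):  # the five ranges used in the experiment
    gains = p_divert(95 - rng / 2, 95 + rng / 2)     # average 95 < default of 100
    losses = p_divert(105 - rng / 2, 105 + rng / 2)  # average 105 > default of 100
    print(f"range {rng}: P(divert|gains) = {gains:.2f}, P(divert|losses) = {losses:.2f}")
```

As the range widens, the probability of diverting falls in the gains scenarios and rises in the losses scenarios, mirroring the decreasing and increasing trends in Figure 20.3.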
Such rules of thumb have been put forth for predicting parking choices as well.
Hester, Fisher, and Collura (2002) ran a driving simulator study with parking sce-
narios where a utility model and a simple rule of thumb made different predictions.
It was found that actual parking choices were more consistent with the rule of thumb
than with the utility model. For instance, only 10% of the participants made all park-
ing choices consistent with utility theory, whereas 35% of the participants made all
choices consistent with the simple rule of thumb.
Given the number of spots in each parking resource and the various costs—which can be made
available to ACIV drivers—Karaliopoulos et al. (2017) compared the performance
of the system when drivers behave according to (1) an “optimal” equilibrium com-
puted by game theory and (2) the following simple rule of thumb:
Step 1. If the best-case costs of the two alternatives (on-street and lot parking)
differ by more than a percentage of the overall worst-case cost one may
incur, then search for on-street parking (because it has much smaller best-
case cost);
Step 2. Otherwise, consider the probabilities incurring the two best-case costs:
If their difference exceeds a threshold, then choose the alternative with the
larger probability of best-case cost;
Step 3. Otherwise, choose the alternative with the smaller worst-case cost
(independently of how small the difference between the two worst-case
costs is).
To see how the simple rule works, say that there are currently two drivers on the road
and there is one parking spot available on the street and three in the lot. In this case,
the unit costs for on-street and lot parking are 1 and 5.5 respectively, and the excess
cost is 2. Assume that for one of these drivers the percentage in Step 1 equals 10%
and the threshold used in Step 2 equals 0.1. Then, the difference of the best-case costs
is 5.5 − 1 = 4.5, which is larger than 0.75 (10% of the overall worst-case cost 5.5 + 2 = 7.5), and thus this driver would
search for on-street parking. If this driver’s percentage parameter were 60%, however,
Step 2 of the rule would be used. In Step 2, the difference of the probabilities—
assuming that the other driver chooses one of the four parking spots randomly—of
the best-case costs equals 1.0 (for the parking lot) − 0.75 (for on-street parking) = 0.25,
which is larger than 0.1, and thus the driver would go to the parking lot.
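The worked example can be reproduced with a short sketch of the three-step rule. The parameter names, and the treatment of worst-case costs (failing the on-street search and then paying the excess cost plus the lot fee), are our reading of the rule rather than the authors' formalization:

```python
def parking_choice(best_street, best_lot, excess, p_best_street, p_best_lot,
                   pct=0.10, thresh=0.10):
    """Lexicographic parking heuristic sketched in the text
    (after Karaliopoulos, Katsikopoulos, & Lambrinos, 2017)."""
    worst = best_lot + excess  # overall worst case: fail on-street, pay excess, park in lot
    # Step 1: a large gap in best-case costs decides immediately.
    if abs(best_lot - best_street) > pct * worst:
        return "on-street" if best_street < best_lot else "lot"
    # Step 2: otherwise compare the probabilities of realizing the best case.
    if abs(p_best_lot - p_best_street) > thresh:
        return "lot" if p_best_lot > p_best_street else "on-street"
    # Step 3: otherwise minimize the worst-case cost.
    return "on-street" if (excess + best_lot) < best_lot else "lot"

print(parking_choice(1, 5.5, 2, p_best_street=0.75, p_best_lot=1.0, pct=0.10))  # on-street
print(parking_choice(1, 5.5, 2, p_best_street=0.75, p_best_lot=1.0, pct=0.60))  # lot
```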
The rationale for considering this particular rule of thumb is that it might be
descriptive of how people choose where to park. A reason to expect so is because
the rule is analogous to a rule that predicted people’s majority choices better than
expected utility theory and prospect theory (Katsikopoulos & Gigerenzer, 2008).
Consistently, Karaliopoulos et al. (2017) provide the results of a survey of 1,120
participants, which found that the parking choices of those drivers who always
park on the street or always park in a lot—19% of all participants—can be well
described by this simple rule. It is not clear how to apply expected utility theory or
prospect theory in this case, because it is not clear how to reliably estimate their
multiple parameters.
Regarding system performance, Karaliopoulos et al. (2017) analytically derived
conditions under which game-theoretic equilibrium behavior incurred larger total
costs and resulted in a larger percentage of drivers competing for on-street parking
than behavior consistent with the simple rule presented above. For instance, say that
there are 60 drivers on the road and the number of available parking spots is 10 on
the street and 15 in the lot, the unit costs for on-street and lot parking are 1 and 5.5,
respectively, and the excess cost is 2. Then, it turns out that the total cost at equilib-
rium equals 280, whereas under the simple rule, it equals 220. And, the number of
competing drivers at equilibrium is 55, whereas under the simple rule, it is 35.
In general, conditions under which the simple rule improves system performance
are fulfilled for a broad range of scenarios concerning the fees charged for parking
resources and their distance from the destinations of the driver’s trips. This result
also holds for more complicated parking games, including more than one lot. Finally,
one might expect that the simple rule of thumb is more transparent than game theory
to drivers, parking managers, and other stakeholders such as local authorities.
25 cents to answer a question), such work surely needs to be followed up. It is also
relevant to congestion when there is a choice between two routes and the risk of
injury or fatality to other drivers and vulnerable road users is relatively high (say,
by a factor of four) for one route, but the travel time is shorter (say, by half).
Now, consider the same man living in Rio de Janeiro, Brazil. The overcrowded pub-
lic transportation does not reach all of this city’s areas, most of the roads do not even
have a biking lane, and local violence should also be considered. Can the man now
easily avoid buying a car in this situation? Also, what if he has three children, or lives
far away from his work?
A recent survey conducted by ReportLinker in the United States (ReportLinker,
2017) found that 62% of the respondents would buy an autonomous vehicle. The
following are the main reasons for doing so: using it for long-distance travel (18%),
becoming able to multitask (12%), increasing the safety of roads (10%), not having
to park (6%), and helping to reduce energy consumption (5%). On the other hand,
33% of the respondents said that safety was their top objection to buying an auto-
mated vehicle.
More generally, consumers have said that they would favor items with a lower car-
bon footprint if they were given clear information (Camilleri, Larrick, Hossain, &
Patino-Echeverri, 2019). Carbon footprint labels have been suggested as a simple
and clear intervention for increasing the understanding of energy use and greenhouse
gas emissions for a diversity of products, thus helping to reduce environ-
mental impacts. In Finland, 90% of consumers stated that carbon-footprint
information would have at least some impact on their buying decisions, but only
when other purchasing criteria (e.g., price of fuel or travel time) were satisfied
(Hartikainen, Roininen, Katajajuuri, & Pulkkinen, 2014). Moreover, 86% pre-
ferred carbon labels that allowed comparisons of carbon emissions to be made across
products.
20.3.3 Eco-Driving
Eco-driving is one area where the human factors issues dominate the discussion.
Much more has become known about these issues over the last ten years. They include
everything from pre-trip eco-driving planning, to actual eco-driving during the trip,
and finally to post-trip presentation of energy use (Barkenbus, 2010). A recent book
has focused on one aspect of the actual driving task, in particular the presentation of
in-vehicle information to the driver as the trip unfolds (McIlroy & Stanton, 2018).
The authors argue for an ecological design of the interface (e.g., Rasmussen & Vicente,
1989), one which supports the interaction of the driver with the driving task across
skill-based, rule-based, and knowledge-based behaviors.
The authors of the book depart in an interesting way from the above discussion
on congestion, which emphasizes the primacy of boosting underlying competencies
and processes in addition to improving actual behaviors. With eco-driving, it turns
out to be important to automate the decision, which is a skill-based behavior, not
a knowledge-based behavior, in part because the cognitive load imposed by eco-
driving needs to be minimized. The cognitive load needs to be minimized because
eco-driving requires continuous input, whereas route choice is engaged in sporadi-
cally and often when the driver chooses to do so. As McIlroy and Stanton (2018)
state: "the expert eco-driver performs the task in a way that approaches automaticity,
that is, they are performing at the skill-based level of cognitive control." We refer the
reader to their text for an enlightening and much more detailed discussion.
20.4 CONCLUSION
Traffic congestion increases emissions of greenhouse gases and localized pollutants
such as particulate matter (Barth & Boriboonsomsin, 2008). Life cycle assessment
has found that automated vehicles could decrease greenhouse gas
emissions by 9% (Gawron et al., 2018; although this figure does not take into
account possible rebound effects). Furthermore, vehicle intelligence and connected-
ness are expected to bring additional efficiency gains.
This chapter reviewed research on (1) the effects of vehicle automation, intel-
ligence, and connectedness on driver behavior; (2) the capacity of automated
vehicles to reduce carbon emissions and the readiness of drivers to use such vehi-
cles; and (3) the strategies needed to achieve eco-driving. Related to (1), it seems
that drivers tend to interface with vehicle technology by relying on simple rules
of thumb. With regard to (2), it seems that people's motivation to reduce carbon
emissions is not enough for them to engage in pro-environmental behavior; rather,
clearly informing drivers is key. Moreover, with respect to (3), once a driver
decides to purchase an environmentally friendly vehicle, it becomes important at
that point to focus on the development of the actual skills required to make eco-
driving a reality.
We end with a note on research methodology. Work on (1) has utilized formal
analyses and driving simulation experiments, where tradeoffs encountered while
driving were made explicit and controlled experimentally. In contrast, work on (2)
has used engineering analyses and surveys, where tradeoffs between engaging in
pro-environmental behavior and possibly giving up some convenience have not been
studied in controlled laboratory settings. For example, we do not know if and to
what extent people would buy a vehicle reducing carbon emissions at the expense of
increased travel time (Huang, Ng, & Zhou, 2018). Finally, work on (3) has used all
methods. However, the effects of eco-driving training appear to attenuate over time.
How to maintain these effects is an issue of considerable interest. Investigating such
questions would be a promising research direction.
REFERENCES
Barkenbus, J. (2010). Eco-driving: An overlooked climate change initiative. Energy Policy,
38, 762–769.
Barth, M. & Boriboonsomsin, K. (2008). Real-world carbon dioxide impacts of traffic con-
gestion. Transportation Research Record: Journal of the Transportation Research
Board, 2058, 163–171.
Berners-Lee, M. (2011). How Bad Are Bananas? The Carbon Footprint of Everything.
London: Greystone Books Ltd.
Bond, M. (2009). Decision-making: risk school. Nature News, 461(7268), 1189–1192.
Bonnefon, J-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles.
Science, 352, 1573–1576.
Bortoleto, A. P. (2014). Waste Prevention Policy and Behavior: New Approaches to Reducing
Waste Generation and its Environmental Impacts. Basingstoke, UK: Routledge.
Camilleri, A. R., Larrick, R. P., Hossain, S., & Patino-Echeverri, D. (2019). Consumers under-
estimate the emissions associated with food but are aided by labels. Nature Climate
Change, 9(1), 53.
Downs, A. (1962). The law of peak-hour expressway congestion. Traffic Quarterly, 16(3),
393–409.
Duranton, G. & Turner, M. A. (2011). The fundamental law of road congestion: Evidence
from US cities. American Economic Review, 101(6), 2616–2652.
Endsley, M. (2017). Autonomous driving systems: A preliminary naturalistic study of the
Tesla Model S. Journal of Cognitive Engineering and Decision Making, 11, 225–238.
Foot, P. (1967). The problem of abortion and the doctrine of the double effect. Oxford Review,
5, 5–15.
Gawron, J. H., Keoleian, G. A., De Kleine, R. D., Wallington, T. J., & Kim, H. C. (2018). Life
cycle assessment of connected and automated vehicles: Sensing and computing subsys-
tem and vehicle level effects. Environmental Science & Technology, 52(5), 3249–3256.
Hartikainen, H., Roininen, T., Katajajuuri, J. M., & Pulkkinen, H. (2014). Finnish consumer
perceptions of carbon footprints and carbon labelling of food products. Journal of
Cleaner Production, 73, 285–293.
Hertwig, R. & Grüne-Yanoff, T. (2017). Nudging and boosting: Steering or empowering good
decisions. Perspectives on Psychological Science, 12(6), 973–986.
Hester, A. E., Fisher, D. L., & Collura, J. (2002). Drivers’ parking decisions: Advanced park-
ing management systems. Journal of Transportation Engineering, 128(1), 49–57.
Huang, Y., Ng, E., & Zhou, J. (2018). Eco-driving technology for sustainable road transport:
A review. Renewable and Sustainable Energy Reviews, 93, 596–609.
Igliński, H. & Babiak, M. (2017). Analysis of the potential of autonomous vehicles in reduc-
ing the emissions of greenhouse gases in road transport. Procedia Engineering, 192,
353–358.
Kahneman, D. & Tversky, A. (1979). Prospect theory: An analysis of decision under risk.
Econometrica, 47, 263–291.
Kantowitz, B. H., Hanowski, R. J., & Kantowitz, S. C. (1997). Driver acceptance of unreliable
traffic information in familiar and unfamiliar settings. Human Factors, 39(2), 164–177.
Karaliopoulos, M., Katsikopoulos, K., & Lambrinos, L. (2017). Bounded rationality can make
parking search more efficient: The power of lexicographic heuristics. Transportation
Research Part B: Methodological, 101, 28–50.
Katsikopoulos, K. V. (2014). Bounded rationality: The two cultures. Journal of Economic
Methodology, 21(4), 361–374.
Katsikopoulos, K. V., Duse-Anthony, Y., Fisher, D. L., & Duffy, S. A. (2000). The framing of
drivers’ route choices when travel time information is provided under varying degrees
of cognitive load. Human Factors, 42(3), 470–481.
Katsikopoulos, K. V., Duse-Anthony, Y., Fisher, D. L., & Duffy, S. A. (2002). Risk attitude
reversals in drivers' route choice when range of travel time information is provided.
Human Factors, 44(3), 466–473.
Katsikopoulos, K. V. & Gigerenzer, G. (2008). One-reason decision-making: Modeling viola-
tions of expected utility theory. Journal of Risk and Uncertainty, 37(1), 35–56.
Lee, J. D. & See, K. A. (2004). Trust in automation: Designing for appropriate reliance.
Human Factors, 46(1), 50–80.
McIlroy, R. C. & Stanton, N. A. (2018). Eco-Driving: From Strategies to Interfaces (Transportation
Human Factors). Boca Raton, FL: CRC Press.
Rasmussen, J. & Vicente, K. (1989). Coping with human errors through system design:
Implications for ecological interface design. International Journal of Man-Machine
Studies, 31, 517–534.
ReportLinker Insight. (2017). Self-Driving Vehicles Navigate Twists and Turns on the Road
to Adoption. Retrieved from www.reportlinker.com/insight/self-driving-vehicles-
navigate-twists-turns-road-adoption.html?mod=article_inline
Stern, P. C. (2000). New environmental theories: Toward a coherent theory of environmen-
tally significant behavior. Journal of Social Issues, 56, 407–424.
Sugden, R. (2017). Do people really want to be nudged towards healthy lifestyles? International
Review of Economics, 64(2), 113–123.
Thaler, R. H. & Sunstein, C. R. (2008). Nudge: Improving Decisions about Health, Wealth,
and Happiness. New Haven, CT: Yale University Press.
Townsend, J. T. (2008). Mathematical psychology: Prospects for the 21st century. Journal of
Mathematical Psychology, 52, 271–282.
United States Bureau of Transportation Statistics. (2007). Transportation Statistics Annual
Report. Washington, DC: US Government Printing Office.
von Neumann, J. & Morgenstern, O. (1947). Theory of Games and Economic Behavior (2nd
ed.). Princeton, NJ: Princeton University Press.
21 Automation Lessons
from Other Domains
Christopher D. Wickens
Colorado State University
CONTENTS
Key Points ............................................................................................................. 455
21.1 Introduction .................................................................................................. 456
21.2 Classic Automation Accidents ...................................................................... 456
21.3 Features of Automation................................................................................. 457
21.3.1 The Degree of Automation: What Does Automation
Do and How Does It Do It? ......................................................... 457
21.3.2 Automation Reliability ..................................................................... 459
21.3.3 Automation Trust and Dependence ................................................. 460
21.3.4 Out-of-the-Loop-Unfamiliarity (OOTLUF) ..................................... 460
21.3.5 Automation Modes and Complexity ................................................. 461
21.3.6 Automation Transparency ................................................................. 461
21.3.7 Adaptable Versus Adaptive Automation........................................... 462
21.4 Research Findings......................................................................................... 462
21.4.1 Accident and Incident Data Mining: Advantages and Costs ............ 462
21.4.2 Experimental and Simulation Results .............................................. 463
21.4.2.1 Alerting Systems ................................................................ 463
21.4.2.2 Attention Cueing ................................................................ 464
21.4.2.3 Stages 2 and 3 Automation: OOTLUF .............................. 465
21.5 Solutions: Countermeasures Proposed and Implemented in
Other Domains ............................................................................................. 465
21.5.1 Flexible and Adaptive Automation ................................................... 465
21.5.2 Automation Transparency ......................................................... 467
21.5.3 Training ............................................................................................ 467
21.6 Conclusions ................................................................................................... 468
References .............................................................................................................. 468
KEY POINTS
• Automation can be defined by the stage of information processing for which
it assists the human, and the level of assistance at each stage, together defin-
ing the degree of automation.
• The higher the degree of automation, the better is human–automation per-
formance, the lower the workload, and the lower the situation awareness;
when automation then fails, recovery is more problematic.
21.1 INTRODUCTION
As the many other chapters in this book have made clear, automation takes many
forms in vehicles. Foremost among these are the higher levels of automation and,
ultimately, the total autonomy of self-driving cars. But there are numerous other
examples of automation, such as headway monitors, auto-locks, a variety of alerts
and warnings, anti-lock brakes and navigational systems. In designing all such sys-
tems to facilitate better interactions with the human, balancing safety versus produc-
tivity, many lessons can be drawn from accident analysis and research from other
domains, particularly from aviation, which is the pioneer in the systematic investiga-
tion of human–automation interaction (Wiener & Curry, 1980).
In this chapter, I will review the lessons that can be learned from human interac-
tion with systems other than vehicles, including human flight in aviation (Billings,
1997; Ferris, Sarter, & Wickens, 2010; Landry, 2009), unmanned air vehicles
(Cummings & Guerlain, 2007), medicine (Garg et al., 2005; Morrow, North, &
Wickens, 2006), space (Li, Wickens, Sarter, & Sebok, 2014), air traffic control
(Wickens, Mavor, Parasuraman, & McGee, 1998), military systems (Chen et al.,
2018), consumer products (Sauer & Ruttinger, 2007), process control (Strobhar,
2012), robotics, and others.
This chapter will begin by providing a brief synopsis of three aviation tragedies
directly related to breakdowns in human–automation interaction (HAI) when auto-
mation fails. Furthermore, it will describe in detail some of the key concepts in HAI
that have arisen out of non-vehicle research, but are directly applicable to it. Then,
I describe the empirical research bearing on several major issues in HAI. Finally, I
turn to four suggested solutions to improving HAI, preventing its disasters without
sacrificing the productivity that it offers, and examine the empirical evidence in sup-
port of those solutions.
In 1983, on KAL flight 007 over the north Pacific, pilots programmed a course
into the flight management system (FMS) that was incorrect. As with the Everglades
accident, pilots failed to monitor how automation was flying the plane as it flew
directly into Soviet airspace and was shot down, with all lives lost (Wiener, 1988).
In 2013, an Asiana Airlines flight was on approach to San Francisco International Airport
when the pilots became confused regarding what the "Auto-land" system was doing.
They acted in opposition to what the automation was commanding the plane to do, and
the plane stalled and crashed just short of the runway threshold.
All three of these tragedies—and many more (see for example Dornheim,
1995)—have identified problems in HAI encountered by highly skilled profession-
als. Such problems filtered through careful accident analysis and examined through
flight simulation research can identify lessons learned that may be transferred to
automation in ground vehicles, along with potential solutions. In the next section, I
identify several key features of HAI that can be used to understand generic, cross-
disciplinary applications.
out or executing the action that was decided upon in Stage 3, action selection and
decision-making). [Note: Stage 3 maps onto the automation support of human deci-
sions as originally described by Sheridan & Verplank (1978).] For example, auto-
mation assistance for the health care practitioner may highlight particular medical
problems on a patient’s electronic record or alert the practitioner to a dangerous
condition (Stage 1), may offer a diagnostic suggestion (Stage 2), may offer a recom-
mended treatment (Stage 3), and a drug infusion pump may automatically administer
medication at the appropriate time and dosage (Stage 4). As with Sheridan and
Verplank’s (1978) original scale, each of these stages can also be described as being
implemented at varying levels of authority.
Thus, the two-dimensional structure (taxonomy) of stages and levels, shown
in Figure 21.1, can be described by a higher level variable defining the degree
of automation (DOA), moving from the lower left to the upper right (Onnasch,
Wickens, Li, & Manzey, 2014; Wickens, 2017). When automation is implemented
at the highest level of all four stages, this describes the status of full autonomy.
The characterization of levels and stages of automation proposed by Parasuraman
et al. (2000) can also be likened, conceptually, to the levels of automation for auto-
mated driving systems described by the Society for Automotive Engineers (SAE,
2016; this Handbook, Chapters 1 and 2).
In some key developments of the history of this research, Endsley and Kiris
(1995), and Kaber and Endsley (2004; Kaber, Onal, & Endsley, 1999) carried out
early research on automation and situation awareness (SA) that could be readily
interpreted in the context of stages and levels of automation (i.e., DOA). More
recently, Onnasch et al. (2014) carried out a meta-analysis of DOA research that
examined the effect of four correlated variables that changed as DOA increased:
(1) performance of the task that automation was designed to support improved;
(2) human workload decreased; (3) humans lost SA; and, as a consequence,
(4) when automation failed, human failure recovery was more problematic
(and sometimes disastrous).
A major reason why the taxonomy defining DOA is important in HAI is that
it defines a distinction that is relevant to many automation decision support tools:
Should automation advise the human user as to “what is” (diagnostic support at
Stages 1 and 2) or should automation advise the human user “what to do” (decision
FIGURE 21.1 DOA. In this rendering, there are only two levels within each stage, but there
could be several, and there does not need to be the same number of levels at each stage.
aiding at Stage 3)? Such a dichotomy exists in many areas, such as medical decision-
making (Garg et al., 2005; Morrow et al., 2006), aviation conflict avoidance systems,
or even statistical tools that distinguish between providing p values and confidence
intervals versus advice to accept or reject the null hypothesis (i.e., decision aid;
Wickens & McCarley, 2017). Given that automation is imperfect, and there may be
more severe consequences if it fails at later stages of automation, this distinction
needs to be considered carefully in automation design and implementation.
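One way to picture the stages-by-levels taxonomy is as a profile of automation levels across the four stages, with the DOA rising as levels increase at later stages; the numeric scoring below is an illustrative assumption, not a published metric:

```python
from dataclasses import dataclass

STAGES = ("information acquisition", "information analysis",
          "action selection", "action implementation")

@dataclass
class AutomationProfile:
    """A level of automation for each of the four stages
    (after Parasuraman, Sheridan, & Wickens, 2000)."""
    levels: tuple  # one entry per stage, e.g., 0 = manual, 1 = partial, 2 = full

    def degree_of_automation(self) -> int:
        # Moving from the lower left to the upper right of Figure 21.1:
        # higher levels at more (and later) stages yield a higher DOA.
        return sum(self.levels)

collision_alert = AutomationProfile(levels=(2, 1, 0, 0))  # Stage 1-2 support only
full_autonomy = AutomationProfile(levels=(2, 2, 2, 2))    # highest level at all stages
print(collision_alert.degree_of_automation(), full_autonomy.degree_of_automation())
```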
It is important to recognize that the latter three of these failure sources are not true
“failures” from an engineering standpoint. However, from the human user’s standpoint,
they are perceived as automation failures, and this incorrect perception can lead to the
problematic failure response, as in the case of the Asiana accident described above.
1. Failure to monitor the process(es) carried out by automation, the raw data
that it is processing, or the performance it is producing—a failure often
directly measurable by eye movements (Metzger & Parasuraman, 2005;
Parasuraman & Manzey, 2010).
2. Failure to understand what automation is doing, at the time of the automa-
tion failure, and hence intervening inappropriately. A failure to understand
will certainly follow from the failure to monitor but may also be associated
with an absence of engagement, even when the eyes may scan the indicators
of automation functioning. Such a failure is closely related to the phenom-
enon known as the generation effect (Slamecka & Graf, 1978), whereby we
remember the state of systems better when we generate responses pertain-
ing to those systems than when we witness other agents (e.g., automation)
carrying out identical actions. Such memory in this case translates directly
into understanding of the current state of the system. The generation effect
has often been applied to the concept of active learning, as a superior train-
ing technique to passive listening or reading (Dunlosky et al., 2013).
Thus, failure to monitor and understand can lead to inappropriate interven-
tions when automation fails. To this is added the third component of OOTLUF:
3. Deskilling refers to the state wherein prolonged use of automation will
lead to degraded operator skills in performing the task that automation is
programmed to accomplish, hence further aggravating the inappropriate
response to automation failure. This concern has been identified in avia-
tion, when pilots rely too much on their automated flight systems (Casner,
Geven, Recker, & Schooler, 2014).
All three of these effects (on monitoring, understanding, and manual recovery skills)
appear to be amplified with a higher DOA, as, with this higher degree, there is less rea-
son to become engaged in the task. Hence, the more automation does during routine
correct performance, the more serious are the consequences when it
fails. This tradeoff has been described as the “lumberjack phenomenon”: the higher
the tree, the harder it falls (Sebok & Wickens, 2017).
21.4 RESEARCH FINDINGS
21.4.1 Accident and Incident Data Mining: Advantages and Costs
A good deal of the understanding of problems in HAI has been gained from
the aviation industry, in identifying and analyzing breakdowns that have contrib-
uted to major accidents, such as the Eastern Airlines Everglades crash described
above. Such analyses appear in National Transportation Safety Board (NTSB)
reports and may include HAI errors as one of the causal factors. The limitations in
using such reports to draw conclusions about causality are twofold. (1) Pilots
involved in crashes are often killed and there are always multiple factors involved.
It is impossible to sort out, with any certainty, the extent to which HAI was the
precipitating factor rather than just a contributing cause, not to mention identi-
fying which of the many issues of HAI might have been involved. (2) Aircraft
accidents are, fortunately, extremely rare. But such rarity forecloses the multiple
samples that are required to draw reliable statistical inferences regarding fre-
quency and causality. These inferences are at the foundation of the science of
human factors.
A second source of data comes from incident analysis. Since 1976, the National
Aeronautics and Space Administration (NASA) has collected and categorized a large vol-
ume of incident reports, contained in the Aviation Safety Reporting System (ASRS),
in which pilots file, anonymously, voluntary reports of what they consider safety-
compromising incidents in their flights. Such data have the advantage of large
sample sizes (i.e., large N) absent in accident reports. But because the reports are
based on the recollections of pilots, who are not generally trained in human factors
or psychology, the cognitive or information-processing mechanisms are often not included in the
narrative. Of course, since the reports are voluntary, while the numbers are large,
they may be highly biased, perhaps against reporting an incident in which a pilot
committed a clear violation (notwithstanding the guarantee of anonymity in such fil-
ings). Nevertheless, such a system has revealed valuable conclusions regarding HAI,
recently summarized in an extensive review of 50 ASRS reports compiled by the
Committee on Aviation Safety (Commercial Air Safety Team, 2014). Their conclusions
were notable in identifying the frequency of the sorts of automation mode errors
described above.
involved in aviation incidents is half that of alert false alarms. This adjustment is
done with the understandable rationale that the consequences of the “double miss”
(by both automation and the human) are very severe. But often under-appreciated are
the undesirable consequences of the high false alarm rate, in terms of humans ignor-
ing true alarms. There remains some discrepancy in research findings about the extent to
which false alarms are more detrimental to overall system performance than alarm
misses (Dixon et al., 2007) or the contrary (Chen & Barnes, 2012). However, the con-
sequences to both reliance and compliance should be carefully considered by design-
ers and human factors practitioners before an alarm system sensitivity level is chosen.
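The threshold choice at issue here is the classic signal detection tradeoff. A minimal sketch with an equal-variance Gaussian model; the sensitivity value (d′ = 2.0) is illustrative, not drawn from the studies cited:

```python
from statistics import NormalDist

def alarm_rates(criterion, d_prime=2.0):
    """Miss and false alarm rates for a Gaussian equal-variance alerting model."""
    noise = NormalDist(0.0, 1.0)       # evidence distribution when no hazard is present
    signal = NormalDist(d_prime, 1.0)  # evidence distribution when a hazard is present
    false_alarm = 1 - noise.cdf(criterion)
    miss = signal.cdf(criterion)
    return miss, false_alarm

for c in (0.5, 1.0, 1.5):
    miss, fa = alarm_rates(c)
    print(f"criterion {c}: miss = {miss:.3f}, false alarm = {fa:.3f}")
```

Lowering the criterion drives the miss rate down only by driving the false alarm rate up, which is exactly the design dilemma described above.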
Independent of the extent to which an alert system is miss-prone or false alarm-
prone, the consequences of imperfect automation alerting systems are a loss of trust
in, and therefore dependence upon, the system, even when the alarm system is fairly
(but not perfectly) reliable. The question then is how low such reliability can fall
before the benefits of the alerting system are abolished. Insight into this question can be gleaned
from two different sources. First, the results of meta-analyses by Wickens and Dixon
(2007), and Rein, Masalonis, Messina, and Willems (2013) suggest that the mini-
mum reliability level of a system may be around 75%–80%. Above this level, such
imperfection can still support performance better than the unaided human, and par-
ticularly under conditions of concurrent task load. Furthermore, this benefit appears
to be observed in automation at later stages as well (Rovira, Pak, & McLaughlin,
2017; Trapsilawati, Wickens, Qu, & Chen, 2016). Below that level, the unwarranted
dependence upon automation may actually produce worse detection performance
than would be the case for the unaided human, not unlike grabbing onto a "concrete
life preserver” in the water (Dixon & Wickens, 2006).
Second, as we have noted, the first failure experienced by an individual in his/her
experience can be particularly problematic because of complacency that may have
developed following experience with, up to that point in time, perfect detection
(Molloy & Parasuraman, 1996; Sebok & Wickens, 2017; Yeh et al., 2003). This may
be described as the FFE. As a consequence, it may be desirable to “get rid of this
FFE” prior to the operator’s first operational experience with the alerting system, by
allowing them to experience failures during training or introduction to the system
(Manzey, Reichenbach, & Onnasch, 2012; Sauer et al., 2016)—an automation failure
inoculation, so to speak.
21.4.2.2 Attention Cueing
Alerting systems inform the operator that something is wrong. Automation can (and
ideally should) go beyond this to inform the operator what is wrong, and/or where
the dangerous condition is located. The “what” is embodied in Stage 2 diagnostic
automation discussed later, but the “where” is embodied in attentional cueing sys-
tems, closely related to, but more advanced than alerting systems. Research by Yeh
and her colleagues, primarily in the military domain directing a soldier’s attention to
the potential location of an enemy (e.g., highlighting locations or features on a map),
has revealed that erroneous cueing systems (i.e., that direct attention to the wrong
location) too can have serious negative consequences (Yeh & Wickens, 2001; Yeh
et al., 2003; Yeh, Wickens, & Seagull, 1999). As with alerting systems, such automa-
tion errors are particularly problematic upon the first failure (Yeh et al., 2003).
The last of these is similar to the likelihood alert that signals the confidence level of an
alert system that a dangerous condition exists (Wiczorek & Manzey, 2014). Furthermore,
in the case of textual explanations of automation functioning and reasoning, these may
be offered either online, at the time a particular automation decision or diagnosis is
reached, or off-line prior to the use of automation, as a form of training and instruction.
One important limitation of online automation transparency is that it may act as a
source of distraction or extra perceptual or cognitive workload that could offset the
very benefits that automation is intended to provide, particularly benefits to perfor-
mance of the automation-supported task.
21.5.3 Training
The last manifestation of automation transparency described above, off-line expla-
nation of reasoning, can be thought of as a form of automation training. Like trans-
parency in general, training has been found to be successful in buffering some of
the negative effects of automation failure response, and can be offered in different
21.6 CONCLUSIONS
Decades of research and accident analysis have revealed that the overall benefits
of automation can sometimes be offset by its costs, which are often associated
with imperfect reliability, leading to OOTLUF. In safety-critical environments,
designers and regulators should seek solutions such as adaptive or adaptable automa-
tion and transparency to mitigate the consequences of automation errors for HAI per-
formance. To address these consequences as fully as possible, lessons learned from
other domains can assist the incorporation of safe automation into the automobile;
but in transferring knowledge and techniques between domains, the designer must
be cognizant of the many differences between the highway driving domain and those
such as aviation and process control, where the solutions described in this chapter
have been identified and evaluated.
REFERENCES
Bahner, J. E., Hüper, A. D., & Manzey, D. (2008). Misuse of automated decision aids:
Complacency, automation bias and the impact of training experience. International
Journal of Human-Computer Studies, 66(9), 688–699.
Billings, C. (1997). Aviation Automation: The Search for a Human-Centered Approach.
Englewood Cliffs, NJ: Erlbaum.
Bliss, J. (2003). Investigation of alarm-related accidents and incidents in aviation. International
Journal of Aviation Psychology, 13, 249–268.
Burns, C. M., Skraaning, G., Jamieson, G. A., Lau, N., Kwok, J., Welch, R., & Andresen, G.
(2008). Evaluation of ecological interface design for nuclear process control: Situation
awareness effects. Human Factors, 50, 663–679.
Casner, S. M., Geven, R. W., Recker, M. P., & Schooler, J. W. (2014). The retention of manual
flying skills in the automated cockpit. Human Factors, 56(8), 1506–1516.
Chen, J. Y. & Barnes, M. J. (2012). Supervisory control of multiple robots: Effects of imper-
fect automation and individual differences. Human Factors, 54(2), 157–174.
Chen, J. Y., Lakhmani, S., Stowers, K., Selkowitz, A., Wright, J., & Barnes, M. (2018).
Situation awareness based agent transparency and human-autonomy teaming effective-
ness. Theoretical Issues in Ergonomics Science, 19, 259–282.
Chen, S., Visser, T., Huf, S., & Loft, S. (2017). Optimizing the balance between task automa-
tion and human manual control in a simulated submarine track management. Journal
of Experimental Psychology: Applied, 23, 240–262.
Christensen, J. C. & Estepp, J. R. (2013). Coadaptive aiding and automation enhance operator
performance. Human Factors, 55(5), 965–975.
Commercial Air Safety Team. (2014). Airplane State Awareness. JSAT Final Report.
Cummings, M. L. & Guerlain, S. (2007). Developing operator capacity estimates for supervi-
sory control of autonomous vehicles. Human Factors, 49, 1–15.
Lee, J. D. & Moray, N. (1992). Trust, control strategies and allocation of function in human-
machine systems. Ergonomics, 35, 1243–1270.
Li, H., Wickens, C. D., Sarter, N., & Sebok, A. (2014). Stages and levels of automation in sup-
port of space teleoperations. Human Factors, 56(6), 1050–1061.
Manzey, D., Reichenbach, J., & Onnasch, L. (2012). Human performance consequences of
automated decision aids: The impact of degree of automation and system experience.
Journal of Cognitive Engineering and Decision Making, 6, 1–31.
Martensson, L. (1995). The airplane crash at Gottrora: Experiences of the cockpit crew.
International Journal of Aviation Psychology, 5, 305–326.
Mayo, M., Kowalczyk, N., Liston, B., Sanders, E., White, S., & Patterson, E. (2015). Comparing
the effectiveness of alerts and dynamically annotated visualizations (DAVs) in improv-
ing clinical decision making. Human Factors, 57, 1002–1014.
Mercado, J. E., Rupp, M. A., Chen, J. Y., Barnes, M. J., Barber, D., & Procci, K. (2016).
Intelligent agent transparency in human–agent teaming for Multi-UxV manage-
ment. Human Factors, 58(3), 401–415.
Merritt, S., Lee, D., Unnerstall, J., & Huber, K. (2015). Are well calibrated users effective
users? Association between calibration of trust and performance on an automated aid-
ing task. Human Factors, 57, 34–47.
Metzger, U. & Parasuraman, R. (2005). Automation in future air traffic management: Effects
of decision aid reliability on controller performance and mental workload. Human
Factors, 47, 35–49.
Meyer, J. (2001). Effects of warning validity and proximity on responses to warnings. Human
Factors, 43, 563–572.
Meyer, J. (2004). Conceptual issues in the study of dynamic hazard warnings. Human
Factors, 46, 196–204.
Meyer, J. & Lee, J. D. (2013). Trust, reliance, and compliance. In J. D. Lee & A. Kirlik (Eds.),
The Oxford Handbook of Cognitive Engineering (pp. 109–124). New York: Oxford
University Press.
Molloy, R. & Parasuraman, R. (1996). Monitoring an automated system for a single failure:
Vigilance and task complexity effects. Human Factors, 38, 311–322.
Moray, N. & Inagaki, T. (2000). Attention and complacency. Theoretical Issues in Ergonomics
Science, 1, 354–365.
Morrow, D., North, R., & Wickens, C. D. (2006). Reducing and mitigating human error in
medicine. Reviews of Human Factors and Ergonomics, 1, 254–296.
Mosier, K. L., Skitka, L. J., Heers, S., & Burdick, M. (1998). Automation bias: Decision-
making and performance in high-tech cockpits. International Journal of Aviation
Psychology, 8, 47–63.
Mumaw, R. J. (2018). Addressing Mode Confusion Using an Interpreter Display (Contractor
Technical Report). Moffett Field, CA: San Jose State University Research Foundation.
doi:10.13140/RG.2.2.27980.92801
Onnasch, L., Wickens, C. D., Li, H., & Manzey, D. (2014). Human performance consequences
of stages and levels of automation: An integrated meta-analysis. Human Factors, 56(3),
476–488.
Parasuraman, R. & Manzey, D. (2010). Complacency and bias in human use of automation:
An attentional integration. Human Factors, 52, 381–410.
Parasuraman, R., Mouloua, M., & Hilburn, B. (1999). Adaptive aiding and adaptive task
allocation enhance human-machine interaction. In M. W. Scerbo & M. Mouloua (Eds.),
Automation Technology and Human Performance: Current Research and Trends
(pp. 119–123). Mahwah, NJ: Lawrence Erlbaum.
Parasuraman, R. & Riley, V. (1997). Humans and automation: Use, misuse, disuse,
abuse. Human Factors, 39(2), 230–253.
Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and lev-
els of human interaction with automation. IEEE Transactions on Systems, Man, &
Cybernetics: Part A: Systems and Humans, 30(3), 286–297.
Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2008). Situation awareness, mental
workload, and trust in automation: Viable, empirically supported cognitive engineer-
ing constructs. Journal of Cognitive Engineering and Decision Making, 2(2), 140–160.
Rein, J. R., Masalonis, A. J., Messina, J., & Willems, B. (2013). Meta-analysis of the effect of
imperfect alert automation on system performance. Proceedings of the Human Factors
and Ergonomics Society Annual Meeting (Vol. 57, No. 1, pp. 280–284). Los Angeles,
CA: SAGE Publications.
Rovira, E., Pak, R., & McLaughlin, A. (2017). Effects of individual differences in working
memory on performance and trust with various degrees of automation. Theoretical
Issues in Ergonomics Science, 18(6), 573–591.
SAE. (2016). Taxonomy and definitions for terms related to driving automation systems for on
road motor vehicles (SAE Standard J3016 201609). Warrendale, PA: SAE International.
Sarter, N. B. (2008). Investigating mode errors on automated flight decks: Illustrating the
problem-driven, cumulative, and interdisciplinary nature of human factors research.
Human Factors, 50, 506–510.
Sauer, J., Chavaillaz, A., & Wastell, D. (2016). Experience of automation failures in training:
Effects on trust, automation bias, complacency and performance. Ergonomics, 59(6),
767–780.
Sauer, J., Chavaillaz, A., & Wastell, D. (2017). On the effectiveness of performance-based
adaptive automation. Theoretical Issues in Ergonomics Science, 18(3), 279–297.
Sauer, J., Kao, C., & Wastell, D. (2012). A comparison of adaptive and adaptable automation
under different levels of environmental stress. Ergonomics, 55, 840–853.
Sauer, J. & Ruttinger, B. (2007). Automation and decision support in interactive computer
products. Ergonomics, 50, 902–909.
Seagull, F. J. & Sanderson, P. M. (2001). Anesthesiology alarms in context: An observational
study. Human Factors, 43, 66–78.
Sebok, A. & Wickens, C. D., (2017). Implementing lumberjacks and black swans into model-
based tools to support human-automation interaction. Human Factors, 59, 189–202.
Sebok, A., Wickens, C. D., Sarter, N., Quesada, S., Socash, C., & Anthony, B. (2012). The
automation design advisor tool (ADAT): Development and validation of a model-based
tool to support flight deck automation design for NextGen operations. Human Factors
and Ergonomics in Manufacturing and Service Industries, 22, 378–394.
Seppelt, B. D. & Lee, J. D. (2007). Making adaptive cruise control (ACC) limits visible.
International Journal of Human-Computer Studies, 65(3), 192–205.
Sheridan, T. B., & Verplank, W. L. (1978). Human and Computer Control of Undersea
Teleoperators. (Technical Report, Man-Machine Systems Laboratory, Department of
Mechanical Engineering). Cambridge, MA: MIT Press.
Slamecka, N. J. & Graf, P. (1978). The generation effect: Delineation of a phenomenon. Journal
of Experimental Psychology: Human Learning, Memory, and Cognition, 4, 592–604.
Strobhar, D. (2012). Human Factors in Process Plant Operation. New York: Monument Press.
Trapsilawati, F., Wickens, C. D., Chen, C.-H., & Qu, X. (2017). Transparency and conflict res-
olution automation reliability in Air Traffic Control. In P. Tsang, M. Vidulich & J. Flach
(Eds.), Proceedings of the 2017 International Symposium on Aviation Psychology
(pp. 419–424). Dayton, OH: ISAP.
Trapsilawati, F., Wickens, C. D., Qu, X., & Chen, C.-H. (2016). Benefits of imperfect conflict
resolution advisory aids for future air traffic control. Human Factors, 58, 1007–1019.
Wickens, C. D. (2017). Stages and levels of automation: 20 years after. Journal of Cognitive
Engineering and Decision Making, 12(1), 35–41.
Wickens, C. D., Clegg, B. A., Vieane, A. Z., & Sebok, A. L. (2015a). Complacency and auto-
mation bias in the use of imperfect automation. Human Factors, 57(5), 728–739.
Wickens, C. D. & Dixon, S. R. (2007). The benefits of imperfect diagnostic automation: A
synthesis of the literature. Theoretical Issues in Ergonomics Science, 8(3), 201–212.
Wickens, C. D., Mavor, A., Parasuraman, R., & McGee, J. (1998). The Future of Air Traffic
Control: Human Operators and Automation. Washington, DC: National Academy
Press.
Wickens, C. D. & McCarley, J. (2017). Commonsense statistics in aviation safety research. In
P. Tsang, M. Vidulich, & J. Flach (Eds.), Advances in Aviation Psychology: Volume 2
(pp. 98–110). Dorset, UK: Dorset Press.
Wickens, C. D., Sebok, A., Li, H., Gacy, A., & Sarter, N. (2015b). Using modeling and simula-
tion to predict operator performance and automation-induced complacency with robotic
automation: A case study and empirical validation. Human Factors, 57, 959–975.
Wickens, C. D., Sebok, A., Walters, B., & McCormick, P. (2017). Alert Design Considerations
(FAA Technical Report). Boulder, CO: Alion Science & Technology.
Wiener, E. L. (1988). Cockpit automation. In E. L. Wiener & D. C. Nagel (Eds.), Human
Factors in Aviation (pp. 433–461). San Diego, CA: Academic Press.
Wiener, E. L. & Curry, R. E. (1980). Flight deck automation: Promises and problems.
Ergonomics, 23, 995–1012.
Wiczorek, R. & Manzey, D. (2014). Supporting attention allocation in multitask environ-
ments: Effects of likelihood alarm systems on trust, behavior, and performance. Human
Factors, 56, 1209–1221.
Yeh, M., Merlo, J. L., Wickens, C. D., & Brandenburg, D. L. (2003). Head up versus head
down: The costs of imprecision, unreliability, and visual clutter on cue effectiveness for
display signaling. Human Factors, 45, 390–407.
Yeh, M. & Wickens, C. D. (2001). Attentional filtering in the design of electronic map dis-
plays: A comparison of color coding, intensity coding, and decluttering techniques.
Human Factors, 43, 543–562.
Yeh, M., Wickens, C. D., & Seagull, F. J. (1999). Target cuing in visual search: The effects of
conformality and display location on the allocation of visual attention. Human Factors,
41, 524–542.
22 HF Considerations
When Testing and
Evaluating ACIVs
Sheldon Russell and Kevin Grove
Virginia Tech Transportation Institute Center
for Automated Vehicle Systems
CONTENTS
Key Points .............................................................................................................. 474
22.1 Introduction .................................................................................................. 474
22.1.1 Testing Approach: Avoiding Significant But Not Meaningful ......... 477
22.1.2 Driving Automation Characterization .............................................. 477
22.1.2.1 Automated Features ........................................................... 478
22.1.2.2 Request to Intervene .......................................................... 479
22.1.2.3 Other Alerts ....................................................................... 479
22.1.2.4 Connected Vehicle Capabilities ......................................... 479
22.1.3 Participant Training .......................................................................... 480
22.1.4 Training and Improper Use .............................................................. 481
22.2 Commercial Vehicle Testing......................................................................... 481
22.2.1 Drayage ............................................................................................. 482
22.2.2 Platooning ......................................................................................... 482
22.3 Testing Early in Development: Data Analysis and Driving Simulation ....... 483
22.3.1 Naturalistic Driving Data for Scenario Development ...................... 484
22.3.2 Driving Simulator Testing ................................................................ 486
22.4 Mid-Late Testing: On-Road Experiments .................................................... 487
22.4.1 Scenario Selection ............................................................................ 487
22.4.2 WO Approaches ................................................................................ 488
22.5 Late Stage Testing: NDSs ............................................................................. 488
22.5.1 Participant Selection.......................................................................... 490
22.5.2 Data Sampling & Reduction ............................................................. 490
22.5.2.1 Data Sampling.................................................................... 490
22.5.2.2 Data Reduction .................................................................. 491
22.6 Conclusion .................................................................................................... 492
Acknowledgments.................................................................................................. 493
References............................................................................................................... 494
KEY POINTS
• Accurately characterizing the automated driving system(s) present in a
platform is critical for testing, allowing participant training materials to be
developed that accurately inform participants about system function, and
aids in the selection of testing scenarios throughout the development process.
• Testing of commercial vehicles involves unique operational domains as
well as specialized automation such as platooning.
• Methods for testing in early development are expected to be iterative, and to
build a foundation for later stages of prototype testing.
• Analysis of existing naturalistic driving data will allow models of auto-
mated features to undergo bench testing, and will also aid in the selection
of testing scenarios.
• Simulator testing provides a method of iterative testing of early feature
designs with a large degree of experimental control, but reduced external
validity.
• Mid-development testing approaches should include on-road testing, using
a prototype or Wizard of Oz approach to increase external validity of test-
ing, at the expense of iterative testing.
• Late stage testing (including post-development) can include naturalistic
driving studies in which large datasets of drivers actively using automation
are collected.
22.1 INTRODUCTION
The purpose of this chapter is to provide an overview of feature development
and human subjects testing for various aspects of automated driving systems.
Considerations for driving automation when testing heavy commercial vehicles (e.g.,
tractor trailers, buses) are also included. Although they can exist as stand-alone
systems, connected vehicle features are not covered separately in this chapter; for
the purposes of testing they are described here as a feature of an automated driving
system. Other Level 1 features (e.g., automated emergency braking, blind spot warn-
ing, etc.) may also be part of the driving system, but are referred to generally here as
Advanced Driver Assistance Systems (ADAS).
The terminology used in this chapter is intended to be generally consistent with
SAE J3016 in referring to the driving automation system and/or features of said
system (rather than a vehicle) (SAE International, 2016). However, some distinctions
in classification are noted. For the purposes of this chapter driving automation is
considered to be any sustained automation of both lateral and longitudinal func-
tions (i.e., Level 2 and above). Furthermore, SAE J3016 defines a specific type of
alert unique to automated driving systems, a Request to Intervene (RTI). An RTI
is an alert or notification from the automation system to the driver that an interven-
tion is needed. Per SAE J3016, RTIs are defined in the context of Level 3 (or above)
systems; however, one could consider “hands-on wheel type alerts” designed to keep
the driver engaged in the driving task as RTIs as well (insofar as an intervention is
requested from the driver; Russell et al., 2018).
Previous chapters have covered issues that are fundamental to the human driver,
driving automation, driving in general, as well as potential problems associated with
driving automation and connected vehicle systems. Potential solutions to these prob-
lems such as driver training (this Handbook, Chapter 18), driver monitoring (this
Handbook, Chapter 11), and human–machine interface (HMI) designs have also been
presented (this Handbook, Chapters 15, 16). No matter how principled, any potential
solution is just that until it is tested and evaluated. This chapter will include high-level
overviews of steps critical to testing of driving automation systems. These steps include
system characterization, commercial vehicle considerations, and testing methodologies
(e.g., driving simulator, test track or live road experiments, and naturalistic driving).
SAE standard J3018 provides some guidance for testing at automation Levels 3
and above (SAE International, 2015a). The standard advocates a graduated approach
where the expertise required for testing scenarios decreases over the course of test-
ing, while the complexity of the testing scenario increases over the course of devel-
opment. The standard provides definitions for expert test drivers, experienced test
drivers, and novice test drivers. Expert test drivers are typically engineers who are
designing the automated features themselves and can interact with the systems at the
mechanical and software level; experienced drivers are trained on the systems but
are unable to interact at the software level; and finally novice drivers have received
only cursory training (if any). Testing locations and road scenarios should also be
graduated in complexity; potential testing variables listed in the standard include
• Location
• Test track
• Closed campus operations (e.g., military base, corporate or university
campus)
• Public roads
• Roadway Type
• Limited access freeway
• Highway (single or multi-lane)
• Arterial roads
• Residential streets
• Driveway, parking lot, or structure
• Traffic Environment
• Traffic density
• Vehicles
• Pedestrians
• Signage
• Irregular—construction, crash scenes, road detours, flooding
• Complex intersections, merges
• Regional variations in road design
• Traffic control devices (signals, signs, curbs, guardrails, etc.)
• Time of Day
• Lighting conditions (day vs. night)
• Seasonal
• Weather conditions
While providing some guidance, SAE J3018 does not provide any specific method-
ology for testing during phases. Furthermore, a system and associated features will
undoubtedly go through many different tests throughout their development cycle.
Assuming that a system is being developed from start to finish, the methodologies
described herein are intended to build upon one another across the development cycle
(e.g., data modeling in early development leads to scenarios for driving simulator
testing, production systems are then tested via naturalistic driving studies (NDSs)).
The scope of graduated testing is broad; it includes tests that may not consider the
driver (i.e., engineering evaluations). While these tests are critical to the develop-
ment cycle, the scope of this chapter is focused toward human subjects testing. It
is assumed that engineering evaluations of the systems and components have been
completed, and the features themselves are operable.
Testing is not limited to systems early in the development cycle. Testing of post-
development cycle (i.e., commercially available) driving automation will always be
necessary. Post-development testing can be categorized into two primary focuses:
post-release monitoring and testing of broader safety benefits. Post-release monitoring
refers to maintaining system reliability and performance once features are out in the
world, with a focus on “black box” or vehicle data monitoring by an original equipment
manufacturer (OEM) or other researcher. Analysis of this data may lead to over-the-air
software updates (e.g., Tesla autopilot software changes) or otherwise inform future
system designs. Furthermore, automated systems that have been deployed on public
roads may allow for testing safety benefits within the larger transportation system, for
example, crash rate comparisons between vehicles equipped with driving automation
and non-equipped vehicles, which may then lead to rulemaking and/or policy con-
siderations. Although distinct, these two approaches are complementary. Post-release
monitoring and safety benefit testing results may lead to insights into the overall capa-
bilities and limitations at each level of automation and understanding limitations of
current platforms should inform the design of future iterations of driving automation.
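As a sketch of how a post-deployment safety-benefit comparison might be framed, the following computes crash rates per million vehicle-miles and a rate ratio with an approximate confidence interval; all counts and exposures are invented for illustration, not taken from any study cited here.

```python
import math

# Hypothetical sketch of a safety-benefit comparison: crash rates per million
# vehicle-miles traveled (VMT) with the automation feature active vs. baseline
# (unequipped or feature-inactive) driving. Counts and exposures are invented.

crashes_auto, vmt_auto = 12, 8.4e6    # feature active (hypothetical)
crashes_base, vmt_base = 31, 10.2e6   # baseline driving (hypothetical)

rate_auto = crashes_auto / (vmt_auto / 1e6)
rate_base = crashes_base / (vmt_base / 1e6)
rate_ratio = rate_auto / rate_base

# Wald 95% CI on the log rate ratio, treating crash counts as Poisson.
se = math.sqrt(1 / crashes_auto + 1 / crashes_base)
lo = rate_ratio * math.exp(-1.96 * se)
hi = rate_ratio * math.exp(1.96 * se)

print(f"rate (feature active): {rate_auto:.2f} per M VMT")
print(f"rate (baseline):       {rate_base:.2f} per M VMT")
print(f"rate ratio: {rate_ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

In practice such comparisons also require matching the exposure (same ODDs, road types, and driver populations), which is exactly why the characterization and sampling steps described later in this chapter matter.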
Figure 22.1 provides a summary of testing throughout the development cycle, begin-
ning with heuristic evaluation of the concept, usability testing of a prototype, user
studies of pre-production models, and in-service monitoring of the released product.
FIGURE 22.1 Overview of the development cycle. (From Lee, Wickens, Liu, & Boyle, 2017.)
methods may allow for a sequence of activities that “defeat” the monitoring system
or provide an avenue for improper use of the automation (e.g., steering torque detec-
tion vs. driver gaze detection; see also, this Handbook, Chapter 11).
Although a critical step, there is no one right way to go about characterizing the
driving system; the approach to characterization will vary based on the overall intent
of the tests. Essentially, individuals evaluating a system should let their research
questions guide the level and depth of characterization needed. The approach laid out
in this chapter should be considered only as a starting point. Guidelines have been
put forward for characterizing interface designs for driving automation (Campbell
et al., 2018), which were based on the output from early studies of Level 2 and 3
systems (Blanco et al., 2015).
22.1.2.1 Automated Features
First and foremost, in terms of characterization, is the type of driving automation
features that are present in a particular platform (also see this Handbook, Chapter 2).
As described in SAE J3016, a vehicle may have multiple features that operate at dif-
ferent levels of automation in different combinations of activation or different opera-
tional design domains (ODDs). There are any number of other items that may be of
interest to a particular test or research question; a short example characterization is
included in Table 22.1. The type of feature, speed ranges of activation, methods of
activation, and whether or not the feature can activate alone are all important details
for characterization. Although included in the table as Level 3, low-speed traffic jam
assist features may or may not be classified as Level 3 by the manufacturer, which
could even vary based on location.
For commercially available systems, a great deal of information may be contained
in manufacturer sources (e.g., owner’s manuals, manufacturer’s website); this may
include the specified level of automation for a system. As part of characterization,
experimenters should operate systems and verify that the published specifications
for the system are accurate and whether there are affordances from the system, such as
extended periods of hands-off driving or feature activation on improper road types,
which are not within the intended use but are nonetheless available to the driver.
This type of characterization helps to determine the most likely avenues of misuse or
other improper use cases that may be observed in naturalistic settings or otherwise
require testing.

TABLE 22.1
Example Characterization of Features for Level 1, Level 2, and Limited Level 3 Capability

Feature                 | Lateral/Longitudinal                        | Speed        | Activation Method                                          | Can Be Activated Alone | SAE Level | Alerts
Adaptive cruise control | Continuous longitudinal support             | --           | Steering wheel button                                      | Yes                    | 1         | FCW
Lane centering          | Continuous lateral support                  | Above 40 mph | Automatic when speed is crossed; system setting to disable | No (requires ACC)      | 2         | RTI
Traffic jam pilot       | Continuous lateral and longitudinal support | Below 35 mph | Steering wheel button; HMI notifies when available         | No                     | 3         | RTI
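One way to make such a characterization usable throughout test planning is to record it in machine-readable form. The sketch below is our own illustration (the field names are not an SAE schema); it mirrors the hypothetical features in Table 22.1 and shows how a planned test speed can be checked against each feature's envelope.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Machine-readable version of a Table 22.1-style characterization.
# Field names are our own invention for illustration, not an SAE schema.

@dataclass
class FeatureCharacterization:
    name: str
    control: str                      # "lateral", "longitudinal", or "both"
    sae_level: int                    # per SAE J3016
    activation_method: str
    can_activate_alone: bool
    min_speed_mph: Optional[float] = None
    max_speed_mph: Optional[float] = None
    alerts: List[str] = field(default_factory=list)

    def active_in(self, speed_mph: float) -> bool:
        """Whether the feature's speed envelope covers a given test speed."""
        above_min = self.min_speed_mph is None or speed_mph >= self.min_speed_mph
        below_max = self.max_speed_mph is None or speed_mph <= self.max_speed_mph
        return above_min and below_max

features = [
    FeatureCharacterization("Adaptive cruise control", "longitudinal", 1,
                            "Steering wheel button", True, alerts=["FCW"]),
    FeatureCharacterization("Lane centering", "lateral", 2,
                            "Automatic above threshold", False,
                            min_speed_mph=40, alerts=["RTI"]),
    FeatureCharacterization("Traffic jam pilot", "both", 3,
                            "Steering wheel button", False,
                            max_speed_mph=35, alerts=["RTI"]),
]

# Which features could a 30 mph test scenario exercise?
print([f.name for f in features if f.active_in(30.0)])
```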
22.1.2.3 Other Alerts
In some cases, alerts or notifications and driver responses to these alerts will be the
primary focus of study. Alerts for novel or new applications, such as connected vehi-
cle alerts should be characterized insofar as the alert triggering conditions, modality,
etc. of the alert are known to the researchers. Alert specifications not only allow for
interpretation of the data and scenario development but also provide information that
may be explained to participants as part of training.
Alerts may not be of primary interest to a testing scenario. However, the nature and
presence of alerts should be noted (e.g., FCW, Blind Spot Warnings, Lane Departure
Warnings). Again, the triggering conditions and operational ranges, should be under-
stood by the research team in order to provide information to participants as neces-
sary. Particularly for a naturalistic research study, alerts are likely to be encountered
by a participant driver.
TABLE 22.2
Example RTI Characterizations

RTI         | Source (Alert Trigger) | Number of Stages | Stage Duration (seconds) | Total Duration (seconds) | Consequence
Level 2 RTI | Steering wheel torque  | 2                | 15                       | 30                       | Lane centering is disabled for duration of trip
Level 3 RTI | External conditions    | 3                | 15                       | 45                       | Vehicle slows to a stop in lane
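A characterization like Table 22.2 translates directly into an alert timeline for scenario scripting. The sketch below reproduces the hypothetical Level 3 RTI above; the staging logic is an assumption for illustration, not a description of any production system.

```python
# Sketch of turning a Table 22.2-style RTI characterization into an alert
# timeline for scenario scripting. The staging logic is assumed for
# illustration; durations mirror the hypothetical Level 3 RTI above.

def rti_timeline(n_stages, stage_duration_s, consequence):
    events = [(i * stage_duration_s, f"stage {i + 1} alert onset")
              for i in range(n_stages)]
    events.append((n_stages * stage_duration_s, consequence))
    return events

for t, event in rti_timeline(3, 15, "vehicle slows to a stop in lane"):
    print(f"t = {t:3d} s: {event}")
# t =   0 s: stage 1 alert onset ... t =  45 s: vehicle slows to a stop in lane
```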
questions and experience the features for the first time is a second step that should
help reinforce instructions.
Training considerations may not be limited to the use and activation of automation
systems; it may be necessary to provide training for participants on any other
unfamiliar or novel aspect of a test, such as a specific non-driving task. Even if the
tasks are not novel, the results could be biased if the application of the task is confusing
or misunderstood by the participant. Finally, if the goal of the test is to develop a
training system or to compare different training methods, tests designed to compare
the methods will be required, such as a written or practical evaluation to determine
what information participants understood and/or retained from the training material.
issues with driver confusion or frustration towards FCW and lane departure alerts
due to variations between brands of systems, generations of system, and integration
approaches (Technology & Maintenance Council, 2018). The best practices from
these efforts may inform designs for higher levels of automation for commercial vehi-
cle ADAS or commercial AVS (Technology & Maintenance Council, 2019).
In addition to issues with driver acceptance, commercial vehicles also introduce
additional challenges for testing and development. Testing of commercial vehicles
presents specific challenges in terms of ODDs as well as specialized automation
that may be different than driving automation for passenger vehicles. For example,
trucking operations that swap trailers may be limited to placing sensors only on
the truck/tractor itself, creating visibility challenges unique to commercial vehicles.
Commercial vehicles are also subject to roadside inspections, and automated sys-
tems may need to interact with state personnel who perform these inspections or
their infrastructure. Commercial vehicles may also operate on private properties
or non-public roadways as part of their operation where road markings are lim-
ited. Considering these issues during testing will be critical to uncovering the edge
cases for driving automation for heavy vehicle applications. Additional examples
of these challenges specific to heavy commercial trucking are described in the
following sections.
22.2.1 Drayage
One example of a non-public roadway usage for commercial vehicles is drayage.
Drayage trucks typically work to transfer cargo between mixed modes of transporta-
tion, such as offloading cargo from a ship to then be transported by heavy truck or
train. These operations are often conducted in a space with controlled access, such
as a port of entry or rail yard facility. These characteristics make the domain attrac-
tive for heavy vehicle automation; the operation space is confined, there is little to
no mixed traffic, there are relatively fixed transit routes, and there are lower speeds
of operation (see Smith, Harder, Huynh, Hutson, & Harrison, 2012, for an overview
of drayage operations and facilities). Drayage may also offer unique challenges to
automated commercial vehicles as roadway infrastructure could differ significantly
from public roads. Additionally, while most driving may take place in areas closed to
public traffic, there may be some driving on public roads in order to get the cargo to
a nearby destination for further transit. These transitions between private and public
space may provide test cases of transfer of control between automated driving and
manual driving.
22.2.2 Platooning
Another application of automation that may be unique to commercial vehicles is
platooning. Platooning involves a series of vehicles following each other at relatively
short headways for extended periods of time. Platooning is typically accomplished
by V2V communication in order to synchronize throttle or steering and assist a
driver, but future implementations could also include higher levels of automation
which could operate without a driver present. Platooning offers the opportunity for
the following vehicles to reduce airflow drag and improve fuel economy at higher
speeds, which presents an economic opportunity in commercial vehicle operations
that drive long distances at highway speeds. However, the following distances neces-
sary to achieve optimal gains are expected to be relatively short, as low as 10 feet
(Tsugawa, Jeschke, & Shladover, 2016) and as high as 50 feet (Lammert, Duran,
Diez, & Burton, 2014).
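To see why such gaps stress human response capabilities, the short calculation below converts these following distances into time headways at highway speed; the 65 mph speed is an arbitrary illustrative choice.

```python
# Converting platooning following distances into time headways at highway
# speed. The 10- and 50-foot gaps come from the sources cited above; the
# 65 mph speed is an arbitrary illustrative choice.

FT_PER_MILE = 5280.0

def time_headway_s(gap_ft, speed_mph):
    speed_fps = speed_mph * FT_PER_MILE / 3600.0  # mph -> feet per second
    return gap_ft / speed_fps

for gap in (10, 30, 50):
    print(f"{gap:2d} ft gap at 65 mph -> {time_headway_s(gap, 65):.2f} s headway")
# Roughly 0.10 to 0.52 s, far below the 1 to 2+ s typically needed for an
# unassisted human braking response.
```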
The short headways necessary for platooning reduce the time in which the fol-
lowing vehicles can react and create blind spots in front of the following vehicles in
which sensors cannot see beyond the leading vehicle. This can be overcome with
communication between leading vehicles and following vehicles, but the lead vehicle
may need to consider whether there is a platoon following it in choosing how to
react to potential conflicts. Additionally, the points at which a human driver or the
automation needs to engage/disengage a platoon are another area of potential testing
unique to commercial vehicles. Depending on how platoons are designed to form
or dissolve, they may involve a human in control at what would typically be
considered an unsafe headway, or require the automation to transition over time from
an independent state at a longer headway to a synchronized state at a shorter headway
(or vice versa). These handover or transition points could lead to edge cases and test-
ing requirements that are unique to commercial vehicles.
A final concern for platooning testing is how the systems should react to vehi-
cles around a platoon. Light vehicles may attempt to cut between platooning trucks
(depending on the following headway required for a platoon), and platoons may
impact how surrounding vehicles attempt to enter, exit, or change lanes on a high-
way. Anticipating these behaviors around platoons and including them in testing
scenarios will be critical for ensuring safe deployment in the future.
Automated features or other ADAS systems can then be modeled (for example,
implementing a connected vehicle application that detects the lead vehicle slowing
in the model). These modeled features can then be implemented in a driving simula-
tor to test the features with a naïve driver; the driver response data can then be used
to further refine model parameters, leading to additional tests.
the human driver they intend to replace. A reasonable place to start is to use existing
data on typical drivers, namely naturalistic driving data, such as that found in the
SHRP 2 dataset (Dingus et al., 2014). Most approaches to naturalistic data have been
to study rates of safety-critical events (SCEs; crashes and near crashes) along with
the factors present during each event.1 The remaining data are used to select baseline
cases without crashes for computing odds ratios and other analyses. Crashes are rare
events overall, and the majority of the data are uneventful in the conventional sense.
However, this “leftover” data provide a multitude of driving scenarios that could
be informative for automated system development and testing. Essentially, system
function can be compared to data from actual drivers in the scenario; as noted pre-
viously these scenarios can then be modeled in a computer-based simulation and
system parameters tuned to desired performance specifications prior to testing with
a human driver. This is an emerging approach for driving automation system devel-
opment; most work is being conducted in the private realm, but the approach is
conceptually similar to hardware-in-the-loop simulation (see, e.g., Murray-Smith, 2012).
To what standard should a system be tested? Should a system be developed to
be similar to the mean following distance? What about following closer? Why not
choose one of the outliers for an added margin of safety? There are no right answers
to any of these questions, but researchers will need to make decisions about how the
system should be tested, and these decisions will have an impact on the outcome of
the test. There is a wide variety of driving styles, and driving style changes depending
on the surrounding traffic, weather and environmental factors, and so on. This makes
even what is on its face a simple question, such as how closely
should a longitudinal feature follow lead traffic, more challenging. At some point
in development, it has to be determined what type of driving the automated system
is going to achieve. Figure 22.3 shows boxplots calculated from following distances
observed during the 100-car NDS (Dingus et al., 2006). Data are shown for 10,000
trips, binned across the speed ranges on the x-axis. Based on following distance, trips
in the higher ranges could be classified as "conservative," with the shorter following
distances considered "sporty."

FIGURE 22.3 Boxplots of mean following distance data for 10,000 trips from the 100-car
NDS with range (Y axis) and speed (X axis) in metric and standard units.

1 Note that some controversy surrounds the use of near crashes to understand the factors that influence
crashes (Guo, Klauer, McGill, & Dingus, 2010; Knipling, 2015).
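A minimal sketch of how Figure 22.3-style summaries might be produced from time-series trip data follows; the column names ("trip_id", "speed_mph", "range_ft") are assumptions about a hypothetical data export, not the actual 100-car NDS schema.

```python
import pandas as pd

# Sketch of Figure 22.3-style summaries: per-trip mean following distance,
# binned by travel speed. Column names are hypothetical, not the NDS schema.

def binned_following_distance(df: pd.DataFrame) -> pd.DataFrame:
    bins = [0, 25, 35, 45, 55, 65, 100]          # speed bins in mph (assumed)
    df = df.assign(speed_bin=pd.cut(df["speed_mph"], bins))
    # Mean range per trip within each speed bin, then the distribution of
    # those per-trip means across trips (what each boxplot summarizes).
    per_trip = (df.groupby(["speed_bin", "trip_id"], observed=True)["range_ft"]
                  .mean())
    return per_trip.groupby("speed_bin", observed=True).describe()

# Hypothetical usage:
# df = pd.read_parquet("trips.parquet")
# print(binned_following_distance(df))
```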
Consider the example of a slowed vehicle reveal scenario (Figure 22.2) and
designing vehicle automation that can resolve the scenario safely. Using this type of
driving data, iterative models of different following distance envelopes can be used
to narrow down design considerations (e.g., following at a safe distance to resolve
a vehicle reveal). Again, analysis methods for large datasets are covered in further
detail in other chapters (this Handbook, Chapter 23). For the purpose of iterative
testing, upon narrowing the feature parameters, testing with human operators in a
driving simulator is likely appropriate.
22.4.2 WO Approaches
Although likely familiar to most readers, WO testing refers to testing a system that
operates as though it were automated but is not. WO “automation” is achieved through
mechanical intervention (e.g., secondary controls), using only pre-determined routes,
GPS guidance, or similar methods. This approach may be necessary for a variety of
reasons; it is possible that the researchers are interested in testing a general category
of automation (e.g., Level 2) rather than a specific platform or system and would
therefore avoid using a stock testing platform. Alternately, the state of development
or other testing limitations may be such that the system of interest is not ready for
deployment but may work on a test track with a pre-programmed path. Alternatively,
a lateral control feature may be classified as Level 2, capable of maintaining lane
position but requiring the driver to maintain steering input; adding a mechanical
steering system operated by a confederate driver in a rear seat may allow for testing
of Level 3 scenarios on live roads.
As a final note, WO testing is a flexible approach not limited to driver testing.
For example, the approach can be used to test interactions between Level 4 driving
automation and other road users. Rather than augmenting vehicle technology, a driver
may be hidden from the view of anyone outside the vehicle. The "ghost driver"
(Rothenbucher, Li, Sirkin, Mok, & Ju, 2015) method can give the appearance that a
vehicle is autonomous, allowing testing of general reactions from other road users
and pedestrians, such as responses to signal lights (Ford Motor Company, 2017).
Responses can be gathered via post-exposure interviews or video-based reduction
of observer response.
• 2017 Audi Q7 Premium Plus 3.0 TFSI Quattro with Driver Assistance
Package
• 2015 Infiniti Q50 3.7 AWD Premium with Technology, Navigation, and
Deluxe Touring Package
• 2016 Mercedes-Benz E350 Sedan with Premium Package, Driver Assistance
Package
• 2015 Tesla Model S P90D AWD with Autopilot Convenience (software
version 8.0)
• 2016 Volvo XC90 T6 AWD R with Design and Convenience Packages
Vehicles were instrumented with a data acquisition system including camera views
of the driver and forward roadway, and accelerometer and GPS data. Each partici-
pant drove the vehicles for four weeks. Participants received an introduction to the
driving automation, including a test drive before their participation period. A total of
216,585 miles were driven, with 70,384 miles driven with both lateral and longitudi-
nal features active. Fridman et al. (2019) recruited drivers who were already owners
of Level 2 capable systems (specifically Tesla drivers), with analyses reported for 21
vehicles and 323,384 total miles (112,427 miles driven with Level 2 active). Cameras
were used to record driver behavior and the forward roadway.
The intent here is not to provide a “how to” for the logistics and data collection
procedures for conducting an NDS, but some primary issues will be reviewed briefly.
The focus in this section is on the issues relating to the participant and data collec-
tion, data sampling, and data reduction for testing of automated driving systems.
Researchers will need to plan for what types of systems and participants will be
studied (e.g., owners of candidate systems or participants to which systems will be
loaned). Researchers will also need to decide on what instrumentation (e.g., aspects
of the data acquisition system: cameras, accelerometers, GPS, vehicle network infor-
mation, storage capacity) should be deployed. Likely dedicated staff will be needed
to monitor the data collection, to replace and/or re-align cameras, and to manage
data as data storage fills up during the data collection period. For a more detailed
review of methods for field operational tests and NDSs, see the overview published
by the FESTA Consortium (2017).
22.5.2.1 Data Sampling
Sampling strategies may require comparisons between levels of automation, par-
ticularly for driving automation that can operate at different levels (depending on
which features are activated). Baseline selection may be somewhat tricky, and ran-
dom sampling of “non-eventful” driving may lead to results that are not represen-
tative of the full range of capabilities a system may have. Aside from excluding
alerts or other situations of interest, a sampling plan should be based on information
from the characterization process, in order to select samples that include all levels of
automation for which a system is capable and appropriate ODDs for different levels
of automation. For comparisons between automated driving and non-automated (or
non-assisted) driving, samples can be taken from the same ODDs with and without
features activated. If the automated driving system has one or more features that
can be activated separately, samples should be taken from each "level" of acti-
vation. For example, naturalistic studies of vehicles equipped with both lateral and
longitudinal automated features sampled Level 0 (no features active), Level 1 (one
feature active), and Level 2 (both features active) during similar driving scenarios
(e.g., highway driving above 40 mph; Russell et al., 2018).
SCEs consisting of crashes and near crashes will certainly be of interest in most if
not all test cases. If not reported to the research team directly (by participant drivers
themselves or by law enforcement), SCEs can be detected from kinematic data and
subsequently confirmed by reviewing video data (see the data reduction section).
Table 22.3 shows the definitions of SCEs used in the SHRP 2 study (Antin et al., in press).

TABLE 22.3
SCE Descriptions as Used in SHRP 2

Severity Level          | Description
Most severe             | Any crash that results in any injury requiring medical attention, or one that includes an airbag deployment or requires vehicle towing
Police reportable crash | A crash that does not meet the requirements for a Level I crash, but does include sufficient property damage that warrants being reportable to the police
Minor crash             | A crash that does not meet the requirements for a Level II crash, but does result in minimal damage
Low risk crash          | Tire/curb strike
Near crash              | Any circumstance that requires a rapid evasive maneuver by the subject vehicle, or any other vehicle, pedestrian, cyclist, or animal, to avoid a crash
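A minimal sketch of kinematic triggering of candidate SCEs for later video confirmation follows; the deceleration threshold is a commonly used hard-braking criterion stated here as an assumption, not the SHRP 2 protocol value.

```python
import numpy as np

# Sketch of kinematic triggering of candidate SCEs for later video
# confirmation. The -0.65 g threshold is a commonly used hard-braking
# criterion, assumed here rather than taken from the SHRP 2 protocol.

def hard_braking_onsets(accel_x_g, hz=10, threshold_g=-0.65):
    """Return onset times (s) where longitudinal accel crosses the threshold."""
    below = np.asarray(accel_x_g) <= threshold_g
    prev = np.concatenate(([False], below[:-1]))
    onsets = np.flatnonzero(below & ~prev)   # first sample of each episode
    return onsets / hz

# Example: a 10 Hz trace with one hard-braking episode starting at t = 0.3 s.
trace = [0.0, -0.2, -0.5, -0.7, -0.8, -0.6, -0.1, 0.0]
print(hard_braking_onsets(trace))            # -> [0.3]
```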
As a final note, driver behaviors themselves do not automatically elevate a sample
to an SCE, even if the behavior is egregious (e.g., visibly intoxicated, sleeping). This
also applies to the state of automation in relation to driver behavior (e.g., texting
while driving with the automated feature).
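Returning to the sampling plan described at the start of this section, the following is a minimal sketch of level-stratified baseline sampling within a common ODD; the column names and thresholds are hypothetical labels from the characterization and reduction process, not an established schema.

```python
import pandas as pd

# Sketch of stratified baseline sampling: draw equal numbers of epochs from
# each automation state within a common ODD (here, highway driving above
# 40 mph, as in the Level 0/1/2 example above). Column names are hypothetical.

def sample_baselines(epochs: pd.DataFrame, n_per_level=50, seed=42):
    in_odd = epochs[(epochs["road_type"] == "highway")
                    & (epochs["speed_mph"] > 40)]
    return (in_odd.groupby("automation_level", group_keys=False)
                  .apply(lambda g: g.sample(min(n_per_level, len(g)),
                                            random_state=seed)))
```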
22.5.2.2 Data Reduction
Data reduction, sometimes referred to as data annotation, is a step prior to analy-
sis in which each sample epoch is reviewed and the relevant driver, vehicle, and
environmental factors are classified by a researcher following a specific protocol.
Typically, the time-synchronized vehicle data (e.g., automation state, driver inputs,
etc.), sensor data (accelerometer, GPS, etc.), and video data are reviewed frame by
frame as necessary (based on the recording rate of the video). Relevant driver factors
include presence of non-driving tasks, driver gaze patterns, hands-on-wheel behav-
iors, pedal behaviors (e.g., accelerator release, brake presses), and any visible
signs of impairment. Driver behaviors that occur with and without driving auto-
mation, including the presence of non-driving tasks will likely be of importance to
any naturalistic study.
Relevant vehicle and environmental factors classified during reduction may include:
• Roadway Type
• Limited access freeway
• Highway (single or multi-lane)
• Arterial roads
• Residential streets
• Driveway, parking lot, or structure
• Traffic Environment
• Level of service
• Vehicles present (e.g., leading and adjacent vehicles)
• Pedestrians
• Animals
• Relation to intersection
• Traffic control devices (signals, signs, curbs, guardrails, etc.)
• Time of Day & Lighting Conditions
• Day
• Night (unlit)
• Night (lit)
• Dusk
• Dawn
• Inclement weather present
• Rain
• Snow
• Fog
Specific operational definitions used for the analysis should be compiled into a "data
dictionary” for each study. The data dictionary includes operational definitions used
to reduce each scenario that can be referenced by any reductionist at any time and
can be used to train new reductionists. An example data dictionary can be consulted
to guide protocol development and descriptions of the vehicle, driver, and environ-
mental factors present in the scenario (Virginia Tech Transportation Institute, 2015).
Other sources for operational definitions and calculating driver performance metrics
include Green (2012) and SAE International (2015b).
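A data dictionary can itself be kept machine-readable so that reduction software rejects codes that fall outside the operational definitions; in the sketch below, the definitions and categories are invented for illustration rather than taken from the SHRP 2 dictionary.

```python
# Sketch of a machine-readable data-dictionary entry, so every reductionist
# applies the same operational definition. Definitions and categories are
# invented for illustration, not taken from the SHRP 2 dictionary.

DATA_DICTIONARY = {
    "hands_on_wheel": {
        "definition": "Number of hands in contact with the steering wheel, "
                      "coded at alert onset and every 1 s thereafter.",
        "values": {0: "no hands", 1: "one hand", 2: "both hands",
                   9: "not visible"},
        "source": "face/hands camera, frame-by-frame review",
    },
    "lighting": {
        "definition": "Ambient lighting at epoch midpoint.",
        "values": {1: "day", 2: "night (lit)", 3: "night (unlit)",
                   4: "dawn", 5: "dusk"},
        "source": "forward camera",
    },
}

def validate(variable, code):
    """Reject reduction codes that are not in the dictionary."""
    entry = DATA_DICTIONARY[variable]
    if code not in entry["values"]:
        raise ValueError(f"{variable}: invalid code {code}")
    return entry["values"][code]

print(validate("lighting", 3))  # -> "night (unlit)"
```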
22.6 CONCLUSION
We sit at a critical transition point for driving automation deployment and testing.
Testing of Level 4 and higher automated driving systems is currently underway by
multiple companies (e.g., Waymo, Cruise); however, the many engineering chal-
lenges of autonomous driving mean that timelines for deployment are still uncertain.
ACKNOWLEDGMENTS
This chapter was heavily informed by the authors conducting funded studies from
the National Highway Traffic and Safety Administration as well as other proprietary
sponsors. A special thanks to the editors and other reviewers for valuable feedback
on early drafts of this chapter.
REFERENCES
AAA Public Affairs. (2019). Three in Four Americans Remain Afraid of Fully Self-
Driving Vehicles. Retrieved May 12, 2019, from https://newsroom.aaa.com/tag/
autonomous-vehicles/
Antin, J. F., Lee, S., Perez, M. A., Dingus, T. A., Hankey, J. M., & Brach, A. (in press). Second
strategic highway research program naturalistic driving study methods. Safety Science,
119, 2–10.
Banks, V. A., Eriksson, A., O'Donoghue, J., & Stanton, N. A. (2018). Is partially automated
driving a bad idea? Observations from an on-road study. Applied Ergonomics, 68,
138–145.
Blanco, M., Atwood, J., Vazquez, H. M., Trimble, T. E., Fitchett, V. L., Radlbeck, J. …
Morgan, J. F. (2015). Human Factors Evaluation of Level 2 and Level 3 Automated
Driving Concepts (DOT HS 812 182). Washington, DC: National Highway Traffic
Safety Administration.
Brooks, J. O., Goodenough, R. R., Crisler, M. C., Klein, N. D., Alley, R. L., Koon, B. L.,
… Wills, R. F. (2010). Simulator sickness during driving simulation studies. Accident
Analysis & Prevention, 42(3), 788–796.
Campbell, J. L., Brown, J. L., Graving, J. S., Richard, C. M., Lichty, M. G., Bacon, L. P.,
… Sanquist, T. (2018). Human Factors Design Guidance for Level 2 and Level 3
Automated Driving Concepts (DOT HS 812 555). Washington, DC: National Highway
Traffic Safety Administration.
Dingus, T. A., Hankey, J. M., Antin, J. F., Lee, S. E., Eichelberger, L., Stulce, K., … Stowe,
L. (2014). Naturalistic Driving Study: Technical Coordination and Quality Control
(SHRP 2 Rep. S2-S06-RW-1). Washington, DC: National Academies.
Dingus, T. A., Klauer, S. G., Neale, V. L., Petersen, A., Lee, S. E., Sudweeks, J., & Knipling,
R. R. (2006). The 100-Car Naturalistic Driving Study, Phase II Results of the 100-
Car Field Experiment (DOT HS 810 593). Washington, DC: National Highway Traffic
Safety Administration.
Endsley, M. (2017). Autonomous driving systems: A preliminary naturalistic study of the
Tesla Model S. Journal of Cognitive Engineering and Decision Making, 11, 225–238.
FESTA Consortium. (2017). FESTA Handbook Version 7. Retrieved from https://fot-net.eu/
Documents/festa-handbook-version-7/
Ford Motor Company. (2017). Ford, Virginia Tech Go Undercover to Develop Signals That
Enable Autonomous Vehicles to Communicate with People. Retrieved from https://
media.ford.com/content/fordmedia/fna/us/en/news/2017/09/13/ford-virginia-tech-
autonomous-vehicle-human-testing.html
Fridman, L., Brown, D. E., Glazer, M., Angell, W., Dodd, S., Jenik, B., … Abraham, H.
(2017). MIT autonomous vehicle technology study: Large-scale deep learning based
analysis of driver behavior and interaction with automation. arXiv:1711.06976. doi:
10.1109/ACCESS.2019.2926040
Fridman, L., Brown, D., Kindelsberger, J., Angell, L., Mehler, B., & Reimer, B. (2019).
Human Side of Tesla Autopilot: Exploration of Functional Vigilance in Real-World
Human-Machine Collaboration. Cambridge, MA: MIT. Retrieved from https://hcai.
mit.edu/tesla-autopilot-human-side.pdf
Gold, C., Naujoks, F., Radlmayr, J., Bellem, H., & Jarosch, O. (2017). Testing scenarios for
human factors research in level 3 automated vehicles. International Conference on
Applied Human Factors and Ergonomics (pp. 551–559). Berlin: Springer.
Green, P. (2012). Standard definitions for driving measures and statistics: Overview and sta-
tus of recommended practice J2944. Proceedings of the 5th International Conference
on Automotive User Interfaces and Interactive Vehicular Applications (pp. 28–30).
Eindhoven, The Netherlands.
Grove, K., Atwood, J., Blanco, M., Krum, A., & Hanowski, R. (2017). Field study of
heavy vehicle crash avoidance system performance. SAE International Journal of
Transportation Safety, 5(1), 1–12.
Grove, K., Atwood, J., Hill, P., Fitch, G., DiFonzo, A., Marchese, M., & Blanco, M. (2015).
Commercial motor vehicle driver performance with adaptive cruise control in adverse
weather. Procedia Manufacturing, 3, 2777–2783.
Grove, K., Soccolich, S., Engstrom, J., & Hanowski, R. (2019). Driver visual behavior while
using adaptive cruise control on commercial vehicles. Transportation Research Part F:
Traffic Psychology and Behavior, 60, 343–352.
Guo, F., Klauer, S., McGill, M., & Dingus, T. (2010). Evaluating the Relationship Between
Near-Crashes and Crashes: Can Near Crashes Serve as a Surrogate Safety Metric
for Crashes? (DOT HS 811 382). Washington, DC: National Highway Traffic Safety
Administration.
Kalra, N. & Paddock, S. M. (2016). Driving to safety: How many miles of driving would it
take to demonstrate autonomous vehicle reliability? Transportation Research Part A:
Policy and Practice, 94, 182–193.
Kaptein, N. A., Theeuwes, J., & Van Der Horst, R. (1996). Driving simulator validity: Some
considerations. Transportation Research Record, 1550, 30–36.
Knipling, R. (2015). Naturalistic driving events: No harm, no foul, no validity. Proceedings of
the Eighth International Driving Symposium on Human Factors in Driver Assessment,
Training and Vehicle Design. Iowa City: University of Iowa.
Lammert, M., Duran, A., Diez, J., & Burton, K. (2014). Effect of platooning on fuel consump-
tion of Class 8 vehicles over a range of speeds, following distances, and mass. SAE
International Journal of Commercial Vehicles, 7(2), 626–639.
Lee, J. D., Wickens, C. D., Liu, Y., & Boyle, L. N. (2017). Designing for People: An
Introduction to Human Factors Engineering. Charleston, SC: CreateSpace.
Murray-Smith, D. (2012). Modelling and Simulation of Integrated Systems in Engineering.
Philadelphia, PA: Woodhead Publishing.
Poorsartep, M. & Stephens, T. (2015). Truck automation opportunities. In G. Meyer, &
S. Beiker (Eds.), Road Vehicle Automation 2. Lecture Notes in Mobility. Berlin:
Springer.
Ranney, T. (2011). Psychological fidelity: Perception of risk. In D. Fisher, M. Rizzo, J.
Caird, & J. Lee (Eds.), Handbook of Driving Simulation for Engineering, Medicine
and Psychology. Boca Raton, FL: CRC Press.
Rothenbucher, D., Li, J., Sirkin, D., Mok, B., & Ju, W. (2015). Ghost driver: A platform for
investigating interactions between pedestrians and driverless vehicles. Proceedings
of the 7th International Conference on Automotive User Interfaces and Interactive
Vehicular Applications. ACM, New York, NY, USA, pp. 44–49.
Russell, S. M., Atwood, J., & McLaughlin, S. (in press). Driver Expectations for System
Control Errors, Engagement, and Crash Avoidance During Level 2 Driving
Automation. Washington, DC: National Highway Traffic Safety Administration.
Russell, S. M., Blanco, M., Atwood, J., Schaudt, W. A., Fitchett, V. L., & Tidwell, S. (2018).
Naturalistic Study of Level 2 Driving Automation Functions (DOT HS 812 642).
Washington, DC: National Highway Traffic Safety Administration.
SAE International. (2015a). Guidelines for Safe On-Road Testing of SAE Level 3, 4, and
5 Prototype Automated Driving Systems. Warrendale, PA: Society for Automotive
Engineers.
SAE International. (2015b). Operational Definitions of Driving Performance Measures and
Statistics. Retrieved from https://saemobilus.sae.org/content/J2944_201506/
SAE International. (2016). Surface Vehicle Recommended Practice J3016: Taxonomy and
Definitions for Terms Related to Driving Automation Systems for On-Road Motor
Vehicles. Warrendale, PA: Society for Automotive Engineers.
Smith, D., Harder, F., Huynh, N., Hutson, N., & Harrison, R. (2012). Analysis of current and
emerging drayage practices. Transportation Research Record, 2273, 69–78.
Technology & Maintenance Council. (2018). Technological advances in next generation colli-
sion warning driver interfaces. The Trailblazer: The Technical Journal of TMC's 2018
Annual Meeting. ATA, Arlington, VA, USA, 40–45.
Technology & Maintenance Council. (2019). RP 430 update. The Trailblazer: The Technical
Journal of TMC’s 2019 Annual Meeting. ATA, Arlington, VA, USA, 39–40.
Tsugawa, S., Jeschke, S., & Shladover, S. E. (2016). A review of truck platooning projects for
energy savings. IEE Transactions on Intelligent Vehicles, 1(1), 68–77.
Virginia Tech Transportation Institute. (2015). SHRP 2 Researcher Dictionary for Video
Reduction Data Version 4.1. Blacksburg, VA: VTTI.
Weir, D. H. (2010). Application of a driving simulator to the development of in-vehicle human-
machine-interfaces. IATSS Research, 34, 16–21.
23 Techniques for Making Sense of Behavior in Complex Datasets
Linda Ng Boyle
University of Washington
CONTENTS
Key Points .............................................................................................................. 497
23.1 Introduction .................................................................................................. 498
23.2 Data Collection Tools ................................................................................... 499
23.2.1 On-Road Settings .............................................................................. 499
23.2.2 Driving Simulators ........................................................................... 501
23.2.3 Other Data Collection Tools ............................................................. 503
23.3 Data ............................................................................................................... 504
23.3.1 Data Cleaning, Extraction, and Preparation ..................................... 504
23.3.2 Accounting for Context ..................................................................... 506
23.3.3 Accounting for Outliers .................................................................... 508
23.3.4 Data Visualization ............................................................................ 508
23.4 Understanding the Controller ....................................................................... 509
23.4.1 Human Behavior ............................................................................... 510
23.4.2 System Behavior ............................................................................... 511
23.4.3 Human–System Interaction .............................................................. 511
23.5 Analytical Tools ............................................................................................ 512
23.5.1 Exploratory Models .......................................................................... 512
23.5.2 Inferential Models ............................................................................. 512
23.5.3 Predictive Models ............................................................................. 514
23.6 Conclusion ..................................................................................................... 515
References .............................................................................................................. 515
KEY POINTS
• Given advances in sensors and technology, a great deal of transportation
data can be gathered, and these data can help answer questions related
to road user behavior and help understand the behavior of the vehicle.
However, the challenge is being able to extract the appropriate data for the
research question of interest and being able to use the data to model the
anticipated outcomes.
23.1 INTRODUCTION
Human factors researchers are often asked to provide insights on road users'
behavior. In the context of automated, connected, and intelligent vehicles, the
research questions relate to the human–system interaction and can be tailored
toward the human side or the system side. Given advances in sensors and technol-
ogy, we have amassed a great deal of transportation data, and these data can help
answer questions related to road user behavior and help predict the behavior of
the vehicle. However, the challenge is being able to extract the appropriate data
for the research question of interest and being able to use the data to model the
anticipated outcomes. As we move forward with more advanced vehicle systems,
the goal becomes one of understanding how users' behavior changes as they interact with these systems over time and whether the changes will impact safety (also
see this Handbook, Chapter 12).
Oftentimes, researchers begin with an analytical model rather than a theoretical
model. That is, researchers tend to begin integrating data into sophisticated predictive or inferential models without any knowledge or understanding of the data.
Theory provides a framework for exploring the research problem. Its purpose is
to clarify the concepts and proposed relationships among the variables of interest.
To gain a better understanding of the theoretical side, researchers should take time to
explore and understand the data before moving toward inferences and predictions.
For that reason, this chapter focuses more on how best to parse and process the
data and how to visualize the data (and less on the analytical models). That said,
the reader is provided some high-level examples of tools that go beyond regression
models to provide the reader a good starting point for classifying, inferring, and
predicting the outcomes of interest. In general, safe driving depends on the ability
of the controller to devote attention to the roadway demand (Lee, Young, & Regan,
2009). For the vehicle systems discussed in the context of this book, the controller
can be the human or the system. This chapter considers research questions that can
be addressed using data related to automated, connected, and intelligent vehicles.
The goal of this chapter is to provide techniques that can help the analyst make sense
of complex datasets. This chapter is organized into four topics:
• Understanding the data collection tools: Data collection tools used to make
sense of the behavior given the context, user, and the technology itself.
• Understanding the data: Types of data often used for examining behavior
and ways to manipulate, visualize, and explore your data.
• Understanding the controller: Types of research questions that may be
of interest from the human and system controller perspective, and what
human–system interactions can be modeled.
• Understanding the analytical methods: Types of models that can be devel-
oped that will account for changes in behavior over time and location.
road user. These sensors need to be mapped to the driver's actions and the roadway conditions. The driver's actions are often captured using audio or video equipment. The integration of all these sensors often requires some manual coding; this is true even
if one has access to video reduction software. The reason for this is that the event of
interest needs to be initially identified by the researcher/analyst and that may take
extensive time to complete.
While complex, such datasets are incredibly useful in providing insights on driver-initiated activities and any adaptive behavior associated with existing technology in the driver's vehicle. The datasets can also provide insights
on user behavior during safety-critical events that cannot be observed otherwise.
They can also capture surprising issues that the researcher may not have deemed
relevant. This includes unexpected interactions with passengers, pedestrians, and
other road users, changes due to road conditions, and trip purpose. For example,
most drivers do not interact with their mobile device continuously while driv-
ing. Rather, they moderate their use based on the road, environment, and traffic
(Oviedo-Trespalacios, Haque, King, & Washington, 2019). In fact, studies show
that many drivers commonly perform secondary tasks while stopped at a red light
(e.g., Kidd, Tison, Chaudhary, McCartt, & Casanova-Powell, 2016). And, while using a mobile device at an intersection may be perceived by other road users as more of an annoyance than a safety concern, studies do show that a driver's continual need to engage with their mobile device may be an indication of more harmful and negative behavior (Billieux, Maurage, Lopez-Fernandez, Kuss, & Griffiths, 2015), which can impact overall driver safety. Data collected from the
real world can provide insights on these subtle differences as well as help the
researcher understand the sensitive issues that participants are not as willing to
reveal in surveys or controlled studies.
On-road data can also provide insights on the appropriateness of crash surrogate
measures. For example, are lane departures or the distance to a leading vehicle really
good indicators of “crashes”? On-road data can be used to map out the trajectory of
the vehicle in the context of other road users. In doing so, one may see that the sharp
deviation in lane position may have actually been necessary to help the driver avoid
an otherwise more dangerous situation.
As of this writing, several of these datasets are available to users from the
organization that collected the data. Several are housed on data.gov, such as:
1 SPMD: https://catalog.data.gov/dataset/safety-pilot-model-deployment-data.
2 SPMD for one day: https://catalog.data.gov/dataset/intelligent-transportation-systems-research-data-exchange-safety-pilot-model-deployment-on-f77e5.
3 INFLO: https://catalog.data.gov/dataset/intelligent-network-flow-optimization-prototype-infrastructure-traffic-sensor-system-data.
Other data are housed at the institution that collected the data and often require a
data sharing agreement. Examples include:
• Waymo https://waymo.com/open/
• VTTI 100 Car study: https://dataverse.vtti.vt.edu/dataset.xhtml?persistentId=doi:10.15787/VTT1/CEU6RB
Due to protection of human subjects, other datasets (e.g., Strategic Highway Research Program 2 (SHRP 2) data) cannot be downloaded directly from the website, and a data use license is required to access the data. SHRP 2: https://dataverse.vtti.vt.edu/dataverse/shrp2nds.
These real-world datasets can capture differences given the driver’s natural habitat,
but analysis of such data can be a challenge. For example, it would be rare to capture
a crash that occurred in the same environment, weather, and traffic condition for more
than one study participant. While the behavior that is related to a crash or safety-
critical event may be interesting to explore in the real world, it would be a challenge
to identify all the factors that may have caused the event. Oftentimes, the "abnormal" behaviors are those that warrant further investigation. But how can we best infer policy, education, or engineering changes from the few observations of such behavior? That is the value of data collected from driving simulators.
Thus, driving simulators are useful to provide practical feedback before system
deployment and help guide the design of automated systems. Driving simulator
studies can be used to observe the effects of safety-critical events for varying
circumstances. For example, a study by Peng, Boyle, Ghazizadeh, and Lee (2013)
used a driving simulator and eye glance data to extract the level of risk associated
with entering and reading text while driving (Figure 23.1a). Given that the par-
ticipants were exposed to the exact same secondary task condition, the study was
very useful to examine individual differences (Figure 23.1b). More specifically,
the study showed that there was greater variation between participants for enter-
ing text when compared with reading text. And drivers that had long glances away
from the road when typing 12 characters also tended to have long glances when
typing four characters.
FIGURE 23.1 (See color insert.) Data from a study on secondary task engagement. The data in (a) were reported in Peng et al. (2013). A review of the data afterwards showed that the spread of maximum eyes-off-road time was greater in the text entry task when compared with the text reading task (b).

Because driving simulator studies are not the real world, there are always issues regarding the validity of the outcomes and how well they really generalize to the real world. There is also the possibility of simulator sickness, which can create sampling biases. That is, the types of people who can participate are skewed toward those who do not get simulator sickness.
4 https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812510.
Likewise, survey or questionnaire data are not often viewed as complex datasets.
But these studies can include longitudinal travel diaries that can be time consuming
to review. For example, the Puget Sound Regional Travel Study had some partici-
pants filling out a multi-page travel diary for seven days (RSG, 2018).5 A review of
these diaries shows that they contain thousands of records over separate datafiles for
the household, person, vehicle, day, and trip.
23.3 DATA
Extracting meaningful information from terabytes to petabytes of data can be costly
and time consuming. The data also need to be prepared before analysis. This often requires review of several sources of data: video, vehicle kinematics, and physiological data. You also need to link the questionnaires and other subjective measures to the objective measures.
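Linking these sources typically comes down to joining tables on a shared participant or trip identifier. Below is a minimal sketch in Python using pandas; the file names and column names are hypothetical.

```python
# A minimal sketch of linking subjective measures to objective driving
# data; file names and columns are hypothetical, not from a real study.
import pandas as pd

surveys = pd.read_csv("surveys.csv")        # one row per participant
kinematics = pd.read_csv("kinematics.csv")  # many rows per participant

# Summarize the objective data per participant before joining, so survey
# responses are not duplicated across every kinematic sample.
per_driver = (kinematics
              .groupby("participant_id")
              .agg(mean_speed=("speed_mps", "mean"),
                   hard_brakes=("hard_brake_flag", "sum"))
              .reset_index())

merged = surveys.merge(per_driver, on="participant_id", how="left")
print(merged.head())
```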
As part of data prepping, it is important to consider whether there are any
underlying subgroups. In human factors research, we often group the data by age
and gender, but there may be other more meaningful subgroups such as those
related to risk taking (e.g., Figure 23.1a). However, placing people into groups
may obscure other individual differences which may impact their use of new
technology. In summary, how you chop, slice, dice, or julienne the data will mat-
ter. And it is important to take the time to explore the data in different ways to
understand the driver’s behavior while they operate automated, connected, and
intelligent vehicles.
5 www.psrc.org/household-travel-survey-program.
6 Many states post their historical crash data on data.gov. Federal crash data on a sample of crashes
across the country is available at: BTS.gov.
reported crashes but did have similar road and lane configurations. In this way, you
control for the roadway factor while focusing on the human or system behavior.
The ability to observe traffic patterns in the real world is where on-road data provide great value. As an example, data collected as part of the SPMD study capture drivers as they traversed the signalized intersection of Washtenaw Ave. and S. Huron Pkwy in Ann Arbor, Michigan. Between 2013 and 2017, there was an average of 43 crashes per year at this intersection (SEMCOG, 2019), making it of great concern to transportation engineers and the public.
In understanding why crashes may be high at this intersection, a first step is to identify the buffer area of interest and how best to extract behavior that may not be observed in crash or self-reported data. The time and distance to the traffic light are two ways to buffer the data. The distance to the traffic signal has the advantage of ensuring that every participant's data begin at the same location, one from which they should have been able to view the traffic signal.
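As a rough illustration, such a distance buffer can be applied with a simple filter; in the sketch below, the 100 m threshold, file name, and column names are illustrative assumptions, not values from the SPMD study.

```python
# A minimal sketch of buffering intersection traces by distance to the
# signal; the threshold and column names are illustrative assumptions.
import pandas as pd

traces = pd.read_csv("intersection_traces.csv")

# Keep only samples within 100 m upstream of the signal, so every
# movement starts where the signal should have been visible.
BUFFER_M = 100.0
approach = traces[traces["dist_to_signal_m"].between(-BUFFER_M, 0.0)]

# One movement = one vehicle pass through the buffer.
n_movements = approach.groupby("trip_id").ngroups
print(f"{n_movements} movements within {BUFFER_M:.0f} m of the signal")
```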
The SPMD data show that the majority of movements through this intersection were by drivers going straight in the eastbound (n = 856) and westbound (n = 1,191) directions (Figure 23.2a). For those traveling straight in the eastbound direction, we can examine traces for all traffic light conditions (red, yellow, and green)
(Figure 23.2c). As observed with many complex datasets, when all the traces of data are mapped on a 2D plot, it becomes difficult to extract useful information. One can try to color code the data based on the color of the signal as in Figure 23.2c, but the traces for signals that were encountered often (red signal = 332 samples, green signal = 496) will overshadow those with less driver exposure (yellow signal = 27).

FIGURE 23.2 (See color insert.) Washtenaw Ave & S Huron Pkwy, Ann Arbor, MI 48104 traffic patterns. (a) Travel movements through intersection, (b) aerial view of intersection, (c) all movements in eastbound direction, and number of movements by (d) red signal, (e) green signal, and (f) yellow signal.
However, decades of studies show that the yellow light provides the most insight on
the decision processes and risk taking of the driver (Konecni, Ebbeson, & Konecni,
1976; Edwards, Creaser, Caird, Lamsdale, & Chisholm, 2003; Papaioannou, 2007).
From Figure 23.2f, we can see two major driving patterns as drivers approach a yel-
low light: (1) those that continue through without changing speed and (2) those that
slow down and wait for the signal to change to red.
Upon further investigation of the movements for those that encountered the yellow
light (Figure 23.2f), we see that one driver was above the speed limit on the approach
(top line at x = −100 m) while another was actually speeding up (bottom line at x = −100 m).
These individuals are not going through this intersection in the same manner as the
others. This can be viewed by other drivers as “unexpected” and create a safety-critical
situation for those that are making a left turn in the westbound direction.
The traces of vehicle movements shown in Figure 23.2 took many months to code.
The analyst needed to identify the start and end point for data extraction, review the
video data for 856 movements, and manually code the color of the traffic signal using
the forward video data. This data cleaning (or data wrangling) needs to be done before extracting the speed, acceleration, and deceleration of each movement.
While video or face detection algorithms can help automate some of the data pro-
cessing, the accuracy of the detection greatly depends on the technology available
at the time of data collection and the quality of the video. In the examination of this
intersection, a video detection algorithm was created to automate the process of traf-
fic signal color detection. Unfortunately, it was only 50% reliable given the quality
and resolution of the data. While clearly time consuming, it is still very important
to review the data before any analytical models are created. Things that need to be checked and cleaned in raw data include the following (a minimal sketch follows the list):
• Data entry errors: Depending on the statistical package used, upper- and lowercase letters may be treated as distinct values; this is the case in the R statistical package, for example. Also ask whether spikes in the data are true outliers that warrant further investigation or are due to data entry errors (e.g., the majority of individuals report driving 100 miles per week ± 20 miles, but a few entries are recorded with over 10,000 miles per week).
• Notation for missing data: Depending on how many people have entered
data, missing data can be listed in many ways (e.g., period, NA, na, blank).
• Numbers vs. characters: Check to see if some numbers are stored as text.
Creating a table of summary statistics can show which entries are not
included in numerical summaries (e.g., means, sd).
• Removing extra spaces: Some entries include extra spaces, which may also
cause some numbers to be read as characters.
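A minimal pandas sketch of these checks might look as follows; the column names and the 10,000-mile cutoff are illustrative assumptions.

```python
# A minimal sketch of routine raw-data checks; columns are hypothetical.
import pandas as pd

df = pd.read_csv("raw_survey.csv")

# Case inconsistencies: "Male" and "male" are distinct values until normalized.
df["gender"] = df["gender"].str.strip().str.lower()

# Collapse the many notations for missing data to a single marker.
df = df.replace({".": pd.NA, "NA": pd.NA, "na": pd.NA, "": pd.NA})

# Numbers stored as text (often from stray spaces) back to numeric;
# anything unparseable becomes NaN for later review.
df["miles_per_week"] = pd.to_numeric(df["miles_per_week"], errors="coerce")

# Flag implausible spikes for manual review rather than silently dropping them.
suspect = df[df["miles_per_week"] > 10_000]
print(df.describe(), suspect, sep="\n")
```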
There are also varying sample sizes, which can impact the ability to state that something is significantly different.
An example of why context matters is related to studies on driver distraction.
It appears to be clear from years of research that driver distraction can negatively
impact safety (e.g., Strayer et al., 2013). However, some outcomes show secondary
tasks can actually have a protective effect, or surprisingly help you be more attentive
(Young, 2013).
The impact of driver distraction on crash risk (in the real world) depends on many
things including: the type of distraction (eating, playing music, talking on the phone),
type of mobile device (phone, tablet, laptop), physical location of device (lap, purse,
car stand), type of interaction with the device (dialing, texting), as well as the context
in which the distraction occurs (highway, heavy traffic, snowy conditions).
One measure of safety associated with driver distraction is brake reaction time
(Consiglio, Driscoll, Witte, & Berg, 2003; Salvucci, 2002), where studies show that
distracted drivers tend to have higher brake reaction times (Strayer & Drews, 2004)
or longer reaction time (Lee, Caven, Haake, & Brown, 2001). The majority of these
findings were observed in a simulated environment. However, in a naturalistic set-
ting, the engagement in distractions is not controlled. The value of data from the real
world is the ability to capture these subtle differences, but it is greatly dependent on
how the analyst portrays the findings.
For example, let’s say we collected a sample of 1,200 events in the wild that
captures the driver’s brake reaction time while making a call on a mobile device.
The general conclusion could be that brake reaction time was actually quicker while
using the phone (see Figure 23.3). However, if we separated out the events into the
type of action used when making a call, we may see a difference in brake reaction
time. For those drivers that have to physically touch the screen to enter a phone num-
ber, a much slower response time is observed when compared with those who used
their voice while holding a hand-held device, and this may still be slower than those
who used Bluetooth dialing. Given that holding your device while driving is illegal
in many U.S. states, we may see that there are very few drivers that dial using the
touch screen or hand-held voice.
In fact, if we were to examine the data more closely, we may observe that the 150 observations for touch screen dialing are based on 75 unique participants (or 1–3 events per person), while the 900 observations for Bluetooth dialing are based on 225 unique participants (or 2–6 events per person). In summary, the differences in sample size, variation, and type of device can impact the conclusions and need to be accounted for.

FIGURE 23.3 A hypothetical example of differences in brake reaction time for making a call on a mobile device given the type of interaction.
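A quick tabulation of events and unique participants per condition can expose such imbalances before any model is fit; in the sketch below, the file and column names are hypothetical.

```python
# A minimal sketch of checking how many unique drivers stand behind
# each condition's event count; names are hypothetical.
import pandas as pd

events = pd.read_csv("brake_events.csv")

summary = (events
           .groupby("dial_method")
           .agg(n_events=("event_id", "count"),
                n_drivers=("participant_id", "nunique"),
                mean_brt_s=("brake_reaction_time_s", "mean"),
                sd_brt_s=("brake_reaction_time_s", "std")))
print(summary)  # e.g., 150 touch-screen events may come from only 75 drivers
```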
control in the number of characters that may be entered, the manner in which text is
entered, and even the type of device used to enter text.
Time can be segmented in many ways: by minutes, hours, days, weeks, months,
etc. You can also consider information by each sequential trip. The time scale
selected will greatly impact the amount of variation you see in the data. That is the actual benefit of looking at your data over time: you are able to get a sense of the variation and thereby gain insights on the spread in the data. Of course, these graphs should not replace rigorous confidence interval calculations, but they can help you quickly see large changes and help explain any statistically significant differences.
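As a hedged illustration, resampling the same signal at two time scales makes the effect of the chosen window visible; the file and column names below are assumptions.

```python
# A minimal sketch of segmenting a continuous signal at different time
# scales; the log file and speed column are illustrative assumptions.
import pandas as pd

ts = pd.read_csv("vehicle_log.csv", parse_dates=["timestamp"])
ts = ts.set_index("timestamp")

# The chosen window changes the variation you see: per-minute means
# retain far more spread than per-hour means.
per_minute = ts["speed_mps"].resample("1min").mean()
per_hour = ts["speed_mps"].resample("1h").mean()
print(per_minute.std(), per_hour.std())
```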
A 2-D correlation plot of data from complex datasets is not often meaningful
given the number of data points on the graph; the graph tends to look like a blob
or ink splatter. However, a correlation matrix table that shows the correlation coef-
ficients between variables in one table may be very useful. While these matrices may seem like a fishing expedition, they can help the analyst focus on only those relationships that may be of interest, especially when there are hundreds or thousands of variables. These correlations can then be placed in order from largest to smallest. It will then be up to the analyst to decide how reasonable, surprising, or unrealistic the correlations are.
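One hypothetical way to produce such a ranked list with pandas is sketched below; the file and variable names are assumptions.

```python
# A minimal sketch of ranking pairwise correlations by magnitude
# instead of plotting an unreadable scatter; columns are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("driving_measures.csv")
corr = df.corr(numeric_only=True)

# Keep each pair once by masking the diagonal and lower triangle,
# then sort the remaining coefficients by absolute magnitude.
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
pairs = corr.where(mask).stack().sort_values(key=abs, ascending=False)
print(pairs.head(20))
```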
One way to overcome overplotting is to use heat maps. Heat maps provide a
way to visualize the volume of events or locations. They are called heat maps as
they are used to show the “hot” areas in an otherwise meaningless blob. Bubble
charts are also great for showing relationships among discrete outcomes. They are
essentially a correlation plot in which each data point is replaced by a bubble. The
size of the bubble represents the amount of data that has the same x and y values.
Bubble charts provide another way to view three dimensions of information in a
2D space.
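A minimal sketch of a bubble chart, with hypothetical discrete measures, might look as follows.

```python
# A minimal sketch of a bubble chart where bubble size encodes the
# number of observations sharing an (x, y) pair; columns are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("events.csv")

counts = (df.groupby(["glances_away", "secondary_tasks"])
            .size()
            .reset_index(name="n"))

plt.scatter(counts["glances_away"], counts["secondary_tasks"],
            s=counts["n"] * 20)  # scale factor chosen for visibility
plt.xlabel("Glances away from road")
plt.ylabel("Secondary tasks per trip")
plt.show()
```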
While not as intuitive, cluster analysis provides a way to group a set of people
together based on how similar their responses are across several variables. Nielsen
and Haustein (2018) used cluster analysis to group drivers based on their expecta-
tions of self-driving cars (skeptics, indifferent, enthusiasts). These groupings in the
data may not have been as apparent otherwise. Clustering drivers based on their driving profiles, trust level, or driving experience can help us better design advanced vehicle systems around individual needs and preferences.
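As a rough sketch in the spirit of such work (not the method used by Nielsen and Haustein, 2018), k-means clustering on standardized survey scores might look as follows; the three-cluster choice and column names are assumptions.

```python
# A minimal sketch of clustering drivers on standardized attitude
# scores; the cluster count and columns are illustrative assumptions.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("driver_survey.csv")
features = ["trust_score", "tech_interest", "perceived_risk"]

X = StandardScaler().fit_transform(df[features])  # equal footing for all scales
df["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Cluster means hint at interpretable groups (e.g., skeptics vs. enthusiasts).
print(df.groupby("cluster")[features].mean())
```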
Some example research related to these questions can be found in Merat, Jamson,
Lai, Daly, and Carsten (2014) and in many chapters in this Handbook; however,
clearly more research is needed. For example, while Merat et al. (2014) showed that it
can take up to 15 seconds for drivers to resume control, other studies show takeover
times from 4 to 6 seconds (Eriksson & Stanton, 2017; Gold, Happee, & Bengler,
2018) depending on other road, traffic, and driver conditions. As noted earlier, the
examination of takeover time is often examined in a driving simulator, and these
studies used precisely that data collection tool with some studies supplemented with
eye trackers.
• In what context will the automation fail? When will the automation fail?
• How can the system be designed to prevent these failures?
• Will the vehicle be able to stop in time when the car in front is closing in?
• Will the vehicle be able to detect when a pedestrian will cross in front of it?
• How did the combined lateral and longitudinal control system operate?
• What operational conditions influenced the availability of the automated
functions?
23.5 ANALYTICAL TOOLS
The overarching goal of this chapter was to provide readers with ways to make sense of behavior in complex datasets. The focus has been on understanding the data, the
user, and the context. This understanding will help researchers better define the
outcomes of interest as well as the analytical method that would be best to address
the research questions. As noted earlier, the on-road and simulator studies are
often supplemented with eye trackers, heart rate monitors, EEGs, and even tools
that collect body posture (Lotz & Weissenberger, 2018). Once these datasets are
integrated together, the analyst then decides which variables should be included in
the analytical model.
The value of data collected from on-road and simulator studies is the ability to
capture changes over time and location. Some examples of dependent measures (or
outcomes) of interest include takeover time, takeover probability, time-to-collision,
and crash probability (Gold et al., 2018). There are many analytical models that can
be used to examine these dependent variables, and while there are many perspec-
tives (see Donoho, 2015), the models can be generally grouped into three categories:
exploratory, inferential, and predictive.
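As one hedged illustration of the inferential category, a logistic regression of takeover success on hypothetical predictors is sketched below; the data file, columns, and model form are assumptions rather than a prescribed analysis.

```python
# A minimal sketch of an inferential model: logistic regression on
# whether the driver took over in time; predictors are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("takeover_events.csv")  # took_over coded 0/1

# Coefficients and their confidence intervals support inference about
# how lead time and secondary-task load relate to the outcome.
model = smf.logit("took_over ~ lead_time_s + secondary_task_load",
                  data=df).fit()
print(model.summary())
```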
23.6 CONCLUSION
This chapter describes four things needed to make sense of behavior in complex
datasets. First, know your data collection tools and what type of variables can be
collected from these tools. Second, know your data and the context in which the data were collected. While data cleaning can be time consuming, it is an important first step after you collect data. There are many ways to look at your data. The simplest is to compute point estimates (means, medians, standard deviations), but complex data often require data visualization, and that requires time to explore. As Yanir Seroussi notes,7
good data scientists “… are ready to get their hands dirty by writing code that cleans
the data…” Third, know the research questions to ask to create good, thoughtful
controllers. This is achieved by understanding the goals of the controller as well as
their limitations. Lastly, know the analytical methods and the goal of the analysis;
are you interested in data exploration, inferring the likelihood of a system to be used
correctly, or predicting when the system will fail?
One of my graduate students gave the following advice at a panel session on
naturalistic data, “Don’t be scared.” Alternatively, I have colleagues who believe
graduate students should, “be scared, be very scared.” Depending on the day, I think
that both sentiments can be true. Complex data merit extreme respect because they
contain many pitfalls and we need to proceed carefully and with humility.
Finding the event of interest may be a challenge as it is often rare, and some
events, such as a crash on a freeway, may not be observed even after extensive data
collection efforts. However, capturing the differences in behavior can help ensure
that automated systems can enhance traffic flow, that drivers can be safer, and that
our transportation network accommodates a wide range of road users with comfort.
REFERENCES
Billieux, J., Maurage, P., Lopez-Fernandez, O., Kuss, D. J., & Griffiths, M. D. (2015). Can dis-
ordered mobile phone use be considered a behavioral addiction? An update on current
evidence and a comprehensive model for future research. Current Addiction Reports,
2(2), 156–162.
Boelhouwer, A., van den Beukel, A. P., van der Voort, M. C., & Martens, M. H. (2019). Should
I take over? Does system knowledge help drivers in making take-over decisions while
driving a partially automated car? Transportation Research Part F: Traffic Psychology
and Behaviour, 60, 669–684.
Boyle, L.N. & Lee, J.D. (2010). Using driving simulators to assess driving safety. Accident
Analysis & Prevention, 42(3), 785–787.
Carsten, O., Kircher, K., & Jamson, S. (2013). Vehicle-based studies of driving in the real
world: The hard truth? Accident Analysis & Prevention, 58, 162–174.
Consiglio, W., Driscoll, P., Witte, M., & Berg, W. P. (2003). Effect of cellular telephone
conversations and other potential interference on reaction time in a braking response.
Accident Analysis & Prevention, 35(4), 495–500.
Dai, D. J. & Jaworski, D. (2016). Influence of built environment on pedestrian crashes:
A network-based GIS analysis. Applied Geography, 73, 53–61. doi:10.1016/j.
apgeog.2016.06.005
7 https://yanirseroussi.com/2014/10/23/what-is-data-science/.
Donmez, B., Boyle, L., & Lee, J. D. (2009). Designing feedback to mitigate distraction. In
M. A. Regan, J. D. Lee, & K. Young (Eds.), Driver Distraction: Theory, Effects, and
Mitigation. Boca Raton, FL: CRC Press.
Donoho, D. (2015, September 18). 50 years of Data Science. Based on a Presentation at the
Tukey Centennial Workshop, Princeton, NJ. Retrieved from http://courses.csail.mit.
edu/18.337/2015/docs/50YearsDataScience.pdf
Edwards, C. J., Creaser, J. I., Caird, J. K., Lamsdale, A. M., & Chisholm, S. L. (2003).
Older and younger driver performance at complex intersections: Implications for
using perception-response time and driving simulation. Proceedings of the Second
International Driving Symposium on Human Factors in Driver Assessment, Training
and Vehicle Design (pp. 33–38). Iowa City, IA: Public Policy Center.
Eriksson, A. & Stanton, N. A. (2017). Takeover time in highly automated vehicles: Noncritical
transitions to and from manual control. Human Factors, 59(4), 689–705.
Fisher, D. L., Rizzo, M., Caird, J., & Lee, J. D. (2011). Handbook of Driving Simulation for
Engineering, Medicine, and Psychology. Boca Raton, FL: CRC Press.
Gold, C., Happee, R., & Bengler, K. (2018). Modeling take-over performance in level 3 con-
ditionally automated vehicles. Accident Analysis & Prevention, 116, 3–13.
Gringarten, E. & Deutsch, C. V. (2001). Teacher’s aide variogram interpretation and model-
ing. Mathematical Geology, 33(4), 507–534.
Janssen, C. P., Boyle, L. N., Kun, A. L., Ju, W., & Chuang, L. L. (2019). A hidden Markov
framework to capture human–machine interaction in automated vehicles. International
Journal of Human–Computer Interaction, 35(11), 947–955.
Kidd, D. G., Tison, J., Chaudhary, N. K., McCartt, A. T., & Casanova-Powell, T. D. (2016).
The influence of roadway situation, other contextual factors, and driver characteris-
tics on the prevalence of driver secondary behaviors. Transportation Research Part F:
Traffic Psychology and Behaviour, 41, 1–9.
Kim, J. K., Ulfarsson, G. F., Shankar, V. N., & Mannering, F. L. (2010). A note on model-
ing pedestrian-injury severity in motor-vehicle crashes with the mixed logit model.
Accident Analysis & Prevention, 42(6), 1751–1758.
Kim, M., Kho, S. Y., & Kim, D. K. (2017). Hierarchical ordered model for injury severity of
pedestrian crashes in South Korea. Journal of Safety Research, 61, 33–40. doi:10.1016/j.
jsr.2017.02.011
Konecni, V., Ebbeson, E. B., & Konecni, D. K. (1976). Decision processes and risk taking
in traffic: Driver response to the onset of yellow light. Journal of Applied Psychology,
61(3), 359.
Lappi, O. (2015). Eye tracking in the wild: The good, the bad and the ugly. Journal of Eye
Movement Research, 8(5), 1–21.
Lee, J. D., Caven, B., Haake, S., & Brown, T. L. (2001). Speech-based interaction with in-
vehicle computers: The effect of speech-based e-mail on drivers’ attention to the road-
way. Human Factors, 43(4), 631–640.
Lee, J. D., Young, K.L., & Regan, M.A. (2009). Defining driver distraction. In M. A. Regan,
J. D. Lee, & K. L. Young (Eds.), Driver Distraction: Theory, Effects, and Mitigation
(pp. 31–40). Boca Raton, FL: CRC Press.
Lotz, A. & Weissenberger, S. (2018). Predicting take-over times of truck drivers in condi-
tional autonomous driving. International Conference on Applied Human Factors and
Ergonomics (pp. 329–338). Berlin: Springer.
Merat, N., Jamson, A. H., Lai, F. C., Daly, M., & Carsten, O. M. (2014). Transition to
manual: Driver behaviour when resuming control from a highly automated vehicle.
Transportation Research Part F: Traffic Psychology and Behaviour, 27, 274–282.
Miller, E. E. & Boyle, L. N. (2019). Adaptations in attention allocation: Implications for take-
over in an automated vehicle. Transportation Research Part F: Traffic Psychology and
Behaviour, 66, 101–110.
Naujoks, F., Purucker, C., & Neukum, A. (2016). Secondary task engagement and vehicle
automation–Comparing the effects of different automation levels in an on-road experi-
ment. Transportation Research Part F: Traffic Psychology and Behaviour, 38, 67–82.
Nielsen, T. A. S. & Haustein, S. (2018). On sceptics and enthusiasts: What are the expecta-
tions towards self-driving cars? Transport Policy, 66, 49–55.
Oviedo-Trespalacios, O., Haque, M. M., King, M., & Washington, S. (2019). “Mate! I’m run-
ning 10 min late”: An investigation into the self-regulation of mobile phone tasks while
driving. Accident Analysis & Prevention, 122, 134–142.
Papaioannou, P. (2007). Driver behaviour, dilemma zone and safety effects at urban sig-
nalised intersections in Greece. Accident Analysis & Prevention, 39(1), 147–158.
Peng, Y., Boyle, L.N., Ghazizadeh, M., & Lee, J. D. (2013). Factors affecting glance behav-
ior when interacting with in- vehicle devices: Implications from a simulator study.
Proceedings of the Seventh International Driving Symposium on Human Factors
in Driver Assessment, Training and Vehicle Design (pp. 474–480). Iowa City, IA:
University of Iowa. doi:10.17077/drivingassessment.1529
Prokop, G. (2001). Modeling human vehicle driving by model predictive online optimization.
Vehicle System Dynamics, 35(1), 19–53.
Ranney, T. A., Simmons, L. A., & Masalonis, A. J. (1999). Prolonged exposure to glare and
driving time: Effects on performance in a driving simulator. Accident Analysis &
Prevention, 31(6), 601–610.
RSG. (2018). Draft Final Report: 2017 Puget Sound Regional Travel Study. Retrieved from
www.psrc.org/sites/default/files/psrc2017-final-report.pdf
Salvucci, D. D. (2002). Modeling driver distraction from cognitive tasks. Proceedings of the
Annual Meeting of the Cognitive Science Society, 24(24), 792–797.
SEMCOG. (2019). High-Frequency Crash Locations: Washtenaw County: Washtenaw
Ave - Huron Pkwy S Detail Crash List. Retrieved from https://semcog.org/
high-frequency-crash-locations/point_id/81012727/view/individualcrashreport
Strayer, D. L., Cooper, J. M., Turrill, J., Coleman, J., Medeiros-Ward, N., & Biondi, F.
(2013). Measuring Cognitive Distraction in the Automobile. Washington, DC: AAA
Foundation for Traffic Safety. Retrieved from https://aaafoundation.org/wp-content/
uploads/2018/01/MeasuringCognitiveDistractionsReport.pdf
Strayer, D. L. & Drews, F. A. (2004). Profiles in driver distraction: Effects of cell phone con-
versations on younger and older drivers. Human Factors, 46, 640–649.
Tian, R., Li, L., Yang, K., Chien, S., Chen, Y., & Sherony, R. (2014). Estimation of the
vehicle-pedestrian encounter/conflict risk on the road based on TASI 110-car natural-
istic driving data collection. 2014 IEEE Intelligent Vehicles Symposium Proceedings
(pp. 623–629). Piscataway, NJ: IEEE.
Verbeke, G., Molenberghs, G., & Rizopoulos, D. (2010). Random effects models for lon-
gitudinal data. In K. van Montfort, J. H. L. Oud, & A. Satorra (Eds.), Longitudinal
Research with Latent Variables (pp. 37–96). Berlin: Springer.
Xiong, H. & Boyle, L. N. (2012). Drivers’ adaptation to adaptive cruise control: Examination
of automatic and manual braking. IEEE Transactions on Intelligent Transportation
Systems, 13(3), 1468–1473.
Young, R. A. (2013). Naturalistic studies of driver distraction: Effects of analysis methods
on odds ratios and population attributable risk. Proceedings of the 7th International
Driving Symposium on Human Factors in Driver Assessment Training and Vehicle
Design (pp. 509–515). Iowa City, IA: Public Policy Center.
Zangenehpour, S., Strauss, J., Miranda-Moreno, L. F., & Saunier, N. (2016). Are signalized
intersections with cycle tracks safer? A case–control study based on automated sur-
rogate safety analysis using video data. Accident Analysis & Prevention, 86, 161–172.
Ziegler, A. & Vens, M. (2010). Generalized estimating equations. Methods of Information in
Medicine, 49(5), 421–425.
24 Future Research Needs and Conclusions
Donald L. Fisher
Volpe National Transportation Systems Center
William J. Horrey
AAA Foundation for Traffic Safety
John D. Lee
University of Wisconsin-Madison
Michael A. Regan
University of New South Wales
CONTENTS
Key Points .............................................................................................................. 520
24.1 Introduction .................................................................................................. 520
24.2 The State of the Art: ACIVs ......................................................................... 521
24.3 Issues in the Deployment of ACIVs (Problems) ........................................... 522
24.3.1 Drivers' Mental Models of Vehicle Automation (Chapter 3) .......... 522
24.3.2 Driver Trust in ACIVs (Chapter 4) .................................................. 523
24.3.3 Public Opinion about ACIVs (Chapter 5) ........................................ 523
24.3.4 Workload, Distraction, and Automation (Chapter 6) ...................... 524
24.3.5 Situation Awareness in Driving (Chapter 7) .................................... 525
24.3.6 Allocation of Function to Humans and Automation and the Transfer of Control (Chapter 8) ...................................................... 525
24.3.7 Driver Fitness in the Resumption of Control (Chapter 9) ............... 526
24.3.8 Driver Capabilities in the Resumption of Control (Chapter 10) ...... 527
24.3.9 Driver State Monitoring for Decreased Fitness to Drive (Chapter 11) ...................................................................................... 527
24.3.10 Behavioral Adaptation and ACIVs (Chapter 12) ............................. 528
24.3.11 Distributed Situation Awareness (Chapter 13) ................................ 528
24.3.12 Human Factors Issues in the Regulation of Deployment (Chapter 14) ...................................................................................... 529
24.4 Human-Centered Design of ACIVs (Solutions) ............................................ 529
24.4.1 HMI Design for ACIVs (Chapter 15) ............................................... 529
24.4.2 HMI Design for Fitness-Impaired Populations (Chapter 16) .......... 530
24.4.3 Automated Vehicle Design for People with Disabilities (Chapter 17) ...................................................................................... 530
24.4.4 Importance of Training for ACIVs (Chapter 18) ............................. 531
24.5 Special Topics ............................................................................................... 531
24.5.1 Connected Vehicles in a Connected World: A Sociotechnical Systems Perspective (Chapter 19) ................................................... 532
24.5.2 Congestion and Carbon Emissions (Chapter 20) ............................. 532
24.5.3 Automation Lessons from Other Domains (Chapter 21) ................. 532
24.6 Evaluation of ACIVs ..................................................................................... 533
24.6.1 HF Considerations in Testing and Evaluating ACIVs (Chapter 22) ...................................................................................... 533
24.6.2 Techniques for Making Sense of Behavior in Complex Datasets (Chapter 23) ...................................................................................... 533
24.7 Conclusions ................................................................................................... 534
Acknowledgements ................................................................................................. 534
References .............................................................................................................. 534
KEY POINTS
• The exact point at which a given vehicle technology will be present in the
majority of the vehicle fleet is very difficult to predict, but no one expects
Level 4 or 5 technologies to be a majority of the fleet in the next ten years.
• Given that the majority of the vehicle fleet for the next ten years will be Level
0–3 vehicles, along with vehicles which have active safety systems, the over-
whelming majority of the chapters in the Handbook are relevant to today’s
human factors concerns and those concerns for at least the next decade.
• The best summaries of the future research needs are in the chapters themselves.
• This chapter focuses on the various research needs in each topical area
which the editors believe are most critical, based on their broad reading of
all of the chapters.
24.1 INTRODUCTION
Our collective experience professionally, as editors of this Handbook, and as avid
readers of the chapters contained herein, provides us with a perspective on the issues
surrounding the human factors concerns that are raised by the advance of automated,
connected, and intelligent vehicles (ACIVs) that we would like to share with our
readers. This experience may make us blind to the real concerns, but we hope not.
And in so far as it does, you should turn, as we do, to the chapters themselves, where the real expertise and wisdom lie.
So, what we want to do in closing is give readers the editors' reflections on the chapters and topics, including, in addition to summaries of the chapters, our best sense of the most critical human factors research, development, practice, planning, and/or policy needs relevant to the chapter being discussed. We encourage you to
disagree, to push the limits of what is possible, in whatever area is of most interest
to you. We certainly do not have privileged access to what will unfold in the future.
With that as background, we hope you find at least one or two nuggets in what follows.
control the driver needs to maintain over the vehicle in order to keep engaged.
Some vehicle manufacturers now allow drivers to keep their hands off the wheel as
long as their eyes are on the road for some period of time in a given interval. Other
vehicle manufacturers allow drivers to do almost anything as long as their hands
are on the wheel for some period of time in a given interval. However, no one knows
what exactly a driver must do to remain safely engaged at all points in time during a
trip, knowing that the car he or she is driving will serve as a guardian if the situation
warrants it. Research is needed into the most basic of questions.
societal acceptance of them. If not, people may refuse to purchase them, refuse to
travel in them, or interact with them in ways unintended by designers.
The following are some research needs in this area, as distilled from Chapter 5
and elsewhere (Cunningham, Regan, Horberry, & Dixit, 2019). First, most studies of
public opinion of automated vehicles have, to date, been cross-sectional in design,
precluding formulation of any conclusions about causality between the constructs of
interest. Future studies would benefit from the use of longitudinal designs to track
changes in public opinion over time and discern the factors that underlie identi-
fied changes. Second, highly and fully automated passenger vehicles (SAE Levels
3–5) are not yet commercially available in large numbers. Consequently, measure-
ment of public acceptability in most studies to date has been based on people having
to imagine how such vehicles might operate in the future. An important area for
future research is research on acceptance of such vehicles, after people have had
direct exposure to them, and to compare the findings with those obtained prior to
exposure—to determine to what extent measures of acceptability are predictive of
measures of acceptance. Third, there is evidence that, while there is some com-
monality in opinion about automated vehicles across countries and cultures, there
is also some divergence of opinion. Further research is needed to understand these
differences across a wider range of countries and cultures, to inform local needs
and to inform those who seek to market vehicles and systems in other countries
and cultures. Finally, there are many population demographic and other variables
(e.g., age, gender, willingness to pay) that can be used to predict societal acceptabil-
ity and acceptance of highly automated driving systems. Further research is needed
to understand which of these demographic and other variables, individually and col-
lectively, account for most of the variance in public acceptability and acceptance of
these technologies.
by drivers who drive erratically in traffic? In addition to the many specific research
questions, there are also more general conceptual questions regarding workload,
distraction, and automation. For example, will workload, distraction, and inatten-
tion, more generally, remain as issues in vehicles equipped with SAE Level 4 and
5 technologies that are operating autonomously (Cunningham & Regan, 2018)? For
these levels, is it possible for such self-driving vehicles themselves to be distracted or
overloaded (i.e., can the complexity of a situation exceed the processing power of the
hardware and software to take the appropriate countermeasures)? We might call this
“vehicle distraction” or “vehicle overload.” Here, again, the frame of reference from
which to conceptualize distraction and workload will change. But what competing
activities, if any, could divert a vehicle’s “attention” (or computational resources),
more generally, away from activities critical for safe driving? In fact, what might it
mean for a vehicle driving autonomously to be inattentive or be overloaded; and if it
was, what might be the mechanisms of inattention and resource constraints?
task (and vice versa). If one thing should be clear from the Handbook and Chapter 8
in particular, it is that the situation is not so straightforward. Humans are very good
at driving and its subtasks; the crash rate per mile driven is very low (while acknowl-
edging that every crash is a tragic event to be avoided). Automation is very good at
performing many driving tasks as well. Therein lies one challenge: the degree of
overlap in those tasks that both drivers and automation are good at. Who should be
assigned what task? Another challenge relates to those tasks and subtasks at which
drivers and automation are not very adept (e.g., driving in adverse conditions). As the
authors note, this is especially critical when the human–automation system needs
to negotiate control transfers. The appropriate allocation of functions and authority
continues to be an important area for research, including questions of can a function
be safely automated, should it be, and what happens when situational factors change?
Unfortunately, or perhaps ironically, some of the decisions regarding the allocation of
function are products of a design philosophy and occur far removed from the eventual
moment by moment interactions between the driver, system, and the traffic environ-
ment. Automation does not simply replace the person in performing certain functions,
but creates new functions associated with coordinating the automation, an important
research question remains: How to anticipate and support that coordination?
In Chapter 14, the author sets out to answer the question “What Human Factors
issues need to be considered in preparing policy and regulation for the deployment
of automated vehicles?” The focus of the chapter was on how much policy makers
need to be involved in attempting to influence: (1) how companies design systems for
human users and (2) how humans interact with those systems. As noted by the author,
this is difficult in areas of new and evolving technology, where the risks may not yet
be clear, let alone optimal solutions to address those risks. Consequently, the author
argued that policy makers may need to consider moving towards less prescriptive and
more outcomes-, or principles-based, approaches to policy and regulation to ensure
that risks are managed and companies and individuals can be held responsible, while
allowing for system innovation and evolution, but with less standardization, which
can create its own risks. The author underscored in Chapter 14 the need for govern-
ments to continually update policy as human behavior changes and the understand-
ing of human risks surrounding automated vehicles evolves, and identified two main
areas for future research. First, there is the need for governments to better monitor
road safety, due to the data that automated vehicles will provide. Second, there is a
strong need for research by governments to identify new risks, to understand their
likelihood and severity, and to react accordingly—in relation to the human–machine
interface (HMI) between the vehicle and the passengers/driver/fallback-ready user
inside the vehicle, how other road users interact with these vehicles, and how their
behavior evolves over time, and what this might mean for safety.
there, the design guidelines will need to be updated and modified for automated vehi-
cles. The opportunities for teaming the human and the automated vehicle through the
use of artificial intelligence (AI) are huge here (Kamar, 2016), not only in the original
conception of the software but also for updates to the software. In such a teamed sys-
tem, the driver will take actions based on recommendations from the AI partner. The
combined system (human, automation, and AI) can arguably make better decisions than
either component alone. What remains particularly problematic, and has been identi-
fied in a recent report as such (Bansal, Nushi, Kamar, Lasecki, & Horvitz, 2019), is the
fact that updates to the software are often incompatible with a driver’s previous mental
model of the software. Thus, the driver ends up making a decision that is incorrect with
the updated software that would have been correct with the original software. Even
though the updates improve the AI performance, they might not improve the combined
AI–human system. So, ideally there needs to be backward compatibility of the software
updates with the existing mental models. But that is not always possible. In such cases, one could potentially retrain the driver or share the AI's confidence in
a prediction. However, these two approaches come with real drawbacks (Bansal, Nushi,
Kamar, Lasecki, & Horvitz, 2019). Research is needed to deal with the backward com-
patibility problem and all of its ramifications for HMI design.
disabilities and many of the tenets of universal design provide some inroads for
addressing the needs of these very different driving populations. They also rein-
force the notion of the whole trip; for these and many other individuals, getting to
and from the vehicle and getting into the vehicle are as important as what the auto-
mated vehicle does while enroute. For them it is not the first mile/last mile which is
the only problem; rather, it is the first 100 and last 100 ft. Neglecting research that
addresses the whole trip can amplify rather than reduce the inequities of those who
are transportation disadvantaged. While many of the design principles discussed can improve drivers' experiences, there is a general need for research that documents and
quantifies these improvements. Given the breadth of drivers with disabilities, it will
be important to understand how different driver characteristics interact with system
effectiveness. Moreover, as the technology creates the potential for new drivers who
were previously unable to drive, there will need to be a more profound understanding
of how automation impacts their safety and mobility.
24.5 SPECIAL TOPICS
The special topics that were discussed in this section are the ones which pushed the
envelope of our understanding of and proposed use for ACIVs, either by looking
towards the future or by taking a step backwards and considering what has been
learned in other domains. We now look back on these chapters and single out those
issues that we see in most need of attention among the many that were mentioned.
The author in Chapter 23 provides abundant examples of tools that can be used to inspect complex datasets and, once a theoretical model is available, tools to analyze those datasets in ways appropriate to the questions being asked (appropriate to the theoretical model).
ate to the theoretical model). These complex datasets can help provide much more com-
plete answers to research questions than were previously available, questions related to
the human operator’s engagement and performance in a vehicle over time and space. The
research questions that can now be addressed (and need to be addressed) include: How
do drivers respond to transitions in automation states? Will the human operator know
when the automation fails? Will the human operator have sufficient time to take over
when the automation fails? How quickly can drivers resume control from a highly auto-
mated vehicle, and does this resumption change with greater system exposure? What are
the negative or positive effects of automation on driver’s performance over time?
24.7 CONCLUSIONS
We want to conclude by thanking once again all of the authors who contributed to this
Handbook. The creation, development, and completion of the Handbook have been a journey of many years and have enriched us, as we hope they have the contributors and, ultimately, our readers. We have probably already gone on too long. So we leave you
with one theme which has played centrally throughout the entire Handbook.
ACIVs are one key to reducing some of society’s greatest problems. The sheer mag-
nitude of the 1.4 million annual vehicle fatalities around the world is astronomical in
terms of economic costs and psychological burden. On a more personal level, tragi-
cally so many of us know only too well someone who has fallen victim to a distracted
driver, a driver who has fallen asleep, or perhaps a driver who has simply accelerated
when he or she had meant to decelerate. The economic costs and psychological burden of carbon emissions are becoming all too clear. Congestion is reaching levels that were
almost unbelievable ten years ago. More generally, commuting opportunities are the
key factors in social mobility, even more so than factors related to crime and educa-
tion (Bouchard, 2015). Individuals with mobility impairments could for the first time
be freed from the very real constraints on transportation needs that come with those
impairments. But the promise of ACIVs cannot be fully realized until the human fac-
tors concerns are addressed; in fact, the promise could be delayed considerably unless
these concerns are addressed before they become an issue (“dread risk,” Chapter 4).
ACKNOWLEDGEMENTS
Donald Fisher would like to acknowledge the support of the Volpe National
Transportation Systems Center for portions of the preparation of this Handbook.
The opinions, findings, and conclusions expressed in this publication are those of
the authors and not necessarily those of the Department of Transportation, the John
A. Volpe National Transportation Systems Center, the AAA Foundation for Traffic
Safety, or the University of New South Wales.
REFERENCES
AAA (2019). Advanced Driver Assistance Technology names: AAA’s recommendation for
common naming of advanced safety systems. Retrieved from https://www.aaa.com/
AAA/common/AAR/files/ADAS-Technology-Names-Research-Report.pdf
Bansal, G., Nushi, B., Kamar, E., Weld, D. S., Lasecki, W. S., & Horvitz, E. (2019). A case
for backward compatibility for human-AI teams. arXiv preprint arXiv:1906.01148.
Retrieved August 17, 2019, from https://arxiv.org/pdf/1906.01148.pdf
Bouchard, M. (2015, May 7). Transportation Emerges as Crucial to Escaping Poverty. The
New York Times. Retrieved September 28, 2019, from www.nytimes.com/2015/05/07/
upshot/transportation-emerges-as-crucial-to-escaping-poverty.html
Boudette, N. (2019, July 17). Despite high hopes, self-driving cars are ‘way in the future’.
The New York Times. Retrieved from https://www.nytimes.com/2019/07/17/business/
self-driving-autonomous-cars.html
Cunningham, M. & Regan, M. (2018). Driver distraction and inattention in the realm of auto-
mated driving. IET Intelligent Transport Systems, 12, 407–413.
Cunningham, M., Regan, M., Horberry, T., & Dixit, V. (2019). Public opinion about auto-
mated vehicles in Australia: Results from a large-scale national survey. Transportation
Research Part A, 129, 1–18.
Global Toyota. (2019, January 8). Dr. Gill Pratt, CEO, Toyota Research Institute, CES
2019 remarks. Retrieved from https://global.toyota/en/newsroom/corporate/26085202.
html
Kamar, E. (2016). Directions in hybrid intelligence: Complementing AI systems with human
intelligence. Proceedings of the International Joint Conference on Artificial Intelligence.
Retrieved August 17, 2019, from https://pdfs.semanticscholar.org/38b5/fec2730e4e3224d974694d4b3522c0778e45.pdf
Kelley-Baker, T., Berning, A., Ramirez, A., Lacey, J. C., & Compton, R. (2017). 2013-
2014 National Roadside Study of Alcohol and Drug Use by Drivers: Drug Results.
Washington, DC: National Highway Traffic Safety Administration. Retrieved August
17, 2019, from www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/13013-nrs_drug-
053117-v3-tag_0.pdf
Klauer, S., Guo, F., Simons-Morton, B., Ouimet, M., Lee, S., & Dingus, T. (2014). Distracted
driving and risk of road crashes among novice and experienced drivers. New England
Journal of Medicine, 370, 54–59.
Knipling, R. (2015). Naturalistic driving events: No harm, no foul, no validity. Proceedings of
the Eighth International Driving Symposium on Human Factors in Driver Assessment,
Training and Vehicle Design. Iowa City: University of Iowa.
Litman, T. (2019). Autonomous Vehicle Implementation Predictions: Implications for
Transport Planning. Victoria, British Columbia: Victoria Transport Policy Institute.
Retrieved August 10, 2019, from www.vtpi.org/avip.pdf
Mayhew, D., Simpson, H., & Ferguson, S. (2006). Collisions involving senior drivers: High
risk conditions and locations. Traffic Injury Prevention, 7, 117–124.
McDonald, A., Carney, C., & McGehee, D. V. (2018). Vehicle Owners' Experiences with
and Reactions to Advanced Driver Assistance Systems. Washington, DC: AAA
Foundation for Traffic Safety.
McIlroy, R. C., & Stanton, N. A. (2018). Eco-Driving: From Strategies to Interfaces (Transportation
Human Factors). Boca Raton, FL: CRC Press.
National Safety Council. (2019). Drowsy Driving Is Impaired Driving. Itasca, IL: National
Safety Council. Retrieved August 17, 2019, from www.nsc.org/road-safety/safety-topics/
fatigued-driving
NHTSA. (2016, September 24). Driver Alcohol Detection System for Safety. Retrieved from
www.nhtsa.gov/Vehicle+Safety/DADSS
Sagberg, F., Fosser, S., & Saetermo, I.-A. F. (1997). An investigation of behavioural adaptation
to airbags and antilock brakes among taxi drivers. Accident Analysis and Prevention,
29, 293–302.
Samuel, S. & Fisher, D. (2015). Evaluation of the minimum forward roadway glance duration
critical to latent hazard anticipation. Transportation Research Record, 2518, 9–17.
Seppelt, B., Seaman, S., Lee, J., Angell, L., Mehler, B., & Reimer, B. (2017). Glass half-full:
On-road glance metrics differentiate crashes from near-crashes in the 100-Car data.
Accident Analysis and Prevention, 107, 48–62.
Sivak, M. & Schoettle, B. (2012). Eco-driving: Strategic, tactical, and operational decisions
of the driver that influence vehicle fuel economy. Transport Policy, 22, 96–99.
Smith, R., Turturici, M., & Camden, M. (2018). Countermeasures Against Prescription and
Over-the-Counter Drug-Impaired Driving. Washington, DC: AAA Foundation for
Traffic Safety. Retrieved August 17, 2019, from https://aaafoundation.org/wp-content/
uploads/2018/10/VTTI_Rx_OTC_FinalReport_VTTI-FINAL-complete-9.20.pdf
Watson, A., & Zhou, G. (2016). Microsleep prediction using an EKG-capable heart rate moni-
tor. IEEE First International Conference on Connected Health: Applications, Systems
and Engineering Technologies (CHASE). Retrieved February 28, 2019, from
https://ieeexplore.ieee.org/document/7545850
Yadron, D. (2016, June 2). Two years until self-driving cars are on the road – is Elon Musk
right? The Guardian. Retrieved August 11, 2019, from www.theguardian.com/
technology/2016/jun/02/self-driving-car-elon-musk-tech-predictions-tesla-google