Detailed Survey of GNC
Farid Kendoul
Australian Research Centre for Aerospace Automation (ARCAA), 22-24 Boronia Road, Eagle Farm, 4009, Queensland, Australia and
CSIRO ICT Centre, PO Box 883, Kenmore, Queensland, Australia
e-mail: farid.kendoul@csiro.au
Received 23 March 2011; accepted 12 August 2011
Recently, there has been growing interest in developing unmanned aircraft systems (UAS) with advanced onboard autonomous capabilities. This paper describes the current state of the art in autonomous rotorcraft UAS (RUAS) and provides a detailed literature review of the last two decades of active research on RUAS. Three functional technology areas are identified as the core components of an autonomous RUAS. Guidance, navigation, and control (GNC) have received much attention from the research community, and have dominated the UAS literature from the nineties until now. This paper first presents the main research groups involved in the development of GNC systems for RUAS. Then it describes the development of a framework that provides standard definitions and metrics characterizing and measuring the autonomy level of a RUAS using GNC aspects. This framework is intended to facilitate the understanding and the organization of this survey paper, but it can also serve as a common reference for the UAS community. The main objective of this paper is to present a comprehensive survey of RUAS research that captures all seminal works and milestones in each GNC area, with a particular focus on practical methods and technologies that have been demonstrated in flight tests. These algorithms and systems have been classified into different categories and classes based on the autonomy level they provide and the algorithmic approach used. Finally, the paper discusses the RUAS literature in general and highlights challenges that need to be addressed in developing autonomous systems for unmanned rotorcraft.
© 2012 Wiley Periodicals, Inc.
Figure 1. Categories of unmanned rotorcraft systems. Shown examples: Class III, the CSIRO-ARCAA robotic helicopter (VARIO Benzin Trainer); Class IV, the MIT autonomous indoor quadrotor (Ascending Technologies); Class V, the Epson micro-flying robot. (Graphic not reproduced.)
carrying a safety pilot onboard the vehicle, thereby providing an excellent test bed for complex and risky flight tests. A good example of this category is the Boeing Unmanned Little Bird (ULB) helicopter (the ULB has been used in the Piasecki/CMU CMUAV-CBRNE project).
• Category II: Medium-scale UAS helicopters that are available as autonomous or semiautonomous platforms and have significant payload (more than 10 kg), with a total weight of more than 30 kg. Some examples are the Yamaha RMAX (Japan), Shiebel S-100 (Austria), Rotomotion SR500 (USA), Hirobo SkySurveyor (Japan), Zaval (Russia), ServoHeli-120 (China), and RemoH-C100 (South Korea). The main advantage of these platforms is their significant payload, which allows them to carry different heavy and high-quality navigation and mission sensors. These platforms are generally well-engineered with some level of dependability.
• Category III: Small-scale RUAS that are based on RC helicopters with optionally integrated autopilot. They have a payload of several kilograms (2 to 10 kg) and a total weight of less than 30 kg. Helicopters such as the Vario Benzin Trainer, Bergen Industrial Twin, Rotomotion SR100, and Hirobo and others fall into this category. Platforms of this class have less payload than those of category II, but they can still carry most of the navigation and mission sensors. They feature low cost and availability, but they require considerable maintenance and engineering work to convert them from RC helicopters to dependable UAS.
• Category IV: Mini RUAS that are man-portable and can fly outdoors as well as in confined and indoor environments. Their payload is generally less than 2 kg and their total weight can range from hundreds of grams to a few kilograms. Most of them are electrically powered, with a flight time that ranges from 5 min to 1 h, depending on the payload. Most multirotors fall into this category, along with some other designs. Their limited payload still allows them to carry small conventional avionics and lightweight perception sensors. Their low cost, easy maintenance, and safe operation make them excellent test beds for research.
• Category V: Micro air vehicles (MAVs) with less than 100 g payload, such as the Epson micro flying robot. Standard navigation sensors and avionics are difficult to carry on these machines. Research challenges include novel sensing and navigation solutions, such as those based on bio-mimetic principles. These RUAS are mainly designed for indoor applications and can be launched and recovered by hand.

RUAS of categories I, II, and III are generally suitable for outdoor applications and require a suitable area for launch and recovery. These three categories are dominated by conventional helicopter configurations with a single main rotor and a tail rotor. On the other hand, most mini and micro UAS of categories IV and V are multirotor platforms (e.g., quadrotors and coaxial vehicles) that can fly outdoors as well as indoors and can be launched by hand or from small and narrow spaces.

1.2. Motivation for and Scope of This Survey

Nonmilitary research in RUAS only began in the early 1990s, and they have now become a popular research area. Over the past 20 years, an enormous amount of research has gone into guidance, navigation, and control (GNC) for RUAS, resulting in varied techniques and a large number of published papers. Although some survey papers have tried to review small subsets of methods in a particular area—Ollero and Merino (2004) for flight controllers, Chao, Cao, and Chen (2010) for autopilots, Goerzen, Kong, and Mettler (2010) for path planning algorithms, and Valavanis (2007) for UAS in general2—there is a real need for a comprehensive survey to report and organize the large variety of GNC methods, providing a context for viewing and comparing autonomy technologies developed for RUAS. This work was mainly motivated by the fact that we have not found a single survey specializing on GNC systems for UAS in general and RUAS in particular, despite the need for such a work after two decades of active research in these areas. This paper provides an overview of GNC systems developed to date to increase the autonomous capabilities of unmanned rotorcraft systems. The approaches that have been reported are organized into three main categories: control, navigation, and guidance. For each category, methods are grouped at the highest level based on the autonomy level they provide, and then according to the algorithmic approach used, which in most cases is closely associated with the type of sensors used. The central objective of this survey paper is to serve the UAS research community by providing an overview of the state of the art, major milestones, and unsolved problems in the areas of GNC for RUAS. This will help researchers to reduce reinvention and enable them to better identify the key critical gaps that prevent advances in the field.

The rest of the paper is organized as follows: The main research groups involved in research and development of autonomous RUAS are presented in Section 2. Section 3 introduces autonomy aspects onboard RUAS, including autonomy definition from the UAS perspective, autonomy levels and metrics, and the main components of a typical autonomous system. Sections 4, 5, and 6 provide a comprehensive survey of major works focusing on flight control, autonomous navigation, and guidance, respectively. Finally, discussions and conclusions about published papers and developed systems are given in Section 7.

2 The book (Valavanis, 2007), published in 2007, also provided some overview of recent advances in UAS, but it is more a concatenation of contributed chapters from different groups than a survey.

2. RESEARCH GROUPS INVOLVED IN RUAS RESEARCH AND DEVELOPMENT

A number of research groups are working on the development of autonomy technologies for RUAS. Figure 2 lists some 27 research groups that are involved in research and development of autonomous RUAS. The list is not exhaustive and excludes military and industrial research groups.

3. AUTONOMY LEVELS FOR UNMANNED ROTORCRAFT SYSTEMS

Before reviewing recent advances in research and development of autonomous RUAS, it is important to develop a framework that provides standard definitions and metrics characterizing and measuring the autonomy level of a RUAS. In this section, the autonomy levels for unmanned rotorcraft systems (ALFURS) framework is proposed, which is based on the generic NIST3 ALFUS framework4 (Huang, Messina, & Albus, 2007) with some modifications and extensions to make it specific to RUAS and research-oriented. Another of the main objectives of this section is to identify the key components that constitute the autonomy architecture onboard a RUAS. This section is intended to facilitate the understanding of this survey paper, but it can also serve as a common reference for the UAS community.

3 National Institute of Standards and Technology.
4 The Autonomy Levels For Unmanned Systems (ALFUS) Ad Hoc Workgroup is a NIST-sponsored effort that aims at formulating a logical framework for characterizing the autonomy of unmanned systems in general, covering issues of definitions, metrics, levels of autonomy, etc.

3.1. Terminology and Key Definitions

In this section, definitions of terms that are most relevant to RUAS autonomy are proposed. As such, consistency with the NIST ALFUS (Huang, 2008) definitions is assumed. However, for certain terms, the NIST generic definitions have been modified to better suit RUAS. For some other terms, new definitions are proposed based on information from various sources (ICAO, FAA, UAS Roadmap 2010–2035, NIST, etc.).

Definition 1. Rotorcraft: A heavier-than-air aircraft5 that is supported in flight by the dynamic reaction of the air against its power-driven rotors on a substantially vertical axis.

5 See ICAO and FAA definitions of aircraft.
Figure 2. Continued.

[Figure 3 residue: block diagram of onboard autonomy functions arranged from lowest to highest level — sensing (GPS/IMU, visual, etc.), navigation with low-level decision-making, a waypoint sequencer and trajectory generation, and guidance with reasoning and cognizance; graphic not reproduced.]
Definition 2. Rotorcraft Unmanned Aerial Vehicle (RUAV): A powered rotorcraft that does not require an onboard crew, can operate with some degree of autonomy, and can be expendable or reusable. Most RUAVs include integrated equipment such as avionics, data links, payload, and various algorithms needed for flight.

Definition 3. Rotorcraft Unmanned Aerial or Aircraft System (RUAS): A RUAS6 is a physical system that includes a RUAV, communication architecture, and a ground control station with no human element7 aboard any component. The RUAS acts on the physical world for the purpose of achieving an assigned mission. Contrary to the UAS definition proposed in the US DoD UAS Roadmap 2010–2035 (Roadmap, 2010), here the human element is not part of the RUAS but rather an external system that interacts with the RUAS; see Figure 3.

6 The plural of RUAS will also be denoted RUAS.
7 See the UAS Roadmap (Roadmap, 2010) for the definition of human element.

Definition 4. Autonomy: The condition or quality of being self-governing. When applied to RUAS, autonomy can be defined as the RUAS's own8 abilities of integrated sensing, perceiving, analyzing, communicating, planning, decision-making, and acting/executing, to achieve its goals as assigned by its human operator(s) through a designed human–robot interface (HRI) or by another system that the RUAS communicates with.

8 "Own" implies independence from human or any other external systems.

Definition 5. Autonomous RUAS: A RUAS is defined to be autonomous relative to a given mission (relational notion) when it accomplishes its assigned mission successfully, within a defined scope, with or without further interaction with human or other external systems. A RUAS is fully autonomous if it accomplishes its assigned mission successfully without any intervention from a
human or any other external system while adapting to operational and environmental conditions.

Definition 6. Autonomy Level (AL): The term "autonomy level" is used in different contexts in the research community. In Huang (2008) and Huang, Messina, & Albus (2007), for example, AL is equivalent to human independence (HI). In this paper, AL is defined as a set of progressive indices, typically numbers and/or names, identifying a RUAS capability for performing autonomously assigned missions. A RUAS's AL can be characterized by the missions that the RUAS is capable of performing (mission complexity or MC), the environments within which the missions are performed (environment complexity or EC), and independence from any external system including any human element (external system independence or ESI). Note that this AL definition is similar to the contextual autonomous capability (CAC) definition in the NIST ALFUS framework (Huang, 2008), except for HI, which is replaced here by ESI.

As in Clough (2002) and Merz (2004), we make a distinction between automatic, autonomous, and intelligent systems. An automatic system will do exactly as programmed because it has no capabilities of reasoning, decision-making, or planning. An autonomous system has the capability to make decisions and to plan its tasks and path in order to achieve its assigned mission. An intelligent system has the capabilities of an autonomous system plus the ability to generate its own goals from inside by motivations and without any instruction or influence from outside. In this paper, we are interested in automatic and autonomous systems; intelligent systems are out of the scope of this paper because such systems do not yet exist for UAS. Generally, we do not want an intelligent system, but an autonomous system that does the job assigned.

3.2. Autonomy Levels and Metrics

From reviewing the RUAS literature, it became evident that there is an overall need for a comprehensive framework that allows RUAS practitioners, particularly researchers, to evaluate and characterize the autonomous capabilities of RUAS. The next paragraph gives a brief overview of autonomy-related works, which are also some of the sources that were consulted for developing the ALFURS framework.

Many of the autonomy articles use Sheridan's work (Sheridan, 1992) as a reference for initial understanding of autonomy and human–computer interaction. In his book (Sheridan, 1992), Sheridan proposed a 10-level scale of degrees of autonomy based on who makes the decision (machine or human) and on how those decisions are executed. In 2000, Parasuraman, Sheridan, and Wickens (2000) introduced a revised model of autonomy levels based on four classes of functions: information acquisition, information analysis, decision and action selection, and action implementation. Other relevant concepts and results have been developed by academia, especially from the human–machine interaction and artificial intelligence areas (Castelfranchi & Falcone, 2003; Zeigler, 1990), as well as by NASA and the military,9 using mainly the OODA (Observe, Orient, Decide, and Act) loop. A more fully developed framework for defining autonomy levels for unmanned systems (ALFUS) has been proposed by an NIST-sponsored ad hoc workgroup (Huang, Messina, & Albus, 2007). In the ALFUS framework, the autonomy level, later renamed contextual autonomous capability (CAC), is measured by weighting the score of various metrics for three aspects, or axes, which are human independence (HI), mission complexity (MC), and environmental complexity (EC). In 2002, the U.S. Air Force Research Laboratory (AFRL) presented the results of a research study on how to measure the autonomy level of a UAV (Clough, 2002). The result of this study is the autonomous control levels (ACL) chart, where 11 autonomy levels have been identified and described. The autonomy level is determined using the OODA concept, namely, perception/situational awareness (observe), analysis/coordination (orient), decision-making (decide), and capability (act). A few other papers also briefly discussed UAS autonomy (Fabiani, Fuertes, Piquereau, Mampey, & Teichteil-Königsbuch, 2007; Lacroix, Alami, Lemaire, Hattenberger, & Gancet, 2007; Merz, 2004; Mettler et al., 2003; Roadmap, 2010), but to our knowledge there have been no other papers published about metrics and autonomy levels for UAS in general and RUAS in particular.

9 Future combat systems (FCS) program, autonomous collaborative operations (ACO) program, etc.

Although the NIST ALFUS and AFRL ACL frameworks provide significant insight and progress in the field of unmanned system autonomy characterization and evaluation, they are difficult to apply directly to RUAS, especially from the research perspective. Indeed, the AFRL ACL chart is most useful and applicable to relatively large UAS operating at high altitudes in obstacle-free environments. Furthermore, the used metrics are military scenario-oriented and are based on the OODA loop, originally developed by the military to illustrate how to take advantage of an enemy. On the other hand, ALFUS is a generic framework covering all unmanned systems, and its application to RUAS is not straightforward. It is also important to note that autonomy metrics and taxonomies have evolved and expanded in theory and practice since then. In this section, we attempt to address the challenge of RUAS autonomy characterization by proposing the autonomy levels for unmanned rotorcraft systems (ALFURS) framework. Based on research, including NIST ALFUS and AFRL ACL studies, and the desire to have a research-oriented autonomy framework that better suits RUAS operating at low altitudes and in cluttered environments, the ALFURS framework was
developed. ALFURS is based on the RUAS onboard functions that enable its autonomy. These autonomy-enabling functions (AEF) can be regrouped into three main categories: guidance, navigation, and control (GNC). Before elaborating this concept, let us first define GNC systems and their relevant components or AEF when related to RUAS, as well as their interaction in a typical RUAS autonomy software implementation; see Figure 3. Indeed, we found in our literature review that GNC terms are commonly used in UAS research, but they are rarely defined and are sometimes mistakenly used.

Definition 7. Automatic Flight Control System (AFCS): Automatic control10 can be defined as the process of manipulating the inputs to a dynamical system to obtain a desired effect on its outputs without a human in the control loop. For RUAS, the design of flight controllers consists of synthesizing algorithms or control laws that compute inputs for vehicle actuators (rotors, aileron, elevator, etc.) to produce torques and forces that act on the vehicle in controlling its 3D motion (position, orientation, and their time derivatives). The AFCS, also called the autopilot, is thus the integrated software and hardware that serve the control function as defined.

10 In the remainder of the paper, the term control refers to automatic control.

Definition 8. Navigation System (NS): In the broad sense, navigation is the process of monitoring and controlling the movement of a craft or vehicle from one place to another. For RUAS, navigation can be defined as the process of data acquisition, data analysis, and extraction and inference of information about the vehicle's states and its surrounding environment with the objective of accomplishing assigned missions successfully and safely. This information can be metric, such as distances; topological, such as landmarks; or any other attributes that are useful for mission achievement. The main autonomy-enabling functions of a navigation system, from lower to higher level, are as follows:

• Sensing: A sensing system involves one or a group of devices (sensors) that respond to a specific physical phenomenon or stimulus and generate signals that reflect some features of or information about an object or a physical phenomenon. Sensors such as gyroscopes, accelerometers, magnetometers, static and dynamic pressure sensors, cameras, and LIDARs are commonly used onboard UAS to provide raw measurements for state estimation and perception algorithms.
• State Estimation: This concerns mainly the processing of raw sensor measurements to estimate variables that are related to the vehicle's state, particularly those related to its pose and motion, such as attitude, position, and velocity. These estimates can be absolute or relative. Localization is a particular case of state estimation that is limited to position estimation relative to some map or other locations.
• Perception: RUAS perception is the ability to use inputs from sensors to build an internal model of the environment within which the vehicle is operating, and to assign entities, events, and situations perceived in the environment to classes. The classification (or recognition) process involves comparing what is observed with the RUAS's a priori knowledge (Huang, 2008). Perception can be further divided into various functions on different levels such as mapping, obstacle and target detection, and object recognition.
• Situational Awareness (SA): The notion of SA is commonly used in aviation systems, and numerous definitions of SA have been proposed. In this paper, we adopt Endsley's definition (Endsley, 1999) of SA as "the perception of elements in the environment within a desirable volume of time and space, the comprehension of their meaning, and the projection of their status in the near future."11 SA therefore is higher than perception because it requires the comprehension of the situation and then the extrapolation or projection of this information forward in time to determine how it will affect future states of the operational environment.

11 This definition is also similar to the one used in the ALFUS framework (Huang, 2008).

Definition 9. Guidance System (GS): A guidance system can be defined as the "driver" of a RUAS that exercises planning and decision-making functions to achieve assigned missions or goals. The role of a guidance system for RUAS is to replace the cognitive processes of a human pilot and operator. It takes inputs from the navigation system and uses targeting information (mission goals) to make appropriate decisions at its high level and to generate reference trajectories and commands for the AFCS at its low level. GS decisions can also spark requests to the navigation system for new information. A guidance system comprises various autonomy-enabling functions including trajectory generation, path planning, mission planning, and reasoning and high-level decision making.

• Trajectory Generation: A trajectory generator has the role of computing different motion functions (reference position, reference heading, etc.) that are physically possible, satisfy RUAS dynamics and constraints, and can be directly used as reference trajectories for the flight controller. Reference trajectories can be preprogrammed, uploaded, or generated in real time onboard the RUAS (dynamic trajectory generation) according to the outputs of higher-level guidance modules.
• Path Planning: The process of using accumulated navigation data and a priori information to allow the RUAS to find the best and safest way to reach a goal position/configuration or to accomplish a specific task. Dynamic path planning refers to onboard, real-time path planning.
• Mission Planning: The process of generating tactical goals, a route (general or specific), a commanding structure, coordination, and timing for a RUAS or a team of unmanned systems (Huang, 2008). The mission plans can be generated either in
advance or in real time. They can be generated by operators or by onboard software systems in either centralized or distributed ways. The term "dynamic mission planning" can also be used to refer to onboard, real-time mission planning.
• Decision Making: The RUAS's ability to select a course of actions and choices among several alternative scenarios based on available analysis and information. The decisions reached are relevant to achieving assigned missions efficiently and safely. Decision-making processes can differ in type and complexity, ranging from low (e.g., fly home if the communication link is lost) to high-level decision making. Trajectory generation, path planning, and mission planning also involve some decision-making processes.
• Reasoning and Cognizance: The RUAS's ability to analyze and reason using contextual associations between different entities. These are the highest-level AEF that a RUAS can perform, with varying levels of augmentation or replacement of the human cognitive process. Reasoning and cognizance occur prior to the point of decision making. Note that the transition from high-level navigation (situational awareness) to high-level guidance (reasoning and cognizance) is of course quite blurry.

For better understanding of these GNC-related definitions, the reader is encouraged to read the NIST document (Huang, 2008) for more details about the meaning of key terms such as "mission," "goal," "operator," and "environment." Figure 3 shows a simple block diagram of key GNC functions and their interaction in a typical autonomous RUAS. Traditionally, GNC has always been the bottom blobs of Figure 3, i.e., flight control, state estimation, and trajectory generation/waypoint navigation. However, many of the research programs today are geared toward realizing all of the GNC functions in Figure 3 onboard the RUAS.

ALFURS levels are determined based on degrees of RUAS involvement and effort in performing AEF or GNC functions. The general trend may be that RUAS autonomy level increases when the levels of GNC functions increase. In other words, autonomy level is higher when the GNC systems include high-level AEF functions, and they are performed by the RUAS to a greater extent. Because the main focus of this paper is not on autonomy characterization, this concept12 will not be elaborated in detail. However, it is important to note that there is a direct correspondence between GNC functions or systems and the MC, EC, and ESI metrics used in the ALFUS project. Therefore, it is possible to establish GNC metrics by mapping ALFUS metrics to the ALFURS framework. Indeed, to achieve a complex mission in a complex environment without any interaction with an external system, the RUAS needs higher levels of GNC. One of the primary motivations for using GNC as aspects or axes for characterizing the autonomy level of RUAS is to use terms and concepts that are familiar to the UAS research community. Indeed, we are interested in a framework that describes the autonomy levels in a simple but meaningful way, so that it can easily be understood and used by other researchers. The intent of this GNC-based ALFURS framework is also to help categorize the RUAS literature and research presented in Sections 4, 5, and 6. Differentiating among consecutive autonomy levels is not trivial and may even be subjective. On the other hand, autonomy levels need to be distinguished to be useful for evaluation and comparison, and to be easily usable by the research community. Therefore, an 11-level13 scale of autonomy, shown in Figure 4, was proposed, based on a gradual increase (autonomy as a gradual property) of GNC functions and capabilities. Main or key GNC functions that enable each autonomy level are verbally described, along with their correspondences with MC, EC, and ESI metrics (illustrated by color gradient). The GNC category aspect of this scale is advantageous because it helps RUAS developers to easily and correctly determine the autonomy level of an existing algorithm or system, but also to identify the AEF needed to achieve a certain autonomy level during the design of a new system.

12 Metrics for measuring the level of GNC functions, and a process for determining the RUAS's level of autonomy using the scores of these various metrics.
13 Eleven levels, to be consistent with the AFRL ACL chart and NIST ALFUS levels.

Observation 1. Although the ALFURS framework was proposed for RUAS, it can be used for UAS in general.

In addition to autonomy characterization and evaluation, two other aspects are important for comparing and evaluating RUAS or autonomy technologies: performance and dependability. Autonomy is related to what the RUAS can do (MC, EC, ESI), performance is related to how well the RUAS meets mission requirements (accuracy, time, etc.), and dependability is related to how often the RUAS accomplishes the mission without problems (success rate, failure rate, etc.). In fact, algorithms or GNC systems that are developed to serve the same autonomy level can still be compared and evaluated based on performance and dependability metrics. This is out of the scope of this paper, but such a work would benefit the research community and UAS practitioners in general.

As a direct application of the ALFURS framework, RUAS-related works, reviewed in Sections 4, 5, and 6, are classified based on the GNC aspects and the level of autonomy they are addressing, starting from low-level AEF such as automatic control to high-level functions such as cooperative mission planning.
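To make the interaction of the GNC components of Figure 3 (Definitions 7–9) more concrete, the minimal Python sketch below shows one possible structure of the onboard navigation–guidance–control loop at the lowest autonomy-enabling level (waypoint sequencing and reference tracking). The class names, the toy waypoint logic, and the proportional command are illustrative assumptions only, not code from any surveyed system.

```python
# Minimal sketch of the GNC loop of Figure 3; all names and the toy
# proportional logic below are illustrative assumptions, not the paper's code.
import numpy as np

class NavigationSystem:
    """Sensing + state estimation (here: pretend the measurement is the state)."""
    def estimate_state(self, measurement):
        return np.asarray(measurement, dtype=float)  # [x, y, z]

class GuidanceSystem:
    """Waypoint sequencing / trajectory generation (lowest guidance level)."""
    def __init__(self, waypoints, tol=0.5):
        self.waypoints = [np.asarray(w, dtype=float) for w in waypoints]
        self.tol, self.i = tol, 0
    def reference(self, state):
        if self.i < len(self.waypoints) - 1 and \
           np.linalg.norm(self.waypoints[self.i] - state) < self.tol:
            self.i += 1                      # advance to the next waypoint
        return self.waypoints[self.i]

class FlightController:
    """AFCS placeholder: proportional velocity command toward the reference."""
    def __init__(self, kp=1.0):
        self.kp = kp
    def command(self, state, reference):
        return self.kp * (reference - state)  # commanded velocity vector

# One GNC iteration: navigation -> guidance -> control
nav, guid, ctrl = NavigationSystem(), GuidanceSystem([(0, 0, 5), (10, 0, 5)]), FlightController()
state = nav.estimate_state([0.1, -0.2, 4.9])
print(ctrl.command(state, guid.reference(state)))
```

Higher autonomy levels would replace the waypoint sequencer with onboard path and mission planning and feed perception and situational-awareness outputs back into guidance, as discussed above.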
[Figure 4 residue: the figure is a table of the 11 ALFURS autonomy levels with columns Level, Level Descriptor, Guidance, Navigation, Control, ESI, EC, and MC. It ranges from level 0 (Remote Control, where all data processing, analysis, and control commands come from an external system, mainly a human pilot or operator, i.e., 0% ESI) up to level 10 (Fully Autonomous, with human-level decision-making, fast situational awareness in extremely complex environments, and no intervention from external systems, i.e., 100% ESI), with intermediate descriptors including dynamic mission planning and swarm-level capabilities. The table itself is not reproduced here.]
Figure 4. Illustration of ALFURS autonomy levels as a gradual increase of GNC capabilities and corresponding MC, EC, and ESI.
Acronyms: ESI (external system independence), EC (environment complexity), MC (mission complexity), ES (external system), SA
(situational awareness), RT (real time).
4. FLIGHT CONTROL SYSTEMS

Automatic flight control is a key autonomy-enabling function that increases the RUAS autonomy level from "level 0" to "level 1," as shown in Figure 4. Different control architectures and algorithms have been developed for full-scale manned helicopters and unmanned rotorcraft. Traditional control systems for manned helicopters have been primarily stability augmentation systems (SAS), which are concerned with attitude or altitude control. In addition to SAS, flight control systems developed for RUAS also include velocity/position control, heading control, 3D trajectory tracking, etc. Although RUAS control systems are often based on the control methodologies used for manned aerial vehicles, various other control techniques have been developed in academia in the last decade. Indeed, the RUAS control problem has attracted the attention of many researchers from both the control and robotics communities, because it presents interesting control challenges and an excellent opportunity for developing and testing new control design methodologies. Existing RUAS flight controllers can be classified into three main categories, as in Figure 5: (1) learning-based control methods, (2) linear flight control systems, and (3) model-based nonlinear controllers. To better understand the differences between these control techniques, we first briefly describe the RUAS modeling problem and related works in this area.

4.1. RUAS Dynamics Modeling and Identification

RUAS belong to the class of underactuated mechanical systems, which have fewer control inputs than state variables. Control design for RUAS is generally based on their dynamical model, which features high nonlinearities and strong couplings between different subsystems. Modeling is thus a crucial stage in designing flight controllers for RUAS. Furthermore, it is important to have high-fidelity models for simulation purposes. A model of RUAS dynamics is a mathematical representation that links its inputs to its outputs, as shown in Figure 6. It can be divided into four different subsystems: rigid-body dynamics, the force and torque generation mechanism, rotor aerodynamics and dynamics, and actuator dynamics. The development of an accurate model that includes the flexibility and flapping of rotors and fuselage aerodynamics is very complex (Prouty, 1995). For control design purposes, the RUAS is generally considered as a rigid body evolving in 3D space with a mechanism for generating force and torque vectors. The rigid-body dynamics is generally described by the Newton–Euler equations of motion, or energy-oriented approaches such as the Lagrange formulation. The rigid-body equations of motion can be expressed in the body frame or in the inertial frame, and can have different model structures and parameterizations. The force and moment generation mechanism, shown in Figure 6, is not a dynamical system but a process for computing the resultant force and torque vectors experienced by the rigid body. These force and moment vectors (F, M) depend mainly on the thrust T and torque Q generated by each rotor, geometrical parameters, and the orientation of each produced thrust. The orientation of thrust T is controlled by servomotors for tilting the blades (swashplate mechanism) or the rotor itself. Most published papers on modeling and control of RUAS, especially for small platforms such as quadrotors, use this simple model for control design (Bisgaard, Cour-Harbo, & Bendtsen, 2010; Castillo, Dzul, & Lozano, 2004; Frazzoli, Dahleh, & Feron, 2000; Guenard, Hamel, & Mahony, 2008; Johnson & Kannan, 2005; Kendoul, Yu, & Nonami, 2010; Penga et al., 2009; Qi, Song, Dai, Han, & Wang, 2010).
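For reference, the rigid-body part of this model — the Newton–Euler equations of motion mentioned above, here expressed in the body frame — can be written in the following standard textbook form (a generic sketch, not the specific parameterization used by any of the cited works):

```latex
% Body-frame Newton--Euler rigid-body dynamics driven by the resultant
% force F and moment M of the force/torque generation mechanism.
\begin{align}
  m\dot{\mathbf{v}} + \boldsymbol{\omega}\times(m\mathbf{v}) &= \mathbf{F} + m\mathbf{R}^{\top}\mathbf{g},\\
  \mathbf{J}\dot{\boldsymbol{\omega}} + \boldsymbol{\omega}\times(\mathbf{J}\boldsymbol{\omega}) &= \mathbf{M},\\
  \dot{\boldsymbol{\xi}} = \mathbf{R}\mathbf{v}, &\qquad \dot{\mathbf{R}} = \mathbf{R}\,[\boldsymbol{\omega}]_{\times},
\end{align}
```

where m and J are the vehicle mass and inertia matrix, v and ω are the body-frame linear and angular velocities, ξ is the inertial position, R is the body-to-inertial rotation matrix, g is the gravity vector, [ω]× denotes the skew-symmetric matrix of ω, and (F, M) are the resultant force and moment produced by the rotors, as in Figure 6.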
For more accurate models, the rigid-body model is essentially extended or augmented with simplified rotor dynamics and aerodynamics, using a combination of momentum and blade element theory. Indeed, detailed modeling of rotor dynamics and aerodynamics can be extremely complex, because the rotor itself is a multibody system. Furthermore, the produced aerodynamic forces and torques depend on operating conditions and vehicle motion (coupling). Examples of rotor modeling can be found in Prouty's book (Prouty, 1995) for full-scale helicopters and in Mettler's book (Mettler, 2003) for small unmanned rotorcraft. For small-scale RUAS, other dynamics can have major effects on the vehicle response. Indeed, it is necessary to model the dynamics of actuators (swashplate servomotors, gasoline engines, electrical motors, etc.) and stabilizing bars (for helicopters) to better improve model fidelity, especially for simulation purposes.

There is considerable published research on modeling of unmanned rotorcraft. The most popular techniques used for RUAS modeling are the first-principles technique, the system-identification technique, or a combination of both approaches. As described in Mettler (2003), the first-principles modeling technique is a physical approach that involves deriving the mathematical equations of motion using the fundamental laws of mechanics and aerodynamics. It requires considerable knowledge of all the phenomena involved in rotorcraft flight. The resulting models are typically nonlinear and coupled and describe the rotorcraft dynamics in a large portion of its flight envelope. However, they contain dozens of unknown physical parameters such as geometrical data and aerodynamics coefficients. In our literature survey, we found that researchers have employed a variety of dynamic models with different degrees of fidelity and accuracy (nonlinear fully coupled, nonlinear semicoupled, nonlinear decoupled, linear coupled, linear decoupled, etc.). These models are obtained by using a first-principles approach and then by applying model simplification techniques such as Jacobian linearization (Taylor series), feedback linearization (also called dynamic inversion), or decoupling. As mentioned in the previous paragraph, rigid-body models are most often used for control design without comprehensive validation against flight data (Bisgaard et al., 2010; Castillo et al., 2004; Frazzoli et al., 2000; Guenard et al., 2008; Johnson & Kannan, 2005; Kendoul et al., 2010; Penga et al., 2009; Qi et al., 2010). Among the few works that have addressed the problem of physical parameter estimation and experimental validation of the obtained models are Gavrilets, Mettler, & Feron (2001), Bhandari, Colgren, Lederbogen, & Kowalchuk (2005), and Nonami, Kendoul, Suzuki, & Wang (2010). Indeed, Gavrilets et al. (2001) developed a 17-state nonlinear dynamic model of a small aerobatic helicopter and documented the experimental methods used for the estimation of the model's parameters. Experimental validation of the proposed model showed that it is valid up to high speeds for a variety of flight conditions. Another interesting method for modeling the dynamics of a small unmanned helicopter (Thunder Tiger Raptor 50) was proposed by Bhandari et al. (2005). They developed a 6-DoF linear-parameter-varying (LPV) dynamic model based on stability and control derivatives derived from rigid-body equations of motion. It employs the state-space representation of the equations of motion and accounts for the coupling between longitudinal and lateral dynamics of the helicopter. The developed model was evaluated against flight data and the CIFER model, and the obtained results were encouraging. At Chiba University, Japan, different linear and nonlinear models have been developed for a variety of RUAS using first-principles or system-identification techniques. Chapter 5 of Nonami et al. (2010) describes the derivation of an analytical linear model for unmanned helicopters, which includes rigid-body dynamics, main rotor dynamics and aerodynamics, fuselage dynamics, and stabilizer bar dynamics. By collaborating with the helicopter's manufacturer, Hirobo Limited, the physical parameters of three unmanned helicopters, SST-Eagle, SF40, and Sky Surveyor, were estimated from platform specifications and refined manually. The derived models have been validated experimentally for the three platforms, and simulation data showed a good match with flight data.

On the other hand, the system-identification technique is essentially a data-fitting process that uses experimental input–output data to produce a mathematical linear model (typically based on transfer functions) of the RUAS dynamics. Shim, Kim, and Sastry (2000) used the time-domain analysis tool from the Matlab System Identification Toolbox to identify a linear model of the Yamaha R-50 helicopter from experimental data. However, effective and popular rotorcraft identification methods are those based on the frequency-domain technique, such as the CIFER tool. CIFER14 (Tischler and Cauffman, 1992), developed by the Army/NASA Rotorcraft Division, is one of today's standard tools for identifying linear models of rotorcraft. Some researchers from academia have applied CIFER to identify models of different RUAS configurations (Nonami et al., 2010). A more rigorous and well-documented work about the application of the CIFER tool for small RUAS modeling is probably Mettler's work (Mettler, 2003; Mettler, Tischler, & Kanade, 2002). Mettler developed and identified parameterized linear state-space models for hover and cruise flight conditions for the Yamaha R-50 and MIT's X-Cell 60 unmanned helicopters (Mettler, 2003). Key dynamics of both flight conditions are accurately captured with a minimum level of complexity. Although these linear models have been successful for simulation and control around hover and around forward flight, they are valid only within a certain range of the nominal operating point. Furthermore, they naturally lack the expressiveness to comprehensively capture nonlinear aspects of rotorcraft dynamics.

14 Comprehensive identification from frequency responses.
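As a toy illustration of the frequency-domain identification idea underlying tools such as CIFER, the sketch below fits a single first-order transfer function to synthetic frequency-response data by least squares. The model structure, the synthetic data, and the use of SciPy are assumptions for illustration only; real rotorcraft identification involves MIMO state-space models, flight-test frequency sweeps, and coherence weighting.

```python
# Toy frequency-domain fit: estimate gain K and time constant tau of
# G(s) = K / (tau*s + 1) from (synthetic) frequency-response data.
# Illustrative only; not the CIFER algorithm.
import numpy as np
from scipy.optimize import least_squares

w = np.logspace(-1, 2, 60)                       # rad/s, assumed excitation range
true_K, true_tau = 2.0, 0.25
H_meas = true_K / (1j * w * true_tau + 1)        # stand-in for sweep-derived data
H_meas += 0.02 * (np.random.randn(w.size) + 1j * np.random.randn(w.size))

def residual(p):
    K, tau = p
    H_model = K / (1j * w * tau + 1)
    err = H_model - H_meas
    return np.concatenate([err.real, err.imag])  # fit real and imaginary parts

K_hat, tau_hat = least_squares(residual, x0=[1.0, 0.1]).x
print(f"identified K = {K_hat:.3f}, tau = {tau_hat:.3f}")
```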
[Figure 6 residue: block diagram of the RUAS dynamics model — the flight control laws command the actuators (PWM/voltage inputs, servo angles δ and rotor speeds ω); each rotor's aerodynamics produces a thrust T and torque Q; the force and torque generation mechanism combines them into the resultant force vector F and torque vector M acting on the 6-DOF rigid body (position, velocity, angles, angular rates); external disturbances (Fe, Me) and unmodeled dynamics (Fu, Mu) also act on the rigid body. Graphic not reproduced.]
To overcome this problem, La Civita, Messner, and Kanade (2002) developed a modeling technique called modeling for flight simulation and control analysis (MOSCA), which integrates first-principles and system-identification techniques through the use of global optimization methods in the frequency domain. Unknown or uncertain physical parameters of the nonlinear model are automatically tuned to match frequency responses from flight data and from multiple operating points. When applied to the CMU Yamaha R-50 helicopter, the MOSCA technique yields real-time 30-state accurate linear and nonlinear models, which have been used to design an H∞ controller (La Civita, Papageorgiou, Messner, & Kanade, 2006). Another interesting work that combined first-principles and system-identification approaches for modeling the dynamics of a small aerobatic helicopter (XCell Tempest) was presented in Abbeel, Coates, and Ng (2010). Unknown parameters of a simple nonlinear model of the rigid-body dynamics were identified from flight data by optimizing the prediction accuracy in the time domain. This modeling approach provided good simulation accuracy in flight regimes around level flight, but it still exhibited large prediction errors during simulation of aggressive aerobatic maneuvers. To compensate for this problem, the baseline nonlinear model was refined by learning (identifying) local corrections to the model parameters using time-aligned trajectories of the desired maneuver. Experimental results showed that the resulting models are sufficiently accurate to develop controllers for highly aggressive aerobatic maneuvers; see Section 4.2.2.

4.2. Learning-Based Flight Control Systems

The main characteristic of this control scheme is that the rotorcraft dynamics model is not used, but several trials and flight data are needed to train the system. Among the methods used, fuzzy logic, human-based learning, and neural networks are the most popular.

4.2.1. Fuzzy-Logic-Based Controllers

Fuzzy logic has been successfully applied to control different rotorcraft configurations. The idea is to translate the information and knowledge used by human pilots into rules that can be used by a fuzzy control system. The pioneering work by Sugeno, Griffin, and Bastian (1993) and Sugeno, Howard, Isao, and Satoru (1995) applied model-free fuzzy control to a Yamaha R-50 unmanned helicopter. The fuzzy controller was organized hierarchically, with modules for primitive control inputs in a lower layer that could be activated by basic flight mode modules in an upper layer. A combination of expert knowledge and training data was used to generate and adjust the fuzzy rule base. The experimental results showed that the fuzzy-controlled helicopter was able to execute basic flight behaviors such as hovering, forward flight, and climbing turns. A similar approach was developed by Montgomery and Bekey (1998) for control of the USC AVATAR helicopter. A hierarchical behavior-based control architecture was used, with each behavior implemented as a hybrid fuzzy logic controller (FLC) and general regression neural network controller (GRNNC). Fuzzy control rules were generated automatically from training data using a model-free "teaching by showing" methodology. The GRNNCs were incrementally built and modified whenever the controller did not meet performance criteria, enhancing the control of the FLCs. This hybrid flight controller was validated in simulations, but it turned out that it was inadequate when flight-tested on an actual robotic helicopter. There are also some works on applying fuzzy controllers to full-scale helicopters. For example, the work presented in Phillips, Karr, and Walker (1996) demonstrated the feasibility of replacing the human pilot on a UH-1H helicopter with a fuzzy flight control system for some standard maneuvers such as hovering, forward flight, and coordinated turning. Effective rules for the fuzzy controller were computed using a genetic algorithm and a numerical model of the UH-1H helicopter.
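The following minimal sketch illustrates the basic mechanics of such a rule-based fuzzy controller for a single axis: triangular memberships over the attitude error, constant (zero-order Sugeno style) rule outputs, and weighted-average defuzzification. The breakpoints and rule outputs are invented placeholder values, not rules from any of the cited controllers.

```python
# Minimal single-axis fuzzy controller sketch (zero-order Sugeno style).
# Membership breakpoints and rule outputs are made-up illustrative values.
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pitch_command(error_deg):
    # rule base: (membership over the attitude error, constant command)
    rules = [
        (tri(error_deg, -30.0, -15.0, 0.0), -0.5),   # "negative error -> nose down"
        (tri(error_deg, -10.0, 0.0, 10.0), 0.0),     # "small error -> hold"
        (tri(error_deg, 0.0, 15.0, 30.0), 0.5),      # "positive error -> nose up"
    ]
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0             # weighted-average defuzzification

print(fuzzy_pitch_command(7.0))   # command between 0 and 0.5
```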
In a recent paper, Garcia and Valavanis (2009) presented an implementation of a robotic helicopter test bed based on an RC Maxi-Joker II helicopter. The developed flight control system includes four fuzzy controllers for pitch–longitudinal motion, roll–lateral motion, yaw, and collective–vertical control. These decoupled fuzzy controllers were developed in Matlab using Sugeno constant fuzzy logic and a weighted-average defuzzification method. Fuzzy rules were designed based on general RC helicopter flight. The developed fuzzy flight controller was validated in outdoor experiments over 300 autonomous flights including hovering, takeoff and landing, forward flight, and waypoint flight.

4.2.2. Human-Based Learning Techniques

A different learning-based approach, based on analysis of the pilot's execution of aggressive maneuvers from flight test data, was developed by MIT researchers for acrobatic maneuvering of a small robotic helicopter. Gavrilets, Frazzoli, Mettler, Piedmonte, and Feron (2001) presented an analysis of a series of input–output sequences for an aileron roll and a hammerhead maneuver, collected from an instrumented small-scale helicopter performing acrobatic flight. The objective was to understand and extract the input sequences and feedback mechanisms that a human pilot uses to execute aggressive maneuvers. The insight gained in this study has been used to develop and implement an intuitive control logic for autonomous aggressive flight (Gavrilets, Mettler, & Feron, 2004). Conventional multivariable trim trajectory controllers were used before and on exit from the maneuvers. This flight control system was flight-tested with split-S, hammerhead, and 360° axial roll maneuvers, as well as a split-S–hammerhead maneuver sequence, using a small unmanned helicopter. Abbeel et al. (2010) from Stanford University investigated the application of reinforcement (or apprenticeship) learning to aerobatic helicopter flight. The proposed approach is characterized by the following: (1) an apprenticeship learning algorithm (expectation maximization) is used to extract the intended maneuver trajectory from multiple suboptimal expert demonstrations; (2) the helicopter dynamic model is obtained by first building a baseline dynamic model using a mix of first-principles modeling and fitting to flight data, and then learning a high-accuracy dynamics model (local corrections) from the flight data of the desired maneuver; (3) the cost-to-go (reward) function is learned from collected data, and an iterative linear–quadratic regulator (iLQR) and differential dynamic programming (DDP) are used as the flight controller. The flight control system has been demonstrated on an instrumented 90-size XCell Tempest helicopter for which state estimation and control algorithms have been implemented on a ground computer. Experimental results included the autonomous execution of a wide range of maneuvers, including in-place flips, in-place rolls, loops and hurricanes, autorotation landings, chaos and tic-tocs, and complete air shows. These results illustrate that complex control theory is not needed to perform aerobatic and sophisticated-looking maneuvers such as barrel rolls. However, precise, repeatable trajectory tracking may not be achievable with such learning-based techniques.

4.2.3. Neural-Network-Based Controllers

Another interesting method for learning-based control is artificial neural networks (ANN). There are several works that use ANN to identify rotorcraft dynamic models offline or online, as in Dierks and Jagannathan (2010), but most of these works present simulation results only. ANN are generally used to identify some unknowns and are then combined with standard control techniques, as in Johnson and Kannan (2005). In Buskey, Wyeth, and Roberts (2001), an ANN-based controller was developed for helicopter hovering. It uses direct mapping of inertial data to actuator control via a feedforward network using the back-propagation training regime. Partial hovering for several seconds has been achieved with a small helicopter.

Observation 2. Learning-based approaches are promising for helicopter control and have already been demonstrated successfully for achieving several flight maneuvers. Their main advantage is their flexibility for implementation on different platforms, because they are generally model-free. Moreover, some learning methods such as ANN-based controllers allow direct mapping between navigation data and actuator deflections, which may be good for fast reactive behavior and vehicle survivability. However, the stability and robustness of these approaches are difficult to analyze. Furthermore, no extensive experimental evaluation over a wide range of environments and scenarios has been performed yet, compared to standard linear methods such as proportional-integral-derivative (PID) control.

4.3. Linear Flight Controllers

Conventional approaches to flight control and most initial attempts to achieve autonomous helicopter flight have been based on linear controllers such as PID, LQR, and H∞. Indeed, in the late 1960s and early 1970s, the CH-53A full-scale helicopter achieved autonomous waypoint navigation using classical linear control techniques.

4.3.1. Proportional Integral Derivative

One of the most successful and widely used linear controllers is the PID controller. Generally, a hierarchical two-loop architecture is used, assuming a time-scale separation between the inner- and outer-loop subsystems. The inner loop controls the attitude using a single-input single-output (SISO) PID for each axis, and the outer loop is responsible for translational motion control, also using decoupled PID controllers.
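A minimal single-axis sketch of this hierarchical two-loop idea is given below: an outer position PID generates a bounded attitude reference that an inner attitude PID tracks. The gains, limits, and one-axis simplification are illustrative assumptions only, not tuned values from any surveyed autopilot.

```python
# Single-axis sketch of the cascaded PID structure described above:
# an outer position loop outputs a pitch-angle reference that the inner
# attitude loop tracks. Gains are placeholders, not tuned values.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

outer = PID(kp=0.8, ki=0.05, kd=0.3)   # position error -> pitch reference (rad)
inner = PID(kp=4.0, ki=0.5, kd=1.2)    # pitch error -> cyclic/elevator command

def control_step(x_ref, x, pitch, dt):
    pitch_ref = outer.update(x_ref - x, dt)          # outer (slow) loop
    pitch_ref = max(-0.35, min(0.35, pitch_ref))     # limit commanded attitude
    return inner.update(pitch_ref - pitch, dt)       # inner (fast) loop

print(control_step(x_ref=5.0, x=0.0, pitch=0.02, dt=0.01))
```

The time-scale separation assumed in the text is what justifies tuning the two loops independently, with the inner loop several times faster than the outer loop.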
controllers also. These PID-type flight controllers were im- tecture, where a multiple-input multiple-output (MIMO)
plemented and are still in use in most robotic helicopter inner loop stabilizes the helicopter’s attitude, and four
projects that are listed in Figure 2. In this control scheme, independent SISO controllers are responsible for trajec-
the rotorcraft model may not be used, and controller gains tory tracking. All loop designs used an H∞ loop-shaping
can be tuned empirically by trial and error. This is a time- technique. This controller was implemented onboard the
consuming process that can be improved by identifying the Yamaha R-50 helicopter and demonstrated good tracking
rotorcraft dynamic model from flight data and then tuning performance during a set of maneuvers. H∞ has also been
PID controllers in simulation using the identified model, as applied for attitude and altitude control of the ETH coax-
described in Section 4.1. ial micro helicopter (Schafroth, Bermes, Bouabdallah, &
Siegwart, 2010). There are also a number of recent theo-
retical works (without experimental results) (Gadewadikar,
4.3.2. Linear–Quadratic Regulator/Gaussian
Lewis, Subbarao, & Chen, 2008; Gadewadikar, Lewis, Sub-
The Linear–quadratic regulator (LQR) or linear–quadratic barao, Peng, & Chen, 2009; He & Han, 2010) on using H∞
gaussian (LQG) is also a popular optimal control technique for helicopter control.
that has been successfully applied to control several RUAS
configurations. In How et al. (2008), the LQR was used for
accurate orientation and position control of MIT’s RAVEN 4.3.4. Switched Dynamics and Gain Scheduling
quadrotors. At Chiba University, RUAS of different weights To extend the capabilities of linear flight controllers, non-
(2 to 48 kg) achieved stable hovering and accurate trajectory linear dynamics of RUAS can be modeled as a collection
tracking using LQG-based cascaded controllers designed of simplified linear models, with each model represent-
from identified linear models (Nonami et al., 2010; Shin, ing a particular operating regime. This approach is com-
Fujiwara, Nonami, & Hazawa, 2005). In Bergerman, Amidi, monly used to design flight controllers for aerospace sys-
Miller, Vallidis, and Dudek (2007), an LQR controller was tems. Gain scheduling control is the most used technique.
used as an inner loop to stabilize the unstable poles of the An interesting model-based linear controller that used this
identified linear model of the RMAX helicopter. This con- principle was developed by the Army/NASA rotorcraft di-
troller was then combined with a feedback linearization vision and implemented on the Yamaha RMAX helicopter
controller that decoupled the linear dynamics of the lateral, (Takahashi, Schulein, & Whalley, 2008). Control design is
longitudinal, vertical, and heading axes and enabled trajec- based on two linear models of the rotorcraft that are devel-
tory tracking. Experimental results using the RMAX heli- oped by collecting frequency sweeps in hover and forward
copter were presented for an 8-shaped trajectory tracking flights and using the CIFER software. PID-like feedback
with speed 3 m/s. Some of the learning-based controllers blocks were constructed to have high gain at lower frequen-
presented in the previous section also used the LQR tech- cies and low gain at higher frequencies. Gain scheduling
nique (Abbeel et al., 2010; Gavrilets et al., 2004), for design- was used to handle hover-forward transitions. This flight
ing the baseline controller. control system was extensively evaluated in simulations
and flight-tested on the RMAX robotic helicopter to achieve
4.3.3. H∞ hovering, automatic takeoff and landing, trajectory track-
ing, and waypoint navigation. The system is being used in
The H∞ control approach belongs to the family of model-
a variety of research projects, including obstacle field navi-
based robust control design methods that can cope with
gation. An alternate approach that allowed a small quadro-
the problem of parametric uncertainty and unmodeled dy-
tor UAS to perform aerobatic maneuvers was proposed by
namics. It has already been used for the control of a full-
Gillula, Hoffmann, Huang, Vitus, and Tomlin (2011). In this
scale helicopter (Smerlas et al., 1998), and it is also used
method, the behavior of the system is approximated as a
in some commercial autopilots such as the “wePilot” fam-
discrete set of simpler hybrid modes representing the dy-
ily from the Swiss company “weControl”15 . One of the be-
namics in specific portions of the state space. Linear con-
documented H∞ -based controllers to be implemented and
trol tools and reachable sets are then used to design the
flown on an unmanned RUAS is probably the H∞ loop-
control laws and to construct maneuvers that safely tran-
shaping controller developed by La Civita et al. (2006). The
sition through a sequence of modes. This strategy has been
CMU Yamaha R-50 helicopter linear and nonlinear mod-
implemented on a STARMAC quadrotor and successfully
els were obtained by combining first-principles and sys-
demonstrated for a backflip maneuver.
Observation 3. Designing and implementing linear flight controllers is straightforward, and there are many available tools to tune their gains and to analyze their performance and robustness. Moreover, they have been successfully used in aerospace systems and RUAS to achieve a wide range of tasks and maneuvers. However, it is well known that these linear controllers suffer from performance degradation when the helicopter leaves the nominal conditions or performs aggressive maneuvers. From the theoretical point of view, it is also difficult to prove the asymptotic stability of the complete closed-loop system. Despite these limitations, PID, LQR, H∞, and gain scheduling are the most widely accepted methods of flight control.
4.4. Model-Based Nonlinear Flight Controllers

To overcome some of the limitations of linear approaches, a variety of nonlinear flight controllers have been developed and applied to rotorcraft control. These nonlinear flight controllers are generally based on the nonlinear model of the rotorcraft dynamics, obtained using the first-principles technique, with parameter identification in some cases. Among these, feedback linearization (or dynamic inversion), adaptive control, and model predictive control have received much of the attention and have been successfully applied to helicopter control. Other control techniques such as backstepping and nested saturations have also been researched for the control of small and mini RUAS such as quadrotors.

4.4.1. Feedback Linearization

Feedback linearization is a broad class of techniques that are commonly used to control nonlinear systems. It involves the use of a nonlinear transformation (diffeomorphism) that maps the state variables of the system into a new coordinate system in which the dynamics are linear. Linear tools can then be applied, and the result is subsequently converted back into the original coordinates via the inverse transformation. Dynamic inversion is a specific case of feedback linearization control that has been investigated at great length for application to manned and unmanned aircraft. Koo and Sastry (1998) investigated the use of exact and approximate input–output feedback linearization for nonlinear control of an unmanned helicopter. The study showed that exact input–output linearization results in unstable zero dynamics because of the coupling in control inputs. A tracking controller was then synthesized using approximate linearization. However, this controller was verified in simulations only. Researchers from the National University of Singapore have proposed a hierarchical nonlinear flight controller that combines "composite" nonlinear feedback control for the inner loop and dynamic inversion for the outer loop (Penga et al., 2009). The inner loop consists of a linear feedback control law and a nonlinear feedback control law for velocity, attitude, heave velocity, and heading control, using a linear model of a small helicopter identified from flight data. Reference velocities for the inner loop are computed by the outer-loop position controller using a dynamic inversion method. A Raptor 90 helicopter, equipped with the developed autopilot, was able to achieve automatic takeoff and landing, hovering, slithering, pirouetting, vertical turning, spiral turning, etc. In Kendoul, Yu, and Nonami (2010), the design of a nonlinear flight control system and its implementation onboard a 700-g quadrotor are presented. The controller was designed by deriving a mathematical model of the quadrotor dynamics and exploiting its structural properties to transform it into two cascaded subsystems (attitude and translation) coupled by a nonlinear interconnection term. Partial passivation design and inverse dynamics techniques were used to synthesize control laws for each subsystem, thereby resulting in a hierarchical, nonlinear inner- and outer-loop controller. The asymptotic stability of the entire connected nonlinear system was proven by exploiting the theory of cascaded systems. This flight controller was validated through several flight tests, including accurate attitude tracking, automatic takeoff and landing, long-distance flight, waypoint navigation, and spiral trajectory tracking. The same platform equipped with this nonlinear flight controller has also been used for vision-based flight research (Kendoul, Fantoni, & Nonami, 2009; Kendoul, Nonami, Fantoni, & Lozano, 2009; Nonami et al., 2010). Nonlinear controllers based on the dynamic inversion technique can also be used to control aerobatic and aggressive maneuvers, as demonstrated in Mellinger and Kumar (2011). The developed algorithm generates optimal trajectories through a sequence of 3D positions and yaw angles. These trajectories are then accurately tracked using a nonlinear inner- and outer-loop controller. Impressive experimental results are presented for a small quadrotor (navigating with a VICON motion capture system) flying through static and thrown circular hoops.
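To make the inner–outer loop inversion idea concrete, the following Python sketch shows a commonly used small-angle form of the translational inverse dynamics for a quadrotor: position and velocity errors are mapped to a desired acceleration, which is then inverted into a total-thrust command and desired roll/pitch angles. The gains, the frame convention (world frame with z up), and the small-angle assumption are illustrative only and do not reproduce any of the specific controllers cited above.

    import numpy as np

    G = 9.81  # gravity (m/s^2)

    def translational_inverse_dynamics(pos_err, vel_err, yaw, mass, kp=1.5, kd=2.0):
        """Outer loop of a cascaded controller (illustrative small-angle form):
        map position/velocity errors to a desired acceleration, then invert the
        point-mass model to obtain total thrust and desired roll/pitch angles."""
        a_des = -kp * np.asarray(pos_err) - kd * np.asarray(vel_err)  # desired accel (x, y, z)
        thrust = mass * (G + a_des[2])                                # total thrust command
        pitch_des = (a_des[0] * np.cos(yaw) + a_des[1] * np.sin(yaw)) / G
        roll_des = (a_des[0] * np.sin(yaw) - a_des[1] * np.cos(yaw)) / G
        return thrust, roll_des, pitch_des

An attitude inner loop (not shown) would then track roll_des and pitch_des, in the same cascaded spirit as the hierarchical controllers discussed in this subsection.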
4.4.2. Adaptive Control

Dynamic inversion can be vulnerable to modeling errors and uncertainties. Therefore, some robust nonlinear control schemes have been proposed and implemented on unmanned helicopters. Adaptive control is one such robust control technique, which can handle unmodeled dynamics and parametric uncertainties. The UAS group at Georgia Tech is very active in researching adaptive techniques for flight control. They have developed different variants of nonlinear flight controllers and implemented them on various fixed-wing as well as rotary-wing UAS. Their research has focused on the use of a direct neural-network-based adaptive control architecture that compensates for unknown plant nonlinearities in a feedback linearizing control framework. The attitude and translational dynamics are controlled separately, using approximate dynamic inversion and linear controllers for the linearized dynamics. This generic inner–outer loop controller has been augmented by a multilayer neural network that parameterizes the unknown model error and provides online adaptation to minimize the effect of nonlinear parametric uncertainty arising because of approximate inversion. A pseudocontrol hedging (PCH) method was also used to protect the adaptation process from actuator limits and dynamics. Johnson and Kannan (2005) and Kannan (2005) described the theoretical basis of the developed adaptive controller and presented experimental results for trajectory tracking by the Yamaha RMAX helicopter. This adaptive flight controller has also been flown on different RUAS at Georgia Tech, and it is still used as the main autopilot in different research projects, including formation flight and vision-based navigation research. It is available today as a commercial autopilot (Christophersen et al., 2006), and it is also integrated into commercially available autonomous helicopters (http://www.adaptiveflight.com/index.html). Several extensions of the standard adaptive flight controller have been published. In Yavrucuk, Prasad, and Unnikrishnan (2009), for example, the baseline adaptive flight controller has been augmented by an envelope protection system (EPS) that prevents large structural loads on the RMAX helicopter by limiting maximum values of the load factor. The effectiveness of this system was demonstrated by using the RMAX helicopter to perform aggressive maneuvers such as hover-to-hover acceleration–deceleration and E-turn maneuvers at high forward speeds.

There also exist some works on developing and implementing indirect adaptive controllers on small unmanned rotorcraft. Kendoul, Nonami, et al. (2009b) proposed a nonlinear adaptive controller for vision-based flight of a quadrotor UAS. The nonlinear hierarchical controller, described in Kendoul et al. (2010), has been augmented by an adaptive mechanism (observer) that estimates the unknown visual scale factor online by fusing optic flow and inertial measurements. The developed adaptive vision-based autopilot was implemented on a quadrotor UAS and validated in outdoor and indoor autonomous flights. An integrated observer–controller scheme was also used in Bisgaard et al. (2010) to design an adaptive control system for autonomous helicopter slung load operations. The observer uses vision to estimate the length of the suspension system and the position of the slung load, thereby allowing the model to be adapted online. Then a combined feedforward (input shaping) and feedback (delayed feedback) scheme for simultaneous avoidance of swing excitation and active swing damping is used. The performance of this control scheme was evaluated through simulations and laboratory flight tests, and a significant reduction in slung load swing was observed.
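The core of the neural-network augmentation described above can be sketched in a few lines: a single-hidden-layer adaptive element estimates the model error left over by approximate inversion and is updated online from the tracking error. The network size, learning rate, and e-modification damping below are placeholders, and the sketch omits pseudocontrol hedging and the full stability machinery of the cited controllers.

    import numpy as np

    class AdaptiveElement:
        """Single-hidden-layer adaptive element (illustrative): it parameterizes
        the unknown model error and is updated online so that the pseudocontrol
        can cancel it."""

        def __init__(self, n_in, n_hidden, gamma=5.0, sigma_mod=0.1):
            rng = np.random.default_rng(0)
            self.V = rng.normal(scale=0.5, size=(n_hidden, n_in))  # fixed input-layer weights
            self.W = np.zeros(n_hidden)                            # adapted output weights
            self.gamma, self.sigma_mod = gamma, sigma_mod

        def output(self, x):
            # Estimated model error nu_ad, to be subtracted from the pseudocontrol.
            return self.W @ np.tanh(self.V @ x)

        def update(self, x, track_err, dt):
            # e-modification update law: drive the weights with the (scalar)
            # tracking error, with a damping term that keeps them bounded.
            phi = np.tanh(self.V @ x)
            dW = self.gamma * (track_err * phi - self.sigma_mod * abs(track_err) * self.W)
            self.W += dt * dW

The estimated error would be subtracted from the pseudocontrol before inversion, so that the adaptive element cancels what the nominal model misses.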
4.4.3. Model Predictive Control

Another approach to the control of nonlinear systems, which does not rely on dynamic inversion, is model predictive control (MPC), also referred to as receding horizon control (RHC). MPC employs an explicit model of the plant to predict the future output behavior, and the tracking error over a future horizon is then minimized, preferably online, by solving optimal control problems. The UCB Aerobot (BEAR) team has developed a nonlinear model predictive controller for tracking control of unmanned helicopters (Kim, Shim, & Sastry, 2002). The tracking control problem was formulated as a cost minimization problem in the presence of input and state constraints. The minimization problem was then solved with a gradient-descent method, which is computationally light and fast. The proposed flight control system has been successfully implemented on a number of small helicopters and validated in various applications. Shim, Kim, and Sastry (2003b) presented experimental results from waypoint navigation, a probabilistic pursuit–evasion game, and vision-based target tracking using the UCB Yamaha R-50 helicopter. The nonlinear MPC has also been extended to perform LIDAR-based obstacle avoidance (Shim, Chung, & Sastry, 2006), collision avoidance using centralized (Shim & Sastry, 2007) and decentralized architectures (Shim & Sastry, 2006), and formation flight of two unmanned helicopters together with six simulated helicopters (Shaw, Chung, Hedrick, & Sastry, 2007). Formation flight of two unmanned helicopters was also achieved at Chiba University in Japan, using a leader–follower configuration and model predictive control (Nonami et al., 2010, Chapter 9). The Shenyang Institute of Automation (China) has also implemented modified generalized predictive control on the ServoHeli-40 unmanned helicopter (Qi et al., 2010). They have tested and compared the following three flight controllers: generalized predictive control (GPC), stationary increment predictive control (SIPC), and active model-based stationary increment predictive control (AMSIPC). The proposed AMSIPC is a kind of adaptive controller that is composed of two main components: (1) a linear model predictive controller that is designed using a reference model obtained by linearizing the nonlinear dynamics of a helicopter at one flight mode, and (2) an active modeling algorithm that uses the adaptive set membership filter to estimate online the error between the reference model and the actual dynamics of the rotorcraft. Experimental results showed that when the helicopter increases its longitudinal velocity and changes flight mode from hovering to cruising, the AMSIPC controller results in better tracking performance.
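The receding-horizon idea itself is compact enough to sketch. The following Python function builds the prediction matrices for a linear model, minimizes a quadratic tracking cost over an N-step horizon, and applies only the first input before re-solving at the next sample. It is unconstrained for brevity, unlike the constrained gradient-based solver used in the cited helicopter work, and all matrices are assumed to be supplied by the user.

    import numpy as np

    def lmpc_step(A, B, x0, x_ref, N, Q, R):
        """One receding-horizon step for a linear model x+ = A x + B u:
        build the N-step prediction matrices, minimize a quadratic tracking
        cost (unconstrained here), and return only the first input."""
        n, m = B.shape
        Sx = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
        Su = np.zeros((N * n, N * m))
        for i in range(N):
            for j in range(i + 1):
                Su[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
        Qb = np.kron(np.eye(N), Q)
        Rb = np.kron(np.eye(N), R)
        err = np.tile(x_ref, N) - Sx @ x0
        H = Su.T @ Qb @ Su + Rb
        U = np.linalg.solve(H, Su.T @ Qb @ err)
        return U[:m]    # apply the first input only, then re-solve at the next step

Adding input and state constraints and a nonlinear prediction model is what turns this toy into the nonlinear MPC flown on the helicopters above, at a substantially higher computational cost.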
4.4.4. Backstepping Methodology

Backstepping is a well-known recursive methodology for the control of linear and nonlinear underactuated systems. The backstepping technique was originally introduced in adaptive control theory to recursively construct the feedback control law and the associated Lyapunov function for a class of nonlinear systems satisfying certain structural properties. The structural properties of the rotorcraft dynamics model allowed the application of the backstepping technique for designing nonlinear flight controllers for these machines. Indeed, several papers about backstepping-based control of RUAS have been published. However, most of the proposed controllers have been validated in simulations only. Early theoretical work on using backstepping techniques to design nonlinear controllers for RUAS can be found in Frazzoli et al. (2000), Mahony, Hamel, and Dzul (1999), and Olfati-Saber (2001). The control design is generally based on an approximate mathematical nonlinear model of the rotorcraft, and the stability of the closed-loop system is analyzed using Lyapunov theory. Bouabdallah and Siegwart (2005) applied the backstepping and sliding-mode techniques to the control of a mini indoor quadrotor. The proposed strategies have been evaluated in simulations as well as in flight tests for the attitude stabilization of a quadrotor on a test bench. Successful implementations of backstepping-based nonlinear flight controllers on actual RUAS have been reported in Pflimlin, Soures, and Hamel (2006) for a ducted-fan UAS and Guenard et al. (2008) for a mini quadrotor. The hierarchical controller proposed in Pflimlin et al. (2006) exploits the cascade structure of the system to design a position controller that computes the required thrust vector and an attitude controller that tracks the desired angles. The stability of the connected system has been proven, and its performance has been evaluated in a waypoint navigation flight test using the Bertin Technologies Inc. HoverEye ducted-fan UAS. An extension of this flight controller has been implemented on the CEA (French Atomic Energy Commission) quadrotor for image-based visual servoing of the vehicle (Guenard et al., 2008). Based on the kinematics of an image centroid under spherical projection, a visual error signal was computed and used together with IMU data by a backstepping-based nonlinear controller to stabilize the vehicle. A similar approach has also been used in Herisse, Hamel, Mahony, and Russotto (2010) to perform optic-flow-based terrain following.

4.4.5. Nested Saturation Technique

Control system design for RUAS with actuator saturation is an important practical design problem that many previous approaches did not consider. This problem is more significant in the case of small and mini RUAS, where actuator saturation occurs frequently because of aggressive maneuvers (to avoid obstacles) or external disturbances (wind). Actuator saturation limits the operational envelope of the vehicle and may induce instability in the controlled system. Different strategies have been proposed in the literature to handle the problem of RUAS control with saturated inputs, but very few of them have been implemented on actual rotorcraft and demonstrated in real time. Among them, the Georgia Tech adaptive flight controller, previously described, has been augmented by pseudocontrol hedging (Johnson & Kannan, 2005) to cope with actuator limits and dynamics. The Heudiasyc Laboratory at the University of Technology of Compiegne (France) has been very active in designing flight controllers for mini quadrotors with actuator saturations. Their approach, described in Castillo et al. (2004), consists of feedback linearizing the attitude dynamics and applying a nested saturation technique to stabilize the longitudinal, lateral, heave, and yaw dynamics. The resulting four control laws are nonlinear and prevent actuator saturation by sequentially bounding state variables, from angular rates to positions. Nested saturations are a low-gain technique with modest performance, but they provide global stability results in the presence of control input saturations. This control strategy was used to stabilize a mini quadrotor UAS. Experimental results from indoor flights are reported in Castillo et al. (2004). Different variants of this control strategy have been developed and implemented on different rotorcraft configurations (Kendoul, Lara, Fantoni, & Lozano, 2007; Lozano, 2010; Romero, Salazar, & Lozano, 2009).
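The structure of such a nested-saturation law is easy to illustrate. The sketch below applies it to a chain-of-integrators approximation of the longitudinal dynamics (position, velocity, pitch, pitch rate); the particular linear combinations and saturation limits are placeholders chosen for readability, not the proven coefficients of the cited designs.

    import numpy as np

    def sat(x, limit):
        """Scalar saturation, the basic block of the nested design."""
        return np.clip(x, -limit, limit)

    def nested_saturation_pitch_cmd(x, v, theta, q, limits=(0.8, 0.4, 0.2, 0.1)):
        """Nested-saturation law for a chain-of-integrators approximation of the
        longitudinal dynamics (position x, velocity v, pitch theta, rate q).
        Coefficients and limits are illustrative only."""
        l1, l2, l3, l4 = limits
        z1 = sat(x + 3*v + 3*theta + q, l4)   # outermost level: bounds the position term
        z2 = sat(v + 2*theta + q + z1, l3)    # bounds the velocity term
        z3 = sat(theta + q + z2, l2)          # bounds the pitch-angle term
        return -sat(q + z3, l1)               # innermost level: bounded control input

Because the outermost saturation bounds the position term and each inner level adds one faster state, the commanded input stays bounded no matter how large the position error is, which is exactly the property exploited by the low-gain designs above.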
Observation 4. Model-based nonlinear controllers are an interesting alternative for advanced flight control with better flight capabilities. Nonlinear techniques such as feedback linearization, adaptive control, MPC, and backstepping control have been employed successfully by various universities and organizations to control various RUAS. The best-documented and flight-tested nonlinear flight controllers are probably the Georgia Tech adaptive controller and the UCB MPC controller. Nonlinear controllers outperform linear techniques in terms of robustness to unmodeled dynamics and disturbances, tracking accuracy over a wider flight envelope, etc. However, the reported experimental results have not shown significant progress in flying capabilities compared to standard linear controllers. This may be due to the lack of proper implementation and tuning of these controllers, which are even more complex when the model parameters are unknown. Because the theory of many of these nonlinear flight controllers is well developed, we recommend more experimental work in rigorously implementing and extensively flight-testing the developed algorithms.

5. NAVIGATION SYSTEMS

In addition to flight control, designing an autonomous UAS with higher levels of autonomy requires an appropriate navigation system for sensing, state estimation, environment perception, and situational awareness. A definition of navigation and its components from the UAS perspective is provided in Section 3. In this section, we attempt to review the existing navigation systems developed for RUAS and classify them into categories, groups, and classes based on the autonomy level they provide (first level of classification), the sensing technology used (second level of classification), and the algorithmic approach or method used (third level of classification); see Figure 7 for the different levels of classification.

Figure 7. Classification of RUAS navigation systems into sensing, state estimation, and perception, each further divided into classes such as conventional IMU/GPS systems, on-ground vision, visual odometry, LIDAR-based SLAM, target detection and tracking, mapless approaches, mapping-based methods (SLAM, SMAP, SLAD), mission-oriented perception, and mapless obstacle detection using range sensors.
5.1. Sensing Technologies

The first component of a navigation system is sensing, which involves an actual sensory device such as an inertial measurement unit (IMU), a camera, or any other sensor. Sensing capabilities are required for all autonomy levels, as presented in Figure 4. This section is dedicated to a general introduction to sensing technologies commonly used for RUAS navigation. A detailed description of navigation sensors, a comparison of their characteristics and performance, and the state of the art in research and development of sensing technologies for UAS are out of the scope of this survey.

Conventional and basic navigation sensors (the minimal sensor suite) for RUAS include state estimation sensors for flight control, such as an IMU (three gyroscopes, three accelerometers, and three magnetometers) for attitude estimation, a global navigation satellite system (GNSS) for position and velocity estimation, and an altimeter (barometric, LIDAR, radar) for enhancing the height estimation. Most current UAS research platforms and commercial autopilots contain these conventional navigation sensors. Progress in electronics miniaturization and microprocessors has made the integration of these sensors in one small and compact device possible. Indeed, lightweight integrated IMU/GPS/altimeter devices (e.g., Xsens MTi-G, 68 g; SBG IG-500N, 45 g; MicroStrain 3DM-GX3-35, 23 g) with acceptable accuracy and affordable prices are available today.

For perception, ranging sensors and cameras are the sensors used most on RUAS. Cameras or electro-optic sensors are a popular approach for environment sensing because they are light, passive, and compact and provide rich information about the vehicle's self-motion and its surrounding environment. Computer vision is a popular solution for applications that require target detection and tracking, landmark recognition, and other tasks. There exist different types of imaging sensors, such as single cameras, stereo cameras, omnidirectional cameras, and optic flow sensors. The drawback of camera-based approaches is their sensitivity to ambient light and scene texture. Furthermore, the complexity of image processing algorithms makes their real-time implementation on embedded microprocessors challenging.

LIDAR (light detection and ranging) is a suitable device for mapping and obstacle detection, because it directly measures range by scanning a laser beam in the environment and measuring distance through time of flight or interference. Other common terms for LIDAR are LADAR (laser detection and ranging) and laser radar, which are often used in military contexts. LIDAR does not rely on ambient lighting and scene texture. However, LIDAR units are heavier than cameras and energy-consuming (active), and most available off-the-shelf systems use a single-line scan (2D LIDAR). For 3D navigation, these 2D LIDAR units have been mounted on sweeping or spinning mechanisms when used on RUAS and ground robots. A few compact 3D LIDAR units exist, but they are either not commercially available, such as the one from Fibertek Inc. (Scherer, Singh, Chamberlain, & Elgersma, 2008), or they are very expensive and heavy, such as the Velodyne 3D LIDAR. Flash LIDAR units, or 3D time-of-flight cameras, are a promising emerging 3D sensing technology that will certainly increase the perception capabilities of RUAS. Unlike traditional LIDAR devices that scan a collimated laser beam over the scene, a flash LIDAR unit illuminates the entire scene with diffuse laser light and computes the time of flight for every pixel in an imager, thereby resulting in a dense 3D depth image. Recently, several companies have begun offering flash LIDAR units commercially, such as the SwissRanger SR4000 (510 g) from MESA Imaging AG, Canesta 3D cameras, the Kinect sensor (based on the Canesta 3D camera), the TigerEye 3D Flash LIDAR unit (1.6 kg) from Advanced Scientific Concepts Inc., and the Ball Aerospace 5th-generation flash LIDAR.

Radar is the sensor of choice for long-range collision and obstacle detection in larger UAS and manned aircraft. Radar provides nearly all-weather, high-resolution, and broad-area imagery. The problem with radar is that it requires a lot of power, and the localization of the beam requires a large antenna. Radar units are also heavier than the previous sensors, which makes their integration into small RUAS difficult. Some lightweight radar systems are available today, but they are expensive, such as the miniature radar altimeter (MRA) Type 1 from Roke Manor Research Ltd., which weighs only 400 g and has a range of 700 m. We are not aware of any work by an academic research group on using radar onboard RUAS for obstacle and collision avoidance, except the work in Viquerat, Blackhall, Reid, Sukkarieh, and Brooker (2007), which used a fixed-wing UAS. The use of other ranging sensors such as ultrasonic and infrared sensors has been limited to a few indoor flights or ground detection during automatic landing.
5.2. State Estimation

State estimation for RUAS can be defined as the process of tracking the current vehicle 3D pose, which can be absolute or relative to some object or initial location. The design of motion estimation algorithms is necessary for flight control and a crucial step in the development of autonomous flying machines. Typically, state estimation algorithms fuse information from many sources (multiple sensors, preregistered maps, ground segment data, etc.) to estimate the vehicle's attitude, height, velocity, and position, which may be absolute or relative. We classify state estimation algorithms into three main categories based on the sensing technology they use: (1) conventional systems, (2) vision-based systems, and (3) systems that rely on active ranging sensors.

5.2.1. Conventional GPS–Inertial Measurement Unit Systems

UAS normally rely on an IMU and a GNSS such as GPS to provide the flight controller with attitude and position information. These measurements are usually sufficient for UAS operating in obstacle-free environments, such as at high altitudes. An extended Kalman filter (EKF) provides the most common approach to fusing sensor data to estimate the rotorcraft 3D pose. This can be done in one step by using one filter with all the state variables that need to be estimated, as for example in Christophersen et al. (2006) (16-state EKF) and Bristeau, Dorveaux, Vissiere, and Petit (2010) (23-state EKF). Another alternative is to use two cascaded EKFs, one for attitude and heading estimation (attitude and heading reference system) using IMU raw data, and the other for position and velocity estimation using GPS raw measurements and translational accelerations (GPS/INS solution) (Kendoul et al., 2010). Height estimation can be enhanced by incorporating altimeter measurements into the second EKF, or estimated separately using another Kalman filter that fuses altimeter data with vertical accelerations. There are also other approaches for sensor data fusion, such as particle filters and complementary filters (Mahony, Hamel, & Pflimlin, 2008), but they are less popular for UAS state estimation.
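A compact sketch of the second stage of such a cascaded scheme is given below: a position/velocity filter that integrates navigation-frame accelerations (supplied by the attitude filter) in its prediction step and corrects with GPS position fixes. It is a plain linear Kalman filter with placeholder noise levels, not the specific 16- or 23-state designs cited above.

    import numpy as np

    class PositionVelocityFilter:
        """Second stage of a cascaded GPS/INS scheme (illustrative): integrate
        accelerations rotated to the navigation frame and correct with GPS."""

        def __init__(self, dt, q_acc=0.5, r_gps=2.0):
            self.x = np.zeros(6)                                  # [pos(3), vel(3)]
            self.P = np.eye(6)
            self.F = np.eye(6); self.F[:3, 3:] = dt * np.eye(3)   # constant-velocity model
            self.B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])
            self.Q = q_acc * self.B @ self.B.T                    # process noise (placeholder)
            self.H = np.hstack([np.eye(3), np.zeros((3, 3))])     # GPS measures position
            self.R = r_gps * np.eye(3)                            # GPS noise (placeholder)

        def predict(self, accel_nav):
            self.x = self.F @ self.x + self.B @ accel_nav
            self.P = self.F @ self.P @ self.F.T + self.Q

        def update_gps(self, gps_pos):
            y = gps_pos - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x += K @ y
            self.P = (np.eye(6) - K @ self.H) @ self.P

Altimeter measurements could be added as an extra scalar update on the vertical channel, as described in the text.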
2008) and the GRASP Lab. (Michael, Mellinger, Lindsey, &
Kumar, 2010) use this setup in their research on flight con-
5.2.2. Vision-Based State Estimation Systems trol and multivehicle cooperation. Another setup, used in
GNSS-based navigation depends on the existence of, and many research works, consists of one or several ground
access to, signals from satellites. There is a well-recognized cameras and an aerial platform with colored landmarks
need for support and backup systems in case GNSS is not that can easily be tracked from images. Altug, Ostrowski,
available, in particular to reach the requirements for in- and Taylor (2005) used a pair of pan/tilt ground and on-
tegrity and accuracy for use in civilian applications. Ac- board cameras and colored blobs attached to the bottom
cording to Figure 4 a RUAS with non-GNSS state estima- of a quadrotor to estimate the 3D pose of the quadrotor
tion capability has autonomy level 2, which allows it to and to stabilize its motion. In Wang, Song, Nonami, Hi-
achieve more complex missions in a wide range of envi- rata, and Miyazawa (2006), a ground USB-camera was used
ronments. Indeed, many RUAS missions are defined within to track a square marker on the bottom of the Epson 12-g
GNSS-denied environments such as urban and indoor en- micro flying robot and to provide a hovering controller
vironments. To achieve realistic missions in such environ- with attitude and position measurements. This approach
ments, a non-GNSS navigation system is necessary. Vision has been also used for flight control of the Colibri out-
systems are passive and outperform active navigation sys- door helicopter, Figure 8. A trinocular system, composed
tems in terms of cost, weight, power consumption, and size, of three FireWire cameras fixed on the ground, was used
and are therefore an excellent sensing technology for many to estimate the helicopter position and heading by track-
aerial platforms and various environments. Computer vi- ing color landmarks, fixed at the bottom of the helicopter.
sion can be used as part of the state estimation system for This visual state estimator was compared against GPS data
flight control, as a perception system for obstacle and target during manual flights (Martinez, Campoy, Mondragon, &
detection, and as a mission sensor to collect required infor- Olivares-Mendez, 2009) and has also been integrated into
mation. In this section, we are interested in approaches and the flight control system to perform automatic vision-based
algorithms that are used for vision-based state estimation. hovering and landing (Martinez, Mondragon, Olivares-
There is no trivial way of classifying these algorithms, but Mendez, & Campoy, 2010).
we try here to classify them based on the principles and In some research work, ground cameras were used to
approach used to estimate the motion from the general per- directly track a vehicle without using color landmarks on
spective of RUAS navigation. We have identified six main the vehicle. The 3D position of the vehicle was then calcu-
classes of vision-based state estimation systems: lated by triangulation using two or more cameras. At Chiba
University, a ground-based Bumblebee stereo vision sys-
• On-ground vision (OGV) tem was used to estimate the 3D position of a quadrotor
• Visual odometry (VO)
• Target relative navigation (TGRN)
• Terrain/landmark relative navigation (TRN)
• Concurrent estimation of motion and structure (SFM
and SLAM)
• Bio-inspired optic flow navigation (BIOFN).
• On-Ground Vision: A common OGV setup for indoor flight research consists of several quadrotors and a VICON motion-capture system. For example, the MIT Aerospace Control Lab (How, Bethke, Frank, Dale, & Vian, 2008) and the GRASP Lab (Michael, Mellinger, Lindsey, & Kumar, 2010) use this setup in their research on flight control and multivehicle cooperation. Another setup, used in many research works, consists of one or several ground cameras and an aerial platform with colored landmarks that can easily be tracked from images. Altug, Ostrowski, and Taylor (2005) used a pair of pan/tilt ground and onboard cameras and colored blobs attached to the bottom of a quadrotor to estimate the 3D pose of the quadrotor and to stabilize its motion. In Wang, Song, Nonami, Hirata, and Miyazawa (2006), a ground USB camera was used to track a square marker on the bottom of the Epson 12-g micro flying robot and to provide a hovering controller with attitude and position measurements. This approach has also been used for flight control of the Colibri outdoor helicopter (Figure 8). A trinocular system, composed of three FireWire cameras fixed on the ground, was used to estimate the helicopter position and heading by tracking color landmarks fixed at the bottom of the helicopter. This visual state estimator was compared against GPS data during manual flights (Martinez, Campoy, Mondragon, & Olivares-Mendez, 2009) and has also been integrated into the flight control system to perform automatic vision-based hovering and landing (Martinez, Mondragon, Olivares-Mendez, & Campoy, 2010).

In some research work, ground cameras were used to directly track a vehicle without using color landmarks on the vehicle. The 3D position of the vehicle was then calculated by triangulation using two or more cameras. At Chiba University, a ground-based Bumblebee stereo vision system was used to estimate the 3D position of a quadrotor and to achieve autonomous hovering and automatic landing outdoors (Pebrianti, Kendoul, Azrad, Wang, & Nonami, 2010). Autonomous aerobatic flights of an instrumented helicopter were achieved at Stanford University (Abbeel et al., 2010) using two (or more) Point Grey Research DragonFly2 cameras with known positions and orientations on the ground. The accuracy of the estimates obtained was about 25 cm at about 40 m distance from the cameras.

• Visual Odometry: Traditional odometry techniques use data from the movement of actuators to estimate changes in position and/or orientation over time. VO is an incremental procedure that estimates a vehicle's orientation and/or relative position (traveled distance) by analyzing a sequence of images. In the general case, the RUAS position relative to the initial or a known location is computed by integrating over time the flown distances, obtained from tracking visual features in an unknown environment using an onboard imaging system. A typical VO algorithm includes the following main components: (1) detection of salient features that can be robustly tracked across successive images, (2) feature correspondence or tracking between consecutive images, and (3) motion parameter estimation using feature correspondences.

Most VO algorithms need to compute a scale factor to recover the real rotorcraft position and velocity. Another common problem of VO is that position estimates drift over time. IMU/INS data are generally used in the VO algorithm to compensate for rotation effects and to improve the estimation accuracy and robustness. VO approaches reported in the literature vary according to how these challenges are addressed. We distinguish between visual odometers that are based on stereo, monocular, and omnidirectional vision.
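For reference, the three generic components listed above map almost one-to-one onto standard library calls. The sketch below uses OpenCV to detect features in a grayscale frame, track them into the next frame, and recover the inter-frame rotation and (scale-free) translation; the parameter values are arbitrary, and the sketch ignores the IMU aiding and scale recovery discussed above.

    import cv2
    import numpy as np

    def vo_step(prev_img, curr_img, K):
        """One step of the generic VO pipeline: (1) detect salient features,
        (2) track them into the next frame, (3) estimate the inter-frame motion.
        Returns rotation R and a unit-norm translation t (scale is unobservable
        from a single camera, as noted above)."""
        pts0 = cv2.goodFeaturesToTrack(prev_img, maxCorners=400,
                                       qualityLevel=0.01, minDistance=8)
        pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, curr_img, pts0, None)
        good = status.ravel() == 1
        p0, p1 = pts0[good].reshape(-1, 2), pts1[good].reshape(-1, 2)
        E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
        return R, t

Integrating the returned motion over time and resolving the scale factor (from stereo, an altimeter, or an adaptive observer) is what differentiates the systems surveyed below.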
Stereo-vision-based odometry: The concept of visual odometry for RUAS was implemented in the CMU autonomous helicopter (Amidi, Kanade, & Fujita, 1999) using template matching on stereo images. The VO locks onto and tracks arbitrary objects (target templates) in consecutive images and uses attitude measurements and range to objects (sensed by triangulation) in order to compute the position of the helicopter relative to its initial location. This visual odometer, aided by a set of inexpensive angular sensors, provided the only sensing employed to stabilize and maneuver a Yamaha R-50 helicopter (Amidi et al., 1999). Researchers from CSIRO (Buskey et al., 2003; Roberts, Corke, & Buskey, 2003) have also developed a low-cost state estimator for a helicopter using stereo vision and an in-house IMU. The proposed approach relies first on height estimation from stereo disparity, and then on fusing optic flow and inertial data using complementary filters to compute the translational velocities. Experimental results for attitude control and velocity and height tracking were presented using an Xcell 60 helicopter. A stereo visual odometer was also proposed in Kelly, Saripalli, and Sukhatme (2007). The proposed VO is based on feature tracking and helicopter pose estimation using a maximum likelihood estimator. The accuracy of the system was evaluated offline using data collected from manual flights of the AVATAR helicopter. The obtained results demonstrated that VO+IMU can produce a final position estimate within 1% of the measured GPS+IMU position over a flight distance of about 400 m. In Mejias, Campoy, Mondragon, and Doherty (2007), a similar stereo VO system has been implemented and tested on a ground robot and on the COLIBRI helicopter. The implemented stereo VO system tracks features on the ground and computes the helicopter height using the stereo disparity principle. A least-squares algorithm is then used to compute the helicopter rotation and translation between two successive images. The experimental results obtained from a GPS-based flight showed that the mean squared error between GPS and online vision estimates was about 1 m during a 10-m-long flight. Whereas the previous stereo VO systems were used for outdoor flights, the stereo VO system presented in Achtelik, Bachrach, He, Prentice, and Roy (2009) has been implemented on a small quadrotor and flight-tested in an indoor environment. Like the previous systems, it includes feature tracking, 6-DOF motion estimation using the least-squares algorithm, nonlinear motion estimation by bundle adjustment over a window of consecutive frames, and fusion of visual estimates and IMU data using an EKF. A quadrotor UAS, outfitted with a customized stereo camera rig (35-cm baseline) and an IMU, was used to compare the position and velocity estimates from the VO and the VICON system (ground truth). The vision algorithm was implemented on a ground control station, and the flight control was based on the VO estimates. The obtained results showed that the estimates originating from the VO match the ground-truth values closely in both position and velocity, and the bundle adjustment substantially reduced the total error.
Monocular-vision-based odometry: Monocular vision (a single camera) is another alternative to stereo VO. It has been used successfully for VO onboard RUAS, in particular for those of class IV, such as quadrotors, where the payload limitation prevents the use of stereo cameras. The challenge with monocular-based VO is scale factor ambiguity, due to the fact that depth cannot be recovered from images only. In Caballero, Merino, Ferruz, and Ollero (2005) and Caballero, Merino, Ferruz, and Ollero (2009), a homography-based VO technique is proposed. It basically consists of a point-feature tracker that obtains matches between images and a combination of least median of squares and M-estimators for outlier rejection and homography estimation from these matches. Motion parameters (orientation and relative position up to a scale factor) are estimated using the singular value decomposition (SVD) of the homography. This homography-based VO was evaluated offline on monocular image sequences gathered by an unmanned helicopter. Researchers from Draper Laboratory and MIT have developed a vision-aided navigation system for UAS using monocular images (Madison, Andrews, DeBitetto, Rasmussen, & Bottkol, 2007). The proposed system architecture allows the estimation of the UAS absolute position and velocity when GPS is lost. This is achieved by initially estimating the 3D positions of landmarks when GPS is available, and then using these 3D points and visual line-of-sight measurements to these landmarks to continue the estimation of the vehicle pose long after losing both GPS and the features that were localized using GPS. This strategy was validated in simulations as well as in flight tests using the MIT quadrotors and a VICON system to provide pseudo-GPS updates. This VO system was also used in Ahrens, Levine, Andrews, and How (2009) for vision-based flight control and obstacle avoidance in GPS-denied environments. When equipped with the developed VO system, the MIT quadrotor UAS achieved drift-free hover for 96 s and a collision-free forward flight of 5 m. A different approach for estimating the range and recovering the 3D position of a RUAS from monocular images in a visual odometer framework was proposed in Kendoul et al. (2009). The authors proposed a real-time adaptive VO-inertial system that recovers the 3D position of the rotorcraft using an onboard camera, an IMU, and a pressure sensor. The vision algorithm computes the optic flow, tracks visual features, and integrates the image displacement to compute the traveled flight distance in pixels. These visual measurements are then fused with IMU data to compensate for undesired rotation effects. This vision algorithm has been augmented by an adaptive observer that identifies the scale factor (range) from optic flow and INS measurements and recovers the 3D position and velocity of the vehicle. For more accuracy and robustness, a Kalman filter is used to fuse these estimates with inertial data, enhanced by pressure sensor measurements to overcome scale factor identification errors during hovering and slow flights. The performance of the adaptive VO was demonstrated in outdoor and indoor flight tests using a quadrotor MAV and a real-time architecture. The VO estimates were used in a closed loop to control the MAV motion for automatic takeoff and landing, hovering, trajectory tracking, and tracking of a moving ground-based target.

A descent image motion estimation system (DIMES) was developed by NASA JPL and used for horizontal velocity estimation during the Mars exploration rover landings (Johnson, Willson, et al., 2007). Although this VO has been developed for a spacecraft, it has been flight-tested using a helicopter, which makes it suitable for this survey. The DIMES algorithm combined measurements from a descent camera, a radar altimeter, and an IMU. The horizontal velocity is estimated by tracking features in three images after rectifying them to account for changes in scale and orientation. The system was evaluated in Monte Carlo simulations and flight-tested using a full-scale helicopter. Finally, the DIMES system was successfully used to compute and reduce the velocity during the Opportunity and Spirit rover landings on Mars. Two templates are tracked between three images taken at roughly 2000, 1700, and 1400 m above the surface.

Omnidirectional-vision-based odometry: There are also some examples of using omnidirectional vision for helicopter attitude estimation in unstructured environments. In Mondragon, Campoy, Martinez, and Olivares (2010), catadioptric images are used to estimate pitch and roll from skyline detection. A visual compass on appearance images is also proposed to estimate relative yaw by unwrapping the spherical image and finding the best match based on the image column shift using the Euclidean distance. Different tests have been performed on a helicopter in different seasons and conditions. When compared to IMU measurements, the VO estimates have an RMSE of (2.86°, 4.3°, 10.7°) for (roll, pitch, yaw) in the worst case and (0.18°, 1.9°, 1.2°) in the best case.

• Target Relative Navigation: One of the important applications of vision for RUAS is target detection and tracking. The objective is generally to detect a specific target and to estimate its position relative to the rotorcraft. This relative position may be expressed in an inertial frame in meters or in the image frame in pixels. Potential applications of visual TGRN are precise landing on a specified target and tracking of moving air/ground targets such as cars, persons, or other UAS. In contrast to VO approaches, TGRN methods involve the notion of a "target," which must be kept in the camera field of view. This target may be static or moving, a priori known or selected during flight by the operator or an automatic perception system. TGRN systems developed for RUAS can be further divided into four categories: (1) vision-based landing on a known target, (2) vision-based landing on an unknown target, (3) vision-based static target tracking, and (4) vision-based mobile target tracking.

Observation 5. In this paper, we make a distinction between target-based visual state estimation and target-based perception (described in the next section). In the first case, vision-based estimates are directly used as measurements for the controller, replacing GPS data. In the second case, target-based visual perception provides inputs to the guidance system, which generates reference position/velocity commands for the controller, but the control feedback is based on GPS data.
Vision-based landing on a known target: TGRN approaches have been actively researched for automatic landing on artificial landmarks with known attributes such as size, color, and pattern. The landing helipad or target is generally designed to simplify image processing and to allow accurate motion estimation from different viewpoints. Generally, the pattern and size of the target are known, which allows both the orientation and the translation of the helicopter relative to the target to be recovered without scale factor ambiguity. These vision-based landing systems are generally practical when the landing point is fixed or known, such as the base station.

In our literature review, we found that research on vision-based landing of unmanned rotorcraft goes back to the nineties, but only results from simulation or offline processing of real images were presented (Bosse, Karl, Castanon, & Debitetto, 1997). There are also some theoretical works, such as Marconi, Isidori, and Serrani (2002), where the problem of automatic landing of RUAS on an oscillating platform has been approached as a robust nonlinear regulator problem. In Shakernia et al. (2002), a vision algorithm that detects a known target (white squares) and computes the helicopter's relative position and orientation is presented. The algorithm is based on multiple view geometry, which exploits the rank deficiency condition of the multiple view matrix. The proposed system has been implemented on the Yamaha R-50 helicopter, and vision-based hovering (18 s of hovering) above the target was achieved, but no automatic landing was attempted. The work presented in Saripalli, Montgomery, and Sukhatme (2003) combines vision and GPS for automatic landing on a known target. Therefore, it is not considered a pure vision-based landing system but rather a vision-based perception system, which will be described in the next section. The paper by Saripalli and Sukhatme (2003) about vision-based landing on a moving target also uses vision and GPS, and only offline processing results were presented (no automatic landing).

The first experimental results on vision-based automatic landing (without GPS aid) of an unmanned helicopter were published in 2004 by Merz, Duranti, and Conte (2004). Their vision-based landing system is based on a single pan/tilt camera and a specially designed landing pad with black circles on a white board. The image-processing algorithm computes the helicopter's relative position and attitude by minimizing the reprojection error of the extracted center points and semiaxes of the three ellipses. Using a Kalman filter, these estimates are then fused with the IMU data to improve the system robustness, especially when the vision system is "blind" for short periods. Several autonomous landings of the Yamaha RMAX helicopter were conducted from different relative positions, on grass and snow fields, under different wind and illumination conditions. Successful accurate landings were achieved with an average touchdown precision of about 42 cm, a vertical velocity at touchdown that ranged between 18 and 35 cm/s, and a horizontal velocity of about 15 cm/s.

Vision-based landing on an unknown target: Landing on unknown and unprepared sites has also been an active research topic because of its importance for real applications. The first step in vision-based landing in unknown areas is safe landing area detection (SLAD); this will be described in the next section. In this section, we are interested in vision-based landing (without GPS) on the selected target once it is detected by the SLAD system or selected by the operator during flight. In contrast to the previous approaches with known targets, there is no information about the size, color, or shape of the landing area, which makes position estimation challenging because of the scale factor ambiguity, unless another sensor is used. The landing area is generally flat and homogeneous, with poor texture, which makes feature tracking difficult, too. The motion estimation problem here is very similar to the visual odometry problem.

One of the advanced works in this area was carried out in the framework of the precision autonomous landing adaptive control experiment (PALACE) project by the U.S. Army/NASA Autonomous Rotorcraft Program (ARP), Brigham Young University, and the Mobility and Robotics Group of JPL. In Theodore et al. (2006), a vision-based landing system for noncooperative sites without the aid of GPS is presented. The proposed approach includes three main parts: (1) stereo-based mapping for safe landing area detection, (2) monocular-vision-based position estimation, and (3) a flight controller. The position estimation algorithm takes as inputs the feature-tracking data, the LIDAR range to the image center, and the IMU attitude measurements, and outputs the 3D position of the helicopter relative to a fixed point in the selected landing area. Closed-loop vision-based control has been evaluated in flight for a number of different conditions and heights (up to 30 m AGL) using the Yamaha RMAX helicopter. The tracking drift observed in each case was between 15 and 20 cm/min. The complete SLAD and vision-based automatic landing system has also been demonstrated in flight, with a total of 17 successful landings on various surfaces and obstacle fields, with a landing accuracy of better than 1 m. In Yu, Nonami, Shin, and Celestino (2007), a vision-based landing system for unknown flat areas is developed for a robotic helicopter. Stereo vision is used to build a depth image of the area below the helicopter. Assuming that the ground is flat, a least-mean-square (LMS) technique is used for plane fitting and height estimation. The system has been implemented on the Hirobo SR-40 helicopter and demonstrated in vision-based closed-loop flights to perform automatic landings from a height of 5.5 m. The visual odometer described previously in Kendoul et al. (2009) has also been successfully used for tracking and landing on unknown static and moving ground targets. The VO system was extended to include target tracking by simply selecting in the image the desired target as the area where visual features will be tracked. The outputs of the vision system are then the position and velocity of the quadrotor relative to the target. The flight controller regulates these visual estimates to zero, resulting in vision-based target tracking and automatic landing on unknown targets without the aid of GPS.
Vision-based static target tracking: These systems share many characteristics with vision-based landing approaches. However, the objective of the approaches described here is not to land on the target but to fly toward it or hover near it. The target is not necessarily on the ground, because it can be on a vertical wall, for example. An example of such a system is described in Hrabar and Sukhatme (2003), where the objective was to develop a sideways-looking system that allows a helicopter to locate and move toward the centroid of a number of known targets, using omnidirectional vision. A vision-aided inertial navigation system is presented in Proctor and Johnson (2005), where the helicopter's position and attitude relative to a known target (a dark square with known position, orientation, and size) are computed from images and inertial data using an extended Kalman filter. Experimental results are presented from a flight test in which the vision-based EKF estimates were used in the closed-loop control of an RMAX helicopter. The helicopter started at a location 36 m from the target and was given a step input of 2 m up in altitude, a 3-m step parallel to the plane of the window, and finally a 3-m step toward the window, which is a mock square window with an area of 4 m2. In Guenard et al. (2008), image-based visual servoing of a quadrotor UAS is described. Visual features are mapped into control inputs via the inverse of an image Jacobian matrix, without estimation of the rotorcraft position. The target consists of four black marks on the vertices of a stationary planar square. An image-based error signal is regulated to zero using a nonlinear backstepping controller, thereby resulting in position control using vision and angular rates. This visual servoing algorithm was evaluated in real-time experiments using a small quadrotor with a downward-looking camera. The vehicle was tasked to hover 1.4 m above the target (the yaw was controlled manually). The rotorcraft performed a 50-s-long hover flight with a maximum regulation error of 10 cm.

Vision-based mobile target tracking: Vision-based target relative navigation also includes moving ground or air target tracking. The challenge in visually tracking a moving target is that it is difficult to estimate both target and rotorcraft velocities, because only the relative velocity can be estimated. The image processing algorithm also needs to be robust enough to track a dynamic target in image sequences. Several vision-based navigation systems have been developed at Georgia Tech and implemented on RUAS. In Ludington, Johnson, and Vachtsevanos (2006), three vision systems that were developed and tested on the GTMax unmanned helicopter are presented: (1) vision-based state estimation, (2) automated search for stationary ground targets, and (3) mobile ground target tracking. The first system fuses inertial data and visual measurements from the detection of a known dark square on the ground to estimate the helicopter state using an EKF. This vision-aided inertial system has been used in the closed-loop control of the RMAX helicopter and has demonstrated autonomous vision-based hovering and forward flight with good accuracy when compared with DGPS measurements. The second vision-based system is dedicated to the automatic identification of buildings of interest based on some a priori information. When the RMAX helicopter was equipped with that system, it was able to autonomously search an urban region of 15 buildings for a particular building and then identify the openings to the building. The third system was developed for mobile ground target tracking using a particle filter algorithm that tracks the target in the incoming video frames and estimates its relative position. A closed-loop tracking system was tested in simulation and onboard the GTMax helicopter to track a white van from an altitude of 30 m. However, the flight controller relied on GPS data for vehicle stabilization and trajectory tracking. Another vision system has been developed at Georgia Tech for UAS navigation relative to an airborne target using only visual information from a single onboard camera (Johnson, Calise, Watanabe, Ha, & Neidhoefer, 2007). The image processing uses an active contour to calculate the location of the target. Two distinct vision strategies are proposed for range, and hence position, estimation from monocular images. The first method, called center-only relative state estimation (CORSE), tracks only the target center position and uses an EKF to compute the relative position and velocity. To maximize the accuracy of range estimation, optimal control is used to generate a sinusoidal trajectory, making range estimation possible from visual measurements and rotorcraft accelerations or control inputs. The second approach, called subtended angle relative state estimation (SARSE), uses a more computationally demanding image processing algorithm to compute the target center as well as the locations of the wing tips in the image. The range to the aircraft target and its wingspan are estimated with an EKF based on visual measurements and helicopter accelerations (the target acceleration is assumed to be zero). Both techniques have been tested and evaluated in simulations as well as in closed-loop flights. The SARSE algorithm was used to perform relative navigation and formation flying between the GTMax helicopter and the GTEdge fixed-wing UAS. Although successful autonomous flights were achieved, inaccuracies in depth estimation were observed. Later, in Ha, Johnson, and Tannenbaum (2008), the vision-based tracking system was improved by using fast geometric active contours and particle filters for multiple target tracking. However, the shape and size of the targets were assumed to be known in order to simplify range and position estimation from monocular images. A leader–follower formation flight was demonstrated using vision.
optic flow, and inertial data, the position and velocity of TRN systems are also of great interest for pinpoint
both the helicopter and of the target are estimated using or accurate landing of spacecraft on Mars, the Moon, and
an EKF. This vision-based tracking system has been im- other celestial objects, where GNSS systems are not avail-
plemented and tested on the ONERA ReSSAC unmanned able. In Adams, Criss, and Shankar (2008), a passive opti-
helicopter (Yamaha RMAX). Successful target (car) track- cal TRN system for spacecraft landing on the Moon is de-
ing was achieved from an altitude of 40 m and overflight rived from DSMAC. The proposed system uses two orthog-
course of about 50–100 m using vision for target tracking onal cameras and an images-to-map correlation method.
and a guidance law and GPS for flight control and trajec- The NASA JPL computer vision group developed different
tory tracking. Optic-flow-based self-motion estimation is visual TRN systems for Mars landers. In Trawny, Mourikis,
implemented but not yet tested in flight (it is not yet pure Roumeliotis, Johnson, and Montgomery (2007) described
vision-based target tracking). an EKF-based TRN system that fuses IMU data with visual
• Terrain/Landmark Relative Navigation: TRN refers to a observations of a priori mapped landmarks such as craters
vehicle’s position and/or velocity estimation by compar- to estimate the spacecraft attitude, position, and velocity
ing terrain measurements from onboard sensors with an during descent and landing on a planet. This system, called
a priori terrain map. TRN can be achieved using passive a descent image motion estimation system (DIMES), was
imaging or active range sensing. The reference map can evaluated using data sets from NASA subsonic parachute
be a digital elevation map, satellite images, topographical drops (8,000 m AGL) and field testing on a helicopter (1,000
map with landmarks, etc. The main purpose of TRN is to to 2,000 m). In all these tests, the final error bounds obtained
provide a drift-free navigation tool by estimating position (3σ ) were on the order of ±3 m for position and ±0.5◦ for
(global or local) or bearing relative to known surface land- orientation. Another vision-aided inertial TRN system for
marks, yielding a powerful alternative to GNSS navigation. pinpoint landing on Mars is presented in Mourikis et al.
It is important to note that some animals, such as birds (2009). The algorithm extracts 2D-to-3D correspondences
and bees, seem to use landmark-based cues as well as vi- between descent images and a surface map (mapped land-
sual odometry to navigate to a goal location. Before describ- marks with known global coordinates) and 2D-to-2D fea-
ing vision-based TRN systems developed for RUAS, let us ture tracks through a sequence of descent images. These
review some TRN algorithms (visual and nonvisual) that visual estimates are then fused with inertial data using an
have been implemented successfully on other aerial vehi- EKF in order to estimate in real time the lander’s terrain-
cles such as military aircraft, cruise missiles, and spacecraft. relative position, attitude, and velocity. This system was
This can provide insight into TRN approaches and ideas experimentally validated on a sounding-rocket test flight
for adapting existing TRN systems to RUAS with estimation errors of 0.16 m/s for velocity and 6.4 m
navigation. for position at touchdown. For more work and comparison
Non-RUAS terrain-relative navigation systems: TRN sys- of different TRN systems for planetary landing, the reader
tems were first developed for cruise missile guidance be- can refer to the review paper by Johnson and Montgomery
fore the advent of GNSS, and they are still used indepen- (2008) and the references therein.
dently or in conjunction with GPS navigation. One of the Visual terrain-relative navigation systems for RUAS: Al-
earliest TRN systems was developed for cruise missiles in though TRN is crucial for UAS as a backup system for GPS
the late 1970’s (Golden, 1980). The algorithm, called TER- navigation or as a main navigation system for GPS-denied
COM (terrain contour matching), uses a radar altimeter to environments, very few TRN systems have been developed
continuously sense the contour of the land directly beneath and implemented on UAS. Visual TRN seems more ap-
the missile, and compares the sensed elevation with an a propriate than active TRN for RUAS flying at low speeds
priori contour map containing the planned route. Another and low altitudes. This is because the terrain elevation seen
similar system developed by Atlantic Inertial Systems (UK) at such altitudes generally varies too little to allow reliable
is called TERPROM (terrain profile matching) and is used ground-profile matching with active range sensing. The most advanced
on a number of jet fighters and cruise missiles. It also com- and tested visual TRN system for RUAS is probably the
bines a radar altimeter and navigation data with a digital one developed by the UASTech Lab. at Linköping Univer-
terrain map to provide a ground proximity warning sys- sity (Conte & Doherty, 2009). The proposed navigation ar-
tem (GPWS) and advanced terrain avoidance cues (ATAC). chitecture combines inertial sensors, visual odometry, baro-
DSMAC (digital scene mapping and area correlation) (Carr metric altimeter, and image to georeferenced map correla-
& Sobek, 1980) is a passive TRN system that uses a single tion data to estimate the rotorcraft absolute position and
digital camera and a set of preregistered images. An image attitude using a 12-state Kalman filter, Figure 9. First, the
correlation technique is used to determine the optimal lo- horizontal absolute position is estimated by fusing the VO
cation that would correct the sensed image so that it would estimates (homography-based VO) with the image correla-
best match the planned image. These visual estimates occur tion information using a Bayesian point-mass filter. These
periodically throughout the flight, and are used to correct position estimates, together with height measurements, are
the drift of the inertial navigation system. then used as position measurements to update an error
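As an illustration of the terrain-contour-matching principle behind TERCOM, the following sketch slides a short sensed elevation profile along a profile taken from the reference map and keeps the best-matching offset; it is a toy example, not the fielded algorithm, and the array names are assumptions.

```python
# Illustrative TERCOM-style profile matching: find the along-track offset of
# the reference map profile that best explains the sensed terrain elevations.
import numpy as np

def tercom_fix(sensed_profile, map_profile):
    """Return the map index where the sensed profile matches best."""
    n = len(sensed_profile)
    best_idx, best_cost = 0, np.inf
    for i in range(len(map_profile) - n + 1):
        window = map_profile[i:i + n]
        # remove the means so an unknown altimeter/baro bias does not matter
        cost = np.mean(((window - window.mean())
                        - (sensed_profile - sensed_profile.mean())) ** 2)
        if cost < best_cost:
            best_idx, best_cost = i, cost
    return best_idx, best_cost
```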
build up a map within an unknown environment or to up- The vSLAM horizontal position error was about 12 m after
date a map within a partially known environment, while a 280-m flight, and the height estimation error was approxi-
at the same time localizing the vehicle on the map. SLAM mately 2.5 m. An interesting work on vSLAM for full-scale he-
algorithms have advantages over SFM techniques when licopters is presented in Kim, Lyou, and Kwak (2010). The
the vehicle revisits the same places (loop closure). Visual conventional GPS/INS navigation system has been aug-
SLAM (vSLAM) implementations mainly rely on feature mented by a vSLAM algorithm to compensate for position
(or landmark) correspondences and use extended Kalman drift during GPS loss and blockage. The imaging system
filters (EKF) or Rao–Blackwellized particle filters (RBPF). uses two cameras that are perpendicular to each other and
Although vSLAM has been widely used on ground robots, observe the forward and downward feature points, respec-
only a few applications have been implemented on UAS. tively. The vehicle’s position, velocity, and attitude are re-
vSLAM algorithms developed for RUAS can be further covered by a KF-SLAM algorithm based on the fact that the
classified into stereo vision SLAM (svSLAM) and bearing- 3D position of initial features is known.18 The scale factor
only SLAM (boSLAM), which is based on monocular im- ambiguity is solved by assuming that the precise altitude
ages. boSLAM is a partially observable problem where the of the rotorcraft relative to the ground surface is known
vehicle’s motion and feature positions (map) can be recov- from a radio altimeter, a pressure sensor, or GPS. The GPS–
ered up to some unknown scale factor, unless a priori infor- INS–vision algorithm has been demonstrated in three dif-
mation or other ranging sensors are used. Landmark initial- ferent scenarios (forward-looking only, downward-looking
ization for boSLAM has also proven to be a difficult issue to only, forward–downward-looking) using a manned heli-
resolve in a consistent manner. Lemaire, Berger, Jung, and copter flying at 110 knots. When the GPS signal was lost for
Lacroix (2007) describe svSLAM and boSLAM algorithms 60 s, the GPS/INS solution error was about 320 m, whereas
that both rely on a robust interest-point matching algorithm this error was reduced to 20 m when the vSLAM algorithm
and the EKF framework. The performance of the two ap- was used with downward and forward-looking cameras.
proaches was compared offline using data acquired with a
ground rover and an aerial blimp. Other svSLAM (Nemra Observation 6. Despite the popularity of SLAM tech-
& Aouf, 2009) and boSLAM (Kima & Sukkarieh, 2007) algo- niques, we have not found a single paper describing au-
rithms have been proposed and evaluated in simulations or tonomous flight of a RUAS using visual SLAM estimates.
using fixed-wing UAS. All reported results have been obtained from offline pro-
There are also some works on applying SLAM to un- cessing or simulations.
manned rotorcraft, but they are still premature, because
only offline results have been reported. In Artieda et al. • Bio-inspired Optic Flow Navigation: In most vi-
(2009), the implementation of a vSLAM technique on im- sion approaches previously described, the rotorcraft posi-
ages taken from a RUAS is presented. An EKF is used to tion and/or velocity are explicitly recovered and used as
reconstruct the UAS motion as well as feature positions measurements in the flight controller. Recent experimen-
(map) up to some unknown scale factor. This monocular tal research in biology has discovered a number of differ-
vSLAM algorithm has been tested offline using images col- ent ways in which flying insects use cues derived from op-
lected from unmanned and manned helicopters. The ob- tical flow for navigational purposes without explicit esti-
tained MSE of the differences between the vSLAM and the mation of motion and environment structure. Indeed, fly-
GPS horizontal coordinates for a 35-s sequence was about ing insects such as bees and flies have developed alter-
2 m. The VO system, previously described (Caballero et al., native, simple, and ingenious stratagems for dealing with
2009), has been augmented by a vSLAM algorithm to re- the 3D vision problem to perform navigational tasks such
duce the drift in position estimates. The proposed tech- as attitude and course stabilization (Hengstenberg, 1993),
nique has also been evaluated offline on images gathered centering response, visual odometry (Srinivasan, Zhang,
by an unmanned helicopter during a 90-m flight at 15 m & Bidwell, 1997), flight speed regulation (Barron & Srini-
altitude. The obtained results are promising, with a maxi- vasan, 2006), landing (Wagner, 1982), altitude control and
mum error of (7, 9, 5 m) for (x, y, z) without IMU data in terrain following (Ruffier & Franceschini, 2005), and obsta-
the SLAM algorithm and about (6, 4, 1.5 m) when the IMU cle avoidance (Tammero & Dickinson, 2002). It is clear that
data are integrated in the EKF. Another vSLAM technique systems using these bio-inspired strategies play at the same
for RUAS navigation is presented in Tornqvist, Schon, time the role of state estimation systems, perception sys-
Karlsson, and Gustafsson (2009). The proposed algorithm tems for obstacle detection, and guidance systems for re-
uses the Rao–Blackwellized particle filter to fuse data from active obstacle avoidance. Bio-inspired systems thus have
inertial sensors, barometer, and vision. The FastSLAM algo-
rithm has been adapted to handle high-dimensional state
vectors efficiently. The derived algorithm is applied offline 18
In the first acquired image, a georeferenced map is created (fea-
to experimental data from an unmanned RMAX helicopter tures positions) based on the vehicle’s position and attitude infor-
flying at an altitude of 60 m and a forward speed of 3 m/s. mation from GPS/INS (before GPS loss).
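Several of the monocular systems above resolve the unknown scale factor by assuming a known altitude above a locally flat ground. A minimal sketch of that idea, with an assumed pinhole camera model and illustrative variable names, is given below.

```python
# Minimal sketch of resolving the monocular scale ambiguity with a known
# height above a flat ground plane: back-project image features and scale
# each ray so that it intersects the ground (z = 0). Illustrative only.
import numpy as np

def ground_features_metric(pixels, K, R_cw, p_c):
    """pixels: Nx2 image points of ground features; K: camera intrinsics;
    R_cw: camera-to-world rotation; p_c: camera position, with p_c[2] the
    height above the (assumed flat, z = 0) ground. Returns Nx3 positions."""
    Kinv = np.linalg.inv(K)
    out = []
    for (u, v) in pixels:
        ray = R_cw @ Kinv @ np.array([u, v, 1.0])   # viewing ray in world frame
        s = -p_c[2] / ray[2]                        # metric scale from height
        out.append(p_c + s * ray)
    return np.array(out)
```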
the potential to provide the RUAS with higher levels of au- equipped with a 1D optic flow microsensor (Oh, 2004). In
tonomy such as level 2 and level 4 of the ALFURS chart in Beyeler, Zufferey, and Floreano (2009a), an optiPilot for au-
Figure 4. Recently, there has been an increasing interest in tomatic takeoff and landing of a fixed-wing MAV using op-
developing bio-inspired GNC systems for mini flying ma- tic flow is described. Sparse optic flow measurements are
chines, especially for terrain following, landing, and obsta- obtained from optic mouse sensors pointed at 45◦ with re-
cle avoidance. spect to the longitudinal axis. Angular rates from gyros
Terrain following: Altitude control and terrain following are used to cancel the rotational optic flow, and the re-
can be achieved by regulating the optic flow obtained from maining translational flow is directly mapped into pitch
a downward-looking camera to a constant value. Based on and roll commands. Automatic takeoff and landing from
this insect-inspired behavior, Franceschini’s team (Ruffier 10 m height were successfully achieved using a fixed-wing
& Franceschini, 2005) has developed an autopilot, called MAV equipped with the optiPilot. Automatic landings of a
OCTAVE (optical altitude control system for autonomous 1.5-m-wingspan MAV were also achieved using a baro-
vehicles), for navigation and flight control of small rotor- metric altimeter and optic flow sensors (Agilent ADNS-
craft. The optic flow is computed by a small fly-inspired 2610) (Barber, Griffiths, McLain, & Beard, 2007). Optic flow
sensor developed by the authors, and is used by PID con- has also been used to land mini RUAS, as described in
trol loops to achieve the desired maneuvers. The effective- Herisse, Russotto, Hamel, and Mahony (2008), where diver-
ness of this strategy has been demonstrated in a number of gent optic flow has been combined with inertial data and
scenarios using a small tethered rotorcraft. In Garratt and used to control the vertical motion of a quadrotor. The ap-
Chahl (2008), an optic-flow-based terrain-following system proach has been validated through autonomous hovering
has been developed and demonstrated in real flights using and landing from 1 m height.
a robotic helicopter. Optic flow was computed in real time Obstacle avoidance: The main strength of optic-flow-
using the image interpolation algorithm (I2A) and then based strategies is their ability to avoid obstacles without
fused with IMU data to compensate its rotational compo- mapping the environment or explicitly detecting obstacles.
nent. The height above the ground is explicitly estimated The idea is to interpret regions with high optic flow as im-
using optic flow and GPS velocities. The terrain-following minent obstacles. Indeed, translational optic flow is propor-
system has been implemented onboard the RMAX heli- tional to the magnitude of the vehicle velocity and inversely
copter and demonstrated in closed-loop flights. During proportional to the distance to obstacles in the environ-
100 s of closed-loop flight at a speed of 5 m/s, the heli- ment. The most investigated strategy is the use of diver-
copter maintained 1.27 m clearance from the ground using gent optic flow for frontal obstacle avoidance and right
the estimated height from optic flow. An optic-flow-based and left optic flows for centering response and lateral ob-
terrain-following algorithm for mini rotorcraft is proposed stacle avoidance. This bio-inspired behavior has been suc-
in Herisse et al. (2010). The developed system computes op- cessfully implemented on some small fixed-wing UAS. In
tic flows at multiple observation points, obtained from two Griffiths et al. (2006), three Agilent ADNS-2610 optical
onboard cameras, using the LK algorithm and combines mouse sensors were mounted on a small fixed-wing UAS
this information with forward speed measurements to es- (right-forward, left-forward, and down) and used to sense
timate the height above the ground. A backstepping con- the distance to obstacles by combining GPS velocities and
troller is used to regulate the height to some desired value. optic flow signals. The avoidance controller generates a
Indoor closed-loop flights have been performed over a tex- path offset based on the difference between the computed
tured terrain using the CEA quadrotor vehicle flying at a left and right distances. This approach has been demon-
forward speed of 0.3–0.4 m/s. The system was able to main- strated in real flights while the UAS was flying a GPS-based
tain a desired height of 1.5 m above a ramp and 2D corner path through the Goshen Canyon. Successful lateral obsta-
textured terrain of about 4 m length. cle avoidance using optic flow sensors is demonstrated on
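The terrain-following behaviour described above can be summarized as a loop that regulates the ventral optic flow to a setpoint. The following toy simulation is only a sketch of the principle (gains, speeds, and the ramp terrain are assumptions), not any of the cited controllers.

```python
# Toy optic-flow terrain following: the ventral flow w = Vx / h is regulated
# to a setpoint by commanding a climb rate. All values are illustrative.
import numpy as np

def simulate(t_end=60.0, dt=0.02, vx=2.0, w_ref=0.5, kp=2.0):
    x, z = 0.0, 5.0                            # along-track position, altitude [m]
    log = []
    for _ in range(int(t_end / dt)):
        terrain = 0.1 * (x - 20.0) if x > 20.0 else 0.0   # simple ramp after 20 m
        h = max(z - terrain, 0.1)                          # height above ground
        w = vx / h                                         # ventral optic flow [rad/s]
        climb_cmd = kp * (w - w_ref)                       # too much flow -> climb
        z += climb_cmd * dt
        x += vx * dt
        log.append((x, z, h, w))
    return np.array(log)

# After the initial transient, h settles near vx / w_ref = 4 m and then
# follows the ramp at a roughly constant clearance.
```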
Automatic Landing: One important consequence of a fixed-wing MAV and described in William, Green, and
holding the optic flow beneath the vehicle constant is that Oh (2008). Two 1D optic flow sensors from Centeye were
the horizontal speed is automatically reduced as the height mounted on the sides of the vehicle. By assuming that the
decreases. Therefore, height, and forward and descent ve- forward velocity is constant, a full deflection of the rud-
locities will decrease exponentially with time and become der is performed when the computed optic flow from one
zero at touchdown, by just regulating the optic flow to some sensor is higher than some prefixed threshold. Researchers
constant value. Bees use this simple strategy (two rules) for from EPFL (Switzerland) have also developed and demon-
smooth landing without explicit measurement or knowl- strated an embedded OF-based system allowing an ultra-
edge of the flight speed and height above the ground. This light aircraft (30-g prototype) to detect and avoid frontal
insect-inspired landing strategy was used in Chahl, Srini- obstacles (Zufferey & Floreano, 2006), using the divergent
vasan, and Zhang (2004) to control the altitude of a fixed- OF signals obtained from the left and right linear cam-
wing UAS during the approach phase. The same landing eras. The obstacle avoidance strategy aims at triggering
strategy was also demonstrated on a small fixed-wing MAV a saccade maneuver in the direction of low optic flow
Figure 10. The Maryland University quadrotor during corridor navigation using optic flow obtained from omnidirectional vision
(Conroy, Gremillion, Ranganathan, & Humbert, 2009).
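The exponential-descent property of the constant-optic-flow landing rule discussed above can be checked with a short worked example; the flow setpoint and descent ratio used here are illustrative assumptions.

```python
# Worked toy example of the constant-optic-flow landing rule: hold the
# ventral flow w = Vx/h constant and keep a constant descent angle
# (Vz = gamma * Vx). Then dh/dt = -gamma * w * h, so height, forward speed,
# and descent rate all decay exponentially and vanish smoothly at touchdown.
w, gamma, dt = 0.8, 0.3, 0.05        # flow setpoint [rad/s], descent ratio, step [s]
h, t, log = 10.0, 0.0, []
while h > 0.05:
    vx = w * h                       # forward speed enforced by the flow rule
    vz = gamma * vx                  # constant descent angle
    h -= vz * dt
    t += dt
    log.append((t, h, vx, vz))
# h(t) matches the closed form h0 * exp(-gamma * w * t) to first order in dt.
```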
when the divergent optic flow exceeds some threshold. In a long, Figure 10. The quadrotor successfully avoided corri-
recent paper (Beyeler, Zufferey, & Floreano, 2009b), the au- dor walls and finished its course without collisions.
thors described another optic-flow-based autopilot (optiPi-
lot) for an outdoor fixed-wing MAV . The optiPilot is based Observation 7. In all reviewed works, bio-inspired optic-
on seven optic mouse sensors (Avago ADNS5050), MEMS flow-based navigation was demonstrated on fixed-wing
rate gyroscopes, and a pressure-based airspeed sensor. The platforms or on small quadrotors performing very short
estimated translational optic flow is directly mapped into flights in structured indoor environments. It is thus clear
control signals to steer the pitch and roll angles, whereas that applying optic flow (without GPS aid) for outdoor nav-
the airspeed measurements are used to control the forward igation of RUAS is still an open research problem because
speed. The optiPilot has been validated in simulations and no experimental results have been published yet.
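The two reactive rules reviewed in this subsection, balancing the lateral flows for the centering response and triggering a saccade when the frontal flow divergence exceeds a threshold, can be sketched as a single steering law. The gains and threshold below are assumptions, not values from the cited systems.

```python
# Hedged sketch of the reactive optic-flow steering rules (illustrative only).
# Inputs are assumed to be rotation-compensated (translational) flow values.
def reactive_of_command(left_flow, right_flow, divergence,
                        k_center=0.5, div_thresh=1.2, saccade_rate=1.0):
    """Return a yaw-rate command [rad/s]; positive turns right."""
    if divergence > div_thresh:
        # imminent frontal obstacle: saccade toward the side with less flow
        return saccade_rate if left_flow > right_flow else -saccade_rate
    # centering response: steer away from the side with the larger flow
    return k_center * (left_flow - right_flow)
```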
demonstrated in real flights to avoid a group of tall trees
(lateral avoidance) and small trees (flyover avoidance). Al-
though optic-flow-based obstacle avoidance has been suc- 5.2.3. State Estimation Using Ranging Sensors
cessfully demonstrated on fixed-wing UAS, a very limited In addition to conventional and vision-based state estima-
number of papers have been published on the applica- tion systems, ranging sensor technologies have been also
tion of optic flow for obstacle avoidance onboard RUAS. researched, with the objective of using them onboard RUAS
In Hrabar and Sukhatme (2004), the use of optic flow for for state estimation. Although active range sensors such as
an unmanned helicopter centering in an urban canyon is LIDARs, radars, and ultrasonic and IR sensors are gener-
investigated, but only offline results were presented. In ally used for perception, there are some works where these
Conroy, Gremillion, Ranganathan, and Humbert (2009), a sensors have been used mainly for state estimation and in-
bio-inspired optic flow navigation system has been imple- door flight control.
mented on a quadrotor UAS and demonstrated in an in- • State Estimation Using Ultrasonic and Infrared Sen-
door textured corridor. The inner-loop pitch and roll sta- sors: Ultrasonic sensors have been used for the stabiliza-
bilization is accomplished using rate gyros and accelerom- tion of a quadrotor vehicle relative to office walls (Kendoul
eters. Altitude control is achieved via fusion of measure- et al., 2007). Three perpendicular ultrasonic range finders
ments from sonar and an accelerometer. A ventrally located have been mounted on a quadrotor to provide 3D posi-
optic flow sensor from Centeye is utilized to augment lon- tion measurements in order to test and evaluate the per-
gitudinal and lateral damping, thus improving vehicle sta- formance of a nonlinear controller. Autonomous hovering
bility. An omnidirectional visual sensing and processing and automatic takeoff and landing have been achieved. A
based on a Surveyor camera board and a parabolic mir- similar system is described in Bouabdallah and Siegwart
ror is used for optic flow estimation and outer-loop control. (2007), where ultrasonic and IR sensors are also used for al-
The navigation strategy consists of decomposing patterns titude control and obstacle avoidance. In Shin et al. (2011),
of translational optic flow (magnitude, phase, and asym- an ultrasonic positioning system (UPS) has been developed
metry) with weighting functions in order to extract signals and used for indoor hovering control of a small ducted-
that encode relative proximity and speed with respect to fan UAS. The UPS consists of four transmitter nodes on-
obstacles in the environment, which can be used directly for board the vehicle, eight receiver nodes on the ground, and
outer-loop navigation feedback. The flight tests were per- a server node. The obtained precisions were 2 cm RMS (root
formed in a textured corridor about 1.5 m wide and 9 m mean square) for position estimation using UPS and about
10 cm RMS for hover control using PD for attitude con- algorithm that is based on the GMapping algorithm19 has
trol and LQI for position control. Recently, IR sensors have been used. Throughout 1 min of flight in a closed room, the
been used on a quadrotor UAS as the main position sen- scan-matching odometer estimated the position and veloc-
sors for stabilization and precise landing on small rectan- ity of a robotic quadrotor with an average error of 1.5 cm
gular or square objects such as tables or car roofs (Nonami for position and 0.02 m/s for velocity. The complete navi-
et al., 2010, Chapter 13). The height is estimated by an ultra- gation system has also been demonstrated in three differ-
sonic range finder, whereas the horizontal 2D position is es- ent unstructured environments. Indeed, when this naviga-
timated from four actuated IR sensors that detect the edges tion system was used with a planning and exploration al-
of the landing target. This system has been demonstrated in gorithm, the quadrotor was able to navigate autonomously
real-time for autonomous hovering and automatic precise (motion estimation and mapping) for 8 min (208 m) in
landing on a 52 × 52 cm table with an accuracy of 20 cm open lobbies, 6 min (44 m) in cluttered environments, and
during hovering and 10 cm during landing. A combina- 7 min (75 m) in office hallway environments. Flight control
tion of low-cost sonar range and infrared sensors was also was done onboard, whereas scan-matching and SLAM al-
used to perform mapping of indoor environments and lo- gorithms ran offboard.
calization of small RUAS (Sobers, Chowdhary, & Johnson,
2009). A SLAM-like algorithm was developed by alternat- Observation 8. Although LIDAR-based localization sys-
ing between mapping and localization. First, a map of an tems have been successfully applied for ground robot nav-
indoor environment is constructed and stored. The range igation in outdoor environments, we did not find any work
sensors are then used to provide vehicle position relative to of adapting this technology to UAS outdoor localization
the stored map of the room being explored. A small coaxial without GPS aid. This is mainly because UAS generally fly
rotorcraft equipped with this system was used to complete in relatively large open spaces without sufficient environ-
mission requirements for the International Aerial Robotics mental structure for the relative position estimation. There-
Competition. During the competition, the vehicle was suc- fore, algorithms such as scan-matching and SLAM will fail
cessfully able to enter the arena, negotiate obstacles, and fly to provide reliable state estimates for flight control.
down a hallway for approximately 12 m via wall following.
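Range-based positioning systems such as the ultrasonic positioning system described above typically reduce to a multilateration problem. The following generic least-squares sketch (an assumption, not the cited implementation) estimates the vehicle position from ranges to nodes at known locations.

```python
# Generic Gauss-Newton multilateration sketch: estimate a 3D position from
# measured distances to fixed nodes with known positions. Illustrative only.
import numpy as np

def multilaterate(anchors, ranges, x0=None, iters=10):
    """anchors: Nx3 known node positions; ranges: N measured distances."""
    x = np.mean(anchors, axis=0) if x0 is None else np.asarray(x0, float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)          # predicted ranges
        J = (x - anchors) / d[:, None]                   # Jacobian of d w.r.t. x
        dx, *_ = np.linalg.lstsq(J, ranges - d, rcond=None)
        x = x + dx
    return x
```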
• LIDAR-Based SLAM Methods: A LIDAR-based SLAM 5.3. Perception Systems
algorithm for autonomous indoor navigation of a quadro-
tor UAS is presented in Grzonka, Grisetti, and Burgard To achieve real-world applications in natural environ-
(2009). A Hokuyo LIDAR (URG-4) and an Xsens IMU were ments, the RUAS may need to detect and avoid obstacles
used as the main navigation sensors to localize the ve- in real-time, recognize and track targets, map the environ-
hicle and to update the map. The height is estimated by ment, etc. These perception capabilities are essential for
deflecting some laser beams down using a mirror. The reaching autonomy level 4 of the ALFURS chart in Figure 4.
UAS motion is computed by incrementing the displace- Environment perception technologies used within RUAS
ment between two subsequent scans by means of a scan include passive sensors such as cameras, and range sensors
matching algorithm (LIDAR odometry). A Monte Carlo (active sensors) such as LIDARs. The selection of appropri-
localization algorithm has been used to estimate the 2D po- ate sensors depends heavily on the RUAS payload and in-
sition in a given map and to reduce drift in position esti- tended applications. Perception systems for RUAS can thus
mates. A mapping problem was also formulated as a graph- be divided into two main categories: (1) vision-based per-
based SLAM. These navigation components have been val- ception (passive) and (2) LIDAR-based perception (active).
idated separately on an instrumented quadrotor, but no
closed-loop flights have been achieved, except for yaw and 5.3.1. Vision-Based Perception
height control. A similar LIDAR-based localization and Some visual perception systems, developed for RUAS,
mapping system for quadrotor indoor navigation in un- have already been presented in Section 5.2.2, especially
known environments was described in Bachrach, He, and for SLAM, SFM, and optic-flow-based approaches, where
Roy (2009). The navigation system is based on a Hokuyo LI- both state estimation and perception are performed with
DAR (UTM-30LX), an IMU, a monocular camera (later up- the same algorithm. The algorithms presented in the pre-
graded to two monochrome USB cameras) and a Gumstix vious section focus more on state estimation, whereas the
processor (later upgraded to Intel Atom processor). As in algorithms presented here assume that state estimates are
Grzonka et al. (2009), some of the LIDAR beams were de- available (from the onboard GPS-IMU avionics) and focus
flected downward to estimate and control height above the on perception tasks as defined in Definition 8. The extracted
ground plane. A LIDAR scan-matching algorithm is used information is not directly used as measurements in the
to estimate the rotorcraft horizontal 2D position and head-
ing. These estimates are then fused with IMU accelerations
using an EKF, yielding accurate estimates of position and 19
A publicly available algorithm that uses a Rao–Blackwellized
velocity. To reduce drift in position estimates, a 2D SLAM particle filter.
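Scan-matching odometry of the kind used in these systems can be illustrated with a brute-force matcher that searches a small set of candidate displacements and rotations; the search ranges and scoring used below are assumptions, not the cited algorithms.

```python
# Minimal brute-force 2D scan matcher: try a grid of (dx, dy, dtheta)
# candidates and keep the one that best aligns the new scan with the
# previous one, scored by nearest-neighbour distances. Illustrative only.
import numpy as np

def score(ref, scan):
    """Sum of distances from each scan point to its nearest reference point."""
    d = np.linalg.norm(ref[None, :, :] - scan[:, None, :], axis=2)
    return d.min(axis=1).sum()

def match_scan(ref, scan, span=0.3, steps=7, angles=np.radians([-5, 0, 5])):
    shifts = np.linspace(-span, span, steps)
    best = (0.0, 0.0, 0.0, np.inf)
    for th in angles:
        c, s = np.cos(th), np.sin(th)
        rot = scan @ np.array([[c, -s], [s, c]]).T       # rotate scan points
        for dx in shifts:
            for dy in shifts:
                cost = score(ref, rot + np.array([dx, dy]))
                if cost < best[3]:
                    best = (dx, dy, th, cost)
    return best   # (dx, dy, dtheta, cost) of the incremental motion
```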
flight controller, but as inputs to higher-level guidance sys- ship deck is described in Garratt, Pota, Lambert, Eckersley-
tems that generate trajectories, tasks, or commands for the Maslin, and Farabet (2009). The LIDAR determines both the
controller. We classify visual perception systems based on distance and the orientation of the deck relative to the heli-
the navigation task they achieve. By reviewing the RUAS copter. This system has been augmented by a visual track-
literature, we found that most vision-based perception sys- ing system that detects a light source on the deck and tracks
tems developed for RUAS have mainly been used for it over time. Automatic landing on the deck has not been
attempted yet. Recently, Tom Richardson, from the Univer-
• Target detection and tracking sity of Bristol, has won a national award for his part in de-
• Obstacle detection without mapping (mapless ap- veloping a vision system for automatic landing of an un-
manned helicopter on a moving target (roof of a car). To
proaches)
• Mapping-based methods for navigable space searching our knowledge, the obtained experimental results have not
been published yet.
or safe landing area detection
• Mission-oriented perception. Horizontal target approach: Vision and GPS have also
been used to approach a frontal target and to hover at
• Target Detection and Tracking: From the image process- some distance from it. In Mejias, Saripalli, Campoy, and
ing and navigation points of view, these perception algo- Sukhatme (2006), an image-based velocity control approach
rithms are similar to those presented in the previous sec- to frontal target tracking is presented. The image processing
tion, “Target-Relative Navigation”. The main difference in algorithm detects and tracks features (building windows)
the systems presented here is that the visual estimates are using segmentation and square finding for detection, with
used to guide the vehicle whereas the controller relies on template matching and a Kalman filter for tracking. The tar-
GPS or another state estimator to control the rotorcraft get coordinates in the image frame are used directly to gen-
flight. erate lateral and vertical velocity commands to the flight
Target tracking for automatic landing: In Saripalli et al. controller. By using GPS measurements, the helicopter fol-
(2003), such a system is used for landing a helicopter on lows these commands in order to align itself with the win-
an artificial helipad (letter H). A vision algorithm is used dow target. The developed system has been demonstrated
to detect and recognize the target by comparing the com- on two different helicopters (USC AVATAR and COLIBRI).
puted moments of inertia with stored ones (obtained of- In both experiments, the helicopter successfully tracked the
fline). The horizontal 2D position and heading of the heli- target window and performed automatic hover flight.
copter relative to the target are obtained by using the target Moving ground target tracking: Vision is also the sensor
coordinates in the image frame and the helicopter altitude of choice for mobile ground target detection and tracking. A
above ground from a differential GPS. The landing guid- vision-based pursuit–evasion game has been implemented
ance strategy consists of generating heading and velocity on a helicopter UAS and unmanned ground vehicles (UGV)
commands to the flight controller that tracks these trajec- (Vidal, Shakernia, Kim, Shim, & Sastry, 2002) at the Univer-
tories based on GPS measurements. Sonar is also used to sity of California, Berkeley (UCB). The Yamaha R-50 heli-
estimate and control height during the last 2 m of descent. copter and two UGV pursuers were equipped with a vi-
Fourteen automatic landings were performed from an ini- sion system that can actively track many colored objects at
tial position of about (8, 10, 10 m) on a stationary target and 30 fps. These visual estimates are then used as inputs for a
on a momentarily hidden target. The helicopter achieved distributed hierarchical pursuit planner that generates ref-
automatic landings with an average position error of 42 cm erence trajectories for each agent. The navigation and pur-
and an average orientation error of 7◦ . The same authors suit policies have been validated in real-time experiments
investigated vision/GPS automatic landing of a helicopter using a RUAS, two UGV pursuers, and one UGV evader.
on a moving target, but only offline processing results have In the framework of the WITAS project at Linköping Uni-
been presented (Saripalli & Sukhatme, 2003). A similar ap- versity, an onboard vision-based perception system was de-
proach was used in Hermansson, Gising, Skoglund, and veloped for ground vehicle tracking and surveillance by a
Schon (2010) to land a helicopter on a stationary target us- robotic helicopter. We did not find any published exper-
ing vision and GPS. An EKF is used to fuse data from IMU, imental results, but the video clip available at Linkoping
GPS, and a vision system in order to estimate the helicopter (2003) shows the Yamaha RMAX helicopter autonomously
3D pose as well as its relative position and heading to the tracking a moving car in a natural environment for a period
target. These relative estimates are then used to generate of 9 min. Cooperative vision-based tracking of ground ve-
reference positions for the GPS-based controller in order to hicles by indoor quadrotors is presented in Bethke, Valenti,
keep the helicopter above the target while decreasing its al- and How (2007). The vision tracking algorithm uses an op-
titude. Fifteen automatic landings from an altitude of 5 m timization technique and a KF that combines the instanta-
were performed with a maximum landing error of 60 cm neous observations of all UAS, allowing very accurate es-
and an average error of about 34 cm. Another system that timation of target location and velocity. The 3D position of
combines vision and LIDAR data to land a helicopter on a the UAS is assumed to be known from its onboard avionics
(here, by the VICON system). The guidance system gen- obstacles have been detected, an appropriate evasive con-
erates waypoints to keep the UGV in the RUAS FOV. The trol command (turn away, stop) is generated. The combined
performance of this system has been evaluated using two stereo and optic-flow-based perception and control system
quadrotors and a small RC car as a target. Another exam- have been evaluated in simulations as well as in flight ex-
ple of vision-based ground target tracking by a rotorcraft periments using an autonomous tractor and a robotic heli-
MAV is presented in He, Bachrach, Achtelik, Geramifard, copter. When tested on a helicopter, the optic flow system
Gurdan, et al. (2010). Images from a single onboard cam- was able to steer the helicopter away from obstacles with
era were used to identify obstacles and mines and to track a success rate of five out of eight flights. For frontal obsta-
a ground vehicle. These ground targets were then geolo- cle avoidance, three flights were performed and the stereo
cated using the known position of the MAV and the cam- vision allowed the helicopter to avoid the obstacle in one
era. Ground targets were first detected by a human opera- flight only. The combined stereo-optic flow was tested on a
tor and then passed to the vision algorithm for automatic ground vehicle but not on a helicopter.
tracking in successive images. The tracker was based on • Mapping-Based Methods for Navigable Space Search-
an object appearance classifier that used a Bayesian parti- ing or Safe Landing Area Detection: In the following para-
cle filter. For robust tracking, optical flow was also used to graphs, we present vision-based perception systems for
refine the camera ego-motion. This tracker was evaluated RUAS that are based on the mapping framework. In map-
offline on a 17-s video collected by a rotorcraft MAV dur- less approaches, obstacle avoidance strategies are generally
ing the MAV’08 competition in India. The algorithm suc- reactive, whereas mapping-based methods allow the use of
cessfully tracked a moving ground vehicle and a walking more sophisticated path planning algorithms. Mapping the
person. During mission execution, the system (assisted by environment consists of building some internal representa-
a human operator) was also able to geolocate two mines tion of the scene in the form of a depth image, digital ele-
and two ground obstacles. vation map, occupancy grid, feature database, set of land-
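Some of the trackers reviewed above bootstrap detection from a priori colour knowledge of the target. A toy colour-blob detector of that kind is sketched below; the colour tolerance and size prior are illustrative assumptions.

```python
# Toy colour-blob target detector (illustrative only): threshold an RGB image
# around a known target colour and return the blob centroid in pixels, or
# None if too few pixels match the assumed colour/size prior.
import numpy as np

def detect_colour_target(img, target_rgb, tol=30, min_pixels=50):
    """img: HxWx3 uint8 array; target_rgb: expected target colour (a priori)."""
    diff = np.abs(img.astype(int) - np.asarray(target_rgb, int))
    mask = np.all(diff < tol, axis=2)              # pixels close to the colour
    if mask.sum() < min_pixels:                    # size prior not satisfied
        return None
    v, u = np.nonzero(mask)                        # rows (v) and columns (u)
    return float(u.mean()), float(v.mean())        # centroid (u, v)
```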
• Obstacle Detection without Mapping: Computer vision marks, etc. The existing systems for mapping-based visual
can be used to detect and negotiate obstacles without map- perception onboard RUAS can be classified into three cate-
ping the environment by using: (1) image motion (op- gories: (1) simultaneous localization and mapping (SLAM),
tic flow) as described in Section 5.2.2, “Bio-Inspired Optic (2) simultaneous mapping and planning (SMAP), and (3)
Flow Approaches,” (2) a priori known characteristics of a safe landing area detection (SLAD).
specified object such as color and shape, or (3) depth infor- SLAM: As described previously (see Section 5.2.2,
mation from stereo vision. “Concurrent Estimation of Motion and Structure”), these
At DLR (the German Aerospace Centre), there is ac- algorithms estimate the vehicle’s self-motion and build a
tive research on using vision for autonomous rotorcraft per- map of the environment which is generally represented by
ception. In Andert, Adolf, Goormann, and Dittrich (2010) a set of features or registered point clouds. Most work on
considered the problem of flying through a course with SLAM-based perception has focussed on the vehicle’s lo-
gates that are slightly larger than the vehicle (narrow pas- calization, and the generation of accurate maps of the envi-
sage). The gate global position is estimated by tracking ronment. In these works, the generated maps have not been
some colored flags on the gate (assuming that their size used to detect obstacles or to guide the vehicle through
and appearance are previously known) and using the nav- free spaces (Artieda et al., 2009; Bryson & Sukkarieh, 2009;
igation solution from GPS and IMU. Once the gate posi- Caballero et al., 2009; Kanade et al., 2004; Kendoul, Fantoni,
tion is estimated, the guidance system generates two way- & Nonami, 2009; Kima & Sukkarieh, 2007; Lemaire, Berger,
points that define a flight path (straight segment) that goes Jung, & Lacroix, 2007; Nemra & Aouf, 2009; Tornqvist et al.,
through the gate center. The developed system was demon- 2009).
strated in real time using a 12-kg robotic helicopter that au- Simultaneous mapping and planning: Unlike SLAM,
tonomously crossed gates of 6 × 6 m at a speed of 1.5 m/s SMAP focuses more on building maps that can be used
without collisions. The use of optic flow and stereo vi- for obstacle avoidance and path planning. In these ap-
sion onboard a helicopter for 2D navigation through urban proaches, state estimates are generally available from the
canyons without mapping is investigated in Hrabar and GPS–IMU, and efficiency and robustness are more impor-
Sukhatme (2009). An LK tracker is used to compute optic tant than accuracy. Probabilistic occupancy grids are the
flow from a pair of sideways-looking fish-eye cameras. The representation most widely used to address these issues.
translational optic flow is then used for the centering re- Most successful work on SMAP for RUAS has been done
sponse by changing the vehicle’s turn rate to balance optic using LIDAR, as will be described in the next section.
flow on both sides. The perception system has been also However, there is some interesting work on applying vi-
augmented with a forward-facing stereo camera to detect sual SMAP for rotorcraft, such as the system presented in
frontal obstacles. Based on a 3D-point cloud representation, Andert and Adolf (2009). The proposed world represen-
obstacles are detected in the upper half of the image using a tation combines occupancy grids and polygonal features.
distance threshold and region-growing algorithm. Once the First, a 3D occupancy grid around the vehicle is created
incrementally using stereo-based range measurements and clared as landing candidates if they are below given thresh-
GPS/INS data. Actual sensor data and previously stored olds on each element of the landing quality equation. The
features are then inserted into the grid. Obstacle features developed system was evaluated offline only. The SLAD
are constructed by clustering occupied grid cells, and a problem has been also an active research topic for safe
polygonal shape of these features is finally approximated landing of spacecraft on unknown terrains such as Mars.
by prisms. This mapping algorithm has been implemented JPL researchers have developed an autonomous helicopter
on a 25-kg helicopter and has been demonstrated in real- to evaluate and validate some technologies that can be
time experiments. Another stereo-vision-based perception useful for spacecraft as well as for UAS (Montgomery,
system for rotorcraft is described in Byrne, Cosgrove, and Roumeliotis, Johnson, & Matthies, 2006). A digital eleva-
Mehra (2006). The VISTA (visual threat awareness) system tion map of the terrain is generated from monocular images
was developed by Scientific Systems Company Inc. for col- using motion stereo and a narrow beam altimeter (Johnson,
lision detection on an unmanned helicopter. It combines Montgomery, & Matthies, 2005). Once the DEM is built, lo-
block matching stereo (depth image) with image segmen- cal operators are applied to detect hazards by generating
tation based on a graph representation appropriate for ob- slope and roughness maps. Finally, safe landing areas are
stacle detection. Detected obstacles are then tracked using detected by generating binary images from the slope and
a KF, and the result is a state estimate for each obstacle, roughness maps and applying a grassfire transform. The
which is then passed to the guidance system for path plan- SLAD algorithm has been implemented on an unmanned
ning. Nineteen flight experiments were performed using helicopter and demonstrated in closed-loop flight experi-
the Georgia Tech. RMAX helicopter. The vehicle had to fly ments using vision for SLAD and GPS/IMU for flight con-
a collision trajectory approaching either a “sign” obstacle trol. Four successful autonomous landings were achieved
or a “pole” obstacle at speeds up to 10 m/s. The obstacles in unknown and hazardous terrains with an average posi-
were correctly detected at various collision distances. tion error of 1 m. Other SLAD flights in more complex ter-
Safe landing area detection: Unmanned rotorcraft may be rains have also been performed. SLAD and automatic land-
commanded to land on unprepared and unknown terrains ing problems have been also investigated in the US Army’s
to accomplish some mission or to achieve an emergency Precision Autonomous Landing Adaptive Control Experi-
landing. Therefore, safe landing area detection (SLAD) is ment (PALACE) project (Theodore et al., 2006). The landing
necessary for mission achievement and vehicle survival. A site selection used for the PALACE project consists in cre-
safe landing area can be defined as a terrain without haz- ating a stereo range map of the terrain, and then running
ards, large enough to fit the RUAS, and suitable for land- a SLAD algorithm. This algorithm applies a set of landing
ing a specified machine (slope, roughness, etc.). Vision and point constraints (slope, roughness, distance to obstacles)
LIDAR are the sensors most investigated for solving the to the range map to find all safe landing regions, and then
SLAD problem. We present here vision-based SLAD algo- to choose the optimum landing point. The initial landing
rithms; LIDAR-based approaches will be described in the point is chosen from 30 m above second level (AGL) and
next section. point is chosen from 30 m above ground level (AGL) and
Early work on visual SLAD for an unmanned he- tion control is done using GPS until 12 m AGL, and then a
licopter was presented in Garcia-Pardo, Sukhatme, and visual odometer until 2 m AGL, and finally inertial naviga-
Montgomery (2002). A single downward-pointing camera tion for the last 3 m. The complete system (SLAD, VO, auto-
was used to detect an obstacle-free circular area by assum- matic landing) has been implemented on a Yamaha RMAX
ing flat terrain and high contrast between obstacles and un- helicopter and demonstrated in closed-loop flights. Stereo
derlying terrain. Four different strategies were proposed ranging and SLAD algorithms were first evaluated under
and evaluated offline on images gathered from 10 flights different conditions (surface texture, altitude, pattern direc-
of an instrumented helicopter. A stereo vision-based sys- tion, obstacle height), and very satisfactory results were ob-
tem for terrain mapping and SLAD for an unmanned heli- tained. More than 17 successful landings were achieved on
copter is presented in Meingast, Geyer, and Sastry (2004). various surfaces and obstacle fields with a landing accu-
Given the motion of the vehicle, a digital elevation map is racy of better than 1 m and a success rate greater than 95%,
built using a multiframe planar parallax algorithm. A land- Figure 11.
ing function is defined to analyze and classify each image In a recent paper (Sanfourche, Besnerais, Fabiani,
pixel. Neither simulations nor experimental results were Piquereau, & Whalley, 2009), researchers from the French
presented in this paper. Aerospace Lab (ONERA), compared two different algo-
Later in 2007, the same authors presented another rithms for terrain characterization and SLAD. One algo-
SLAD algorithm based on a single camera (Templeton, rithm was developed by NASA and is based on hemi-
Shim, Geyer, & Sastry, 2007). The vision algorithm is sim- spherical LIDAR (described in the next section), and the
ilar to the previous one and uses a recursive multiframe other was developed by ONERA and uses monocular vi-
planar parallax algorithm to build a DEM in world co- sion. Feature correspondences are used to estimate the
ordinates. The map is divided into blocks that are de- parameters of a homography and bundle adjustment is
Figure 11. The NASA robotic helicopter during autonomous identification of safe landing area using stereo vision (Theodore
et al., 2006).
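The slope/roughness style of SLAD used in the systems discussed here can be sketched as a simple two-test classifier over a digital elevation map; the window size and thresholds below are illustrative assumptions rather than values from the cited work.

```python
# Sketch of slope/roughness safe-landing-area detection over a DEM:
# a cell is a landing candidate if both its local slope and roughness
# are below thresholds. All parameters are illustrative assumptions.
import numpy as np

def slad_mask(dem, cell=0.5, win=5, max_slope=0.15, max_rough=0.10):
    """dem: HxW elevation grid [m]; cell: grid spacing [m]."""
    gy, gx = np.gradient(dem, cell)
    slope = np.sqrt(gx ** 2 + gy ** 2)              # local gradient magnitude
    half = win // 2
    rough = np.full(dem.shape, np.inf)
    for i in range(half, dem.shape[0] - half):
        for j in range(half, dem.shape[1] - half):
            patch = dem[i - half:i + half + 1, j - half:j + half + 1]
            rough[i, j] = patch.std()               # local roughness proxy
    return (slope < max_slope) & (rough < max_rough)
```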
employed to refine ego-motion and feature positions (struc- tion and avoidance. The most successful results on vision-
ture) estimation. By assuming that the ground is flat with based perception in unknown environments are probably
some obstacles, a DEM is built by classifying the 3D points the SLAD-related results obtained in the NASA PALACE
as ground or obstacles. This vision-based mapping algo- project (Theodore et al., 2006), and the SMAP-related re-
rithm has been evaluated offline on a video sequence taken search by DLR Andert, Adolf, Goormann, and Dittrich
by the helicopter flying at a height of 10 m above flat (2011).
ground with the same common obstacle field (COF) as in
Takahashi, Schulein, and Whalley (2008). All obstacles
• Mission-Oriented Perception: In the work described pre-
(boxes) taller than or equal to 40 cm were correctly detected,
viously, vision has mainly been used for the vehicle’s navi-
with a standard deviation error of 5 cm. A comparison with
gation (localization and obstacle avoidance). However, per-
the NASA LIDAR-based method revealed that the two ap-
ception systems are more general than that and may be
proaches yielded comparable results. However, the LIDAR-
used for environmental monitoring and mission achieve-
based SLAD algorithm runs in real time onboard the heli-
ment (camera as a mission sensor). In current systems, im-
copter, and it is effective even for night flights. In Cesetti,
ages are generally transmitted in real time to the ground
Frontoni, Mancini, Zingaretti, and Longhi (2010), optic flow
station, where the operator analyzes them and makes ap-
has also been used to select a safe landing area without
propriate decisions, or they are post processed. However,
building a dense DEM of the environment. Sparse SIFT fea-
in some missions, it is more effective to automatically re-
tures are used to compute optic flow, which is then used to
trieve the required mission information from images in
recover depth information in the landing area. By assum-
real time and guide the helicopter appropriately. There has
ing pure translational flight at a constant height, a simple
been some work on using computer vision onboard ro-
classifier based on a binary threshold is used to decide if
torcraft for mission achievement purposes. A good exam-
the surface has variable depth or not and, consequently, if
ple of that is the COMETS (EU project) project, where a
it can be used as a landing area. This approach has been
cooperative vision-based perception system has been de-
evaluated in simulations and offline based on images col-
veloped for multi-UAS and applied for automatic detec-
lected by an unmanned helicopter.
tion of forest fires (Merino, Fernando Caballero, Ferruz, &
Ollero, 2006). In the WITAS project (Farnebäck & Nord-
Observation 9. From the reviewed papers, we found that berg, 2002), dense motion of the images has been applied to
vision-based perception technologies for UAS are not yet traffic monitoring with an autonomous RMAX helicopter.
mature and very few experimental results have been re- In more recent work, Rudol and Doherty (2008), demon-
ported from closed-loop flights. Indeed, most of the pro- strated automatic detection of human bodies for search and
posed approaches have been evaluated in simulations or rescue missions using an autonomous RMAX helicopter
in offline processing. In some other work, perception al- equipped with thermal and color imagery and an onboard
gorithms were running in real time but not used in a image processing algorithm. The Australian Centre for
closed-loop to guide the vehicle. When closed-loop results Field Robotics (ACFR) has developed two vision-based per-
are presented, they are basic, without proper evaluation ception systems for automatic detection and surveillance of
of their performance in different environments and condi- aquatic weeds (Goktogan, Sukkarieh, Bryson, Randle, Lup-
tions. Furthermore, it turned out that there are no convinc- ton, et al., 2010), and classification of large farmland envi-
ing results on using vision for SMAP and obstacles detec- ronments (Bryson, Reid, Ramos, & Sukkarieh, 2010). The
first system was demonstrated using an autonomous he- these indoor systems, LIDAR has also been used on big-
licopter, whereas the second one was evaluated using im- ger helicopters for outdoor mapping. However, the heli-
ages collected by a fixed-wing aircraft. There are also other copter state is generally available from GPS/INS because
interesting applications such as power line inspection, as airborne LIDAR-only SLAM in outdoor and natural en-
described in Wang, Han, Zhang, Wang, and Li (2009). vironments is very challenging. 3D terrain mapping from
LIDAR onboard the Yamaha R-50 helicopter was demon-
strated by CMU researchers (Kanade et al., 2004; Miller &
5.3.2. LIDAR-Based Perception
Amidi, 1998). The helicopter was flown about 10 m above
Although images contain rich information about the envi- a 300 × 300-m site as the system scanned the surrounding
ronment, their real-time processing to retrieve useful cues environment. A digital elevation map with a 0.5 m2 grid
suffers from many issues, which are mainly related to high spacing was generated in real time from the scan data using
computational requirements and high sensitivity to the en- known pose and position estimates. However, the gener-
vironment, such as lighting and texture. On the other hand, ated maps were not used for guidance or navigation. Other
LIDAR can provide accurate information about the envi- researchers from CMU (Thrun, Diel, & Hahnel, 2003) pro-
ronment structure with lower computational requirements. posed an airborne SLAM algorithm that builds accurate 3D
However, its weight and power consumption limit its use maps of urban and natural terrains from 2D range data,
to RUAS with relatively significant payload, such as the GPS, and compass measurements. Real-time scan matching
Yamaha RMAX. Recently, progress in LIDAR miniaturiza- is used to improve the accuracy of pose estimates and the
tion has motivated its use on mini quadrotor UAS as the environment model. This system has been demonstrated on
main perception sensor. Most LIDAR-based perception sys- different environments using a remotely controlled Bergen
tems for RUAS are based on building metric maps of the Industrial Twin helicopter equipped with a SICK LIDAR.
environment. Different types of maps have been proposed, The generated maps appear to be visually and accurate,
but DEM and occupancy grids are the most widely used and locally consistent with spatial resolution in the cen-
for RUAS perception. As for vision-based perception ap- timeter range.
proaches, existing LIDAR-based perception systems can be • Simultaneous Mapping and Planning: SMAP involves
classified into four main categories: perception systems where mapping and planning are
• jointly performed to build a map of the environment
Mapless-based approaches for obstacle detection
• that is used for obstacle detection and avoidance. Shim
Simultaneous localization and mapping (SLAM)
• et al. (2006), proposed an obstacle detection and avoidance
Simultaneous mapping and planning (SMAP)
• scheme based on building local obstacle maps and solv-
Safe landing area detection (SLAD).
ing a conflict-free trajectory using model predictive con-
• Mapless-Based Approaches for Obstacle Detection: Ob- trol. Local obstacle maps are built by registering range mea-
stacle detection and avoidance without mapping is an at- surements in a FIFO database and computing the near-
tractive solution because it can deal with uncertainty and est point in the surrounding. This nearest point (minimal
run very quickly, providing a fast reactive system that distance) is considered as an obstacle, and is then used
can prevent last-minute collisions. CSIRO researchers have in the MPC algorithm for reference trajectory replanning
developed a LIDAR-based perception system for obsta- with a safe distance from obstacles. The Yamaha R-50 heli-
cle avoidance and infrastructure inspection beyond visual copter, equipped with SICK LIDAR and avionics, was used
range (Merz & Kendoul, 2011). The proposed system uses to demonstrate the proposed SMAP algorithm. The vehicle
only immediate LIDAR measurements to detect the ground flew from point A to point B at 2 m/s while avoiding ob-
and frontal obstacles/target, and computes just one reac- stacles, Figure 12. Local map building and trajectory gener-
tive action based on the current context. The developed sys- ation were done in real time at the ground control station.
tem has been implemented onboard the CSIRO unmanned Researchers from NASA Ames Research Center and
helicopter and flight-tested in a number of mission scenar- the U.S. Army have developed different LIDAR-based per-
ios and environments. More than 40 flights, totaling about ception systems for robotic helicopters. In the framework
15 h of autonomous flight, have been performed to date. of the Autonomous Rotorcraft Project (ARP), an active
The system performed well with a high success rate and obstacle-sensing and mapping system was developed and
dependability. The developed and demonstrated capabili- demonstrated using 2D SICK LIDAR onboard a Yamaha
ties proved to be sufficient to accomplish several real-world RMAX helicopter (Freed, Fitzgerald, & Harris, 2005; NASA,
applications. n.d.). For effective 3D navigation in urban environments,
• SLAM: Some LIDAR-based SLAM algorithms for RUAS 3D hemispherical LIDAR was developed by mounting 2D
perception have already been described in Section 5.2.3. SICK LIDAR on a spinning mechanism (Takahashi et al.,
The systems described in that section use LIDAR for both 2008), Figure 13. When range data and GPS and IMU
localization and mapping, and are generally developed measurements are combined, accurate 3D maps (registered
for indoor navigation of small quadrotors. In addition to point clouds) of the environment are generated. This 3D
Figure 12. BEAR helicopter during obstacle detection and avoidance using 2D LIDAR (Shim et al., 2006).
Figure 13. The U.S. Army/NASA obstacle field navigation system based on a 3D hemispherical LIDAR (Takahashi et al., 2008).
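Evidence-grid mapping of the kind used in these systems registers range returns into a probabilistic grid. A minimal log-odds update is sketched below; the grid geometry and increments are assumptions, not the cited implementations.

```python
# Minimal log-odds evidence-grid update (illustrative only): registered range
# returns increase the occupancy log-odds of the cell they fall in, and cells
# sampled along the sensor-to-hit ray are decreased.
import numpy as np

L_HIT, L_MISS = 0.85, -0.4          # log-odds increments (tuning guesses)

def to_index(p, origin, res):
    return tuple(((p - origin) / res).astype(int))

def update_grid(grid, origin, res, sensor_pos, hits):
    """grid: 3D float array of log-odds; hits: Nx3 registered range returns."""
    for hit in hits:
        ray = hit - sensor_pos
        n = max(int(np.linalg.norm(ray) / res), 1)
        for k in range(n):                         # free space along the ray
            idx = to_index(sensor_pos + ray * (k / n), origin, res)
            if all(0 <= i < s for i, s in zip(idx, grid.shape)):
                grid[idx] += L_MISS
        idx = to_index(hit, origin, res)           # occupied end point
        if all(0 <= i < s for i, s in zip(idx, grid.shape)):
            grid[idx] += L_HIT
    return grid
```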
LIDAR system was then used to develop a complete 3D Elgersma, 2008), where researchers from CMU have
navigation system for an autonomous helicopter flying in demonstrated safe autonomous flight with 3D obstacle
urban environments (Tsenkov, et al., 2008; Whalley, Taka- avoidance capability. The developed method combines on-
hashi, Tsenkov, & Schulein, 2009). The chosen map format line environment sensing, global planning, and local plan-
or terrain representation is a cell-based hybrid data struc- ning for reactive obstacle avoidance. The perception system
ture with dual data access: directly through the data table, is based on a customized 3D LIDAR system from Fibertek
or hierarchically through a quadtree. Two 3D path plan- Inc (30◦ × 40◦ field of view). The raw range sensor data are
ning algorithms have been proposed and used to guide the transformed to an occupancy probability and then mapped
helicopter through obstacles. The first is based on an ob- into a 3D evidence grid using position and orientation in-
stacle proximity map, whereas the second is based on a formation from GPS and IMU. The developed system was
height map. This SMAP system has been implemented on tested extensively in different environments and scenarios,
the Yamaha RMAX helicopter and demonstrated in a num- with more than 700 successful obstacle avoidance runs at
ber of real obstacle avoidance scenarios and environments; commanded speeds up to 10 m/s. Miniaturization of LI-
see Figure 13. Other interesting work about LIDAR-based DAR units led to their integration into mini platforms, such
SMAP is presented in (Scherer, Singh, Chamberlain, & as quadrotors that weigh less than 1 kg. Indeed, researchers
Figure 15. SLAD results of the U.S. Army/NASA autonomous helicopter equipped with a hemispherical SICK LIDAR (Whalley
et al., 2009).
Figure 16. Results from autonomous landing of an unmanned full-scale helicopter in unknown zones, using the CMU LIDAR-
based SLAD and obstacle avoidance system (Scherer, 2011).
circles technique to a smoothed height map to detect a set of potential landing points and uses fuzzy logic to rank them based on geometric (size, surface roughness, and slope) and mission (distance, azimuth, and elevation to a target point) criteria. The main contribution of the paper is the development of a test method for quantitatively comparing and evaluating the performance of SLAD algorithms in realistic environments. This test approach depends on generating benchmark sites or "truth" in conjunction with receiver operating characteristic plots. Both SLAD algorithms were evaluated based on data sets collected at seven different sites using the NASA RMAX helicopter, equipped with a SICK spinning LIDAR (Takahashi et al., 2008). Experimental results indicated that both algorithms performed well with the simple sites but had more difficulty dealing with complex sites.
LIDAR-based SLAD has also been successfully applied to safe landing of autonomous full-scale helicopters. Carnegie Mellon University and Piasecki Aircraft Corp. have developed and flight-demonstrated a navigation system that enables full-size, autonomous helicopters to fly at low altitude while avoiding obstacles; evaluate and select suitable landing sites in unmapped terrains; and land safely using a self-generated approach path (CMU & Piasecki, 2010). Their algorithm, described in (Scherer, 2011; Scherer, Chamberlain, & Singh, 2010), takes as input a set of 3D range points registered in a global coordinate system to select a safe landing area using a combination of coarse and fine evaluation. The coarse evaluation finds candidate areas by applying a fast plane fit to measure the slope, roughness, and other statistics in a terrain grid map. The fine evaluation then evaluates promising areas by fitting a 3D model of the helicopter and landing gear to a triangulated surface of the environment. In (Scherer et al., 2010), primary experimental results from 3D point cloud datasets, collected by a manned helicopter on nine sites with a flight time of 8 h, were presented. The helicopter was equipped with a fixed Riegl LMS-Q140i-60 LIDAR, a Novatel SPAN INS/GPS, and an IMU. Experimental results showed that the SLAD system was able to detect safe landing areas from 100–200 m AGL in different vegetated and urban sites. In another experimental setup (CMU & Piasecki, 2010; Scherer, 2011), the Riegl LIDAR was mounted on a sweeping mechanism and integrated, together with GPS and IMU, into a full-scale Unmanned Little Bird helicopter; see Figure 16. This technology has been flight-tested at the Boeing Company's Rotorcraft Systems facility, and autonomous landings in cluttered environments have been repeatedly demonstrated. In each case, the system was able to map an unknown area, detect obstacles, select a safe landing area, plan its approach path, avoid obstacles, and perform automatic landing. An interesting video of an autonomous flight test is available at CMU and Piasecki (2010).
Sevcik, Kuntz, & Oh (2010) investigated the effects of obscurants such as smoke and dust on SLAD when an airborne LIDAR was used. First, a probabilistic point cloud terrain map is created by fusing range points with the rotorcraft pose estimates and applying scan alignment for improving registration accuracy. Point clouds are converted to a grid map and then a cost map based on the slope and roughness of the surface. The lowest-cost area that fits the helicopter rotor is finally selected as a safe landing area. The system was evaluated offline using data collected by the SR100 Rotomotion helicopter, equipped with a SICK LIDAR. The algorithm successfully detected safe landing areas, and experimental results showed performance degradation in the presence of smoke.
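The coarse landing-site evaluation described above (a plane fit followed by slope and roughness checks over a terrain grid) can be illustrated with the minimal Python sketch below. The thresholds and cost weighting are invented for the example and do not correspond to any of the cited implementations.

```python
import numpy as np

def plane_fit(points):
    """Least-squares plane z = ax + by + c; returns (a, b, c) and the RMS residual."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return coeffs, float(np.sqrt(np.mean(residuals**2)))

def landing_cost(points, max_slope_deg=8.0, max_roughness=0.15):
    """Score a patch of terrain points: lower is better, inf means rejected."""
    (a, b, _), roughness = plane_fit(points)
    slope_deg = np.degrees(np.arctan(np.hypot(a, b)))
    if slope_deg > max_slope_deg or roughness > max_roughness:
        return float("inf")
    return slope_deg / max_slope_deg + roughness / max_roughness

# Compare a flat patch with a tilted, rough patch (synthetic data).
xy = np.random.rand(200, 2) * 5.0
flat = np.c_[xy, 0.02 * np.random.randn(200)]
rough = np.c_[xy, 0.3 * xy[:, 0] + 0.2 * np.random.randn(200)]
print(landing_cost(flat), landing_cost(rough))
```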
5.4. Situational Awareness
There is confusion in using the term situation or situational awareness (SA) in the UAS literature. This term is commonly used in aviation, and some technologies (obstacle detection and visualization, for example) have been developed to increase the situational awareness of the pilot or the UAS operator. In current manned and unmanned vehicles, the SA process is performed by the human crew or operator. We are interested in autonomous SA onboard the rotorcraft, which allows it to reach autonomy level "8" of the Rotorcraft Autonomy Levels chart presented in Figure 4. In our literature review, we did not find any published papers about autonomous SA for UAS, which means that issues related to autonomy level 8 and above have not been addressed yet.

6. GUIDANCE SYSTEMS
As defined in Section 3.1, a UAS guidance system tends to replace the pilot's deliberative process and decision functions, resulting in commands for the flight controller to achieve a given mission safely. A guidance system is a key component in increasing the autonomy level of an unmanned rotorcraft; see Figure 4. As shown in Figure 3, a typical guidance system for UAS includes the following main components: (1) trajectory generation, (2) path planning and obstacle avoidance, (3) mission planning, and (4) reasoning and high-level decision making. Depending on the autonomy level of the rotorcraft, each of these components can be performed manually by a human operator or autonomously onboard the vehicle. Currently, UAS are operated remotely by human operators from ground control stations, or use rudimentary guidance systems, such as following preplanned or manually provided waypoints. This is sufficient for UAS operating at high altitude for reconnaissance and exploration. However, UAS will require more advanced guidance capabilities to achieve more complex tasks and missions. Furthermore, RUAS are generally designed to operate at low altitude in cluttered and dynamic environments. Indeed, operation at low altitudes is where RUAS are most valuable, but also where they are most vulnerable. Therefore, advanced guidance and perception capabilities are necessary to negotiate obstacles and to plan the flight path. Currently, there is growing interest in increasing the vehicle's autonomy by developing guidance systems that are able to tackle a number of operational events without operator intervention: for example, trajectory replanning following the detection of an obstacle or changes in the mission or environment configuration. Guidance capabilities can also be increased to allow integration with other UAS and manned vehicles to accomplish coordinated missions. In this section, we review some guidance systems that have been successfully applied to unmanned rotorcraft systems and describe the most important milestones and achievements in both path and mission planning, focusing on practical works with experimental results. Figure 17 outlines the methods and algorithms that have been developed for RUAS and presented in this section.

6.1. Path Planning Methods and Algorithms
Path planning is a fundamental aspect of autonomous UAS guidance and deployment and an essential component for reaching autonomy level 4 and higher levels; see Figure 4. The path planning problem has been extensively studied for manipulators and ground robots, and different methods and algorithms have been proposed. Classifying path planning algorithms is not a trivial task, but one can, for example, make distinctions between optimized and heuristic algorithms; global and local path planners; path planners with and without differential constraints; and 2D and 3D path planners. We attempt to briefly describe some methods that are commonly used for path planning, and present some successful implementations on unmanned rotorcraft. For a general understanding of the motion planning problem and existing approaches, the reader can refer to published surveys and textbooks in this field, such as (Latombe, 1991; LaValle, 2006). In addition to these traditional motion planning methods, other planning approaches have been proposed recently, such as a multilayered synergistic framework that can specify complex temporal goals (Bhatia, Kavraki, & Vardi, 2010). More recently, Goerzen et al. (2010) provided an overview of existing motion planning algorithms with a general perspective on the problem of UAS guidance. Many papers have been published on path planning, and many techniques have been applied to the UAS path planning problem. However, most proposed algorithms have been validated in simulations only. Therefore, this survey will not cover all these papers and techniques, but rather will present the most used and practical methods, with a particular focus on works with experimental results. From the literature of UAS path planning, we have identified six main classes of path planning techniques that have been implemented on RUAS:
1. Road maps (RM), including visibility graph, Voronoi road map, probabilistic road map (PRM), and rapidly exploring random trees (RRT)
2. Potential fields (PF)
3. Optimization methods (OM), including mixed integer linear programming (MILP), receding horizon control (RHC), and motion primitive automaton (MPA)
4. Heuristic search algorithms (HSA), including A* and D* algorithms
5. Planning under uncertainties (PUU)
6. Reactive and bio-inspired obstacle avoidance methods (RBIOAM).
Path planners based on methods 1 to 5 can be global, local, or a combination of both, whereas techniques in 6 are reactive. Global path planners generally assume complete knowledge of the world and in return provide a complete path to the destination with useful properties of correctness, optimality, and completeness. These algorithms are generally expensive to compute, especially if the trajectory has to be revised or replanned when new obstacles are
detected. Furthermore, most outdoor UAS missions do not have complete knowledge of the world. Local path planners can be defined as sensor-based incremental planning algorithms where sensor information is acquired during mission execution, a local model of the environment is built in the neighborhood of the vehicle, and the "best" local path is determined based upon these local measurements or maps. Local path planners are better able to accommodate new sensor data than classical global planners. However, such path planning algorithms do not provide any guarantee of completeness and optimality. In contrast, reactive obstacle avoidance algorithms operate in a timely fashion and compute just one next action in every instant, based on the current context. They use only immediate measurements of the obstacle field to take a reactive response in order to prevent last-minute collisions by stopping or swerving the vehicle when an obstacle is known to be in the trajectory, which has been planned by a different algorithm. These reactive algorithms are important in dealing with uncertainty, and they run very quickly. However, reactive methods cannot guarantee that an appropriate route to the goal will be found. A practical implementation of path planning may include the combination of these different path planning algorithms and reactive methods.

6.1.1. Road Maps
In this method, the motion planning problem is reduced to fitting a road map or a graph to the work space and searching for the best path. A road map is generally represented as a graph in which the nodes correspond to placements of the robot and the edges represent collision-free paths between these placements; see Figure 18. There are different road map methods, such as visibility graphs, Voronoi road maps, probabilistic road maps (PRM), and rapidly exploring random trees (RRT). Once the road map is produced, a graph search algorithm such as Dijkstra's method or A* can be used to find a sequence of states (e.g., waypoints) that define the best path.
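As a minimal illustration of the road-map idea, the Python sketch below searches a small hand-coded graph of assumed collision-free edges between waypoints with Dijkstra's algorithm. The coordinates and connectivity are purely illustrative; real road maps are built from the obstacle geometry or by sampling, as discussed next.

```python
import heapq
import math

# Nodes are candidate waypoints; edges are assumed collision-free connections.
nodes = {"start": (0, 0), "a": (4, 1), "b": (2, 5), "c": (6, 4), "goal": (8, 6)}
edges = {"start": ["a", "b"], "a": ["c"], "b": ["c"], "c": ["goal"], "goal": []}

def dist(u, v):
    return math.dist(nodes[u], nodes[v])

def dijkstra(source, target):
    """Classic Dijkstra search over the road-map graph."""
    best, parent = {source: 0.0}, {}
    queue = [(0.0, source)]
    while queue:
        cost, u = heapq.heappop(queue)
        if u == target:
            break
        if cost > best.get(u, math.inf):
            continue
        for v in edges[u]:
            new_cost = cost + dist(u, v)
            if new_cost < best.get(v, math.inf):
                best[v] = new_cost
                parent[v] = u
                heapq.heappush(queue, (new_cost, v))
    # Reconstruct the waypoint sequence from goal back to start.
    path, node = [target], target
    while node != source:
        node = parent[node]
        path.append(node)
    return list(reversed(path)), best[target]

print(dijkstra("start", "goal"))
```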
• Visibility Graph: The visibility graph was one of the earliest road map methods, which consists of straight line segments that connect nodes of polygonal obstacles. It is a complete and optimal path planner that is computable in only two dimensions. To our knowledge, there are no works on applying visibility graphs for rotorcraft path planning except the work presented in Hoffmann, Waslander, & Tomlin (2008). However, their work focused on trajectory generation for a quadrotor where a path is generated by a visibility graph-based algorithm and then smoothed to consider the vehicle's dynamics.
• Voronoi Diagrams: Voronoi diagrams are another popular mechanism for generating a road map from a configuration space (c-space). In this case, the road map consists of Voronoi edges, which are equidistant from all the points in the obstacle region. In contrast to visibility graphs, Voronoi paths are by definition as far as possible from the obstacles. Researchers from NASA and the U.S. Army have developed an obstacle field route planner (OFRP) that is based on Voronoi road maps (Howlett, Schulein, & Mansour, 2004; Howlett, Tsenkov, Whalley, Schulein, & Takahashi, 2007; Whalley et al., 2005). The Voronoi-based OFRP algorithm provides a 2D solution using a four-phase approach: (1) Voronoi graph generation from obstacle edges, (2) graph culling using binary space partitioning, (3) shortest path search using Eppstein's search algorithm, and (4) path smoothing using binary space partitioning again. This path planner has been implemented on the NASA RMAX helicopter and flight tested at the NASA Ames Disaster Assistance and Rescue Team (DART) site. Flights were conducted at a constant altitude of 10 m AGL and at a maximum speed of 5 m/s. However, for these tests, obstacle locations were known beforehand and predefined on the 2D map of the test site. This Voronoi-based planner performed well, but experimental results showed that it has some limitations in the case of a changing obstacle field.
• Probabilistic Road Maps: A much more recent advance in road map methods is the probabilistic road map (PRM) approach, which attempts to make planning in large or high-dimensional spaces tractable. A PRM is a heuristic sample-based approach that discretizes a continuous c-space, resulting in fewer states than the original c-space. In the first stage, the road map is generally generated offline by randomly sampling the larger c-space and then using a local planner to connect collision-free configurations. In the second stage (query phase), the start and goal points are added and a graph search algorithm uses the road map created in the first stage to search through the waypoint nodes in order to find the least-cost path between the start and goal configurations. Some problems with a standard PRM method are inefficiency for narrow confined spaces, slow convergence rate, and dynamic obstacles.
In the framework of the WITAS project, researchers from Linköping University have developed and evaluated a PRM path planner for a robotic helicopter (Pettersson and Doherty, 2004). Their planner uses an OBBTree algorithm for collision checking and A* for graph search. Constraints such as maximum and minimum altitude, no-fly zones, and ascent/descent rate have been introduced during the query phase (the A* search phase) in order to adapt the approach to the RMAX helicopter. Different replanning strategies were presented which modify the path when a change in the sensed environment or mission is detected. The path planner has also been augmented by a trajectory generator that creates time-parameterized reference positions, velocities, and angles. The complete path planner and path-following system have been implemented onboard the helicopter and flight tested. The UAS contains an onboard GIS map that includes a terrain 3D model of the test area, including buildings, trees, and other obstacles, with an accuracy of decimeters. Successful experimental results from different flight scenarios are presented in (Pettersson and Doherty, 2004; Wzorek et al., 2006), where the helicopter was tasked to fly to a designated building and to photograph each of its façades. In a recent paper, Wzorek, Kvarnstrom, & Doherty (2010) investigated the use of machine learning techniques to automatically choose the best possible replanning strategy, but only HIL simulations were presented. Another PRM-based path planner for a robotic helicopter was presented in Hrabar (2008). The road map is constructed based on an occupancy grid representation, and a D* Lite algorithm is used to search for the shortest collision-free path. The proposed approach has been validated in simulations as well as on the CSIRO air vehicle simulator cable-array robot. A variant of the PRM technique has been used in path planning for a small quadrotor UAS navigating in indoor GPS-denied environments. He, Prentice, & Roy (2008) present a belief road map (BRM) path planning algorithm, which is an information or belief-space extension of the probabilistic road map method. The originality of this algorithm is that the graph is constructed in the belief or information space rather than the state space. In doing so, the algorithm allows the vehicle to reach its goal while maintaining reliable localization during path following using an unscented Kalman filter (UKF). This algorithm has been tested using a quadrotor equipped with an IMU and a 4-m-range Hokuyo LIDAR as the main localization sensor. The quadrotor achieved its task by detouring from the shortest path toward areas of high sensor information, successfully reaching its desired goal using LIDAR-based odometry.
One of the rare works on RM-based path planning for an unmanned helicopter with online stereo-based mapping is presented in Andert et al. (2011). The stereo-based mapping system, previously described in Section 5.3.1 (Andert and Adolf, 2009), has been augmented by a real-time path planning algorithm that is based on a quasi-random road map (QRM) approach, an A* algorithm for initial path search, and an AD* algorithm for replanning. Both mapping and path planning algorithms have been implemented onboard the DLR ARTIS helicopter and flight-tested in two scenarios, one with a single obstacle and the other one with buildings in an urban area; see Figure 18.
• Rapidly Exploring Random Trees: The rapidly exploring random tree (RRT) is also a heuristic randomization-based approach, which is a variant of PRM methods. Rather than randomly sampling the configuration space as a PRM does, the planner begins at the start location and randomly expands a graph or tree (a tree here is a directed graph with no cycles) by pushing the search tree away from previously
Figure 18. Example of stereo-based mapping and path planning by the DLR ARTIS helicopter (Andert et al., 2011).
constructed vertices. This allows it to rapidly search large and high-dimensional spaces, thereby outperforming PRM from the efficiency point of view. RRTs are also well suited to the capture of dynamic or nonholonomic constraints and can be immediately extended to allow for moving obstacles. One method for planning in the RRT approach is to grow two RRTs, one from the goal and one from the start, and then search for states that are common to both, creating a linked path between the two. This sample-based path planner has gained much interest, and its application to UAS guidance in unknown environments has been investigated. Frazzoli, Dahleh, and Feron (2002) extended the original RRT algorithm to deal effectively with the system's dynamics, in an environment characterized by fixed and moving obstacles. Different versions of the proposed path planner have been evaluated in simulations using a ground robot and a flying helicopter model in different scenarios. Wzorek and Doherty (2006) compared the performance of the PRM and RRT planners in terms of completeness, planning time, and path length. The helicopter was equipped with a digital map of the environment, and no-fly zones were randomly added during flight by the ground operator. Experimental results showed that the PRM algorithm performs better than the RRT algorithm in most situations if equipped with a suitable precompiled road map. However, the RRT planner is more appropriate for unknown environments, because it can build its road map online. There are also some other works on applying RRT methods to rotorcraft path planning, but only simulation results have been presented (Redding et al., 2007; Yang, Gan, & Sukkarieh, 2010).
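The basic single-tree RRT expansion described above can be sketched in a few lines of Python. The 2D world, circular obstacles, step size, and goal bias below are illustrative assumptions; practical rotorcraft planners operate in higher-dimensional state spaces and respect vehicle dynamics.

```python
import math
import random

random.seed(1)
OBSTACLES = [((5.0, 5.0), 1.5), ((8.0, 3.0), 1.0)]   # (center, radius), illustrative

def collision_free(p):
    return all(math.dist(p, c) > r for c, r in OBSTACLES)

def steer(p_from, p_to, step=0.5):
    """Move from p_from toward p_to by at most one step length."""
    d = math.dist(p_from, p_to)
    t = min(1.0, step / d) if d > 0 else 0.0
    return (p_from[0] + t * (p_to[0] - p_from[0]),
            p_from[1] + t * (p_to[1] - p_from[1]))

def rrt(start, goal, bounds=(0.0, 10.0), iters=2000, goal_tol=0.5):
    """Grow a tree from the start by repeatedly steering toward random samples."""
    parents = {start: None}
    for _ in range(iters):
        sample = goal if random.random() < 0.1 else (
            random.uniform(*bounds), random.uniform(*bounds))
        nearest = min(parents, key=lambda n: math.dist(n, sample))
        new = steer(nearest, sample)
        if collision_free(new):
            parents[new] = nearest
            if math.dist(new, goal) < goal_tol:
                path = [new]
                while parents[path[-1]] is not None:
                    path.append(parents[path[-1]])
                return list(reversed(path))
    return None

print(len(rrt((1.0, 1.0), (9.0, 9.0)) or []))
```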
6.1.2. Potential Fields
The potential fields (PF) approach is another major type of representation used in path planning, which was first introduced by Khatib and Mampey (1978) for manipulator control. This method treats the robot as a point under the influence of force fields generated by the goals and obstacles in the world. Obstacles generate repulsive forces that repel the vehicle and goals generate attractive forces; see Figure 19. These methods are characterized by their low computational complexity and easy implementation in real time. However, they are incomplete, because such approaches are prone to local minima. Much of the effort of adapting potential fields has been spent in overcoming this local-minima problem by generating so-called navigation functions such as depth-first and best-first techniques, wavefront-based methods, and harmonic potential functions. An interesting work on implementing and flight-testing a variant of a PF-based path planning algorithm on a robotic helicopter is presented in Scherer et al. (2008). The authors addressed the problem of flying relatively fast at low altitudes in cluttered environments relying only on online LIDAR-based sensing and perception. Their approach combines a slower global path planner that continuously replans the path to the goal based on the perceived environment with a faster local collision avoidance algorithm that ensures that the vehicle stays safe. The global planner is based on an implementation of Laplace's equation that generates a potential function with a unique minimum at the goal. A binary occupancy grid and goal points are used to assign boundary conditions for Laplace's equation.
Figure 19. Path planning and obstacle avoidance by the CMU robotic helicopter using 3D LIDAR (Scherer et al., 2008).
The local path planner uses Fajen and Warren's model, where repulsive and attractive forces depend not only on distances (as in the original PF approaches) but also on angles (headings) relative to obstacles. Based on spherical representation of 3D obstacles, the reactive guidance law generates two steering rates in the azimuth and elevation directions. By constraining the lateral velocity to zero and the magnitude of the velocity vector to some defined value, the steering rates are mapped to a heading rate (horizontal avoidance) and a vertical velocity. The complete control–navigation–guidance system has been implemented on a Pentium M processor (1.6 GHz) onboard the CMU Yamaha RMAX helicopter. A custom 3D LIDAR unit from Fibertek Inc. was used as the main perception sensor. The system has been extensively flight-tested in different sites with different obstacles and flight speeds. More than 700 successful obstacle avoidance runs were performed where the helicopter autonomously avoided buildings, trees, and thin wires; see Figure 19.
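For reference, a textbook attractive/repulsive potential field step is sketched below in Python. It is a generic illustration of the PF idea, not the Laplace-equation global planner or the Fajen and Warren local law used in the work above; the gains and influence radius are arbitrary.

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=2.0, d0=3.0):
    """One step of a basic attractive/repulsive potential field.
    Returns a unit direction the vehicle should move along."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                   # attractive pull toward the goal
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:                          # repulsion only inside radius d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    norm = np.linalg.norm(force)
    return force / norm if norm > 1e-9 else np.zeros_like(force)

# Integrate the descent direction from the origin toward a goal behind two obstacles.
pos = np.array([0.0, 0.0])
for _ in range(200):
    pos = pos + 0.1 * potential_field_step(pos, goal=[10.0, 8.0],
                                           obstacles=[[5.0, 4.0], [7.0, 6.0]])
print(pos)
```

The well-known weakness mentioned in the text is visible in this form: if the attractive and repulsive forces cancel, the vehicle stalls in a local minimum, which is why navigation functions or a global planner are usually layered on top.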
6.1.3. Heuristic Search Algorithms
Most approaches to rotorcraft path planning problems are based on heuristic concepts, because of their ability to provide sufficiently good performance at a low computational cost. Heuristic search algorithms (HSA) use a heuristic, which is a rule for making a guess as to which path moves the vehicle closer to the goal, yielding efficient but suboptimal planning. HSA tend to minimize route cost on a graph, as in RM approaches, or directly on a regular grid or any other world representation. These approaches can then be considered as a middle ground between heuristic and optimization methods. A* and D* algorithms, with their variants, are probably the most used HSA in robotics. The A* algorithm evaluates the goodness of each node or cell by combining two metrics, the distance from the start and the estimated distance to the goal. D* is an incremental HSA that can replan the path in real time when changes occur in the environment. It uses local sensor information to continuously update the map and dynamically repair the A* paths affected by the change in the map.
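A minimal A* search over a 4-connected occupancy grid, using the two metrics just described (cost from the start plus a Manhattan-distance heuristic), is sketched below. The grid and costs are illustrative; the cited planners operate on richer terrain representations.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = count()                       # tiebreaker so the heap never compares cells
    open_set = [(h(start), next(tie), 0, start)]
    parents, g_best = {start: None}, {start: 0}
    while open_set:
        _, _, g, cell = heapq.heappop(open_set)
        if cell == goal:
            path = [cell]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return list(reversed(path))
        if g > g_best[cell]:
            continue
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    parents[nxt] = cell
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```

D* and its variants keep the search data structures between queries so that only the portion of the path affected by a map change needs to be repaired, rather than rerunning a search like the one above from scratch.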
A significant milestone in path planning for RUAS was achieved by NASA/Army researchers (Tsenkov et al., 2008; Whalley et al., 2009), who developed two 3D path planners based on heuristic planning concepts and the A* search algorithm. The first one, called the plane-slicing 3D route planner, uses multiple invocations of the standard 2D A* grid search algorithm on a set of obstacle edges obtained by slicing the terrain representation20 with a horizontal, a vertical, or an off-axis plane.
20 A hierarchical quadtree is used along with the obstacle proximity map that is constructed by Bresenham rasterization of line segments and circles on a regular grid.
It first obtains a horizontal route between the origin and the goal points and then optimizes the path by replacing portions of it with up-and-over
Figure 20. Architecture of the U.S. Army/NASA Obstacle Field Navigation (OFN) system and example of flight paths at two
different sites (Whalley et al., 2009).
shortcut hops. The final obtained 3D route is thus composed of a series of 2D pieces or paths. The other 3D route planner executes the A* search algorithm on a height map. It plans directly in 3D and requires a single invocation of the A* search algorithm. The proposed route planner is a 3D extension of a conventional 2D A* algorithm on a height map extracted from the terrain representation. The A*-generated route is contained on the surface of that height map, which has been inflated with vertical and horizontal safety factors. Originally minimizing route length, the algorithm has been reworked to minimize flight time by accounting for speed variation caused by horizontal and vertical maneuvering. For both path planners, the route is refined and a trajectory generator is used to fit a series of Kochanek–Bartels splines through the route waypoints, resulting in a smoothed trajectory that remains within the operational corridor and dynamic limits of the vehicle.
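The trajectory-smoothing step can be illustrated with a generic Kochanek–Bartels (tension/continuity/bias) spline sampler in Python. The waypoints and parameter values are invented for the example; this is not the flight software of the cited system.

```python
import numpy as np

def kb_tangents(P, tension=0.0, continuity=0.0, bias=0.0):
    """Outgoing and incoming Kochanek–Bartels tangents at each waypoint
    (endpoints are handled by duplicating the first and last points)."""
    P = np.asarray(P, float)
    Pm = np.vstack([P[0], P])[:-1]          # P[i-1], first point repeated
    Pp = np.vstack([P, P[-1]])[1:]          # P[i+1], last point repeated
    t, c, b = tension, continuity, bias
    out = 0.5 * ((1-t)*(1+c)*(1+b) * (P - Pm) + (1-t)*(1-c)*(1-b) * (Pp - P))
    inc = 0.5 * ((1-t)*(1-c)*(1+b) * (P - Pm) + (1-t)*(1+c)*(1-b) * (Pp - P))
    return out, inc

def kb_spline(P, samples_per_segment=20, **tcb):
    """Sample a smooth trajectory through waypoints P with cubic Hermite
    segments whose tangents follow the tension/continuity/bias parameters."""
    P = np.asarray(P, float)
    out, inc = kb_tangents(P, **tcb)
    u = np.linspace(0.0, 1.0, samples_per_segment)[:, None]
    h00, h10 = 2*u**3 - 3*u**2 + 1, u**3 - 2*u**2 + u
    h01, h11 = -2*u**3 + 3*u**2, u**3 - u**2
    segments = [h00*P[i] + h10*out[i] + h01*P[i+1] + h11*inc[i+1]
                for i in range(len(P) - 1)]
    return np.vstack(segments)

waypoints = [(0, 0, 10), (20, 5, 12), (35, 25, 15), (60, 30, 12)]
trajectory = kb_spline(waypoints, tension=-0.2)
print(trajectory.shape)
```

In practice the sampled trajectory would then be checked against the inflated terrain map and the vehicle's speed and acceleration limits before being sent to the flight controller.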
The developed obstacle field navigation (OFN) system was evaluated in Monte Carlo simulations as well as in flight experiments. After the OFN system was tuned and validated in simulation, it was implemented on the Yamaha RMAX helicopter and demonstrated in a number of real obstacle avoidance scenarios. As described in Takahashi et al. (2008), a spinning SICK LIDAR was used as the main perception sensor. Initially, 91 runs with a total duration of 44 h were flown in a virtual obstacle setup. In the second phase, more than 125 flight tests were performed at the DART site and the Fort Ord MOUT site with real natural and manmade obstacles at speeds that ranged from 1 to 4 m/s. The rotorcraft was able to fly a sequence of waypoints, sense the terrain, build a map online, replan the route, and follow the path simultaneously, with all processing performed onboard the helicopter. The OFN system was also successfully flown with a 5-m slung load on several occasions. Experimental results have also shown that the A* 3D planner provides similar or better overall performance, compared to the plane-slicing 3D route planner, at a small fraction of the computational cost. Figure 20 shows two typical test runs at different sites.

6.1.4. Optimization Methods
Optimization methods (OM) consider the path planning problem as a numerical optimization problem where constraints on the problem (obstacles, vehicle dynamic limits, mission constraints, etc.) are translated into mathematical relationships. These methods are attractive because they theoretically produce optimal solutions and can also directly generate paths that consider the vehicle's dynamics. However, they are computationally expensive, especially when many constraints need to be satisfied.
The computational cost of optimization-based methods is frequently reduced by employing receding-horizon control techniques, or by using heuristics to simplify the optimization-based solution. Three major optimization methods have been investigated for designing path planning algorithms for RUAS. They are mixed integer linear programming (MILP), receding horizon control (RHC) or model predictive control (MPC), and motion primitive automaton (MPA). Because these methods are well described in the survey (Goerzen et al., 2010), we will not present them here. Nevertheless, it is important to mention that some optimization-based path planners have been demonstrated in real time using actual RUAS. Indeed, MILP and RHC have been successfully applied for path planning for an indoor quadrotor (Culligan, Valenti, Kuwata, & How, 2007) and a small indoor helicopter (Mettler, Dadkhah, & Kong, 2010; Mettler, 2010), using a VICON system for navigation. The BEAR team from the University of California, Berkeley has successfully flight-tested an obstacle avoidance system that uses a LIDAR-based perception system and a hierarchical MPC-based path planner and controller (Shim et al., 2006); see Figure 12. Mettler, Kong, Goerzen, and Whalley (2010) presented a benchmarking framework for evaluating and comparing path planning algorithms for UAS. It is based on generating near time- or energy-optimal trajectories (using nonlinear and dynamic programming) that are then used as performance baselines to evaluate different algorithms. In a more recent paper (Kong & Mettler, 2011), an extension to that method is proposed based on spatial cost-to-go (SCTG) maps. The concept of SCTG maps allows studying the effects imposed by different airframe configurations and performance criteria. The paper also presented the simulation results obtained from applying this framework to a basic goal-directed guidance task in an urban environment for three small UAS types (fixed-wing aircraft, helicopter, and quadrotor).

6.1.5. Planning under Uncertainties
Uncertainties in localization, sensing, and control are unavoidable for the UAS planning problem. The common approach generally used to deal with uncertainties consists of treating them as a deterministic worst case by introducing a conservative safety corridor. However, there are some works that consider uncertainties directly in the planning algorithm. For example, Davis and Chakravorty (2007) formulated the problem of PUU as the adaptive control of an uncertain Markov decision process, consisting of a known control-dependent system state and an unknown control-independent environment. The feasibility of the approach was illustrated by testing it on an unmanned helicopter navigating through an urban environment in a 6-DOF flight simulation. PUU was also addressed in He et al. (2008) for autonomous indoor exploration using a quadrotor vehicle. A belief road map (BRM) path planning algorithm, which is a belief-space extension of the probabilistic roadmap method, was developed and flight-tested in different indoor environments. In He, Bachrach, and Roy (2010), the same group proposed a probabilistic planner for tracking targets in the presence of uncertainties in target pose. The PUU problem was treated using a multimodal Gaussian representation of the agent's beliefs and a forward-search algorithm. Experimental results for a quadrotor tracking small vehicles indoors were presented.
To allow RUAS to operate with a high degree of reliability and safety in the presence of many sources of uncertainties, NASA researchers have developed a new 3D path planning algorithm based on risk minimization (Goerzen & Whalley, 2011). The proposed planner (called RiskMinOFN) converts a 3D occupancy grid into a dynamic risk map by combining the various risk factors, such as sensing uncertainty, tracking error, uncertainty in helicopter position, quantization errors, altitude penalty, and other known risks such as mechanical failures. The RiskMinOFN relies on dynamic programming and the A* algorithm to compute the minimum cost-to-go function over the field of view of the sensor, which is used as a navigation function. The velocity command module uses this 3D navigation function to generate reference velocities directly for the flight controller, causing the vehicle to move in the direction of the navigation function gradient. This path planner was first validated in simulations against the benchmarks described in Mettler et al. (2010), and then implemented on the NASA RMAX helicopter and demonstrated in urban and natural environments. The experimental results have shown that RiskMinOFN is useful for obstacle avoidance in the presence of uncertainties.
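The cost-to-go navigation-function idea behind planners of this kind can be illustrated with a simple 2D Python sketch: a Dijkstra-style sweep turns a risk grid into a minimum cost-to-go field, and the vehicle's velocity command follows the local descent direction of that field. The grid, risk values, and gains are illustrative assumptions, not the RiskMinOFN implementation.

```python
import heapq
import numpy as np

def cost_to_go(risk, goal):
    """Dijkstra-style minimum cost-to-go over a 2D risk grid, a crude stand-in
    for a dynamic-programming navigation function."""
    ctg = np.full(risk.shape, np.inf)
    ctg[goal] = 0.0
    heap = [(0.0, goal)]
    while heap:
        c, (r, col) = heapq.heappop(heap)
        if c > ctg[r, col]:
            continue
        for nr, nc in ((r+1, col), (r-1, col), (r, col+1), (r, col-1)):
            if 0 <= nr < risk.shape[0] and 0 <= nc < risk.shape[1]:
                cand = c + 1.0 + risk[nr, nc]
                if cand < ctg[nr, nc]:
                    ctg[nr, nc] = cand
                    heapq.heappush(heap, (cand, (nr, nc)))
    return ctg

def velocity_command(ctg, cell, speed=1.0):
    """Move toward the neighbor with the smallest cost-to-go (discrete gradient)."""
    r, c = cell
    neighbors = [(nr, nc) for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1))
                 if 0 <= nr < ctg.shape[0] and 0 <= nc < ctg.shape[1]]
    best = min(neighbors, key=lambda p: ctg[p])
    return speed * np.array([best[0] - r, best[1] - c], float)

risk = np.zeros((20, 20))
risk[5:15, 10] = 50.0                 # a high-risk wall the commanded motion detours around
ctg = cost_to_go(risk, goal=(18, 18))
print(velocity_command(ctg, (2, 2)))
```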
6.1.6. Reactive and Bio-inspired Obstacle Avoidance Methods
Most of the motion planning methods previously presented can be considered as global or local path planners where a global or local representation (map) of the environment is built. These path planners are generally computationally expensive, and their implementation onboard mini and micro RUAS is challenging. On the other hand, reactive obstacle avoidance algorithms run very quickly and are crucial for preventing last-minute collisions. These characteristics motivated many researchers to investigate the implementation of reactive and bio-inspired obstacle avoidance methods (RBIOAM) onboard RUAS as main guidance systems or in combination with global and local path planning algorithms.
• LIDAR-Based Reactive Obstacle Avoidance: In a recent work, Merz and Kendoul (2011) have demonstrated the effectiveness of a LIDAR-based reactive obstacle avoidance system for infrastructure inspection by an autonomous helicopter. The perception system uses a COTS 2D LIDAR (Hokuyo) in combination with two special helicopter flight modes (Pirouette descent and Waggle cruise) to address the problem of 3D perception. Simple but effective
[Figure 21 panels: the helicopter during autonomous inspection of a windmill (flight path with inspection points 1–5, descent point, and obstacle/tree avoidance); LIDAR-based obstacle avoidance using a wall-following algorithm; LIDAR-based terrain following (helicopter height above the terrain profile, altitude in m).]
Figure 21. Obstacle avoidance and infrastructure inspection beyond visual range by the CSIRO autonomous helicopter using a
vertically mounted Hokuyo LIDAR (Merz & Kendoul, 2011).
reactive strategies were used to detect the ground and regulate the height, perform terrain-following, detect and avoid frontal obstacles, detect and stop in front of the target, and take high-resolution pictures from a specified viewing angle. This system has been implemented onboard the CSIRO robotic helicopter using the real-time extended state machine (ESM) framework (Merz, Rudol, & Wzorek, 2006). Several flight tests of the perception and guidance systems have been performed in different environments, with a total of 14 h of autonomous flight to date. This system has also been successfully used to perform an autonomous inspection of a windmill beyond the visual range and without a safety pilot; see Figure 21. In another work from the same group, Hrabar (2011) developed and flight-tested a 3D reactive obstacle avoidance algorithm using a Hokuyo LIDAR and a stereo camera. This algorithm finds an escape point (intermediary waypoint) by checking collisions within a cylindrical safety volume in a 3D occupancy map representation of the environment. The conducted research also included the evaluation and comparison of LIDAR and stereo vision systems for RUAS real-time perception. Successful collision avoidance results are presented from flight experiments with a CSIRO autonomous helicopter equipped with stereo and LIDAR sensors.
• Vision-Based Bio-inspired Reactive Obstacle Avoidance: Visual bio-inspired reactive methods have been popular because of their simplicity and low weight requirement. Furthermore, their effectiveness in nature has been proven in many flying insects. There are numerous ongoing works on applying insect-inspired approaches to obstacle avoidance in small UAS. In these methods, optic flow is directly used to compute an avoidance maneuver, yielding obstacle avoidance without map building and path planning. As described in Section 5.2.2, some interesting results have been obtained for terrain following (Ruffier & Franceschini, 2005; Garratt & Chahl, 2008; Herisse et al., 2010) and obstacle avoidance (Conroy, Gremillion, Ranganathan, & Humbert, 2009; Hrabar & Sukhatme, 2009; William et al., 2008; Zufferey & Floreano, 2006), when implemented on small UAS. These methods are very powerful and provide an interesting alternative for both perception and guidance onboard mini
Figure 22. Architecture of the mission planning and execution monitoring system developed at Linköping University (Doherty,
Kvarnstrom, & Heintz, 2009).
UAS with limited payload. However, the problem of robust optic flow computation in real time and obstacle detection is still a challenge. Moreover, these techniques need to be combined with some goal-oriented guidance to reach the assigned destination or to achieve the mission.
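The kind of insect-inspired control rule referred to above can be sketched with two toy Python functions: one balances the optic flow seen on the left and right of the image to produce a yaw-rate command, and one regulates the ventral flow for terrain following. The gains, set points, and sign conventions are illustrative assumptions and do not reproduce any of the cited controllers.

```python
import numpy as np

def flow_balance_yaw_rate(left_flow, right_flow, gain=1.5, max_rate=0.8):
    """Turn away from the side with the larger average optic-flow magnitude
    (closer obstacles produce faster image motion)."""
    left = float(np.mean(np.abs(left_flow)))
    right = float(np.mean(np.abs(right_flow)))
    yaw_rate = gain * (left - right) / (left + right + 1e-6)
    return float(np.clip(yaw_rate, -max_rate, max_rate))   # rad/s command

def ventral_flow_height_hold(ventral_flow, forward_speed, target_ratio=0.4, gain=0.5):
    """Terrain-following rule: regulate the ventral-flow-to-speed ratio by
    commanding a climb rate when the ground appears to move too fast."""
    ratio = ventral_flow / max(forward_speed, 1e-6)
    return gain * (ratio - target_ratio)                    # m/s climb command

# Obstacle close on the right gives stronger right-side flow, hence a turn away
# from it (negative rate with the sign convention assumed here).
print(flow_balance_yaw_rate(np.full(50, 0.2), np.full(50, 0.6)))
print(ventral_flow_height_hold(ventral_flow=0.9, forward_speed=1.5))
```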
6.2. Mission Planning and High-Level Autonomy
As mentioned in the beginning of this section, guidance systems for UAS are not limited to path planning only. Research with UAS is reaching a new degree of sophistication and high-level autonomy where guidance systems include task-oriented planners and deliberative capability. This is an active research topic for ground robots that includes disciplines such as cognitive robotics, reasoning, and artificial intelligence. Mission planning for RUAS is defined in Section 3.2; it is mainly concerned with the selection of locations to visit (goal waypoints or trajectories) and other vehicle actions (loading/dropping a load, acquiring information), typically over a long time horizon and beyond sensing range. Functionally, mission planning resides above the process of path planning, where, for example, the mission planner generates a desired flight plan21 and the path planner generates obstacle-free trajectories between the waypoints. A RUAS with mission planning capabilities is considered to have an autonomy level 6, according to the ALFURS chart presented in Figure 4. This section provides only a very short overview of mission planning and high-level autonomy systems that have been developed and implemented on single RUAS. Section 6.3 presents more work about multi-RUAS mission planning.
21 A series of waypoints or trajectories through which the vehicle must pass in order to collect specific information for the mission.
Although there were some attempts to implement sophisticated RUAS guidance systems that achieve mission planning and high-level autonomy, very few experimental results have been reported in the literature. Doherty et al. (2009) presented an architectural framework for mission planning and execution monitoring and its integration into a fully deployed unmanned helicopter, Figure 22. Planning and monitoring use the same logic formalism, a temporal action logic (TAL) for reasoning about actions and changes. The system has been implemented on the Yamaha RMAX helicopter and "partially" demonstrated in the context of an emergency services scenario. The first phase (body identification and flight-related monitor formulas) was achieved through real autonomous flights, whereas the second phase (package delivery and execution monitoring) has been evaluated through HIL simulations. In the framework of the NASA/Army autonomous rotorcraft project, a sophisticated guidance system has been developed for the autonomous surveillance planning problem for
Figure 23. The guidance system developed by U.S. Army/NASA for autonomous surveillance planning (Whalley et al., 2005).
multiple and varying targets of interest (Whalley et al., 2005). High-level autonomy is provided by Apex,22 the intelligent behavior component of the guidance system, which generates mission plans using a decision-theoretic approach. It is based on three main layers: the deliberative layer (periodic surveillance planning), the goal executive layer (plan execution, monitoring, and human interaction management), and skills (autopilot and payload controllers); see Figure 23. This guidance system has been integrated into an RMAX robotic helicopter and flight-tested in more than 240 scenarios with comparative analysis of 2-Opt versus human in directing a surveillance mission. A similar project, called ReSSAC,23 was carried out by the French Aerospace Lab (ONERA) with the objective of developing and demonstrating architectures and algorithms for high-level autonomy (decision-making and mission management) and information processing onboard RUAS in a search and rescue scenario (Fabiani et al., 2007). A decisional autonomy architecture for an exploration mission has been developed based on the idea of decomposing the mission into a sequence of tasks or macro-actions associated with rewards. The problem has been modeled using the Markov decision process (MDP) framework, and a symbolic focused dynamic programming (SFDP) algorithmic scheme for mission planning has been developed. Teichteil-Konigsbuch and Fabiani (2007) extended the guidance system and described its implementation onboard the RMAX robotic helicopter. The system has been tested in an exploration scenario where the mission consists of exploring an area, detecting an injured person, and selecting a suitable safe landing area. A mission management system for RUAS was also developed at DLR (Germany) using a combination of the 3T layered architecture (deliberative layer, executive/sequencer layer, and reactive layer) with ideas from the behavior-based paradigm (Adolf & Andert, 2010; Adolf & Thielecke, 2007). The developed mission management system has been integrated onboard the ARTIS helicopter and validated in simulations (Adolf & Thielecke, 2007) as well as in real flight tests (Adolf & Andert, 2010) in a number of scenarios, including waypoint following and a search and track mission.
22 Apex is a NASA reusable autonomy software architecture that integrates artificial intelligence capabilities for monitoring events and for reactively selecting, scheduling, and controlling actions; see http://ti.arc.nasa.gov/projects/apex/projectARP.php for more details.
23 Search and Rescue by Cooperative Autonomous System.
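The mission-as-MDP idea mentioned above (macro-actions with rewards, solved by dynamic programming) can be illustrated with a toy value-iteration example in Python. The states, actions, transition probabilities, and rewards are entirely invented for illustration; the cited SFDP planner uses a symbolic, factored representation rather than this enumerated form.

```python
# A toy mission MDP: states are abstract mission phases, actions are macro-actions.
states = ["search", "target_found", "landed"]
actions = ["explore", "descend"]
# P[s][a] = list of (probability, next_state, reward); values are illustrative.
P = {
    "search":       {"explore": [(0.7, "search", -1.0), (0.3, "target_found", 5.0)],
                     "descend": [(1.0, "search", -5.0)]},
    "target_found": {"explore": [(1.0, "target_found", -1.0)],
                     "descend": [(0.9, "landed", 20.0), (0.1, "search", -2.0)]},
    "landed":       {"explore": [(1.0, "landed", 0.0)],
                     "descend": [(1.0, "landed", 0.0)]},
}

def value_iteration(gamma=0.95, iters=200):
    """Standard value iteration; returns the value function and a greedy policy."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                    for a in actions) for s in states}
    policy = {s: max(actions,
                     key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]))
              for s in states}
    return V, policy

print(value_iteration()[1])
```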
6.3. Multi-Rotorcraft Unmanned Aircraft Systems Coordination and Cooperation
Cooperation between vehicles extends their capabilities beyond those of a single vehicle and offers exciting new possibilities for performing a wide range of applications
efficiently and with greater fault tolerance and flexibility. The RUAS capability to coordinate and cooperate, as defined in Definition 10, with other RUAS or unmanned systems in general will increase its autonomy level according to the ALFURS chart in Figure 4. As for the other autonomy-enabling functions, RUAS cooperation also has different levels. It may involve control, navigation, guidance, or all of them, and may be centralized or distributed (decentralized).

Definition 10. In this survey, we make a distinction between coordination and cooperation. RUAS coordination involves low-level interactions for task or maneuver execution, mainly for sharing space, including collision avoidance and formation flight. On the other hand, cooperation or collaboration occurs at a higher level and may require, in addition, that the RUAS work toward a common goal or mission by sharing data and controlling actions together.

Although many researchers have examined algorithms for UAS coordination and cooperation, few have included experimental results demonstrating the implementation of these algorithms on RUAS. The multi-RUAS coordination and cooperation problem has been approached in a variety of ways in the past. However, proposed architectures and algorithms for RUAS can be regrouped into three main categories: (1) flight control coordination (control level), (2) cooperative perception (perception level), and (3) cooperative mission planning and decision making (mission level).

6.3.1. Coordinated Flight Control
In the literature, much of the research on multi-RUAS flight control coordination has been primarily concerned with formation flight, collision avoidance, and load transportation. The coordination problem was formulated as an optimal control problem (Gillula, Hoffmann, Huang, Vitus, & Tomlin, 2011; Hoffmann et al., 2009; Nonami et al., 2010; Schouwenaars, Feron, & How, 2006; Shaw et al., 2007; Shim, Kim, & Sastry, 2003a) or as a distributed control problem (Bernard and Kondak, 2009; Michael, Fink, & Kumar, 2011; Yun, Chen, Lum, & Lee, 2010). Formation flight of two unmanned helicopters was demonstrated in real flights (Nonami et al., 2010, Chapter 9) using a leader–follower configuration and linear MPC for position control of the follower helicopter. Other constraints such as communication range and collision avoidance were also considered in the MPC calculations. The follower MPC algorithm was implemented off board and used state estimates of the leader helicopter. Decentralized nonlinear MPC, enforced with potential functions, was formulated for rotorcraft formation flight and collision avoidance and presented in Shim et al. (2003a). Later, this nonlinear MPC-based algorithm was extended or modified to compute and execute a plausible emergency evasive maneuver or trajectory based on the RUAS constraints and the predicted future path of other vehicles to avoid. For validation, flight tests were performed using two unmanned helicopters (Yamaha R-50) on a head-on collision course, which broadcast their positions. Experimental results are presented in Shim and Sastry (2006) for a decentralized architecture and in Shim and Sastry (2007) for a centralized architecture. Successful results about coordinated flight of two unmanned helicopters using an optimization approach are also reported in Schouwenaars et al. (2006). Coordinated multivehicle connectivity-constrained trajectory optimization was tackled using both centralized and distributed receding-horizon planning and a MILP framework. Flight test results are presented for a centralized two-helicopter mission where the lead and relay helicopters are controlled in such a way that indirect line-of-sight connectivity between the leader and the ground station is always maintained. Other concepts and algorithms using optimal control for multi-RUAS control were demonstrated on the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC) platform. STARMAC is composed of six quadrotors, each with its own onboard sensing and control. Hoffmann et al. (2009) briefly described the developed algorithms for multi-RUAS coordination and cooperation and provided references for more details and information. The demonstrated algorithms include two different decentralized cooperative collision-avoidance algorithms, one using rules derived from optimal control and reachability analysis, and the other based on a nonlinear, nonconvex optimization program and a Nash bargaining cost metric. In a more recent work (Gillula et al., 2011), a combination of hybrid decomposition and reachable set analysis was used to design collision avoidance algorithms for multiple unmanned vehicles. The algorithms use a distributed optimal switching control strategy, derived from a dynamic game formulation, to command avoidance action (transition from nominal control to collision avoidance) when any vehicle was on the boundary of the computed avoid sets. Proposed controllers were implemented on STARMAC and demonstrated in real time. Experiments were flown using both two- and four-vehicle fleets, where each quadrotor broadcasts its states to all other quadrotors.
The UAV research group at the National University of Singapore has successfully demonstrated formation flight of two small unmanned helicopters using a nonoptimization approach (Yun et al., 2010). Simple equations of motion were derived to model the leader–follower pattern in a fixed geometrical formation. A dynamic inversion technique was used to design a controller for the follower helicopter to maintain a fixed distance from the leader while tracking its prescribed reference trajectory. This control scheme has been validated using a manually piloted leader helicopter and a follower helicopter.
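A purely kinematic toy version of leader–follower formation keeping is sketched below in Python: the follower tracks a fixed offset expressed in the leader's body frame using a proportional velocity command. The offset, gain, and frame conventions are assumptions for illustration; the cited works use MPC or dynamic-inversion controllers acting on the full vehicle dynamics.

```python
import numpy as np

def follower_velocity(leader_pos, leader_yaw, follower_pos,
                      offset_body=(-5.0, 3.0), gain=0.8, v_max=4.0):
    """Track a fixed slot in the leader's body frame with a saturated P law."""
    c, s = np.cos(leader_yaw), np.sin(leader_yaw)
    R = np.array([[c, -s], [s, c]])                 # leader body -> world rotation
    target = np.asarray(leader_pos) + R @ np.asarray(offset_body)
    v_cmd = gain * (target - np.asarray(follower_pos))
    speed = np.linalg.norm(v_cmd)
    return v_cmd if speed <= v_max else v_cmd * (v_max / speed)

# Follower closes on a slot 5 m behind and 3 m beside a leader flying east at 2 m/s.
leader, follower = np.array([0.0, 0.0]), np.array([-20.0, 10.0])
for _ in range(100):
    leader = leader + np.array([2.0, 0.0]) * 0.1
    follower = follower + follower_velocity(leader, 0.0, follower) * 0.1
print(np.round(follower - leader, 2))
```

With a proportional law of this kind the follower converges to the slot with a small steady-state lag proportional to the leader's speed, which is one reason the cited controllers add feedforward or predictive terms.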
An interesting application of coordinated flight is load transportation by multiple RUAS. Unlike the case in the abovementioned
Figure 24. The AWARE multi-RUAS platform during slung load transportation and sensor node deployment missions (Bernard & Kondak, 2009).
works, where the interactions between RUAS were essentially information exchange, joint transportation of a single load requires physical coupling between the vehicles. In the framework of the AWARE24 project, researchers from TUB (Germany) developed a generic system for slung load transportation using multiple small unmanned helicopters (Bernard & Kondak, 2009). The focus was on the flight control system, which is based on the dynamics model of the whole system (all RUAS and load). The proposed distributed control scheme includes two control loops: (1) an inner loop (attitude controller) for each helicopter that accounts for the complete dynamics of the whole system, and (2) an outer loop (translation controller) that only accounts for the translational dynamics of the whole system and depends on the number of helicopters. This algorithm was validated in real flight experiments, where a load of 4 kg was transported by three identical unmanned helicopters flying in a triangular formation, Figure 24. A similar system for load manipulation and transportation by indoor quadrotors was developed and demonstrated at the University of Pennsylvania (Michael et al., 2011). The problem was addressed by solving the inverse or direct kinematics problem based on a mathematical model that captures the kinematic constraints and the mechanics underlying stable equilibria of the whole underactuated system. The designed flight controller was augmented by a simple potential field algorithm to avoid interrobot collisions. By using a VICON motion capture system for state estimation, a 6-DoF load was transported and manipulated using three quadrotors. In recent years, several other projects have been launched with the objective of developing swarms of cooperative mini RUAS, such as the AirShield (airborne remote sensing for hazard inspection by network enabled lightweight drones) project, the SUAAVE (sensing, unmanned, autonomous aerial vehicles) project, and the sFly (swarm of micro flying robots) project.
24 http://grvc.us.es/aware/.

Observation 10. Not all coordinated flight controllers presented in this section are purely based on a control-theoretic approach. Some of them are augmented with simple coordinated trajectory-generation subsystems that generate an appropriate reference trajectory for each vehicle.

6.3.2. Cooperative Perception
Cooperative perception is beneficial for multi-UAS systems, but it presents many challenging issues. One of the main projects that addresses multi-UAS cooperative perception is probably the COMETS25 project (Ollero, Lacroix, Merino, et al., 2005; Ollero and Maza, 2007).
25 The COMETS (real-time coordination and control of multiple heterogeneous unmanned aerial vehicles) project was funded by the European Commission.
The objective of COMETS was to design and implement a system for cooperative activities using heterogeneous UAS such as unmanned helicopters and blimps. Merino et al. (2006) presents results of field experiments on fire detection, localization, and monitoring with two unmanned helicopters and an automated blimp cooperating. The architecture of the cooperative system consists of (1) partially distributed low-level image-processing functions (image stabilization, segmentation, georeferencing), which are dedicated to the
various sensors (visual and infrared cameras, fire sensor), and (2) two centralized subsystems that fuse processed data from the different vehicles for fire detection/alarm confirmation, localization, and monitoring. When a fire alarm is detected and localized, the mission is replanned, and more UAS are sent to confirm the alarm. Field experiments were performed and tasks related to fire search, fire confirmation, and fire observation were successfully achieved. Multi-RUAS cooperative perception was also addressed in the AWARE project (Maza, Kondak, Bernard, & Ollero, 2010). The Aerospace Control Lab (ACL) at MIT has been very active in studying and testing multi-RUAS cooperation using the RAVEN26 platform (How et al., 2008). In the cooperative perception area, they have addressed the problem of persistent vision-based search and track using multiple UAS (Bethke, 2007). The cooperative perception problem was addressed in the bearings-only estimation framework, where multiple bearing measurements from different vehicles are combined to estimate the target position and velocity. The method uses an optimization technique to combine the instantaneous observations of all UAS, allowing rapid and accurate estimation. A RAVEN platform was used to conduct different experiments, including cooperative tracking of a small ground vehicle by two quadrotors, cooperative tracking of a flying vehicle (quadrotor) by two quadrotors, and a persistent search and track mission for three quadrotors.
26 RAVEN is an indoor multivehicle test bed composed of fixed- and rotary-wing vehicles and a VICON motion capture system.
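The bearings-only fusion idea mentioned above, combining simultaneous bearing measurements from several vehicles to fix a target, reduces in its simplest static 2D form to a small least-squares triangulation. The Python sketch below illustrates that reduction; the observer positions, noise level, and 2D setting are illustrative assumptions rather than the cited estimator.

```python
import numpy as np

def triangulate(observer_positions, bearings):
    """Least-squares intersection of 2D bearing rays from several vehicles.
    Each bearing is the world-frame azimuth (rad) from an observer to the target."""
    A, b = [], []
    for (x, y), theta in zip(observer_positions, bearings):
        # A ray with direction (cos t, sin t) constrains the target to the line
        # -sin(t) * (tx - x) + cos(t) * (ty - y) = 0.
        A.append([-np.sin(theta), np.cos(theta)])
        b.append(-np.sin(theta) * x + np.cos(theta) * y)
    est, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return est

# Two quadrotors observe the same ground target with slightly noisy bearings.
target = np.array([12.0, 7.0])
observers = [np.array([0.0, 0.0]), np.array([20.0, 0.0])]
bearings = [np.arctan2(*(target - p)[::-1]) + np.random.normal(0, 0.01)
            for p in observers]
print(np.round(triangulate(observers, bearings), 2))
```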
6.3.3. Cooperative Mission Planning and Decision Making
Very few contributions have dealt with multi-RUAS problems according to a deliberative paradigm, where the RUAS exhibit cooperation at the mission level, such as cooperative task assignment and mission planning. Cooperative task assignment or allocation is concerned with the selection of a conflict-free matching of tasks to vehicles. In other words, a higher-level component of a mission planner provides a list of tasks to the task assignment component, which decides which of the available vehicles should perform each task, based on information about the tasks and the capabilities of the vehicles. In Bethke et al. (2008), an algorithm for cooperative task assignment was proposed with a focus on the health management problem at the task level. The receding-horizon task assignment (RHTA) algorithm, developed at MIT, has been extended to include the fuel state in the vehicle model. The modified RHTA algorithm solves an optimization problem to select the optimal sequence of tasks for each UAS. A set of experiments was conducted to accomplish a multi-RUAS mission that consists of three RAVEN quadrotors searching for, detecting, estimating, and tracking an unknown number of ground vehicles. The mission was successfully accomplished by combining the modified RHTA algorithm with the cooperative vision-based perception algorithm described earlier (Bethke, 2007). The same group has developed other architectures and algorithms for decentralized cooperative task allocation, Figure 25. In Choi, Brunet, and How (2009), the consensus-based auction algorithm (CBAA) and consensus-based bundle algorithm (CBBA) are described. Both algorithms produce conflict-free task assignment solutions by matching a list of tasks to multiple vehicles to maximize some global reward. Simulation results were presented in Choi et al. (2009) and some experimental results were reported in How et al. (2009).
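To make the task-allocation problem concrete, the Python sketch below performs a greedy, centralized assignment that repeatedly gives the highest-scoring vehicle-task pair its task. It is a much-simplified stand-in for the RHTA and auction-style algorithms discussed above, and all positions, rewards, and the cost weighting are invented for illustration.

```python
import math

# Toy inputs: vehicle and task positions, plus a reward per task (all illustrative).
vehicles = {"uav1": (0.0, 0.0), "uav2": (10.0, 0.0), "uav3": (5.0, 8.0)}
tasks = {"t1": ((2.0, 1.0), 10.0), "t2": ((9.0, 2.0), 8.0),
         "t3": ((5.0, 9.0), 12.0), "t4": ((1.0, 7.0), 6.0)}

def score(vehicle_pos, task):
    """Net value of a task for a vehicle: reward minus a travel-cost penalty."""
    (tx, ty), reward = task
    return reward - 0.5 * math.dist(vehicle_pos, (tx, ty))

def greedy_assignment():
    """Repeatedly assign the best remaining (vehicle, task) pair until no tasks remain."""
    assignment = {v: [] for v in vehicles}
    positions = dict(vehicles)
    remaining = dict(tasks)
    while remaining:
        v, t = max(((v, t) for v in vehicles for t in remaining),
                   key=lambda vt: score(positions[vt[0]], remaining[vt[1]]))
        assignment[v].append(t)
        positions[v] = remaining[t][0]     # the vehicle continues from the task location
        del remaining[t]
    return assignment

print(greedy_assignment())
```

Unlike this centralized sketch, CBAA and CBBA reach a conflict-free assignment through local bidding and consensus among the vehicles, which is what makes them suitable for decentralized fleets.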
In many realistic missions, such as search and rescue, a certain degree of centralization and flexibility in cooperative mission planning is necessary for those in charge
Figure 25. Eight-vehicle cooperative flight test on MIT’s RAVEN (Bethke, Valenti, & How, 2008).
In many realistic missions, such as search and rescue, a certain degree of centralization and flexibility in cooperative mission planning is necessary for those in charge (human operators) to understand and eventually sign off on potential plans. In the framework of the COMETS project, decisional architectures and algorithms for the cooperation of heterogeneous UAS with different autonomy levels were proposed (Lacroix et al., 2007). The proposed architecture is based on a human-centered central decisional node (CDN) in the ground station and a distributed decisional node (DDN) onboard each vehicle. The DDN encompasses a multilevel executive (MLE) module that handles the coordination and task execution issues and a deliberative layer (DL) that deals with mission and task refinement and distributed task allocation. The MLE was tested in simulations as well as in actual flights for the fire detection and monitoring scenario, whereas the DL components have only been tested in simulations. A similar architecture was developed for the AWARE platform, which considers the self-deployment of the network (communication equipment and nodes) by means of multiple autonomous helicopters (Maza et al., 2010). Each vehicle contains an onboard deliberative layer (ODL) and a proprietary executive layer (EL). The former deals with high-level distributed decision making, whereas the latter is in charge of the execution of the tasks. The ODL interacts with its EL and with the ODLs of other UAS, as well as with the human–machine interface. This architecture has been implemented on three unmanned helicopters and used in a mission consisting of the deployment of a sensor node from one RUAS at a given location to repair the wireless sensor network connectivity, while the other RUAS supervised the operation with their onboard cameras and also monitored the area. More field experiments are presented in Maza, Caballero, Capitan, de Dios, and Ollero (2011) for the following missions: (1) multi-RUAS fireman tracking, (2) multi-RUAS sensor deployment, (3) fire confirmation and extinguishing using two helicopters, and (4) multi-RUAS surveillance using two helicopters.
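To make the separation of concerns concrete, the sketch below shows a minimal, hypothetical two-layer onboard node loosely inspired by the ODL/EL split described above: a deliberative layer that accepts or rejects proposed tasks and an executive layer that runs the accepted ones. The class names, the reward-threshold acceptance rule, and the task format are illustrative assumptions, not the AWARE implementation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    reward: float

class ExecutiveLayer:
    """Executes tasks handed down by the deliberative layer."""
    def execute(self, task: Task) -> None:
        print(f"[EL] executing task: {task.name}")

@dataclass
class OnboardDeliberativeLayer:
    """Very small stand-in for an onboard deliberative layer (ODL).

    It keeps a local plan, accepts a proposed task only if its reward exceeds
    a threshold, and dispatches accepted tasks to the executive layer.
    Negotiation with other ODLs and the human-machine interface is omitted.
    """
    executive: ExecutiveLayer
    plan: List[Task] = field(default_factory=list)

    def propose(self, task: Task, min_reward: float = 0.0) -> bool:
        accepted = task.reward > min_reward
        if accepted:
            self.plan.append(task)
        return accepted

    def run_plan(self) -> None:
        for task in self.plan:
            self.executive.execute(task)

# Example: one vehicle deliberates over two proposed tasks, then executes its plan.
odl = OnboardDeliberativeLayer(ExecutiveLayer())
odl.propose(Task("deploy sensor node", reward=5.0))
odl.propose(Task("loiter", reward=-1.0))   # rejected by the acceptance rule
odl.run_plan()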
The UASTech group at Linköping University has investigated the problem of cooperative mission planning for UAS according to an AI approach. Different architectures and algorithms have been proposed, but to our knowledge none of them has been flight-tested with a cooperative fleet of RUAS. In a recent work, Kvarnstrom and Doherty (2010) proposed a new mission-planning algorithm for collaborative UAS based on combining ideas from forward-chaining planning with partial-order planning, leading to a new hybrid partial-order forward-chaining (POFC) framework that meets the requirements on centralization, abstraction, and distribution found in realistic emergency services settings. A prototype implementation of POFC is in the process of being integrated with the UASTech RUAS architecture.

7. DISCUSSION AND CONCLUSION
In the last 20 years, significant progress toward the development of autonomous RUAS has been made. This survey has reviewed the current state of the art in guidance, navigation, and control systems for RUAS and provided a detailed overview of published papers, with a particular focus on recent and practical GNC algorithms that have been implemented and flight-tested. The key conclusions of this paper are the following:

• Many papers have been published in the GNC areas, but very few of them have reported convincing experimental results.
• Flight control is a well-understood and well-developed area that can offer immediate solutions to RUAS technology needs.
• LIDAR-based perception and obstacle avoidance or path planning systems are less mature, but some of the developed systems can be deployed for some particular missions.
• Vision-based navigation, including state estimation and perception, is an active research topic, but the technologies developed so far are not reliable enough for real deployments.
• There is a gap between theoretical work and reported experimental results. Indeed, a significant amount of theoretical work has been done in the design of control, navigation, and guidance algorithms for RUAS. However, most of these systems have been validated in simulations, offline processing, or very basic flight tests without convincing results. We believe that significant engineering work is also needed to implement the developed systems properly and to evaluate their performance in extensive flight-test programs.
• It was difficult to compare the reviewed algorithms, especially in the navigation and guidance areas. More work needs to take place on the development of benchmarks and metrics to compare algorithms and to provide some design standards for navigation and guidance technologies.
• From this review, it was clear that most milestones in GNC for RUAS have been achieved using robotic helicopters. This is mainly because of their increased payload capabilities, which allow them to carry different reliable sensors and powerful embedded computers. However, research using other small rotorcraft such as quadrotors is growing fast and many interesting results have been published in recent years.

7.1. Control
Early research on RUAS focused on modeling their dynamics and designing control algorithms for automated flight. In recent years, there have been fewer published papers on the control of RUAS, except for a few papers on the improvement of existing controllers. In fact, many researchers have shifted from control-related research to the areas of vision-based navigation and guidance. Although significant work has been done in the design of advanced and nonlinear flight controllers, reported experimental
results did not show significant progress in flying capabilities when compared to standard linear controllers. Indeed, most research platforms and commercial RUAS still use autopilots based on standard linear controllers such as PID, LQR, and H∞. Successful implementations of these linear flight controllers and of some nonlinear controllers, such as the Georgia Tech adaptive flight controller, have contributed to making control technologies for RUAS mature enough to be deployed for real-world applications. We believe that there are three aspects of RUAS control that have not really been investigated and that are very important for real-world applications:

• Development of self-tunable and flexible controllers that can be integrated into different platforms in a short time. This also includes adaptation to platform changes such as payload and sensors.
• Design of robust controllers that guarantee good flight performance under windy and severe weather conditions.
• Design of reconfigurable control systems that have the ability to change or switch between different control strategies based on flight and mission conditions. This also includes compensation for most failures and mission and environment changes.
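For reference, the kind of standard cascaded linear autopilot referred to above can be sketched in a few lines of Python: an outer proportional-derivative position loop commands a desired attitude, which is tracked by an inner PID attitude loop. The gains, the single-axis simplification, and the saturation limits are arbitrary illustration values and are not taken from any of the surveyed controllers.

class PID:
    """Textbook PID with a simple anti-windup clamp on the integrator."""
    def __init__(self, kp, ki, kd, i_limit=1.0):
        self.kp, self.ki, self.kd, self.i_limit = kp, ki, kd, i_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral = max(-self.i_limit, min(self.i_limit, self.integral + error * dt))
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Cascaded (hierarchical) loops for one horizontal axis of a hovering rotorcraft:
# an outer position loop commands a desired pitch angle, an inner loop tracks it.
position_loop = PID(kp=0.4, ki=0.0, kd=0.3)     # position error (m) -> desired angle (rad)
attitude_loop = PID(kp=6.0, ki=0.5, kd=1.2)     # angle error (rad) -> actuator command

def control_step(position_error, pitch_angle, dt=0.01, max_tilt=0.35):
    desired_pitch = max(-max_tilt, min(max_tilt, position_loop.update(position_error, dt)))
    return attitude_loop.update(desired_pitch - pitch_angle, dt)

print(control_step(position_error=2.0, pitch_angle=0.05))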
7.2. Navigation
UAS navigation in obstacle-free environments using conventional avionics and sensors (GPS, IMU, and altimeter) is a solved problem. Most current research activities focus on non-GPS state estimation and perception using active LIDAR units or passive imaging systems. Although there have been some interesting results on autonomous navigation using LIDAR or vision, current navigation technologies have not yet reached the level of robustness and reliability required for real-world applications. The most successful implementations of LIDAR-based perception and obstacle avoidance are probably the ones by NASA (Goerzen & Whalley, 2011; Tsenkov et al., 2008), ARCAA/CSIRO (Merz & Kendoul, 2011), and CMU (Scherer et al., 2008) for outdoor navigation and MIT (Bachrach et al., 2009) for indoor navigation. 3D navigation in unknown, cluttered, and complex environments is still challenging and requires adequate 3D sensing technologies and efficient onboard algorithms for data processing. The technologies of compact 3D LIDAR and 3D time-of-flight cameras are progressing quickly and have been identified as potential candidates for active sensing and perception onboard RUAS.

Computer vision is the most relevant localization and perception technology for RUAS navigation. It has been used for motion estimation and localization, target/obstacle detection, and environment mapping. We found that research on vision-based navigation (state estimation and perception) for RUAS is still in its early stages, especially from implementation and flight-testing points of view. Indeed, many papers have been published on vision-based navigation, but the conducted experiments are limited to special and simple cases. Furthermore, most algorithms developed for obstacle detection and mapping have not been validated and demonstrated in closed-loop autonomous flights. There is thus a real need to develop embedded, real-time, and robust algorithms for vision-based localization and perception under real-world conditions. We also recommend more effort in rigorously implementing and extensively flight-testing the developed algorithms. We believe that bio-inspired visual navigation approaches are very promising, and we recommend further research in this direction. A new generation of bio-inspired vision sensors (such as artificial compound eyes and frameless event-based cameras) has started to emerge, providing a new paradigm and framework for developing very efficient and reliable vision algorithms.
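As one concrete example of what a bio-inspired visual navigation rule can look like, the sketch below implements insect-style optic-flow balancing: steer away from the side of the visual field that sees faster image motion and slow down as the total flow grows. The gains, flow inputs, and speed limits are assumptions made for the illustration, not a method from the surveyed papers.

def flow_balance_command(flow_left, flow_right, k_yaw=0.8, k_speed=0.5,
                         cruise_speed=2.0, min_speed=0.3):
    """Honeybee-style corridor-centering rule from lateral optic flow.

    flow_left / flow_right are average optic-flow magnitudes (rad/s) seen by
    the left and right halves of the field of view.  The vehicle steers away
    from the side with larger flow (the nearer obstacle) and reduces forward
    speed as the total flow increases, keeping image motion roughly constant.
    Sign convention assumed here: a positive steering command means turn right.
    """
    steer = k_yaw * (flow_left - flow_right)
    speed = max(min_speed, cruise_speed - k_speed * (flow_left + flow_right))
    return steer, speed

# Example: the left wall is closer, so its flow is higher and the command turns right.
print(flow_balance_command(flow_left=1.5, flow_right=0.6))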
7.3. Guidance
Despite active research on UAS guidance, most current unmanned rotorcraft still use rudimentary guidance systems that are limited to waypoint navigation. From the reviewed literature, we found that the majority of existing algorithms do not include much discussion of practical implementation, and very few experimental results have been reported. Often, only simulation results are provided, based on idealized vehicle and operational conditions. The difficulty of implementing and demonstrating effective guidance systems may be attributed to two main factors. First, a guidance system generally relies heavily on the perception system for obstacle detection and avoidance; because perception technologies are not yet mature, as discussed above, this limits the practical implementation and demonstration of guidance systems in realistic scenarios. Second, 3D path planning algorithms that are complete, optimal, and sound and that consider UAS dynamics are computationally expensive, and their implementation onboard RUAS is very challenging. Therefore, among the papers surveyed, most of the practical implementations of UAS guidance (Bachrach et al., 2009; Merz & Kendoul, 2011; Scherer et al., 2008; Tsenkov et al., 2008) have been of the hierarchical decoupled control type, or in some cases focus only on the local planning needed to avoid obstacles. From our review of the different papers, most work on UAS guidance has focused on path planning for simple goal-directed or goal-oriented tasks, without considering the different sources of uncertainty. There is therefore still much work to be done on developing and demonstrating efficient and robust path planning algorithms, mission planning and management systems, and higher-level guidance modules for a single RUAS or a team of multiple vehicles.
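To make the phrase "waypoint navigation" concrete, the following Python sketch shows the rudimentary guidance loop it usually denotes: command a velocity toward the active waypoint and switch to the next one inside an acceptance radius. The acceptance radius, commanded speed, and 2D simplification are assumptions for the example.

import math

def waypoint_guidance(position, waypoints, wp_index, accept_radius=2.0, speed=3.0):
    """Rudimentary 2D waypoint-following guidance.

    Returns a (vx, vy) velocity command toward the active waypoint and the
    (possibly advanced) waypoint index.  When the last waypoint is reached,
    the command is zero (hover).
    """
    if wp_index >= len(waypoints):
        return (0.0, 0.0), wp_index
    wx, wy = waypoints[wp_index]
    dx, dy = wx - position[0], wy - position[1]
    dist = math.hypot(dx, dy)
    if dist < accept_radius:        # waypoint reached: advance to the next one
        return waypoint_guidance(position, waypoints, wp_index + 1, accept_radius, speed)
    return (speed * dx / dist, speed * dy / dist), wp_index

# Example: vehicle at the origin with two waypoints ahead.
cmd, idx = waypoint_guidance((0.0, 0.0), [(10.0, 0.0), (10.0, 10.0)], 0)
print(cmd, idx)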
7.4. Operational and Certification Issues
In addition to technological factors such as GNC technologies, other important aspects of RUAS development and deployment are regulations and operational procedures. UAS operations and use have remained almost exclusively military because of the lack of a regulatory framework that allows UAS integration into the civilian national airspace system (NAS). Indeed, the development of such a framework is a prerequisite for greater UAS access to civil airspace and, subsequently, the continued growth of the UAS industry and research. Most academic research has focused on the embedded autonomous capabilities (GNC challenges) of UAS. Issues related to the operation and certification of UAS have been largely neglected by the research community. Although these factors have clear implications for future UAS research and deployment, they are beyond the scope of this paper. The main objective of this section is to encourage UAS developers, especially researchers from academia, to participate actively in the process of developing a regulatory framework and operational procedures. We also encourage them to be aware of the current operational restrictions and to make informed decisions on their research and development efforts so that their designs will be airworthy when the regulatory framework is in place. At ARCAA in Australia, we are closely collaborating with CASA (the Australian Civil Aviation Safety Authority) in developing rules, regulations, procedures, and standards for the integration of UAS in civil airspace (Clothier, Palmer, Walker, & Fulton, 2011). Currently, there are no specific standards and regulations for the type certification of civil UAS. In their absence, the risks to people and property on the ground are mitigated through substantial restrictions on where UAS operations can take place. However, national and international organizations and industry are making considerable efforts to produce a regulatory framework for the integration of UAS into the NAS. Dalamagkidis, Valavanis, and Piegl (2009) provided a thorough review of current manned aviation regulations and available UAS regulations worldwide; their work also presented UAS safety assessment and functional requirements, as well as recommendations on a UAS integration roadmap. Another recent paper, by Guglieri, Mariano, Quagliotti, and Scola (2011), outlined the international initiatives that deal with the development of the regulatory framework on airworthiness and certification of UAS.

Some interesting academic works have addressed the problem of UAS operations and regulations. The study by Pratt, Murphy, Stover, and Griffin (2009) is a good example of addressing issues related to RUAS operations. The aim of the research was to identify concept of operations (CONOPS) and autonomy needs for RUAS operating in cluttered urban environments. Results from a small teleoperated helicopter performing a structural inspection task following Hurricane Katrina in 2005 provided a basis for extracting recommendations and key findings for a structural-inspection CONOPS (crewing organization and flight operations protocol), as well as needed research directions for developing autonomous capabilities for inspection-like missions.

The development of autonomy technologies for RUAS has followed a bottom-up approach, reflected in the hierarchical structure of most GNC systems for UAS. This is because recent advances in UAS autonomy have been largely driven by practitioners in systems control and robotics rather than in computer science and artificial intelligence. This mode of incremental technology development has resulted in a gradual increase in UAS autonomy and in the complexity of missions that can be executed. Indeed, most research activities in the last decade have focused on flight control, i.e., addressing issues related to autonomy level 1. However, in recent years, there has been more research on higher autonomy levels, incorporating new navigation and guidance capabilities in the UAS. For example, much work and many results have been published recently on non-GPS navigation (autonomy level 2) and obstacle detection and path planning (autonomy level 4). There is also some work on fault detection (level 3), cooperative navigation and path planning (autonomy level 5), and mission planning (autonomy level 6), and very little work on cooperative mission planning (autonomy level 7). In our review, we have not found any work that has addressed the autonomy problems related to level 8 and above.
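Purely as a compact restatement of the level-by-level picture given in this section (using the autonomy levels defined earlier in the paper), the coverage can be written down as a small lookup table; the wording of each entry paraphrases the text above rather than the framework definitions.

# Research coverage of GNC autonomy levels as summarized in this section.
RESEARCH_COVERAGE_BY_LEVEL = {
    1: ("flight control", "most research activity in the last decade"),
    2: ("non-GPS navigation", "much recent work and many results"),
    3: ("fault detection", "some work"),
    4: ("obstacle detection and path planning", "much recent work and many results"),
    5: ("cooperative navigation and path planning", "some work"),
    6: ("mission planning", "some work"),
    7: ("cooperative mission planning", "very little work"),
    8: ("level 8 and above", "no work found in this review"),
}

for level, (capability, coverage) in sorted(RESEARCH_COVERAGE_BY_LEVEL.items()):
    print(f"Level {level}: {capability} -- {coverage}")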
REFERENCES
Abbeel, P., Coates, A., & Ng, A. Y. (2010). Autonomous helicopter aerobatics through apprenticeship learning. International Journal of Robotics Research, 29(13), 1608–1639.
Achtelik, M., Bachrach, A., He, R., Prentice, S., & Roy, N. (2009, April). Stereo vision and laser odometry for autonomous helicopters in GPS-denied indoor environments. In Proceedings of SPIE Unmanned Systems Technology XI, Orlando, FL, USA (vol. 7332-733219, pp. 1–10).
Adams, D., Criss, T., & Shankar, U. (2008, March). Passive optical terrain relative navigation using APLNav. In Proceedings of the IEEE Aerospace Conference, Big Sky, MT (pp. 1–9).
Adolf, F., & Andert, F. (2010). Onboard mission management for a VTOL UAV using sequence and supervisory control. In V. Kordic (Ed.), Cutting edge robotics, Croatia: InTech (pp. 301–316).
Adolf, F., & Thielecke, F. (2007, May). A sequence control system for onboard mission management of an unmanned helicopter. In Proceedings of the AIAA Infotech@Aerospace Conference and Exhibit, Rohnert Park, CA (pp. 1–12).
Ahrens, S., Levine, D., Andrews, G., & How, J. P. (2009, May). Vision-based guidance and control of a hovering vehicle in unknown, GPS-denied environments. In Proceedings of the IEEE International Conference on Robotics and Automation, Kobe, Japan (pp. 2643–2648).
Altug, E., Ostrowski, J. P., & Taylor, C. J. (2005). Control of a Beyeler, A., Zufferey, J.-C., & Floreano, D. (2009b). Vision-based
quadrotor helicopter using dual camera visual feedback. control of near-obstacle flight. Autonomous Robots, 27,
International Journal of Robotics Research, 24(5), 329–341. 201–219.
Amidi, O., Kanade, T., & Fujita, K. (1999). A visual odome- Bhandari, S., Colgren, R., Lederbogen, P., & Kowalchuk, S.
ter for autonomous helicopter flight. Robotics and Au- (2005, August). Six-DoF dynamic modeling and flight test-
tonomous Systems, 28(2–3), 185–193. ing of a UAV helicopter. In Proceedings of the AIAA Mod-
Andert, F., & Adolf, F. (2009). Online world modeling and eling and Simulation Technologies Conference and Ex-
path planning for an unmanned helicopter. Autonomous hibit, AIAA 2005-6422, San Francisco, CA (pp. 1–17).
Robots, 27, 147–164. Bhatia, A., Kavraki, L. E., & Vardi, M. Y. (2010, May). Sampling-
Andert, F., Adolf, F., Goormann, L., & Dittrich, J. (2011, May). based motion planning with temporal goals. In Proceed-
Mapping and path planning in complex environments: ings of the IEEE International Conference on Robotics and
An obstacle avoidance approach for an unmanned he- Automation, Anchorage, AK (pp. 2689–2696).
licopter. In Proceedings of the IEEE International Con- Bisgaard, M., Cour-Harbo, A., & Bendtsen, J. D. (2010). Adap-
ference on Robotics and Automation, Shanghai, China tive control system for autonomous helicopters slung
(pp. 745–750). load operations. Control Engineering Practice, 18, 800–
Andert, F., Adolf, F.-M., Goormann, L., & Dittrich, J. (2010). Au- 811.
tonomous vision-based helicopter flights through obstacle Bosse, M., Karl, W., Castanon, D., & Debitetto, P. (1997, Novem-
gates. Journal of Intelligent and Robotic Systems, 57(1–4), ber). A vision augmented navigation system. In Proceed-
259–280. ings of the IEEE Conference on Intelligent Transportation
Artieda, J., Sebastian, J. M., Campoy, P., Correa, J. F., Mon- System, Boston, MA (pp. 1028–1033.)
dragon, I. F., Martinez, C., & Olivares, M. (2009). Visual Bouabdallah, S., & Siegwart, R. (2005, April). Backstepping
3-D SLAM from UAVs. Journal of Intelligent and Robotic and sliding-mode techniques applied to an indoor micro
Systems, 55(4–5), 299–321. quadrotor. In Proceedings of the IEEE International Con-
Bachrach, A., He, R., & Roy, N. (2009). Autonomous flight in ference on Robotics and Automation, Barcelona, Spain
unknown indoor environments. International Journal of (pp. 2259–2264).
Micro Air Vehicles, 1(4), 217–228. Bouabdallah, S., & Siegwart, R. (2007). Design and control of
Barber, D. B., Griffiths, S. R., McLain, T. W., & Beard, R. W. a miniature quadrotor. In K. Valavanis (Ed.), Advances
(2007). Autonomous landing of miniature aerial vehicles. in unmanned aerial vehicles, The Netherlands: Springer
Journal of Aerospace Computing, Information, and Com- (pp. 171–210).
munication, 4(5), 770–784. Bristeau, P., Dorveaux, E., Vissiere, D., & Petit, N. (2010). Hard-
Barron, A., & Srinivasan, M. (2006). Visual regulation of ground ware and software architecture for state estimation on
speed and headwind compensation in freely flying honey an experimental low-cost small-scaled helicopter. Control
bees (Apis Mellifera L.). Journal of Experimental Biology, Engineering Practice, 18, 733–746.
209, 978–984. Bryson, M., Reid, A., Ramos, F., & Sukkarieh, S. (2010). Air-
Bergerman, M., Amidi, O., Miller, J., Vallidis, N., & Dudek, T. borne vision-based mapping and classification of large
(2007, November). Cascaded position and heading control farmland environments. Journal of Field Robotics, 27(5),
of a robotic helicopter. In Proceedings of the IEEE/RSJ In- 632–655.
ternational Conference on Intelligent Robots and Systems, Bryson, M., & Sukkarieh, S. (2009). Architectures for coop-
San Diego, CA (pp. 135–140). erative airborne simultaneous localisation and mapping.
Bernard, M., & Kon, K. (2009, May). Generic slung load trans- Journal of Intelligent and Robotic Systems, 55(4–5), 267–
portation system using small size helicopters. In Proceed- 297.
ings of the IEEE International Conference on Robotics and Buskey, G., Roberts, J., Corke, P., & Wyeth, G. (2003, Decem-
Automation, Kobe, Japan (pp. 3258–3264). ber). Helicopter automation using a low-cost sensing sys-
Bethke, B. (2007). Persistent vision-based search and track us- tem. In Proceedings of the Australasian Conference on
ing multiple uavs. Master’s thesis, Massachusetts Institute Robotics and Automation, Brisbane, Australia.
of Technology. Buskey, G., Wyeth, G., & Roberts, J. (2001, May). Autonomous
Bethke, B., Valenti, M., & How, J. (2007, January–February). helicopter hover using an artificial neural network. In Pro-
Cooperative vision based estimation and tracking using ceedings of the IEEE Conference on Robotics and Au-
multiple UAVs. In Proceedings of the 7th International tomation, Seoul, South Korea (pp. 1635–1640).
Conference on Cooperative Control and Optimization, Byrne, J., Cosgrove, M., & Mehra, R. (2006, May). Stereo based
Gainesville, FL (vol. 369, pp. 179–189). obstacle detection for an unmanned air vehicle. In Pro-
Bethke, B., Valenti, M., & How, J. (2008). UAV task assignment. ceedings of the IEEE International Conference on Robotics
IEEE Robotics and Automation Magazine, 15(1), 39–44. and Automation (ICRA), Orlando, FL (pp. 2830–2835).
Beyeler, A., Zufferey, J.-C., & Floreano, D. (2009a, September). Caballero, F., Merino, L., Ferruz, J., & Ollero, A. (2005, April). A
OptiPilot: Control of take-off and landing using optic flow. visual odometer without 3D reconstruction for aerial ve-
In Proceedings of the European Micro Air Vehicle Conference and Competition 2009 (EMAV 2009), Delft, Netherlands. hicles. Applications to building inspection. In Proceedings
of the International Conference on Robotics and Automa- tion. EURASIP Journal on Advances in Signal Processing,
tion, Barcelona, Spain (pp. 4684–4689). 2009(10), 1–18.
Caballero, F., Merino, L., Ferruz, J., & Ollero, A. (2009). Vision- Courbon, J., Mezouar, Y., Guenard, N., & Martinet, P. (2010).
based odometry and SLAM for medium and high altitude Vision-based navigation of unmanned aerial vehicles.
flying UAVs. Journal of Intelligent and Robotic Systems, Control Engineering Practice, 18, 789–799.
54(1–3), 137–161. Culligan, K., Valenti, M., Kuwata, Y., & How, J. P. (2007,
Carr, J., & Sobek, J. L. (1980, July–August). Digital scene match- July). Three-dimensional flight experiments using on-line
ing area correlator (DSMAC). In Proceedings of the So- mixed-integer linear programming trajectory optimiza-
ciety of Photo-Optical Instrumentation Engineers, Image tion. In Proceedings ofhe IEEE American Control Confer-
Processing for Missile Guidance, San Diego, CA (vol. 238, tion. In Proceedings of the IEEE American Control Confer-
pp. 36–41). Dalamagkidis, K., Valavanis, K. P., & Piegl, L. A. (2009). On
Castelfranchi, C., & Falcone, R. (2003). Agent autonomy. In integrating unmanned aircraft systems into the national
From automaticity to autonomy: The frontier of artificial airspace system: Issues, challenges, operational restric-
agents (pp. 103–136). Boston, MA: Kluwer Academic Pub- tions, certification, and recommendations, Intelligent Sys-
lishers. tems, Control and Automation: Science and Engineering
Castillo, P., Dzul, A., & Lozano, R. (2004). Real-time stabiliza- (vol. 36). New York: Springer-Verlag.
tion and tracking of a four rotor mini-rotorcraft. IEEE Davis, J. D., & Chakravorty, S. (2007). Motion planning under
Transactions on Control Systems Technology, 12(4), 510– uncertainty: Application to an unmanned helicopter. Jour-
516. nal of Guidance, Control, and Dynamics, 30(5), 1268–1276.
Cesetti, A., Frontoni, E., Mancini, A., Zingaretti, P., & Longhi, Dickerson, L. (2007). UAV on the rise. In Aviation Week &
S. (2010). A vision-based guidance system for UAV navi- Space Technology, Aerospace Source Book 2007 (vol. 166,
gation and safe landing using natural landmarks. Journal no. 3). New York: McGraw Hill.
of Intelligent and Robotic Systems, 57, 233–257. Dierks, T., & Jagannathan, S. (2010). Output feedback control
Chahl, J., Srinivasan, M., & Zhang, S. (2004). Landing strategies of a quadrotor UAV using neural networks. IEEE Transac-
in honeybees and applications to uninhabited airborne ve- tions on Neural Networks, 21(1), 50–66.
hicles. International Journal of Robotics Research, 23(2), Doherty, P., Kvarnstrom, J., & Heintz, F. (2009). A tem-
101–110. poral logic-based planning and execution monitoring
Chao, H., Cao, Y., & Chen, Y. (2010). Autopilots for small Dickerson, L. (2007). UAV on the rise. In Aviation Week &
unmanned aerial vehicles: A survey. International Journal Agents and Multi-Agent Systems, 19(3), 332–377.
of Control, Automation, and Systems, 8(1), 36–44. Endsley, M. R. (1999). Situation awareness in aviation systems.
Choi, H.-L., Brunet, L., & How, J. P. (2009). Consensus-based In D.J. Garland, J.A. Wise & V.D. Hopkin (Eds.), Hand-
decentralized auctions for robust task allocation. IEEE book of aviation human factors (pp. 257–276). Mahwah,
Transactions on Robotics, 25(4), 912–926. NJ: Lawrence Erlbaum Associates.
Christophersen, H., Pickell, W., Neidhoefer, J., Koller, A., Kan- Fabiani, P., Fuertes, V., Piquereau, A., Mampey, R., & Teichteil-
nan, S., & Johnson, E. (2006). A compact guidance, navi- Knigsbuch, F. (2007). Autonomous flight and navigation
gation, and control system for unmanned aerial vehicles. of VTOL UAVs: From autonomy demonstrations to out-
AIAA Journal of Aerospace Computing, Information, and of-sight flights. Aerospace Science and Technology, 11,
Communication, 3, 187–213. 183–193.
Clothier, R., Palmer, J., Walker, R., & Fulton, N. (2011). Defini- Farnebäck, G., & Nordberg, K. (2002, March). Motion detection
tion of an airworthiness certification framework for civil in the WITAS project. In Proceedings of the Symposium
unmanned aircraft systems. Safety Science, 49(6), 871– on Image Analysis, Lund, Sweden (pp. 99–102).
885. Frazzoli, E., Dahleh, M., & Feron, E. (2002). Real-time motion
Clough, B. T. (2002, August). Metrics, schmetrics! How the planning for agile autonomous vehicles. Journal of Guid-
heck do you determine a UAV’s autonomy anyway? In ance, Control and Dynamics, 25(1), 116–129.
Proceedings of the Performance Metrics for Intelligent Frazzoli, E., Dahleh, M. A., & Feron, E. (2000, June). Trajectory
Systems (PerMIS) Conference, Gaithersburg, MD. tracking control design for autonomous helicopters us-
CMU and Piasecki [Carnegie Mellon University and Pi- ing a backstepping algorithm. In Proceedings of the IEEE
asecki Aircraft Corp.] (2010). Autonomous helicopter American Control Conference, Chicago, IL (pp. 4102–
collision avoidance & landing zone selection system for 4108).
casualty evacuation and transport. Retrieved July, 2011, Freed, M., Fitzgerald, W., & Harris, R. (2005, April). Intelligent
from http://www.frc.ri.cmu.edu/ssingh/CMU UAV/ autonomous surveillance of any targets with few UAVs. In
ULB/CombatMedicUAVDemo/Home.html. Proceedings of the Research and Development Partnering
Conroy, J., Gremillion, G., Ranganathan, B., & Humbert, J. S. Conference, Department of Homeland Security, Boston,
(2009). Implementation of wide-field integration of optic MA.
flow for autonomous quadrotor navigation. Autonomous Gadewadikar, J., Lewis, F. L., Subbarao, K., & Chen, B. M.
Robots, 27, 189–198. (2008). Structured H command and control-loop design
Conte, G., & Doherty, P. (2009). Vision-based unmanned for unmanned helicopters. AIAA Journal of Guidance,
aerial vehicle navigation using geo-referenced informa- Control, and Dynamics, 31(4), 1093–1102.
Gadewadikar, J., Lewis, F. L., Subbarao, K., Peng, K., & Chen, ings of the IEEE International Conference on Robotics and
B. M. (2009). H-infinity static output-feedback control for Automation, Kobe, Japan (pp. 2878–2883).
rotorcraft. Journal of Robotics and Intelligent Systems, 54, Guenard, N., Hamel, T., & Mahony, R. (2008). A practical vi-
629–646. sual servo control for an unmanned aerial vehicle. IEEE
Garcia, R., & Valavanis, K. (2009). The implementation of an Transactions on Robotics, 24(2), 331–340.
autonomous helicopter testbed. Journal of Intelligent and Guglieri, G., Mariano, V., Quagliotti, F., & Scola, A. (2011). A
Robotics Systems, 54, 423–454. survey of airworthiness and certification for UAS. Journal
Garcia-Pardo, P. J., Sukhatme, G. S., & Montgomery, J. F. (2002). of Intelligent and Robotic Systems, 61, 399–421.
Towards vision-based safe landing for an autonomous he- Ha, J., Johnson, E. N., & Tannenbaum, A. (2008). Real-time vi-
licopter. Robotics and Autonomous Systems, 38, 19–29. sual tracking using geometric active contours and particle
Garratt, M., Pota, H., Lambert, A., Eckersley-Maslin, S., & filters. AIAA Journal of Aerospace Computing, Informa-
Farabet, C. (2009). Visual tracking and lidar relative tion, and Communication, 5(10), 361–379.
positioning for automated launch and recovery of an un- He, R., Bachrach, A., Achtelik, M., Geramifard, A., Gurdan, D.,
manned rotorcraft from ships at sea. Naval Engineering Prentice, S., Stumpf, J., & Roy, N. (2010). On the design and
Journal, 121(2), 99–110. use of a micro air vehicle to track and avoid adversaries.
Garratt, M. A., & Chahl, J. S. (2008). Vision-based terrain International Journal of Robotics Research, 29(5), 529–546.
following for an unmanned rotorcraft. Journal of Field He, R., Bachrach, A., & Roy, N. (2010, May). Efficient planning
Robotics, 25(4–5), 284–301. under uncertainty for a target-tracking micro aerial vehi-
Gavrilets, V., Frazzoli, E., Mettler, B., Piedmonte, M., & Feron, cle. In Proceedings of the IEEE International Conference
E. (2001a). Aggressive maneuvering of small autonomous on Robotics and Automation, Anchorage, AK.
helicopters: A human centered approach. International He, R., Prentice, S., & Roy, N. (2008, May). Planning in infor-
Journal of Robotics Research, 20(10), 795–807. mation space for a quadrotor helicopter in a GPS-denied
Gavrilets, V., Mettler, B., & Feron, E. (2001, August). Nonlinear environment. In Proceedings of the IEEE International
model for a small-size acrobatic helicopter. In Proceedings Conference on Robotics and Automation, Pasadena, CA
of the AIAA Guidance, Navigation and Control Confer- (pp. 1814–1820).
ence, Montreal, Canada. He, Y. Q., & Han, J. D. (2010). Acceleration-feedback-enhanced
Gavrilets, V., Mettler, B., & Feron, E. (2004). Human-inspired robust control of an unmanned helicopter. AIAA Journal
control logic for automated maneuvering of miniature he- of Guidance, Control, and Dynamics, 33(4), 1236–1250.
licopter. AIAA Journal of Guidance, Control, and Dynam- Hengstenberg, R. (1993). Multisensory control in insect ocu-
ics, 27(5), 752–759. lomotor systems, in visual motion and its role in the
Gillula, J. H., Hoffmann, G. M., Huang, H., Vitus, M. P., & stabilization of gaze. Reviews of Oculomotor Research,
Tomlin, C. J. (2011). Applications of hybrid reachability 5(2), 285–298.
analysis to robotic aerial vehicles. International Journal of Herisse, B., Hamel, T., Mahony, R. E., & Russotto, F. X. (2010). A
Robotics Research, 30(3), 335–354. terrain-following control approach for a VTOL unmanned
Goerzen, C., Kong, Z., & Mettler, B. (2010). A survey of motion aerial vehicle using average optical flow. Autonomous
planning algorithms from the perspective of autonomous Robots, 29(3–4), 381–399.
UAV guidance. Journal of Intelligent and Robotic Sys- Herisse, B., Russotto, F., Hamel, T., & Mahony, R. (2008,
tems, 57, 65–100. September). Hovering flight and vertical landing control
Goerzen, C., & Whalley, M. (2011). Minimal risk motion plan- of a VTOL unmanned aerial vehicle using optical flow. In
ning: A new planner for autonomous UAVs in uncertain IEEE/RSJ International Conference on Intelligent Robots
environments. In Proceedings of the AHS International and Systems, Nice, France (pp. 801–896).
Specialists Meeting on Unmanned Rotorcraft, Tempe, AZ Hermansson, J., Gising, A., Skoglund, M., & Schon, T. B. (2010,
(pp. 1–21). June). Autonomous landing of an unmanned aerial ve-
Goktogan, A. H., Sukkarieh, S., Bryson, M., Randle, J., Lup- hicle. In Reglermte (Swedish Control Conference), Lund,
ton, T., & Hung, C. (2010). A rotary-wing unmanned Sweden (pp. 1–5).
air vehicle for aquatic weed surveillance and manage- Hoffmann, G. M., Waslander, S., & Tomlin, C. J. (2008, August).
ment. Journal of Intelligent and Robotic Systems, 57, 467– Quadrotor helicopter trajectory tracking control. In Pro-
484. ceedings of the AIAA Navigation, Guidance and Control
Golden, J. P. (1980). Terrain contour matching (TERCOM): A Conference, Honolulu, HI.
cruise missile guidance aid. In Proceedings of the Society Hoffmann, G. M., Waslander, S. L., Vitus, M. P., Gillula, H.
of Photo-Optical Instrumentation Engineers, Image Pro- H. J., Pradeep, V., & Tomlin, C. J. (2009, October). Stanford
cessing for Missile Guidance (vol. 238, pp. 10–18). testbed of autonomous rotorcraft for multi-agent control.
Griffiths, S., Saunders, J., Curtis, A., Barber, B., Mclain, T., In Proceedings of the IEEE/RSJ International Conference
& Beard, R. (2006). Maximizing miniature aerial vehi- on Intelligent Robots and Systems, St. Louis, MO (pp. 404–
cles. IEEE Robotics and Automation Magazine, 13(3), 405).
34–43. How, J., Bethke, B., Frank, A., Dale, D., & Vian, J. (2008). Real-
Grzonka, S., Grisetti, G., & Burgard, W. (2009). Towards a nav- time indoor autonomous vehicle test environment. IEEE
igation system for autonomous indoor flying. In Proceed- Control Systems Magazine, 28(2), 51–64.
How, J., Fraser, C., Kulling, K., Bertuccelli, L., Toupet, O., AIAA Journal of Aerospace Computing, Information, and
Brunet, L., Bachrach, A., & Roy, N. (2009). Increasing au- Communication, 4(4), 707–738.
tonomy of UAVs. IEEE Robotics and Automation Maga- Johnson, E., & Kannan, S. (2005). Adaptive trajectory control
zine, 16(2), 43–51. for autonomous helicopters. AIAA Journal of Guidance,
Howlett, J., Schulein, G., & Mansour, M. (2004, June). A prac- Control, and Dynamics, 28(3), 524–538.
tical approach to obstacle field route planning for un- Kanade, T., Amidi, O., & Ke, Q. (2004, December). Real-time
manned rotorcraft. In Proceedings of the 60th Annual Fo- and 3d vision for autonomous small and micro air vehi-
rum of the American Helicopter Society, Baltimore, MD. cles. In Proceedings of the 43rd IEEE Conference on Deci-
Howlett, J., Tsenkov, P., Whalley, M., Schulein, G., & Taka- sion and Control, Atlantis, Bahamas (pp. 1655–1662).
hashi, M. (2007, May). Flight evaluation of a system for Kannan, S. K. (2005). Adaptive control of systems in cas-
unmanned rotorcraft reactive navigation in uncertain ur- cade with saturation. Ph.D. thesis, Georgia Institute of
ban environments. In Proceedings of the 63rd Annual Fo- Technology.
rum of the American Helicopter Society, Virginia Beach. Kelly, J., Saripalli, S., & Sukhatme, G. S. (2007, July). Combined
Hrabar, S. (2008, September). 3d path planning and stereo- visual and inertial navigation for an unmanned aerial ve-
based obstacle avoidance for rotorcraft UAVs. In Proceed- hicle. In Proceedings of the 6th International Conference
ings of the IEEE International Conference on Intelligent on Field and Service Robotics, Chamonix, France.
Robots and Systems, Nice, France. Kendoul, F., Fantoni, I., & Nonami, K. (2009). Optic flow-based
Hrabar, S. (2011, September). Reactive obstacle avoidance for vision system for autonomous 3D localization and control
rotorcraft UAVs. In Proceedings of the IEEE/RSJ Interna- of small aerial vehicles. Robotics and Autonomous Sys-
tional Conference on Intelligent Robots and Systems, San tems, 57, 591–602.
Francisco, CA. Kendoul, F., Lara, D., Fantoni, I., & Lozano, R. (2007). Real-time
Hrabar, S., & Sukhatme, G. (2003, September). Omnidirectional nonlinear embedded control for an autonomous quad-
vision for an autonomous helicopter. In Proceedings of the rotor helicopter. AIAA Journal of Guidance, Control, and
IEEE International Conference on Robotics and Automa- Dynamics, 30(4), 1049–1061.
tion, Taipei, Taiwan (pp. 3602–3609). Kendoul, F., Nonami, K., Fantoni, I., & Lozano, R. (2009). An
Hrabar, S., & Sukhatme, G. (2004, September–October). A com- adaptive vision-based autopilot for mini flying machines
parison of two camera configurations for optic-flow based guidance, navigation and control. Autonomous Robots,
navigation of a UAV through urban canyons. In Proceed- 27(3), 165–188.
ings of the IEEE International Conference on Intelligent Kendoul, F., Yu, Z., & Nonami, K. (2010). Guidance and nonlin-
Robots and Systems, Sendai, Japan (pp. 2673–2680). ear control system for autonomous flight of mini rotorcraft
Hrabar, S., & Sukhatme, G. (2009). Vision-based navigation unmanned aerial vehicles. Journal of Field Robotics, 27(3),
through urban canyons. Journal of Field Robotics, 26(5), 311–334.
431–452. Khatib, O., & Mampey, L. (1978). Fonction decision-commande
Huang, H.-M. (2008). Autonomy levels for unmanned systems d’un robot manipulateur (Rep. 2/7156). Toulouse, France:
(ALFUS) framework. Vol. i: Terminology, version 2.0. Con- DERA/CERT.
tributed by the Ad Hoc ALFUS Working Group Partici- Kim, H., Shim, D., & Sastry, S. (2002, May). Nonlinear model
pants, NIST Special Publication 1011-I-2.0. predictive tracking control for rotorcraft-based unmanned
Huang, H.-M., Messina, E., & Albus, J. (2007). Autonomy lev- aerial vehicles. In Proceedings of the American Control
els for unmanned systems (ALFUS) framework. Vol. ii: Conference, Anchorage, AK (vol. 5, pp. 3576–3581).
Framework models, version 1.0. Contributed by the Ad Kim, J., Lyou, J., & Kwak, H. (2010). Vision coupled GPS/INS
Hoc ALFUS Working Group Participants, NIST Special scheme for helicopter navigation. Journal of Mechanical
Publication 1011-II-1.0. Science and Technology, 24(2), 489–496.
Johnson, A., Montgomery, J., & Matthies, L. (2005, April). Vi- Kima, J., & Sukkarieh, S. (2007). Real-time implementation of
sion guided landing of an autonomous helicopter in haz- airborne inertial-SLAM. Robotics and Autonomous Sys-
ardous terrain. In Proceedings of the IEEE International tems, 55, 62–71.
Conference on Robotics and Automation, Barcelona, Kong, Z., & Mettler, B. (2011). Evaluation of guidance perfor-
Spain (pp. 4470–4475). mance in urban terrains for different UAV types and per-
Johnson, A., Willson, R., Cheng, Y., Goguen, J., Leger, C., San- formance criteria using spatial CTG maps. Journal of In-
martin, M., & Matthies, L. (2007). Design through oper- telligent and Robotics Systems, 61, 135–156.
ation of an image-based velocity estimation system for Koo, T., & Sastry, S. (1998, December). Output tracking con-
Mars landing. International Journal of Computer Vision, trol design of a helicopter model based on approxi-
74(3), 319–341. mate linearization. In Proceedings of the IEEE Confer-
Johnson, A. E., & Montgomery, J. F. (2008, March). Overview ence on Decision and Control, Tampa, FL (pp. 3635–
of terrain relative navigation approaches for precise lu- 3640).
nar landing. In Proceedings of the IEEE Aerospace Con- Kvarnstrom, J., & Doherty, P. (2010, December). Automated
ference, Big Sky, MT (pp. 1–10). planning for collaborative UAV systems. In Proceedings
Johnson, E., Calise, A., Watanabe, Y., Ha, J., & Neidhoefer, J. of the 11th International Conference on Control, Automa-
(2007). Real-time vision-based relative aircraft navigation. tion, Robotics and Vision, Singapore (pp. 1078–1075).
LaCivita, M., Messner, W., & Kanade, T. (2002, June). Modeling estimation techniques for UAV control. In Proceedings of
of small-scale helicopters with integrated first-principles the 3rd International Symposium on Unmanned Aerial
and system-identification techniques. In Proceedings of Vehicles, Dubai, UAE (pp. 1–19).
the 58th Forum of the American Helicopter Society, Mon- Maza, I., Caballero, F., Capitan, J., de Dios, J. M., & Ollero,
treal, Quebec, Canada. A. (2011). Experimental results in multi-UAV coordina-
La Civita, M., Papageorgiou, G., Messner, W. C., & Kanade, T. tion for disaster management and civil security applica-
(2006). Design and flight testing of an H∞ controller for tions. Journal of Intelligent and Robotic Systems, 61, 563–
a robotic helicopter. AIAA Journal of Guidance, Control, 585.
and Dynamics, 29(2), 485–494. Maza, I., Kondak, K., Bernard, M., & Ollero, A. (2010). Multi-
Lacroix, S., Alami, R., Lemaire, T., Hattenberger, G., & Gancet, UAV cooperation and control for load transportation and
J. (2007). Decision making in multi-UAV system: Architec- deployment. Journal of Intelligent and Robotic Systems,
ture and algorithms. In A. Ollero and M. Ivan (Ed.), Mul- 57, 417–449.
tiple heterogeneous unmanned aerial vehicles (pp. 15–48). Meingast, M., Geyer, C., & Sastry, S. (2004, December). Vi-
Berlin: Springer-Verlag. sion based terrain recovery for landing unmanned aerial
Latombe, J. (1991). Robot motion planning. Boston: Kluwer vehicles. In Proceedings of the IEEE Conference on De-
Academic. cision and Control, Atlantis, Paradise Island, Bahamas
LaValle, S. (2006). Planning algorithms. Cambridge, UK: Cam- (pp. 1670–1675).
bridge University Press. Mejias, L., Campoy, P., Mondragon, I., & Doherty, P. (2007,
Lemaire, T., Berger, C., Jung, I.-K., & Lacroix, S. (2007). Vision- September). Stereo visual system for autonomous air ve-
based SLAM: Stereo and monocular approaches. Interna- hicle navigation. In Proceedings of the IFAC Symposium
tional Journal of Computer Vision, 74(3), 343–364. on Intelligent Autonomous Vehicles, Toulouse, France.
Lindsten, F., Callmer, J., Ohlsson, H., Tornqvist, D., Schon, T. B., Mejias, L., Saripalli, S., Campoy, P., & Sukhatme, G. (2006). Vi-
& Gustafsson, F. (2010, May). Geo-referencing for UAV sual servoing of an autonomous helicopter in urban areas
navigation using environmental classification. In Proceed- using feature tracking. Journal of Field Robotics, 23(3/4),
ings of the International Conference on Robotics and Au- 185–199.
tomation, Anchorage, AK. Mellinger, D., & Kumar, V. (2011, May). Minimum snap tra-
Linköping (2003). Vehicle tracking and surveillance. Retrieved jectory generation and control for quadrotors. In Proceed-
August, 2010, from http://www.ida.liu.se/patdo/ ings of the IEEE International Conference on Robotics and
auttek/missions/mission2.html. Automation, Shanghai, China.
Lozano, R. (Ed.). (2010). Unmanned aerial vehicles: Embedded Merino, L., Caballero, F., Martinez-de Dios, J. R., Ferruz, J.,
control. London: Wiley. & Ollero, A. (2006). A cooperative perception system
Ludington, B., Johnson, E., & Vachtsevanos, G. (2006). Aug- for multiple UAVs: Application to automatic detection
menting UAV autonomy: Vision-based navigation and of forest fires. Journal of Field Robotics, 23(3/4), 165–
target tracking for unmanned aerial vehicles. IEEE 184.
Robotics and Automation Magazine, 13(3), 63–71. Merz, T. (2004, July). Building a system for autonomous aerial
Madison, R., Andrews, G., DeBitettoand, P., Rasmussen, S., & robotics research. In 5th IFAC/EURON Symposium on In-
Bottkol, M. S. (2007, May). Vision-aided navigation for telligent Autonomous Vehicles, Lisbon, Portugal.
small UAVs in GPS-challenged environments. In Proceed- Merz, T., Duranti, S., & Conte, G. (2004, June). Autonomous
ings of the AIAA Infotech@Aerospace Conference and Ex- landing of an unmanned helicopter based on vision and
hibit, Rohnert Park, CA. inertial sensing. In Proceedings of the 9th International
Mahony, R., Hamel, T., & Pflimlin, J.-M. (2008). Nonlin- Symposium on Experimental Robotics, Singapore.
ear complementary filters on the special orthogonal Merz, T., & Kendoul, F. (2011, September). Beyond visual range
group. IEEE Transactions on Automatic Control, 53(5), obstacle avoidance and infrastructure inspection by an au-
1203–1218. tonomous helicopter. In Proceedings of the IEEE/RSJ In-
Mahony, R., Hamel, T., Dzul, A. (1999, December). Hover con- ternational Conference on Intelligent Robots and Systems,
trol via Lyapunov control for an autonomous model heli- San Francisco, CA.
copter. In Proceedings of the IEEE Conference on Decision Merz, T., Rudol, P., & Wzorek, M. (2006, July). Control sys-
and Control, Phoenix, AZ (vol. 4, pp. 3490–3495). tem framework for autonomous robots based on ex-
Marconi, L., Isidori, A., & Serrani, A. (2002). Autonomous tended state machines. In Proceedings of the International
vertical landing on an oscillating platform: An internal- Conference on Autonomic and Autonomous Systems,
model based approach. Automatica, 38, 21–32. Washington, DC (pp. 1–8).
Martinez, C., Campoy, P., Mondragon, I. F., & Olivares- Mettler, B. (2003). Identification, modeling and characteristics
Mendez, M. A. (2009, October). Trinocular ground system of miniature rotorcraft. Boston, MA: Kluwer Academic.
to control UAVs. In Proceedings of the IEEE/RSJ Interna- Mettler, B., Dadkhah, N., & Kong, Z. (2010). Agile autonomous
tional Conference on Intelligent Robots and Systems, St, guidance using spatial value functions. Control Engineer-
Louis, MO (pp. 3361–3367). ing Practice, 18, 773–788.
Martinez, C., Mondragon, I. F., Olivares-Mendez, M. A., & Mettler, B., Kong, Z., Goerzen, C., & Whalley, M. (2010, May).
Campoy, P. (2010, June). Onboard and ground visual pose Benchmarking of obstacle field navigation algorithms for
autonomous helicopters. In Proceedings of the 66th An- Ollero, A., Lacroix, S., Merino, L., Gancet J., Wiklund, J.,
nual Forum of the American Helicopter Society, Phoenix, Remuss, V., Perez, I. V., Gutierrez, L. G., Viegas, D. X.,
AZ (pp. 1–18). Benitez, M. A. G. Mallet, A., Alami R., Chatila, R., Hom-
Mettler, B., Tischler, M., & Kanade, T. (2002). System identifica- mel, G., Lechuga, F. G. C., Arrue, B. C., Ferruz, J.,
tion modeling of a small-scale unmanned helicopter. Jour- Martinez-De Dios, J. R., Caballero, F. (2005). Multiple eyes
nal of the American Helicopter Society, 47(1), 50–63. in the skies: architecture and perception issues in the
Mettler, B., Valenti, M., Schouwenaars, T., Kuwata, Y., How, J., COMETS unmanned air vehicles project. IEEE Robotics
Paunicka, J., and Feron, E. (2003, August). Autonomous and Automation Magazine, 12(2), 46–57.
UAV guidance build-up: Flight-test demonstration and Ollero, A., & Maza, I. (2007). Multiple heterogeneous un-
evaluation plan. In Proceedings of the AIAA Guid- manned aerial vehicles, Tracts in Advanced Robotics,
ance, Navigation, and Control Conference, Austin, TX Berlin: Springer-Verlag (vol. 37).
(vol. AIAA-2003-5744, pp. 1–10). Ollero, A., & Merino, L. (2004). Control and perception tech-
Michael, N., Fink, J., & Kumar, V. (2011). Cooperative manipu- niques for aerial robotics. Annual Reviews in Control, 28,
lation and transportation with aerial robots. Autonomous 167–178.
Robots, 30(1), 73–86. Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A
Michael, N., Mellinger, D., Lindsey, Q., & Kumar, V. (2010). The model for types and levels of human interaction with au-
GRASP multiple micro UAV testbed. IEEE Robotics and tomation. IEEE Transactions on Systems, Man and Cyber-
Automation Magazine, 17(3), 56–65. netics, 30(3), 286–297.
Miller, R., & Amidi, O. (1998, June). 3-D site mapping with Pebrianti, D., Kendoul, F., Azrad, S., Wang, W., &
the CMU autonomous helicopter. In Proceedings of the 5th Nonami, K. (2010). Autonomous hovering and land-
International Conference on Intelligent Autonomous Sys- ing of a quad-rotor micro aerial vehicle by means of on
tems, Karlsruhe, Germany (pp. 1–8). ground stereo vision system. Journal of System Design
Mondragon, I. F., Campoy, P., Martinez, C., & Olivares, and Dynamics, 4(2), 269–284.
M. A. (2010). Omnidirectional vision applied to un- Penga, K., Cai, G., Chen, B. M., Dongb, M., Luma, K. Y., &
manned aerial vehicles (UAVs) attitude and heading es- Lee, T. H. (2009). Design and implementation of an au-
timation. Robotics and Autonomous Systems, 58, 809– tonomous flight control law for a UAV helicopter. Auto-
819. matica, 45, 2333–2338.
Montgomery, J., Roumeliotis, S. I., Johnson, A., & Matthies, Pettersson, P., & Doherty, P. (2004, June). Probabilistic roadmap
L. (2006). The jet propulsion laboratory autonomous he- based path planning for an autonomous unmanned
licopter testbed: A platform for planetary exploration aerial vehicle. In Proceedings of the ICAPS-04 Workshop
technology research and development. Journal of Field on Connecting Planning Theory with Practice, Whistler,
Robotics, 23(3/4), 245–267. British Columbia, Canada (pp. 1–6).
Montgomery, J. F., & Bekey, G. A. (1998, December). Learning Pflimlin, J.-M., Soures, P., & Hamel, T. (2006). A hierarchi-
helicopter control through teaching by showing. In Pro- cal control strategy for the autonomous navigation of
ceedings of the 37th IEEE Conference on Decision and a ducted fan flying robot. In Proceedings of the IEEE
Control, FL, Tampa, FL (vol. 4, pp. 3647–3652). International Conference on Robotics and Automation,
Mourikis, A. I., Trawny, N., Roumeliotis, S. I., Johnson, A. E., Orlando, FL (pp. 2491–2496).
Ansar, A., & Matthies, L. (2009). Vision-aided inertial nav- Phillips, C., Karr, C. L., & Walker, G. (1996). Helicopter flight
igation for spacecraft entry, descent, and landing. IEEE control with fuzzy logic and genetic algorithms. Engineer-
Transactions on Robotics, 25(9), 264–280. ing Applications of Artificial Intelligence, 9(2), 175–184.
NASA. (N.d.) Nasa/army autonomous rotorcraft Pratt, K. S., Murphy, R., Stover, S., & Griffin, C. (2009).
project (ARP). Retrieved August, 2010, from CONOPS and autonomy recommendations for VTOL
http://ti.arc.nasa.gov/projects/apex/projectARP.php. small unmanned aerial system based on Hurricane
Nemra, A., & Aouf, N. (2009). Robust airborne 3D visual simul- Katrina operations. Journal of Field Robotics, 26(8), 636–
taneous localization and mapping with observability and 650.
consistency analysis. Journal of Intelligent and Robotic Proctor, A. A., & Johnson, E. N. (2005, August). Vision-only ap-
Systems, 55(4–5), 345–376. proach and landing. In Proceedings of the AIAA Guid-
Nonami, K., Kendoul, F., Suzuki, S., & Wang, W. (2010). Au- ance, Navigation, and Control Conference and Exhibit,
tonomous flying robots: Unmanned aerial vehicles and AIAA 2005-5871, San Francisco, CA.
micro air vehicles. Tokyo: Springer-Verlag. Prouty, R. W. (1995). Helicopter performance, stability and con-
Oh, P. Y. (2004, August). Flying insect inspired vision for micro- trol. Malabar, FL: Krieger.
air-vehicle navigation. In Proceedings of the International Qi, J., Song, D., Dai, L., Han, J., & Wang, Y. (2010). The new evo-
Symposium on Autonomous Unmanned Vehicles System, lution for SIA rotorcraft UAV project. Journal of Robotics
Anaheim, CA. (vol. 2010, pp. 1–9).
Olfati-Saber, R. (2001). Nonlinear control of underactuated Redding, J., Amin, J., Boskovic, J., Kang, Y., Hedrick, K.,
mechanical systems with application to robotics and Howlett, J., & Poll, S. (2007). A real-time obstacle detec-
aerospace vehicles. Ph.D. thesis, Massachusetts Institute tion and reactive path planning system for autonomous
of Technology. small-scale helicopters. In Proceedings of the AIAA
Navigation, Guidance and Control Conference and Ex- Economics and Mathematical Systems, In D. Grundel, R.
hibit, Hilton Head, SC (pp. 1–22). Murphey, P. Pardalos, & O. Prokopyev (Eds.), Unmanned
Roadmap (2010). Unmanned aircraft systems roadmap 2010– Helicopter Formation Flight Experiment for the Study
2035. Office of the Secretary of Defense. of Mesh Stability (vol. 588, pp. 37–56). Berlin: Springer-
Roberts, J., Corke, P., & Buskey, G. (2003, September). Low-cost Verlag.
flight control system for a small autonomous helicopter. In Shaw, E., Chung, H., Hedrick, J., & Sastry, S. (2007). Unmanned
Proceedings of the International Conference on Robotics helicopter formation flight experiment for the study of
and Automation, Taipei, Taiwan (pp. 546–551). mesh stability. In cooperative systems; Lecture notes in
Romero, H., Salazar, S., & Lozano, R. (2009). Real-time stabi- economics and mathematical systems (vol. 588, pp. 37–
lization of an eight-rotor UAV using optical flow. IEEE 56).
Transactions on Robotics, 25(4), 809–817. Sheridan, T. B. (1992). Telerobotics, automation, and human su-
Rudol, P., & Doherty, P. (2008, March). Human body detection pervisory control. Cambridge, MA: Massachusetts Insti-
and geolocalization for UAV search and rescue missions tute of Technology.
using color and thermal imagery. In Proceedings of IEEE Shim, D. H., Chung, H., & Sastry, S. (2006). Conflict-free nav-
Aerospace Conference, Big Sky, MT (pp. 1–8). igation in unknown urban environments. IEEE Robotics
Ruffier, F., & Franceschini, N. (2005). Optic flow regulation: and Automation Magazine, 13, 27–33.
The key to aircraft automatic guidance. Robotics and Au- Shim, D. H., Kim, H. J., & Sastry, S. (2003a, December). De-
tonomous Systems, (50), 177–194. centralized nonlinear model predictive control of mul-
Sanfourche, M., Besnerais, G. L., Fabiani, P., Piquereau, A., tiple flying robots. In Proceedings of the IEEE Con-
& Whalley, M. S. (2009, May). Comparison of terrain ference on Decision and Control, Maui, HI (pp. 3621–
characterization methods for autonomous UAVs. In Pro- 3626).
ceedings of the 65th Annual Forum of the American Shim, D. H., Kim, H. J., & Sastry, S. (2000, August).
Helicopter Society, Grapevine, TX (pp. 1–14). Hierarchical control system synthesis for rotorcraft-based
Saripalli, S., Montgomery, J., & Sukhatme, G. (2003). Visually- unmanned aerial vehicles. In Proceedings of the AIAA
guided landing of an unmanned aerial vehicle. IEEE Guidance, Navigation and Control Conference, Denver,
Transactions on Robotics and Automation, 19(3), 371–381. CO.
Saripalli, S., & Sukhatme, G. (2003, July). Landing on a mov- Shim, D. H., Kim, H. J., & Sastry, S. (2003b). A flight control sys-
ing target using an autonomous helicopter. In Proceed- tem for aerial robots: Algorithms and experiments. IFAC
ings of the International Conference on Field and Service Control Engineering Practice, 11(2), 1389–1400.
Robotics, Mt. Fuji, Japan. Shim, D. H., & Sastry, S. (2006, August). A situation-aware
Schafroth, D., Bermes, C, Bouabdallah, S., & Siegwart, R. flight control system design using real-time model pre-
(2010). Modeling, system identification and robust control dictive control for unmanned autonomous helicopters. In
of a coaxial micro helicopter. Control Engineering Prac- Proceedings of the AIAA Guidance, Navigation, and Con-
tice, 18, 700–711. trol Conference, AIAA 2006-6101, Keystone, CO.
Scherer, S. (2011). Low-altitude operation of unmanned rotor- Shim, D. H., & Sastry, S. (2007, July). An evasive maneuver-
craft. Ph.D. thesis, The Robotics Institute, Carnegie Mellon ing algorithm for UAVs in see-and-avoid situations. In
University. Proceedings of the IEEE American Control Conference,
Scherer, S., Chamberlain, L., & Singh, S. (2010). Online assess- New York (pp. 3886–3891).
ment of landing sites. In Proceedings of the AIAA In- Shin, J., Fujiwara, D., Nonami, K., & Hazawa, K. (2005). Model-
fotech@Aerospace, Atlanta, GA (pp. 1–14). based optimal attitude and positioning control of small-
Scherer, S., Singh, S., Chamberlain, L., & Elgersma, M. (2008). scale unmanned helicopter. Robotica, 23, 51–63.
Flying fast and low among obstacles: Methodology and Shin, J., Ji, S., Shon, W., Lee, H., Cho, K., & Park, S. (2011). In-
experiments. International Journal of Robotics Research, door hovering control of small ducted-fan type OAV using
27(5), 549–574. ultrasonic positioning system. Journal of Intelligent and
Schouwenaars, T., Feron, E., & How, J. (2006, June). Multi- Robotic Systems, 61, 15–27.
vehicle path planning for non-line of sight communica- Smerlas, A., I, P., Walker, D., Strange, M., Howitt, J.,
tion. In Proceedings of the IEEE American Control Con- Norton, R., Gubbels, A., & Bailllie, S. (1998, August). De-
ference, Minneapolis, MN (pp. 5757–5762). sign and flight testing of an H∞ controller for thr NRC Bell
Sevcik, K. W., Kuntz, N., & Oh, P. Y. (2010). Exploring the effect 205 experimental fly-by-wire helicopter. In Proceedings of
of obscurants on safe landing zone identification. Journal the AIAA Guidance, Navigation, and Control Conference,
of Intelligent and Robotic Systems, 57, 281–295. Number AIAA 98-4300, Boston, MA.
Shakernia, O., Sharp, C., Vidal, R., Shim, D., Ma, Y., & Sobers, M., Chowdhary, G., & Johnson, E. N. (2009, August).
Sastry, S. (2002, May). Multiple view motion estimation Michael Sobers, Girish Chowdhary, and Eric N. John-
and control for landing an unmanned aerial vehicle. In son. In Indoor Navigation for Unmanned Aerial Vehicles,
Proceedings of the IEEE Conference on Robotics and Au- Chicago, IL (pp. 1–29).
tomation, Washington, DC (vol. 3, pp. 2793–2798). Srinivasan, M. V., Zhang, S., & Bidwell, N. (1997). Visually me-
Shaw, E., Chung H., Hedrick J., & Sastry, S. (2007). Coopera- diated odometry in honeybees. Journal of Experimental
tive systems: Control and optimization, Lecture Notes in Biology, 200, 2513–2522.
Sugeno, M., Griffin, M., & Bastian, A. (1993, July). Fuzzy hierarchical control of an unmanned helicopter. In Proceedings of the International Fuzzy Systems and Applications Conference, Seoul, South Korea (pp. 179–182).
Sugeno, M., Howard, W., Isao, H., & Satoru, K. (1995, May). Intelligent control of an unmanned helicopter based on fuzzy logic. In Proceedings of the 51st American Helicopter Society (AHS) Annual Forum, Fort Worth, TX (pp. 791–803).
Takahashi, M., Abershitz, A., Rubinets, R., & Whalley, M. (2011, May). Evaluation of safe landing area determination algorithms for autonomous rotorcraft using site benchmarking. In Proceedings of the 67th Annual Forum of the American Helicopter Society (AHS), Virginia Beach, VA (pp. 1–23).
Takahashi, M., Schulein, G., & Whalley, M. (2008, April–May). Flight control law design and development for an autonomous rotorcraft. In Proceedings of the 64th Annual Forum of the American Helicopter Society, Montreal, Quebec (pp. 1652–1671).
Tammero, L., & Dickinson, M. (2002). The influence of visual landscape on the free flight behavior of the fruit fly Drosophila melanogaster. Journal of Experimental Biology, 205, 327–343.
Teichteil-Konigsbuch, F., & Fabiani, P. (2007, September). A multi-thread decisional architecture for real-time planning under uncertainty. In Proceedings of the 3rd Workshop on Planning and Plan Execution for Real-World Systems, Providence, RI (pp. 1–6).
Templeton, T., Shim, D., Geyer, C., & Sastry, S. (2007, April). Autonomous vision-based landing and terrain mapping using an MPC-controlled unmanned rotorcraft. In Proceedings of the IEEE International Conference on Robotics and Automation, Rome (pp. 1349–1356).
Theodore, C., Rowley, D., Hubbard, D., Ansar, A., Matthies, L., Goldberg, S., & Whalley, M. (2006, May). Flight trials of a rotorcraft unmanned aerial vehicle landing autonomously at unprepared sites. In Proceedings of the 62nd Annual Forum of the American Helicopter Society, Phoenix, AZ.
Thrun, S., Diel, M., & Hahnel, D. (2003, July). Scan alignment and 3D surface modeling with a helicopter platform. In Proceedings of the 4th International Conference on Field and Service Robotics, Lake Yamanaka, Japan.
Tischler, M. B., & Cauffman, M. G. (1992). Frequency-response method for rotorcraft system identification: Flight application to BO-105 coupled fuselage/rotor dynamics. Journal of the American Helicopter Society (AHS), 37(3), 3–17.
Tornqvist, D., Schon, T. B., Karlsson, R., & Gustafsson, F. (2009). Particle filter SLAM with high dimensional vehicle model. Journal of Intelligent and Robotic Systems, 55(4–5), 249–266.
Trawny, N., Mourikis, A. I., Roumeliotis, S. I., Johnson, A. E., & Montgomery, J. F. (2007). Vision-aided inertial navigation for pin-point landing using observations of mapped landmarks. Journal of Field Robotics, 24(5), 357–378.
Tsenkov, P., Howlett, J. K., Whalley, M., Schulein, G., Takahashi, M., Rhinehart, M. H., & Mettler, B. (2008, August). A system for 3D autonomous rotorcraft navigation in urban environments. In Proceedings of the AIAA Guidance, Navigation and Control Conference and Exhibit, AIAA-2008-7412, Honolulu, HI (pp. 1–23).
Valavanis, K. P. (Ed.). (2007). Advances in unmanned aerial vehicles. Intelligent Systems, Control, and Automation: Science and Engineering (vol. 33). The Netherlands: Springer.
Vidal, R., Shakernia, O., Kim, H. J., Shim, H., & Sastry, S. (2002). Multi-agent probabilistic pursuit-evasion games with unmanned ground and aerial vehicles. IEEE Transactions on Robotics and Automation, 18(5), 662–669.
Viquerat, A., Blackhall, L., Reid, A., Sukkarieh, S., & Brooker, G. (2007, July). Reactive collision avoidance for unmanned aerial vehicles using Doppler radar. In Proceedings of the International Conference on Field and Service Robotics, Chamonix, France.
Visiongain (2009). The unmanned aerial vehicles (UAV) market 2009–2019. London, UK: Visiongain.
Wagner, H. (1982). Flow-field variables trigger landing in flies. Nature, 297, 147–148.
Wang, B., Han, L., Zhang, H., Wang, Q., & Li, B. (2009, December). A flying robotic system for power line corridor inspection. In Proceedings of the IEEE International Conference on Robotics and Biomimetics, Guilin, China (pp. 2468–2473).
Wang, W., Song, G., Nonami, K., Hirata, M., & Miyazawa, O. (2006, October). Autonomous control for micro-flying robot and small wireless helicopter X.R.B. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing (pp. 2906–2911).
Watanabe, Y., Lesire, C., Piquereau, A., & Fabiani, P. (2010, May). The ONERA ReSSAC unmanned autonomous helicopter: Visual air-to-ground target tracking in an urban environment. In Proceedings of the 66th Annual Forum of the American Helicopter Society, Phoenix, AZ.
Whalley, M., Freed, M., Harris, R., Takahashi, M., Schulein, G., & Howlett, J. (2005, January). Design, integration, and flight test results for an autonomous surveillance helicopter. In Proceedings of the AHS International Specialists Meeting on Unmanned Rotorcraft, Chandler, AZ (pp. 1–13).
Whalley, M., Takahashi, M., Tsenkov, P., & Schulein, G. (2009, May). Field-testing of a helicopter UAV obstacle field navigation and landing system. In Proceedings of the 65th Annual Forum of the American Helicopter Society (AHS), Grapevine, TX.
William, B., Green, E., & Oh, P. (2008). Optic-flow-based collision avoidance. IEEE Robotics and Automation Magazine, 15(1), 96–103.
Wzorek, M., Conte, G., Rudol, P., Merz, T., Duranti, S., & Doherty, P. (2006, April). From motion planning to control—A navigation framework for an autonomous unmanned aerial vehicle. In Proceedings of the 21st Bristol UAV Systems Conference, Bristol, UK (pp. 1–15).
Wzorek, M., & Doherty, P. (2006, November). Reconfigurable path planning for an autonomous unmanned aerial vehicle. In Proceedings of the IEEE International Conference
on Hybrid Information Technology, Jeju Island, Korea (pp. 242–249).
Wzorek, M., Kvarnstrom, J., & Doherty, P. (2010, May). Choosing path replanning strategies for unmanned aircraft systems. In Proceedings of the Twentieth International Conference on Automated Planning and Scheduling (ICAPS), Toronto, Canada (pp. 1–8).
Yang, K., Gan, S. K., & Sukkarieh, S. (2010). An efficient path planning and control algorithm for RUAV's in unknown and cluttered environments. Journal of Intelligent and Robotic Systems, 57, 101–122.
Yavrucuk, I., Prasad, J. V. R., & Unnikrishnan, S. (2009). Envelope protection for autonomous unmanned aerial vehicles. AIAA Journal of Guidance, Control, and Dynamics, 32(1), 248–261.
Yu, Z., Nonami, K., Shin, J., & Celestino, D. (2007). 3D vision based landing control of a small scale autonomous helicopter. International Journal of Advanced Robotic Systems, 4(1), 51–56.
Yun, B., Chen, B. M., Lum, K. Y., & Lee, T. H. (2010). Design and implementation of a leader–follower cooperative control system for unmanned helicopters. Journal of Control Theory and Applications, 8(1), 61–68.
Zeigler, B. (1990, March). High autonomy systems: Concepts and models. In Proceedings of the AI, Simulation, and Planning in High Autonomy Systems, Tucson, AZ (pp. 2–7).
Zufferey, J.-C., & Floreano, D. (2006). Fly-inspired visual steering of an ultralight indoor aircraft. IEEE Transactions on Robotics, 22(1), 137–146.