Heterogeneous Robotic Systems For Assembly
Ashley W. Stroupe, Terry Huntsberger, Brett Kennedy, Hrand Aghazarian, Eric T. Baumgartner,
Anthony Ganino, Michael Garrett, Avi Okon, Matthew Robinson, and Julie Anne Townsend
Jet Propulsion Laboratory/California Institute of Technology, 4800 Oak Grove Drive, MS 82-105, Pasadena, CA, USA
{Ashley.Stroupe, Terry.Huntsberger, Brett.Kennedy, Hrand.Aghazarian, Eric.Baumgartner, Anthony.Ganino,
Michael.Garrett, Avi.Okon, Matthew.Robinson, Julie.Townsend}@jpl.nasa.gov
ABSTRACT
The Distributed and Reconfigurable Electronics (DARE) project is developing electronics and sensors that will enable structural elements to cooperate with the robots that transport and assemble them by providing feedback information. These cooperative components will also provide an inherent ability to transfer power and information within the structure, as well as to identify potential failures. A prototype is in development. In-Space Assembly (ISA) is a collection of tasks, built around a dexterous limbed robot design, that is investigating assembly and repair of orbital truss structures using heterogeneous teams of robots. A single small prototype has demonstrated several walking gaits and the ability to precisely manipulate various tools.
2. CORE TECHNOLOGIES
2.1 Behavior-Based Architecture for Robust Real-Time Control

The current flight processors (RAD6000) operate at 20 MHz, which severely limits the complexity of real-time control. Despite this limitation, the control must remain highly robust and accurate in the face of uncertainty. In order to provide the required performance within this limitation, the basic software architecture is designed to be highly efficient. Many of the computationally complex aspects of the task, such as task decomposition and task sequencing, are designed into the system through the use of finite state machines. The behavior-based approach, which is highly reactive, can quickly adapt and select actions for a changing state without having to plan extensively. The hierarchical behavior-based approach is based on the FIDO software architecture for real-time control and the CAMPOUT architecture for multi-robot coordination [3] (Fig. 2). While behavior-based control is not new [1], this particular implementation is specifically designed for real-time operations. This type of behavior-based software architecture was implemented for the Mars Exploration Rovers for these reasons [17].
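To make the finite-state-machine style of task sequencing concrete, the following is a minimal sketch, not the flight software: all states, transitions, sensor fields, and thresholds are illustrative assumptions. Each state maps the sensed situation directly to an action, so no online planning is required, and an unexpected state can route straight to operator recovery.

```python
# Minimal behavior-based FSM sketch: each state maps sensed conditions
# directly to an action. States, thresholds, and sensor fields are
# illustrative assumptions, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Sensed:
    target_visible: bool
    range_to_target: float   # meters
    fault: bool

def approach(s: Sensed) -> tuple[str, str]:
    """Drive toward the target; transition when close enough."""
    if s.fault:
        return "stop", "CALL_OPERATOR"       # unexpected state -> human recovery
    if not s.target_visible:
        return "turn_in_place", "APPROACH"   # reacquire the visual target
    if s.range_to_target < 0.10:             # within 10 cm: begin placement
        return "stop", "PLACE"
    return "drive_forward", "APPROACH"

def place(s: Sensed) -> tuple[str, str]:
    """Placement behavior; a real system would sequence arm moves here."""
    return ("deploy_arm", "DONE") if not s.fault else ("stop", "CALL_OPERATOR")

BEHAVIORS = {"APPROACH": approach, "PLACE": place}

state = "APPROACH"
while state in BEHAVIORS:
    sensed = Sensed(target_visible=True, range_to_target=0.05, fault=False)  # stub reading
    action, state = BEHAVIORS[state](sensed)
    print(action, "->", state)
```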
Fig. 2. CAMPOUT / FIDO architecture.

To achieve precise positioning, both of the robots and of their manipulators, the system must account for uncertainty and errors. The primary sensing modality is vision, due to its high information content and low power requirements. Vision sensing, however, also introduces error. Motions therefore typically employ an iterative approach in which a step is taken, progress is evaluated, and any necessary corrective step is computed until the resulting position is within the required accuracy. This type of approach has been implemented (both autonomously and hand-programmed) for the Mars Exploration Rovers: autonomous go-to-waypoint iteratively adjusts position using visual feature tracking to verify progress and compute next actions [16, 17], and manipulator positioning is done in two steps with a visual verification by a human on the ground so that any corrections can be made prior to activities. Sensing results are averaged over multiple frames to reduce error.
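The iterative move-and-verify approach can be summarized in a short sketch. This loop is illustrative only; the noise model, tolerance, and correction gain are assumptions: take a step toward the goal, re-measure the remaining offset visually (averaging frames), and repeat until within the required accuracy.

```python
# Illustrative iterative positioning loop: step, visually evaluate,
# and compute a corrective step until within the required accuracy.
# Gain, tolerance, and noise level are assumptions for illustration.
import random

def visual_offset_estimate(true_offset: float) -> float:
    """Stand-in for vision: noisy measurement, averaged over frames."""
    frames = [true_offset + random.gauss(0.0, 0.01) for _ in range(5)]
    return sum(frames) / len(frames)   # averaging reduces per-frame error

def iterative_move(offset: float, tolerance: float = 0.02) -> float:
    while True:
        measured = visual_offset_estimate(offset)
        if abs(measured) < tolerance:          # within required accuracy: done
            return offset
        step = 0.8 * measured                  # conservative partial correction
        offset -= step                         # execute the corrective motion

print(f"final offset: {iterative_move(0.5):+.3f} m")
```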
Behavior-based control also allows adaptiveness in other ways, such as quickly identifying an unexpected state and directly mapping this state to an action that can achieve the desired result, or to the need to call for human intervention for recovery.

2.2 Hybrid Image Plane Stereo (HIPS) for Hand-Eye Coordination

Precision hand-eye coordination is a difficult problem due to the many sources of error: rover pose, manipulator pose, manipulator kinematic model, visual target identification, and visual target pose. In order to reduce the magnitudes of these errors, the cameras are calibrated relative to the manipulator's configuration space using a process called Hybrid Image Plane Stereo (HIPS). Unlike traditional stereo vision with forward kinematics [10], this allows the robot to determine the exact configuration that places the manipulator instruments at the visually identified target more precisely, by eliminating errors due to manipulator model inaccuracies.
This calibration is accomplished by generating camera models directly in the manipulator's reference frame. Models are generated by comparing the visually observed position of a fiducial on the manipulator with the reported kinematic position of the manipulator. HIPS continually updates the models to account for any changes to the kinematics, as well as for other types of errors. Thus, unlike with traditional stereo and forward kinematics [10], target positions computed from image coordinates match the arm configuration (rather than ground truth), improving manipulator placement accuracy relative to targets.

HIPS uses an 18-parameter CAHVOR model, a pin-hole camera with symmetrical radial distortion. The initial camera model estimation step fits measured and observed manipulator positions, collected at known pre-determined positions, to the CAHVOR model. This may be computationally expensive and is therefore done offline ahead of time. The initial model accounts for any systematic errors, including frame transformation errors and kinematic model errors in link lengths or offsets. The second estimation step occurs online and re-adapts the models to time-varying errors and run-time uncertainties using newly collected measured/observed position pairs. The types of errors the adaptive model estimation addresses include flexion and droop (which may be orientation-dependent), joint resolution limitations, effects due to wear, finite image-plane cue detection, and additional camera model errors. Quantitative results indicate placement accuracy improvements of 60-90% over traditional stereo with forward kinematics. More details are provided in [13].
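As a rough illustration of the two-step estimation, the sketch below fits a simple linear projection model (a stand-in for the full 18-parameter CAHVOR model, which is considerably more involved) to pairs of kinematically reported fiducial positions and observed image coordinates, then re-fits as new pairs arrive at run time. The data, model form, and noise levels are assumptions for illustration only.

```python
# Sketch of HIPS-style model estimation with a simplified linear camera
# (a stand-in for CAHVOR): fit image coordinates as a function of the
# arm's *reported* 3-D fiducial position, so positions inverted from
# the model are consistent with the arm rather than with ground truth.
import numpy as np

def fit_camera(arm_xyz: np.ndarray, image_uv: np.ndarray) -> np.ndarray:
    """Least-squares fit of a 4x2 affine projection: uv ~ [xyz, 1] @ P."""
    homog = np.hstack([arm_xyz, np.ones((len(arm_xyz), 1))])   # N x 4
    P, *_ = np.linalg.lstsq(homog, image_uv, rcond=None)       # 4 x 2
    return P

# Offline step: fit at known pre-determined arm positions.
rng = np.random.default_rng(0)
arm = rng.uniform(-1.0, 1.0, size=(40, 3))                     # reported arm positions
true_P = rng.normal(size=(4, 2))                               # synthetic "truth"
uv = np.hstack([arm, np.ones((40, 1))]) @ true_P + rng.normal(0, 1e-3, (40, 2))
P = fit_camera(arm, uv)

# Online step: re-adapt with newly collected measured/observed pairs
# (capturing slow drift, e.g. from flexion, droop, or wear).
new_arm = rng.uniform(-1.0, 1.0, size=(10, 3))
new_uv = np.hstack([new_arm, np.ones((10, 1))]) @ true_P + rng.normal(0, 1e-3, (10, 2))
P = fit_camera(np.vstack([arm, new_arm]), np.vstack([uv, new_uv]))
print("max image-plane residual:", np.abs(np.hstack([arm, np.ones((40, 1))]) @ P - uv).max())
```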
2.3 Force-Sensing for Position Estimation

In the event that contact is required between a robot and an object, the sense of touch can be more accurate than vision in determining contact. This has been supported and utilized on the Mars Exploration Rovers in the form of contact switches on instruments [15, 17]. The instrument is commanded toward a position past the desired target, and motion is stopped when the contact switch is triggered; thus, errors in positioning due to visual estimation are eliminated.
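A guarded move of this kind is simple to express. The sketch below is illustrative; the sensing/actuation interface and step size are hypothetical, not the MER implementation: the instrument is commanded past the visually estimated target and stops on contact, so the final position comes from touch rather than from the visual estimate.

```python
# Guarded-move sketch: advance toward a point past the visually
# estimated target and stop when contact is sensed. The interface
# (in_contact) and parameters are hypothetical assumptions.
def guarded_move(visual_target: float, overshoot: float, step: float,
                 in_contact) -> float:
    """Advance in small steps until contact; in_contact(pos) reads the switch."""
    position = 0.0
    limit = visual_target + overshoot      # deliberately aim past the target
    while position < limit:
        if in_contact(position):           # contact switch triggered: stop
            return position
        position += step
    raise RuntimeError("no contact before limit; visual estimate too far off")

# Toy usage: vision says the surface is at 0.50 m, but it is really at 0.48 m.
true_surface = 0.48
stop = guarded_move(visual_target=0.50, overshoot=0.05, step=0.001,
                    in_contact=lambda p: p >= true_surface)
print(f"stopped near {stop:.3f} m (contact, not vision, set the position)")
```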
For JPL's assembly and maintenance tasks, this approach has been adopted and expanded in order to eliminate the effects of vision errors for many aspects of the task. In addition to contact switches on some instruments, the manipulators have a 3-axis force-torque sensor positioned at the wrist. This allows the robot to sense not only contact, but also the degree and direction of contact with many types of objects.
One of the most important applications of force sensing is in determining the relative formation of two robots carrying an object cooperatively. Typically, visual information on partner location is highly noisy (due to a robot's complex structure) and in many cases is completely unavailable. As in the case of two people carrying a large object, the primary cue for remaining cooperative with a partner is reaction force rather than vision. In the proper formation, reaction forces are minimal (the partner is neither pulling nor pushing on the object). As the formation moves away from nominal, forces and torques increase. By empirically calibrating the magnitude and direction of force and torque against formation offsets, the team can quantify formation errors and correct them (Fig. 4). Some work has applied force sensing to cooperative pushing (rather than rigid contact), such as [5].

Fig. 4. Relationship of formation and force-torque. Left: Torque direction and magnitude indicate the follower should slow down (top) or speed up (bottom). Right: Force indicates the follower should speed up (top) or slow down (bottom).
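The mapping in Fig. 4 from sensed force and torque to a follower speed correction amounts to a small feedback law. The sketch below is illustrative only: the gains, deadband, and sign conventions are assumptions standing in for the empirical calibration of force/torque magnitude and direction against formation offsets.

```python
# Sketch of a follower speed correction from wrist force-torque, in the
# spirit of Fig. 4: longitudinal force Fy and torque Tz about the
# vertical axis indicate whether the follower leads or lags the nominal
# formation. Gains, deadband, and signs are illustrative assumptions.
def follower_speed_correction(fy: float, tz: float,
                              k_force: float = 0.05, k_torque: float = 0.08,
                              deadband: float = 0.5) -> float:
    """Return dV (m/s): positive = speed up, negative = slow down."""
    dv = 0.0
    if abs(fy) > deadband:      # partner pulling forward (fy > 0): speed up
        dv += k_force * fy
    if abs(tz) > deadband:      # object yawing: follower is leading or lagging
        dv -= k_torque * tz
    return dv

# Nominal formation: forces and torques near zero, so no correction.
print(follower_speed_correction(fy=0.1, tz=0.0))   # 0.0 (within deadband)
# Partner pulling forward on the shared object: follower speeds up.
print(follower_speed_correction(fy=4.0, tz=0.0))   # +0.2
```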
A second application of force sensing is in determining proper alignment for component or instrument placement. In cooperative component acquisition or placement, for example, the robot (or team) visually determines the goal location for the component (and determines the manipulator joint configuration using HIPS) and computes a series of motions to achieve that goal position. If, in the course of reaching that position, the robot experiences resistive forces, the robot can infer that a position error has occurred and take steps to correct it. Ensuring that tool contact is made, and that contact is in the appropriate position and direction, is also done using this approach. Simple force sensing, in the form of contact switches, is currently used for MER instrument placement.
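The inference from resistive forces to a position error can likewise be sketched as a simple check wrapped around trajectory execution; the interface names and force threshold below are hypothetical assumptions.

```python
# Sketch: monitor wrist forces while executing planned placement motions;
# unexpected resistance implies a position error, so stop and let the
# caller re-estimate. Interface and threshold are assumptions.
def execute_placement(waypoints, move_to, sensed_force, max_force=5.0):
    for wp in waypoints:
        move_to(wp)
        if sensed_force() > max_force:      # resistive force: misalignment
            return False                    # caller re-estimates pose and retries
    return True                             # goal reached without resistance

# Toy usage with stand-in actuator and wrist sensor.
ok = execute_placement(
    waypoints=[0.1, 0.2, 0.3],
    move_to=lambda wp: None,                # stand-in actuator command
    sensed_force=lambda: 1.0,               # stand-in wrist reading (no contact)
)
print("placement", "succeeded" if ok else "needs correction")
```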
A final application of force sensing is for walking delicately on fragile orbital structures. Force sensing is used in gait modification in order to minimize impact on the structures, as well as during gait execution to ensure that slight errors in positioning do not result in structural damage. This application is in development.

3. SURFACE SYSTEMS

Robotic Construction Crew (RCC) is an ongoing program directed at developing prototype robotic systems for surface construction of habitats. Habitat construction by autonomous agents will eliminate the need for extended surface EVA for habitat assembly and provide a ready safe haven for astronauts prior to arrival in the event of difficulties. Efforts in this area have been underway for six years. The primary focus of RCC is cooperative manipulation of large components, including long-distance traverse, precision placement and mating, and handling of heterogeneous component types with an adaptable system. This work has been carried out primarily in an indoor environment that simulates natural terrain (Fig. 3), with some work done in an outdoor environment.

To date, work in robotic assembly includes component mating using three specialized robots (vision, coarse manipulation, fine manipulation) [2, 9]. Cooperative transport has focused on cooperative pushing [7, 8].

The RCC team has two four-wheeled rovers, each with a stereo pair of cameras and a 4 degree-of-freedom arm (with gripper) (Fig. 6). RCC has demonstrated the ability to autonomously obtain and place a component into an in-progress structure. This includes acquiring the component, cooperatively transporting the component to the structure, precisely aligning with the structure for component installation, and placing the component into the structure and mating it with other structure components. Both beams and panels have been installed in the structure with high reliability. Individual experiments looked at specific aspects of the task as well as at the end-to-end task. The success rates are shown in Table I. More results are in [11, 12, 13].