
HETEROGENEOUS ROBOTIC SYSTEMS FOR ASSEMBLY AND SERVICING

Ashley W. Stroupe, Terry Huntsberger, Brett Kennedy, Hrand Aghazarian, Eric T. Baumgartner,
Anthony Ganino, Michael Garrett, Avi Okon, Matthew Robinson, and Julie Anne Townsend

Jet Propulsion Laboratory/California Institute of Technology, 4800 Oak Grove Drive, MS 82-105, Pasadena, CA, USA,
{Ashley.Stroupe, Terry.Huntsberger, Brett.Kennedy, Hrand.Aghazarian, Eric.Baumgartner, Anthony.Ganino,
Michael.Garrett, Avi.Okon, Matthew.Robinson, Julie.Townsend}@jpl.nasa.gov

ABSTRACT

NASA's Vision for Space Exploration calls for an extended human presence in space and development of large-scale orbital structures. To reduce risk, it is essential to minimize astronaut exposure by limiting EVA and providing habitat infrastructure prior to arrival. Efficient assembly of space structures requires autonomous robotic teams with only high-level human supervision. Tasks will include component transport, precision component mating, structure inspection and analysis, and site surveying and clearing for surface structures. JPL is developing many of the required technologies for assembly and servicing to determine the challenges and required capabilities and to produce flight-relevant prototypes for maturing and testing these technologies in space-relevant environments.

Fig. 1. Johnson Space Center concept of a planetary habitat.

1. INTRODUCTION

Structures in space and on planetary surfaces play key roles in the current NASA Vision for Space Exploration [6], Fig. 1. Due to the extended periods over which assembly and maintenance must be performed, the extreme environments, and, in most cases, long communication delays or blackouts, efficiency and safety will require that much of the construction and maintenance work be accomplished autonomously, with only occasional high-level supervision and direct human intervention only in rare anomalous conditions. These complex tasks must be accomplished by intelligent systems despite severe limitations placed on such systems by the space environment and launch systems, and in the presence of high uncertainty. These constraints limit the mass, power, and volume of all components and limit the processing speed of the on-board computer. JPL is currently developing and testing technologies that will provide these capabilities for surface and on-orbit structure assembly and maintenance despite the constraints imposed by space operations.

Assembly and maintenance of space structures will require component transport over potentially long distances, precision manipulation and mating of components, inspection of components and identification of failures, and replacement or repair of damaged components. To simplify the construction process, structural components will likely be large relative to robot size, requiring cooperative transport and mating by multiple robots. For efficiency and accuracy during assembly, as well as for improving the ability to assess structure health, the structures themselves will require on-board sensing and data handling. Surface structures will additionally require site selection and clearing, and the environment will add the difficulties of interaction with terrain and soils. For orbital structures, the environment will instead add the difficulties of operating in zero gravity and locomoting on delicate structures.

Key skills that must be developed to reliably perform assembly and maintenance tasks include robust autonomous task sequencing, precise hand-eye coordination, accurate robot and team positioning, tightly maintained cooperative team formations, autonomous error identification prior to catastrophic failure, and identifying when fault recovery may be autonomous or requires human intervention.

Several ongoing projects are addressing these issues by designing, developing, implementing, and testing prototype systems in flight-relevant conditions. The Robotic Construction Crew (RCC) is a multi-robot system for autonomous assembly of structures from large components, such as beams and panels. To handle large components under gravity, a team of two robots cooperatively transports and installs components in a structure. A prototype system has demonstrated reliable transport and mating of individual components.
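This acquire-transport-align-place cycle is the kind of sequence that, as described later in the paper, is decomposed into finite state machines. A minimal sketch of such a task sequencer, with hypothetical state names and a stubbed success check (illustrative only, not the RCC flight code):

```python
# Hypothetical finite-state-machine task sequencer for one
# acquire -> transport -> align -> place assembly cycle.
# State names, retry policy, and checks are illustrative assumptions.

def run_assembly_fsm(step_ok, max_retries=3):
    """step_ok(state) -> True if the behavior for `state` succeeded."""
    sequence = ["ACQUIRE", "TRANSPORT", "ALIGN", "PLACE"]
    state_idx, retries = 0, 0
    while state_idx < len(sequence):
        state = sequence[state_idx]
        if step_ok(state):
            state_idx += 1          # advance on success
            retries = 0
        else:
            retries += 1            # retry the same behavior
            if retries > max_retries:
                return ("FAULT", state)  # escalate to a human operator
    return ("DONE", None)

# Example: a run in which alignment fails once, then succeeds.
attempts = {"ALIGN": iter([False, True])}
result = run_assembly_fsm(
    lambda s: next(attempts[s]) if s in attempts else True)
print(result)  # -> ('DONE', None)
```

On each tick the sequencer runs one behavior, advances on success, retries on failure, and escalates once retries are exhausted, mirroring the paper's call for autonomous fault identification with human fallback in rare anomalous conditions.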

Proc. of 'The 8th International Symposium on Artificial Intelligence, Robotics and Automation in Space - iSAIRAS', Munich, Germany, 5-8 September 2005 (ESA SP-603, August 2005).
The Distributed and Reconfigurable Electronics (DARE) project is developing electronics and sensors that will enable structural elements to cooperate with the robots that transport and assemble them by providing feedback information. These cooperative components will also provide an inherent ability to transfer power and information within the structure, as well as to identify potential failures. A prototype is in development. In-Space Assembly (ISA) is a collection of tasks, built around a dexterous limbed robot design, that is investigating assembly and repair of orbital truss structures using heterogeneous teams of robots. A single small prototype has demonstrated several walking gaits and the ability to precisely manipulate various tools.

2. CORE TECHNOLOGIES

2.1 Behavior-Based Architecture for Robust Real-Time Control

The current flight processors (RAD6000) operate at 20 MHz, which severely limits the complexity of real-time control. Despite this limitation, the control must be highly robust and accurate despite uncertainty. In order to provide the required performance within this limitation, the basic software architecture is designed to be highly efficient. Many of the computationally complex aspects of the task, such as task decomposition and task sequencing, are designed into the system through the use of finite state machines. The behavior-based approach, which is highly reactive, can quickly adapt and select actions for changing state without having to plan extensively. The hierarchical behavior-based approach is based on the FIDO software architecture for real-time control and the CAMPOUT architecture for multi-robot coordination [3] (Fig. 2). While behavior-based control is not new [1], this particular implementation is specifically designed for real-time operations. This type of behavior-based software architecture was implemented for the Mars Exploration Rovers for these reasons [17].

Fig. 2. CAMPOUT / FIDO architecture: a human operator and central mission planner sit above distributed multi-robot planning and CAMPOUT/FIDO behavior-based control, which drive the robot team's sensors and actuators through a hardware interface.

To achieve precise positioning, both of the robots and of their manipulators, the system must account for uncertainty and errors. The primary sensing modality is vision, due to its high information content and low power requirement. Vision sensing also introduces error. Motions therefore typically employ an iterative approach in which a step is taken, progress is evaluated, and any necessary corrective step is computed, until the resulting position is within the required accuracy. This type of approach has been implemented (both autonomously and hand-programmed) for the Mars Exploration Rovers: autonomous go-to-waypoint iteratively adjusts position using visual feature tracking to verify progress and compute next actions [16, 17], and manipulator positioning is done in two steps with visual verification by a human on the ground so that any corrections can be made prior to activities. Sensing results are averaged over multiple frames to reduce error.

Behavior-based control also allows adaptiveness in other ways, such as quickly identifying an unexpected state and directly mapping this state to an action that can achieve the desired result, or to the need to call for human intervention for recovery.

2.2 Hybrid Image Plane Stereo (HIPS) for Hand-Eye Coordination

Precision hand-eye coordination is a difficult problem due to the many sources of error: rover pose, manipulator pose, manipulator kinematic model, visual target identification, and visual target pose. In order to reduce the magnitudes of these errors, the cameras are calibrated relative to the manipulator's configuration space using a process called Hybrid Image Plane Stereo (HIPS). Unlike traditional stereo vision with forward kinematics [10], this allows the robot to determine more precisely the exact configuration that places the manipulator instruments at the visually identified target, by eliminating errors due to manipulator model inaccuracies.

This calibration is accomplished by generating camera models directly in the manipulator's reference frame. Models are generated by comparing the visually observed position of a fiducial on the manipulator with the reported kinematic position of the manipulator. HIPS continually updates the models to account for any changes to the kinematics, as well as for other types of errors. Thus, unlike with traditional stereo and forward kinematics [10], target positions computed from image coordinates match the arm configuration (rather than ground truth) and improve manipulator placement accuracy relative to targets.

HIPS uses an 18-parameter CAHVOR model, a pin-hole camera with symmetrical radial distortion.
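The core idea of fitting a camera model directly to pairs of kinematically reported and visually observed fiducial positions can be illustrated with a deliberately simplified stand-in: a one-dimensional linear model fit by closed-form least squares, in place of the full 18-parameter CAHVOR fit. The model form and all numbers are illustrative assumptions, not HIPS itself:

```python
# Toy illustration of the HIPS idea: fit a camera model directly to
# (arm position, observed image coordinate) pairs, so that targets
# picked in the image map straight into the manipulator's own frame.
# A 1-D linear model (u = a*x + b) stands in for the CAHVOR fit.

def fit_linear(xs, us):
    """Closed-form least squares for u = a*x + b."""
    n = len(xs)
    mx, mu = sum(xs) / n, sum(us) / n
    a = sum((x - mx) * (u - mu) for x, u in zip(xs, us)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, mu - a * mx

# "Calibration" samples: a fiducial at known arm positions x,
# observed at image coordinate u (here an exact linear relation).
arm_positions = [0.0, 0.1, 0.2, 0.3, 0.4]
image_coords  = [320.0, 340.0, 360.0, 380.0, 400.0]
a, b = fit_linear(arm_positions, image_coords)

# Invert the fitted model: where must the arm go to reach a target
# seen at image coordinate u_target?
u_target = 350.0
x_cmd = (u_target - b) / a
print(round(x_cmd, 3))  # -> 0.15
```

Because the model is fit in the arm's own frame, systematic kinematic offsets cancel out of the image-to-arm mapping, which is the property HIPS exploits; the online re-fit described below plays the same role for time-varying errors.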
The initial camera model estimation step fits the compared measured and observed manipulator positions, at known pre-determined poses, to the CAHVOR model. This may be computationally expensive and is therefore done offline, ahead of time. The initial model accounts for any systematic errors, including frame transformation errors and kinematic model errors in link lengths or offsets. The second estimation step occurs online and re-adapts the models to time-varying errors and run-time uncertainties using newly collected measured/observed position pairs. Types of errors the adaptive model estimation addresses include flexion and droop (which may be orientation-dependent), joint resolution limitations, effects due to wear, finite image-plane cue detection, and additional camera model errors. Quantitative results indicate placement accuracy improvements of 60-90% over traditional stereo with forward kinematics. More details are provided in [13].

2.3 Force-Sensing for Position Estimation

In the event that contact is required between a robot and an object, the sense of touch can be more accurate than vision in determining contact. This has been supported and utilized on the Mars Exploration Rovers in the form of contact switches on instruments [15, 17]. The instrument is commanded toward a position past the desired target, and motion is stopped when the contact switch is triggered; thus, errors in positioning due to visual estimation are eliminated.

For JPL's assembly and maintenance tasks, this approach has been adopted and expanded in order to eliminate the effects of vision errors for many aspects of the task. In addition to contact switches on some instruments, manipulators have a 3-axis force-torque sensor positioned at the wrist. This allows the robot to sense not only contact, but also the degree and direction of contact with many types of objects.

One of the most important applications of force sensing is in determining the relative formation of two robots carrying an object cooperatively. Typically, visual information on partner location is highly noisy (due to a robot's complex structure) and in many cases is completely unavailable. As in the case of two people carrying a large object, the primary cue for remaining cooperative with a partner is reaction force rather than vision. In the proper formation, reaction forces are minimal (the partner is neither pulling nor pushing on the object). As the formation moves away from nominal, forces and torques increase. By empirically calibrating the magnitude and direction of force and torque against formation offsets, the team can quantify formation errors and correct them. Some work has applied force sensing to cooperative pushing (rather than rigid contact), such as in [5].

A second application of force sensing is in determining proper alignment for component or instrument placement. In cooperative component acquisition or placement, for example, the robot (or team) visually determines the goal location for the component (and determines the manipulator joint configuration using HIPS) and computes a series of motions to achieve that goal position. If, in the course of reaching that position, the robot experiences resistive forces, it can infer that a position error has occurred and take steps to correct it. Ensuring that tool contact is made, and that contact is in the appropriate position and direction, is also done using this approach. Simple force sensing, in the form of contact switches, is currently used for MER instrument placement.

A final application of force sensing is for walking delicately on fragile orbital structures. Force sensing is used in gait modification in order to minimize impact on the structures, as well as during gait execution to ensure that slight errors in positioning do not result in structural damage. This application is in development.

3. SURFACE SYSTEMS

Robotic Construction Crew (RCC) is an ongoing program directed at developing prototype robotic systems for surface construction of habitats. Habitat construction by autonomous agents will eliminate the need for extended surface EVA for habitat assembly and provide a ready safe haven for astronauts prior to arrival in the event of difficulties. Efforts in this area have been in development for six years. The primary focus of RCC is cooperative manipulation of large components, including long-distance traverse, precision placement and mating, and handling of heterogeneous component types with an adaptable system. This work has been carried out primarily in an indoor environment that simulates natural terrain (Fig. 3), with some work done in an outdoor environment.

To date, work in robotic assembly includes component mating using three specialized robots (vision, coarse manipulation, fine manipulation) [2, 9]. Cooperative transport has focused on cooperative pushing [7, 8].

The RCC team has two four-wheeled rovers, each with a stereo pair of cameras and a 4 degree-of-freedom arm with gripper (Fig. 6). RCC has demonstrated the ability to autonomously obtain and place a component into an in-progress structure. This includes acquiring the component, cooperatively transporting it to the structure, precisely aligning with the structure for installation, and placing the component into the structure and mating it with other structure components. Both beams and panels have been installed in the structure with high reliability. Individual experiments looked at specific aspects of the task as well as at the end-to-end task. The success rates are shown in Table I. More results are in [11, 12, 13].

Fig. 3. Structure of interlocking beams. Inset: Component fiducials and two interlock cones.

Table I: Construction Results

Experiment          Runs  Failures
Acquire Beam          24        0*
Align at Structure    19        1
Place Beam            18        0
End to End             5        0

* Excludes a non-algorithmic failure due to a poorly calibrated wrist.
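The grasp verification behind the "Acquire Beam" results above can be sketched as a threshold test on the wrist force trace, with a local search over small offsets on failure. The threshold and offset values are hypothetical, not RCC's calibrated numbers:

```python
# Sketch of force-based grasp verification: during acquisition the
# wrist force should stay near small friction levels; a spike means
# the gripper hit the component and the grasp is misaligned.
# Threshold and search offsets are illustrative assumptions.

FRICTION_LIMIT_N = 5.0   # forces above this indicate a missed grasp

def verify_grasp(force_trace_n):
    """Return True if all sampled wrist forces stay within friction."""
    return max(force_trace_n) < FRICTION_LIMIT_N

def acquire_with_local_search(try_grasp, offsets_m):
    """Retry the grasp at small lateral offsets until forces look nominal."""
    for dx in offsets_m:
        if verify_grasp(try_grasp(dx)):
            return dx               # offset at which the grasp succeeded
    return None                     # escalate: no offset worked

# Example: grasp fails at the nominal position, succeeds 1 cm over.
traces = {0.0: [2.0, 18.0, 12.0],   # gripper strikes the component
          0.01: [1.5, 2.2, 1.8]}    # nominal friction only
print(acquire_with_local_search(lambda dx: traces[dx], [0.0, 0.01]))
# -> 0.01
```

This mirrors the behavior described below for Fig. 5: nominal grasps show only small friction forces, while a missed grasp produces a large force spike that flags the failure.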

Alignment and precision placement use HIPS to guarantee that the manipulator places the component correctly relative to the visually observed structure. Components are identified by sets of fiducials that provide position and orientation (Fig. 3). Force sensing is used to maintain the formation during transport; one team member adjusts velocity to oppose non-nominal forces. The mapping from force and torque to desired velocity was experimentally determined. The corrective process is shown in Fig. 4.

Fig. 4. Relationship of formation and force-torque (Tz, Fy) to velocity corrections (∆V). Left: Torque direction and magnitude indicate that the follower should slow down (top) or speed up (bottom). Right: Force indicates that the follower should speed up (top) or slow down (bottom).

Force sensing also verifies component acquisition: if resistive forces are experienced during acquisition, a misalignment is detected, which is corrected using a local search. A comparison of forces for a correct and an incorrect component grasp is shown in Fig. 5.

Fig. 5. In a nominal grasp (dotted) the robot sees small friction forces. In a missed grasp (solid), the gripper hits the component and the robot experiences large forces and detects failure.

Fig. 6. Top: Rovers align in grasping position. Second: Team lifts the component and turns around. Third: Rovers align at the structure for placement. Bottom: Rovers place the component.

Snapshots of the construction process (with beams and panels) are shown in Fig. 6 and Fig. 7. The ability of RCC to autonomously acquire, transport, and place a beam component has been quantitatively analyzed.

Fig. 7. Left: RCC carries a panel and aligns with the structure. Right: RCC carries a beam outdoors.
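The experimentally determined force-torque-to-velocity mapping of Fig. 4 can be approximated, for illustration, by a clamped proportional law. The gains, sign conventions, and clamp are assumptions, not the calibrated RCC mapping:

```python
# Sketch of the follower's velocity correction during cooperative
# transport: longitudinal force Fy and yaw torque Tz on the carried
# component are mapped to a speed adjustment that relaxes them toward
# zero. Gains and sign conventions are illustrative assumptions.

K_FORCE  = 0.02   # (m/s) per newton        -- assumed gain
K_TORQUE = 0.05   # (m/s) per newton-metre  -- assumed gain
DV_MAX   = 0.05   # clamp on any single correction (m/s)

def follower_speed(v_nominal, fy_n, tz_nm):
    """Speed up when being pulled ahead, slow down when pushed back."""
    dv = K_FORCE * fy_n + K_TORQUE * tz_nm
    dv = max(-DV_MAX, min(DV_MAX, dv))      # keep corrections gentle
    return v_nominal + dv

# Partner is pulling (positive Fy): the follower speeds up slightly.
print(round(follower_speed(0.10, fy_n=1.0, tz_nm=0.0), 3))   # -> 0.12
# Partner is pushing back: the follower slows down.
print(round(follower_speed(0.10, fy_n=-1.0, tz_nm=0.0), 3))  # -> 0.08
```

Driving the sensed force and torque toward zero is exactly the "neither pulling nor pushing" condition that Section 2.3 identifies as the nominal formation.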
Preliminary results have demonstrated the ability to align with the structure and place a panel component with high reliability (8 of 10 preliminary runs). Additional results have illustrated the improvements obtained by using force feedback for component acquisition: a robot was able to successfully identify and correct an improper grasp 5 of 5 times. This is illustrated in Fig. 7 (left), along with outdoor cooperative transport (right).

Near-term goals for RCC include building a next generation of robots geared toward construction tasks (higher payloads) and demonstrating these same capabilities with higher reliability. Longer-term goals include adding more sophisticated use of force sensing for component placement, and sequential component placement for building a structure.

4. COOPERATIVE STRUCTURES

The DARE project is aimed at investigating how cooperative components can aid in the assembly process and how structures composed of smart components can assist in inspection and maintenance. For surface structures, particularly those that serve as habitats for astronauts, the ability to quickly identify failures and recover is essential to keeping humans safe. While mobile assembly/repair robots can aid in inspection and repair, the structure itself can vastly improve the efficiency of identifying potential failures by performing self-monitoring. This can bring the focus of attention of repair crews to critical locations, and can provide information on systems that may not be easily observable by outside agents. The processing and sensing built into structural components may also aid in the assembly process itself by providing feedback on proper component connectivity, long-range beacons to component landing and construction sites, and pose information during transport. Building components that provide this type of information, as well as efficient data flow throughout large-scale structures (highly distributed systems with a multitude of individual elements), is the goal of DARE.

The DARE line of research will create electronic elements that facilitate data collection and command delivery throughout the system, allowing a spectrum of control: from a strict and transparent hierarchy (such as might be found within a robot), to a semi-autonomous hierarchy in which raw data is filtered by, and some command is ceded to, lower levels of the hierarchy (such as a human commanding a team of robots assembling smart payloads into smart structures), to an absolutely "democratic" system of electronic elements (as might be found in a sensor net). Moreover, these electronic elements will be able to be reassigned to take on different roles within different systems with different organizational principles.

As a relevant example of this concept (and the basis of a future demonstration), we have taken the case of the construction of a "smart" structure. In this scenario, an awkward structural element is cooperatively carried by two robots and assembled onto an existing structure. Both the structural element and the existing structure have DARE units embedded in them. While the element is being transported by the robots, it communicates with them, conveying useful state information. Once assembled, it communicates and shares resources with the rest of the structure.

Near-term goals of the DARE project (2005) include building a cooperative component with a 3-axis tilt sensor and communication. This smart component will provide, via communication, pose information to the team of robots transporting and installing it into a structure (RCC), in order to improve performance in terms of efficiency and reliability. Specifically, the rover team can use the pose information from the component to traverse more difficult terrain while remaining in formation, as well as to indicate that the component is level during installation. Lastly, establishing connectivity with the partial structure can provide feedback indicating a successful installation. Longer-term goals include monitoring communication connectivity and other health-state indicators to identify component failure.

Currently, design efforts are in progress to develop a reusable computation/communication system that is small enough not to significantly alter the size and mass characteristics of structure components, powerful enough to provide the necessary processing, and simple enough to be connected to adjacent components reliably by autonomous robot teams. An illustration of this process is shown in Fig. 8.

Fig. 8. Schematic of the data flow enabled by DARE in a construction scenario.

A prototype computing and 3-axis sensor chip has been designed and built toward this effort.
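As an illustration of the kind of feedback such a tilt-sensing component could provide during installation, the sketch below derives roll and pitch from a 3-axis accelerometer reading and reports whether the component is level. The tolerance, message form, and sensor interpretation are assumptions, not the DARE design:

```python
# Sketch of smart-component tilt feedback: the component reads its
# 3-axis tilt (here modeled as gravity in an accelerometer) and tells
# the transport team whether it is level enough to install.
# The tolerance and conventions are illustrative assumptions.
import math

LEVEL_TOLERANCE_DEG = 2.0

def component_is_level(accel_xyz):
    """Derive roll/pitch from a 3-axis reading; True if within tolerance."""
    ax, ay, az = accel_xyz
    roll  = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return abs(roll) < LEVEL_TOLERANCE_DEG and abs(pitch) < LEVEL_TOLERANCE_DEG

print(component_is_level((0.0, 0.0, 9.81)))   # level -> True
print(component_is_level((1.7, 0.0, 9.66)))   # ~10 deg pitch -> False
```

In the DARE scenario described above, a reading like this would be communicated to the rover team so they can correct formation on rough terrain or confirm that the component is level before mating it to the structure.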
5. ORBITAL SYSTEMS

In-Space Assembly (ISA) is a set of projects directed toward developing robotic systems for assembly and maintenance of orbital structures (Fig. 9).

Fig. 9. JPL concept of on-orbit construction by robot teams.

The assembly and maintenance requirements of permanent installations in space demand robots that provide a high level of operational flexibility relative to mass and volume. Such demands point to robots that are dexterous, have significant processing and sensing capabilities, and can be easily reconfigured (both physically and algorithmically). Evolving from Lemur I, Lemur IIa (Fig. 10, left) is an extremely capable system that both explores mechanical design elements and provides an infrastructure for the development of algorithms (such as force control for mobility and manipulation, and adaptive visual feedback) [7]. The physical layout of the system consists of six 4-degree-of-freedom limbs arranged axisymmetrically about a hexagonal body platform. These limbs incorporate a "quick-connect" end-effector feature below the distal joint that allows the rapid change-out of any of its tools. The other major subsystem is a stereo camera set that travels along a ring track, allowing omnidirectional vision.

Fig. 10. Left: Lemur IIa robot using two tools. Right: ISA concept with large spiders and small lemurs.

To date, the basic Lemur IIa platform has been designed and built. The idea that Lemur was to have limbs, not arms or legs, dictated the arrangement of the degrees of freedom and the effective range of motion of each. This concept meant that the workspace and dexterity of the limb needed to be the union of those needed for walking and manipulation. Therefore, a 4 degree-of-freedom (DOF) limb was designed, consisting of a kinematically spherical shoulder and a 1-DOF elbow. The simplifying assumption was made that any initial tool or gripper would be axisymmetric or have passive DOF designed in. Lemur has demonstrated multiple walking gaits, walking on a mesh using contact sensing, and tool placement with very high precision using HIPS. The current Lemur IIa platform represents the jumping-off point toward two more advanced robotic platforms as part of the ISA tasks.

The final design element of the Lemur limbs is the inclusion of a tool quick-release and the tools that mate to it. The release itself is a socket with a spring-locked ball detent, similar to others found throughout industry. To date, four tools have been designed to mate with the quick release (Fig. 11). Simplest is the default walking/poking tool. For inspection purposes, an ultra-bright LED task-light tool can act alone or in conjunction with a "palm-cam" tool. Finally, a rotary tool with integral reaction torque sensing and its own bit chuck can be used for torquing fasteners or other rotary operations, depending on the bit used. In keeping with the limb concept, all of these tools can be used as feet as well as for manipulation operations.

Fig. 11. LEMUR tool set, left to right: rotary driver, camera, flash light, foot/pointer.

Current efforts are in progress with Johnson Space Center to develop and test prototype orbital assembly and maintenance systems based on the Lemur II concept (Fig. 10, right). In phase I (2005), Lemur II will cooperate with a new, larger limbed prototype (Spider) to simulate installation of an Orbital Replacement Unit (ORU); Spider will carry and place the ORU, and Lemur II will connect it using HIPS, a driver tool, and a threaded fastener. Phase II (2006-2008) goals include designing and building a flight-relevant Lemur (Lemur III) to perform a more complex cooperative transport/assembly task in simulated micro-gravity.

Additional efforts, in conjunction with Northrop Grumman, are currently aimed at designing and building another next-generation Lemur (AWIMIR) to perform inspection of a simulated orbital structure.
6. SUMMARY AND CONCLUSIONS

JPL is currently developing several core technologies for autonomous robotic construction and assembly capabilities, though many of these technologies are broadly applicable to other robotic tasks. These core technologies are aimed at improving the reliability and autonomy of such systems. Several projects have demonstrated robust performance in preliminary tests for tasks such as hand-eye coordination and robotic assembly. Together, these core technologies will provide the foundation for performing reliable surface and orbital construction and maintenance.

7. ACKNOWLEDGEMENTS

This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

The authors wish to thank Neville Marzwell and Paul Schenker for supporting this work. We also thank the Space Solar Power Development program and the Exploration Systems Mission Directorate. Finally, we thank our project partners, Johnson Space Center and Northrop Grumman.

8. REFERENCES

1. Brooks R., "A Robust Layered Control System for a Mobile Robot." IEEE Journal of Robotics and Automation, 2(1), 1986.
2. Brookshire J., Singh S., and Simmons R., "Preliminary Results in Sliding Autonomy for Coordinated Teams." Proceedings of the 2004 Spring Symposium Series, March 2004.
3. Huntsberger T., Pirjanian P., Trebi-Ollennu A., Nayar H.D., Aghazarian H., Ganino A., Garrett M., Joshi S.S., and Schenker P.S., "CAMPOUT: A Control Architecture for Tightly Coupled Coordination of Multi-Robot Systems for Planetary Surface Exploration." IEEE Transactions on Systems, Man & Cybernetics, Part A: Systems and Humans, Collective Intelligence, 33(5): 550-559, 2003.
4. Kennedy B., Agazarian H., Cheng Y., Garrett M., Hickey G., Huntsberger T., Magnon L., Mahoney C., Meyer A., and Knight J., "LEMUR: Legged Excursion Mechanical Utility Rover." Autonomous Robots, 11:201-205, Kluwer Press, 2001.
5. Mukaiyama T., Kyunghwan K., and Hori Y., "Implementation of cooperative manipulation using decentralized robust position/force control." Proceedings of the 4th International Workshop on Advanced Motion Control, 2:529-534, 1996.
6. NASA Office of Exploration Systems, "Human and Robotic Technology (H&RT) Formulation Plan." Version 3.0, May 14, 2004.
7. Parker L.E., "ALLIANCE: an architecture for fault tolerant, cooperative control of heterogeneous mobile robots." Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2:776-783, 1994.
8. Rus D., Donald B., and Jennings J., "Moving furniture with teams of autonomous robots." Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 1:235-242, 1995.
9. Simmons R., Singh S., Hershberger D., Ramos J., and Smith T., "First Results in the Coordination of Heterogeneous Robots for Large-Scale Assembly." Proceedings of the International Symposium on Experimental Robotics, 2000.
10. Squyres S. et al., "Athena Investigation Overview." Journal of Geophysical Research, November 2003.
11. Stroupe A., Huntsberger T., Okon A., and Aghazarian H., "Precision Manipulation with Cooperative Robots." Multi-Robot Systems: From Swarms to Intelligent Automata, Volume III, Schultz et al. (Eds.), 2005.
12. Stroupe A., Huntsberger T., Okon A., Aghazarian H., and Robinson M., "Behavior-Based Multi-Robot Collaboration for Autonomous Construction Tasks." Proceedings of the International Conference on Intelligent Robots and Systems, 2005.
13. Stroupe A., Okon A., Robinson M., Huntsberger T., Aghazarian H., and Baumgartner E., "Sustainable Cooperative Robotic Technologies for Human and Robotic Outpost Infrastructure Construction and Maintenance." Submitted to Autonomous Robots, 2005.
14. Trebi-Ollennu A., Das H., Aghazarian H., Ganino A., Pirjanian P., Huntsberger T., and Schenker P., "Mars Rover Pair Cooperatively Transporting a Long Payload." Proceedings of the IEEE International Conference on Robotics and Automation, 2002.
15. Lindemann R. and Voorhees C., "Mars Exploration Rover Mobility Assembly Design, Test and Performance." To appear in Proceedings of the 2005 IEEE Conference on Systems, Man, and Cybernetics, 2005.
16. Cheng Y., Maimone M., and Matthies L., "Visual Odometry on the Mars Exploration Rovers." To appear in Proceedings of the 2005 IEEE Conference on Systems, Man, and Cybernetics, 2005.
17. Reeves G., "An Overview of the Mars Exploration Rovers Flight Software." To appear in Proceedings of the 2005 IEEE Conference on Systems, Man, and Cybernetics, 2005.
