A Position Based Visual Tracking System For A 7 DOF Robot Manipulator Using A Kinect Camera
Abstract— This paper presents a position based visual tracking system for a redundant manipulator using a Kinect camera. The Kinect camera provides 3-D information about a target object, therefore the control algorithm of position-based visual servoing (PBVS) can be simplified, as there is no need to estimate the 3-D position of a feature point from the extracted image and the camera model. A Kalman filter is used to predict the target position and velocity. The control method is applied to a calibrated robotic system with an eye-to-hand configuration. A stability analysis has been derived, and real-time experiments have been carried out using a 7 DOF PowerCube manipulator from Amtec Robotics. Experimental results for both static and moving targets are presented to demonstrate and verify the performance of the proposed position based visual tracking system.

Indrazno Siradjuddin, Laxmidhar Behera, T.M. McGinnity and Sonya Coleman are with the Intelligent Systems Research Center (ISRC), University of Ulster, Londonderry, Northern Ireland, United Kingdom (phone: +44 (0)28 7137 5616; email: siradjuddin-i@email.ulster.ac.uk, {l.behera, tm.mcginnity, sa.coleman}@ulster.ac.uk).

I. INTRODUCTION

The interaction of a robot manipulator end-effector with parts or other objects in the work environment is a prominent subject in most robot application research. Conventionally, high accuracy in positioning a robot end-effector in a fixed world frame can be achieved with predefined robot operations in a structured workspace. Uncertainties in either the robot end-effector or the target object position can lead to a position mismatch which may cause the operation to fail. Among the various kinds of sensors available for robotic systems, the visual sensor is one of the most promising devices, since salient information can be extracted from images. A vision-guided robot system is also known as a visual servoing system; comprehensive discussions of basic visual servoing systems are presented in [1], [2], and advanced approaches are discussed in [3].

Visual servoing control methods are mainly categorised by the way the task error is represented. The first method is called image based visual servoing (IBVS), where the error signal is defined as an image feature error in image space. IBVS maps the error vector in the image space to the joint space of a robot manipulator. IBVS schemes are also called feature-based schemes and are known as 2-D visual servoing. One of the problems with IBVS schemes is that it is difficult to estimate depth. A calibrated stereo camera can be used to overcome this problem; however, the depth can also be estimated using a monovision system, as presented in [4], [5]. Examples of IBVS applications are discussed in [6]–[8]. The second method of vision-based robot control is called position based visual servoing. A PBVS system defines the error signal in Cartesian space or 3-D space and allows the direct and natural specification of the desired relative trajectories in 3-D space, which is often used for robotic task specification. Also, by separating the pose estimation problem from the control design problem, the control designer can take advantage of well-established robot 3-D control algorithms. Reviews and applications of PBVS have been presented in [9], [10].

In the majority of visual servoing control designs, the 3-D parameters of the target object are estimated from image data extracted from a 2-D camera system. Recently, Microsoft introduced a 3-D camera system, the Microsoft Kinect camera. The features provided by the Kinect camera system have been transforming not only computer gaming [11] but also research in robotics [12], [13]. In a visual servoing system, the use of direct 3-D information from the Kinect camera simplifies the design effort of the controller. Therefore, in this paper, we introduce the development of a vision guided robot manipulator for tracking a moving object using a Kinect camera, incorporating a Kalman filter for target object state estimation.

The organisation of this paper is as follows: the kinematic model of a 7 DOF PowerCube manipulator is introduced in Section II. The state estimation of a moving object is presented in Section III. The development of the position based visual tracking system is discussed in Section IV. The stability, in the sense of Lyapunov, of the proposed method is explained in Section V. The experimental setup, using a 7 DOF PowerCube robot manipulator [14] with an eye-to-hand configuration, is explained in Section VI, followed by the experimental results in Section VII. Finally, the work is summarised in Section VIII.

II. KINEMATIC MODEL OF A 7 DOF POWERCUBE MANIPULATOR

The seven DOF PowerCube manipulator considered in this paper is constructed as an open chain in which every revolute joint connects successive links (Figure 1). The role of the model of the mechanical structure is to place the end-effector at a given location (position and orientation) with a desired velocity and acceleration. Given the joint angle of each link, the forward kinematics of the manipulator exactly determines the position and orientation of the robot end-effector in Cartesian space, taking the base link as the reference. The kinematic model is concerned with the relationship between the individual joints of the manipulator and the position and orientation of the end-effector. The Denavit-Hartenberg (D-H) parameters for this manipulator are derived using the convention given in [15].
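As an illustration of how such a forward kinematic model is evaluated, the sketch below chains the standard D-H link transforms for a 7 DOF arm. The D-H parameter values of the PowerCube are not listed in this excerpt, so the `dh_params` table used here is a placeholder, not the manipulator's actual parameters.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one link using standard D-H parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """End-effector pose in the base frame for the given joint angles.

    dh_params is a list of (theta_offset, d, a, alpha) tuples, one per joint.
    """
    T = np.eye(4)
    for q, (theta0, d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta0 + q, d, a, alpha)
    return T  # 4x4: rotation in T[:3, :3], position in T[:3, 3]

# Placeholder D-H table (NOT the real PowerCube parameters).
dh_params = [(0.0, 0.3, 0.0, np.pi / 2)] * 7
print(forward_kinematics(np.zeros(7), dh_params)[:3, 3])
```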
Fig. 4. The extrinsic camera calibration.

A. A static target

In this experiment, the initial positions of the robot end-effector and the target object were chosen arbitrarily. The task was to move the robot end-effector from its initial position to the target position, denoted as ② and ③ respectively, as shown in Fig. 5, which also shows the movement of the robot links. Considering only the trajectory of the robot end-effector, an approximately straight-line motion was achieved. Exponential decay of the error norm ‖e‖ was obtained, thus verifying the control design of Eq. (15), as shown in Fig. 6.
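Eq. (15) itself is not reproduced in this excerpt; the exponential decay of ‖e‖ reported above is characteristic of a resolved-rate PBVS law built on a pseudo-inverse of the manipulator Jacobian, so the sketch below assumes a law of the form q̇ = −λ J⁺(q) e as an illustration, not as the paper's exact controller.

```python
import numpy as np

def pbvs_joint_velocities(jacobian, ee_position, target_position, gain=1.0):
    """One resolved-rate PBVS step (assumed form, not necessarily Eq. (15)).

    Integrating q_dot = -gain * pinv(J) * e over time drives the Cartesian
    error e = ee_position - target_position to zero exponentially.
    """
    e = ee_position - target_position    # 3-D position error in the base frame
    j_pinv = np.linalg.pinv(jacobian)    # pseudo-inverse handles redundancy (7 joints, 3-D task)
    return -gain * j_pinv @ e            # joint velocity command

# Example with a placeholder 3x7 Jacobian (random values, illustration only).
J = np.random.randn(3, 7)
q_dot = pbvs_joint_velocities(J, np.array([0.4, 0.0, 0.6]),
                              np.array([0.2, 0.1, 0.3]), gain=0.5)
print(q_dot)
```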
Fig. 5. Robot manipulator links movement for positioning the robot end-effector to the static target object position. ① is the robot manipulator, ② is the robot end-effector initial position and ③ is the robot end-effector final position.

Fig. 6. The normalised error ‖e‖ for positioning the robot end-effector to the static target object position.

[Fig. 7: joint angle trajectories θ1–θ7 (rad) versus time (s)]

Fig. 7 shows the corresponding joint angles. The joint angles changed along the trajectory and converged to the configuration in which the robot end-effector reached the target object position.
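Before the error e can be formed, the target position measured in the Kinect camera frame has to be expressed in the robot base frame using the transformation obtained from the extrinsic calibration (Fig. 4; see also the conclusion). A minimal sketch of that change of frame is given below; the rotation and translation values are placeholders, since the calibrated transform is not given in this excerpt.

```python
import numpy as np

def kinect_point_to_base(p_cam, T_base_cam):
    """Express a 3-D point measured in the Kinect frame in the robot base frame.

    T_base_cam is the 4x4 homogeneous transform from camera frame to base
    frame, obtained from the extrinsic camera calibration.
    """
    p_h = np.append(p_cam, 1.0)        # homogeneous coordinates
    return (T_base_cam @ p_h)[:3]

# Placeholder extrinsic calibration result (NOT the calibrated values).
T_base_cam = np.eye(4)
T_base_cam[:3, 3] = [1.0, 0.0, 0.5]    # assumed camera position in the base frame
print(kinect_point_to_base(np.array([0.2, -0.1, 1.5]), T_base_cam))
```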
B. A moving target

The experiment described in this section demonstrates the performance of the developed control algorithm for tracking a moving target. A target object was placed on the top platform of a Pioneer mobile robot. The Pioneer mobile robot was used to hold the target object only so that the experiment could be repeated while the control parameter was adjusted manually; the same performance could be achieved for a free-form motion of the target object, for example if the target object were moved by hand. The Pioneer mobile robot was moved forwards and backwards along a straight line path and was controlled independently using a simple proportional velocity controller. The tracking system on the robot manipulator was started manually, and the robot end-effector's initial position was chosen to be approximately at the middle of the mobile robot's straight line path, with a 50 cm offset along the z-axis of the robot base frame.

Fig. 8. Robot manipulator links movement for tracking a moving target. ① is the robot manipulator, ② represents the robot manipulator links, ③ is the target object trajectory and ④ is the stop point.

Fig. 8 shows the robot manipulator links movement and the target object trajectory. The movement sequence of the Pioneer mobile robot can be described as ④-③-④. Since the tracking controller was started when the target object was approximately at the middle of its path, the robot end-effector moved toward the moving target and decreased the position error quickly, particularly in the z-axis direction. The robot end-effector then tracked the moving target while maintaining a small tracking error. In the tracking stage, the joint angle configuration changed dynamically but converged in approximately 20 seconds once the target object stopped.
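The abstract and conclusion state that a Kalman filter predicts the target position and velocity from the Kinect measurements; the filter equations of Section III are not part of this excerpt, so the sketch below is a generic constant-velocity Kalman filter in 3-D with assumed noise parameters, rather than the paper's exact formulation.

```python
import numpy as np

class ConstantVelocityKF:
    """Constant-velocity Kalman filter for a 3-D target (assumed model).

    State x = [px, py, pz, vx, vy, vz]; the Kinect supplies position-only
    measurements z = [px, py, pz] in the robot base frame.
    """

    def __init__(self, dt, q_var=1e-3, r_var=1e-2):
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)            # position integrates velocity
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = q_var * np.eye(6)                 # process noise (assumption)
        self.R = r_var * np.eye(3)                 # measurement noise (assumption)
        self.x = np.zeros(6)
        self.P = np.eye(6)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3], self.x[3:]              # predicted position, velocity

    def update(self, z):
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

# Example: one update/predict cycle at 30 Hz with a fake Kinect measurement.
kf = ConstantVelocityKF(dt=1.0 / 30.0)
kf.update(np.array([0.5, 0.1, 0.8]))
print(kf.predict())
```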
Fig. 9. The normalised error ‖e‖ for tracking a moving target.

[Fig. 10: joint angle trajectories θ (rad) versus time for tracking a moving target]

VIII. CONCLUSION

This paper has presented a position based visual tracking system for a 7 DOF redundant manipulator using a Kinect camera. A Kalman filter was used to estimate the state of a moving object in 3-D space. The 3-D information of the target object was obtained directly from the Kinect sensor data and then transformed to the robot manipulator base coordinate system. The transformation matrix from the Kinect camera frame to the robot manipulator base frame was obtained through an extrinsic camera calibration process. Using the 3-D information from the Kinect camera, the effort of controller development can be simplified by taking advantage of well-established robot manipulator Cartesian control algorithms. The experimental results show that the proposed method converges for both positioning and tracking tasks.

ACKNOWLEDGEMENT