
WCCI 2012 IEEE World Congress on Computational Intelligence
June 10-15, 2012, Brisbane, Australia (IJCNN)

A Position Based Visual Tracking System for a 7 DOF Robot Manipulator Using a Kinect Camera

Indrazno Siradjuddin, Laxmidhar Behera, T. M. McGinnity and Sonya Coleman
Intelligent Systems Research Center (ISRC), University of Ulster, Londonderry, Northern Ireland, United Kingdom
(phone: +44 (0)28 7137 5616; email: siradjuddin-i@email.ulster.ac.uk, {l.behera, tm.mcginnity, sa.coleman}@ulster.ac.uk)

Abstract— This paper presents a position based visual tracking system for a redundant manipulator using a Kinect camera. The Kinect camera provides 3-D information about a target object, so the control algorithm of the position-based visual servoing (PBVS) can be simplified: there is no requirement to estimate a 3-D feature point position from the extracted image and the camera model. A Kalman filter is used to predict the target position and velocity. The control method is applied to a calibrated robotic system in an eye-to-hand configuration. A stability analysis has been derived, and real-time experiments have been carried out using a 7 DOF PowerCube manipulator from Amtec Robotic. Experimental results for both static and moving targets are presented to demonstrate and verify the performance of the proposed position based visual tracking system.

I. INTRODUCTION

The interaction of a robot manipulator end-effector with parts or other objects in the work environment is a prominent subject in most robot application research. Conventionally, high accuracy in positioning a robot end-effector in a fixed world frame can be achieved with predefined robot operations in a structured workspace. Uncertainty in either the robot end-effector or the target object position can lead to a position mismatch which may cause the operation to fail. Among the various kinds of sensors available for robotic systems, the visual sensor is one of the most promising devices, since salient information can be extracted from images. A vision-guided robot system is also known as a visual servoing system; comprehensive discussions of basic visual servoing systems are presented in [1], [2], and advanced approaches are discussed in [3].

Visual servoing control methods are mainly categorised by the way the task error is defined. The first method is called image based visual servoing (IBVS), where the error signal is defined as an image feature error in image space. IBVS maps the error vector in the image space to the joint space of a robot manipulator. IBVS schemes are also called feature-based schemes and are known as 2-D visual servoing. One of the problems with IBVS schemes is that depth is difficult to estimate. A calibrated stereo camera can be used to overcome this problem, but the depth can also be estimated using a monovision system, as presented in [4], [5]. Examples of IBVS applications are discussed in [6]-[8]. The second method of vision-based robot control is called position based visual servoing. A PBVS system defines the error signal in Cartesian (3-D) space and allows the direct and natural specification of the desired relative trajectories in 3-D space, which is often used for robotic task specification. Also, by separating the pose estimation problem from the control design problem, the control designer can take advantage of well-established robot 3-D control algorithms. Reviews and applications of PBVS have been presented in [9], [10].

In the majority of visual servoing control designs, the 3-D parameters of the target object are estimated from image data obtained with a 2-D camera system. Recently, Microsoft introduced a 3-D camera system, the Microsoft Kinect. The features provided by the Kinect camera system have been transforming not only computer gaming [11] but also research in robotics [12], [13]. In a visual servoing system, the use of direct 3-D information from the Kinect camera simplifies the design of the controller. Therefore, in this paper, we introduce the development of a vision guided robot manipulator for tracking a moving object using a Kinect camera, incorporating a Kalman filter for target object state estimation.

The organisation of this paper is as follows: the kinematic model of the 7 DOF PowerCube manipulator is introduced in Section II. The state estimation of a moving object is presented in Section III. The development of the position based visual tracking system is discussed in Section IV. The stability, in the sense of Lyapunov, of the proposed method is analysed in Section V. The experimental setup, using a 7 DOF robot manipulator from PowerCube [14] in an eye-to-hand configuration, is explained in Section VI, followed by the experimental results in Section VII. Finally, the work is summarised in Section VIII.

II. KINEMATIC MODEL OF A 7 DOF POWERCUBE MANIPULATOR

The seven DOF PowerCube manipulator considered in this paper is constructed as an open chain in which every revolute joint connects successive links (Fig. 1). The role of the model of the mechanical structure is to place the end-effector at a given location (position and orientation) with a desired velocity and acceleration. Given the joint angles of each link, the forward kinematics of the manipulator exactly determines the position and orientation of the robot end-effector in Cartesian space, taking the base link as the reference. The kinematic model is concerned with the relationship between the individual joints of the manipulator and the position and orientation of the end-effector. The Denavit-Hartenberg (D-H) parameters for this manipulator are derived using the convention given in [15].


[Fig. 1. Frame assignment for computing the D-H parameters.]

TABLE I
D-H PARAMETERS OF THE POWERCUBE

link-j   αj     aj   dj   θj
1        -90°   0    d1   θ1
2         90°   0    0    θ2
3        -90°   0    d3   θ3
4         90°   0    0    θ4
5        -90°   0    d5   θ5
6         90°   0    0    θ6
7          0°   0    d7   θ7

The parameters are given in Table I. The robot link dimensions are as follows: d1 = 0.318 m, d3 = 0.3375 m, d5 = 0.3085 m and d7 = 0.2656 m. The robot end-effector position obtained using the D-H parameters from Table I is expressed as:

x = (−(((c1c2c3 − s1s3)c4 − c1s2s4)c5 + (−c1c2s3 − s1c3)s5)s6 + (−(c1c2c3 − s1s3)s4 − c1s2c4)c6)d7 + (−(c1c2c3 − s1s3)s4 − c1s2c4)d5 − c1s2d3   (1)

y = (−(((s1c2c3 + c1s3)c4 − s1s2s4)c5 + (−s1c2s3 + c1c3)s5)s6 + (−(s1c2c3 + c1s3)s4 − s1s2c4)c6)d7 + (−(s1c2c3 + c1s3)s4 − s1s2c4)d5 + s1s2d3   (2)

z = (−((s2c3c4 + c2s4)c5 − s2s3s5)s6 + (−s2c3s4 + c2c4)c6)d7 + (−s2c3s4 + c2c4)d5 + c2d3 + d1   (3)

where sj and cj denote sin θj and cos θj respectively. Similar expressions for the orientation of the robot end-effector, the roll φ, the pitch ϑ and the yaw ψ, can be derived using the D-H parameters. By defining a task-space vector for the robot end-effector, x = [x, y, z, φ, ϑ, ψ]ᵀ, and the joint angle vector θ = [θ1, θ2, ..., θ7]ᵀ, the forward kinematics can be represented as

x = f(θ)   (4)

where the forward kinematic map f is highly nonlinear. The velocity relationships are then determined by differentiating both sides of equation (4):

dx/dt = J dθ/dt   (5)

where the forward kinematic Jacobian J is expressed as

J = ∂f/∂θ   (6)

Since the Jacobians associated with the linear and angular velocities of the robot end-effector can be computed separately, the Jacobian can be further expressed as

J = [Jv Jω]ᵀ   (7)

where Jv is the Jacobian associated with the linear velocity of the robot end-effector and Jω is the Jacobian associated with the angular velocity of the manipulator; both are 3 × 7 matrices. Given the linear and angular velocities of the robot end-effector, the joint velocities can be deduced from equation (5) as follows:

θ̇ = J⁺ẋ   (8)

where J⁺ is the pseudo-inverse of the Jacobian J. For the case of a redundant manipulator, the pseudo-inverse of the robot Jacobian is expressed as

J⁺ = Jᵀ(JJᵀ)⁻¹   (9)
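For illustration, a minimal NumPy sketch of the kinematic model (4)-(9), assuming the standard D-H convention of [15] and the parameters of Table I; the helper names are ours, and a central-difference Jacobian stands in for the analytic ∂f/∂θ. Since the end-effector orientation is held fixed later in this work, only the 3 × 7 linear-velocity block Jv is formed:

    import numpy as np

    # D-H parameters from Table I: (alpha_j [rad], a_j, d_j); theta_j is the variable.
    D1, D3, D5, D7 = 0.318, 0.3375, 0.3085, 0.2656
    DH = [(-np.pi/2, 0, D1), (np.pi/2, 0, 0), (-np.pi/2, 0, D3), (np.pi/2, 0, 0),
          (-np.pi/2, 0, D5), (np.pi/2, 0, 0), (0.0, 0, D7)]

    def dh_transform(theta, alpha, a, d):
        """Homogeneous transform A_j for one link (standard D-H convention, [15])."""
        ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
        return np.array([[ct, -st*ca,  st*sa, a*ct],
                         [st,  ct*ca, -ct*sa, a*st],
                         [0.,  sa,     ca,    d   ],
                         [0.,  0.,     0.,    1.  ]])

    def forward_position(theta):
        """End-effector position x = f(theta), Eq. (4), in the base frame."""
        T = np.eye(4)
        for th, (alpha, a, d) in zip(theta, DH):
            T = T @ dh_transform(th, alpha, a, d)
        return T[:3, 3]

    def jacobian_v(theta, eps=1e-6):
        """Linear-velocity Jacobian Jv (3x7), Eq. (6), by central differences."""
        J = np.zeros((3, 7))
        for j in range(7):
            dq = np.zeros(7); dq[j] = eps
            J[:, j] = (forward_position(theta + dq) - forward_position(theta - dq)) / (2*eps)
        return J

    theta = np.array([0.1, 0.4, -0.2, 0.8, 0.1, 0.6, 0.0])  # a generic, non-singular pose
    Jv = jacobian_v(theta)
    J_pinv = Jv.T @ np.linalg.inv(Jv @ Jv.T)   # Eq. (9), assuming Jv has full row rank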
III. TARGET TRACKING USING A KALMAN FILTER

The objective of the proposed method is to follow a moving object. Estimating and predicting the position and velocity of a moving object in real time is an important problem for vision-based robot control systems. The Kalman filter is the most common tool for object tracking systems and is based on the concept of defining a state-space model of a dynamic system. The state vector of the dynamic model is defined as the object's 3-D position and velocity. The dynamic model should therefore accurately describe the relative motion of the target object with respect to the robot manipulator base frame, which is taken as the reference frame. Let

Xo = [xo, ẋo]ᵀ   (10)
be the system state vector, where xo is the object position coordinate vector and ẋo is the object velocity vector in the 3-D coordinate system. The acceleration of the moving object is assumed to be constant over the small sampling interval of each iteration. The discretised linear dynamic model is described as follows:

Xo(k+1) = F Xo(k) + w(k)   (11)

z(k) = H Xo(k) + v(k)   (12)

where F is the state transition matrix and H = [1 0] is the observation model, applied per coordinate axis. The process noise and the measurement noise are denoted w and v, respectively. From the Newtonian motion model, the state transition matrix is defined as

F = [ 1  Δt ]
    [ 0   1 ]   (13)

where Δt is the sampling time.
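For illustration, a minimal per-axis NumPy sketch of the constant-velocity filter (11)-(13); the sampling time and the noise covariances Q and R are assumed values, since the paper does not report them:

    import numpy as np

    dt = 1.0 / 30.0                         # sampling time (Kinect frame rate assumed)
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition, Eq. (13), per axis
    H = np.array([[1.0, 0.0]])              # position-only observation, Eq. (12)
    Q = 1e-3 * np.eye(2)                    # process noise covariance (assumed)
    R = np.array([[1e-2]])                  # measurement noise covariance (assumed)

    def kf_step(X, P, z):
        """One predict/update cycle for one axis; X = [position, velocity]^T."""
        # Predict: Eq. (11) without the (zero-mean) noise term.
        X = F @ X
        P = F @ P @ F.T + Q
        # Update with the measured target position z.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        X = X + K @ (z - H @ X)
        P = (np.eye(2) - K @ H) @ P
        return X, P

    # The full 3-D target state is three independent axes of this model.
    X, P = np.zeros((2, 1)), np.eye(2)
    X, P = kf_step(X, P, np.array([[0.42]]))  # e.g. a measured x-coordinate in metres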
IV. POSITION BASED VISUAL TRACKING SYSTEM

In position-based visual servoing the error signal e is defined in 3-D space. In this work the orientation of the end-effector is fixed, therefore the error signal e is defined as

e = x − xo   (14)

where x is the end-effector position and xo is the target object position, both represented in the manipulator base frame. The control system is designed to minimise the error exponentially, which can be expressed as

ė = −λe   (15)

where λ is a positive decay constant.

The robot end-effector velocity can be obtained from the derivative of Eq. (14):

ẋ = ė + ẋo   (16)

Substituting Eq. (16) into Eq. (8), the computed joint velocity vector becomes

θ̇ = J⁺(ė + ẋo)   (17)

Substituting Eq. (14) and Eq. (15) into Eq. (17), the control algorithm of the position based visual tracking system is computed as

θ̇ = J⁺(λ(xo − x) + ẋo)   (18)

where [xo ẋo]ᵀ is obtained from the predicted state output of the Kalman filter.
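For illustration, the control law (18) as a minimal sketch, reusing the hypothetical forward_position and jacobian_v helpers from Section II; λ = 20 is the value reported in Section VII:

    import numpy as np

    LAMBDA = 20.0   # exponential decay constant, as used in Section VII

    def pbvs_joint_velocity(theta, x_o, xdot_o):
        """Eq. (18): theta_dot = J+ (lambda (x_o - x) + xdot_o).

        theta  : current joint angles (7,)
        x_o    : Kalman-predicted target position in the base frame (3,)
        xdot_o : Kalman-predicted target velocity in the base frame (3,)
        """
        x = forward_position(theta)             # end-effector position, Eq. (4)
        J = jacobian_v(theta)                   # 3x7 linear-velocity Jacobian
        J_pinv = J.T @ np.linalg.inv(J @ J.T)   # Eq. (9)
        return J_pinv @ (LAMBDA * (x_o - x) + xdot_o)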
The architecture of the position based visual tracking system is illustrated in Fig. 2. The Kinect camera provides an image and 3-D information for every pixel. First, the target object is extracted from the background using a simple blob detection algorithm based on the target image colour; the corresponding 3-D position xko of the target blob centre is then obtained. The target object position vector is represented in the Kinect camera frame. Since the robot manipulator base frame is chosen as the reference frame, xko has to be represented with respect to the robot manipulator base frame. The transformation of the target object position from the Kinect camera frame to the reference frame is defined as

xbo = [Rkb]ᵀ(xko − dkb)   (19)

where Rkb and dkb are the rotation matrix and the translation of the origin of the robot manipulator base frame relative to the Kinect camera frame; both are obtained from the extrinsic camera calibration process.
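For illustration, a minimal OpenCV sketch of this pipeline, assuming a colour image registered with a per-pixel 3-D point cloud from the Kinect; the HSV bounds are illustrative values for a green ball and the helper name is ours:

    import cv2
    import numpy as np

    def target_in_base_frame(bgr, cloud, R_kb, d_kb, lo=(29, 86, 6), hi=(64, 255, 255)):
        """Colour-blob centre -> 3-D point in the Kinect frame -> base frame, Eq. (19).

        bgr        : colour image; cloud : HxWx3 array of 3-D points in the Kinect frame
        R_kb, d_kb : extrinsics of the base frame w.r.t. the Kinect frame (Section VI)
        lo/hi      : illustrative HSV bounds for the target colour
        """
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
        m = cv2.moments(mask)
        if m["m00"] == 0:
            return None                        # target not visible
        u, v = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        x_ko = cloud[v, u]                     # 3-D blob centre, Kinect frame
        return R_kb.T @ (x_ko - d_kb)          # Eq. (19)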
[Fig. 2. Position based visual tracking with the Kalman filter.]

V. STABILITY ANALYSIS OF THE CLOSED-LOOP SYSTEM

The stability of the position-based visual tracking system can be verified using the Lyapunov method. Let the Lyapunov candidate function V be

V = (1/2) eᵀe   (20)

where e is the error function described in Eq. (14). The derivative of V is

V̇ = eᵀė   (21)

Substituting the derivative of Eq. (14) into Eq. (21), the derivative of the Lyapunov candidate function is rewritten as

V̇ = eᵀẋ − eᵀẋo   (22)
  = eᵀJθ̇ − eᵀẋo   (23)

Substituting the PBVS control algorithm of Eq. (18) into Eq. (23) gives

V̇ = −λeᵀJJ⁺e + eᵀJJ⁺ẋo − eᵀẋo   (24)

If JJᵀ is invertible then JJ⁺ = I, and Eq. (24) simplifies to

V̇ = −λeᵀe   (25)

where V̇ < 0 if λ > 0, and V̇ = 0 when the system converges to the equilibrium point e = 0; therefore the system is asymptotically stable in the sense of Lyapunov. In addition, at the equilibrium point ė = 0, which confirms that the velocity of the robot end-effector ẋ equals the velocity of the target object ẋo.
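The key step, JJ⁺ = I for a full-row-rank J, can be spot-checked numerically (an illustrative check, not part of the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    J = rng.standard_normal((3, 7))             # full row rank with probability 1
    J_pinv = J.T @ np.linalg.inv(J @ J.T)       # Eq. (9)
    assert np.allclose(J @ J_pinv, np.eye(3))   # hence Eq. (24) collapses to Eq. (25)

    e, lam = rng.standard_normal(3), 20.0
    Vdot = -lam * e @ (J @ J_pinv) @ e
    assert np.isclose(Vdot, -lam * e @ e)       # Vdot = -lambda * ||e||^2 < 0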

VI. EXPERIMENTAL SETUP

The experimental setup is shown in Fig. 3. The experiments were carried out using a 7 DOF PowerCube robot manipulator with a Kinect camera placed at a static location in front of the robot manipulator. Obtaining and manipulating the data from the Kinect camera is computationally heavy, so the control process is distributed using ROS (Robot Operating System). The image processing and the Kalman filtering were executed on one computer, and only the Kalman filter output state vector message was broadcast to the robot manipulator controller over the computer network. This setup significantly reduces the computational processing time of the closed-loop robot manipulator controller.

[Fig. 3. The experimental setup of position based visual tracking using a static Kinect camera.]
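For illustration, a minimal rospy sketch of the vision-side node in this distributed setup, assuming ROS 1; the topic name, the message type and the run_vision_and_kalman_filter helper are our assumptions, as the paper does not specify them:

    import rospy
    from std_msgs.msg import Float64MultiArray

    # Vision-side node: publishes the Kalman-predicted target state over the network.
    rospy.init_node("kinect_target_tracker")
    pub = rospy.Publisher("/target/state", Float64MultiArray, queue_size=1)
    rate = rospy.Rate(30)                      # assumed camera/processing rate

    while not rospy.is_shutdown():
        X = run_vision_and_kalman_filter()     # hypothetical: returns [x_o, xdot_o], 6 floats
        pub.publish(Float64MultiArray(data=list(X)))
        rate.sleep()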
In this experiment both the manipulator base frame and the camera frame are static, which is known as the eye-to-hand configuration, and the relative position of these frames has to be calibrated. Fig. 4 shows the extrinsic camera calibration scenario used in this experiment. The extrinsic camera calibration process estimates Tkb, the transformation of the manipulator base frame with respect to the Kinect camera frame. By placing a chessboard on the tip centre, the 3-D position of each corner point of the chessboard can be computed with the help of the forward kinematic homogeneous transformation matrix Tbc. The corresponding 2-D points of the chessboard corners in the camera frame are extracted from the image. Given a set of 3-D to 2-D point pairs, Tkb can be estimated. Extrinsic camera calibration algorithms are discussed in [16].

[Fig. 4. The extrinsic camera calibration.]
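For illustration, a minimal OpenCV sketch of this estimation, assuming the colour-camera intrinsics K and distortion coefficients are known from a prior intrinsic calibration; cv2.solvePnP recovers the pose of the base frame with respect to the camera from the 3-D/2-D pairs:

    import cv2
    import numpy as np

    def estimate_T_kb(corners_3d_base, corners_2d, K, dist):
        """Estimate Tkb (base frame w.r.t. Kinect frame) from chessboard point pairs.

        corners_3d_base : Nx3 corner positions in the base frame (via forward kinematics)
        corners_2d      : Nx2 detected corner pixels (e.g. cv2.findChessboardCorners)
        K, dist         : Kinect colour-camera intrinsics and distortion (assumed known)
        """
        ok, rvec, tvec = cv2.solvePnP(corners_3d_base.astype(np.float64),
                                      corners_2d.astype(np.float64), K, dist)
        R_kb, _ = cv2.Rodrigues(rvec)            # rotation matrix from rotation vector
        T_kb = np.eye(4)
        T_kb[:3, :3], T_kb[:3, 3] = R_kb, tvec.ravel()
        return T_kb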
VII. EXPERIMENTAL RESULTS

Two experiments were performed to investigate the performance of the position based visual tracking control design described in the previous sections: the first uses a static target and the second applies the system to tracking a moving target. The main purpose of the experiments is to analyse the error signal during robot movement, followed by analysis of each joint angle trajectory in joint space. In addition, the trajectory of the moving object is compared with the trajectory of the robot end-effector in task space.

A ball is used as the target in an uncluttered environment in these experiments. The target object is placed on a Pioneer mobile robot, which is driven forwards and backwards along a straight-line path across the base of the robot manipulator. The movement of the mobile robot is independent of the movement of the robot end-effector. This scenario verifies the control algorithm for tracking a moving object.

A. A static target

In this experiment, the robot end-effector and the target were placed at arbitrary initial positions. The task was to move the robot end-effector from the initial position, denoted ②, to the target position, denoted ③, as shown in Fig. 5, which also shows the movement of the robot links. Considering only the trajectory of the robot end-effector, approximately straight-line motion is achieved.

[Fig. 5. Robot manipulator links movement for positioning the robot end-effector to the static target object position (axes in metres). ① is the robot manipulator, ② is the robot end-effector initial position and ③ is the robot end-effector final position.]

Exponential decay of the error signal ‖e‖ is achieved, thus the control design of Eq. (15) is verified, as shown in Fig. 6. The control system converged within 5 seconds with the exponential decay constant λ = 20. The decay constant was adjusted manually for fast but stable control responses.

[Fig. 6. The norm of the error ‖e‖ over time for positioning the robot end-effector to the static target object position.]

Fig. 7 shows the trajectories of the robot manipulator joint angles. The joint angles changed along their trajectories and converged to the configuration at which the robot end-effector reached the target object position.

[Fig. 7. Joint angle trajectories for positioning the robot end-effector to the static target object position: (a) θ1, θ2 and θ3; (b) θ4, θ5, θ6 and θ7.]

B. A moving target

The experiment described in this section demonstrates the performance of the developed control algorithm for tracking a moving target. The target object was placed on the top platform of the Pioneer mobile robot. The mobile robot was used to carry the target only to make the experiment repeatable while the control parameter was adjusted manually; the same performance could be achieved for free-form motion of the target object, for example if the target were moved by hand. The Pioneer was moved forwards and backwards along a straight-line path and was controlled independently using a simple proportional velocity controller. The tracking system on the robot manipulator was started manually, and the robot end-effector's initial position was chosen approximately in the middle of the mobile robot's straight-line path, with a 50 cm offset along the z-axis of the robot base frame.

Fig. 8 shows the robot manipulator links movement and the target object trajectory. The movement sequence of the mobile robot was ④-③-④. Because the tracking controller was started when the target object was approximately halfway along its path, the robot end-effector moved towards the moving target and decreased the position error quickly, particularly in the z-axis direction. The robot end-effector then tracked the moving target while maintaining a small position error as the mobile robot moved back from ③ to finally stop at ④. The corresponding decrease of the error signal ‖e‖ is shown in Fig. 9.

[Fig. 8. Robot manipulator links movement for tracking a moving target (axes in metres). ① is the robot manipulator, ② represents the robot manipulator links, ③ is the target object trajectory and ④ is the stop point.]

Fig. 10 presents the dynamic joint angle trajectories during tracking of the moving target object. In the tracking stage, the joint angle configuration changed dynamically and then converged in approximately 20 seconds, when the target object stopped.

[Fig. 9. The norm of the error ‖e‖ over time for tracking a moving target.]

[Fig. 10. Joint angle trajectories for tracking a moving target: (a) θ1, θ2 and θ3; (b) θ4, θ5, θ6 and θ7.]
VIII. CONCLUSION

This paper has presented a position based visual tracking system for a 7 DOF redundant manipulator using a Kinect camera. A Kalman filter was used to estimate the state of a moving object in 3-D space. The 3-D information of the target object was obtained directly from the Kinect sensor data and then transformed to the robot manipulator base coordinate system; the transformation matrix from the Kinect camera frame to the robot manipulator base frame was obtained through an extrinsic camera calibration process. Using the 3-D information from the Kinect camera, the controller development effort is simplified by taking advantage of well-established robot manipulator Cartesian control algorithms. The experimental results show that convergence of the proposed method is achieved for both positioning and tracking tasks.

ACKNOWLEDGEMENT

I. Siradjuddin is supported by the Ministry of National Education of the Republic of Indonesia under the Direktorat Jenderal Pendidikan Tinggi (DIKTI) scholarship programme.

REFERENCES

[1] F. Chaumette and S. Hutchinson, "Visual servo control, part I: Basic approaches," IEEE Robotics and Automation Magazine, vol. 13, no. 4, pp. 82-90, December 2006.
[2] F. Janabi-Sharifi, L. Deng, and W. J. Wilson, "Comparison of basic visual servoing methods," IEEE/ASME Transactions on Mechatronics, vol. 16, no. 5, pp. 967-983, October 2011.
[3] F. Chaumette and S. Hutchinson, "Visual servo control, part II: Advanced approaches," IEEE Robotics and Automation Magazine, vol. 14, no. 1, pp. 109-118, March 2007.
[4] H. Fujimoto, "Visual servoing of 6 DOF manipulator by multirate control with depth identification," in Proc. 42nd IEEE Conference on Decision and Control, December 2003, vol. 5, pp. 5408-5413.
[5] A. De Luca, G. Oriolo, and P. R. Giordano, "On-line estimation of feature depth for image-based visual servoing schemes," in Proc. IEEE International Conference on Robotics and Automation, April 2007, pp. 2823-2828.
[6] F. Chaumette, "Image moments: a general and useful set of features for visual servoing," IEEE Transactions on Robotics, vol. 20, no. 4, pp. 713-723, August 2004.
[7] I. Siradjuddin, L. Behera, T. M. McGinnity, and S. Coleman, "Image based visual servoing of a 7 DOF robot manipulator using a distributed fuzzy proportional controller," in Proc. IEEE International Conference on Fuzzy Systems (FUZZ), July 2010, pp. 1-8.
[8] O. Tahri and F. Chaumette, "Point-based and region-based image moments for visual servoing of planar objects," IEEE Transactions on Robotics, vol. 21, no. 6, pp. 1116-1127, December 2005.
[9] A. Cherubini, F. Chaumette, and G. Oriolo, "A position-based visual servoing scheme for following paths with nonholonomic mobile robots," in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2008, pp. 1648-1654.
[10] D.-H. Park, J.-H. Kwon, and I.-J. Ha, "A novel position-based visual servoing approach to robust global stability under field-of-view constraint," IEEE Transactions on Industrial Electronics, 2011.
[11] T. Leyvand, C. Meekhof, Y.-C. Wei, J. Sun, and B. Guo, "Kinect identity: Technology and experience," Computer, vol. 44, no. 4, pp. 94-96, April 2011.
[12] J. Stowers, M. Hayes, and A. Bainbridge-Smith, "Altitude control of a quadrotor helicopter using depth map from Microsoft Kinect sensor," in Proc. IEEE International Conference on Mechatronics (ICM), April 2011, pp. 358-362.
[13] P. Benavidez and M. Jamshidi, "Mobile robot navigation and target tracking system," in Proc. 6th International Conference on System of Systems Engineering (SoSE), June 2011, pp. 299-304.
[14] "Amtec robotics," www.amtec-robotics.com.
[15] M. W. Spong, S. Hutchinson, and M. Vidyasagar, Robot Modeling and Control. John Wiley & Sons, 2005.
[16] W. Bourgeous, L. Ma, P. Chen, and Y. Chen, "Simple and efficient extrinsic camera calibration based on a rational model," in Proc. IEEE International Conference on Mechatronics and Automation, June 2006, pp. 177-182.
