Task-Based Control
Olivier Kermorgant
Basics of sensor-based control
Task-based control
• Control in the task space
• + Robust, easy to design
• – What about the trajectory?
• – How to combine several tasks?
Content of this course
Contents
• Basics of visual servoing and task function
• Visual moments for better trajectories
• Multi-task control
• Combining several tasks in a single control loop
• Redundancy and hierarchy
• Ensuring constraints during control
• Model Predictive Control
Requirements
• Linear algebra
• Numerical Optimization
Visual moments
• C. Steger (1996). “On the calculation of arbitrary moments of polygons”. In: München Univ., München, Germany, Tech. Rep. FGBV-96-05
• F. Chaumette (2004). “Image moments: a general and useful set of
features for visual servoing”. In: IEEE Trans. on Robotics
Principle of visual servo control
Image-Based VS
• 2D features from image processing
• Usually no particular object model

Position-Based VS
• 3D features from pose computation
• Typically with a CAD model
In all cases: need to express how the features vary when the camera
moves
The interaction matrix and its estimation
Usual proportional control law: $v = -\lambda \widehat{L}_s^{+} e$

Actual behavior: $\dot e = L_s v = -\lambda L_s \widehat{L}_s^{+} e$
• Stability depends on $L_s \widehat{L}_s^{+}$
• Ideal case: $L_s \widehat{L}_s^{+} = I_m$
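As a concrete sketch (not from the course), one iteration of this law in Python, where `Ls_hat` stands for the estimated interaction matrix and `lam` for the gain λ:

```python
import numpy as np

# One step of the proportional visual servo law: v = -lambda * Ls_hat^+ e.
# Ls_hat, s, s_star and lam are illustrative placeholders.
def vs_control(Ls_hat, s, s_star, lam=0.5):
    e = s - s_star                             # feature error
    return -lam * np.linalg.pinv(Ls_hat) @ e   # camera velocity command
```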
Stability principles
General behavior: $\dot e = L_s v = -\lambda L_s \widehat{L}_s^{+} e$
• $L_s \widehat{L}_s^{+}$ is $(m \times m)$, of rank $\min(m, 6)$ at best
• $L_s \widehat{L}_s^{+} = P^{-1} D P$ with $D = \mathrm{Diag}(d_1, \dots, d_m)$
• $P\dot e = -\lambda D P e$ ⇒ the components of $Pe$ vary with $\dot e_i = -\lambda d_i e_i$

If $m > \mathrm{rank}(L_s \widehat{L}_s^{+})$ then $\exists i, d_i = 0$: local minima
• $\exists e \neq 0$ such that $\widehat{L}_s^{+} e = 0$
• In particular if $m > 6$

If $L_s \widehat{L}_s^{+} > 0$: $\|e\|$ decreases ⇒ stable
• True if $\mathrm{rank}(L_s) = m$ and $\widehat{L}_s = L_s$, as then $L_s \widehat{L}_s^{+} = I_m$
• Less and less true as $\widehat{L}_s$ gets far from $L_s$
Computing L for a 3D point
$\dot X = L_{3D} v$

$X$ is actually fixed: $\dot X = -v + [X]_\times \omega = \begin{bmatrix} -I_3 & [X]_\times \end{bmatrix} v$

Interaction matrix of a 3D point $X$: $L_{3D} = \begin{bmatrix} -I_3 & [X]_\times \end{bmatrix}$
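A minimal Python transcription (function names are mine, not from the course):

```python
import numpy as np

def skew(X):
    """Skew-symmetric matrix [X]x such that skew(X) @ w = X x w."""
    x, y, z = X
    return np.array([[0., -z,  y],
                     [ z, 0., -x],
                     [-y,  x, 0.]])

def L_3d(X):
    """Interaction matrix of a fixed 3D point: Xdot = [-I3  [X]x] v."""
    return np.hstack([-np.eye(3), skew(X)])
```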
Computing L for a 2D point
Features $s = (x, y)$ in normalized coordinates: $x = X/Z$, $y = Y/Z$

$$\dot s = L_{2D} v$$

$$\dot s = \begin{bmatrix} 1/Z & 0 & -X/Z^2 \\ 0 & 1/Z & -Y/Z^2 \end{bmatrix} \dot X$$

$$\dot s = \begin{bmatrix} -1/Z & 0 & x/Z & xy & -(1+x^2) & y \\ 0 & -1/Z & y/Z & 1+y^2 & -xy & -x \end{bmatrix} v$$

Then $\dot s = \dfrac{\partial s}{\partial x}\dot x$ (or $\dot s = \dfrac{\partial s}{\partial X}\dot X$)

And $L_s = \dfrac{\partial s}{\partial x} L_{2D}$ (or $L_s = \dfrac{\partial s}{\partial X} L_{3D}$)
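The resulting matrix transcribes directly; a Python sketch (names are illustrative):

```python
import numpy as np

def L_2d(x, y, Z):
    """Interaction matrix of a normalized 2D point at depth Z,
    for the twist v = (vx, vy, vz, wx, wy, wz)."""
    return np.array([[-1/Z,    0, x/Z,      x*y, -(1 + x**2),  y],
                     [   0, -1/Z, y/Z, 1 + y**2,        -x*y, -x]])
```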
[Figure: image trajectories and 3D camera trajectory for Cartesian vs. polar point coordinates]
Large translation error: Cartesian coordinates induce a better trajectory
4-points servo, image and 3D behaviors
[Figure: image trajectories and 3D camera trajectory for Cartesian vs. polar point coordinates]
Large rotational error: polar coordinates induce a better trajectory
Other basic 2D features
2D segment: s = (x1 , y1 , x2 , y2 )
• Actually better if s = (xm , ym , l, α) or s = (xm /l, ym /l, 1/l, α)
2D line: s = (ρ, θ)
3D features (PBVS): $s = (t, R)$, of dim. 6
• Rotation expressed as $\theta u$
• Which translation and rotation?
Translations
• $^{c}t_{o}$ → $L_{c,o} = \begin{bmatrix} -I_3 & [^{c}t_{o}]_\times \end{bmatrix}$
• $^{c}t_{c^*}$ → $L_{c^*,c} = \begin{bmatrix} -I_3 & [^{c}t_{c^*}]_\times \end{bmatrix}$
• $^{o}t_{c}$ → $L_{o,c} = \begin{bmatrix} ^{o}R_{c} & 0_3 \end{bmatrix}$
• $^{c^*}t_{c}$ → $L_{c^*,c} = \begin{bmatrix} ^{c^*}R_{c} & 0_3 \end{bmatrix}$
[Figure: PBVS using $^{c^*}M_c$ vs. $^{c}M_{c^*}$: image and 3D camera trajectories, panels $^{c^*}t_c + {}^{c^*}R_c$ and $^{c}t_{c^*} + {}^{c}R_{c^*}$]
$^{c^*}M_c$ gets perfect 3D behavior, but in both cases: object lost!
PBVS with $F_o$, image and 3D behaviors
[Figure: PBVS with $s = (^{c}t_{o}, {}^{c^*}R_c)$ vs. $s = (^{c}t_{o}, {}^{c}R_{c^*})$: image and 3D camera trajectories]
E. Malis and F. Chaumette (2000). “2 1/2 d visual servoing with respect to unknown
objects through a new estimation scheme of camera displacement”. In: Int. J. of
Computer Vision
Basic idea:
• We need some x − y control → use a single 2D point
• We need some depth control → use $Z/Z^*$
• We need some rotation → use $^{c}R_{c^*}$
2 1/2 D VS
[Figure: 2 1/2 D VS vs. PBVS with $s = (^{c}t_{o}, {}^{c^*}R_c)$: image and 3D camera trajectories]
With $f(x, y) = x^i y^j$:

$$m_{ij} = \iint_D f(x, y)\,dx\,dy \;\Rightarrow\; \dot m_{ij} = \oint_{\text{contour}} f(x, y)\, \dot{\mathbf{x}}^\top \mathbf{n}\, dl$$

$$\dot m_{ij} = \iint_D \left[ i x^{i-1} y^j \dot x + j x^i y^{j-1} \dot y + x^i y^j \left( \frac{\partial \dot x}{\partial x} + \frac{\partial \dot y}{\partial y} \right) \right] dx\,dy \qquad (1)$$

Remember the interaction matrix of a point:

$$\dot x = \begin{bmatrix} -1/Z & 0 & x/Z & xy & -(1+x^2) & y \end{bmatrix} v_c$$
$$\dot y = \begin{bmatrix} 0 & -1/Z & y/Z & 1+y^2 & -xy & -x \end{bmatrix} v_c$$

With the planar assumption $1/Z = Ax + By + C$:

$$\frac{\partial \dot x}{\partial x} = \begin{bmatrix} -A & 0 & (2Ax + By + C) & y & -2x & 0 \end{bmatrix} v_c$$
$$\frac{\partial \dot y}{\partial y} = \begin{bmatrix} 0 & -B & (Ax + 2By + C) & 2y & -x & 0 \end{bmatrix} v_c$$

Put back in (1) and you get sums of several moments.
Interaction matrix $L_{ij}$ revealed
Under the planar assumption, $L_{ij} = \begin{bmatrix} m_{vx} & m_{vy} & m_{vz} & m_{\omega x} & m_{\omega y} & m_{\omega z} \end{bmatrix}$ with:

$$\begin{aligned}
m_{vx} &= -i(Am_{ij} + Bm_{i-1,j+1} + Cm_{i-1,j}) - Am_{ij} \\
m_{vy} &= -j(Am_{i+1,j-1} + Bm_{ij} + Cm_{i,j-1}) - Bm_{ij} \\
m_{vz} &= (i+j+3)(Am_{i+1,j} + Bm_{i,j+1} + Cm_{ij}) - Cm_{ij} \\
m_{\omega x} &= (i+j+3)m_{i,j+1} + jm_{i,j-1} \\
m_{\omega y} &= -(i+j+3)m_{i+1,j} - im_{i-1,j} \\
m_{\omega z} &= im_{i-1,j+1} - jm_{i+1,j-1}
\end{aligned}$$

Not to be known by heart, and actually useless as such.
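Even if not to be learned by heart, the formulas transcribe directly to code; a sketch where `m` is assumed to be a callable returning moment $m_{ij}$, and A, B, C define the plane $1/Z = Ax + By + C$:

```python
import numpy as np

def L_ij(i, j, m, A, B, C):
    """Interaction matrix of moment m_ij under the planar assumption."""
    mvx = -i*(A*m(i,j) + B*m(i-1,j+1) + C*m(i-1,j)) - A*m(i,j)
    mvy = -j*(A*m(i+1,j-1) + B*m(i,j) + C*m(i,j-1)) - B*m(i,j)
    mvz = (i+j+3)*(A*m(i+1,j) + B*m(i,j+1) + C*m(i,j)) - C*m(i,j)
    mwx = (i+j+3)*m(i,j+1) + j*m(i,j-1)
    mwy = -(i+j+3)*m(i+1,j) - i*m(i-1,j)
    mwz = i*m(i-1,j+1) - j*m(i+1,j-1)
    return np.array([mvx, mvy, mvz, mwx, mwy, mwz])
```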
Compromise between
• 2D features (from the image)
• 3D features (from pose computation)
Questions?
Conclusion on visual servoing
Assumptions
• The robot can be controlled in velocity
• Several sensors Si attached to the robot
• Correctly calibrated robot and sensors
• Only the robot is moving
• Proportional control laws
Notations
• Velocity command u, n = dim(u)
• Error ei = si − s∗i , mi = dim(ei )
• Feature Jacobian Ji
• $\dot e_i = \dfrac{\partial s_i}{\partial x} u + \dfrac{\partial s_i}{\partial t} - \dfrac{\partial s_i^*}{\partial t} = J_i u = \dot s_i$
• Feature desired variation ė∗i = −λ(si − s∗i )
Bibliography
Multi-tasking is about
• Performing several tasks at the same time
• Ensuring some tasks are performed prior to other ones
N. Mansard and F. Chaumette (2009). “Directional redundancy for robot control”. In:
IEEE Trans. on Automatic Control
⇒ Synergy between several tasks
M. Marey and F. Chaumette (2010). “A new large projection operator for the
redundancy framework”. In: IEEE Int. Conf. on Robotics and Automation
⇒ Projector on the norm (unstable at final position)
If $m \geq n$: over-constrained, we do at best
• $u = \arg\min_u \|Ju - \dot e^*\|^2$
• Typical case: 4 points = 8 features for only 6 possible motions
• Impossible to do anything else: all dof's are constrained
If m < n: under-constrained
1st example: underwater task with extended Jacobian
$$\dot e^* = \begin{bmatrix} -\lambda x_n \\ -\lambda y_n \\ -\lambda (a_n - a_n^*) \\ v_y^* \end{bmatrix}, \qquad J = \begin{bmatrix} 0 & 1/Z & 0 & 1 \\ 0 & 0 & 1/Z & 0 \\ 2a/Z & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}$$
Resulting motion: $v_x = 0,\; v_y = v_y^*,\; v_z = 0,\; \omega_z = -v_y^*/Z$
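A quick numerical check (the values of Z, a and $v_y^*$ are arbitrary): at convergence the feature errors vanish, and solving $Ju = \dot e^*$ reproduces the announced motion.

```python
import numpy as np

Z, a, vy_star = 2.0, 0.1, 0.3   # illustrative values
J = np.array([[0,     1/Z, 0,   1],
              [0,     0,   1/Z, 0],
              [2*a/Z, 0,   0,   0],
              [0,     1,   0,   0]])
e_dot_star = np.array([0., 0., 0., vy_star])  # x_n = y_n = 0, a_n = a_n*
u = np.linalg.solve(J, e_dot_star)
print(u)  # [0.  0.3  0.  -0.15]: all zero except vy = vy* and wz = -vy*/Z
```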
Back to 2D example
2D example
Specification
• Unknown: (x, y )
• Task: be as close as possible to point (x0, y0)
• Constraint: stay on the line x − y + 3 = 0
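For this toy problem the answer is closed-form (orthogonal projection onto the line); a sketch with arbitrary numbers:

```python
import numpy as np

x0, y0 = 1.0, -2.0                     # target point (illustrative)
a, b, c = 1.0, -1.0, 3.0               # line a*x + b*y + c = 0
d = (a*x0 + b*y0 + c) / (a**2 + b**2)  # signed distance factor
x, y = x0 - a*d, y0 - b*d              # closest point on the line
assert abs(x - y + 3) < 1e-12          # constraint satisfied
```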
[Figure: pure 2D vs. pure PBVS: 2D coordinates and 3D pose trajectories]
Which one is better?
When asking for a straight line + field of view
[Figure: 2D constraint vs. pure PBVS vs. area constraint, with full visibility maintained]
[Figure: evolution of the 2D-point and eye-to-hand weights over 1000 iterations; Case 1: eye-in-hand, Case 2: eye-in-hand, Case 3: eye-in-hand and eye-to-hand]
Remaining problems
A hint for the solution was proposed by two very different people
What should a robot do? Two formulations
Asimov's laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
4. Really?

As a hierarchy of optimization problems:
1. Let $f_1$ be the cost function for law 1: $\Omega_1 = \arg\min_u f_1(u)$
2. Let $f_2$ be the cost function for law 2: $\Omega_2 = \arg\min_u f_2(u)$ s.t. $u \in \Omega_1$
3. Let $f_3$ be the cost function for law 3: $\Omega_3 = \arg\min_u f_3(u)$ s.t. $u \in \Omega_2$
4. The end.

More generally, let $f_k$ be the cost function for law $k$: $\Omega_k = \arg\min_u f_k(u)$ s.t. $u \in \Omega_{k-1}$
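A minimal two-level instance of this cascade, in the classical null-space form (the strict-priority solution when each $f_i = \|J_i u - \dot e_i^*\|^2$; names are illustrative):

```python
import numpy as np

def two_task_hierarchy(J1, e1_dot, J2, e2_dot):
    """Task 1 solved at best; task 2 solved at best within Omega_1."""
    n = J1.shape[1]
    J1p = np.linalg.pinv(J1)
    u1 = J1p @ e1_dot                 # argmin ||J1 u - e1_dot||
    P1 = np.eye(n) - J1p @ J1         # projector onto null(J1)
    # the secondary task only acts through P1, so it cannot degrade task 1
    z = np.linalg.pinv(J2 @ P1) @ (e2_dot - J2 @ u1)
    return u1 + P1 @ z
```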
Quadratic programming
$$u = \arg\min_u \tfrac{1}{2} u^\top H u - c^\top u \quad \text{s.t.} \quad Au = b, \quad Cu \leq d$$
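As a sketch, this generic form maps directly to an off-the-shelf solver; here with cvxpy (assuming it is installed, and that H is symmetric positive semi-definite):

```python
import cvxpy as cp

def solve_qp(H, c, A, b, C, d):
    """min 1/2 u'Hu - c'u  s.t.  Au = b, Cu <= d."""
    u = cp.Variable(H.shape[0])
    cost = 0.5 * cp.quad_form(u, H) - c @ u
    cp.Problem(cp.Minimize(cost), [A @ u == b, C @ u <= d]).solve()
    return u.value
```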
QP formulation for task control
$\arg\min_u f(u)$?
• $f'(u) = 0$
• $f''(u) \geq 0$

If $f(u) = \tfrac{1}{2} u^\top H u - c^\top u$:
• $f'(u) = Hu - c$ ⇒ solve $Hu - c = 0$
• $f''(u) = H$ ⇒ solutions depend on the invertibility of $H$
⇒ As $H = J_i^\top J_i$, this actually depends on the rank of $J_i$
Invertibility of H depends on the feature Jacobian
If $n \leq m$:
• $H > 0$, invertible
• $u = H^{-1} c = (J^\top J)^{-1} J^\top \dot e^* = J^+ \dot e^*$
• 1 solution

If $n > m$:
• $H$ has $n - m$ null eigenvalues
• $JJ^\top$ invertible, of dim $m \times m$
• $u_0 = J^\top (JJ^\top)^{-1} \dot e^* = J^+ \dot e^*$
• $u = u_0 + (I_n - J^+ J) z$
• $\forall z$: $f(u) = 0$ and $f'(u) = 0$
• set of solutions of dim $n - m$
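Both columns in a few lines of numpy (J, $\dot e^*$ and z are illustrative):

```python
import numpy as np

J = np.random.randn(2, 6)            # m = 2 < n = 6: redundant case
e_dot_star = np.array([0.1, -0.2])
Jp = np.linalg.pinv(J)               # here equals J.T @ inv(J @ J.T)
u0 = Jp @ e_dot_star                 # minimum-norm solution
P = np.eye(6) - Jp @ J               # null-space projector
z = np.random.randn(6)               # arbitrary secondary motion
u = u0 + P @ z
assert np.allclose(J @ u, e_dot_star)  # task achieved for any z
```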
With equality constraints
$$\left\{ \begin{aligned} f(u) &= \tfrac{1}{2} u^\top H u - c^\top u \\ g(u) &= Au - b \end{aligned} \right. \quad \Rightarrow \quad L(u, \lambda) = \tfrac{1}{2} u^\top H u - c^\top u + \lambda^\top (Au - b)$$

$$\left\{ \begin{aligned} \frac{\partial L}{\partial u} &= Hu - c + A^\top \lambda = 0 \\ \frac{\partial L}{\partial \lambda} &= Au - b = 0 \end{aligned} \right. \quad \Rightarrow \quad \begin{bmatrix} H & A^\top \\ A & 0 \end{bmatrix} \begin{bmatrix} u \\ \lambda \end{bmatrix} = \begin{bmatrix} c \\ b \end{bmatrix}$$

$\begin{bmatrix} H & A^\top \\ A & 0 \end{bmatrix}$ symmetric ⇒ 1 or ∞ solutions
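This linear system solves in one call (H, c, A, b are illustrative; assumes the KKT matrix is nonsingular):

```python
import numpy as np

def eq_constrained_qp(H, c, A, b):
    """min 1/2 u'Hu - c'u  s.t.  Au = b, via the KKT system."""
    n, m = H.shape[0], A.shape[0]
    K = np.block([[H, A.T],
                  [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([c, b]))
    return sol[:n], sol[n:]          # primal u, multipliers lambda
```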
Why equality constraints are not really a problem
Main limitations
• Cannot explicitly handle inertia
• Just use small values for α
• Can only optimize the sole next control input
Trade-off between:
• Path / trajectory planning
• Offline
• Requires perfect (future) knowledge of the environment
• To be done again if initial plan cannot be performed
• Reactive control
• Online
• Instantaneous measurement of the situation
• Limited to the next iteration only
Core idea
• Plan a trajectory over some time horizon
• Apply only the first control input
• Get new measurement and repeat planning
Main idea illustrated
[Figure: state $x$, reference $x^*$ and control $u$ over time, using N = 1 vs. N = 10]
D. Dimitrov et al. (2010). “An optimized linear model predictive control solver”. In:
Recent Advances In Optimization and Its Applications In Engineering
F. Borrelli, A. Bemporad, and M. Morari (2017). Predictive control for linear and
hybrid systems.
MPC formulation
Assuming bounds / linear constraints: $C_x x \leq d_x$ and $C_u u \leq d_u$.

Then MPC is a (big) Quadratic Program:

$$u = \arg\min_{u, x} J(x, u) \quad \text{(quadratic)}$$
$$\text{s.t.} \quad \forall k \in [\![1, N]\!]: \quad \left\{ \begin{aligned} C_x x_k &\leq d_x \\ C_u u_{k-1} &\leq d_u \\ x_k &= A x_{k-1} + B u_{k-1} \end{aligned} \right.$$
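A minimal sketch of the receding-horizon loop for the unconstrained case, where the QP reduces to one linear solve (A, B, Q, R, N, x0, x_ref are illustrative placeholders):

```python
import numpy as np

def mpc_step(A, B, Q, R, N, x0, x_ref):
    """Predict x = F x0 + G u over N steps, minimize the quadratic cost,
    and return only the first input (receding horizon)."""
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    G = np.zeros((N * n, N * m))
    for k in range(1, N + 1):
        for j in range(k):
            G[(k-1)*n:k*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k-1-j) @ B
    Qbar = np.kron(np.eye(N), Q)     # state weights over the horizon
    Rbar = np.kron(np.eye(N), R)     # input weights over the horizon
    H = G.T @ Qbar @ G + Rbar
    g = G.T @ Qbar @ (F @ x0 - np.tile(x_ref, N))
    u = np.linalg.solve(H, -g)       # unconstrained QP = one linear solve
    return u[:m]                     # apply only u_0, then re-plan
```

With bounds on x and u, the same H and g define the inequality-constrained QP above.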
[Block diagram: MPC in closed loop with the system, with an internal model in the feedback path]
MPC tuning parameters
Prediction time ∆t: smaller is more accurate, but sees less far into the future
$Q$ and $R$ weighting matrices
• We may want to use $Q_k = \alpha^k Q$ and $R_k = \beta^k R$ with $0 < \alpha < 1$
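In the unconstrained sketch above, this discounting amounts to replacing the constant Kronecker blocks (α, β assumed scalar, values illustrative):

```python
import numpy as np

alpha, beta, N = 0.9, 0.9, 10
Q, R = np.eye(2), 0.1 * np.eye(1)
Qbar = np.kron(np.diag(alpha ** np.arange(1, N + 1)), Q)  # Q_k = alpha^k Q
Rbar = np.kron(np.diag(beta ** np.arange(N)), R)          # R_k = beta^k R
```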
Example of limited horizon control
[Figure: final error vs. average computation time for horizon lengths 2 to 8, zero-order hold vs. linear interpolation]