
Task-Based Control

Olivier Kermorgant
Basics of sensor-based control

History: taught position control

Original use of sensors: still position control


• Position feedback from sensors
• Involves sensor fusion (Kalman)

Task-based control
• Control in the task space
• + Robust, easy to design
• – What about the trajectory?
• – How to combine several tasks?
Content of this course

Contents
• Basics of visual servoing and task function
• Visual moments for better trajectories

• Multi-task control
• Combining several tasks in a single control loop
• Redundancy and hierarchy
• Ensuring constraints during control
• Model Predictive Control

Requirements
• Linear algebra
• Numerical Optimization

Let’s start with some visual servoing principles


Bibliography

Visual servoing tutorials


• F. Chaumette and S. Hutchinson (2006). “Visual servo control. I. Basic
approaches”. In: IEEE Robotics and Automation Mag.
• F. Chaumette and S. Hutchinson (2007). “Visual servo control, Part II:
Advanced approaches”. In: IEEE Robotics and Automation Mag.

Visual moments
• C. Steger (1996). “On the calculation of arbitrary moments of
polygons”. In: München Univ., München, Germany, Tech. Rep.
FGBV-96-05
• F. Chaumette (2004). “Image moments: a general and useful set of
features for visual servoing”. In: IEEE Trans. on Robotics
Principle of visual servo control

Pinhole camera model
• x = X/Z, y = Y/Z
• xp = αx x + x0, yp = αy y + y0

Velocity screw of the camera: vc
Extracted features s from the image: points, lines, areas, etc.

[Control loop diagram: feature setpoint compared to extracted features → error → proportional control in sensor space → joint velocities, with feature extraction providing the sensor feedback]

2D vs 3D Visual Servoing

Image-Based VS
• 2D features from image processing
• Usually no particular object model

Position-Based VS
• 3D features from pose computation
• Typically with a CAD model

In all cases: need to express how the features vary when the camera
moves
The interaction matrix and its estimation

The interaction matrix Ls of features s measured by an exteroceptive sensor links the variation of s to the sensor twist v:
ṡ = Ls v with dim. Ls = (m×6)

Ls depends on s and other information
• Has to be estimated as L̂s

Setpoint s∗ and error e = s − s∗

Usual proportional control law: v = −λ L̂s+ e

Actual behavior: ė = Ls v = −λ Ls L̂s+ e
• Stability depends on Ls L̂s+
• Ideal case: Ls L̂s+ = Im
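A minimal sketch of this control law (assuming NumPy and an already estimated L̂s; illustrative only, not ViSP code):

```python
import numpy as np

def servo_step(s, s_star, L_hat, lam=0.5):
    """One iteration of the proportional law v = -lambda * pinv(L_hat) @ (s - s*)."""
    e = s - s_star                          # feature error, dimension m
    v = -lam * np.linalg.pinv(L_hat) @ e    # camera twist (vx, vy, vz, wx, wy, wz)
    return v
```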
Stability principles

General behavior: ė = Ls v = −λ Ls L̂s+ e
• Ls L̂s+ is (m×m), of rank min(m, 6) at best
• Ls L̂s+ = P−1 D P with D = Diag(d1, . . . , dm)
• P ė = −λ D P e ⇒ components of Pe vary with ėi = −λ di ei

If m > rank(Ls L̂s+) then ∃i, di = 0: local minima
• ∃ e ≠ 0 such that L̂s+ e = 0
• In particular if m > 6

If ∃i, di < 0: some components of e increase ⇒ unstable

If Ls L̂s+ > 0: ∥e∥ decreases ⇒ stable
• True if rank(Ls) = m and L̂s = Ls, as then Ls L̂s+ = Im
• Less and less true as L̂s gets far from Ls
Computing L for a 3D point

3D point X = (X, Y, Z) in the camera frame

Ẋ = L3D v

If X was rigidly linked to the camera: Ẋ = v + (−X) × ω

X is actually fixed: Ẋ = −v + [X]× ω = [−I3  [X]×] v

Interaction matrix of a 3D point X: L3D = [−I3  [X]×]
Computing L for a 2D point

Features s = (x, y) in normalized coordinates: x = X/Z, y = Y/Z

ṡ = L2D v

ṡ = [ 1/Z    0     −X/Z²
      0      1/Z   −Y/Z² ] Ẋ

We already know Ẋ = L3D v

ṡ = [ −1/Z    0      x/Z    xy       −(1+x²)   y
       0     −1/Z    y/Z    1+y²     −xy      −x ] v

L2D (or Lxy) depends on the measured (x, y) and also on the non-measured depth Z
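A small helper (NumPy assumed) building this matrix for one point; stacking such 2×6 blocks for several points gives the full Ls used above:

```python
import numpy as np

def L_2d_point(x, y, Z):
    """Interaction matrix (2 x 6) of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,       -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y * y, -x * y,         -x],
    ])

# For s = (x1, y1, ..., xn, yn):
# L_s = np.vstack([L_2d_point(x, y, Z) for (x, y, Z) in points])
```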


Computing L: general method

From any feature s, express it as a function of (x, y) (or of (X, Y, Z))

Then ṡ = (∂s/∂x) ẋ   (or ṡ = (∂s/∂X) Ẋ)

And Ls = (∂s/∂x) L2D   (or Ls = (∂s/∂X) L3D)

Try to express Ls as much as possible with values of s


Computing L for other 2D points

2D point expressed in polar coordinates: s = (ρ, θ)
• ρ = √(x² + y²), θ = arctan(y/x)   and   x = ρ cos θ, y = ρ sin θ
• Lρ,θ = [ −cos θ/Z      −sin θ/Z      ρ/Z   (1+ρ²) sin θ   −(1+ρ²) cos θ    0
           sin θ/(ρZ)    −cos θ/(ρZ)   0     cos θ/ρ         sin θ/ρ        −1 ]

2D point expressed in pixel coordinates: s = (xp, yp)
• xp = x0 + αx x, yp = y0 + αy y
• ṡ = Diag(αx, αy) L2D v
Why care about how we describe an object? Example for a 2D point

With Cartesian coordinates s = (x, y)
• Lx,y = [ −1/Z    0      x/Z    xy      −(1+x²)   y
            0     −1/Z    y/Z    1+y²    −xy      −x ]
• At the image center:
  L0,0 = [ −1/Z    0     0   0   −1   0
            0     −1/Z   0   1    0   0 ]
• Motion on vx or ωy, and on vy or ωx

With polar coordinates s = (ρ, θ)
• Lρ = [ −cos θ/Z   −sin θ/Z   ρ/Z   (1+ρ²) sin θ   −(1+ρ²) cos θ   0 ]
• Lθ = [ sin θ/(ρZ)   −cos θ/(ρZ)   0   cos θ/ρ   sin θ/ρ   −1 ]
• Better decoupling between vz and ωz, but not defined at (0, 0)

In both cases: features sensitive to almost all camera motions


4-points servo, image and 3D behaviorsyi −1

cartesian
polar

3.0

2.5

2.0
1.5
Cartesian
1.0
1.2 0.5
1.0
0.0
0.8
0.6 0.6
0.4
0.4 0.2
0.2 0.0
0.0 0.2
0.4
0.2 0.6

Polar
Large translation error: Cartesian induces a better trajectory
4-points servo, image and 3D behaviors

[Figure: image-plane trajectories and 3D camera paths of a 4-point servo, Cartesian vs polar coordinates]

Large rotational error: polar coordinates induce a better trajectory
Other basic 2D features

2D segment: s = (x1 , y1 , x2 , y2 )
• Actually better if s = (xm , ym , l, α) or s = (xm /l, ym /l, 1/l, α)

2D line: s = (ρ, θ)

3D circle (hence 2D ellipse): ellipse coefficients or image moments

3D sphere (projects to a 2D circle or ellipse): same

In all cases: some parts of L are not measured in s (depth)


• Hence all the concern about estimating L
3D features

Current and desired poses from pose computation

s = (t, R) of dim. 6
• Rotation expressed as θu
• Which translation and rotation?

Several ways to define the error
• Absolute pose cMo or oMc
  • cto or otc vs c∗to or otc∗
  • Not compatible with rotation
• Pose error c∗Mc or cMc∗
  • Ok for translation and rotation
  • s∗ = 0

Any combination is possible
• (cto, c∗Rc), (ctc∗, cRc∗), etc.
Interaction matrices for 3D visual servoing

Translations
• cto → Lc,o = [−I3  [cto]×]
• ctc∗ → Lc∗,c = [−I3  [ctc∗]×]
• otc → Lo,c = [oRc  03]
• c∗tc → Lc∗,c = [c∗Rc  03]

Rotations: s = θu and Lθu = [03  Lω]
• θu from c∗Rc → Lω = I3 + (θ/2)[u]× + (1 − sinc θ / sinc²(θ/2)) [u]×²
• θu from cRc∗ → Lω = −I3 + (θ/2)[u]× − (1 − sinc θ / sinc²(θ/2)) [u]×²
• In both cases Lω θu = Lω−1 θu = ±θu

The choice of s induces different image and 3D behaviors

PBVS with Fc and Fc∗, image and 3D behaviors

[Figure: image-plane and 3D trajectories for PBVS with s built from c∗Mc (c∗tc + c∗Rc) vs from cMc∗ (ctc∗ + cRc∗)]

c∗Mc gets a perfect 3D behavior, but in both cases the object is lost!
PBVS with Fo, image and 3D behaviors

[Figure: image-plane and 3D trajectories for PBVS with s = (cto, c∗Rc) vs s = (cto, cRc∗)]

Actually the same behavior – the object center draws a 2D straight line


Sensor space and 3D behavior

Sensor space is not 3D space
• Especially true for image-based visual features

Alternative 1: use position control
• Need a 3D model of the object
• Visibility issue

Alternative 2: use better features
• 2 1/2 D VS
• Image moments
2 1/2 D VS

E. Malis and F. Chaumette (2000). “2 1/2 D visual servoing with respect to unknown objects through a new estimation scheme of camera displacement”. In: Int. J. of Computer Vision

First try to combine 2D and 3D features to get a (6×6) Ls

Still based on 2D point detection

Base idea:
• We need some x − y control → use a single 2D point
• We need some depth control → use Z/Z∗
• We need some rotation → use cRc∗
2 1/2 D VS

From a set of 2D points {(xi, yi), i ∈ [1, n]}

s = (xc, yc, log(Z/Z∗), θu) of dim. 6
• xc = (1/n) Σ xi, yc = (1/n) Σ yi (or the point nearest to the border)
• Z/Z∗ and θu obtained from two-view geometry:
  • Epipolar matrix if n ≥ 8
  • Homography matrix if n ≥ 4 and coplanar 3D points

The corresponding interaction matrix is triangular
• Also always invertible → globally stable control law
2 1/2 D VS

[Figure: image-plane and 3D trajectories comparing 2 1/2 D VS with PBVS using s = (cto, c∗Rc)]

The selected point (here the centroid) follows a 2D straight line


Definition of image moments

Segmented region of interest D
• Widely used in image processing
• Usually defined by its contour

Moment of order (i, j): mij = ∬D x^i y^j dx dy
• Related to intuitive geometrical concepts:
  • Area a = m00
  • Center of gravity (xg = m10/m00, yg = m01/m00)
  • Orientation (main axis) α from m20, m02 and m11
• Easy to compute from the contour (x1, . . . , xn = x0):
  a = (1/2) Σi=1..n (xi−1 yi − xi yi−1)
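The same contour-based approach extends to higher-order moments [Steger 1996]. A small sketch (NumPy assumed) for the moments needed for the area and the center of gravity:

```python
import numpy as np

def polygon_moments(contour):
    """Moments m00, m10, m01 of a simple closed polygon given by its vertices (n x 2 array)."""
    x, y = contour[:, 0], contour[:, 1]
    xp, yp = np.roll(x, 1), np.roll(y, 1)   # previous vertex (x_{i-1}, y_{i-1})
    cross = xp * y - x * yp                 # x_{i-1} y_i - x_i y_{i-1}
    m00 = 0.5 * np.sum(cross)               # signed area
    m10 = np.sum(cross * (xp + x)) / 6.0
    m01 = np.sum(cross * (yp + y)) / 6.0
    return m00, m10, m01

# Center of gravity: xg, yg = m10 / m00, m01 / m00
```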

How is ṁij linked to the camera velocity? ṁij = Lij vc


Interaction matrix of moments

Computation carried out in [Chaumette 2004]

mij = ∬D f(x, y) dx dy with f(x, y) = x^i y^j
⇒ ṁij = ∮contour f(x, y) ẋ⊤ n dl

Green's theorem says:

ṁij = ∬D div[f(x, y) ẋ] dx dy
    = ∬D [ (∂f/∂x) ẋ + (∂f/∂y) ẏ + f(x, y) (∂ẋ/∂x + ∂ẏ/∂y) ] dx dy
    = ∬D [ i x^(i−1) y^j ẋ + j x^i y^(j−1) ẏ + x^i y^j (∂ẋ/∂x + ∂ẏ/∂y) ] dx dy
Interaction matrix of moments

ṁij = ∬D [ i x^(i−1) y^j ẋ + j x^i y^(j−1) ẏ + x^i y^j (∂ẋ/∂x + ∂ẏ/∂y) ] dx dy    (1)

Remember the interaction matrix of a point:
ẋ = [ −1/Z    0     x/Z    xy      −(1+x²)   y ] vc
ẏ = [  0     −1/Z   y/Z    1+y²    −xy      −x ] vc

To go further: assumption of a planar object, 1/Z = Ax + By + C

⇒ ∂ẋ/∂x = [ −A    0    (2Ax + By + C)   y     −2x   0 ] vc
  ∂ẏ/∂y = [  0   −B    (Ax + 2By + C)   2y    −x    0 ] vc

Put back into (1), this yields sums of several moments
Interaction matrix Lij revealed

Under the planar assumption, Lij = [ mvx  mvy  mvz  mωx  mωy  mωz ] with:
  mvx = −i(A mij + B mi−1,j+1 + C mi−1,j) − A mij
  mvy = −j(A mi+1,j−1 + B mij + C mi,j−1) − B mij
  mvz = (i + j + 3)(A mi+1,j + B mi,j+1 + C mij) − C mij
  mωx = (i + j + 3) mi,j+1 + j mi,j−1
  mωy = −(i + j + 3) mi+1,j − i mi−1,j
  mωz = i mi−1,j+1 − j mi+1,j−1

Not to be known by heart (and actually useless as such)

Lij is computed from moments up to (i + j + 2)
• the translational components also need the plane parameters A, B, C
Cooking moments

Basic moments mij are not used directly
• Centered moments: μij = ∬D (x − xg)^i (y − yg)^j dx dy

Only 3 basic moments need to be computed, linked to translation
• Area a = m00, with La = [ −aA   −aB   a(3/Zg − C)   3a yg   −3a xg   0 ]
• Center of gravity (xg = m10/a, yg = m01/a)

Search for combinations that are invariant to some motions
• Translation: normalized moments st = (an, xn, yn)
  with an = 1/√a, xn = xg/√a, yn = yg/√a
• Rotation around z: main axis α = (1/2) arctan(2μ11 / (μ20 − μ02))
• Rotations around x and y: invariants also exist
  (they depend on whether the object is symmetrical or not)
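A short sketch (NumPy assumed) computing the translation- and z-rotation-related features defined above from already-computed moments:

```python
import numpy as np

def moment_features(m00, xg, yg, mu20, mu02, mu11):
    """Normalized moments (an, xn, yn) and main-axis orientation alpha."""
    an = 1.0 / np.sqrt(m00)                  # normalized area, as defined above
    xn, yn = xg / np.sqrt(m00), yg / np.sqrt(m00)
    # arctan2 keeps the orientation well defined when mu20 == mu02
    alpha = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return an, xn, yn, alpha
```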
Example of an interaction matrix

Interaction matrix for 4 points with Cartesian coordinates (example from [Chaumette 2004]), rows ordered as (x1, y1, x2, y2, x3, y3, x4, y4):

Ls=s∗ = [ −2    0   −0.3    0.01   −1.02   −0.1
           0   −2   −0.2    1.01   −0.01    0.14
          −2    0    0.3   −0.01   −1.02   −0.1
           0   −2   −0.2    1.01    0.01   −0.14
          −2    0    0.3    0.01   −1.02    0.1
           0   −2    0.2    1.01   −0.01   −0.14
          −2    0   −0.3   −0.01   −1.02    0.1
           0   −2    0.2    1.01    0.01    0.14 ]

Same points but using moments (thanks to Green's theorem), rows ordered as (xn, yn, an, sx, sy, α):

L||s=s∗ = [ −2    0    0       0     −1.02    0
             0   −2    0       1.01   0       0
             0    0    0.2     0      0       0
             0    0    0      −0.15   0       0
             0    0    0       0      0.19    0
             0    0    0       0      0      −1 ]

|| means parallel to the image plane: A = B = 0


Conclusion on moments as visual features

Compromise between
• 2D features (from the image)
• 3D features (from pose computation)

Easy to compute from segmentation or contour

Decoupled behavior, strong link between 2D and 3D motions

Available in C++ library ViSP (http://visp.inria.fr/)

Questions?
Conclusion on visual servoing

Ideal visual features: still a research topic


• Can be computed almost without any image processing
• Do not require any knowledge of the observed scene
• Lead to decoupled motions (no strange 3D behavior)
• As globally stable as possible

Ad-hoc features for specific cases


• Aircraft landing: lines / angle / vanishing point
• Visual navigation: only local stability matters
• Textured object grasping: use the model (edges + texture)

Other research activities


• Stability and singularity analysis
• Speed
• Features from deep learning
Ad-hoc features: aircraft landing

L. Coutard, F. Chaumette, and J.-M. Pflimlin (2011). “Automatic landing on aircraft


carrier by visual servoing”. In: 2011 IEEE/RSJ International Conference on Intelligent
Robots and Systems. IEEE
Basic features: visual navigation

A. Cherubini and F. Chaumette (2013). “Visual navigation of a mobile robot with


laser-based collision avoidance”. In: The International Journal of Robotics Research
Generic features: mutual information

A. Dame and E. Marchand (2011). “Mutual information-based visual servoing”. In:


IEEE Trans. on Robotics
Generic features: Gaussian mixtures

N. Crombez et al. (2018). “Visual servoing with photometric gaussian mixtures as


dense features”. In: IEEE Trans. on Robotics
Other challenge: try to do it fast

F. Fusco, O. Kermorgant, and P. Martinet (2020). “Integrating Features Acceleration


in Visual Predictive Control”. In: IEEE Robotics and Automation Letters
Assumptions and notations

Assumptions
• The robot can be controlled in velocity
• Several sensors Si attached to the robot
• Correctly calibrated robot and sensors
• Only the robot is moving
• Proportional control laws

Notations
• Velocity command u, n = dim(u)
• Error ei = si − s∗i , mi = dim(ei )
• Feature Jacobian Ji:
  ėi = (∂si/∂x) u + ∂si/∂t − ∂s∗i/∂t = Ji u = ṡi  (static scene and setpoint, cf. the assumptions)
• Feature desired variation: ė∗i = −λ(si − s∗i)
Bibliography

A brief history of redundancy


• O. Khatib (1986). “Real-Time Obstacle Avoidance for Manipulators and Mobile
Robots”. In: Int. Journal of Robotics Research
• Y. Nakamura, H. Hanafusa, and T. Yoshikawa (1987). “Task-priority based
redundancy control of robot manipulators”. In: Int. Journal of Robotics Research
• E. Malis, G. Morel, and F. Chaumette (2001). “Robot Control Using Disparate
Multiple Sensors”. In: Int. Journal of Robotics Research

Modern redundancy and constrained control


• O. Kanoun, F. Lamiraux, and P.-B. Wieber (2011). “Kinematic Control of
Redundant Manipulators: Generalizing the Task-Priority Framework to Inequality
Task”. In: IEEE Trans. on Robotics
• O. Kermorgant and F. Chaumette (2014). “Dealing with constraints in
sensor-based robot control”. In: IEEE Trans. on Robotics
• A. Escande, N. Mansard, and P.-B. Wieber (2014). “Hierarchical quadratic
programming: Fast online humanoid-robot motion generation”. In: Int. Journal of
Robotics Research
Redundancy and multi-tasking

Sensor-based control is about driving some feature s to s∗


Actually task-space control
• s can also be joint values or any feature reconstructed from (q, u)

Multi-tasking is about
• Performing several tasks at the same time
• Ensuring some tasks are performed prior to other ones

Joint limits, balance ≻ Collision ≻ Grasping ≻ Visibility


image : Oussama Kanoun
Rome was not built in one day

J. D. Schutter et al. (2007). “Constraint-Based Task Specification and Estimation for


Sensor-Based Robot Systems in the Presence of Geometric Uncertainty”. In: Int.
Journal of Robotics Research
⇒ Handles uncertainties

N. Mansard and F. Chaumette (2009). “Directional redundancy for robot control”. In:
IEEE Trans. on Automatic Control
⇒ Synergy between several tasks

N. Mansard, O. Khatib, and A. Kheddar (2009). “A unified approach to integrate


unilateral constraints in the stack of tasks”. In: IEEE Trans. on Robotics
⇒ Dynamic weighting (CPU intensive)

M. Marey and F. Chaumette (2010). “A new large projection operator for the
redundancy framework”. In: IEEE Int. Conf. on Robotics and Automation
⇒ Projector on the norm (unstable at final position)

Most of them are already outdated because of unilateral constraints


Extended Jacobian

Combining several (k) tasks with no priority?
• Regroup all tasks into a single one:

  ė∗ = (ė∗1, ė∗2, . . . , ė∗k)   with Jacobian   J = [J1; J2; . . . ; Jk]  (stacked vertically)

• Just apply u = J+ ė∗
• This minimizes ∥Ju − ė∗∥² = Σ ∥Ji u − ė∗i∥²
• This is actually what we do in visual servoing: s = (s1, . . . , sm)

The resulting motion is a compromise
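A minimal sketch (NumPy assumed) of this no-priority combination: stack the task Jacobians and desired variations, then apply the pseudo-inverse:

```python
import numpy as np

def extended_jacobian_control(tasks):
    """tasks: list of (J_i, e_dot_star_i) pairs; returns u = J^+ e_dot_star."""
    J = np.vstack([Ji for Ji, _ in tasks])
    e_dot_star = np.concatenate([ei for _, ei in tasks])
    return np.linalg.pinv(J) @ e_dot_star
```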


Task dimension vs degrees of freedom

What exactly does u = J+ ė∗ mean?
It depends on the dimensions n and m.

If m ≥ n: over-constrained, we do our best
• u = argmin_u ∥Ju − ė∗∥²
• Typical case: 4 points = 8 features for only 6 possible motions
• Impossible to do anything else: all dof's are constrained

If m < n: under-constrained
• We actually do u = argmin_u ∥u∥²  s.t.  Ju = ė∗
• Other, non-minimal-norm solutions exist that also satisfy Ju = ė∗
• Possible to use the remaining dof's to do another task
Projection to null-space

Generalized control law when m < n: u = J+ ė∗ + Pz
• P (dim n × n) is the projection operator onto the null space of J
• z can be any other task: it will not perturb ė

Assuming two tasks (ė∗1, J1) ≻ (ė∗2, J2)
• Minimal-norm law for ė∗1: u1 = J1+ ė∗1
• Define P1 = In − J1+ J1
• Then u = u1 + P1 (J2 P1)+ (ė∗2 − J2 u1)
• We still have J1 u = ė∗1

What about the k-th task?
• Projection onto the null space of (J1, J2, . . . , Jk−1)
• But Pk−1 = 0 as soon as Σi=1..k−1 mi ≥ n

Example with points (0, 0), (1, 1) and line y = 2x − 1
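A small sketch (NumPy assumed) of the two-task law u = u1 + P1 (J2 P1)+ (ė∗2 − J2 u1); the toy call below only illustrates the usage and is not meant to reproduce the exact example above:

```python
import numpy as np

def two_task_priority(J1, e1_star, J2, e2_star):
    """Task 1 is enforced exactly; task 2 acts in the null space of task 1."""
    J1p = np.linalg.pinv(J1)
    u1 = J1p @ e1_star                         # minimal-norm solution of task 1
    P1 = np.eye(J1.shape[1]) - J1p @ J1        # projector onto the null space of J1
    return u1 + P1 @ np.linalg.pinv(J2 @ P1) @ (e2_star - J2 @ u1)

# Hypothetical call: keep a line-related task satisfied (task 1) while
# trying to move toward a point (task 2)
u = two_task_priority(np.array([[2.0, -1.0]]), np.array([0.0]),
                      np.eye(2), np.array([1.0, 1.0]))
```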


1st example: underwater task with projection

Underwater vehicle with a front camera observing a buoy
• 4 dof's (3 translations + yaw): u = (vx, vy, vz, ωz)
• Which motion to circle around the buoy?

Task 1: keep the object centered and at a distance
  ė∗1 = −λ (xn − 0, yn − 0, a − a∗)
  J1|ė1=0 = [ 0      1/Z   0     1
              0      0     1/Z   0
              2a/Z   0     0     0 ]

Task 2: side motion
  ė∗2 = vy∗,   J2 = [0 1 0 0]

Resulting motion:
  vx = 0,  vy = vy∗,  vz = 0,  ωz = −vy∗/Z
1st example: underwater task with extended Jacobian

  ė∗ = ( −λ xn,  −λ yn,  −λ(an − an∗),  vy∗ )

  J = [ 0      1/Z   0     1
        0      0     1/Z   0
        2a/Z   0     0     0
        0      1     0     0 ]

Resulting motion:
  vx = 0,  vy = vy∗,  vz = 0,  ωz = −vy∗/Z

Same as projection: enough dof's to do everything perfectly

Faster computation by search space reduction

Equality constraints reduce the number of unknowns

  u = argmin_u ∥J2 u − ė∗2∥²   (dim u = n)
      s.t. J1 u = ė∗1          (dim J1 = m1 < n)

Basic projector
• u = J1+ ė∗1 + (I − J1+ J1) z = u1 + Pz
• ⇒ min_z ∥J2 (u1 + Pz) − ė∗2∥²
• z of dim. n

Smarter one
• Decomposition: J1T = Q [R; 0] with Q (n×n) orthogonal and R (m1×m1)
• Q = [Y Z] with Y of dim. n×m1 and Z of dim. n×(n−m1)
• u = Y R−T ė∗1 + Zz = u1 + Zz
• ⇒ min_z ∥J2 (u1 + Zz) − ė∗2∥²
• z of dim. n − m1

Back to the 2D example
2D example

Specification
• Unknown: (x, y)
• Task: be as close as possible to the point (x0, y0)
• Constraint: stay on the line x − y + 3 = 0

Written in optimization form:
  x = argmin ∥Jx − e∥²  s.t.  Ax = b
• J = I2, e = (x0, y0), A = [1 −1], b = [−3]

Basic projector (x̄ denotes the particular solution of the constraint)
• x̄ = A+ b = (−3/2, 3/2)
• P = I − A+ A ∝ [ 1  1
                   1  1 ]
• x = x̄ + P (JP)+ (e − Jx̄)
• x = (1/2) (x0 + y0 − 3,  x0 + y0 + 3)

Advanced projector
• Decomposition AT = Q [R; 0] with Q = [Y Z] orthogonal
• x̄ = Y R−T b = (−3/2, 3/2)
• x = x̄ + Z (JZ)+ (e − Jx̄)
• x = (1/2) (x0 + y0 − 3,  x0 + y0 + 3)
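A quick numerical check of the basic-projector solution (NumPy assumed), for an arbitrary target point (x0, y0) = (2, 1):

```python
import numpy as np

x0, y0 = 2.0, 1.0
J, e = np.eye(2), np.array([x0, y0])
A, b = np.array([[1.0, -1.0]]), np.array([-3.0])

x_bar = np.linalg.pinv(A) @ b                  # (-3/2, 3/2), minimum-norm feasible point
P = np.eye(2) - np.linalg.pinv(A) @ A          # projector onto the null space of A
x = x_bar + P @ np.linalg.pinv(J @ P) @ (e - J @ x_bar)
print(x)                                       # [0. 3.] = ((x0+y0-3)/2, (x0+y0+3)/2)
```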
Unilateral constraints in task-based control

Typical constraints during robot servoing


• Joint limits, velocity limits, visibility, ...
• In most cases: keep a feature si in a given interval [si− , si+ ]

Through extended Jacobian


• Change the compromise when near to the limit

Through projection approach


• Ensure strict hierarchy only when near to the limit
A higher priority task can also be seen as a constraint
Extended Jacobian

Change the compromise by weighting the error: He
• Constrained features have a virtual centered setpoint
• The law becomes u = (HJ)+ H ė∗
• H = diag(h1, . . . , hm)

[Figure: weight hi as a function of the feature value, with the safe interval [ss−, ss+] inside the limits [s−, s+]]

• O. Kermorgant and F. Chaumette (2014). “Dealing with constraints in
sensor-based robot control”. In: IEEE Trans. on Robotics
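One possible smooth weighting function of this kind is sketched below (an illustrative assumption, not necessarily the exact function of the reference above): zero inside the safe interval, growing without bound as the feature reaches a limit.

```python
def limit_weight(s, s_min, s_max, rho=0.1):
    """Illustrative weight for s_min <= s <= s_max: 0 in the safe zone, -> inf at a limit."""
    margin = rho * (s_max - s_min)
    ss_min, ss_max = s_min + margin, s_max - margin   # safe interval [ss-, ss+]
    if ss_min <= s <= ss_max:
        return 0.0
    if s > ss_max:
        return (s - ss_max) / (s_max - s) if s < s_max else float('inf')
    return (ss_min - s) / (s - s_min) if s > s_min else float('inf')
```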
Dynamic weighting for Extended Jacobian

Allows only two levels of hierarchy
• Task-related: constant weights
• Constraint-related: varying weights

[Figure: constant task weights vs varying constraint weights as a function of the feature value over [s−, s+]]

Behavior similar to penalty methods (or potential fields)


• Assuming feature i is a constraint:

  lim hi→+∞ (HJ)+ H ė∗ = argmin_u ∥Hī Jī u − Hī ė∗ī∥²  s.t.  Ji u = ė∗i

  (where ī denotes all the features but i)

• C. Van Loan (1985). “On the method of weighting for equality-constrained
least-squares problems”. In: SIAM Journal on Numerical Analysis
Remember the 4-points servo? 2D vs 3D

[Figure: 2D image coordinates and 3D pose trajectories for a pure 2D (IBVS) servo vs a pure PBVS servo]

Which one is better?
When asking for a straight line + field of view

[Figure: image and 3D trajectories with a 2D constraint, pure PBVS, and an area constraint; one case keeps full visibility, another only partial visibility (75 %)]

Constrained control is about defining the desired behavior
Application to joint limits

Combining several constraints

What about the weights?

[Figure: evolution of the 2D-point and eye-to-hand weights over 1000 iterations for Case 1 (eye-in-hand), Case 2 (eye-in-hand) and Case 3 (eye-in-hand and eye-to-hand)]
Remaining problems

Weighting is not compatible with projection
• Classical projection operator: P = I − J+ J
• If J becomes HJ: P = I − (HJ)+ (HJ)
• Which is not continuous at rank changes (hi going from 0 to non-zero)
• N. Mansard, A. Remazeilles, and F. Chaumette (2009). “Continuity of
varying-feature-set control laws”. In: IEEE Trans. on Automatic Control

What we would like to be able to solve:
• Any number of hierarchy levels of tasks
• Some tasks could be equalities (Ji u = ė∗i)
• Some tasks could be inequalities (s−k ≤ sk ≤ s+k)
• Low-hierarchy tasks should not disturb higher-level tasks

A hint for the solution was proposed by two very different persons
What should a robot do? Two formulations

1 A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  → let f1 be the cost function for law 1: Ω1 = argmin_u f1(u)

2 A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  → let f2 be the cost function for law 2: Ω2 = argmin_u f2(u)  s.t.  u ∈ Ω1

3 A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
  → let f3 be the cost function for law 3: Ω3 = argmin_u f3(u)  s.t.  u ∈ Ω2

4 Really? The end.
  → let fk be the cost function for law k: Ωk = argmin_u fk(u)  s.t.  u ∈ Ωk−1
Quadratic programming

A program is defined with:
• (H, c): minimization, with H symmetric positive
• (A, b): equality constraint
• (C, d): inequality constraint

Corresponding quadratic program:
  u = argmin_u (1/2) uT H u − cT u
      s.t. Au = b
      s.t. Cu ≤ d
QP formulation for task control

The function to minimize (H, c) comes from the current hierarchy level
• u = argmin ∥Ji u − ė∗i∥² = argmin (1/2) uT H u − cT u
• H = JiT Ji,  c = JiT ė∗i

Equality constraints (A, b) come from higher-hierarchy tasks
• Ji−1 u = ė∗i−1
• A = Ji−1,  b = ė∗i−1

Inequality constraints (C, d) too, with a little trick
• Base formulation: s− ≤ s ≤ s+
• Turned into a time variation:  ṡi = Ji u ≤ α(s+i − si)  and  ṡi = Ji u ≥ α(s−i − si)
• C = [ Ji ; −Ji ],   d = [ α(s+i − si) ; −α(s−i − si) ]
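A small helper (NumPy assumed) building this (C, d) pair from an interval constraint, ready to be passed to any QP solver:

```python
import numpy as np

def interval_to_inequality(J_i, s_i, s_min, s_max, alpha=1.0):
    """Turn s_min <= s_i <= s_max into C u <= d on the velocity:
       J_i u <= alpha (s_max - s_i)  and  -J_i u <= -alpha (s_min - s_i)."""
    C = np.vstack((J_i, -J_i))
    d = np.concatenate((alpha * (s_max - s_i), -alpha * (s_min - s_i)))
    return C, d
```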
How to solve a minimization problem

argmin f(u)?
• f′(u) = 0
• f″(u) ≥ 0

If f(u) = (1/2) uT H u − cT u:
• f′(u) = Hu − c  ⇒ solve Hu − c = 0
• f″(u) = H  ⇒ the solutions depend on the invertibility of H
  ⇒ as H = JiT Ji, they actually depend on the rank of Ji
Invertibility of H depends on the feature Jacobian

f(u) = ∥Ju − ė∗∥²
• dim J = m × n, rank = min(m, n)
• H = JT J, dim n × n, rank = min(m, n)
• c = JT ė∗

If n ≤ m:
• H > 0, invertible
• u = H−1 c = (JT J)−1 JT ė∗ = J+ ė∗
• 1 solution

If n > m:
• H has n − m null eigenvalues
• JJT invertible, dim m × m
• u0 = JT (JJT)−1 ė∗ = J+ ė∗
• u = u0 + (In − J+ J) z
• ∀z, f(u) = 0 and f′(u) = 0: a set of solutions of dim n − m
With equality constraints

argmin f(u)  s.t.  g(u) = 0
• Let L(u, λ) = f(u) + λT g(u)

argmin L(u, λ)?
• ∂L/∂u = 0
• ∂L/∂λ = 0

With f(u) = (1/2) uT Hu − cT u and g(u) = Au − b:
  L(u, λ) = (1/2) uT Hu − cT u + λT (Au − b)

∂L/∂u = Hu − c + AT λ = 0
∂L/∂λ = Au − b = 0

⇒  [ H   AT ] [ u ]   =   [ c ]
   [ A   0  ] [ λ ]       [ b ]

The KKT matrix [ H  AT ; A  0 ] is symmetric (with H positive) ⇒ 1 or ∞ solutions
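A direct solve of this KKT system (NumPy assumed), using a least-squares solve so that the singular (∞ solutions) case does not raise an error:

```python
import numpy as np

def solve_equality_qp(H, c, A, b):
    """min 1/2 u^T H u - c^T u  s.t.  A u = b, via the KKT system above."""
    n, m = H.shape[0], A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.lstsq(K, np.concatenate((c, b)), rcond=None)[0]
    return sol[:n], sol[n:]          # primal solution u and multipliers lambda
```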
Why equality constraints are not really a problem

Re-write f(u) = (1/2) uT Hu − cT u = (1/2) ∥Qu − r∥²
• H = QT Q,  c = QT r
• From sensor-based tasks: Q = Ji,  r = ė∗i

Then the problem becomes:
  u = argmin ∥Qu − r∥²
      s.t. Au = b
      s.t. Cu ≤ d

Equalities are removed by projection onto the null space of A:
  u = u0 + Zz  ⇒  solve z = argmin ∥Q(u0 + Zz) − r∥²
                           s.t. C(u0 + Zz) ≤ d
With inequality constraints: active sets

1 First solve without inequalities: u = argmin ∥Qu − r∥²

2 Check for the most violated inequality: max(Cu − d) > 0

3 Change it to an equality constraint:
  u = argmin ∥Qu − r∥²  s.t.  C̃u = d̃

4 If several inequalities are activated:
  • Compute the Lagrange multipliers λ = −(QC̃+)T (Qu − r)
  • If λk ≤ 0 then inequality k should be deactivated

5 Loop until Cu ≤ d and λk > 0 ∀k

6 Store the activated inequalities for next time (warm start)
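A naive sketch of this loop (NumPy assumed; no warm start and no equality constraints, for illustration only, not a production active-set solver):

```python
import numpy as np

def active_set_ls(Q, r, C, d, max_iter=50, tol=1e-9):
    """min ||Qu - r||^2  s.t.  Cu <= d, by activating / deactivating inequalities."""
    active = []                                     # indices of activated inequalities
    u = np.linalg.pinv(Q) @ r                       # step 1: unconstrained solution
    for _ in range(max_iter):
        if active:
            Ct, dt = C[active], d[active]           # step 3: activated rows as equalities
            Ctp = np.linalg.pinv(Ct)
            u0 = Ctp @ dt
            Z = np.eye(C.shape[1]) - Ctp @ Ct       # null-space projector of C~
            u = u0 + Z @ np.linalg.pinv(Q @ Z) @ (r - Q @ u0)
            lam = -(Q @ Ctp).T @ (Q @ u - r)        # step 4: Lagrange multipliers
            k = int(np.argmin(lam))
            if lam[k] <= -tol:                      # non-positive multiplier: deactivate
                active.pop(k)
                continue
        violation = C @ u - d                       # step 2: most violated inequality
        j = int(np.argmax(violation))
        if violation[j] <= tol:                     # step 5: all constraints satisfied
            return u
        if j not in active:
            active.append(j)
    return u
```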


Illustration

[Figure: active-set iterations with a naive start vs with pre-activation of the constraints]

Robot control: the same, but with tens of constraints


Example in humanoid control

Collisions, joint limits, robot balance...


Grasping
Visibility

image : Oussama Kanoun


Hierarchical Quadratic Programming

The full robot task is defined with:
• Ak, bk, Ck, dk: equality and inequality tasks (some may be null)
• Task 0 is the most important

At level 0:  u0, w0 = argmin_(u,w) ∥A0 u − b0∥² + ∥w∥²
                      s.t. C0 u − w ≤ d0
• The overall task performance is A0 u0 − b0
• Some inequalities may have been activated: C0,i u0 = d0,i + w0,i
• They are changed into equality constraints for the next task level

At level 1:  u1, w1 = argmin_(u,w) ∥A1 u − b1∥² + ∥w∥²
                      s.t. C1 u − w ≤ d1
                      s.t. C̄0 u ≤ d̄0
                      s.t. Ā0 u = b̄0

Same for all following levels...
Tele-operation in ultrasound

Joint limits ≻ US visibility ≻ Tele-operation


Full body control of an underwater vehicle

Joint limits, singularity ≻ Body/arm balance ≻ IBVS ≻ Current alignment
Remaining problems of constrained control

Assume correct calibration & observation
• Little work on ill-conditioned systems

Main limitations
• Cannot handle inertia explicitly
• Just use small values for α
• Can only optimize the next control input

How to handle future constraints: model-predictive control


Model-Predictive Control

Trade-off between:
• Path / trajectory planning
  • Offline
  • Requires perfect (future) knowledge of the environment
  • To be done again if the initial plan cannot be performed
• Reactive control
  • Online
  • Instantaneous measurement of the situation
  • Limited to the next iteration only

Core idea
• Plan a trajectory over some time horizon
• Apply only the first control input
• Get a new measurement and repeat the planning
Main idea illustrated

[Figure: state x, setpoint x∗ and control u over 100 iterations, with horizon N = 1 vs N = 10]

Convergence begins before the setpoint is changed

Control looks ahead: receding horizon / moving horizon

Can deal with state (tasks) / control constraints


Some references on MPC in robotics

W. H. Kwon, A. M. Bruckstein, and T. Kailath (1983). “Stabilizing state-feedback


design via the moving horizon method”. In: Int. J. of Control

D. Dimitrov et al. (2010). “An optimized linear model predictive control solver”. In:
Recent Advances In Optimization and Its Applications In Engineering

G. Allibert, E. Courtial, and F. Chaumette (2010). “Predictive control for constrained
image-based visual servoing”. In: IEEE Trans. on Robotics

J. B. Rawlings, D. Q. Mayne, and M. Diehl (2017). Model predictive control: theory,


computation, and design.

F. Borrelli, A. Bemporad, and M. Morari (2017). Predictive control for linear and
hybrid systems.
MPC formulation

Assuming control input u and state x:

  u = argmin_(u,x) Σk=0..n−1 ∥xk+1 − x∗k+1∥²Q + ∥uk∥²R
      s.t. ∀k ∈ ⟦1, n⟧:  uk−1 ∈ U,  xk ∈ X

• dim u is n × nu, uk is the k-th future control input
• dim x is n × nx, xk is the k-th future state (x0 is the current state)
• ∥u∥²R = uT Ru with R positive symmetric
• U and X are the valid domains for u and x
• We only apply u0 and discard u1, . . . , un−1
• Back to two levels of hierarchy: constraints ≻ cost function
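A toy receding-horizon loop (assuming NumPy and SciPy) on a 1D integrator xk+1 = xk + Δt·uk, just to make the "apply only u0, then re-plan" idea concrete; it is a single-shooting sketch, not a robotics-grade solver:

```python
import numpy as np
from scipy.optimize import minimize

dt, N = 0.1, 10                 # sampling time and prediction horizon
Q, R, u_max = 1.0, 0.01, 1.0    # state weight, control weight, control bound

def rollout(x0, u):
    """Predicted states x_1..x_N of the integrator from state x0 and control sequence u."""
    return x0 + dt * np.cumsum(u)

def cost(u, x0, x_ref):
    x = rollout(x0, u)
    return Q * np.sum((x - x_ref) ** 2) + R * np.sum(u ** 2)

x, x_ref, u_warm = 0.0, 1.0, np.zeros(N)
for _ in range(50):                                   # closed-loop simulation
    res = minimize(cost, u_warm, args=(x, x_ref),
                   bounds=[(-u_max, u_max)] * N)
    x = x + dt * res.x[0]                             # apply only the first input
    u_warm = np.roll(res.x, -1)                       # shifted warm start for the next step
```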


MPC formulation with actual model

u and x cannot be picked independently

They are linked through the model of the process: xk+1 = f(xk, uk)

Two approaches: single or multiple shooting


Single or multiple shooting

Single shooting: solve for u and compute x

  u = argmin_u J(x, u)
      s.t. ∀k ∈ ⟦1, n⟧:  xk = f̃(x0, u0, . . . , uk−2, uk−1) ∈ X,  uk−1 ∈ U

• xk = f(xk−1, uk−1) = f(f(xk−2, uk−2), uk−1) = f(f(f(. . .

Multiple shooting: solve for (u, x) with a constraint

  u = argmin_(u,x) J(x, u)
      s.t. ∀k ∈ ⟦1, n⟧:  xk = f(xk−1, uk−1) ∈ X,  uk−1 ∈ U

In both cases: the result highly depends on the model


MPC formulation with linear model

Assuming a linear model: xk+1 = Axk + Buk

Assuming bounds / linear constraints: Cx x ≤ dx and Cu u ≤ du

Then MPC is a (big) Quadratic Program:

  u = argmin_(u,x) J(x, u)   (quadratic)
      s.t. ∀k ∈ ⟦1, n⟧:  Cx xk ≤ dx,  Cu uk−1 ≤ du,  xk = Axk−1 + Buk−1

The single-shooting formulation is also a QP


General case with a non-linear model

A non-linear model xk+1 = f(xk, uk) implies:
• Quadratic cost J(x, u)
• Non-linear equality constraints xk+1 = f(xk, uk)
• Maybe non-linear inequality constraints

With single shooting:


• Non-linear cost
• Non-linear equality constraints even if linear wrt x

In all formulations: non-linear constraints / cost


• Much more difficult to solve
• Especially if n becomes large
• Current research: how to solve it efficiently
Robustness to model uncertainties

Internal Model Control
• Feedback on the error scorr between the process and the model
• Represents all model uncertainties and disturbances
• More advanced approach: continuous identification

[Block diagram: the MPC drives the system; a model runs in parallel and the difference between the system and model outputs is fed back to the MPC]
MPC tuning parameters

Prediction horizon versus control horizon
• Only use nc < n − 1 inputs
• Assume constant final control: unc+k = unc
• We only apply u0 anyway

Prediction time ∆t: smaller is more accurate, but looks less far into the future

Q and R weighting matrices
• We may want to use Qk = α^k Q and Rk = β^k R with 0 < α < 1
Example of limited horizon control

nc = 1 vs np = 10: assumes constant control on whole horizon


Multi-maneuver parking without path planning

D. Pérez-Morales et al. (Aug. 2021). “Multi-Sensor-Based Predictive Control For


Autonomous Parking”. In: IEEE Trans. on Robotics
MPC in practice

Many non-linear solvers: IPOPT, NLOPT, ACADOS

Specialized libraries for robot MPC: Crocoddyl, control-toolbox

Heavy use of code generation from robot models


• Matlab or Python symbolic toolboxes, C++ (CppADCodeGen, CasADi)
• fast computation of forward geometry, kinematics, dynamics
• and their corresponding derivatives

Current research: mostly speed optimization


• How to pick a good initial guess in non-linear MPC
• Linear approximation of non-linear MPC
• Reducing the search space: control parameterization
Research on MPC in robotics

Reducing the search space by control parameterization

[Figure: final error vs average computation time for the classic formulation and for zero-order hold (ZOH) and linear interpolation (LERP) parameterizations, for several horizon lengths]

Starting the solver from a near-optimal guess

parameterized MPC images: F. Fusco
