Power Systems Operation and Management: JMEE7311
9-3-2019
Daniel Kirschen
Equal Incremental Cost Dispatch
dC_A/dP_A = dC_B/dP_B = dC_C/dP_C = λ
P_A + P_B + P_C = P_LOAD
Lambda search algorithm
1. Choose a starting value for λ
2. Calculate the output PA, PB, PC of each unit at this value of λ (from dC/dP = λ)
3. If one of these values exceeds its lower or upper limit, fix it at that limit
4. Calculate PTOTAL = PA + PB + PC
5. If |PTOTAL − PLOAD| is within tolerance, stop; otherwise increase λ if PTOTAL < PLOAD and decrease λ if PTOTAL > PLOAD
6. Go to Step 2
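The steps above can be sketched as a bisection on λ. This is a minimal illustration, not the lecture's implementation: the quadratic cost coefficients, unit limits, and load below are invented for the example.

```python
# Hedged sketch of the lambda search for three units with quadratic costs
# C_i(P) = a_i + b_i*P + c_i*P^2, so dC_i/dP = b_i + 2*c_i*P and the
# equal-incremental-cost output is P = (lambda - b_i) / (2*c_i),
# clamped at the unit's limits (steps 2 and 3 of the algorithm).
# The unit data is illustrative, not from the lecture.

UNITS = [  # (b, c, Pmin, Pmax)
    (2.0, 0.010, 50.0, 300.0),   # unit A
    (3.0, 0.015, 40.0, 250.0),   # unit B
    (4.0, 0.020, 30.0, 200.0),   # unit C
]

def dispatch_for_lambda(lam):
    """Steps 2-3: output of each unit for a given lambda, clamped at limits."""
    return [min(max((lam - b) / (2 * c), pmin), pmax)
            for b, c, pmin, pmax in UNITS]

def lambda_search(p_load, tol=1e-6):
    """Bisection on lambda until total generation matches the load (steps 4-6)."""
    lo, hi = 0.0, 50.0                            # bracket wide enough for this data
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        p_total = sum(dispatch_for_lambda(lam))   # step 4
        if p_total < p_load:                      # step 5: raise or lower lambda
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return lam, dispatch_for_lambda(lam)

lam, (pa, pb, pc) = lambda_search(500.0)
```

At the solution, every unit that is not at a limit operates at the same incremental cost λ.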
Linear Cost Curves
[Figure: linear cost curves C_A, C_B and their constant incremental costs dC_A/dP_A, dC_B/dP_B plotted against P_A and P_B up to P_A^MAX, P_B^MAX; a horizontal line at λ sets the outputs]
Linear Cost Curves
• The output of the unit with the lowest incremental cost (in this case unit A) is increased first (it is "loaded" first)
• The output of the other units is not increased until unit A is loaded up to its maximum
• The output of the unit with the second-lowest incremental cost (the "second cheapest" unit) is then increased
• And so on, until the generation meets the load
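This loading order can be sketched directly in code. A minimal illustration with invented unit data (names, incremental costs, and limits are not from the lecture):

```python
# Hedged sketch of merit-order loading for linear cost curves: because
# incremental costs are constant, each unit is either fully loaded or idle,
# except the marginal unit that balances the load.

def merit_order_dispatch(units, load):
    """units: list of (name, incremental_cost, pmax); cheapest is loaded first."""
    dispatch = {}
    remaining = load
    for name, cost, pmax in sorted(units, key=lambda u: u[1]):
        p = min(pmax, remaining)          # load this unit up to its maximum
        dispatch[name] = p
        remaining -= p
    if remaining > 1e-9:
        raise ValueError("load exceeds total capacity")
    return dispatch

units = [("A", 10.0, 150.0), ("B", 15.0, 200.0), ("C", 20.0, 100.0)]
d = merit_order_dispatch(units, 250.0)
# A is loaded to its maximum, B is marginal, C is not needed
```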
Linear cost curves with minimum generation
[Figure: linear cost curves C_A, C_B and their incremental costs dC_A/dP_A, dC_B/dP_B, now with minimum generation limits on P_A and P_B]
Piecewise Linear Cost Curves
[Figure: piecewise linear cost curves C_A, C_B and the corresponding staircase incremental costs dC_A/dP_A, dC_B/dP_B]
Economic Dispatch with Piecewise Linear Cost Curves
[Figure: staircase incremental cost curves for units A and B with a horizontal line at λ determining each unit's output]
• At the optimal dispatch, all generators except one operate at breakpoints of their cost curves
• The marginal generator operates between breakpoints to balance the generation and the load
• Rank all the segments of the piecewise linear cost curves in order of incremental cost
• Then go down the table until the generation matches the load
Example
[Figure: staircase incremental costs; dC_A/dP_A steps through 0.3, 0.5, 0.7 over P_A = 30, 80, 120, 150, and dC_B/dP_B steps through 0.1, 0.6, 0.8 over P_B = 20, 70, 110, 140]

Unit     P segment      P total   Lambda
A&B min  20+30 = 50     50        —
B        70-20 = 50     100       0.1
A        80-30 = 50     150       0.3
A        120-80 = 40    190       0.5
B        110-70 = 40    230       0.6
A        150-120 = 30   260       0.7
B        140-110 = 30   290       0.8
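The example's merit-order table can be rebuilt programmatically. A sketch using the breakpoints and incremental costs read off the example's figure:

```python
# Units start at minimum generation (A: 30 MW, B: 20 MW); the segments of
# the piecewise linear cost curves are then ranked by incremental cost and
# loaded in that order, accumulating total generation.

segments = [  # (unit, MW in segment, incremental cost)
    ("B", 70 - 20, 0.1),
    ("A", 80 - 30, 0.3),
    ("A", 120 - 80, 0.5),
    ("B", 110 - 70, 0.6),
    ("A", 150 - 120, 0.7),
    ("B", 140 - 110, 0.8),
]

def build_table(p_min_total):
    """Rows of (unit, segment MW, cumulative total, lambda)."""
    rows = [("A&B min", p_min_total, p_min_total, None)]
    p_total = p_min_total
    for unit, size, lam in sorted(segments, key=lambda s: s[2]):
        p_total += size
        rows.append((unit, size, p_total, lam))
    return rows

table = build_table(20 + 30)   # both units at minimum generation: 50 MW
```

Going down the rows until the cumulative total matches the load gives the dispatch and the system lambda.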
[Figure: a function f(x) with two local optima, at x = A and x = D]
For x = A and x = D, we have: df/dx = 0 and d²f/dx² < 0
Which one is the real optimum?
[Figure: contours of a function of x1 and x2 with several local optima]
Convexity
• If the feasible set is convex and the objective function is convex, there is only one minimum, and it is thus the global minimum
Examples of Convex Feasible Sets
[Figure: convex feasible sets in the (x1, x2) plane, including an interval x1min ≤ x1 ≤ x1max]
Example of Non-Convex Feasible Sets
[Figure: non-convex feasible sets in the (x1, x2) plane, including a disconnected set x1a ≤ x1 ≤ x1b, x1c ≤ x1 ≤ x1d]
Example of Convex Feasible Sets
A set is convex if, for any two points belonging to the set, all the points on the straight line joining these two points also belong to the set.
[Figure: convex feasible sets, with the straight line joining any two points lying entirely within each set]
Example of Convex Function
[Figure: a convex function f(x), with convex level sets in the (x1, x2) plane]
Example of Non-Convex Function
[Figure: a non-convex function f(x) with several local minima, and non-convex level sets in the (x1, x2) plane]
Definition of a Convex Function
A function f is convex if, for any two points x_a and x_b and any k with 0 ≤ k ≤ 1:
z = k·f(x_a) + (1−k)·f(x_b) ≥ f(y) = f(k·x_a + (1−k)·x_b)
[Figure: the chord from (x_a, f(x_a)) to (x_b, f(x_b)) lies above the graph of f(x) at y = k·x_a + (1−k)·x_b]
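The definition can be checked numerically on a grid of k values. A sketch with two illustrative functions (x² is convex; −x² is not):

```python
# Grid check of the convexity inequality
#   k*f(xa) + (1-k)*f(xb) >= f(k*xa + (1-k)*xb)
# over k in [0, 1].  A grid check is only evidence, not a proof.

def satisfies_convexity(f, xa, xb, steps=100):
    """True if the chord stays above the function on a grid of k values."""
    for i in range(steps + 1):
        k = i / steps
        chord = k * f(xa) + (1 - k) * f(xb)
        if chord < f(k * xa + (1 - k) * xb) - 1e-12:
            return False
    return True

convex_ok = satisfies_convexity(lambda x: x * x, -1.0, 2.0)    # holds for x^2
nonconvex = satisfies_convexity(lambda x: -x * x, -1.0, 2.0)   # fails for -x^2
```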
Importance of Convexity
• Naïve One-Dimensional Search
• Multi-Dimensional Search
• Steepest Ascent/Descent Algorithm
Motivation
Method of Lagrange multipliers
• Very useful insight into solutions
• Analytical solution practical only for small problems
• Direct application not practical for real-life problems because these problems are too large
• Difficulties when problem is non-convex
Often need to search for the solution of practical optimization problems using:
• Objective function only or
• Objective function and its first derivative or
• Objective function and its first and second derivatives
Naïve One-Dimensional Search
• Suppose that we want to find the value of x that minimizes f(x), and that the only thing we can do is calculate the value of f(x) for any value of x
• We could calculate f(x) for a range of values of x and choose the one that minimizes f(x)
[Figure: f(x) evaluated at many points over a range of x]
• This requires a considerable amount of computing time if the range is large and a good accuracy is needed
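The naïve search described above fits in a few lines. A sketch with an illustrative test function:

```python
# Naive one-dimensional search: evaluate f on a uniform grid over [lo, hi]
# and keep the point with the smallest value.  Accuracy is limited by the
# grid spacing, so many samples are needed for a large range.

def naive_search(f, lo, hi, samples=10001):
    best_x, best_f = lo, f(lo)
    for i in range(1, samples):
        x = lo + (hi - lo) * i / (samples - 1)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Illustrative function with minimum 0.5 at x = 1.3
x_star, f_star = naive_search(lambda x: (x - 1.3) ** 2 + 0.5, -5.0, 5.0)
```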
One-Dimensional Search
[Figure sequence: starting from x0, successive evaluations at x1, x2, x3, x4 bracket the minimum and shrink the current search range]
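One standard way to implement this interval reduction is golden-section search, sketched below; this assumes f is unimodal on the interval, and the test function is illustrative rather than from the lecture.

```python
# Golden-section search: each iteration shrinks the bracketing interval by
# a constant factor (~0.618) while reusing one interior evaluation, so only
# one new f(x) is computed per iteration.

GOLD = (5 ** 0.5 - 1) / 2   # inverse golden ratio

def golden_section(f, a, b, tol=1e-6):
    c = b - GOLD * (b - a)
    d = a + GOLD * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                 # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - GOLD * (b - a)
            fc = f(c)
        else:                       # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + GOLD * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

xm = golden_section(lambda x: (x - 2.0) ** 2, 0.0, 5.0)   # minimum at x = 2
```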
Multi-Dimensional Search
Steepest Ascent/Descent Algorithm
[Figure sequence: contours of the objective function in the (x1, x2) plane; from the current point, the algorithm computes the gradient direction, performs a unidirectional search along it, and repeats from the new point until it reaches the optimum]
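The iteration shown in the figures can be sketched as follows. This is an illustration under assumptions not stated in the slides: the gradient is estimated by finite differences, the unidirectional search uses a golden-section shrink on the step length, and the quadratic test function is invented.

```python
# Steepest descent with a unidirectional (line) search: at each point, move
# opposite the gradient, find the best step length along that direction,
# then recompute the gradient and repeat.

GOLD = (5 ** 0.5 - 1) / 2

def grad(f, x, h=1e-6):
    """Central-difference estimate of the gradient of f at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def steepest_descent(f, x, iters=60):
    for _ in range(iters):
        g = grad(f, x)
        def phi(t):
            return f([xi - t * gi for xi, gi in zip(x, g)])
        a, b = 0.0, 1.0                   # search range for the step length t
        for _ in range(50):               # golden-section shrink on t
            c = b - GOLD * (b - a)
            d = a + GOLD * (b - a)
            if phi(c) < phi(d):
                b = d
            else:
                a = c
        t = 0.5 * (a + b)
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

# Illustrative quadratic with minimum at (1, -2)
x_min = steepest_descent(lambda x: (x[0] - 1) ** 2 + 4 * (x[1] + 2) ** 2,
                         [5.0, 5.0])
```

The characteristic zigzag of the iterates across the contours comes from consecutive search directions being orthogonal when the line search is exact.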
Choosing a Direction
Penalty functions
• Replace enforcement of inequality constraints by the addition of penalty terms to the objective function, e.g. Penalty = K(x − xmax)²
• The stiffness of the penalty function must be increased progressively to enforce the constraints tightly enough
• Not a very efficient method
[Figure: penalty cost that is zero between xmin and xmax and rises steeply outside these limits]

Barrier functions
• Add a barrier cost that grows as the solution approaches a limit from inside the feasible range
• The barrier must be made progressively closer to the limit
• Works better than penalty functions
• Basis of interior point methods
[Figure: barrier cost rising near xmin and xmax from within the feasible range]
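The effect of progressively stiffening the penalty can be seen in a small sketch. The objective, limit, and values of K below are invented for illustration; the inner minimization uses a simple grid search.

```python
# Penalty-function approach for  min f(x)  s.t.  x <= x_max:
# minimize f(x) + K * max(0, x - x_max)^2 and increase the stiffness K.
# As K grows, the unconstrained minimizer is pushed toward the limit.

def penalized_min(f, x_max, K, lo=-10.0, hi=10.0, samples=20001):
    """Minimize f(x) + K*max(0, x - x_max)^2 by grid search over [lo, hi]."""
    best_x, best_v = lo, None
    for i in range(samples):
        x = lo + (hi - lo) * i / (samples - 1)
        v = f(x) + K * max(0.0, x - x_max) ** 2
        if best_v is None or v < best_v:
            best_x, best_v = x, v
    return best_x

f = lambda x: (x - 5.0) ** 2           # unconstrained minimum at x = 5
xs = [penalized_min(f, 3.0, K) for K in (1.0, 10.0, 1000.0)]
# each stiffer K pulls the solution closer to the limit x_max = 3
```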
Non-Robustness
Different starting points may lead to different solutions if the problem is not convex
[Figure: contours of a non-convex function of x1 and x2; starting points X and Y lead to different local optima]
Linear Programming (LP)
Motivation
• Many optimization problems are linear
• Linear objective function
• All constraints are linear
• Non-linear problems can be linearized:
• Piecewise linear cost curves
• DC power flow
• Efficient and robust methods exist to solve such problems
[Figure: piecewise linearization of a cost curve C_A(P_A)]
Mathematical formulation
minimize    Σ_{j=1}^{n} c_j x_j
subject to: Σ_{j=1}^{n} a_{ij} x_j ≤ b_i,  i = 1, 2, ..., m
            Σ_{j=1}^{n} c_{ij} x_j = d_i,  i = 1, 2, ..., p
Example 1
Maximize x + y
Subject to: x ≥ 0; y ≥ 0; x ≤ 3; y ≤ 4; x + 2y ≥ 2
[Figure: the feasible region in the (x, y) plane, bounded by x = 0, y = 0, x = 3, y = 4 and x + 2y = 2]
Sliding the line x + y = constant across the feasible region, every point where it intersects the region is a feasible solution, and moving the line up increases the objective: x + y = 0, then 1, 2, 3, and so on. The line leaves the region at the vertex (3, 4), so the optimal solution is x + y = 7.
[Figure sequence: the level line x + y = c swept across the feasible region up to the optimal solution at x + y = 7]
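The fact that the optimum sits at a vertex can be demonstrated by brute force for Example 1: intersect every pair of constraint boundaries, keep the feasible intersections, and evaluate the objective at each. This enumeration is a sketch for illustration, not how LP solvers work in practice.

```python
# Enumerate candidate vertices of Example 1's feasible region and pick the
# one maximizing x + y.  Constraints are written as a*x + b*y <= c.

from itertools import combinations

CONS = [
    (-1.0, 0.0, 0.0),    # x >= 0
    (0.0, -1.0, 0.0),    # y >= 0
    (1.0, 0.0, 3.0),     # x <= 3
    (0.0, 1.0, 4.0),     # y <= 4
    (-1.0, -2.0, -2.0),  # x + 2y >= 2
]

def intersect(c1, c2):
    """Intersection of the two boundary lines, or None if parallel."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in CONS)

vertices = []
for c1, c2 in combinations(CONS, 2):
    p = intersect(c1, c2)
    if p is not None and feasible(p):
        vertices.append(p)

best = max(vertices, key=lambda p: p[0] + p[1])   # optimum at the vertex (3, 4)
```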
Solving a LP problem (1)
• The constraints define a polyhedron in n dimensions
• If a solution exists, it will be at an extreme point (vertex) of this polyhedron
• Starting from any feasible solution, we can find the optimal solution by following the edges of the polyhedron
• The Simplex algorithm determines which edge should be followed next
Which direction?
[Figure: Example 1 again; at each vertex the Simplex algorithm must decide which edge of the feasible region to follow toward the optimal solution at x + y = 7]
Solving a LP problem (2)
• If a solution exists, the Simplex algorithm will find it
• But it could take a long time for a problem with many variables!
• Interior point algorithms are an alternative; they are equivalent to optimization with barrier functions
Interior point methods
[Figure: a polyhedron with its extreme points (vertices) and constraints (edges); interior point methods move through the interior of the polyhedron rather than along its edges]
Solving linear programming problems with Matlab
x = linprog(f,A,b,Aeq,beq,lb,ub)

Example:
minimize −x1 − x2/3
Subject to:
• Linear inequality constraints: x1 + x2 ≤ 2; x1 + x2/4 ≤ 1; x1 − x2 ≤ 2; −x1/4 − x2 ≤ 1; −x1 − x2 ≤ −1; −x1 + x2 ≤ 2
• Linear equality constraint: x1 + x2/4 = 1/2
• Bounds: −1 ≤ x1 ≤ 1.5; −0.5 ≤ x2 ≤ 1.25

f = [-1 -1/3];       % objective function
A = [1 1
     1 1/4
     1 -1
     -1/4 -1
     -1 -1
     -1 1];          % A of linear inequality constraints
b = [2 1 2 1 -1 2];  % b of linear inequality constraints
Aeq = [1 1/4];       % Aeq of linear equality constraint
beq = 1/2;           % beq of linear equality constraint
lb = [-1,-0.5];      % range lb <= x <= ub
ub = [1.5,1.25];
[x,fval] = linprog(f,A,b,Aeq,beq,lb,ub)  % LP solver

Solution:
x =
    0.1875
    1.2500
fval =
   -0.6042
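The Matlab result can be cross-checked in pure Python by exploiting the equality constraint: x1 + x2/4 = 1/2 gives x2 = 2 − 4·x1, reducing the LP to one variable. The objective −x1 − x2/3 then becomes x1/3 − 2/3, which is minimized at the smallest feasible x1. This reduction is a sketch specific to this example, not a general LP method.

```python
# Cross-check of the linprog example: substitute x2 = 2 - 4*x1 from the
# equality constraint, then scan x1 for the smallest feasible value.

def x2_of(x1):
    return 2.0 - 4.0 * x1              # from x1 + x2/4 = 1/2

A = [(1, 1), (1, 0.25), (1, -1), (-0.25, -1), (-1, -1), (-1, 1)]
b = [2, 1, 2, 1, -1, 2]
LB, UB = (-1.0, -0.5), (1.5, 1.25)

def feasible(x1):
    x2 = x2_of(x1)
    if not (LB[0] <= x1 <= UB[0] and LB[1] <= x2 <= UB[1]):
        return False
    return all(a1 * x1 + a2 * x2 <= bi + 1e-9
               for (a1, a2), bi in zip(A, b))

# Scan x1 over a fine grid from -1 to 1.5; since the reduced objective
# x1/3 - 2/3 increases with x1, the first feasible x1 is optimal.
x1 = next(v for v in (i / 16000.0 - 1.0 for i in range(40001)) if feasible(v))
x2 = x2_of(x1)
fval = -x1 - x2 / 3.0                  # matches linprog: x = (0.1875, 1.25)
```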
Thank you