
JMEE7311
Power Systems Operation and Management
Fourth Lecture

• Practical Economic Dispatch
• Local and Global Optima
• Non-Linear Programming
• Optimization of Linear Problems: Linear Programming (LP)

Dr. Nassim Iqteit
9-3-2019
Based on slides © 2011 Daniel Kirschen and the University of Washington
Equal Incremental Cost Dispatch

At the optimum, all units operate at the same incremental cost:

dC_A/dP_A = dC_B/dP_B = dC_C/dP_C = λ

subject to the load balance P_A + P_B + P_C = L.

[Figure: cost curves C_A(P_A), C_B(P_B), C_C(P_C) and the outputs P_A, P_B, P_C that satisfy this condition.]
Lambda Search Algorithm

1. Choose a starting value for λ
2. Calculate P_A, P_B, P_C such that dC_A(P_A)/dP_A = dC_B(P_B)/dP_B = dC_C(P_C)/dP_C = λ
3. If one of these values exceeds its lower or upper limit, fix it at that limit
4. Calculate P_TOTAL = P_A + P_B + P_C
5. If P_TOTAL < L, increase λ;
   else if P_TOTAL > L, decrease λ;
   else if P_TOTAL ≈ L, exit
6. Go to Step 2

A minimal code sketch of this search is given below.
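The following MATLAB sketch implements the search with bisection on λ for three units with quadratic costs C_i(P_i) = a_i + b_i P_i + c_i P_i², so that dC_i/dP_i = b_i + 2 c_i P_i. The cost coefficients, limits and load are illustrative assumptions, not data from the lecture.

b = [2.0 2.5 3.0];         % linear cost coefficients (assumed)
c = [0.010 0.015 0.020];   % quadratic cost coefficients (assumed)
Pmin = [50 50 50];         % lower generation limits in MW (assumed)
Pmax = [300 250 200];      % upper generation limits in MW (assumed)
L = 450;                   % load to be served in MW (assumed)

lam_lo = min(b);                    % lower end of the bracket for lambda
lam_hi = max(b + 2*c.*Pmax);        % upper end: highest incremental cost at Pmax
for iter = 1:100
    lam = (lam_lo + lam_hi)/2;      % step 1: trial value of lambda
    P = (lam - b)./(2*c);           % step 2: solve dC_i/dP_i = lambda for each unit
    P = min(max(P, Pmin), Pmax);    % step 3: fix violated values at their limits
    Ptotal = sum(P);                % step 4: total generation
    if abs(Ptotal - L) < 1e-6, break; end               % step 5: P_TOTAL ~ L, exit
    if Ptotal < L, lam_lo = lam; else, lam_hi = lam; end % increase/decrease lambda
end                                 % step 6: repeat from step 2
fprintf('lambda = %.4f, P = [%.1f %.1f %.1f] MW\n', lam, P);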
Linear Cost Curves

[Figure sequence: linear cost curves C_A(P_A) and C_B(P_B) up to P_A^MAX and P_B^MAX, and the corresponding constant incremental costs dC_A/dP_A and dC_B/dP_B. As λ rises, unit A (the cheaper unit) is loaded to its maximum before unit B picks up any load.]
• The output of the unit with the lowest incremental cost (in this case unit A) is increased first (it is "loaded" first)
• The output of the other units is not increased until unit A is loaded up to its maximum
• The output of the unit with the second lowest incremental cost (the "second cheapest" unit) is then increased
• And so on, until the generation meets the load
Linear Cost Curves with Minimum Generation

[Figure: the same two units with lower limits P_A^MIN, P_B^MIN and upper limits P_A^MAX, P_B^MAX shown on both the cost curves and the incremental cost curves.]
• All generating units are first loaded up to their minimum generation
• The generating units are then loaded in order of incremental cost
Piecewise Linear Cost Curves

[Figure: piecewise linear cost curves C_A(P_A), C_B(P_B) and their staircase incremental cost curves dC_A/dP_A, dC_B/dP_B.]
Economic Dispatch with Piecewise Linear Cost Curves

[Figure sequence: as λ increases, each unit's output stays at a breakpoint of its cost curve until λ exceeds the incremental cost of the next segment, then jumps to the next breakpoint.]
• All generators except one are at breakpoints of their cost curve
• The marginal generator sits between breakpoints, balancing load and generation
• This case is not well suited to the lambda-search algorithm
• Very fast table-lookup algorithm:
  • Rank all the segments of the piecewise linear cost curves in order of incremental cost
  • First dispatch all the units at their minimum generation
  • Then go down the table until the generation matches the load
Example

Unit A: P_A^MIN = 30 MW; dC_A/dP_A = 0.3 for 30–80 MW, 0.5 for 80–120 MW, 0.7 for 120–150 MW
Unit B: P_B^MIN = 20 MW; dC_B/dP_B = 0.1 for 20–70 MW, 0.6 for 70–110 MW, 0.8 for 110–140 MW

Unit       P_segment          P_total   Lambda
A & B min  20 + 30 = 50       50        –
B          70 − 20 = 50       100       0.1
A          80 − 30 = 50       150       0.3
A          120 − 80 = 40      190       0.5
B          110 − 70 = 40      230       0.6
A          150 − 120 = 30     260       0.7
B          140 − 110 = 30     290       0.8

If P_load = 210 MW, the optimal economic dispatch is:

P_A = 120 MW, P_B = 90 MW, λ = 0.6

(210 MW falls between 190 MW and 230 MW in the table, so unit B's 0.6 segment is the marginal one: P_B = 70 + 20 = 90 MW.)

A MATLAB sketch that reproduces this table lookup is given below.
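This sketch builds the merit-order table from the segment data of the example above and dispatches P_load = 210 MW; the variable names are illustrative.

Pmin = [30 20];                       % minimum generation of units A, B (MW)
% segments: [unit  incremental cost  segment size (MW)]
seg = [1 0.3 50;   % A: 30->80 MW at 0.3
       1 0.5 40;   % A: 80->120 MW at 0.5
       1 0.7 30;   % A: 120->150 MW at 0.7
       2 0.1 50;   % B: 20->70 MW at 0.1
       2 0.6 40;   % B: 70->110 MW at 0.6
       2 0.8 30];  % B: 110->140 MW at 0.8
seg = sortrows(seg, 2);               % rank segments by incremental cost

Pload = 210;
P = Pmin;                             % first dispatch all units at minimum
remaining = Pload - sum(P);
lambda = 0;                           % stays 0 if the load is met at minimum generation
for k = 1:size(seg,1)                 % then go down the merit-order table
    u = seg(k,1);
    take = min(seg(k,3), remaining);  % the marginal segment is only partially loaded
    P(u) = P(u) + take;
    remaining = remaining - take;
    if remaining <= 0
        lambda = seg(k,2);            % system marginal cost
        break
    end
end
fprintf('PA = %g MW, PB = %g MW, lambda = %g\n', P(1), P(2), lambda);

Running this prints PA = 120 MW, PB = 90 MW, lambda = 0.6, matching the dispatch found by hand.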
Which one is the real maximum?

[Figure: f(x) with two local maxima, at x = A and x = D.]

For x = A and x = D, we have: df/dx = 0 and d²f/dx² < 0
Which one is the real optimum?

[Figure: contours of f(x1, x2) with four local minima A, B, C and D.]

A, B, C and D are all local minima because, at each of these points, the first-order conditions

∂f/∂x1 = 0 and ∂f/∂x2 = 0

hold, and the second-order conditions for a minimum are satisfied:

∂²f/∂x1² > 0, ∂²f/∂x2² > 0 and (∂²f/∂x1²)(∂²f/∂x2²) − (∂²f/∂x1∂x2)² > 0
Local and Global Optima
• The optimality conditions are local conditions
• They do not compare separate optima
• They do not tell us which one is the global optimum
• In general, to find the global optimum we must find and compare all the optima
• In large problems this can require so much time that it is essentially an impossible task

Convexity
• If the feasible set is convex and the objective function is convex, there is only one minimum and it is thus the global minimum
Examples of Convex and Non-Convex Feasible Sets

A set is convex if, for any two points belonging to the set, all the points on the straight line joining these two points also belong to the set.

[Figure: examples of convex feasible sets (e.g. convex regions in the (x1, x2) plane, or a single interval x1^min ≤ x1 ≤ x1^max) and of non-convex feasible sets (e.g. regions with indentations, or a disjoint union of intervals x1^a ≤ x1 ≤ x1^b and x1^c ≤ x1 ≤ x1^d).]
Examples of Convex and Non-Convex Functions

[Figure: a convex function f(x) with a single minimum, and a non-convex function f(x) with several local minima; the corresponding contour plots in the (x1, x2) plane show the same behaviour in two dimensions.]
Definition of a Convex Function

A convex function is a function such that, for any two points x_a and x_b belonging to the feasible set and any k such that 0 ≤ k ≤ 1, we have:

z = k f(x_a) + (1 − k) f(x_b) ≥ f(y) = f[k x_a + (1 − k) x_b]

In words: the chord z joining any two points on the curve never dips below the function itself.

[Figure: a convex function with the chord above the curve between x_a and x_b, and a non-convex function for which the chord crosses below the curve.]

A small numerical check of this inequality is sketched below.
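The following MATLAB snippet checks the convexity inequality numerically for an assumed test function f(x) = x²; the points x_a, x_b are arbitrary.

f = @(x) x.^2;                         % convex test function (assumed)
xa = -1; xb = 3;                       % any two points in the feasible set
k = linspace(0, 1, 101);               % all k with 0 <= k <= 1
chord = k*f(xa) + (1 - k)*f(xb);       % z = k f(xa) + (1-k) f(xb)
curve = f(k*xa + (1 - k)*xb);          % f(k xa + (1-k) xb)
assert(all(chord - curve >= -1e-12))   % the chord never dips below the curve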
Importance of Convexity

• If we can prove that a minimization problem is convex (convex feasible set and convex objective function), then the problem has one and only one solution
• Proving convexity is often difficult
• Power system problems are usually not convex, so there may be more than one solution to power system optimization problems
Overview
• Motivation
• Naïve One-Dimensional Search
• Multi-Dimensional Search
• Steepest Ascent/Descent Algorithm

Motivation
• Method of Lagrange multipliers:
  • Gives very useful insight into solutions
  • Analytical solution is practical only for small problems
  • Direct application is not practical for real-life problems, because these problems are too large
  • Difficulties arise when the problem is non-convex
• We often need to search for the solution of practical optimization problems using:
  • The objective function only, or
  • The objective function and its first derivative, or
  • The objective function and its first and second derivatives
Naïve One-Dimensional Search
• Suppose that we want to find the value of x that minimizes f(x), and that the only thing we can do is calculate the value of f(x) for any value of x
• We could then calculate f(x) for a range of values of x and choose the one that minimizes f(x) (a sketch follows below)
• This requires a considerable amount of computing time if the range is large and good accuracy is needed

[Figure: f(x) sampled over a range of x values.]
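A minimal sketch of this naïve search, on an assumed test function:

f = @(x) (x - 2).^2 + 1;            % illustrative convex function (assumed)
xs = linspace(0, 5, 1001);          % range of candidate values of x
[fbest, i] = min(f(xs));            % evaluate f over the whole range, keep the best
fprintf('naive minimum near x = %.3f (f = %.3f)\n', xs(i), fbest);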
One-Dimensional Search

[Figure sequence: f(x) evaluated at successive points x0, x1, x2, ...]

• Evaluate f(x) at a few points x0, x1, x2; if the function is convex, three points with the middle one lowest bracket the optimum
• Once the optimum is bracketed (say, between x1 and x2), we no longer need to consider x0
• Add a new point (x3, then x4, ...) inside the current search range to shrink the bracket
• Repeat the process until the range is sufficiently small
• The procedure is valid only if the function is convex!

A golden-section implementation of this bracketing search is sketched below.
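One standard way to implement this interval reduction is the golden-section search; the test function and range below are assumptions for illustration.

f = @(x) (x - 2).^2 + 1;        % convex test function (assumed)
a = 0; b = 5;                   % initial search range [a, b]
r = (sqrt(5) - 1)/2;            % golden ratio, ~0.618
x1 = b - r*(b - a);             % two interior trial points
x2 = a + r*(b - a);
while (b - a) > 1e-6            % shrink the bracket until it is small enough
    if f(x1) < f(x2)
        b = x2; x2 = x1; x1 = b - r*(b - a);   % optimum is left of x2: discard [x2, b]
    else
        a = x1; x1 = x2; x2 = a + r*(b - a);   % optimum is right of x1: discard [a, x1]
    end
end
xopt = (a + b)/2;
fprintf('minimum near x = %.4f, f = %.4f\n', xopt, f(xopt));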
Multi-Dimensional Search
• A single unidirectional search is not directly applicable
• A naïve (grid) search becomes totally impossible as the dimension of the problem increases
• If we can calculate the first derivatives of the objective function, much more efficient searches can be developed
• The gradient of a function gives the direction in which it increases/decreases fastest
Steepest Ascent/Descent Algorithm

[Figure sequence: contours of the objective function in the (x1, x2) plane. From the current point, move in the gradient direction; a unidirectional search along that direction finds the best step; the gradient is then recomputed and the process repeats until it converges on the optimum.]

A minimal steepest-descent sketch is given below.
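This MATLAB sketch runs steepest descent with a unidirectional search (via fminbnd) at each step; the objective, starting point and step bound are illustrative assumptions.

f  = @(x) 2*x(1)^2 + x(2)^2 + x(1)*x(2);   % convex quadratic objective (assumed)
g  = @(x) [4*x(1) + x(2); 2*x(2) + x(1)];  % its gradient
x  = [2; 2];                               % starting point (assumed)
for iter = 1:200
    d = -g(x);                             % steepest-descent direction
    if norm(d) < 1e-8, break; end          % stop when the gradient vanishes
    t = fminbnd(@(t) f(x + t*d), 0, 1);    % unidirectional search along d
    x = x + t*d;                           % move to the best point on that line
end
fprintf('minimum near (%.4f, %.4f)\n', x(1), x(2));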
Choosing a Direction
• The direction of steepest ascent/descent is not always the best choice
• Other techniques have been used with varying degrees of success
• In particular, the direction chosen must be consistent with the equality constraints

How far to go in that direction?
• Unidirectional searches can be time-consuming ("How do I know that I have to stop here?")
• Second-order techniques, which use information about the second derivative of the objective function, can be used to speed up the process
• Problem with the inequality constraints:
  • There may be a lot of inequality constraints
  • Checking all of them every time we move in one direction can take an enormous amount of computing time
Handling of Inequality Constraints

Penalty functions
• Replace the enforcement of inequality constraints by the addition of penalty terms, e.g. K(x − x^max)², to the objective function
• The stiffness K of the penalty function must be increased progressively to enforce the constraints tightly enough
• Not a very efficient method

Barrier functions
• Add a barrier cost that rises steeply as x approaches its limits x^min and x^max from inside the feasible range
• The barrier must be made progressively closer to the limit
• Works better than penalty functions
• This is the basis of interior point methods

[Figure: penalty cost growing outside the range (x^min, x^max) vs. barrier cost growing near the limits from inside.]

A sketch of the penalty approach is given below.
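This sketch applies a quadratic penalty for a single upper limit x ≤ xmax and increases the stiffness K progressively; the objective and numbers are illustrative assumptions.

f = @(x) (x - 4).^2;                       % unconstrained minimum at x = 4 (assumed)
xmax = 3;                                  % inequality constraint x <= xmax (assumed)
for K = [1 10 100 1000]                    % progressively stiffer penalty
    pen = @(x) f(x) + K*max(0, x - xmax).^2;   % penalty active only when x > xmax
    x = fminbnd(pen, 0, 10);               % minimize the penalized objective
    fprintf('K = %4d: x = %.4f\n', K, x);  % x approaches the limit 3 as K grows
end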
Non-Robustness

Different starting points may lead to different solutions if the problem is not convex.

[Figure: contours in the (x1, x2) plane with two distinct minima X and Y; searches started from different points converge to different minima.]
Linear Programming (LP)

Motivation
• Many optimization problems are linear:
  • Linear objective function
  • All constraints are linear
• Non-linear problems can be linearized:
  • Piecewise linear cost curves
  • DC power flow
• LP is an efficient and robust method to solve such problems

[Figure: piecewise linearization of a cost curve C_A(P_A).]
Mathematical formulation

Decision variables: x_j, j = 1, 2, ..., n

minimize    Σ_{j=1}^{n} c_j x_j

subject to: Σ_{j=1}^{n} a_ij x_j ≤ b_i,  i = 1, 2, ..., m

            Σ_{j=1}^{n} c_ij x_j = d_i,  i = 1, 2, ..., p

where c_j, a_ij, b_i, c_ij, d_i are constants.
Example 1

Maximize x + y
Subject to: x ≥ 0; y ≥ 0; x ≤ 3; y ≤ 4; x + 2y ≥ 2

[Figure sequence: the feasible region bounded by these constraints in the (x, y) plane. The level line x + y = 0 lies outside the region; as the objective increases through x + y = 1, 2, 3, ... the line sweeps across feasible solutions, until x + y = 7 touches the region only at the vertex (3, 4): the optimal solution.]

A linprog check of this example is sketched below.
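Example 1 can be checked with MATLAB's linprog (introduced at the end of this lecture) by converting the maximization into a minimization:

f = [-1; -1];            % maximize x + y  ==  minimize -x - y
A = [-1 -2];  b = -2;    % x + 2y >= 2 rewritten as -x - 2y <= -2
lb = [0; 0];             % x >= 0, y >= 0
ub = [3; 4];             % x <= 3, y <= 4
x = linprog(f, A, b, [], [], lb, ub)   % returns the optimal vertex [3; 4]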
Solving a LP problem (1)
• The constraints define a polyhedron in n dimensions
• If a solution exists, it will be at an extreme point (vertex) of this polyhedron
• Starting from any feasible solution, we can find the optimal solution by following the edges of the polyhedron
• The Simplex algorithm determines which edge should be followed next
Which direction?

[Figure: Example 1 again; from the current vertex of the feasible region, the Simplex algorithm picks the edge along which the objective x + y increases toward the optimal solution at x + y = 7.]
Solving a LP problem (2)
• If a solution exists, the Simplex algorithm will find it
• But it could take a long time for a problem with many variables!
• Interior point algorithms are an alternative
  • They are equivalent to optimization with barrier functions
Interior point methods

[Figure: a polyhedral feasible region with its extreme points (vertices) and constraints (edges).]

Simplex: search from vertex to vertex along the edges.
Interior-point methods: go through the inside of the feasible space.
Sequential Linear Programming (SLP)
• Used if more accuracy is required
• Algorithm (a minimal sketch follows below):
  1. Linearize the problem
  2. Find a solution using LP
  3. Linearize again around that solution
  4. Repeat until convergence
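A minimal SLP sketch on an assumed smooth objective: each iteration linearizes f around the current point and solves an LP restricted to a small box (a simple trust region). The function, starting point and step rule are illustrative assumptions, not the lecture's method in detail.

f = @(x) (x(1) - 1)^2 + (x(2) - 2)^2;     % nonlinear objective (assumed)
g = @(x) [2*(x(1) - 1); 2*(x(2) - 2)];    % its gradient
x = [0; 0];                                % starting point (assumed)
step = 1;                                  % half-width of the linearization box
for iter = 1:100
    % linearize: f(x + d) ~ f(x) + g(x)'*d, minimize over |d_i| <= step
    d = linprog(g(x), [], [], [], [], -step*[1;1], step*[1;1]);
    if f(x + d) < f(x)
        x = x + d;                         % accept the LP step
    else
        step = step/2;                     % otherwise shrink the linearization region
    end
    if step < 1e-6, break; end             % converged: box is sufficiently small
end
fprintf('SLP solution near (%.3f, %.3f)\n', x(1), x(2));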
Solving linear programming problems with MATLAB

x = linprog(f,A,b,Aeq,beq,lb,ub)

(Set A = [] and b = [] if no inequality constraints exist.)

Example:

minimize −x1 − x2/3

Subject to:
• Linear inequality constraints: x1 + x2 ≤ 2; x1 + x2/4 ≤ 1; x1 − x2 ≤ 2; −x1/4 − x2 ≤ 1; −x1 − x2 ≤ −1; −x1 + x2 ≤ 2
• Linear equality constraint: x1 + x2/4 = 1/2
• Bounds: −1 ≤ x1 ≤ 1.5; −0.5 ≤ x2 ≤ 1.25

f = [-1 -1/3];                % objective function coefficients
A = [ 1    1
      1    1/4
      1   -1
     -1/4 -1
     -1   -1
     -1    1];                % A of linear inequality constraints
b = [2 1 2 1 -1 2];           % b of linear inequality constraints
Aeq = [1 1/4];                % Aeq of linear equality constraint
beq = 1/2;                    % beq of linear equality constraint
lb = [-1, -0.5];              % bounds lb <= x <= ub
ub = [1.5, 1.25];
[x,fval] = linprog(f,A,b,Aeq,beq,lb,ub)   % LP solver

Solution:

x =
    0.1875
    1.2500

fval =
   -0.6042
Thank you
