Lecture: Golden Section Search Method

The document describes the golden section search method for optimization. It notes two problems with the Fibonacci search method: the Fibonacci numbers must be calculated and stored, and the proportion of the eliminated region is not constant across iterations. The golden section search method overcomes both by placing points according to the golden ratio at every iteration, so that a constant proportion of the interval is eliminated while only one new function evaluation is required per iteration. It then presents the algorithm, which iteratively evaluates points to narrow the search interval until the desired accuracy is reached.


Optimization Techniques

Golden section search method


Golden Section Search Method

• One difficulty of the Fibonacci search method is that the Fibonacci numbers have to be calculated and stored.
• Another problem is that at every iteration the proportion of the eliminated region is not the same.
• In order to overcome these two problems and yet require only one new function evaluation per iteration, the golden section search method is used.

In this algorithm, the search space (a, b) is first linearly mapped to a unit interval search space (0, 1). Thereafter, two points at a distance τ from either end of the search space are chosen, so that at every iteration the eliminated region is a fixed fraction (1 − τ) of the interval of the previous iteration (Figure 2.10). This can be achieved by equating (1 − τ) with τ × τ, which yields the golden number

τ = (√5 − 1)/2 ≈ 0.618.

Figure 2.10 can be used to verify that in each iteration one of the two points x1 and x2 is always a point considered in the previous iteration.
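As a quick illustration (not part of the original slides), the defining relation τ^2 = 1 − τ can be checked numerically; the snippet below is a minimal Python sketch.

import math

# tau is the positive root of t**2 + t - 1 = 0, i.e. it satisfies tau**2 = 1 - tau
tau = (math.sqrt(5) - 1) / 2
print(round(tau, 6))            # 0.618034
print(abs(tau**2 - (1 - tau)))  # ~1e-16, confirming tau**2 = 1 - tau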
Algorithm for Golden Section Search Method

Algorithm:
• Step 1: Choose a lower bound a and an upper bound b. Also choose a small number ε. Normalize the variable x by using the equation ω = (x − a)/(b − a). Thus, aω = 0, bω = 1, and Lω = 1. Set k = 1.
• Step 2: Set ω1 = aω + (0.618)Lω and ω2 = bω − (0.618)Lω. Compute f(ω1) or f(ω2), depending on whichever of the two was not evaluated earlier. Use the fundamental region-elimination rule to eliminate a region. Set new aω and bω.
• Step 3: Is |Lω| < ε? If not, set k = k + 1 and go to Step 2; else, terminate.
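A minimal Python sketch of Steps 1–3 is given below. It is illustrative only: the function name golden_section_search and its signature are my own, and for brevity both interior points are re-evaluated at every iteration instead of carrying one value over from the previous iteration as the algorithm above prescribes.

def golden_section_search(f, a, b, eps=1e-3):
    """Minimize f on (a, b); returns the final bracketing interval in x."""
    tau = 0.618                         # golden ratio value used in the slides
    g = lambda w: f(a + w * (b - a))    # Step 1: transformed function g(w)
    aw, bw = 0.0, 1.0                   # normalized bounds, so L_w = 1 initially
    while (bw - aw) > eps:              # Step 3: stop when |L_w| < eps
        Lw = bw - aw
        w1 = aw + tau * Lw              # Step 2: interior points (note w2 < w1)
        w2 = bw - tau * Lw
        # fundamental region-elimination rule for minimization
        if g(w2) > g(w1):
            aw = w2                     # minimum cannot lie in (aw, w2)
        else:
            bw = w1                     # minimum cannot lie in (w1, bw)
    return a + aw * (b - a), a + bw * (b - a)   # map back to x-space

# Example use (cf. the second worked example below):
print(golden_section_search(lambda x: x**2 + 54/x, 0.0, 5.0))  # small interval near x = 3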
Golden Section Search Method

Note
• In this algorithm, the interval reduces to (0.618)^(n−1) of the original after n function evaluations. Thus, the number of function evaluations n required to achieve a desired accuracy ε is calculated by solving the following equation (a numerical sketch follows this note):
(0.618)^(n−1) (b − a) = ε.
• Like the Fibonacci method, only one function evaluation is required at each iteration, and the effective region elimination per function evaluation is exactly 38.2 per cent, which is higher than that in the interval halving method.
• This quantity is the same as that in the Fibonacci search for large n.
• In fact, for large n, the Fibonacci search is equivalent to the golden section search.
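Solving the accuracy equation for n and rounding up gives the required number of evaluations; the helper below is an illustrative Python sketch (the function name is mine).

import math

def evaluations_needed(a, b, eps):
    # Solve (0.618)**(n - 1) * (b - a) = eps for n, then round up.
    return math.ceil(1 + math.log(eps / (b - a)) / math.log(0.618))

print(evaluations_needed(0, 2, 1e-3))   # 17 evaluations for the first example below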
Example
Perform four iterations of the Golden Section Search Method to minimize
f(x) = x^4 − 14x^3 + 60x^2 − 70x, x ∈ (0, 2), with ε = 10^−3.

Solution
Iteration 1
Step 1. We choose a = 0 and b = 2. The transformation equation becomes ω = x/2. Thus, aω = 0, bω = 1, and Lω = 1. Since the golden section method works with a transformed variable ω, it is convenient to work with the transformed function
g(ω) = 16ω^4 − 112ω^3 + 240ω^2 − 140ω.
Step 2. We set ω1 = 0 + (0.618)(1) = 0.618 and ω2 = 1 − (0.618)(1) = 0.382. The corresponding function values are g(ω1) = −18.96 and g(ω2) = −24.36. Since g(ω1) > g(ω2), we eliminate the region (ω1, bω) = (0.618, 1). Thus, aω = 0 and bω = 0.618.
Golden Section Search Method

Iter  aω, bω                  ω1, ω2          g(ω1), g(ω2)      Condition        New interval     |Lω| < ε?
1     aω = 0, bω = 1          0.618, 0.382    −18.96, −24.36    g(ω1) > g(ω2)    (0, 0.618)       No
2     aω = 0, bω = 0.618      0.382, 0.236    −24.36, −21.10    g(ω2) > g(ω1)    (0.236, 0.618)   No
3     aω = 0.236, bω = 0.618  0.472, 0.382    −23.59, −24.36    g(ω1) > g(ω2)    (0.236, 0.472)   No
4     aω = 0.236, bω = 0.472  0.382, 0.326    −24.36, −23.83    g(ω2) > g(ω1)    (0.326, 0.472)   No

At the end of the fourth iteration, the minimum lies in
(0.326 × 2, 0.472 × 2) = (0.652, 0.944).
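The table can be reproduced with a short loop; the Python check below (illustrative, not from the lecture) prints the normalized and original intervals after each of the four iterations.

def g(w):
    # transformed objective g(w) = f(2w) for f(x) = x^4 - 14x^3 + 60x^2 - 70x
    return 16*w**4 - 112*w**3 + 240*w**2 - 140*w

aw, bw = 0.0, 1.0
for k in range(1, 5):
    Lw = bw - aw
    w1, w2 = aw + 0.618*Lw, bw - 0.618*Lw
    if g(w2) > g(w1):
        aw = w2                      # eliminate (aw, w2)
    else:
        bw = w1                      # eliminate (w1, bw)
    print(k, round(aw, 3), round(bw, 3), round(2*aw, 3), round(2*bw, 3))
# last line printed: 4 0.326 0.472 0.652 0.944  (interval in x is 2*(aw, bw))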
Example
Perform three iterations of the Golden Section Search Method to minimize
f(x) = x^2 + 54/x, x ∈ (0, 5), with ε = 10^−3.

Solution
Iteration 1
Step 1. We choose a = 0 and b = 5. The transformation equation becomes ω = x/5. Thus, aω = 0, bω = 1, and Lω = 1. Since the golden section method works with a transformed variable ω, it is convenient to work with the transformed function
g(ω) = 25ω^2 + 54/(5ω).
Step 2. We set ω1 = 0 + (0.618)(1) = 0.618 and ω2 = 1 − (0.618)(1) = 0.382. The corresponding function values are g(ω1) = 27.02 and g(ω2) = 31.92. Since g(ω1) < g(ω2), the minimum cannot lie at any point smaller than ω = 0.382. Thus, we eliminate the region (aω, ω2) = (0, 0.382), so that aω = 0.382 and bω = 1.
At this stage, Lω = 1 − 0.382 = 0.618. The region eliminated in this iteration is shown in Figure 2.11, together with the position of the exact minimum at ω = 0.6.
Step 3. Since |Lω| is not smaller than ε, we set k = 2 and move to Step 2. This completes one iteration of the golden section search method.
Golden Section Search Method

Iter  aω, bω                  ω1, ω2          g(ω1), g(ω2)    Condition        New interval      |Lω| < ε?
1     aω = 0, bω = 1          0.618, 0.382    27.02, 31.92    g(ω1) < g(ω2)    (0.382, 1)        No
2     aω = 0.382, bω = 1      0.764, 0.618    28.73, 27.02    g(ω1) > g(ω2)    (0.382, 0.764)    No
3     aω = 0.382, bω = 0.764  0.618, 0.528    27.02, 27.43    g(ω1) < g(ω2)    (0.528, 0.764)    No

At the end of the third iteration, the minimum lies in
(0.528 × 5, 0.764 × 5) = (2.64, 3.82).
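The same three iterations can be verified with a short loop; the snippet below is an illustrative Python check of the table above.

def g(w):
    # transformed objective g(w) = f(5w) for f(x) = x^2 + 54/x
    return 25*w**2 + 54/(5*w)

aw, bw = 0.0, 1.0
for k in range(1, 4):
    Lw = bw - aw
    w1, w2 = aw + 0.618*Lw, bw - 0.618*Lw
    if g(w2) > g(w1):
        aw = w2                      # eliminate (aw, w2)
    else:
        bw = w1                      # eliminate (w1, bw)
print(round(5*aw, 2), round(5*bw, 2))   # 2.64 3.82, bracketing the minimum at x = 3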
Exercise

1. Using the Fibonacci search method, minimize f(x) = x^5 − 5x^3 − 20x + 5 in the interval (0, 5) with n = 7. Do 5 iterations.
2. Using Fibonacci search, bracket the minimum of the function f(x) = x + 4/x on [0, 4]. Do 5 iterations.
3. Given f(x) = x^2 + 2x in the interval [3, 4], use the Golden section method to perform three iterations and identify the interval in which the optimum lies.
4. Given f(x) = x(x − 1.5) in the interval [0, 10], use three iterations of the Fibonacci search method (i.e., n = 4 with F0 = 1, F1 = 1) to find the interval in which the optimum lies.
5. Using the Golden section method, bracket the minimum of the function f(x) = x^5 − 5x^3 − 20x + 5 in the interval (0, 5). Do 3 iterations.
6. Bracket the minimum of the function f(x) = x^3 − 6x^2 + 9x + 6 on (0, 4) using the Golden section method. Do 3 iterations.
