Lecture: Golden Section Search Method
Algorithm for Golden Section Search Method
Algorithm:
▶ Step 1: Choose a lower bound a, an upper bound b, and a small termination
  parameter ε. Normalize the variable x using ω = (x − a)/(b − a). Thus,
  aω = 0, bω = 1, and Lω = 1. Set k = 1.
▶ Step 2: Set ω1 = aω + (0.618)Lω and ω2 = bω − (0.618)Lω. Compute f(ω1) or
  f(ω2), whichever of the two was not evaluated earlier. Use the fundamental
  region-elimination rule to eliminate a region, and set the new aω and bω.
▶ Step 3: Is |Lω| < ε? If no, set k = k + 1 and go to Step 2; else, terminate.
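As a concrete illustration of Steps 1–3, here is a minimal Python sketch of the procedure (not from the lecture); the helper name golden_section_search, the default tolerance, and the use of 0.618 as a fixed constant are illustrative choices.

```python
def golden_section_search(f, a, b, eps=1e-3):
    """Minimize a unimodal f on (a, b) with the golden section method.

    Works in the normalized variable w = (x - a)/(b - a), so the initial
    interval is (a_w, b_w) = (0, 1) with length L_w = 1, as in Step 1.
    eps is the tolerance on the normalized interval length |L_w| (Step 3).
    """
    tau = 0.618                          # golden-section fraction used in the lecture
    g = lambda w: f(a + w * (b - a))     # transformed function g(w) = f(x)

    a_w, b_w = 0.0, 1.0
    L_w = b_w - a_w
    w1, w2 = a_w + tau * L_w, b_w - tau * L_w   # Step 2: two interior points
    g1, g2 = g(w1), g(w2)

    while abs(b_w - a_w) > eps:          # Step 3: stop when |L_w| < eps
        if g1 < g2:                      # region elimination: drop (a_w, w2)
            a_w, w2, g2 = w2, w1, g1     # old w1 is reused as the new w2
            L_w = b_w - a_w
            w1 = a_w + tau * L_w         # only one fresh evaluation per pass
            g1 = g(w1)
        else:                            # region elimination: drop (w1, b_w)
            b_w, w1, g1 = w1, w2, g2     # old w2 is reused as the new w1
            L_w = b_w - a_w
            w2 = b_w - tau * L_w
            g2 = g(w2)

    # return the midpoint of the final bracket, mapped back to x
    return a + 0.5 * (a_w + b_w) * (b - a)

# usage (illustrative objective, not from the lecture):
x_star = golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0, eps=1e-4)
```

After the first iteration only one new function evaluation is needed per pass, because one interior point is always reused; this is what gives the (0.618)^(n−1) reduction quoted in the note below.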
Golden Section Search Method
Note
• In this algorithm, the interval reduces to (0.618)^(n−1) of its original length
  after n function evaluations. Thus, the number of function evaluations n
  required to achieve a desired accuracy ε is found by solving the equation
  (0.618)^(n−1) (b − a) = ε.
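To make the note concrete, the short sketch below solves (0.618)^(n−1)(b − a) = ε for n by taking logarithms; the values a = 0, b = 5, ε = 10⁻³ are only illustrative.

```python
import math

def evaluations_needed(a, b, eps):
    """Smallest integer n with (0.618)**(n - 1) * (b - a) <= eps."""
    n = 1 + math.log(eps / (b - a)) / math.log(0.618)
    return math.ceil(n)

# Illustrative accuracy requirement: bracket within 1e-3 on (0, 5)
print(evaluations_needed(0.0, 5.0, 1e-3))   # prints 19
```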
Solution
Iteration 1:
Step 1. We choose a = 0 and b = 5. The transformation equation becomes
ω = x/5. Thus, aω = 0, bω = 1, and Lω = 1. Since the golden section
method works with the transformed variable ω, it is convenient to work
with the transformed function g(ω) = f(a + (b − a)ω) = f(5ω):
 aω & bω        ωi           g(ωi)        Condition   new interval      |Lω| < ε?
 aω = 0         ω1 = 0.618   g1 = 27.02   g1 < g2     (0.382, 1)        No
 bω = 1         ω2 = 0.382   g2 = 31.92
 aω = 0.382     ω1 = 0.764   g1 = 28.73   g1 > g2     (0.382, 0.764)    No
 bω = 1         ω2 = 0.618   g2 = 27.02
 aω = 0.382     ω1 = 0.618   g1 = 27.02   g1 < g2     (0.528, 0.764)    No
 bω = 0.764     ω2 = 0.528   g2 = 27.43
At the end of the third iteration, the minimum is bracketed, in terms of the
original variable x, in (0.528 × 5, 0.764 × 5) = (2.64, 3.82).
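The tabulated g-values are consistent with the objective f(x) = x² + 54/x minimized over (0, 5); assuming that objective (it is not restated on this slide), the sketch below reproduces the three table rows. For brevity it re-evaluates both interior points each pass instead of reusing one, unlike the algorithm above.

```python
# Assumed objective: the g-values in the table match f(x) = x**2 + 54/x on (0, 5);
# this is inferred from the table, not stated on the slide above.
def f(x):
    return x ** 2 + 54.0 / x

a, b = 0.0, 5.0
g = lambda w: f(a + w * (b - a))     # transformed function g(w) = f(5w), as in Step 1

tau = 0.618
a_w, b_w = 0.0, 1.0
for k in range(1, 4):                # the three iterations shown in the table
    L_w = b_w - a_w
    w1, w2 = a_w + tau * L_w, b_w - tau * L_w
    g1, g2 = g(w1), g(w2)            # both re-evaluated here for simplicity
    if g1 < g2:                      # eliminate (a_w, w2)
        a_w = w2
    else:                            # eliminate (w1, b_w)
        b_w = w1
    print(f"k={k}: w1={w1:.3f} w2={w2:.3f} g1={g1:.2f} g2={g2:.2f} "
          f"-> ({a_w:.3f}, {b_w:.3f})")

# map the final bracket back to x: approximately (2.64, 3.82)
print((a + a_w * (b - a), a + b_w * (b - a)))
```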
Exercise