Numerical Search Methods

• It may be impossible to solve algebraically for a maximum or a minimum using calculus.
• Various search methods allow us to approximate solutions to nonlinear optimization problems in a single independent variable.
• A unimodal function on an interval has exactly one point in the interval where a maximum or minimum occurs.
Search Method Paradigm
• The region [a, b] is divided into two overlapping intervals [a, x2] and [x1, b], where x1 < x2 are two test points. We then determine which subinterval must contain the optimal solution and continue the search on that subinterval.
• There are three cases in the maximization problem:
• If f(x1) < f(x2), then the solution lies in (x1, b].
• If f(x1) > f(x2), then the solution lies in [a, x2).
• If f(x1) = f(x2), then the solution lies in (x1, x2).
Dichotomous Search Method
• The Dichotomous Search Method computes the midpoint (a + b)/2 and then moves slightly to either side of it to compute two test points:

    x1 = (a + b)/2 − ε and x2 = (a + b)/2 + ε,

where ε is a very small number. The objective is to place the two test points as close together as possible. The procedure continues until it reaches some small interval containing the optimal solution.
Dichotomous Search Algorithm to maximize f(x) over the interval [a, b]

STEP 1: Initialize: Choose a small number ε > 0, such as 0.01. Select a small t such that 0 < t < b − a, called the length of uncertainty for the search. Calculate the number of iterations n using the formula

    n = ⌈ln((b − a)/t) / ln 2⌉
STEP 2: For k = 1 to n, do Steps 3 and 4.
STEP 3: Compute the test points

    x1 = (a + b)/2 − ε and x2 = (a + b)/2 + ε

STEP 4: (For a maximization problem)
If f(x1) ≥ f(x2), then b = x2; else a = x1. k = k + 1.
Return to Step 3.

STEP 5: Let

    x* = (a + b)/2 and MAX* = f(x*)
• Instead of determining the number of iterations in advance, we may wish to continue until the change in the dependent variable is less than some predetermined amount, say Δ; that is, continue to iterate until |f(a) − f(b)| ≤ Δ.
• To minimize a function y = f(x), either maximize −y or switch the directions of the inequality signs in Step 4.
Example
Maximize f(x) = −x² − 2x over the interval −3 ≤ x ≤ 6. We require the final interval of uncertainty to have length less than 0.2, and we choose ε = 0.01.
• We determine the number of iterations to be

    n = ⌈ln((6 − (−3))/0.2) / ln 2⌉ = ⌈ln 45 / ln 2⌉ = ⌈5.49⌉ = 6
• Results of a Dichotomous Search:

    k        a          b          x1         x2        f(x1)      f(x2)
    1   −3          6           1.49       1.51      −5.2001    −5.3001
    2   −3          1.51       −0.755     −0.735      0.939975   0.929775
    3   −3         −0.735      −1.8775    −1.8575     0.229994   0.264694
    4   −1.8775    −0.735      −1.31625   −1.29625    0.899986   0.912236
    5   −1.31625   −0.735      −1.03563   −1.01563    0.998731   0.999756
    6   −1.03563   −0.735      −0.89531   −0.87531    0.989041   0.984453

The final interval is [−1.03563, −0.87531], so

    x* = (−1.03563 − 0.87531)/2 = −0.95547 and MAX* = f(x*) = 0.99802
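The algorithm and table above can be cross-checked with a short Python sketch (a minimal implementation of the steps above; the function name and signature are my own):

```python
import math

def dichotomous_search(f, a, b, t, eps):
    """Maximize a unimodal f on [a, b]; t is the final length of uncertainty."""
    n = math.ceil(math.log((b - a) / t) / math.log(2))  # number of iterations
    for _ in range(n):
        mid = (a + b) / 2
        x1, x2 = mid - eps, mid + eps    # two test points straddling the midpoint
        if f(x1) >= f(x2):
            b = x2                       # the maximum lies in [a, x2)
        else:
            a = x1                       # the maximum lies in (x1, b]
    x_star = (a + b) / 2
    return x_star, f(x_star)

# The example above: maximize f(x) = -x^2 - 2x on [-3, 6] with t = 0.2, eps = 0.01
x_star, f_max = dichotomous_search(lambda x: -x * x - 2 * x, -3, 6, 0.2, 0.01)
```

Rounded to five decimals this reproduces the tabulated answer, x* = −0.95547 with f(x*) = 0.99802.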
Golden Section Search Method
• The Golden Section Search Method chooses x1 and x2 such that one of the two function evaluations in each step can be reused in the next step.
• The golden ratio is the ratio r satisfying

    (1 − r)/r = r/1  ⇒  r² = 1 − r  ⇒  r = (√5 − 1)/2 ≈ 0.618034
[Figure: the interval a ≤ x ≤ b with interior test points x1 < x2.]

The test points are placed symmetrically, so that

    (x2 − a)/(b − a) = r and (b − x1)/(b − a) = r,

i.e. x1 = a + (1 − r)(b − a) and x2 = a + r(b − a), where r = (√5 − 1)/2.
Golden Section Search Method to maximize f(x) over the interval a ≤ x ≤ b

STEP 1: Initialize: Choose a tolerance t > 0.

STEP 2: Set r = (√5 − 1)/2 and define the test points:

    x1 = a + (1 − r)(b − a)
    x2 = a + r(b − a)

STEP 3: Calculate f(x1) and f(x2).
STEP 4: (For a maximization problem) If f(x1) ≤ f(x2), then a = x1 and x1 = x2; else b = x2 and x2 = x1. Find the new x1 or x2 using the formula in Step 2.

STEP 5: If the length of the new interval from Step 4 is less than the specified tolerance t, then stop. Otherwise return to Step 3.

STEP 6: Estimate x* as the midpoint of the final interval,

    x* = (a + b)/2,

and compute MAX* = f(x*).
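A minimal Python sketch of these steps (the function name is mine; note how one test point and its function value are reused on each pass):

```python
import math

R = (math.sqrt(5) - 1) / 2   # the golden ratio r ≈ 0.618034

def golden_section_search(f, a, b, t):
    """Maximize a unimodal f on [a, b]; stop when the interval is shorter than t."""
    x1 = a + (1 - R) * (b - a)
    x2 = a + R * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a >= t:
        if f1 <= f2:                     # maximum lies in [x1, b]
            a, x1, f1 = x1, x2, f2       # reuse x2 as the new x1
            x2 = a + R * (b - a)
            f2 = f(x2)
        else:                            # maximum lies in [a, x2]
            b, x2, f2 = x2, x1, f1       # reuse x1 as the new x2
            x1 = a + (1 - R) * (b - a)
            f1 = f(x1)
    x_star = (a + b) / 2
    return x_star, f(x_star)

# The running example: maximize f(x) = -x^2 - 2x on [-3, 6] with t = 0.05
x_star, f_max = golden_section_search(lambda x: -x * x - 2 * x, -3, 6, 0.05)
```

With t = 0.05 this performs eleven interval reductions on the example and returns x* ≈ −0.99913.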
• To minimize a function y = f(x), either maximize −y or switch the directions of the inequality signs in Step 4.
• The advantage of the Golden Section Search Method is that only one new test point must be computed at each successive iteration.
• The length of the interval of uncertainty is 61.8% of the length of the previous interval of uncertainty.
• Results of a Golden Section Search:

    k        a          b          x1          x2         f(x1)      f(x2)
    1   −3          6           0.437694    2.562306   −1.06696   −11.69
    2   −3          2.562306   −0.87539     0.437694    0.984472   −1.06696
    3   −3          0.437694   −1.68692    −0.87539     0.528144    0.984472
    4   −1.68692    0.437694   −0.87539    −0.37384     0.984472    0.607918
    5   −1.68692   −0.37384    −1.18536    −0.87539     0.96564     0.984472
    6   −1.18536   −0.37384    −0.87539    −0.68381     0.984472    0.900025
    7   −1.18536   −0.68381    −0.99379    −0.87539     0.999961    0.984472
    8   −1.18536   −0.87539    −1.06696    −0.99379     0.995516    0.999961
    9   −1.06696   −0.87539    −0.99379    −0.94856     0.999961    0.997354
    10  −1.06696   −0.94856    −1.02174    −0.99379     0.999527    0.999961
    11  −1.02174   −0.94856    −0.99379    −0.97651     0.999961    0.999448

The final interval is [−1.02174, −0.97651], so

    x* = −0.99913 and MAX* = 0.999999
How small should the tolerance be?

If x* is the optimal point, then near x*

    f(x) ≈ f(x*) + ½ f″(x*)(x − x*)².

If we want the second term to be a fraction ε of the first term, then

    |x − x*| = |x*| √( 2ε |f(x*)| / ( (x*)² |f″(x*)| ) ).

As a rule of thumb, we need

    |x − x*| ≈ √ε |x*|.
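A quick numerical illustration of this rule of thumb, using the running example f(x) = −x² − 2x with x* = −1 (the demonstration itself is mine):

```python
f = lambda x: -x * x - 2 * x    # the running example; maximum is f(-1) = 1
x_star = -1.0
# Near the maximum the loss is quadratic: f(x*) - f(x* + d) is about d**2, so
# once d**2 falls below machine epsilon (~1e-16) the difference is lost in
# rounding noise; tolerances much below sqrt(eps)*|x*| buy nothing.
for d in (1e-2, 1e-4, 1e-8):
    print(d, f(x_star) - f(x_star + d))
```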
Fibonacci Search Method
• If the number of test points is specified in advance, then we can do slightly better than the Golden Section Search Method.
• This method achieves the largest interval reduction of any method using the same number of test points.
[Figure: the last two intervals of the search, with test points x_{n−2}, x_{n−1}, x_n; the final interval I_n contains two test points separated by αI_n.]

If f(x_{n−1}) > f(x_{n−2}), then the next-to-last interval satisfies

    I_{n−1} = I_n + (1 − α)I_n = (2 − α)I_n,

where ε = αI_n is the separation between the last two test points.
[Figure: if f(x_{n−1}) > f(x_{n−2}) and f(x_{n−2}) > f(x_{n−3}), the interval I_{n−2} is composed of I_{n−1} and I_n.]

    I_{n−2} = I_{n−1} + I_n = (2 − α)I_n + I_n = (3 − α)I_n
Each earlier interval is the sum of the two that follow it, I_{n−k} = I_{n−k+1} + I_{n−k+2}:

    I_{n−1} = (2 − α)I_n
    I_{n−2} = I_{n−1} + I_n = (2 − α)I_n + I_n = (3 − α)I_n
    I_{n−3} = I_{n−2} + I_{n−1} = (3 − α)I_n + (2 − α)I_n = (5 − 2α)I_n
    I_{n−4} = I_{n−3} + I_{n−2} = (5 − 2α)I_n + (3 − α)I_n = (8 − 3α)I_n
    ⋮
    I_{n−k} = (F_{k+2} − F_k α)I_n

where F_{k+2} = F_{k+1} + F_k, F_1 = F_2 = 1 (the Fibonacci numbers).
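The coefficient pattern above can be checked mechanically (a throwaway helper, not part of the search algorithm itself):

```python
def fib(k):
    """Fibonacci numbers with F_1 = F_2 = 1."""
    a, b = 1, 1
    for _ in range(k - 1):
        a, b = b, a + b
    return a

# I_{n-k} = (F_{k+2} - F_k * alpha) * I_n: the pairs (F_{k+2}, F_k) for k = 1..4
# should be (2,1), (3,1), (5,2), (8,3), matching (2-a), (3-a), (5-2a), (8-3a).
coeffs = [(fib(k + 2), fib(k)) for k in (1, 2, 3, 4)]
```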
[Figure: the initial interval I_1 = [a, b] with test points x1 and x2; I_2 = x2 − a and I_3 = x1 − a.]

The test points are

    x1 = a + I_3 = a + (F_{n−1} − F_{n−3}α)I_n = a + F_{n−1}I_n − F_{n−3}ε
    x2 = a + I_2 = a + (F_n − F_{n−2}α)I_n = a + F_n I_n − F_{n−2}ε

and taking k = n − 1 gives

    I_1 = (F_{n+1} − F_{n−1}α)I_n = F_{n+1}I_n − F_{n−1}ε  ⇒  I_n = (I_1 + F_{n−1}ε) / F_{n+1}
• If ε = 0, then the formulas simplify to

    I_n = I_1 / F_{n+1}

and

    x1 = a + (F_{n−1}/F_{n+1})(b − a)
    x2 = a + (F_n/F_{n+1})(b − a)
Fibonacci Search Method to maximize f(x) over the interval a ≤ x ≤ b

STEP 1: Initialize: Choose the number of test points n.

STEP 2: Define the test points:

    x1 = a + (F_{n−1}/F_{n+1})(b − a),  x2 = a + (F_n/F_{n+1})(b − a)

STEP 3: Calculate f(x1) and f(x2).
STEP 4: (For a maximization problem) If f(x1) ≤ f(x2), then a = x1 and x1 = x2; else b = x2 and x2 = x1. n = n − 1. Find the new x1 or x2 using the formula in Step 2.

STEP 5: If n > 1, return to Step 3.

STEP 6: Estimate x* as the midpoint of the final interval,

    x* = (a + b)/2,

and compute MAX* = f(x*).
Results of a Fibonacci Search with n = 12 test points (F13 = 233):

    n    Fn       a          b          x1         x2        f(x1)      f(x2)
    12   144  −3          6           0.437768   2.562232  −1.06718   −11.6895
    11   89   −3          2.562232   −0.87554    0.437768   0.984509   −1.06718
    10   55   −3          0.437768   −1.6867    −0.87554    0.52845     0.984509
    9    34   −1.6867     0.437768   −0.87554   −0.37339    0.984509    0.607361
    8    21   −1.6867    −0.37339    −1.18455   −0.87554    0.965942    0.984509
    7    13   −1.18455   −0.37339    −0.87554   −0.6824     0.984509    0.899132
    6    8    −1.18455   −0.6824     −0.99142   −0.87554    0.999926    0.984509
    5    5    −1.18455   −0.87554    −1.06867   −0.99142    0.995284    0.999926
    4    3    −1.06867   −0.87554    −0.99142   −0.95279    0.999926    0.997771
    3    2    −1.06867   −0.95279    −1.03004   −0.99142    0.999097    0.999926
    2    1    −1.03004   −0.95279    −1.03004   −0.99142    0.999097    0.999926
    1    1    −1.03004   −0.99142    −0.99142   −0.99142    0.999926    0.999926

The final interval is [−1.03004, −0.99142], so

    x* = −1.01073 and MAX* = 0.999885
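The run in the table can be reproduced with a Python sketch of the ε = 0 variant of the algorithm (the function name is mine; with ε = 0 the last two test points nearly coincide, so the check of the result is deliberately loose):

```python
def fibonacci_search(f, a, b, n):
    """Maximize a unimodal f on [a, b] using n test points (eps = 0 variant)."""
    F = [0, 1, 1]                 # F[k] = F_k with F_1 = F_2 = 1
    for _ in range(n - 1):
        F.append(F[-1] + F[-2])   # build the table up to F_{n+1}
    x1 = a + F[n - 1] / F[n + 1] * (b - a)
    x2 = a + F[n] / F[n + 1] * (b - a)
    f1, f2 = f(x1), f(x2)
    while n > 1:
        n -= 1
        if f1 <= f2:                     # maximum lies in [x1, b]
            a, x1, f1 = x1, x2, f2       # reuse x2 as the new x1
            x2 = a + F[n] / F[n + 1] * (b - a)
            f2 = f(x2)
        else:                            # maximum lies in [a, x2]
            b, x2, f2 = x2, x1, f1       # reuse x1 as the new x2
            x1 = a + F[n - 1] / F[n + 1] * (b - a)
            f1 = f(x1)
    x_star = (a + b) / 2
    return x_star, f(x_star)

# The example: n = 12 test points on [-3, 6], as in the table (F_13 = 233)
x_star, f_max = fibonacci_search(lambda x: -x * x - 2 * x, -3, 6, 12)
```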