Heuristic Search
Generate-and-test
Hill climbing
Best-first search
Problem reduction
Constraint satisfaction
Means-ends analysis
Algorithm : Generate-and-Test
1. Generate a possible solution. For some problems, this means generating a particular point in the problem space; for others, it means generating a path from a start state.
2. Test to see if this is actually a solution by comparing the chosen point, or the endpoint of the chosen path, to the set of acceptable goal states.
3. If a solution has been found, quit. Otherwise, return to step 1.
GENERATE-AND-TEST
Acceptable for simple problems.
GENERATE-AND-TEST
Generating solutions at random is known as the British Museum algorithm: wandering randomly through the problem space until a solution is stumbled upon.
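A minimal Python sketch of generate-and-test; the names generate_candidate and is_goal are illustrative placeholders supplied by the caller, not part of the slides, and the toy usage simply guesses a hidden bit string at random:

import random

def generate_and_test(generate_candidate, is_goal, max_tries=10000):
    """Repeatedly generate a candidate solution and test it against the goal."""
    for _ in range(max_tries):
        candidate = generate_candidate()   # systematic or random ("British Museum") generation
        if is_goal(candidate):
            return candidate               # acceptable solution found
    return None                            # give up after max_tries attempts

# Toy usage: guess a 3-bit string until it matches a hidden target.
target = [1, 0, 1]
print(generate_and_test(
    generate_candidate=lambda: [random.randint(0, 1) for _ in range(3)],
    is_goal=lambda s: s == target))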
Algorithm : Simple Hill-Climbing
1. Evaluate the initial state. If it is also a goal state, then return it and
quit. Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until there are no new operators left
to be applied in the current state:
(a) Select an operator that has not yet been applied to the current state and
apply it to produce a new state.
(b) Evaluate the new state.
(i) If it is a goal state, then return it and quit.
(ii) If it is not a goal state but it is better than the current state, then make it the
current state.
(iii) If it is not better than the current state, then continue in the loop.
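A minimal Python sketch of this loop; successors (the operators applicable to a state), evaluate (higher is better) and is_goal are illustrative names assumed to be supplied by the caller:

def simple_hill_climbing(initial, successors, evaluate, is_goal):
    """Move to the first successor that improves on the current state."""
    current = initial
    if is_goal(current):
        return current
    while True:
        improved = False
        for new_state in successors(current):          # operators not yet applied to current
            if is_goal(new_state):
                return new_state
            if evaluate(new_state) > evaluate(current):
                current = new_state                    # first better state becomes the current state
                improved = True
                break
        if not improved:
            return current                             # stuck: no operator gives a better state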
Example: The 8-Puzzle
1) Use as the heuristic function the number of tiles that are out of place, compared with the goal configuration.
2) At each step, choose the rule that gives the best increase in the function (i.e., the fewest tiles out of place).
Start state:      Goal state:      (the blank square is shown as _)
 2 8 3             1 2 3
 1 6 4             8 _ 4
 7 _ 5             7 6 5
Example
Hill-climbing trace (the value above each state is -(number of tiles out of place); U, D, L, R give the direction the blank moves):

 -4         -3         -3         -2         -1          0
 2 8 3      2 8 3      2 _ 3      _ 2 3      1 2 3      1 2 3
 1 6 4  U   1 _ 4  U   1 8 4  L   1 8 4  D   _ 8 4  R   8 _ 4
 7 _ 5      7 6 5      7 6 5      7 6 5      7 6 5      7 6 5
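The heuristic used in this trace is easy to write down. A small sketch, with states as 3x3 tuples and 0 standing for the blank; negating the count reproduces the -4 … 0 values above:

GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))

def tiles_out_of_place(state, goal=GOAL):
    """Number of non-blank tiles that are not in their goal position."""
    return sum(1 for r in range(3) for c in range(3)
               if state[r][c] != 0 and state[r][c] != goal[r][c])

start = ((2, 8, 3),
         (1, 6, 4),
         (7, 0, 5))
print(-tiles_out_of_place(start))   # -4, the value of the first state in the trace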
STEEPEST-ASCENT HILL CLIMBING
Considers all the moves from the current state.
Selects the best one as the next state.
Algorithm : Steepest-Ascent Hill Climbing or Gradient Search
1. Evaluate the initial state. If it is also a goal state, then return it and quit.
Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no
change to current state:
(a) Let SUCC be a state such that any possible successor of the current state will
be better than SUCC.
(b) For each operator that applies to the current state do:
(i) Apply the operator and generate a new state.
(ii) Evaluate the new state. If it is a goal state, then return it and quit. If not, compare it to SUCC. If it is better, then set SUCC to this state. If it is not better, leave SUCC alone.
(c) If SUCC is better than the current state, then set the current state to SUCC.
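The corresponding Python sketch for steepest-ascent hill climbing, using the same illustrative successors / evaluate / is_goal interface as before:

def steepest_ascent_hill_climbing(initial, successors, evaluate, is_goal):
    """Examine every successor and move to the best one, if it improves on the current state."""
    current = initial
    if is_goal(current):
        return current
    while True:
        succ = None                                    # plays the role of SUCC; None counts as worse than anything
        for new_state in successors(current):
            if is_goal(new_state):
                return new_state
            if succ is None or evaluate(new_state) > evaluate(succ):
                succ = new_state                       # best successor seen so far
        if succ is not None and evaluate(succ) > evaluate(current):
            current = succ                             # SUCC is better: take the step
        else:
            return current                             # a complete iteration produced no change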
HILL CLIMBING: DISADVANTAGES
Failure to find a solution
Either algorithm may terminate not by finding a goal state but by getting to a state from which no better state can be generated. This happens when the search reaches a:
Local maximum
Plateau
Ridge
HILL CLIMBING: DISADVANTAGES
Ways Out
Backtrack to some earlier node and try going in a different direction (useful for dealing with local maxima).
Make a big jump in some direction to try to get to a new section of the search space (useful for dealing with plateaus).
Apply two or more rules before doing the test (useful for dealing with ridges).
HILL CLIMBING: DISADVANTAGES
Hill climbing is a local method:
Decides what to do next by looking only at the “immediate”
consequences of its choices rather than by exhaustively exploring
all the consequences.
A Hill-Climbing Problem
Local: Add one point for every block that is resting on the thing it is supposed to be resting on. Subtract one point for every block that is sitting on the wrong thing.
(Initial state: 4    Goal state: 8)
One possible move (local score: 6)
Three possible moves (local scores: 4, 4, 4)
Hill climbing halts here because all three successor states have lower scores than the current state.
A Hill-Climbing Problem
Global: For each block that has a correct support structure (the complete structure underneath it is exactly as it should be), add one point for every block in that support structure; for each block that has an incorrect support structure, subtract one point for every block in the existing support structure.
(Initial state: -28    Goal state: +28)
One possible move (global score: -21)
Three possible moves (global scores: -28, -16, -15)
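The slides quote the scores but not the block layout, so the configuration below is an assumption: a single eight-block stack (A on H, H on G, ..., C on B, B on the table) whose goal is the reshuffled stack H on G, ..., B on A, A on the table, chosen so that the quoted values come out (local: 4 vs 8; global: -28 vs +28). A sketch of the two heuristics, with a state represented as a dict mapping each block to whatever it rests on:

GOAL = {"A": "table", "B": "A", "C": "B", "D": "C",
        "E": "D", "F": "E", "G": "F", "H": "G"}
INITIAL = {"B": "table", "C": "B", "D": "C", "E": "D",
           "F": "E", "G": "F", "H": "G", "A": "H"}     # assumed layout, not shown on the slides

def local_score(state, goal=GOAL):
    """+1 for each block resting on the thing it should rest on, -1 otherwise."""
    return sum(1 if state[b] == goal[b] else -1 for b in state)

def support(state, block):
    """The chain of blocks underneath `block`, from just below it down to the table."""
    chain, below = [], state[block]
    while below != "table":
        chain.append(below)
        below = state[below]
    return chain

def global_score(state, goal=GOAL):
    """+1 per block in a correct support structure, -1 per block in a wrong one."""
    score = 0
    for b in state:
        n = len(support(state, b))
        score += n if support(state, b) == support(goal, b) else -n
    return score

print(local_score(INITIAL), local_score(GOAL))     # 4 8
print(global_score(INITIAL), global_score(GOAL))   # -28 28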
SIMULATED ANNEALING
A variation of hill climbing in which, at the beginning of the
process, some downhill moves may be made.
The probability that a worse (downhill) move is accepted is
p' = e^(-ΔE/kT)
where ΔE is the amount by which the move worsens the evaluation, T is the temperature, and k is Boltzmann's constant. Taking k = 1 gives p' = e^(-ΔE/T).
SIMULATED ANNEALING: IMPLEMENTATION
It is necessary to select an annealing schedule which has three
components:
Initial value to be used for temperature
Criteria that will be used to decide when the
temperature will be reduced
Amount by which the temperature will be reduced.
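A Python sketch that ties the acceptance rule p' = e^(-ΔE/T) to a simple annealing schedule; the schedule parameters here (initial temperature t0, steps_per_level as the reduction criterion, and the factor alpha as the reduction amount) are illustrative choices, not values from the slides:

import math
import random

def simulated_annealing(initial, successors, evaluate,
                        t0=10.0, alpha=0.9, steps_per_level=100, t_min=1e-3):
    """Hill climbing that sometimes accepts downhill moves, with probability e^(-dE/T)."""
    current = best = initial
    t = t0                                         # 1) initial value of the temperature
    while t > t_min:
        for _ in range(steps_per_level):           # 2) criterion for when to reduce the temperature
            new_state = random.choice(successors(current))
            delta_e = evaluate(new_state) - evaluate(current)
            if delta_e > 0 or random.random() < math.exp(delta_e / t):
                current = new_state                # uphill always accepted, downhill with prob e^(-dE/T)
            if evaluate(current) > evaluate(best):
                best = current
        t *= alpha                                 # 3) amount by which the temperature is reduced
    return best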
BEST-FIRST SEARCH
Combines the advantages of both depth-first and breadth-first search into a single method.
Best-first search
• General approach of informed search:
– Best-first search: a node is selected for expansion based on
an evaluation function f(n)
• Idea: evaluation function measures distance to the
goal.
– Choose node which appears best
• Implementation:
– fringe is a queue sorted in decreasing order of desirability.
– Special cases: greedy search, A* search
BEST-FIRST SEARCH
At each step of the best-first search process, we select the most promising of the nodes we have generated so far.
BEST-FIRST SEARCH
The figure shows five steps of best-first search (the numbers are heuristic values):
Step 1: A
Step 2: expand A → B (3), C (5), D (1)
Step 3: expand D (value 1, the best) → E (4), F (6)
Step 4: expand B (value 3, now the best) → G (6), H (5)
Step 5: expand E (value 4, now the best) → I (2), J (1)
BEST-FIRST SEARCH VS HILL CLIMBING
Similar to steepest-ascent hill climbing, with two exceptions:
1. In hill climbing, one move is selected and all the others are rejected, never to be reconsidered; in best-first search, one move is selected, but the others are kept around so that they can be revisited later if the selected path becomes less promising.
2. The best available state is selected, even if that state has a value lower than the value of the state that was just explored (hill climbing stops if there is no successor better than the current state).
Greedy search example
Arad (366)
Greedy search example
Arad
Sibiu (253)   Timisoara (329)   Zerind (374)
• The first expansion step produces:
– Sibiu, Timisoara and Zerind
• Greedy best-first will select Sibiu.
Greedy search example
Arad
  Sibiu
    Fagaras
      Sibiu (253)   Bucharest (0)
• Expanding Fagaras produces Sibiu and Bucharest; Bucharest is the goal, so the search stops.
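A small sketch of greedy best-first search on this fragment of the map. The road list and the value h(Fagaras) = 176 are the usual textbook figures and are not shown on the slides; the other h values (366, 374, 329, 253, 0) are the ones quoted above:

import heapq

H = {"Arad": 366, "Zerind": 374, "Timisoara": 329,
     "Sibiu": 253, "Fagaras": 176, "Bucharest": 0}      # straight-line distance to Bucharest
ROADS = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
         "Sibiu": ["Arad", "Fagaras"],
         "Fagaras": ["Sibiu", "Bucharest"],
         "Timisoara": ["Arad"], "Zerind": ["Arad"], "Bucharest": ["Fagaras"]}

def greedy_best_first(start, goal):
    """Always expand the frontier city with the smallest h value."""
    frontier = [(H[start], start, [start])]              # priority queue ordered by h
    expanded = set()
    while frontier:
        _, city, path = heapq.heappop(frontier)
        if city == goal:
            return path
        if city in expanded:
            continue
        expanded.add(city)
        for nxt in ROADS[city]:
            heapq.heappush(frontier, (H[nxt], nxt, path + [nxt]))
    return None

print(greedy_best_first("Arad", "Bucharest"))   # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']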
Greedy search, evaluation
• Completeness: NO (cf. depth-first search)
– Check on repeated states
– Minimizing h(n) can result in false starts, e.g. Iasi to Fagaras.
Greedy search, evaluation
• Completeness: NO (cf. depth-first search)
• Time complexity: O(b^m)
– cf. worst-case depth-first search
(where m is the maximum depth of the search space)
– A good heuristic can give a dramatic improvement.
BEST-FIRST SEARCH
OPEN: nodes that have been generated but have not yet been examined.
OPEN is organized as a priority queue.
Algorithm : Best-First Search
1. Start with OPEN containing just the initial state.
2. Until a goal is found or there are no nodes left on OPEN do:
(a) Pick the best node on OPEN.
(b) Generate its successors.
(c) For each successor do:
(i) If it has not been generated before, evaluate it, add it to OPEN, and record its parent.
(ii) If it has been generated before, change the parent if this new path is better than the previous one. In that case, update the cost of getting to this node and to any successors that this node may already have.
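A minimal Python sketch of this bookkeeping. successors(state) is assumed to yield (next_state, step_cost) pairs, and f(state, cost_so_far) is the evaluation function (ignore the cost for greedy search, or return cost_so_far + h(state) for A*). Instead of propagating an improved cost to existing successors, the sketch simply re-inserts the improved node into OPEN:

import heapq

def best_first_search(start, successors, f, is_goal):
    """Always examine the most promising node on OPEN, recording parents and costs."""
    open_heap = [(f(start, 0), start)]       # OPEN: generated but not yet examined nodes
    g = {start: 0}                           # best known cost of reaching each node
    parent = {start: None}
    while open_heap:
        _, state = heapq.heappop(open_heap)
        if is_goal(state):
            path = []                        # rebuild the path from the recorded parents
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt, step in successors(state):
            new_cost = g[state] + step
            if nxt not in g or new_cost < g[nxt]:
                g[nxt] = new_cost            # first or better path: update cost and parent
                parent[nxt] = state
                heapq.heappush(open_heap, (f(nxt, new_cost), nxt))
    return None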