
Heuristic Search

The document discusses heuristic techniques and heuristic search algorithms. It begins by defining a heuristic as a technique that can find approximate solutions faster than classic methods, though it may not guarantee an optimal solution. It then discusses various aspects of heuristic functions, heuristic search, and specific heuristic search algorithms like generate-and-test, hill climbing, and steepest-ascent hill climbing. The document provides examples to illustrate how these algorithms work and also notes some disadvantages like getting stuck in local maxima.


HEURISTIC TECHNIQUE

• A heuristic is a technique for solving a problem faster
than classic methods, or for finding an approximate
solution when classic methods cannot find an exact one.
• It is a kind of shortcut: we often trade one of
optimality, completeness, accuracy, or precision for
speed. A heuristic (or heuristic function) is used in
search algorithms: at each branching step, it
evaluates the available information and makes a
decision on which branch to follow.
• It does so by ranking the alternatives. A heuristic is
any device that is often effective but is not
guaranteed to work in every case.
Heuristic functions
• A heuristic function is a function that maps
from problem state descriptions to measures
of desirability, usually represented as numbers.
• A well-designed heuristic function plays an
important role in guiding a search process
toward a solution.
• In chess, a good state has a high heuristic value,
whereas in the travelling salesman problem a good
state has a low one (a short remaining tour): whether
we maximize or minimize depends on how the
function is defined.

• The purpose of a heuristic function is to guide
the search process in a profitable direction by
suggesting which path to follow first when
more than one is available.
• There is a trade-off between the cost of
evaluating a heuristic function and the savings
in search time that the function provides.
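
As a small illustration (not from the slides), a cheap heuristic function for the 8-puzzle simply counts misplaced tiles; the state encoding and the function name here are assumptions for the sketch:

```python
# Hypothetical sketch: a misplaced-tiles heuristic for the 8-puzzle.
# States are 3x3 tuples; 0 denotes the blank, which is not counted.

GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))

def h_misplaced(state, goal=GOAL):
    """Number of non-blank tiles that are not where the goal puts them."""
    return sum(
        1
        for r in range(3)
        for c in range(3)
        if state[r][c] != 0 and state[r][c] != goal[r][c]
    )

start = ((2, 8, 3),
         (1, 6, 4),
         (7, 0, 5))
print(h_misplaced(start))  # 4: tiles 2, 8, 1, 6 are out of place
print(h_misplaced(GOAL))   # 0: h is zero exactly at the goal
```

Evaluating this function is nearly free, which is the trade-off the slide describes: a cheap function saves less search than a more informed one, but costs almost nothing per state.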
Heuristic search
• A heuristic search is a technique that improves
the efficiency of a search process by sacrificing
claims of completeness. Heuristics are like tour
guides for the search.
• Using good heuristics we can find good (though
possibly non-optimal) solutions to hard
problems in less than exponential time.
So why do we need heuristics?
• One reason is to produce, in a reasonable
amount of time, a solution that is good
enough for the problem in question.
• It doesn't have to be the best; an
approximate solution will do, provided it can
be found quickly enough.
• Many search problems are exponential in the
worst case. Heuristic search lets us reduce
this to something closer to polynomial.
• We use heuristics in AI because they can be
applied in situations where no efficient exact
algorithm is known.
Heuristic search

 Generate-and-test

 Hill climbing

 Best-first search

 Problem reduction

 Constraint satisfaction

 Means-ends analysis

Algorithm : Generate-and-Test

1. Generate a possible solution.


2. Test to see if this is actually a solution by comparing
the chosen point, or the endpoint of the chosen path, to
the set of acceptable goal states.
3. If a solution has been found, quit. Otherwise, return to
step 1.
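
This loop can be sketched directly. Here is a minimal, hypothetical version that finds the combination of a 3-digit lock (the first example below) by exhaustive generation; the secret value and all names are illustrative:

```python
import itertools

def generate_and_test(is_solution, candidates):
    """1. Generate a possible solution.  2. Test it.  3. Quit or repeat."""
    for candidate in candidates:
        if is_solution(candidate):
            return candidate
    return None  # search space exhausted without finding a solution

# Example: find the key of a 3-digit lock (the secret is assumed here).
SECRET = (4, 7, 2)
key = generate_and_test(
    is_solution=lambda c: c == SECRET,
    candidates=itertools.product(range(10), repeat=3),  # 000 .. 999
)
print(key)  # (4, 7, 2)
```

Note that nothing guides the generator: in the worst case all 1000 candidates are produced, which is exactly why the next slide calls the method inefficient for large spaces.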

GENERATE-AND-TEST
 Acceptable for simple problems.

 E.g.: 1. finding the key of a 3-digit lock;
2. the 8-puzzle problem.

 Inefficient for problems with a large search space.

 Done systematically, it amounts to a depth-first search of the space of
candidates, so in the worst case every possible solution is generated
before the test succeeds.

GENERATE-AND-TEST
 Generate solutions randomly: the British Museum algorithm;
wandering through the space at random.

 Exhaustive generate-and-test: consider each case in depth.

 Heuristic generate-and-test: do not consider paths that seem
unlikely to lead to a solution.
 Plan-generate-test:
 Create a list of candidates.
 Apply generate-and-test to that list on the basis of
constraint satisfaction.
Ex: DENDRAL, which infers the structure of organic
compounds from mass spectrogram and nuclear magnetic
resonance (NMR) data.
HILL CLIMBING

 Generate-and-test + a direction in which to move (feedback from the test
procedure).

 Test function + heuristic function = hill climbing.

 A heuristic function (objective function) estimates how close a given
state is to a goal state.
 Hill climbing can be used to solve problems that have many solutions,
some of which are better than others.

 Hill climbing is often used when a good heuristic function is available
for evaluating states but no other useful knowledge is available.
 Example: in the travelling salesman problem, the straight-line ("as the
crow flies") distance between two cities can be a heuristic measure of
the remaining distance.
 In this way the heuristic value is a measure of the remaining cost from
the current state to the goal state.
SIMPLE HILL CLIMBING
 The key difference between simple hill climbing and generate-and-test
is the use of an evaluation function as a way to inject task-specific
knowledge into the control process.

 "Better" may mean a higher value of the heuristic function, or a lower
one, depending on how the function is defined.
Algorithm : Simple Hill-Climbing
1. Evaluate the initial state. If it is also a goal state, then return it and
quit. Otherwise, continue with the initial state as the current state.

2. Loop until a solution is found or until there are no new operators left
to be applied in the current state:
(a) Select an operator that has not yet been applied to the current state and
apply it to produce a new state.
(b) Evaluate the new state.
(i) If it is a goal state, then return it and quit.

(ii) If it is not a goal state but it is better than the current state, then make it the
current state.

(iii) If it is not better than the current state, then continue in the loop.
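
A minimal sketch of this loop (the function names, the successor model, and the convention that higher h is better are assumptions, not from the slides):

```python
def simple_hill_climbing(initial, successors, h, is_goal):
    """Take the FIRST generated successor that is better than the
    current state; stop at a goal or when no operator yields an
    improvement (steps 1-2 of the algorithm above)."""
    current = initial
    if is_goal(current):
        return current
    while True:
        improved = False
        for new_state in successors(current):   # operators, one by one
            if is_goal(new_state):
                return new_state                # (i) goal found: quit
            if h(new_state) > h(current):       # (ii) better: move there
                current = new_state
                improved = True
                break                           # do NOT examine the rest
            # (iii) not better: continue in the loop
        if not improved:
            return current          # no new operator helps: stop

# Toy usage: climb toward the maximum of h(x) = -(x - 3)**2.
peak = simple_hill_climbing(
    initial=0,
    successors=lambda x: [x - 1, x + 1],
    h=lambda x: -(x - 3) ** 2,
    is_goal=lambda x: False,  # no explicit goal; stop at a maximum
)
print(peak)  # 3
```

The `break` is what makes this "simple" hill climbing: the first improving move is taken, and the remaining successors of that state are never examined.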

EX
1) Use as heuristic function (the negative of) the number of tiles
out of place.
2) Choose the rule giving the best increase in the function.

Start state:      Goal state:
2 8 3             1 2 3
1 6 4             8 _ 4
7 _ 5             7 6 5

Example (the letter above each arrow is the move of the blank:
U = up, D = down, L = left, R = right):

h = -4        h = -3        h = -3
2 8 3    U    2 8 3    U    2 _ 3    L
1 6 4   -->   1 _ 4   -->   1 8 4   -->
7 _ 5         7 6 5         7 6 5

h = -2        h = -1        h = 0
_ 2 3    D    1 2 3    R    1 2 3
1 8 4   -->   _ 8 4   -->   8 _ 4
7 6 5         7 6 5         7 6 5
STEEPEST-ASCENT HILL CLIMBING
 Considers all the moves from the current state.
 Selects the best one as the next state.

 Also known as Gradient Search.

Algorithm : Steepest-Ascent Hill Climbing or
gradient search
1. Evaluate the initial state. If it is also a goal state, then return it and quit.
Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no
change to current state:
(a) Let SUCC be a state such that any possible successor of the current state will
be better than SUCC.
(b) For each operator that applies to the current state do:

(i) Apply the operator and generate a new state.

(ii) Evaluate the new state. If it is a goal state, then return it and quit. If not, compare
it to SUCC. If it is better, then set SUCC to this state. If it is not better, leave SUCC
alone.

(c) If SUCC is better than the current state, then set the current state to SUCC.
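
A sketch of steepest ascent under the same assumed interface as simple hill climbing (higher h is better; all names are illustrative):

```python
def steepest_ascent(initial, successors, h, is_goal):
    """Examine ALL successors of the current state each iteration and
    move to the best one, but only if it improves on the current state."""
    current = initial
    if is_goal(current):
        return current
    while True:
        best = None  # SUCC: the best successor seen this iteration
        for new_state in successors(current):
            if is_goal(new_state):
                return new_state
            if best is None or h(new_state) > h(best):
                best = new_state
        if best is None or h(best) <= h(current):
            return current          # a full iteration produced no change
        current = best              # set the current state to SUCC

# Toy usage: same landscape as before; the climber reaches x = 3.
print(steepest_ascent(
    initial=0,
    successors=lambda x: [x - 1, x + 1],
    h=lambda x: -(x - 3) ** 2,
    is_goal=lambda x: False,
))  # 3
```

The only difference from simple hill climbing is that the inner loop runs to completion before a move is made, which is why this variant is also called gradient search.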

HILL CLIMBING: DISADVANTAGES
 Fail to find a solution:
 Either algorithm may terminate not by finding a goal state but by
reaching a state from which no better state can be generated.

 This happens if the program has reached:

 Local maximum: a state that is better than all of its
neighbours, but not better than some other states farther away.
 Plateau: a flat area of the search space in which all
neighbouring states have the same value.
 Ridge: a special kind of local maximum. It is an area of the search
space that is higher than the surrounding areas but cannot be
traversed by single moves in any one direction; climbing it
requires moving in several directions at once.
The orientation of the high region, compared to the set of
available moves, makes it impossible to climb up by single moves.
Hill-Climbing Dangers
 Local maximum

 Plateau

 Ridge

HILL CLIMBING: DISADVANTAGES
Ways Out

 Backtrack to some earlier node and try going in a different
direction (a good way of dealing with local maxima).

 Make a big jump to try to get into a new section of the space
(a good way of dealing with plateaus).

 Move in several directions at once (a good strategy for dealing
with ridges).

HILL CLIMBING: DISADVANTAGES
 Hill climbing is a local method:
Decides what to do next by looking only at the “immediate”
consequences of its choices rather than by exhaustively exploring
all the consequences.

 Global information might be encoded in heuristic functions.

A Hill-Climbing Problem
Local heuristic: add one point for every block that is resting on the thing it is
supposed to be resting on; subtract one point for every block that is sitting on
the wrong thing.
[Figure omitted: the initial state scores 4 and the goal state scores 8.]

One possible move
[Figure omitted.]

Three possible moves
[Figure omitted: the three resulting states all score 4.]
Hill climbing will halt, because all successor states have a lower score than
the current state.

A Hill-Climbing Problem
Global heuristic: add one point for every block in a correct support structure;
subtract one point for every block in a wrong support structure.
[Figure omitted: the initial state scores -28 and the goal state scores 28.]

One possible move
[Figure omitted: the resulting state scores -21.]

Three possible moves
[Figure omitted: the three resulting states score -28, -16, and -15.]
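
The two heuristics can be made concrete in code. This is a hypothetical sketch: the slides' figures are missing from this extract, so the classic Rich & Knight configuration is assumed (a single stack B..H, bottom to top, with A on top; goal a single stack A at the bottom up to H). Under that assumption the sketch reproduces the scores 4, -28, and -21 quoted above:

```python
# Hypothetical sketch of the local and global blocks-world heuristics.
# States are lists of stacks; each stack is listed bottom -> top.

GOAL = ["A", "B", "C", "D", "E", "F", "G", "H"]   # bottom -> top

def local_score(stacks, goal=GOAL):
    """+1 per block resting on the thing it should rest on, else -1.
    A bottom block rests on the table, which is correct only if the
    goal also puts that block at the bottom."""
    score = 0
    for stack in stacks:
        for i, block in enumerate(stack):
            below = stack[i - 1] if i > 0 else "table"
            g = goal.index(block)
            goal_below = goal[g - 1] if g > 0 else "table"
            score += 1 if below == goal_below else -1
    return score

def global_score(stacks, goal=GOAL):
    """For each block, count every block in its support structure:
    +1 each if the whole structure below it is correct, else -1 each."""
    score = 0
    for stack in stacks:
        for i, block in enumerate(stack):
            g = goal.index(block)
            score += i if stack[:i] == goal[:g] else -i
    return score

start = [["B", "C", "D", "E", "F", "G", "H", "A"]]
after = [["B", "C", "D", "E", "F", "G", "H"], ["A"]]  # A moved to table

print(local_score(start), global_score(start))   # 4 -28
print(local_score(after), global_score(after))   # 6 -21
```

The point of the comparison survives even without the figures: the local function cannot tell the successors of `after` apart (they all look equally good), while the global function, by scoring whole support structures, encodes enough global information to rank them.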
SIMULATED ANNEALING
 A variation of hill climbing in which, at the beginning of the
process, some downhill moves may be made.

 The aim is to do enough exploration of the whole space early on
that the final solution is relatively insensitive to the starting state.

 This lowers the chances of getting caught at a local maximum, a
plateau, or a ridge.

 The term simulated annealing derives from the roughly
analogous physical process of heating and then slowly cooling
a substance to obtain a strong crystalline structure.

 The simulated annealing process lowers the temperature
by slow stages until the system "freezes" and no further
changes occur.
SIMULATED ANNEALING
 The probability of a transition to a higher-energy state is given by
the function:
 P = e^(-ΔE/kT)
where ΔE is the positive change in the energy level,
T is the temperature, and
k is Boltzmann's constant.

Taking k = 1 (absorbing it into T) gives:
 P' = e^(-ΔE/T)

 Annealing schedule: if the temperature is lowered
sufficiently slowly, then the goal will be attained.
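
To see how the schedule bites, here is a quick numeric check (illustrative numbers, with k = 1 as above) of the acceptance probability for a fixed uphill move of ΔE = 1 as the temperature falls:

```python
import math

# p = exp(-dE/T) for a fixed uphill step dE = 1 at falling temperatures.
for t in (10.0, 1.0, 0.1):
    print(f"T = {t:>4}: p = {math.exp(-1.0 / t):.6f}")
# T = 10.0: p ≈ 0.904837  (bad moves usually accepted)
# T =  1.0: p ≈ 0.367879
# T =  0.1: p ≈ 0.000045  (bad moves essentially never accepted)
```

So early in the process almost any move is allowed, which provides the exploration described above, and as T approaches zero the behaviour converges to ordinary hill climbing.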
Algorithm : Simulated Annealing

SIMULATED ANNEALING:
IMPLEMENTATION
 It is necessary to select an annealing schedule which has three
components:
 Initial value to be used for temperature
 Criteria that will be used to decide when the
temperature will be reduced
 Amount by which the temperature will be reduced.
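
A minimal sketch (all names and constants are illustrative assumptions, not the slides' listing) showing the three schedule components: an initial temperature, a criterion for when to reduce it (here, a fixed number of steps per temperature), and the amount of each reduction (a geometric factor):

```python
import math
import random

def simulated_annealing(initial, successors, energy,
                        t_initial=10.0,   # schedule: initial temperature
                        t_factor=0.95,    # schedule: amount of reduction
                        steps_per_t=20):  # schedule: when to reduce
    """Minimize `energy`.  A worse (uphill-in-energy) move is accepted
    with probability p = exp(-dE/T), which shrinks as T is lowered."""
    current = initial
    best = current
    t = t_initial
    while t > 1e-3:                  # the system "freezes" near T = 0
        for _ in range(steps_per_t):
            candidate = random.choice(successors(current))
            d_e = energy(candidate) - energy(current)
            if d_e <= 0 or random.random() < math.exp(-d_e / t):
                current = candidate
                if energy(current) < energy(best):
                    best = current
        t *= t_factor                # lower the temperature by slow stages
    return best

# Toy usage: a landscape with a local minimum at x = 0 (energy 5) and
# the global minimum at x = 10 (energy 0).  Plain hill climbing started
# at 0 would be stuck; annealing can escape via early downhill moves.
f = lambda x: min((x - 10) ** 2, x ** 2 + 5)
result = simulated_annealing(0, lambda x: [x - 1, x + 1], f)
```

Because the run is randomized, `result` varies between runs; what the schedule guarantees in this sketch is only that the returned state is never worse than the starting one, since `best` tracks the lowest-energy state seen.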

BEST-FIRST SEARCH
 Combines the advantages of both depth-first and breadth-first search
into a single method.

 From depth-first search: not all competing branches have to be
expanded.

 From breadth-first search: it does not get trapped on dead-end paths.

 The combination is to follow a single path at a time, but to switch
paths whenever some competing path looks more promising than the
current one.

Best-first search
• General approach of informed search:
– Best-first search: node is selected for expansion based on
an evaluation function f(n)
• Idea: evaluation function measures distance to the
goal.
– Choose node which appears best
• Implementation:
– the fringe is a queue sorted in decreasing order of desirability.
– Special cases: greedy search, A* search

BEST-FIRST SEARCH
 At each step of the best-first search process, we select the most
promising of the nodes we have generated so far.

 This is done by applying an appropriate heuristic function
to each of them.

 We then expand the chosen node by using the rules to
generate its successors.

 The resulting structure is called an OR graph, since each of its
branches represents an alternative problem-solving path.

BEST-FIRST SEARCH

[Figure: successive steps of best-first search; the numbers are
heuristic values, lower is more promising.]

Step 1: the tree contains only A.
Step 2: A is expanded, producing B (3), C (5), and D (1);
        D is the best node, so it is expanded next.
Step 3: D produces E (4) and F (6); now B (3) is the best
        open node, so it is expanded.
Step 4: B produces G (6) and H (5); now E (4) is the best
        open node, so it is expanded.
Step 5: E produces I (2) and J (1).
BEST-FIRST SEARCH VS HILL CLIMBING
 Best-first search is similar to steepest-ascent hill climbing, with two
exceptions:

 In hill climbing, one move is selected and all the
others are rejected, never to be reconsidered. In
best-first search, one move is selected, but the
others are kept around so that they can be
revisited later if the selected path becomes less
promising.
 In best-first search, the best available state is
selected even if that state has a value lower than
the value of the state that was just explored,
whereas in hill climbing progress stops if there
are no better successor nodes.
A heuristic function
• [dictionary]“A rule of thumb, simplification, or
educated guess that reduces or limits the
search for solutions in domains that are
difficult and poorly understood.”
– h(n) = estimated cost of the cheapest path from
node n to goal node.
– If n is goal then h(n)=0

More information later.


Romania with step costs in km
• hSLD=straight-line distance
heuristic.
• hSLD can NOT be computed
from the problem
description itself
• In this example f(n)=h(n)
– Expand node that is closest
to goal

= Greedy best-first search

Greedy search example

Arad (366)

• Assume that we want to use greedy search to solve the


problem of travelling from Arad to Bucharest.
• The initial state=Arad

Greedy search example

Arad
 Sibiu (253)   Timisoara (329)   Zerind (374)

• The first expansion step produces:
– Sibiu, Timisoara and Zerind
• Greedy best-first will select Sibiu.
Greedy search example

Arad
 Sibiu
  Fagaras
   Sibiu (253)   Bucharest (0)

• If Fagaras is expanded we get:
– Sibiu and Bucharest
• Goal reached !!
– Yet not optimal (see Arad, Sibiu, Rimnicu Vilcea, Pitesti)

Greedy search, evaluation
• Completeness: NO (cf. depth-first search)
– Check on repeated states
– Minimizing h(n) can result in false starts, e.g. Iasi to Fagaras.

Greedy search, evaluation
• Completeness: NO (cf. depth-first search)
• Time complexity? O(b^m)
– Cf. worst-case depth-first search
(with m the maximum depth of the search space)
– A good heuristic can give a dramatic improvement.
BEST-FIRST SEARCH
 OPEN: nodes that have been generated but have not yet been
examined. This is organized as a priority queue.

 CLOSED: nodes that have already been examined.
Whenever a new node is generated, we check whether it has been
generated before.

Algorithm : Best-First Search
1. Start with OPEN containing just the initial state.

2. Until a goal is found or there are no nodes left on OPEN do:

(a) Pick the best node on OPEN.
(b) Generate its successors.
(c) For each successor do:

(i) If it has not been generated before, evaluate it, add it to OPEN, and
record its parent.
(ii) If it has been generated before, change the parent if this new path is
better than the previous one. In that case, update the cost of getting
to this node and to any successors that this node may already have.
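
A sketch of this loop in Python. The graph and heuristic values follow the earlier step-by-step example figure (with an assumed h(A) = 6, which the figure does not give); OPEN is kept as a heap-based priority queue and CLOSED as a set. For simplicity the sketch omits step 2(c)(ii)'s parent re-wiring, which only matters when a second, better path to an already-generated node appears:

```python
import heapq

def best_first_search(start, goal, neighbors, h):
    """Greedy best-first search: always expand the OPEN node with the
    lowest heuristic value; CLOSED prevents re-examining nodes."""
    open_heap = [(h(start), start)]          # OPEN as a priority queue
    parents = {start: None}
    closed = set()                           # CLOSED
    while open_heap:
        _, node = heapq.heappop(open_heap)   # pick the best node on OPEN
        if node == goal:                     # reconstruct the found path
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        if node in closed:
            continue
        closed.add(node)
        for succ in neighbors(node):
            if succ not in parents:          # not generated before
                parents[succ] = node
                heapq.heappush(open_heap, (h(succ), succ))
    return None                              # OPEN exhausted: no solution

# Toy usage on the earlier example tree; h values as in that figure.
h_vals = {"A": 6, "B": 3, "C": 5, "D": 1, "E": 4,
          "F": 6, "G": 6, "H": 5, "I": 2, "J": 1}
graph = {"A": ["B", "C", "D"], "D": ["E", "F"],
         "B": ["G", "H"], "E": ["I", "J"]}
path = best_first_search(
    "A", "J",
    neighbors=lambda n: graph.get(n, []),
    h=lambda n: h_vals[n],
)
print(path)  # ['A', 'D', 'E', 'J']
```

The heap ordering reproduces the figure's expansion order: A, then D (1), then B (3), then E (4), at which point J (1) reaches the front of OPEN.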
