
Heuristic (Informed) Search Techniques

 A heuristic is a technique for solving a problem faster than classic methods, or for finding an approximate solution when classic methods cannot.

 A heuristic (or heuristic function) guides search algorithms: at each branching step, it evaluates the available information and decides which branch to follow.

 It does so by ranking the alternatives. A heuristic is any device that is often effective but is not guaranteed to work in every case.

 This is a kind of shortcut, as we often trade optimality, completeness, accuracy, or precision for speed.
Types of Heuristic (Informed) Search Techniques

1. Generate and Test
2. Hill Climbing
3. Best First Search
4. Problem Reduction
5. Constraint Satisfaction
6. Means-Ends Analysis
1. Generate-and-Test

Algorithm:

1. Generate a possible solution. For some problems, this means generating a particular point in the problem space. For others, it means generating a path from a start state.

2. Test to see if this is actually a solution by comparing the chosen point, or the endpoint of the chosen path, to the set of acceptable goal states.

3. If a solution has been found, quit; otherwise return to step 1.
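
To make these steps concrete, here is a minimal Python sketch of the generate-and-test loop (illustrative only, not from the slides); generate_candidate and is_goal are hypothetical problem-specific functions supplied by the caller.

def generate_and_test(generate_candidate, is_goal, max_tries=10_000):
    """Repeatedly generate a possible solution and test it against the goal."""
    for _ in range(max_tries):            # Step 1: generate a possible solution
        candidate = generate_candidate()
        if is_goal(candidate):            # Step 2: test it against the goal states
            return candidate              # Step 3: solution found, quit
    return None                           # no solution found within max_tries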

Generate-and-test Example
Travelling Salesman Problem (TSP)
 A salesman has a list of cities, each of which he must visit exactly once. There are direct roads between each pair of cities on the list. Find the route the salesman should follow for the shortest possible round trip that starts and finishes at any one of the cities.
 The traveller needs to visit n cities.
 The distance between each pair of cities is known.
 We want the shortest route that visits all the cities exactly once.

• TSP - generation of possible solutions is done in lexicographical order of the cities, enumerating the search tree rooted at A (A -> B, C, D, then the remaining cities in turn):

  Search  Path   Length of Path
  1       ABCD   18
  2       ABDC   13
  3       ACBD   11
  4       ACDB   13
  5       ADBC   11
  ...     (continued for the remaining permutations)

 Finally, select the path whose length is the smallest.
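
As an illustration only (not from the slides), the same exhaustive enumeration can be written in a few lines of Python; the distance values below are made up, so the tour lengths will not match the 18/13/11 figures in the table.

from itertools import permutations

# Made-up symmetric distances between cities A, B, C, D (illustrative only).
dist = {("A", "B"): 4, ("A", "C"): 3, ("A", "D"): 7,
        ("B", "C"): 6, ("B", "D"): 5, ("C", "D"): 2}

def d(x, y):
    """Distance between two cities, looked up in either order."""
    return dist[(x, y)] if (x, y) in dist else dist[(y, x)]

def tour_length(tour):
    """Length of the round trip that returns to the starting city."""
    return sum(d(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

cities = ("A", "B", "C", "D")
# Generate every tour starting at A and test (measure) each one; keep the shortest.
best_tour = min((p for p in permutations(cities) if p[0] == "A"), key=tour_length)
print("".join(best_tour), tour_length(best_tour))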
 Generate-and-test is a depth-first search procedure, since complete solutions must be generated before they can be tested.

 In its most systematic form, it is simply an exhaustive search of the problem space.

 It can also operate by generating solutions randomly.

 It is also called the British Museum algorithm.
2. Hill Climbing
 Hill climbing is a variant of generate-and-test in which feedback from the test procedure is used to help the generator decide which direction to move in the search space (generate and test + direction to move).

 The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain, or the best solution to the problem.

 It terminates when it reaches a peak where no neighbor has a higher value.

 Searching for a goal state = climbing to the top of a hill.

 The test function is augmented with a heuristic function that provides an estimate of how close a given state is to the goal state.

 Hill climbing is often used when a good heuristic function is available for evaluating states but no other useful knowledge is available.

 It is also called greedy local search, as it only looks at its good immediate neighbor states and not beyond them.
Features of Hill Climbing

 Generate and Test variant: Hill climbing is a variant of the generate and test method. The generate and test method produces feedback which helps to decide which direction to move in the search space.

 Greedy approach: The hill climbing search moves in the direction which optimizes the cost.

 No backtracking: It does not backtrack in the search space, as it does not remember previous states.
State-space Diagram for Hill Climbing

 The state-space landscape is a graphical representation of the hill climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.
 On the Y-axis we take the function, which can be an objective function or a cost function, and the state space on the X-axis.
 If the function on the Y-axis is cost, then the goal of the search is to find the global minimum (a local minimum is where the search can get stuck).
 If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum (a local maximum is where the search can get stuck).
Problems with Hill Climbing in AI

1. Local Maximum: A local maximum is a peak state in the landscape which is better than each of its neighboring states, but there is another state present which is higher than the local maximum.

2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state contain the same value; because of this, the algorithm cannot find a best direction to move, and a hill climbing search might get lost in the plateau area. To avoid this, we randomly make a big jump.

3. Ridges: A ridge is a special form of the local maximum. It is an area which is higher than its surrounding areas but which itself has a slope, and it cannot be reached in a single move. To avoid this, we may apply two or more rules before testing.
Types of Hill Climbing Algorithm

1. Simple Hill Climbing
2. Steepest-Ascent Hill Climbing
3. Stochastic Hill Climbing
1. Simple Hill Climbing
 Simple hill climbing is the simplest way to implement a hill climbing algorithm.

 It evaluates only one neighbor node state at a time and selects the first one that improves the current cost, setting it as the current state.

 It checks only one successor state at a time; if that successor is better than the current state, it moves there, otherwise it stays in the same state.

 This algorithm has the following features:
 Less time consuming
 Less optimal solution, and the solution is not guaranteed
Algorithm for Simple Hill Climbing

Step 1: Evaluate the initial state; if it is a goal state, then return success and stop.
Step 2: Loop until a solution is found or there is no new operator left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check the new state:
   a. If it is a goal state, then return success and quit.
   b. Else if it is better than the current state, then make the new state the current state.
   c. Else if it is not better than the current state, then return to step 2.
Step 5: Exit.
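
A minimal Python sketch of simple hill climbing (illustrative, not from the slides); successors, value and is_goal are hypothetical caller-supplied functions, and value is assumed to be maximized.

def simple_hill_climbing(initial_state, successors, value, is_goal):
    """Move to the first successor found that is better than the current state."""
    current = initial_state
    while True:
        if is_goal(current):                      # Step 1 / Step 4a: goal test
            return current
        moved = False
        for candidate in successors(current):     # Steps 2-3: apply operators one at a time
            if is_goal(candidate):
                return candidate
            if value(candidate) > value(current): # Step 4b: first improvement is taken
                current = candidate
                moved = True
                break
        if not moved:                             # no operator improved the state: stop
            return current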
Simple Hill Climbing - Example
• TSP - define the state space as the set of all possible tours.

• Operators exchange the positions of adjacent cities within the current tour.

• The heuristic function is the length of a tour.
TSP Hill Climb State Space
Initial state: ABCD

Applying the four adjacent-swap operators to ABCD:
  Swap 1,2 -> BACD
  Swap 2,3 -> ACBD
  Swap 3,4 -> ABDC
  Swap 4,1 -> DBCA

Expanding the chosen successor ACBD with the same operators:
  Swap 1,2 -> CABD
  Swap 2,3 -> ABCD
  Swap 3,4 -> ACDB
  Swap 4,1 -> DCBA
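
As an illustrative Python sketch (not from the slides), the adjacent-swap operator used in this state space can be written directly; the printed successors match the first level of the diagram above.

def adjacent_swaps(tour):
    """Successor operator: exchange each pair of adjacent cities (wrapping around)."""
    n = len(tour)
    for i in range(n):
        j = (i + 1) % n                      # Swap 1,2 / Swap 2,3 / Swap 3,4 / Swap 4,1
        neighbor = list(tour)
        neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
        yield "".join(neighbor)

print(list(adjacent_swaps("ABCD")))          # ['BACD', 'ACBD', 'ABDC', 'DBCA']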
2. Steepest-Ascent hill climbing

 The steepest-ascent algorithm is a variation of the simple hill climbing algorithm.

 This algorithm examines all the neighboring nodes of the current state and selects the one neighbor node which is closest to the goal state.

 This algorithm consumes more time, as it searches multiple neighbors.
Algorithm for Steepest-Ascent hill climbing
Step 1: Evaluate the initial state; if it is a goal state, then return success and stop, else make the initial state the current state.
Step 2: Loop until a solution is found or the current state does not change.
   A. Let SUCC be a state such that any successor of the current state will be better than it.
   B. For each operator that applies to the current state:
      i. Apply the new operator and generate a new state.
      ii. Evaluate the new state.
      iii. If it is a goal state, then return it and quit; else compare it to SUCC.
      iv. If it is better than SUCC, then set the new state as SUCC.
      v. If SUCC is better than the current state, then set the current state to SUCC.
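
A minimal Python sketch of steepest-ascent hill climbing (illustrative, not from the slides), with the same hypothetical successors, value and is_goal functions as above.

def steepest_ascent_hill_climbing(initial_state, successors, value, is_goal):
    """Examine all neighbors and move to the best one, as long as it improves."""
    current = initial_state
    while True:
        if is_goal(current):
            return current
        best_succ = None                               # SUCC in the pseudocode
        for candidate in successors(current):          # apply every operator
            if is_goal(candidate):
                return candidate
            if best_succ is None or value(candidate) > value(best_succ):
                best_succ = candidate
        if best_succ is None or value(best_succ) <= value(current):
            return current                             # no improving neighbor: stop
        current = best_succ                            # climb to the steepest neighbor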
3. Stochastic hill climbing

 Stochastic hill climbing does not examine all of its neighbors before moving.

 Rather, this search algorithm selects one neighbor node at random and decides (based on the amount of improvement in that neighbor) whether to choose it as the current state or to examine another state.
3. Stochastic hill climbing - Algorithm

1. Evaluate the initial state. If it is a goal state, then stop and return success. Otherwise, make the initial state the current state.
2. Repeat these steps until a solution is found or the current state does not change.
   i. Select an operator that has not yet been applied to the current state.
   ii. Apply the successor function to the current state and generate all the neighbor states.
   iii. Among the generated neighbor states which are better than the current state, choose a state randomly (or based on some probability function).
   iv. If the chosen state is the goal state, then return success; else make it the current state and repeat step 2.
3. Exit from the function.
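
A minimal Python sketch of stochastic hill climbing (illustrative, not from the slides); here a random improving neighbor is chosen at each step.

import random

def stochastic_hill_climbing(initial_state, successors, value, is_goal, max_steps=1000):
    """Pick one improving neighbor at random instead of examining all of them."""
    current = initial_state
    for _ in range(max_steps):
        if is_goal(current):
            return current
        better = [s for s in successors(current) if value(s) > value(current)]
        if not better:                        # no improving neighbor: stop
            return current
        current = random.choice(better)       # choose an improving neighbor at random
    return current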
Simulated Annealing

 A hill climbing algorithm which never makes a move towards a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum.
 If the algorithm instead applies a random walk by moving to a random successor, it may be complete but it is not efficient.
 Simulated annealing is an algorithm which yields both efficiency and completeness.
 In mechanical terms, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state.
 The same idea is used in simulated annealing, in which the algorithm picks a random move instead of picking the best move.
 If the random move improves the state, then the algorithm follows the same path.
 Otherwise, the algorithm accepts the downhill move only with a probability less than 1; if the move is not accepted, it chooses another path.
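
An illustrative Python sketch of simulated annealing (not from the slides). The exp(delta / T) acceptance rule is the standard formulation and is an assumption here; random_successor, value and schedule are hypothetical caller-supplied functions, with value maximized.

import math
import random

def simulated_annealing(initial_state, random_successor, value, schedule):
    """Always accept improving moves; accept worsening moves with probability exp(delta/T)."""
    current = initial_state
    t = 1
    while True:
        temperature = schedule(t)                 # e.g. schedule = lambda t: 100 * 0.95 ** t
        if temperature < 1e-9:                    # system has "frozen": stop
            return current
        candidate = random_successor(current)     # pick a random move
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = candidate                   # downhill moves accepted with probability < 1
        t += 1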
3. Best First Search Algorithm (Greedy search)
3.1 OR Graphs

 The greedy best-first search algorithm always selects the path which appears best at that moment.

 It is a combination of the depth-first search and breadth-first search algorithms.

 It uses a heuristic function to guide the search; best-first search allows us to take advantage of both algorithms.

 With the help of best-first search, at each step we can choose the most promising node.

 In the best first search algorithm, we expand the node which is closest to the goal node, and the closeness is estimated by the heuristic function, i.e.

   f(n) = h(n)

   where h(n) = estimated cost from node n to the goal.
 The greedy best first algorithm is implemented with a priority queue.

Best first search (OR-Graphs) algorithm:

Step 1: Place the starting node into the OPEN list.
Step 2: If the OPEN list is empty, stop and return failure.
Step 3: Remove from the OPEN list the node n which has the lowest value of h(n), and place it in the CLOSED list.
Step 4: Expand the node n and generate its successors.
Step 5: Check each successor of node n to see whether it is a goal node. If any successor is a goal node, then return success and terminate the search; else proceed to Step 6.
Step 6: For each successor node, the algorithm computes the evaluation function f(n) and then checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.
Step 7: Return to Step 2.
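
A minimal Python sketch of this OPEN/CLOSED procedure using heapq as the priority queue (illustrative, not from the slides). The graph and h-values below are assumptions chosen so that the run reproduces the S -> B -> F -> G trace in the following example; the slide's own figure and table are not reproduced here.

import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Expand the OPEN node with the lowest h(n); stop when the goal is generated."""
    open_list = [(h[start], start, [start])]        # priority queue ordered by h(n)
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)    # node with the lowest h(n)
        if node == goal:
            return path
        closed.add(node)
        for succ in graph.get(node, []):
            if succ == goal:
                return path + [succ]
            if succ not in closed and all(succ != n for _, n, _ in open_list):
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None

# Assumed graph and heuristic values (not the slide's originals) that yield S -> B -> F -> G.
graph = {"S": ["A", "B"], "B": ["E", "F"], "F": ["I", "G"]}
h = {"S": 13, "A": 12, "B": 4, "E": 8, "F": 2, "I": 9, "G": 0}
print(greedy_best_first_search(graph, h, "S", "G"))   # ['S', 'B', 'F', 'G']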
Example - Best first search (OR-Graphs)

Consider the following search problem, which we will traverse using greedy best-first search. At each iteration, each node is expanded using the evaluation function f(n) = h(n), whose values are given in the accompanying table.

 In this search example, we use two lists, the OPEN and CLOSED lists. The following are the iterations for traversing the above example.
Expand the nodes of S and put S in the CLOSED list:

Initialization: Open [A, B], Closed [S]

Iteration 1: Open [A], Closed [S, B]

Iteration 2: Open [E, F, A], Closed [S, B]
           : Open [E, A], Closed [S, B, F]

Iteration 3: Open [I, G, E, A], Closed [S, B, F]
           : Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S ---> B ---> F ---> G
Time Complexity: The worst case time complexity of greedy best first search is O(b^m).

Space Complexity: The worst case space complexity of greedy best first search is O(b^m), where m is the maximum depth of the search space and b is the branching factor.

Complete: Greedy best-first search is incomplete, even if the given state space is finite.

Optimal: The greedy best first search algorithm is not optimal.
Advantages:
 Best first search can switch between BFS and DFS, thereby gaining the advantages of both algorithms.
 This algorithm is more efficient than the BFS and DFS algorithms.

Disadvantages:
 It can behave like an unguided depth-first search in the worst case scenario.
 It can get stuck in a loop, as DFS can.
 This algorithm is not optimal.
3. Best First Search Algorithm
3.2 A* Algorithm

 A* search is the most commonly known form of best-first search.

 It uses the heuristic function h(n) and the cost to reach node n from the start state, g(n).

 It combines the features of UCS and greedy best-first search, which lets it solve problems efficiently.

 The A* search algorithm finds the shortest path through the search space using the heuristic function.

 This search algorithm expands a smaller search tree and provides an optimal result faster.
 The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).

 In the A* search algorithm, we use the search heuristic as well as the cost to reach the node.

 Hence we can combine both costs as follows, and this sum is called the fitness number:

   f(n) = g(n) + h(n)

   where g(n) = cost to reach node n from the start state, and h(n) = estimated cost from node n to the goal.

 At each point in the search space, only the node which has the lowest value of f(n) is expanded, and the algorithm terminates when the goal node is found.
Algorithm of A* search:

Step 1: Place the starting node in the OPEN list.
Step 2: Check whether the OPEN list is empty or not; if the list is empty, then return failure and stop.
Step 3: Select the node n from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, then return success and stop; otherwise go to Step 4.
Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, then compute the evaluation function for n' and place it into the OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, then it should be attached to the back pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
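
A compact Python sketch of these steps (illustrative, not from the slides), using heapq for the OPEN list; the back-pointer update of Step 5 is handled by re-recording a node's parent whenever a cheaper g(n') is found. The graph format is an assumption: a dict mapping each node to (neighbor, step cost) pairs.

import heapq

def a_star_search(graph, h, start, goal):
    """A* search: expand the OPEN node with the smallest f(n) = g(n) + h(n)."""
    open_heap = [(h[start], start)]               # entries are (f, node)
    g = {start: 0}                                # best known cost from the start
    parent = {start: None}                        # back pointers for path reconstruction
    closed = set()
    while open_heap:
        f, node = heapq.heappop(open_heap)
        if node == goal:
            path = []
            while node is not None:               # follow back pointers to rebuild the path
                path.append(node)
                node = parent[node]
            return list(reversed(path)), g[goal]
        if node in closed:
            continue                              # stale queue entry
        closed.add(node)
        for succ, cost in graph.get(node, []):
            new_g = g[node] + cost
            if succ not in g or new_g < g[succ]:  # cheaper path found: update back pointer
                g[succ] = new_g
                parent[succ] = node
                heapq.heappush(open_heap, (new_g + h[succ], succ))
    return None, float("inf")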
Example for A* Algorithm

In this example, we will traverse the given graph using the A* algorithm. The heuristic value of each state is given in the accompanying table, so we will calculate f(n) for each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach that node from the start state.

Here we will use the OPEN and CLOSED lists.

Solution:

Initialization: {(S, 5)}
Iteration 1: {(S --> A, 4), (S --> G, 10)}
Iteration 2: {(S --> A --> C, 4), (S --> A --> B, 7), (S --> G, 10)}
Iteration 3: {(S --> A --> C --> G, 6), (S --> A --> C --> D, 11), (S --> A --> B, 7), (S --> G, 10)}
Iteration 4 gives the final result: S ---> A ---> C ---> G provides the optimal path with cost 6.
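
Running the a_star_search sketch above on edge costs and heuristic values inferred from this trace (the slide's own figure is not reproduced, so these numbers are an assumption consistent with the f-values shown) gives the same result.

# Assumed edge costs and heuristic values consistent with the trace above.
graph = {
    "S": [("A", 1), ("G", 10)],
    "A": [("B", 2), ("C", 1)],
    "B": [("D", 5)],
    "C": [("D", 3), ("G", 4)],
    "D": [("G", 2)],
}
h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}

print(a_star_search(graph, h, "S", "G"))   # (['S', 'A', 'C', 'G'], 6)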
Points to remember:

 The A* algorithm returns the path which occurred first, and it does not search for all remaining paths.

 The efficiency of the A* algorithm depends on the quality of the heuristic.

 The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.
Complete: The A* algorithm is complete as long as:
 The branching factor is finite.
 Every action has a fixed (positive) cost.
Optimal: The A* search algorithm is optimal if it satisfies the following two conditions:
 Admissibility: The first condition required for optimality is that h(n) should be an admissible heuristic for A* tree search, i.e., it never overestimates the true cost to the goal. An admissible heuristic is optimistic in nature.
 Consistency: The second condition, consistency, is required only for A* graph search.
 If the heuristic function is admissible, then A* tree search will always find the least-cost path.
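
As an illustrative Python sketch (not from the slides), both conditions can be checked mechanically on a small graph; the edge costs, heuristic estimates, and true goal distances below are made up.

def is_consistent(edges, h):
    """Check h(n) <= cost(n, n') + h(n') for every edge (the triangle inequality)."""
    return all(h[n] <= cost + h[m] for n, succs in edges.items() for m, cost in succs)

def is_admissible(h, h_star):
    """Check h(n) <= h*(n), where h_star holds the true optimal costs to the goal."""
    return all(h[n] <= h_star[n] for n in h_star)

# Tiny hypothetical graph with goal G (made-up values).
edges  = {"S": [("A", 1), ("B", 4)], "A": [("G", 3)], "B": [("G", 1)]}
h      = {"S": 3, "A": 2, "B": 1, "G": 0}   # heuristic estimates
h_star = {"S": 4, "A": 3, "B": 1, "G": 0}   # true optimal costs to the goal
print(is_admissible(h, h_star), is_consistent(edges, h))   # True True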

Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function, and the number of nodes expanded is exponential in the depth of the solution d. So the time complexity is O(b^d), where b is the branching factor.
Space Complexity: The space complexity of the A* search algorithm is O(b^d), as it keeps all generated nodes in memory.

Advantages:
 The A* search algorithm is better than other search algorithms.
 The A* search algorithm is optimal and complete.
 This algorithm can solve very complex problems.

Disadvantages:
 It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
 The A* search algorithm has some complexity issues.
 The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for various large-scale problems.
