
UNIT-II Search techniques

Problem solving agents, searching for solutions; uninformed search strategies: breadth first
search, depth first search, depth limited search, bidirectional search, comparing uninformed
search strategies. Heuristic search strategies: Greedy best-first search, A* search, AO* search,
memory bounded heuristic search; local search algorithms & optimization problems: Hill
climbing search, simulated annealing search, local beam search.
PROBLEM-SOLVING AGENTS
Intelligent agents are supposed to maximize their performance measure.
Problem formulation is the process of deciding what actions and states to consider, given a
goal.

The agent's task follows three phases:
1. Formulate a goal and formulate the problem.
2. Search for a solution (a sequence of actions).
3. Execute the actions.

WELL-DEFINED PROBLEMS AND SOLUTIONS:

Four components are needed to define a problem as a state-space search problem:

a) Initial state – The starting point of the agent, e.g. In(X); it is the state the agent knows
itself to be in.

b) Successor function – The set of possible actions available to the agent. The term operator
is used to denote the description of an action in terms of which state will be reached by
carrying out the action in a particular state.

For a successor function S, given a particular state x, S(x) returns the set of states reachable
from x by any single action.

State space = initial state + successor function.

The set of all states reachable from the initial state is known as the state space.

c) Goal test – A test the agent can apply to a single state description to determine whether it
is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply
checks whether we have reached one of them. Sometimes the goal is specified by an abstract
property rather than an explicitly enumerated set of states.
For example, in chess, the goal is to reach a state called "checkmate," where the opponent's
king can be captured on the next move no matter what the opponent does.

d) Path cost – A path-cost function assigns a cost to each path. In all cases we will consider,
the cost of a path is the sum of the costs of the individual actions along the path. The
path-cost function is often denoted by g.

Path: A path in the state space is a sequence of states connected by a sequence of actions.

State space – The state space forms a graph in which the nodes are states and the arcs
between nodes are actions.

Formal Description of the problem


1. Define a state space that contains all the possible configurations of the relevant
objects.
2. Specify one or more states within that space that describe possible situations from
which the problem-solving process may start (initial states).
3. Specify one or more states that would be acceptable as solutions to the problem (goal
states).
4. Specify a set of rules that describe the actions (operations) available.

To build a system to solve a problem


1. Define the problem precisely
2. Analyze the problem
3. Isolate and represent the task knowledge that is necessary to solve the problem
4. Choose the best problem-solving technique and apply it to the particular problem.

Figure-1

Example 1: Route-finding problem. Figure-1 gives a map between Coimbatore and
Chennai via other places. The task is to find the best way to reach Chennai from
Coimbatore.
Initial State: In (Coimbatore)
Successor Function: {< Go (Pollachi), In (Pollachi)>
< Go (Erode), In (Erode)>
< Go (Palladam), In (Palladam)>
< Go (Mettupalayam), In (Mettupalayam)>}
Goal Test: In (Chennai)
Solution:
i. Coimbatore → Mettupalayam: cannot reach the goal this way
ii. Coimbatore → Pollachi → Palani → Dindigul → Trichy → Chennai
    path cost = 37 + 60 + 57 + 97 + 320 = 571
iii. Coimbatore → Erode → Salem → Vellore → Chennai
    path cost = 100 + 66 + 200 + 140 = 506
The third solution is the best because its path cost is the least.
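
As an illustrative sketch (not part of the original example), the four components for this
problem might be coded as follows in Python; only the roads whose costs appear above are
included, and all names are illustrative:

# Road distances taken from the worked example above; routes via Palladam
# and Mettupalayam are omitted because their costs are not given.
roads = {
    "Coimbatore": {"Pollachi": 37, "Erode": 100},
    "Pollachi":   {"Palani": 60},
    "Palani":     {"Dindigul": 57},
    "Dindigul":   {"Trichy": 97},
    "Trichy":     {"Chennai": 320},
    "Erode":      {"Salem": 66},
    "Salem":      {"Vellore": 200},
    "Vellore":    {"Chennai": 140},
}

initial_state = "Coimbatore"

def successor_fn(state):
    """S(x): the set of <action, resulting state> pairs for one action."""
    return {("Go(%s)" % city, city) for city in roads.get(state, {})}

def goal_test(state):
    return state == "Chennai"

def path_cost(path):
    """g: sum of the individual step costs along the path."""
    return sum(roads[a][b] for a, b in zip(path, path[1:]))

print(path_cost(["Coimbatore", "Erode", "Salem", "Vellore", "Chennai"]))  # 506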

Example 2: 8-Puzzle problem


The 8-puzzle problem consists of a 3 x 3 board with eight numbered tiles and a blank space.
A tile adjacent to the blank space can slide into the space. The object is to reach a specified
goal state.
States: A state description specifies the location of each of the eight tiles and the blank in one
of the nine squares.
Initial state: Any state can be designated as the initial state.
Successor function: This generates the legal states that result from trying the four actions
(blank moves Left, Right, Up, or Down).
Goal test: This checks whether the state matches the goal configuration (Other goal
configurations are possible.)
Path cost: Each step costs 1, so the path cost is the number of steps in the path.

Initial State        Final State
2 8 3                1 2 3
1 6 4                4 5 6
7 _ 5                7 8 _

(_ denotes the blank square)
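
A minimal Python sketch of the successor function for this formulation; a state is taken to be
a 9-tuple read row by row with 0 standing for the blank (names are illustrative):

# Actions move the *blank* Left, Right, Up, or Down, as described above.
MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

def successors(state):
    """Generate (action, state) pairs for the legal blank moves."""
    b = state.index(0)                       # position of the blank
    result = []
    for action, delta in MOVES.items():
        t = b + delta
        if t < 0 or t > 8:
            continue
        # Left/Right must stay on the same row of the 3x3 board.
        if delta in (-1, +1) and t // 3 != b // 3:
            continue
        new = list(state)
        new[b], new[t] = new[t], new[b]      # slide the tile into the blank
        result.append((action, tuple(new)))
    return result

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
print(successors((2, 8, 3, 1, 6, 4, 7, 0, 5)))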

SEARCHING FOR SOLUTIONS


 A solution is an action sequence, so search algorithms work by considering various
possible action sequences.
 The possible action sequences starting at the initial state form a search tree with the
initial state at the root; the branches are actions and the nodes correspond to states in
the state space of the problem.
 The following figure shows the first few steps in growing the search tree for finding a
route from Arad to Bucharest.
 The root node of the tree corresponds to the initial state, In(Arad).
 The first step is to test whether this is a goal state.
 Then we need to consider taking various actions. We do this by expanding the
current state; that is, applying each legal action to the current state, thereby
generating a new set of states.
 In this case, we add three branches from the parent node In(Arad) leading to three
new child nodes: In(Sibiu), In(Timisoara), and In(Zerind). Now we must choose
which of these three possibilities to consider further.
 This is the essence of search—following up one option now and putting the others
aside for later, in case the first choice does not lead to a solution.
 Suppose we choose Sibiu first. We check to see whether it is a goal state (it is not)
and then expand it to get In(Arad), In(Fagaras), In(Oradea), and In(Rimnicu Vilcea).
 We can then choose any of these four or go back and choose Timisoara or Zerind.
Each of these six nodes is a leaf node, that is, a node with no children in the tree.
 The set of all leaf nodes available for expansion at any given point is called the
frontier.
 In the figure, the frontier of each tree consists of those nodes with bold outlines.
 The process of expanding nodes on the frontier continues until either a solution is
found or there are no more states to expand.

The general TREE-SEARCH algorithm is shown below:
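
A minimal Python sketch of the scheme; the parameter names are illustrative, and the choice
of frontier data structure is exactly what distinguishes the strategies described next:

def tree_search(start, successors, goal_test, frontier):
    frontier.append([start])                 # frontier holds candidate paths
    while frontier:
        path = frontier.pop()                # strategy-specific choice of leaf
        state = path[-1]
        if goal_test(state):                 # test the node, then expand it
            return path
        for s in successors(state):          # expanding the current state
            frontier.append(path + [s])
    return None                              # no more states to expand

# e.g. passing a plain Python list gives depth-first behaviour,
# because list.pop() removes the most recently added path (LIFO).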

UNINFORMED SEARCH STRATEGIES
 Uninformed search (also called blind search).
 The term means that the strategies have no additional information about states
beyond that provided in the problem definition.
 All they can do is generate successors and distinguish a goal state from a non-goal
state.

Breadth-first search
 Breadth-first search is a simple strategy in which the root node is expanded first,
then all the successors of the root node are expanded next, then their successors, and
so on.
 In general, all the nodes are expanded at a given depth in the search tree before any
nodes at the next level are expanded.
 This is achieved very simply by using a FIFO queue for the frontier.
 Thus, new nodes (which are always deeper than their parents) go to the back of the
queue, and old nodes, which are shallower than the new nodes, get expanded first.
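
A minimal Python sketch with a FIFO frontier of paths; successors(state) and
goal_test(state) are assumed problem-specific helpers (names are illustrative):

from collections import deque

def breadth_first_search(start, successors, goal_test):
    frontier = deque([[start]])              # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()            # shallowest node first
        state = path[-1]
        if goal_test(state):
            return path
        for s in successors(state):
            if s not in explored:
                explored.add(s)
                frontier.append(path + [s])  # deeper nodes go to the back
    return None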

Uniform-cost search
 Instead of expanding the shallowest node, uniform-cost search expands the node n
with the lowest path cost g(n).
 This is done by storing the frontier as a priority queue ordered by g.

The problem is to get from Sibiu to Bucharest. The successors of Sibiu are Rimnicu Vilcea
and Fagaras, with costs 80 and 99, respectively. The least-cost node, Rimnicu Vilcea, is
expanded next, adding Pitesti with cost 80 + 97=177. The least-cost node is now Fagaras, so
it is expanded, adding Bucharest with cost 99+211=310. Now a goal node has been
generated, but uniform-cost search keeps going, choosing Pitesti for expansion and adding a
second path to Bucharest with cost 80+97+101= 278. Now the algorithm checks to see if this
new path is better than the old one; it is, so the old one is discarded. Bucharest, now with g-
cost 278, is selected for expansion and the solution is returned.
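
A minimal Python sketch using a priority queue ordered by g; as in the Bucharest example,
the goal test is applied when a node is selected for expansion, not when it is generated, and
successors(state) is assumed to yield (next_state, step_cost) pairs (names are illustrative):

import heapq

def uniform_cost_search(start, successors, goal_test):
    frontier = [(0, start, [start])]         # (g, state, path), ordered by g
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if goal_test(state):                 # goal test on expansion
            return g, path
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2             # keep only the cheaper path
                heapq.heappush(frontier, (g2, nxt, path + [nxt]))
    return None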

Depth-first search
 Depth-first search always expands the deepest node in the current frontier of the
search tree.
 The search proceeds immediately to the deepest level of the search tree, where the
nodes have no successors.
 As those nodes are expanded, they are dropped from the frontier, so then the search
“backs up” to the next deepest node that still has unexplored successors.
 Depth-first search uses a LIFO queue (a stack).
 A LIFO queue means that the most recently generated node is chosen for expansion.
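
A minimal Python sketch, identical in shape to the breadth-first sketch above but with a
LIFO stack as the frontier (helper names are illustrative):

def depth_first_search(start, successors, goal_test):
    frontier = [[start]]                     # LIFO stack of paths
    while frontier:
        path = frontier.pop()                # most recently generated first
        state = path[-1]
        if goal_test(state):
            return path
        for s in successors(state):
            if s not in path:                # avoid cycles on the current path
                frontier.append(path + [s])
    return None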

Depth-limited search

 The embarrassing failure of depth-first search in infinite state spaces is avoided by
supplying depth-first search with a predetermined depth limit l. That is, nodes at
depth l are treated as if they have no successors. This approach is called
depth-limited search.
 The depth limit solves the infinite-path problem.

 Depth-limited search can terminate with two kinds of failure: the standard failure
value indicates no solution; the cutoff value indicates no solution within the depth
limit. A minimal sketch distinguishing the two is given below.
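
A minimal recursive Python sketch distinguishing the two failure values described above
(helper names are illustrative):

def depth_limited_search(state, successors, goal_test, limit):
    """Return a path to a goal, "cutoff" if the limit was hit, or None."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return "cutoff"                      # no solution within the depth limit
    cutoff_occurred = False
    for s in successors(state):
        result = depth_limited_search(s, successors, goal_test, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return "cutoff" if cutoff_occurred else None   # standard failure: None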

Bidirectional search

 The idea behind bidirectional search is to run two simultaneous searches—one
forward from the initial state and the other backward from the goal—hoping that the
two searches meet in the middle.
 It is implemented by replacing the goal test with a check to see whether the frontiers
of the two searches intersect; if they do, a solution has been found.
 The check can be done when each node is generated or selected for expansion and,
with a hash table, will take constant time.
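
A minimal Python sketch of the idea, assuming an undirected successor relation so the
backward search can reuse the same neighbors(state) helper; it reports only whether the
frontiers meet, omitting path reconstruction (names are illustrative):

def bidirectional_search(start, goal, neighbors):
    if start == goal:
        return True
    seen_f, seen_b = {start}, {goal}         # hash sets: O(1) intersection check
    front, back = {start}, {goal}
    while front and back:
        if len(front) > len(back):           # always expand the smaller frontier
            front, back = back, front
            seen_f, seen_b = seen_b, seen_f
        nxt = set()
        for state in front:
            for s in neighbors(state):
                if s in seen_b:
                    return True              # the two searches have met
                if s not in seen_f:
                    seen_f.add(s)
                    nxt.add(s)
        front = nxt
    return False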

Comparing uninformed search strategies

Search strategies are evaluated along four dimensions:

Complete – if the shallowest goal node is at some finite depth d, the strategy will eventually
find it after generating all shallower nodes.
Optimal – the strategy is guaranteed to find the least-cost solution.
Time and Space – expressed in terms of b (branching factor), d (depth of the shallowest
goal), m (maximum depth of the tree), l (depth limit), C* (cost of the optimal solution) and
ε (minimum step cost).

Criterion    Breadth-first   Uniform-cost        Depth-first   Depth-limited   Bidirectional
Complete?    Yes             Yes                 No            No              Yes
Time         O(b^d)          O(b^(1+⌊C*/ε⌋))     O(b^m)        O(b^l)          O(b^(d/2))
Space        O(b^d)          O(b^(1+⌊C*/ε⌋))     O(bm)         O(bl)           O(b^(d/2))
Optimal?     Yes*            Yes                 No            No              Yes*

(* optimal when all step costs are identical; the completeness entries assume a finite
branching factor b.)

INFORMED (HEURISTIC) SEARCH STRATEGIES

 An informed search strategy is one that uses problem-specific knowledge beyond the
definition of the problem itself.
 It can find solutions more efficiently than an uninformed strategy can.

Greedy best-first search
 Greedy best-first search tries to expand the node that is closest to the goal, on the
grounds that this is likely to lead to a solution quickly.
 Thus, it evaluates nodes by using just the heuristic function; that is, f(n) = h(n).
 Let us see how this works for route-finding problems in Romania; we use the
straight-line distance heuristic, which we will call hSLD.

Best first search algorithm:
Step 1: Place the starting node into the OPEN list.
Step 2: If the OPEN list is empty, Stop and return failure.
Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and
place it in the CLOSED list.
Step 4: Expand node n and generate its successors.
Step 5: Check each successor of node n to find whether any of them is a goal node. If any
successor is a goal node, then return success and terminate the search; else proceed to
Step 6.
Step 6: For each successor node, the algorithm computes the evaluation function f(n) and
then checks whether the node is already in the OPEN or CLOSED list. If the node is in
neither list, add it to the OPEN list.
Step 7: Return to Step 2.
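
A minimal Python sketch of the algorithm above, with the OPEN list kept as a priority queue
ordered by h(n) alone, i.e. f(n) = h(n) (helper names are illustrative):

import heapq

def greedy_best_first_search(start, successors, goal_test, h):
    open_list = [(h(start), start, [start])]
    closed = set()
    while open_list:
        _, state, path = heapq.heappop(open_list)   # lowest h(n) first
        if goal_test(state):
            return path
        closed.add(state)
        for s in successors(state):
            if s not in closed:
                heapq.heappush(open_list, (h(s), s, path + [s]))
    return None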

A* search: Minimizing the total estimated solution cost


 It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to
get from the node to the goal: f(n) = g(n) + h(n) .
 Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated
cost of the cheapest path from n to the goal, we have f(n) = estimated cost of the
cheapest solution through n.

Algorithm of A* search:
Step1: Place the starting node in the OPEN list.

Step 2: Check if the OPEN list is empty or not, if the list is empty then return failure
and stops.

Step 3: Select the node n from the OPEN list which has the smallest value of the evaluation
function (g + h). If node n is a goal node, then return success and stop; otherwise go to
Step 4.
Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list.
For each successor n', check whether n' is already in the OPEN or CLOSED list; if not,
compute the evaluation function for n' and place it into the OPEN list.

Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back-pointer
which reflects the lowest g(n') value.

Step 6: Return to Step 2.
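
A minimal Python sketch of A*; it differs from the uniform-cost sketch above only in
ordering the OPEN list by f(n) = g(n) + h(n). successors(state) is assumed to yield
(next_state, step_cost) pairs (names are illustrative):

import heapq

def a_star_search(start, successors, goal_test, h):
    open_list = [(h(start), 0, start, [start])]     # (f, g, state, path)
    best_g = {start: 0}
    while open_list:
        f, g, state, path = heapq.heappop(open_list)
        if goal_test(state):
            return g, path
        for nxt, cost in successors(state):
            g2 = g + cost
            # Re-open a node only if we found a cheaper path to it (lower g).
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_list, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None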

AO* Algorithm

 The AO* algorithm is a best-first search algorithm.


 The AO* algorithm uses the concept of AND-OR graphs to decompose any given
complex problem into a smaller set of problems which are then solved.
 AND-OR graphs are specialized graphs used for problems that can be broken down
into sub-problems, where the AND side of the graph represents a set of tasks that
must all be done to achieve the main goal, whereas the OR side of the graph
represents alternative ways of performing a task to achieve the same main goal.

 The figure above shows an example of a simple AND-OR graph in which the
acquisition of speakers is broken into sub-problems/tasks that could be performed to
reach the main goal.
 The sub-tasks are either to steal speakers, which directly achieves the main goal,
"or" to earn some money "and" buy speakers, which together also achieve the main
goal.
 The AND parts of the graph are represented by AND-arcs, meaning that all the
sub-problems joined by an AND-arc must be solved for the predecessor node or
problem to be completed.
 The edges without AND-arcs are OR sub-problems that can be done instead of the
sub-problems with AND-arcs.
 Note that several edges can leave a single node, and multiple AND-arcs and multiple
OR sub-problems can be present at once.
 The AO* algorithm is a knowledge-based search technique, meaning the start state
and the goal state are already defined, and the best path is found using heuristics.
 The time complexity of the algorithm is significantly reduced due to the informed
search technique.
 Compared to the A* algorithm, the AO* algorithm searches AND-OR trees very
efficiently.

The AO* algorithm works on the formula given below :

f(n) = g(n) + h(n)
where,
 g(n): The actual cost of traversal from initial state to the current state.
 h(n): The estimated cost of traversal from the current state to the goal state.
 f(n): The estimated total cost of traversal from the initial state to the goal state
through the current node.
AO* Algorithm

Step-1: Create an initial graph with a single node (the start node).

Step-2: Traverse the graph following the current best path, accumulating the nodes that
have not yet been expanded or solved.
Step-3: Select one of these nodes and expand it. If it has no successors, assign it the value
FUTILITY; else calculate f'(n) for each of the successors.
Step-4: If f'(n) = 0, mark the node as SOLVED.
Step-5: Change the value of f'(n) for the newly created node to reflect its successors by
back-propagation.
Step-6: Wherever possible, use the most promising routes; if a node is marked as SOLVED,
mark its parent node as SOLVED too.
Step-7: If the start node is SOLVED or its value is greater than FUTILITY, stop; else
repeat from Step-2.

Example

Here, in the above example, all numbers in brackets are heuristic values, i.e. h(n). Each edge
is considered to have a cost of 1 by default.

Step-1
Starting from node A, we first calculate the best path.
f(A-B) = g(B) + h(B) = 1 + 4 = 5, where 1 is the default cost of travelling from A to B and
4 is the estimated cost from B to the goal state.

f(A-C-D) = g(C) + h(C) + g(D) + h(D) = 1 + 2 + 1 + 3 = 7. Here we compute the path cost
over both C and D because they are joined by an AND-arc. The default cost of travelling
from A to C is 1 and from A to D is 1, and the heuristic values given for C and D are 2 and 3
respectively, making the total cost 7.

The minimum-cost path is chosen, i.e. A-B.

Step-2
Using the same formula as step-1, the path is now calculated from the B node,
f(B-E) = 1 + 6 = 7.
f(B-F) = 1 + 8 = 9
Hence, the B-E path has the lower cost. Now the heuristic of B has to be updated, since
there is a difference between the actual and heuristic value of B. The cost of the
minimum-cost path becomes the updated heuristic; in our case the value is 7. Because of
the change in the heuristic of B, the value at A also changes and has to be calculated again:
f(A-B) = g(B) + updated h(B) = 1 + 7 = 8

Step-3
Comparing f(A-B) and f(A-C-D), it is seen that f(A-C-D) is now smaller, hence the A-C-D
path needs to be explored.
The current node becomes C, and the costs of its paths are calculated:
f(C-G) = 1 + 2 = 3
f(C-H-I) = 1 + 0 + 1 + 0 = 2
f(C-H-I) is chosen as the minimum-cost path; there is no change in the heuristic of C since
it matches the actual cost. The heuristics of H and I are 0, so they are SOLVED. Path A-D
must also be evaluated, since it lies on the AND-arc:
f(D-J) = 1 + 0 = 1, hence the heuristic of D is updated to 1. Finally, f(A-C-D) is updated:
f(A-C-D) = g(C) + h(C) + g(D) + updated h(D) = 1 + 2 + 1 + 1 = 5

The solved path is A-C-D.
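
As a check on the arithmetic, here is a minimal Python sketch (not the full AO* bookkeeping
with SOLVED labels) that recursively evaluates the cheapest solution graph for this example;
the graph shape and heuristic values are read off the worked steps, and every edge costs 1:

h = {"B": 4, "C": 2, "D": 3, "E": 6, "F": 8, "G": 2, "H": 0, "I": 0, "J": 0}

# successors[n] lists the options at n; a 1-tuple is an OR branch and a
# longer tuple is an AND-arc (all children must be solved together).
successors = {
    "A": [("B",), ("C", "D")],
    "B": [("E",), ("F",)],
    "C": [("G",), ("H", "I")],
    "D": [("J",)],
}

def solve(n):
    """Revised cost of the cheapest solution graph rooted at n."""
    if n not in successors:                  # leaf: heuristic taken as exact
        return h[n]
    return min(sum(1 + solve(c) for c in option)   # edge cost 1 per child
               for option in successors[n])

print(solve("A"))   # 5 -- matches the revised f(A-C-D) above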

Memory-bounded heuristic search

• To reduce memory requirements, the idea of iterative deepening is adapted to the
heuristic search context.
• Two memory-bounded algorithms:
1) RBFS (recursive best-first search)
2) MA* (memory-bounded A*) and SMA* (simplified MA*)
 The simplest way to reduce memory requirements for A∗ is to adapt the idea of
iterative deepening to the heuristic search context, resulting in the iterative-
deepening A∗ (IDA∗) algorithm.
 The main difference between IDA∗ and standard iterative deepening is that the cutoff
used is the f-cost (g+h) rather than the depth; at each iteration, the cutoff value is the
smallest f-cost of any node that exceeded the cutoff on the previous iteration.

 Recursive best-first search (RBFS) is a simple recursive algorithm that attempts to
mimic the operation of standard best-first search, but using only linear space



Algorithm

function RECURSIVE-BEST-FIRST-SEARCH(problem) returns a solution, or failure
    return RBFS(problem, MAKE-NODE(INITIAL-STATE[problem]), ∞)

function RBFS(problem, node, f-limit) returns a solution, or failure and a new f-cost limit
    if GOAL-TEST[problem](STATE[node]) then return node
    successors ← EXPAND(node, problem)
    if successors is empty then return failure, ∞
    for each s in successors do
        f[s] ← max(g(s) + h(s), f[node])    /* inherit the parent's f-value if larger */
    repeat
        best ← the lowest f-value node in successors
        if f[best] > f-limit then return failure, f[best]
        alternative ← the second-lowest f-value among successors
        result, f[best] ← RBFS(problem, best, min(f-limit, alternative))
        if result ≠ failure then return result

 It seems sensible, therefore, to use all available memory. Two algorithms that do this
are MA∗ (memory-bounded A∗) and SMA∗ (simplified MA∗).
 SMA∗ proceeds just like A∗, expanding the best leaf until memory is full.
 At this point, it cannot add a new node to the search tree without dropping an old one.
SMA∗ always drops the worst leaf node—the one with the highest f-value.
 Like RBFS, SMA∗ then backs up the value of the forgotten node to its parent.
 In this way, the ancestor of a forgotten subtree knows the quality of the best path in
that subtree.
 With this information, SMA∗ regenerates the subtree only when all other paths have
been shown to look worse than the path it has forgotten.
 Another way of saying this is that, if all the descendants of a node n are forgotten,
then we will not know which way to go from n, but we will still have an idea of how
worthwhile it is to go anywhere from n.

LOCAL SEARCH ALGORITHMS AND OPTIMIZATION PROBLEMS


 If the path to the goal does not matter, we might consider a different class of
algorithms, ones that do not worry about paths at all.
 Local search algorithms operate using a single current node (rather than multiple
paths) and generally move only to neighbors of that node. Typically, the paths
followed by the search are not retained.
 Although local search algorithms are not systematic, they have two key advantages:
(1) they use very little memory—usually a constant amount; and
(2) they can often find reasonable solutions in large or infinite (continuous)
state spaces for which systematic algorithms are unsuitable.

 In addition to finding goals, local search algorithms are useful for solving pure
optimization problems, in which the aim is to find the best state according to an
objective function.
 To understand local search, we find it useful to consider the state-space landscape.

 A landscape has both “location” (defined by the state) and “elevation” (defined by the value
of the heuristic cost function or objective function).
 If elevation corresponds to cost, then the aim is to find the lowest valley—a global
minimum; if elevation corresponds to an objective function, then the aim is to find the
highest peak—a global maximum.
 Local search algorithms explore this landscape.
 A complete local search algorithm always finds a goal if one exists; an optimal algorithm
always finds a global minimum/maximum.

Hill-climbing search

The hill-climbing search algorithm (steepest-ascent version) is shown below:
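
A minimal Python sketch of the steepest-ascent loop; neighbors(state) and value(state) are
assumed problem-specific helpers (names are illustrative):

def hill_climbing(start, neighbors, value):
    current = start
    while True:
        # Pick the highest-valued neighbor of the current state.
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current          # peak reached: no neighbor is better
        current = best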

 It is simply a loop that continually moves in the direction of increasing value—that is,
uphill.
 It terminates when it reaches a “peak” where no neighbor has a higher value.
 The algorithm does not maintain a search tree, so the data structure for the current
node need only record the state and the value of the objective function.
 Hill climbing does not look ahead beyond the immediate neighbors of the current
state.
 Hill climbing is sometimes called greedy local search because it grabs a good
neighbor state without thinking ahead about where to go next.

 Hill climbing often makes rapid progress toward a solution because it is usually quite
easy to improve a bad state.

Hill climbing often gets stuck for the following reasons:

• Local maxima: a local maximum is a peak that is higher than each of its neighboring states
but lower than the global maximum. Hill-climbing algorithms that reach the vicinity of a
local maximum will be drawn upward toward the peak but will then be stuck with nowhere
else to go.
• Ridges: Ridges result in a sequence of local maxima that is very difficult for greedy
algorithms to navigate.
• Plateaux: a plateau is a flat area of the state-space landscape. It can be a flat local
maximum, from which no uphill exit exists, or a shoulder, from which progress is
possible.

The variants of hill climbing are:

1. Stochastic hill climbing chooses at random from among the uphill moves; the
probability of selection can vary with the steepness of the uphill move. This usually
converges more slowly than steepest ascent, but in some state landscapes, it finds
better solutions.
2. First-choice hill climbing implements stochastic hill climbing by generating
successors randomly until one is generated that is better than the current state. This is
a good strategy when a state has many (e.g., thousands) of successors.
3. Random-restart hill climbing adopts the well-known adage, “If at first you don’t
succeed, try, try again.” It conducts a series of hill-climbing searches from randomly
generated initial states, until a goal is found.

Simulated annealing

 A hill-climbing algorithm that never makes “downhill” moves toward states with
lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck
on a local maximum.
 In contrast, a purely random walk—that is, moving to a successor chosen uniformly at
random from the set of successors—is complete but extremely inefficient.
 Therefore, it seems reasonable to try to combine hill climbing with a random walk in
some way that yields both efficiency and completeness. Simulated annealing is such
an algorithm.
 In metallurgy, annealing is the process used to temper or harden metals and glass by
heating them to a high temperature and then gradually cooling them, thus allowing the
material to reach a low energy crystalline state.
 To explain simulated annealing, we switch our point of view from hill climbing to
gradient descent (i.e., minimizing cost) and imagine the task of getting a ping-pong
ball into the deepest crevice in a bumpy surface.
 If we just let the ball roll, it will come to rest at a local minimum.
 If we shake the surface, we can bounce the ball out of the local minimum.
 The trick is to shake just hard enough to bounce the ball out of local minima but not
hard enough to dislodge it from the global minimum.
 The simulated-annealing solution is to start by shaking hard (i.e., at a high
temperature) and then gradually reduce the intensity of the shaking (i.e., lower the
temperature).
 The innermost loop of simulated annealing is similar to hill climbing; instead of
picking the best move, however, it picks a random move.
 If the move improves the situation, it is always accepted.
 Otherwise, the algorithm accepts the move with some probability less than 1.
 The probability decreases exponentially with the “badness” of the move—the amount
ΔE by which the evaluation is worsened.
 The probability also decreases as the “temperature” T goes down: “bad” moves are
more likely to be allowed at the start when T is high, and they become more unlikely
as T decreases.
 If the schedule lowers T slowly enough, the algorithm will find a global optimum
with probability approaching 1.
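
A minimal Python sketch of the loop just described, written for minimization; neighbor(state),
cost(state), and the cooling schedule are assumed problem-specific (names are illustrative):

import math
import random

def simulated_annealing(start, neighbor, cost, schedule, steps=10000):
    current = start
    for t in range(1, steps + 1):
        T = schedule(t)                      # current "temperature"
        if T <= 0:
            return current
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)   # ΔE: "badness" of the move
        # Always accept improvements; accept bad moves with probability e^(-ΔE/T).
        if delta < 0 or random.random() < math.exp(-delta / T):
            current = candidate
    return current

# Example cooling schedule: T decays exponentially with time.
schedule = lambda t: 100 * (0.95 ** t)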

Local beam search

 The local beam search algorithm keeps track of k states rather than just one.
 It begins with k randomly generated states.
 At each step, all the successors of all k states are generated. If any one is a goal, the
algorithm halts.
 Otherwise, it selects the k best successors from the complete list and repeats.
 At first sight, a local beam search with k states might seem to be nothing more than
running k random restarts in parallel instead of in sequence.
 In fact, the two algorithms are quite different.
 In a random-restart search, each search process runs independently of the others. In a
local beam search, useful information is passed among the parallel search threads.
 In its simplest form, local beam search can suffer from a lack of diversity among the
k states—they can quickly become concentrated in a small region of the state space,
making the search little more than an expensive version of hill climbing.
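
A minimal Python sketch of the simplest form; successors(state), value(state), and
is_goal(state) are assumed problem-specific helpers (names are illustrative):

def local_beam_search(initial_states, successors, value, is_goal, k):
    states = list(initial_states)            # k randomly generated start states
    while True:
        # Generate all successors of all k current states.
        candidates = [s2 for s in states for s2 in successors(s)]
        for s in candidates:
            if is_goal(s):
                return s                     # halt as soon as any goal appears
        if not candidates:
            return max(states, key=value)
        # Keep the k best successors from the combined pool.
        states = sorted(candidates, key=value, reverse=True)[:k]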

