Unit – 3 Problem Solving With Searching
Problem Formulation
Path – The sequence of actions forms a path, and a solution is a path from the initial state to a goal state.
The Romania Problem
• An agent is in the city of Arad, Romania, enjoying a touring vacation.
• The agent wants to
• See the sights,
• Enjoy nightlife,
• Improve its Romanian,
• Avoid hangovers, and so on.
• The agent has a non-refundable ticket to fly out of Bucharest the following day.
The goal test – determines whether a given state is a goal state.
Goal states = {In(Bucharest)}
The path cost function – assigns a numeric cost to each path.
Optimal Solution / how do we know we have achieved the goal?
• A solution to the problem is:
• An action sequence that leads from the initial state to a goal state
• Solution quality is measured by the path cost function
• An optimal solution has the lowest path cost among all solutions
Abstraction when formulating Problems
• There are so many details in the actual world.
• Actual world state = the travelling companion, the current radio
program, the scenery out of the window, the condition of the road,
the weather etc.
• Abstract mathematical state = In(Arad)
• We leave all the other considerations out of the state description because
they are irrelevant to the problem of finding a route to Bucharest.
EXAMPLE PROBLEMS
8 Puzzle Problem
The 8-puzzle, a small but non-trivial problem, holds a central place in AI
because of its relevance in understanding and developing algorithms for
more complex problems.
3. Transition model
Given the state and action, this returns the resulting state.
4. Goal Test
This checks whether the state matches the goal configuration.
5. Path cost
Each step costs 1, so the path cost is the number of steps in the path.
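As a concrete illustration, here is a minimal Python sketch of this formulation, assuming a state is represented as a 9-tuple read row by row, with 0 marking the blank tile:

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)
MOVES = {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}

def actions(state):
    # Actions = legal moves of the blank tile.
    i = state.index(0)
    acts = []
    if i >= 3:     acts.append('Up')
    if i <= 5:     acts.append('Down')
    if i % 3 != 0: acts.append('Left')
    if i % 3 != 2: acts.append('Right')
    return acts

def result(state, action):
    # Transition model: swap the blank with the neighbouring tile.
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL   # path cost: each step costs 1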
The Vacuum World Problem
Figure: The state-space graph for the vacuum world. There are 8 states and 3 actions
for each state: L=Left, R=Right, S=Suck
Formulating Vacuum World Problem
States: the state is determined by both the agent location and the dirt
locations. The agent is in one of two locations, each of which might or
might not contain dirt.
- Possible world states: 2 × 2² = 8
- If there are n = 4 rooms: n × 2ⁿ = 4 × 2⁴ = 64
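A quick sanity check of this count in Python, representing each state as an (agent location, dirt in A, dirt in B) tuple:

from itertools import product

# State = (agent location, dirt in A?, dirt in B?)  ->  2 * 2**2 = 8 states
states = list(product(['A', 'B'], [True, False], [True, False]))
print(len(states))  # 8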
Step 4: This process continues until the goal node is reached.
Algorithm:
Step 1: PUSH the starting node into the stack.
Step 2: If the stack is empty then stop and return failure.
Step 3: If the top node of the stack is the goal node, then stop and
return success.
Step 4: Else POP the top node from the stack and process it. Find all its
neighbours that are in ready state and PUSH them into the stack in any
order.
Step 5: Go to Step 2.
Step 6: Exit.
As in the example given below, the DFS algorithm traverses from A to B, B
to C, C to D, back from D to C, and then C to F.
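A minimal Python sketch of the stack-based procedure above, assuming the graph is given as an adjacency dict; a visited set is added to guard against the infinite-loop problem noted under the disadvantages:

def dfs(graph, start, goal):
    # Stack-based DFS; returns the path found, or None on failure.
    stack = [[start]]                      # Step 1: PUSH the starting node
    visited = set()
    while stack:                           # Step 2: empty stack => failure
        path = stack.pop()                 # Step 4: POP the top node
        node = path[-1]
        if node == goal:                   # Step 3: goal test
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in reversed(graph.get(node, [])):  # PUSH ready neighbours
            stack.append(path + [nbr])             # (reversed: explore left first)
    return None

graph = {'A': ['B'], 'B': ['C'], 'C': ['D', 'F'], 'D': [], 'F': []}
print(dfs(graph, 'A', 'F'))   # ['A', 'B', 'C', 'F'], after visiting D first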
Advantages:
• DFS consumes very little memory.
• It can reach the goal node in less time than BFS if it happens to
traverse the right path first.
• It may find a solution without examining much of the search space,
because the desired solution may lie along the very first path explored.
Disadvantages:
• May find a sub-optimal solution (one that is deeper or more costly
than the best solution).
• It may also enter an infinite loop when the state space contains
cycles or infinite paths.
Application of Depth First Search
Depth First Search (DFS) is a popular algorithm used in computer science to
traverse and search through a graph or tree structure. Here are some common
applications of DFS:
Finding connected components: DFS can be used to identify all the connected
components in an undirected graph.
Cycle detection: DFS can be used to detect cycles in a graph. If a node on the
current traversal path is encountered again, there is a cycle in the graph.
Pathfinding: DFS can be used to find a path between two nodes in a graph.
Solving puzzles: DFS can be used to solve puzzles such as mazes, where the goal is
to find a path from the start to the end.
Spanning trees: DFS can be used to construct a spanning tree of a graph. A
spanning tree is a subgraph of a connected graph that includes all the vertices of
the original graph and is also a tree.
Backtracking: DFS can be used for backtracking in algorithms like the N-Queens
problem or Sudoku.
3. Depth Limited Search (DLS)
• Depth limited search is an uninformed search algorithm similar to
Depth First Search (DFS). It can be considered equivalent to DFS with a
predetermined depth limit 'l'. Nodes at depth l are treated as nodes
without any successors.
• Depth limited search may be thought of as a solution to DFS's infinite-path
problem: DFS is run only to a finite depth 'l', where 'l' is the depth limit.
• A Depth First Search starts at the root node and follows each branch to
its deepest node before moving on to the next path. The problem with DFS
is that this can lead to an infinite loop.
• By incorporating a specified limit, termed the depth limit, the Depth
Limited Search algorithm eliminates the DFS algorithm's infinite-path
problem.
• In a graph, the depth limit is the point beyond which no nodes are
explored.
Depth Limited Search Algorithm
We are given a graph G and a depth limit 'l'. Depth Limited
Search is carried out in the following way:
1. Set STATUS=1 (ready) for each of the given nodes in graph G.
2. Push the source (starting) node onto the stack and set its
STATUS=2 (waiting).
3. Repeat steps 4 and 5 until the stack is empty or the goal node
has been reached.
4. Pop the top node T of the stack and set its STATUS=3 (visited).
5. Push all the neighbours of T that are in the ready state (STATUS=1)
and whose depth is less than or equal to the depth limit 'l' onto the
stack, and set their STATUS=2 (waiting).
(END OF LOOP)
6. END
A Depth Limited Search terminates when one of the following
conditions is satisfied:
• When we reach the target node.
• Once all of the nodes within the specified depth limit have been
visited.
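A minimal Python sketch of depth-limited search, written recursively rather than with the explicit stack above, and assuming an adjacency-dict graph; it returns the path if the goal is found within the limit, else None:

def dls(graph, node, goal, limit, path=None):
    # Depth-limited DFS: nodes at depth 'limit' are treated as leaves.
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:                     # depth limit reached: no successors
        return None
    for nbr in graph.get(node, []):
        found = dls(graph, nbr, goal, limit - 1, path)
        if found:
            return found
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['F'], 'D': [], 'F': []}
print(dls(graph, 'A', 'F', limit=2))   # ['A', 'C', 'F']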
• UCS is both optimal and complete when dealing with positive path
costs, as it always finds the least-cost solution, provided one exists.
This quality makes it particularly advantageous in applications where
path cost minimization is a priority, ensuring that the solution found is
both achievable and cost-effective.
The UCS Algorithm
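A minimal Python sketch of UCS using a priority queue (heapq), assuming the graph is an adjacency dict mapping each node to (neighbour, cost) pairs:

import heapq

def ucs(graph, start, goal):
    # Uniform-cost search: always expand the cheapest frontier node.
    frontier = [(0, start, [start])]       # (path cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path              # least-cost solution
        if node in explored:
            continue
        explored.add(node)
        for nbr, step in graph.get(node, []):
            if nbr not in explored:
                heapq.heappush(frontier, (cost + step, nbr, path + [nbr]))
    return None

graph = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1)], 'C': []}
print(ucs(graph, 'A', 'C'))   # (2, ['A', 'B', 'C'])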
5. Bidirectional Search
Properties
• Source node and Goal node are unique and known
• Same branching factor on both sides of search space
• Fast and complete if BFS is used
• Time complexity is reduced
Step 1: Say A is the initial node, O is the goal node, and H is the
intersection node.
Step 2: We start searching simultaneously, forward from the start node
and backward from the goal node.
Step 3: Whenever the forward search and the backward search intersect at
one node, the searching stops.
Thus, bidirectional search is possible when both the start node and the
goal node are known, unique, and distinct from each other, and when the
branching factor is the same for both traversals of the graph. Note also
that bidirectional searches are complete if breadth-first search is used
for both traversals, i.e. for both the path from the start node to the
intersection and the path from the goal node to the intersection.
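A minimal Python sketch of bidirectional breadth-first search, assuming an undirected graph given as an adjacency dict; the two frontiers are expanded alternately until they intersect:

from collections import deque

def bidirectional_bfs(graph, start, goal):
    # Expand BFS frontiers from both ends until they meet.
    if start == goal:
        return [start]
    fwd, bwd = {start: None}, {goal: None}   # node -> parent on each side
    qf, qb = deque([start]), deque([goal])

    def expand(queue, this_side, other_side):
        node = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr not in this_side:
                this_side[nbr] = node
                queue.append(nbr)
                if nbr in other_side:        # the frontiers intersect here
                    return nbr
        return None

    while qf and qb:
        meet = expand(qf, fwd, bwd) or expand(qb, bwd, fwd)
        if meet:                             # stitch the two half-paths
            path, n = [], meet
            while n is not None:
                path.append(n); n = fwd[n]
            path.reverse()
            n = bwd[meet]
            while n is not None:
                path.append(n); n = bwd[n]
            return path
    return None

graph = {'A': ['B'], 'B': ['A', 'H'], 'H': ['B', 'O'], 'O': ['H']}
print(bidirectional_bfs(graph, 'A', 'O'))   # ['A', 'B', 'H', 'O']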
Advantages
• One of the main advantages of bidirectional search is the speed at which we get the
desired results.
• It drastically reduces the time taken by running the two searches simultaneously.
• It also saves resources, as less memory is required to store the frontiers of the
two searches.
Disadvantages
• The fundamental issue with bidirectional search is that the user must know the
goal state in advance to use it, which drastically decreases its use cases.
• The implementation is another challenge, as additional code and instructions are
needed, and care has to be taken at each node and step of the search.
• The algorithm must be robust enough to recognize the intersection at which the
search should come to an end, or else there is a possibility of an infinite loop.
• It is also not always possible to search backwards through states (predecessors
may be hard to generate).
6. Iterative Deepening Search (IDS)
• The Iterative Deepening Depth-First Search (or Iterative Deepening
Search) algorithm repeatedly applies depth-limited search with
increasing limits.
• It gradually increases the limit from 0, 1, 2, ..., d until the goal node is found.
The goal node G has been reached and the path we will follow is
A->C->F->G
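Building on the dls function sketched in the Depth Limited Search section, iterative deepening simply wraps it in a loop over increasing limits (the max_depth cap here is an assumption, added only to keep the loop finite):

def ids(graph, start, goal, max_depth=50):
    # Run depth-limited search with limits 0, 1, 2, ... until the goal is found.
    for limit in range(max_depth + 1):
        found = dls(graph, start, goal, limit)
        if found:
            return found
    return None

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'F': ['G']}
print(ids(graph, 'A', 'G'))   # ['A', 'C', 'F', 'G']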
Local Search and Optimization
• Local search algorithms operate by moving from a start state to
neighbouring states, without keeping track of the paths taken or of the
set of states that have been reached.
• That means they are not systematic.
• They might never explore a portion of search space where a solution
actually resides.
• However, they have two key advantages:
• They use very little memory
• They can often find reasonable solutions in large or infinite state spaces for
which systematic algorithms are unsuitable
• In the context of local search algorithms, the "local" aspect refers to
the limited scope of the search.
• These algorithms are designed to optimize within a constrained
neighborhood of the current state, as opposed to global optimization
methods that attempt to find the global optimum across the entire
solution space.
Components of Local Search Algorithms
1. Initial State: The initial state, also known as the starting point, is
where the local search begins.
2. Neighbors: Neighbors are solutions that are closely related to the
current state. Neighbors are essential because local search
algorithms focus on refining the current solution by examining
these nearby options.
3. Objective Function: This function quantifies the quality or
desirability of a solution. It assigns a numerical value to each
solution, reflecting how close it is to the optimal solution. The
objective function guides the search process by helping the
algorithm select the most promising neighbors for exploration.
Types of Local search Algorithms
• Hill Climbing Search
• Simulated annealing
• Genetic algorithm
Hill Climbing Search
• Hill climbing belongs to the class of local search algorithms. Unlike
methods like brute force search that explore the entire problem
space, hill climbing focuses the search on a promising local region.
• It keeps track of one current state and on each iteration moves to the
neighbouring state with the highest value. That is, it heads in the
direction that provides the steepest ascent.
• It terminates when it reaches a peak where no neighbour has a higher
value.
• Hill climbing is sometimes called greedy local search because it grabs
a good neighbour state without thinking ahead about where to go
next.
• No backtracking: it does not backtrack through the search space, as it
does not remember previous states.
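A minimal Python sketch of steepest-ascent hill climbing, assuming the problem supplies a neighbours function and an objective value function:

def hill_climbing(start, neighbours, value):
    # Move to the best neighbour until no neighbour is better (a peak).
    current = start
    while True:
        best = max(neighbours(current), key=value, default=current)
        if value(best) <= value(current):
            return current     # a peak, though possibly only a local maximum
        current = best

# Toy example: maximize -(x - 3)^2 over the integers (global maximum at x = 3)
value = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climbing(0, neighbours, value))   # 3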
State-space Diagram for Hill Climbing:
If the function on the Y-axis is a cost function, then the goal of the
search is to find the global minimum; if it is an objective function, the
goal is to find the global maximum.
Different regions in the state space landscape
Local Maximum: Local maximum is a state which is better than its neighbor
states, but there is also another state which is higher than it.
Global Maximum: Global maximum is the best possible state of state space
landscape. It has the highest value of objective function.
Flat local maximum: It is a flat region of the landscape where all the neighbor
states of the current state have the same value.
• The drawback of hill climbing is that it never makes downhill moves towards the
state with lower value
Simulated Annealing
• Simulated Annealing in AI is a popular optimization technique inspired by the
annealing process in metallurgy, in which a material is heated and then slowly
cooled to remove defects and improve its properties.
• Similarly, in AI, simulated annealing helps find near-optimal solutions to
complex problems by exploring different solutions and gradually converging to an
optimal or near-optimal solution.
• The method is widely used in combinatorial optimization, where problems often
have numerous local optima
• Simulated Annealing excels in escaping these local minima by introducing
controlled randomness in its search, allowing for a more thorough exploration of
the solution space.
• Simulated annealing is a powerful optimization technique inspired by
the annealing process in metallurgy. It balances exploration and
exploitation by allowing occasional moves to worse solutions with a
probability that decreases over time. This helps in finding the global
optimum in complex search spaces.
Simulated annealing state space diagram
• Simulated annealing improves on hill climbing through the introduction
of two tricks:
• The algorithm picks a random move instead of picking the best move.
• If the move improves the result, it accepts it; otherwise it accepts
the move with some probability less than 1.
How Simulated Annealing Works
Imagine a technique inspired by the principles of metallurgy, adept at
solving AI’s most intricate optimization puzzles. Simulated Annealing,
with its unique approach, mirrors the heating and cooling process of
metallurgy to navigate through the complexities of AI algorithms.
• Start with High Temperature: The process begins at a high
‘temperature’, setting a broad scope for exploration, akin to
metallurgy’s initial phase of intense heat.
• Exploratory Adjustments and Probabilistic Techniques: At this
stage, SA explores various solutions, making significant leaps to
avoid being trapped in local minima, utilizing a probabilistic
technique for variable value adjustments.
• Gradual Cooling: As the process progresses, the ‘temperature’
lowers gradually, resembling the controlled cooling in metallurgy.
This reduces the extent of search space exploration, focusing more
on refinement.
• Refinement and Finalization: In its final phase, SA homes in on the
most promising solutions, much like the precision required in
metallurgy, ensuring an efficient approach to finding the global
optimum.
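A minimal Python sketch of this schedule, minimizing a toy energy function; the acceptance probability exp(-dE/T) is the standard Metropolis criterion, and the geometric cooling rate used here is an assumption:

import math, random

def simulated_annealing(start, neighbour, energy,
                        T=100.0, cooling=0.95, T_min=1e-4):
    # Accept worse moves with probability exp(-dE/T); T decays each step.
    current = start
    while T > T_min:
        candidate = neighbour(current)      # pick a *random* move
        dE = energy(candidate) - energy(current)
        if dE < 0 or random.random() < math.exp(-dE / T):
            current = candidate             # accept, possibly a worse move
        T *= cooling                        # gradual cooling
    return current

# Toy example: minimize (x - 3)^2
energy = lambda x: (x - 3) ** 2
neighbour = lambda x: x + random.uniform(-1, 1)
print(simulated_annealing(10.0, neighbour, energy))   # usually close to 3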
Genetic Algorithm
• A genetic algorithm is a heuristic search algorithm inspired by Charles
Darwin's theory of natural evolution.
• The algorithm reflects the process of natural selection, where the fittest
individuals are selected for reproduction in order to produce the offspring of
the next generation.
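A minimal Python sketch of these ideas, evolving bit strings toward the all-ones string; the fitness function and the selection and mutation rates are toy assumptions for illustration:

import random

def genetic_algorithm(pop_size=20, length=12, generations=100):
    # Selection + crossover + mutation over bit-string individuals.
    fitness = lambda ind: sum(ind)              # toy fitness: count of 1s
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]           # fittest half reproduce
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, length)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.05:          # occasional mutation
                i = random.randrange(length)
                child[i] ^= 1
            offspring.append(child)
        pop = offspring
    return max(pop, key=fitness)

print(genetic_algorithm())   # usually [1, 1, 1, ..., 1]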