Unit – 3 Problem Solving with Searching

• We will cover several search algorithms.
• In this chapter we consider only the simplest environment: episodic,
single-agent, fully observable, deterministic, static, discrete and
known.
• We distinguish between informed algorithms, in which an agent can
estimate how far it is from the goal, and uninformed algorithms,
where no such estimate is available.
Uninformed Search
Here, we see how an agent can look ahead to find a sequence of actions that
will eventually achieve its goal.
Problem Solving
• An important application of AI is problem solving
• A problem-solving agent finds a sequence of actions that leads to a goal
Example : Sudoku
Steps in Problem Solving
• Define the problem statement first
• Generate the solution, keeping the different conditions in mind
Searching
• It is the most commonly used technique of problem solving in AI.
• The process of looking for a sequence of actions/states that reaches the
goal.
Problem solving Agent
• Problem-solving agents are goal-driven agents
Steps performed by a Problem Solving agent
Goal Formulation
Goal formulation is based on the current situation and the agent's
performance measure.
It organizes the steps required to reach that goal.

Problem Formulation

Problem formulation is the process of deciding what actions should be
taken to achieve the formulated goal.
Components involved in Problem Formulation
Well-defined Problems and Solutions
1. Initial state – the state that the agent starts in.
2. Actions – a description of the possible actions available to the agent.
3. Transition model – a description of what each action does.
4. Goal Test – determines whether a given state is a goal state.
Sometimes there is an explicit set of goal states, and the test simply
checks whether the given state is one of them.
5. Path Cost – the path cost function assigns a numeric cost to each
path. The problem-solving agent chooses a cost function that
reflects its performance measure.

Path – A sequence of actions forms a path, and a solution is a path from
the initial state to a goal state.
The Romania Problem
• An agent is in the city of Arad, Romania, enjoying a touring vacation.
• The agent wants to
• take in the sights,
• enjoy the nightlife,
• improve its Romanian,
• avoid hangovers, and so on.
• The agent has a non-refundable ticket to fly out of Bucharest the
following day.

Goal : Reach Bucharest on time.


Well defined Problems and Solutions
A problem can be defined formally by five components
 The initial state that an agent starts in
 Initial state for our agent = In(Arad)

 A description of possible actions available to the agent


 From the state In(Arad) the applicable actions are {Go(Sibiu),Go(Timisoara),
Go(Zerind)}

 A description of what each action does (transition model) – which
describes what happens in a given state after you take an action
 Successor – any state reachable from a given state by a single action
 RESULT(In(Arad), Go(Zerind)) = In(Zerind)
 The initial state, actions and transition model implicitly define the state space
of the problem – the set of all states reachable from the initial state by any
sequence of actions.
 The state space forms a directed network (graph) in which nodes are states
and links are actions.

 The Goal Test which determines whether the state is in goal state
 Goal states = {In(Bucharest)}
 The path cost function – assigns numeric cost to each path.
Optimal Solution / how do we know we achieve the goals
• A solution to the problem is :
• An action sequence that leads from initial state to goal state
• Solution quality is measured by the path cost function
• An Optimal solution has the lowest path cost among all solutions
Abstraction when formulating Problems
• There are so many details in the actual world.
• Actual world state = the travelling companion, the current radio
program, the scenery out of the window, the condition of the road,
the weather etc.
• Abstract mathematical state = In(Arad)
• We leave all the other considerations out of the state description because
they are irrelevant to the problem of finding a route to Bucharest.
EXAMPLE PROBLEMS
8 Puzzle Problem
The 8-puzzle, often regarded as a small, solvable piece of a larger
puzzle, holds a central place in AI because of its relevance in
understanding and developing algorithms for more complex problems.

Rules and Constraints:


• The 8-puzzle is typically played on a 3x3 grid, which provides nine
positions for the tiles. This grid structure is fundamental to the
problem's organization.
• The puzzle comprises 8 numbered tiles (usually from 1 to 8) and one
blank tile. These numbered tiles can be slid into adjacent positions
(horizontally or vertically) when there's an available space, which is
occupied by the blank tile.
• The objective of the 8-puzzle is to transform an initial state, defined
by the arrangement of the tiles on the grid, into a specified goal state.
• The goal state is often a predefined configuration, such as having the
tiles arranged in ascending order from left to right and top to bottom,
with the blank tile in the bottom-right corner.
Initial state and Goal state
States : A state description specifies the location of each tile.
1. Initial State
• The initial state of the 8-puzzle represents the starting configuration. It's
the state from which the puzzle-solving process begins.
• The initial state can be any arrangement of the tiles, which can be specified
manually or generated randomly.
• The problem-solving algorithm aims to transform the initial state into the
goal state using a sequence of valid moves.
2. Actions
The simplest formulation defines the actions as movement of blank space
left, right, up or down. Different subsets of these are possible depending
upon where the blank is.

3. Transitional model
Given the state and action, this returns the resulting state.

4. Goal Test
This checks whether the state matches the goal configuration.

5. Path cost
Each step costs 1, so the path cost is the number of steps in the path.
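The formulation above can be sketched in code. This is a minimal sketch, under the assumption that a state is a 9-tuple read row by row with 0 standing for the blank tile; the names `actions`, `result` and `goal_test` mirror the components listed above and are illustrative.

```python
# 8-puzzle formulation sketch (assumed representation: a state is a tuple of
# 9 entries read row by row, 0 marking the blank tile).

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # blank in the bottom-right corner

def actions(state):
    """Possible moves of the blank: a subset of Up/Down/Left/Right."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if row > 0: moves.append('Up')
    if row < 2: moves.append('Down')
    if col > 0: moves.append('Left')
    if col < 2: moves.append('Right')
    return moves

def result(state, action):
    """Transition model: return the state after sliding the blank."""
    i = state.index(0)
    offset = {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}[action]
    j = i + offset
    s = list(state)
    s[i], s[j] = s[j], s[i]          # swap blank with the neighbouring tile
    return tuple(s)

def goal_test(state):
    return state == GOAL
```

For example, from a state with the blank one step left of its goal position, the action `Right` produces the goal state.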
The Vacuum World Problem
Figure : The state-space graph for vacuum world. There are 8 states and 3 actions
for each state : L=left, R=right, S=Suck
Formulating Vacuum World Problem
States : the state is determined by both the agent location and the dirt
locations. The agent is in one of two locations, each of which might or
might not contain dirt.
- Possible world states : 2 × 2² = 8
- For n rooms: n × 2ⁿ (if 4 rooms, then 4 × 2⁴ = 64)

1. Initial state – any state may be designated as initial state


2. Action : Left, Right, Suck
3. Transition model – The actions have their expected effects, except that
moving Left in the leftmost square, moving Right in the rightmost
square and Sucking in a clean square have no effect.
4. Goal state – check whether all squares are clean
5. Path Cost – each step costs 1, so the path cost is the number of steps in the path.
Real World Problems
Route-finding problems are common in the real world. Consider the airline
travel problem that must be solved by a travel-planning website.
States : each state includes a location (i.e., an airport) and the current
time
Initial state – the user’s home airport (specified by user’s query)
Action – take any flight from current location, in any seat class, leaving
after the current time, leaving enough time for within-airport transfer if
needed.
Transitional model – the state resulting from taking a flight will have
the flight’s destination as the new location and the flight’s arrival time
as the new time.
Goal Test – whether we are at the destination city. Sometimes the goal can be
more complex, such as 'arrive at the destination on a nonstop flight'.

Path cost – a combination of monetary cost, waiting time, flight time,
customs and immigration procedures, seat quality, time of day, type of
airline, frequent-flyer reward points and so on.
Problem Types
1. Deterministic/observable (single state problems)
2. Non-observable (Multiple-state problems / conformant problem)
3. Non-deterministic/partially observable(contingency problem)
4. Unknown state space problems
1. Deterministic/observable (single state problems)
• Each state is fully observable and it goes to one definite state after
any action.
• Here the goal state is reachable in one single action or sequence of
actions
• Deterministic environment ignore uncertainty
• Example : Vacuum cleaner with sensor

Note: the environment is fully observable and we define exactly what to do
in the next step, so the problem is well defined.
2. Non-observable (Multiple-state problems / conformant problem)
• Problem solving agent does not have any information about the state
• Solution may or may not be reached
• Example: in the case of a vacuum cleaner without sensors, the goal is a
clean floor, but the agent cannot perceive where the dirt is.
• So in the non-observable condition, as there is no sensor, it will have to
suck irrespective of whether it is in the left or right square. Here the
solution space is the sequence of states specifying its movement across the floor.
3. Non-deterministic/partially observable(contingency problem)
• The effect of action is not clear
• Percept provide new information about the current state
• Example if we take a vacuum cleaner, and now assume that sensor is
attached to it, then it will suck if there is dirt. Movement of the
cleaner will be based on the current percept.
4. Unknown state space problems
• It is typically exploration problems
• States and impact of actions are not known
• Example : Online search that involves acting without complete
knowledge of the next state or scheduling without map.
Search Algorithms
A search algorithm takes a search problem as input and returns a
solution, or an indication of failure.
It is important to understand the distinction between the state space
and the search tree.
The state space describes the (possibly infinite) sets of states in the
world, and the actions that allow transition from one state to another.
The search tree describes the paths between these states, reaching
towards the goal.
There are 2 types of Search
1. Informed search
• Search with information
• Uses knowledge to find steps to the solution
• Quicker solution
• Less complex
• Examples: greedy best-first search, A* search (both use heuristic functions)
2. Uninformed search
• Search without information
• No domain knowledge
• Time consuming
• More complex
• Examples: BFS, DFS, etc.
Uninformed Search
1. Breadth First Search(BFS)
• Breadth-first search (BFS) is an algorithm used to traverse or search
graph and tree data structures.
• It is particularly useful for finding the shortest path in an unweighted
graph.
• The algorithm efficiently visits and marks all the key nodes in a graph
in a breadthwise fashion. It selects a single node (the initial or
source point) in a graph and then visits all the nodes adjacent to the
selected node. Remember, BFS accesses these nodes one by one.
• The algorithm is useful for analyzing the nodes in a graph and
constructing the shortest paths through them.
• For shallow goals, it reaches a solution in few iterations, since it
explores the graph level by level.
• Once the algorithm visits and marks the starting node, it moves
towards the nearest unvisited nodes and analyses them; each node is
marked as soon as it is visited.
• These iterations continue until all the nodes of the graph have been
successfully visited and marked.

• The time complexity of BFS is O(V + E), where V is the number of
vertices and E is the number of edges in the graph. This is because
every vertex and every edge will be explored in the worst-case
scenario.
• Space complexity is of O(V), where V is the number of vertices in the
graph
• You can mark any node in the graph as the starting or initial node to
begin traversing. BFS visits that node, marks it as visited and places
it in the queue.
• Next, BFS visits the nearest unvisited nodes and marks them. These are
also added to the queue. The queue works on the FIFO model.
• In a similar manner, the remaining nearest unvisited nodes on the graph
are analyzed, marked and added to the queue. Items are removed from the
queue in order and processed as the result.
Rules of BFS Algorithm
Here, are important rules for using BFS algorithm:
• A queue (FIFO-First in First Out) data structure is used by BFS.
• You mark any node in the graph as root and start traversing the data from
it.
• BFS traverses all the nodes in the graph, marking each as completed
once processed.
• BFS visits an adjacent unvisited node, marks it as visited, and inserts it into
the queue.
• It removes the previous vertex from the queue when no unvisited adjacent
vertex remains.
• BFS algorithm iterates until all the vertices in the graph are successfully
traversed and marked as completed.
• There are no loops caused by BFS during the traversing of data from any
node.
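The rules above can be sketched as a short implementation. This is a minimal sketch, assuming the graph is given as an adjacency dictionary; the function name and representation are illustrative, not from the original text. Nodes are marked visited when they are enqueued, so no node is processed twice and no loops occur.

```python
from collections import deque

def bfs(graph, start, goal):
    """Return the shortest path (fewest edges) from start to goal, or None."""
    queue = deque([[start]])        # FIFO queue holds whole paths, not nodes
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path             # target node found: traversal ends
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None                     # queue empty: goal unreachable
```

For example, with `graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}`, `bfs(graph, 'A', 'D')` returns the two-edge path `['A', 'B', 'D']`.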
Applications of BFS Algorithm
• Navigation Systems: BFS can help find all the neighboring locations
from the main or source location.
• P2P Networks: BFS can be implemented to locate all the nearest or
neighboring nodes in a peer to peer network. This will find the
required data faster.
• Path finding algorithm - In a robot navigation system, BFS can be used
to find the shortest path for the robot to reach its destination while
avoiding obstacles.
Conditions for Ending BFS Traversal
All Nodes Visited:
BFS will end when all the nodes in the graph have been visited. This is typical in scenarios
where you want to explore the entire graph.
Example: When performing a full traversal to check for connectivity or to generate a
breadth-first spanning tree.

Target Node Found:


BFS will end as soon as the target node (or goal node) is found. This is common in
pathfinding problems where you are searching for the shortest path to a specific node.
Example: In a maze-solving problem, BFS will stop once the exit of the maze is found.

Queue Becomes Empty:


BFS uses a queue to manage the nodes to be explored. When the queue becomes empty, it
means there are no more nodes left to explore, signaling the end of the traversal.
Example: In a disconnected graph, BFS will end after exploring all reachable nodes from the
starting node and finding no more nodes in the queue.
2. Depth First Search(DFS)
• Depth First Search (DFS) is a recursive algorithm for searching all
the vertices of a graph or tree data structure.
• DFS visits all the vertices in the graph. This type of algorithm always
chooses to go deeper into the graph.
• After DFS has visited all the reachable vertices from a particular source
vertex, it chooses one of the remaining undiscovered vertices and
continues the search.
• It is implemented by using STACK(LIFO)
• The time complexity of DFS is O(V + E), where V is the number of vertices
and E is the number of edges in the graph. This is because every vertex and
every edge will be explored in the worst-case scenario.
• Space complexity is of O(V), where V is the number of vertices in the graph
Concept:

Step 1: Traverse the root node.

Step 2: Traverse any neighbour of the root node.

Step 3: Traverse any neighbour of neighbour of the root node.

Step 4: This process will continue until we are getting the goal node.
Algorithm:
Step 1: PUSH the starting node into the stack.
Step 2: If the stack is empty then stop and return failure.
Step 3: If the top node of the stack is the goal node, then stop and
return success.
Step 4: Else POP the top node from the stack and process it. Find all its
neighbours that are in ready state and PUSH them into the stack in any
order.
Step 5: Go to step 3.
Step 6: Exit.
As in the example given below, the DFS algorithm traverses from A to B, B
to C, C to D, backtracks from D to C, and then goes from C to F.
Advantages:
• DFS consumes much less memory than BFS.
• It can reach the goal node in less time than BFS if it happens to
traverse the right path.
• It may find a solution without examining much of the search space,
because the desired solution may lie on the very first path explored.

Disadvantages :
• May find a sub-optimal solution (one that is deeper or more costly
than the best solution)
• Sometimes the states may also enter into infinite loops.
Application of Depth First Search
Depth First Search (DFS) is a popular algorithm used in computer science to
traverse and search through a graph or tree structure. Here are some common
applications of DFS:

Finding connected components: DFS can be used to identify all the connected
components in an undirected graph.
Cycle detection: DFS can be used to detect cycles in a graph. If a node is visited
again during a DFS traversal, it indicates that there is a cycle in the graph.
Pathfinding: DFS can be used to find a path between two nodes in a graph.
Solving puzzles: DFS can be used to solve puzzles such as mazes, where the goal is
to find a path from the start to the end.
Spanning trees: DFS can be used to construct a spanning tree of a graph. A
spanning tree is a subgraph of a connected graph that includes all the vertices of
the original graph and is also a tree.
Backtracking: DFS can be used for backtracking in algorithms like the N-Queens
problem or Sudoku.
3. Depth Limited Search (DLS)
• Depth limited search is an uninformed search algorithm which is similar to
Depth First Search(DFS). It can be considered equivalent to DFS with a
predetermined depth limit 'l'. Nodes at depth l are considered to be nodes
without any successors.
• Depth limited search may be thought of as a solution to DFS's infinite path
problem; in the Depth limited search algorithm, DFS is run for a finite
depth 'l', where 'l' is the depth limit.
• Before moving on to the next path, a Depth First Search starts at the root
node and follows each branch to its deepest node. The problem with DFS is
that this could lead to an infinite loop.
• By incorporating a specified limit termed as the depth limit, the Depth
Limited Search Algorithm eliminates the issue of the DFS algorithm's
infinite path problem;
• In a graph, the depth limit is the point beyond which no nodes are
explored.
Depth Limited Search Algorithm
We are given a graph G and a depth limit 'l'. Depth Limited
Search is carried out in the following way:
1.Set STATUS=1(ready) for each of the given nodes in graph G.
2.Push the Source node or the Starting node onto the stack and
set its STATUS=2(waiting).
3.Repeat steps 4 to 5 until the stack is empty or the goal node
has been reached.
4.Pop the top node T of the stack and set its STATUS=3(visited).
5.Push all the neighbours of node T onto the stack in the ready
state (STATUS=1) and with a depth less than or equal to depth
limit 'l' and set their STATUS=2(waiting).
(END OF LOOP)
6.END
When one of the following instances are satisfied, a Depth
Limited Search can be terminated.
• When we get to the target node.
• Once all of the nodes within the specified depth limit have been
visited.
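The procedure above can be sketched in code. This is a minimal sketch, assuming an adjacency-dictionary graph; for brevity it tracks depth via the path length rather than explicit STATUS flags, but the behaviour matches the algorithm: nodes at depth ℓ are treated as having no successors.

```python
def dls(graph, start, goal, limit):
    """Return a path from start to goal of depth <= limit, or None."""
    stack = [[start]]
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path                     # target node reached
        if len(path) - 1 < limit:           # depth = edges from the root
            for neighbour in graph.get(node, []):
                if neighbour not in path:   # avoid cycles along this path
                    stack.append(path + [neighbour])
    return None                             # all nodes within the limit visited
```

For example, on the chain A→B→C the goal C sits at depth 2, so a limit of 1 fails while a limit of 2 succeeds, illustrating the disadvantage noted below.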

• Completeness: the DLS algorithm is complete if the solution lies within
the depth limit.
• Time Complexity: the time complexity of DLS is O(b^ℓ), where b is
the branching factor of the search tree and ℓ is the depth limit.
• Space Complexity: the space complexity of DLS is
O(b×ℓ), where b is the branching factor of the search tree and ℓ is the
depth limit.
Advantages of Depth Limited Search
1.Depth limited search is more efficient than DFS, using less time and
memory.
2.If a solution exists within the depth limit, DLS guarantees that it will
be found in a finite amount of time.
3.To address the drawbacks of DFS, we set a depth limit and run our
search technique repeatedly through the search tree.
4.DLS has applications in graph theory that are highly comparable to
DFS.
Disadvantages of Depth Limited Search
1.For this method to work, it must have a depth limit.
2.If the target node does not exist inside the chosen depth limit, the
user will be forced to iterate again, increasing execution time.
3.If the goal node does not exist within the specified limit, it will not be
discovered.
4. Uniform Cost Search(UCS)
• The Uniform Cost Search Algorithm is a search algorithm to find the
minimum cumulative cost of the path from the source node to the
destination node.
• It is an uninformed algorithm i.e. it doesn’t have prior information
about the path or nodes and that is why it is a brute-force approach.
• The core of Uniform Cost Search revolves around a few essential
concepts, notably the priority queue and path cost
• In UCS, nodes are stored in a priority queue, ordered by the
cumulative cost from the start node. Nodes with the lowest
cumulative cost are expanded first, ensuring UCS follows the least-
cost path as it searches.
• The goal state is the target node that UCS aims to reach at the
minimum cost. The algorithm continues to expand nodes until it
encounters the goal with the lowest path cost. This expansion order is
what differentiates UCS from other algorithms—it does not blindly
expand in layers or depth but rather in the order of least cost.

• UCS is both optimal and complete when dealing with positive path
costs, as it always finds the least-cost solution, provided one exists.
This quality makes it particularly advantageous in applications where
path cost minimization is a priority, ensuring that the solution found is
both achievable and cost-effective.
The UCS algorithm

• Create a priority queue, a boolean visited array of the size of the
number of nodes, and a min_cost variable initialized with the maximum
value. Add the source node to the queue and mark it visited.
• Pop the element with the highest priority (i.e., the lowest cumulative
cost) from the queue. If the removed node is the destination node, check
the min_cost variable; if its value is greater than the current cost,
update the variable.
• If the given node is not the destination node, add all the unvisited
nodes adjacent to the current node to the priority queue.
Example
Input: Let the graph be as below with source node being A and
destination E.
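The example graph from the figure is not reproduced here, so the sketch below uses an assumed weighted graph with source A and destination E. It is a minimal UCS sketch: a priority queue ordered by cumulative path cost, expanding the cheapest frontier node first.

```python
import heapq

def ucs(graph, start, goal):
    """Return (cost, path) of the least-cost path, or None if unreachable."""
    frontier = [(0, [start])]           # priority queue of (cumulative cost, path)
    explored = set()
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return cost, path           # first pop of the goal is optimal
        if node in explored:
            continue                    # skip stale, more expensive entries
        explored.add(node)
        for neighbour, step_cost in graph.get(node, []):
            if neighbour not in explored:
                heapq.heappush(frontier, (cost + step_cost, path + [neighbour]))
    return None

# Assumed example with source A and destination E (edge costs are illustrative):
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2), ('D', 5)],
         'C': [('D', 1)], 'D': [('E', 3)], 'E': []}
```

Here `ucs(graph, 'A', 'E')` follows A→B→C→D→E for a total cost of 1+2+1+3 = 7, beating the direct A→C→D→E route of cost 8.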
Applications of UCS in AI
• Its primary applications include pathfinding in robotics and navigation systems
where optimal routes are critical.
• UCS is also valuable in resource management, where it optimizes the allocation of
limited resources, ensuring cost-effective usage.

Advantages of Uniform Cost Search


• UCS is optimal because it guarantees finding the least-cost path to the goal,
provided all path costs are positive.
• UCS is also complete, meaning it will find a solution if one exists, making it
reliable for pathfinding.
• Compared to Depth-First Search (DFS) and BFS, UCS is better suited for cost-
driven scenarios because it considers cumulative path costs rather than purely
distance or depth.
• Its cost-aware approach allows it to solve more complex problems effectively,
making it a preferred choice in scenarios where the minimum cost is crucial, such
as in logistics, routing, and resource management tasks.
Challenges and Limitations of UCS
• One of the primary limitations is its high memory usage; UCS stores
all explored nodes to avoid re-expansion, which can consume
substantial memory, especially in large graphs.
• Another limitation of UCS is its inefficiency in large or complex graphs
without an informed heuristic.
• Since UCS explores nodes based solely on cumulative path costs, it
may expand many nodes unnecessarily, increasing computational
time.
5. Bi- directional Search
• Bidirectional search is a graph search where unlike Breadth First
search and Depth First Search, the search begins simultaneously from
Source vertex and Goal vertex and ends when the two searches meet
somewhere in between in the graph.
• This is thus especially used for getting results in a fraction of the time
taken by both DFS and BFS searches.
• The search from the initial node is a forward search, while that from
the goal node is a backward search. When BFS is used on both sides, it
finds the shortest path to the goal optimally.
• It significantly reduces the amount of exploration done. It is
implemented using the Breadth First Search (BFS) Algorithm. BFS is
run simultaneously on two vertices - the start and the end vertex.
• The time and space complexity of bidirectional search is O(b^(d/2)).

Properties
• Source node and Goal node are unique and known
• Same branching factor on both sides of search space
• Fast and complete if BFS is used
• Time complexity is reduced
Step 1: Say, A is the initial node and O is the goal node, and H is the
intersection node.
Step 2: We will start searching simultaneously from start to goal node
and backward from goal to start node.
Step 3: Whenever the forward search and backward search intersect at
one node, then the searching stops.

Thus, it is possible when both the Start node and goal node are known
and unique, separate from each other. Also, the branching factor is the
same for both traversals in the graph. Also, other points to be noted
are that bidirectional searches are complete if a breadth-first search is
used for both traversals, i.e. for both paths from start node till
intersection and from goal node till intersection.
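The steps above can be sketched as bidirectional BFS. This is a minimal sketch under the assumptions that the graph is an undirected adjacency dictionary (so the backward search can follow edges in reverse) and that only the meeting node is reported; names are illustrative.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Return the node at which the two BFS frontiers meet, or None."""
    if start == goal:
        return start
    frontier_f, frontier_b = deque([start]), deque([goal])
    visited_f, visited_b = {start}, {goal}
    while frontier_f and frontier_b:
        # Alternate: expand one full level forward, then one backward.
        for frontier, visited, other in ((frontier_f, visited_f, visited_b),
                                         (frontier_b, visited_b, visited_f)):
            for _ in range(len(frontier)):
                node = frontier.popleft()
                for neighbour in graph.get(node, []):
                    if neighbour in other:
                        return neighbour        # the two searches have met
                    if neighbour not in visited:
                        visited.add(neighbour)
                        frontier.append(neighbour)
    return None
```

On the undirected chain A–B–C–D–E, the forward and backward frontiers meet at the middle node C after expanding roughly half the depth from each side.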
Advantages
• One of the main advantages of bidirectional searches is the speed at which we get the
desired results.
• It drastically reduces the time taken by the search by having simultaneous searches.
• It also saves resources for users as it requires less memory capacity to store all the
searches.

Disadvantages
• The fundamental issue with bidirectional search is that the user must know the
goal state in advance, which drastically decreases its use cases.
• The implementation is another challenge, as additional code and instructions
are needed, and care has to be taken at each node and step when implementing
such searches.
• The algorithm must be robust enough to recognize the intersection at which the
search should come to an end, or else there is a possibility of an infinite loop.
• It is also not always possible to search backwards through all states.
6. Iterative Deepening Search (IDS)
• The Iterative Deepening Depth-First Search (or Iterative Deepening
search) algorithm, repeatedly applies depth-limited search with
increasing limits.
• It gradually increases limits from 0,1,...d, until the goal node is found.

• It terminates in the following two cases:

• When the goal node is found


• The goal node does not exist in the graph/tree.
• Iterative Deepening Search (IDS) is an iterative graph searching
strategy that takes advantage of the completeness of the Breadth-
First Search (BFS) strategy but uses much less memory in each
iteration (similar to Depth-First Search).

• IDS achieves the desired completeness by enforcing a depth limit on
DFS, which mitigates the possibility of getting stuck in an infinite or a
very long branch. It searches each branch of a node from left to right
until it reaches the required depth. Once it has, IDS goes back to the
root node and explores a different branch, just as DFS does.
• If b is the branching factor, and d is the depth of the goal node or the
depth at which the iteration of IDDFS function terminates, the time
complexity is O(b^d) and space complexity is O(bd).

• Here ℓ = depth limit, d = depth of the goal node, m = depth of the
search tree/graph.
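The strategy above can be sketched as repeated depth-limited DFS with limits 0, 1, 2, .... This is a minimal sketch assuming an adjacency-dictionary graph; `max_depth` is an illustrative cutoff for the case where the goal does not exist.

```python
def depth_limited(graph, node, goal, limit, path):
    """Recursive DFS that stops expanding below the given depth limit."""
    if node == goal:
        return path
    if limit == 0:
        return None                         # treat this node as having no successors
    for neighbour in graph.get(node, []):
        if neighbour not in path:           # avoid cycles along this path
            found = depth_limited(graph, neighbour, goal, limit - 1,
                                  path + [neighbour])
            if found:
                return found
    return None

def ids(graph, start, goal, max_depth=50):
    """Iterative deepening: rerun depth-limited search with growing limits."""
    for limit in range(max_depth + 1):      # limits 0, 1, ..., max_depth
        result = depth_limited(graph, start, goal, limit, [start])
        if result:
            return result
    return None
```

Shallow levels are re-explored on every iteration, but since the deepest level dominates the cost, the time complexity stays O(b^d) while memory stays O(bd), as stated above.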
Comparative study of all uninformed search strategies

Breadth First Search (BFS)
• Completeness: BFS is complete because it explores all nodes level by level.
If there is a solution, BFS is guaranteed to find it.
• Optimality: BFS is optimal (on unweighted graphs) because it explores all
nodes at the present depth level before moving on to nodes at the next
depth level, so it finds the shortest path to the goal.
• Time complexity: O(V+E), where V is the number of nodes (vertices) and E
is the number of edges. In AI terms, O(b^d).
• Space complexity: O(V), where V is the number of vertices.

Depth First Search (DFS)
• Completeness: DFS is not complete for infinite graphs because it might
follow an infinite branch without finding the solution. For finite graphs,
it is complete if all edges are eventually explored.
• Optimality: DFS is not optimal. It does not guarantee finding the shortest
path to a goal because it explores paths to their maximum depth before
backtracking, which might not yield the shortest path.
• Time complexity: O(V+E); DFS explores all vertices and edges in the worst
case.
• Space complexity: O(V) in the worst case.
Comparative study of all uninformed search strategies – Time Complexity

• Breadth-First Search (BFS): O(V+E) – visits all vertices and edges
• Depth-First Search (DFS): O(V+E) – visits all vertices and edges
• Uniform-Cost Search (UCS): O(V+E) – visits all vertices and edges, with costs
• Depth-Limited Search (DLS): O(b^ℓ) – explores up to depth limit ℓ
• Iterative Deepening Search (IDS): O(b^d) – repeated depth-limited searches
to depth d
• Bidirectional Search: O(b^(d/2)) – searches from both start and goal
Informed Search
Heuristic function
Think of a maze in which you are alone; your aim is to get out as fast as
possible, but how many ways are there? Now imagine you are given a map on
which you can highlight areas that are worth pursuing and ones that are
not. That is exactly the role heuristic functions serve in artificial
intelligence algorithms.
These intelligent instruments help AI systems arrive at better and more
prompt decisions, greatly reducing the complexity of the search.
• A heuristic function estimates the cost or distance between a specific
state and the goal in a search strategy. It provides a way to select the
most promising paths, increasing the likelihood of an effective
solution.
• In other words, a heuristic function gives the algorithm guidance on
which direction to take, helping it reach the goal with fewer steps. By
doing so, it minimizes the search space and improves the efficiency of
the search process.
Heuristic Function: Euclidean Distance
In this example, we use the Euclidean distance as our heuristic. This
heuristic estimates the cost from the current node to the goal node as
the straight-line distance, calculated as:
h(n) = √((x₁ − x₂)² + (y₁ − y₂)²)
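The straight-line estimate can be written as a short helper. This is a minimal sketch assuming each node's (x, y) coordinates are known in a lookup table; the names are illustrative.

```python
import math

def euclidean_h(node, goal, coords):
    """Heuristic h(n): straight-line distance from node to goal,
    sqrt((x1 - x2)^2 + (y1 - y2)^2)."""
    x1, y1 = coords[node]
    x2, y2 = coords[goal]
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
```

For instance, with a node at (0, 0) and the goal at (3, 4), the heuristic value is 5.0; since straight-line distance never overestimates the true travel cost, this heuristic is admissible.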
Role of Heuristic Functions in Artificial Intelligence:
1. Guiding Search Algorithms
Heuristics provide a way to prioritize and evaluate potential solutions,
focusing on those that are more likely to lead to a goal state. This
guidance reduces the need to explore all possible options exhaustively,
making search more efficient.

2. Speeding Up Problem Solving:


Heuristic functions can dramatically speed up problem-solving in AI
applications, especially in domains with large state spaces or complex
decision trees. By quickly assessing the quality of different options,
heuristics allow AI systems to make intelligent choices without
exploring every possibility.
3. Improving Decision-Making:
Heuristics are used in decision-making processes, such as in game-playing AI
or route planning. They help AI agents assess the desirability of different
moves or actions by estimating the potential outcome and expected value of
those choices.
4. Approximation of Cost or Value:
Heuristic functions provide an approximation of the cost or value associated
with a state in a problem space. This allows AI systems to estimate how close
a given state is to a goal state or the expected utility of a decision.
5. Balancing Exploration and Exploitation:
Heuristics assist AI agents in balancing exploration (searching for new
possibilities) and exploitation (choosing actions that appear most promising).
By estimating the value of states, heuristics guide agents to explore less-
known states while exploiting the most promising ones.
Best First Search (Greedy Search)
• Greedy Best-First Search is an AI search algorithm that attempts to
find the most promising path from a given starting point to a goal.
• Here, by the term most promising we mean the path from which the
estimated cost of reaching the destination node is the minimum.
• It prioritizes paths that appear to be the most promising, regardless of
whether or not they are actually the shortest path.
• The algorithm works by evaluating the cost of each possible path and
then expanding the path with the lowest cost. This process is
repeated until the goal is reached.
• Greedy best-first search traverses a graph using a priority queue and a
heuristic. It keeps an ‘OPEN’ list of nodes that still need exploring and a
‘CLOSED’ list of those already checked.
Here’s how it operates:
• Create 2 empty lists: OPEN and CLOSED
• Start from the initial node (say N) and put it in the ‘ordered’ OPEN list
• Repeat the next steps until the GOAL node is reached
• If the OPEN list is empty, then EXIT the loop returning ‘False’
• Select the first/top node (say N) in the OPEN list and move it to the CLOSED list.
Also, capture the information of the parent node
• If N is a GOAL node, then move the node to the Closed list and exit the loop
returning ‘True’. The solution can be found by backtracking the path
• If N is not the GOAL node, expand node N to generate the ‘immediate’ next
nodes linked to node N and add all those to the OPEN list
• Reorder the nodes in the OPEN list in ascending order according to an evaluation
function f(n)
As an example of the best-first search algorithm, consider the graph
below: suppose we must find a path from A to G.
Once the goal node G is reached, the path we follow is
A->C->F->G
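The OPEN/CLOSED procedure above can be sketched in Python. The graph edges and heuristic values below are illustrative assumptions (the slides do not list them), chosen so that the search reproduces the path A->C->F->G:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the OPEN node with the
    lowest heuristic value h(n)."""
    open_list = [(h[start], start)]      # priority queue ordered by h(n)
    parent = {start: None}               # records expansion, enables backtracking
    while open_list:
        _, node = heapq.heappop(open_list)
        if node == goal:                 # goal reached: backtrack the path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nbr in graph[node]:
            if nbr not in parent:
                parent[nbr] = node
                heapq.heappush(open_list, (h[nbr], nbr))
    return None                          # OPEN list empty: no path found

# Hypothetical graph and heuristic values (assumed for illustration)
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E', 'F'],
         'D': [], 'E': [], 'F': ['G'], 'G': []}
h = {'A': 6, 'B': 5, 'C': 4, 'D': 4, 'E': 3, 'F': 1, 'G': 0}
print(greedy_best_first(graph, h, 'A', 'G'))  # → ['A', 'C', 'F', 'G']
```

The priority queue plays the role of the ordered OPEN list, and the `parent` dictionary doubles as the CLOSED record used for backtracking the solution.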
Local Search and Optimization
• Local search algorithms operate by moving from a start state to
neighbouring states, without keeping track of the paths taken or the
set of states that have been reached.
• That means they are not systematic.
• They might never explore a portion of search space where a solution
actually resides.
• However, they have two key advantages
• the use of very little memory
• They can often find reasonable solutions in large or infinite state spaces for
which systematic algorithms are unsuitable
• In the context of local search algorithms, the "local" aspect refers to
the limited scope of the search.
• These algorithms are designed to optimize within a constrained
neighborhood of the current state, as opposed to global optimization
methods that attempt to find the global optimum across the entire
solution space.
Components of Local Search Algorithms
1. Initial State: The initial state, also known as the starting point, is
where the local search begins.
2. Neighbors: Neighbors are solutions that are closely related to the
current state. Neighbors are essential because local search
algorithms focus on refining the current solution by examining
these nearby options.
3. Objective Function: This function quantifies the quality or
desirability of a solution. It assigns a numerical value to each
solution, reflecting how close it is to the optimal solution. The
objective function guides the search process by helping the
algorithm select the most promising neighbors for exploration.
Types of Local search Algorithms
• Hill Climbing Search
• Simulated annealing
• Genetic algorithm
Hill Climbing Search
• Hill climbing belongs to the class of local search algorithms. Unlike
methods like brute force search that explore the entire problem
space, hill climbing focuses the search on a promising local region.
• It keeps track of one current state and on each iteration moves to the
neighbouring state with the highest value. That is, it heads in the
direction that provides the steepest ascent.
• It terminates when it reaches a peak where no neighbour has a higher
value.
• Hill climbing is sometimes called greedy local search because it grabs
a good neighbour state without thinking ahead about where to go
next.
• No backtracking: it does not backtrack through the search space, as it
does not remember previous states.
State-space Diagram for Hill Climbing:
If the function on the Y-axis is a cost function, the goal of the
search is to find the global minimum; if it is an objective function,
the goal is to find the global maximum.
Different regions in the state space landscape
Local Maximum: a state which is better than its neighbor states, but
there is another state in the landscape which is higher than it.
Global Maximum: the best possible state of the state-space landscape;
it has the highest value of the objective function.
Current state: the state in the landscape diagram where the agent is
currently present.
Flat local maximum: a flat region of the landscape where all the
neighbor states of the current state have the same value.
Shoulder: a plateau region which has an uphill edge.
Problems in Hill Climbing Algorithm
1. Local Maximum: It is a state that is better than all its neighbors but not better than
some other states which are far away, (there might be a better solution ahead and this
solution is referred to as the global maximum.)
2. Plateau: A plateau is a flat area of the search space in which all the neighbor
states of the current state have the same value. Because of this, the algorithm
cannot find a best direction to move, and a hill-climbing search might get lost
in the plateau area.
3. Ridges: A ridge is a special form of the local maximum. It has an area which is higher
than its surrounding areas, but itself has a slope, and cannot be reached in a single
move.
Applications of Hill Climbing
Robotics: In robotics, hill climbing can help robots navigate through physical environments,
adjusting their paths to reach a destination.
Network Design: It can be used to optimize network topologies and configurations in
telecommunications and computer networks.
Game Playing: In game playing AI, hill climbing can be employed to develop strategies that
maximize game scores.
Simulated annealing
• Simulated Annealing is often preferred for its ability to find global optimum
solutions, avoiding local optima traps common in other optimization
methods.
• The drawback of hill climbing is that it never makes downhill moves toward
states with a lower value.
• Simulated Annealing in AI is a popular optimization technique inspired by the
annealing process in metallurgy. In metallurgy, annealing involves heating and
then slowly cooling a material to remove defects and improve its properties.
• Similarly, in AI, simulated annealing helps in finding the near best solutions to
complex problems by exploring different solutions and gradually converging to an
optimal or near-optimal solution.
• The method is widely used in combinatorial optimization, where problems often
have numerous local optima
• Simulated Annealing excels in escaping these local minima by introducing
controlled randomness in its search, allowing for a more thorough exploration of
the solution space.
• Simulated annealing is a powerful optimization technique inspired by
the annealing process in metallurgy. It balances exploration and
exploitation by allowing occasional moves to worse solutions with a
probability that decreases over time. This helps in finding the global
optimum in complex search spaces.
Simulated annealing state space diagram
• Simulated annealing improves on hill climbing by introducing two tricks:
• The algorithm picks a random move instead of always picking the best move.
• If the move improves the result, it accepts it; otherwise it accepts the
move with some probability less than 1.
How Simulated Annealing Works
Imagine a technique inspired by the principles of metallurgy, adept at
solving AI’s most intricate optimization puzzles. Simulated Annealing,
with its unique approach, mirrors the heating and cooling process of
metallurgy to navigate through the complexities of AI algorithms.
• Start with High Temperature: The process begins at a high
‘temperature’, setting a broad scope for exploration, akin to
metallurgy’s initial phase of intense heat.
• Exploratory Adjustments and Probabilistic Techniques: At this
stage, SA explores various solutions, making significant leaps to
avoid being trapped in local minima, utilizing a probabilistic
technique for variable value adjustments.
• Gradual Cooling: As the process progresses, the ‘temperature’
lowers gradually, resembling the controlled cooling in metallurgy.
This reduces the extent of search space exploration, focusing more
on refinement.
• Refinement and Finalization: In its final phase, SA hones in on the
most promising solutions, much like the precision required in
metallurgy, ensuring an efficient approach to finding the global
optimum.
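The heating-and-cooling schedule above can be sketched as follows; the toy objective, step function, and cooling parameters are assumptions for illustration:

```python
import math
import random

def simulated_annealing(objective, state, neighbor,
                        t0=10.0, cooling=0.99, t_min=1e-3):
    """Minimise objective(state). A worse move (delta > 0) is accepted
    with probability exp(-delta / T), which shrinks as T cools."""
    t, best = t0, state
    while t > t_min:
        candidate = neighbor(state)              # random move, not best move
        delta = objective(candidate) - objective(state)
        if delta < 0 or random.random() < math.exp(-delta / t):
            state = candidate                    # accept (possibly downhill)
        if objective(state) < objective(best):
            best = state                         # remember best state seen
        t *= cooling                             # gradual cooling
    return best

# Toy problem: minimise f(x) = (x - 2)^2
random.seed(0)
f = lambda x: (x - 2) ** 2
step = lambda x: x + random.uniform(-1, 1)
result = simulated_annealing(f, 0.0, step)
```

Early on, T is high and almost any move is accepted (broad exploration); as T falls, only improving moves survive, so the search refines toward the optimum near x = 2.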
Genetic Algorithm
• A genetic algorithm is a heuristic search algorithm inspired by Charles
Darwin’s theory of natural evolution.
• The algorithm reflects the process of natural selection, where the fittest
individuals are selected for reproduction in order to produce the offspring of
the next generation.
A genetic algorithm is used to solve complicated problems with a greater
number of variables & possible outcomes/solutions. The combinations of
different solutions are passed through the Darwinian based algorithm to find
the best solutions. The poorer solutions are then replaced with the offspring
of good solutions.
The whole process of genetic algorithms is a computer program simulation in
which the attributes of the problem & solution are treated as the attributes
of the Darwinian theory. The basic processes which are involved in genetic
algorithms are as follows:
• A population of solutions is built to any particular problem. The
elements of the population compete with each other to find out the
fittest one.
• The elements of the population that are fit are only allowed to create
offspring (better solutions).
• The genes from the fittest parents (solutions) create a better
offspring. Thus, future solutions will be better and sustainable.
• Use cases of genetic algorithm
• Image processing
• Designing electronic circuit
• Artificial creativity(AI systems can create visual art, design patterns, and even
entire paintings.)
• When faced with complex problems that have many variables and potential
outcomes or solutions, a genetic algorithm is utilized to solve them. To identify
the optimal solutions, various combinations of solutions are run through a
Darwinian-based algorithm. Next, the progeny of excellent solutions takes the
place of the inferior ones.
Here’s how a genetic algorithm works:
• Initialization: A population of potential solutions is randomly generated to represent
the first generation.
• Fitness Evaluation: Each solution in the population is evaluated based on a
predefined fitness function, which measures how well it solves the problem at hand.
• Selection: Solutions are selected for reproduction based on their fitness. The fitter individuals are
more likely to be chosen, simulating the “survival of the fittest.”
• Crossover: Pairs of selected solutions undergo genetic crossover,
exchanging parts of their genetic information to create new offspring.
• Mutation: Some of the new solutions undergo random changes, or
mutations, to introduce genetic diversity.
• Replacement: The new generation, now composed of both parents
and offspring, replaces the previous generation.
• Termination: The algorithm repeats these steps for multiple
generations or until a satisfactory solution is found.
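The steps above can be sketched on the classic "OneMax" toy problem (evolving bit-strings toward all ones). The population size, mutation rate, tournament selection, and purely generational replacement are illustrative assumptions, not prescribed by the slides:

```python
import random

def genetic_algorithm(pop_size=20, genes=10, generations=100, p_mut=0.05):
    fitness = sum                                        # count of 1-bits
    # Initialization: random first generation
    pop = [[random.randint(0, 1) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():                                    # tournament selection
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = random.randint(1, genes - 1)           # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < p_mut else g
                     for g in child]                     # mutation
            nxt.append(child)
        pop = nxt            # replacement (generational, a simplification)
    return max(pop, key=fitness)                         # best of final generation

random.seed(1)
best = genetic_algorithm()
print(sum(best))
```

Over successive generations, selection pressure and crossover concentrate 1-bits in the population, so the fittest final individual is at or near the all-ones optimum.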
Previous year Questions
12 marks
1. What do you mean by uninformed search strategies? Compare BFS
and DFS with examples. Why best fit search is called greedy search?
(2024)
2. What is searching? What are its types? Explain BFS and DFS search
technique with suitable examples, advantages and
disadvantages.(2023)
3. What is uninformed search? List all basic search algorithms and
explain BFS algorithm in detail.(2022)
4. What are the different types of basic search algorithm and explain
depth first search algorithm in detail. (2021)
6 marks
1. Explain heuristic function with best first (greedy) search algorithm
in detail.(2022, 2021)
2. What is heuristic function? Explain hill climbing problem with
illustration. (2018)
3. What is genetic algorithm and explain with example how it works.
(2015)