Problem-Solving Agents

A problem-solving agent is a system that makes decisions to achieve a goal through a structured approach involving goal formulation, problem formulation, and search and execution. It systematically updates its state based on perceptions and follows a 'formulate, search, execute' cycle to reach its objectives, as illustrated through examples like robots navigating rooms and Google Maps. The document also discusses well-defined problems and solutions, including components like initial state, actions, transition models, and path costs, along with examples of toy problems and real-world applications.


PROBLEM-SOLVING AGENTS

• A problem-solving agent is a system that makes decisions to achieve a goal. It follows a structured approach using three key steps:

• Goal Formulation – The agent decides its goal based on the current situation and its performance measure (how success is evaluated).

• Problem Formulation – It determines what actions and states to consider in order to reach the goal.

• Search and Execution – It looks for a sequence of actions (search) that leads to a state of known value, then follows these actions (execution phase).
• This approach is called the "formulate, search,
execute" design.
How It Works:

• The agent starts by formulating a goal and defining the problem it needs to solve.
• It uses a search algorithm to find a solution (a sequence of actions).
• The agent then executes the first action in the sequence and removes it from the list.
• Once the solution is fully executed, the agent formulates a new goal and repeats the process.
Example: A Robot Navigating a Room
• Goal Formulation: The robot’s goal is to reach a
charging station.
• Problem Formulation: It considers different paths
and obstacles.
• Search: It finds the best route to the charging
station.
• Execution: It follows the planned path step by step.
• Once it reaches the charging station, it formulates
a new goal, such as delivering an object.
• This process ensures that the agent systematically
makes the best decisions to achieve its objectives.
Example 1: A Robot in a Maze
• Imagine a robot trying to reach the exit of a
maze.
• Goal formulation: The goal is to find the exit.
• Problem formulation: The robot analyzes the
paths and possible moves.
• Search and Execution: It plans the best route
and follows it step by step until it reaches the
exit.
Example 2: Google Maps
• Goal formulation: You want to go from home
to a restaurant.
• Problem formulation: Google Maps considers
all possible routes (e.g., shortest path, least
traffic).
• Search and Execution: It selects the best route
and gives turn-by-turn directions
Example 3: Chess Game AI
• Goal formulation: The AI wants to win the
game.
• Problem formulation: It considers different
moves and their outcomes.
• Search and Execution: It selects the best move
and plays accordingly.
This function SIMPLE-PROBLEM-SOLVING-AGENT represents the
working of a basic problem-solving agent. It follows the
"formulate, search, execute" cycle.
• Persistent Variables:
• seq: A sequence of actions (initially empty).
• state: The current description of the world.
• goal: The objective the agent wants to achieve
(initially null).
• problem: The problem the agent needs to
solve.
• Updating the State:
• The agent updates its state based on new
perceptions from the environment using
UPDATE-STATE(state, percept).
• If No Actions Are Left (seq is empty):
• The agent formulates a new goal using
FORMULATE-GOAL(state).
• It then defines a problem based on the goal
using FORMULATE-PROBLEM(state, goal).
• The agent searches for a solution (a sequence
of actions) using SEARCH(problem).
• If no solution is found (seq = failure), the agent
returns a null action and does nothing
• Executing the Action Sequence:
• If a solution (seq) is found, the agent chooses
the first action using FIRST(seq).
• The remaining actions are stored using
REST(seq).
• The agent returns the chosen action to
execute it.
• Repeating the Process:
• Once the current action sequence is finished,
the agent formulates a new goal and starts
the process again.
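The "formulate, search, execute" cycle described above can be sketched in Python. This is a hedged sketch, not a fixed API: `update_state`, `formulate_goal`, `formulate_problem`, and `search` are placeholders for whatever routines a concrete domain supplies.

```python
# Minimal sketch of SIMPLE-PROBLEM-SOLVING-AGENT. The four helper functions
# are assumptions standing in for domain-specific routines.

def make_agent(update_state, formulate_goal, formulate_problem, search):
    seq = []            # persistent: remaining action sequence (initially empty)
    state = None        # persistent: current description of the world

    def agent(percept):
        nonlocal seq, state
        state = update_state(state, percept)   # UPDATE-STATE(state, percept)
        if not seq:                            # no actions left: re-plan
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            seq = search(problem) or []        # failure -> empty plan
        if not seq:
            return None                        # null action: do nothing
        action, seq = seq[0], seq[1:]          # FIRST(seq), REST(seq)
        return action

    return agent
```

Each call returns the next action; once the plan is exhausted, the next call formulates a new goal and searches again.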
Example: A Cleaning Robot
• Imagine a robot that cleans a house.
• It observes its surroundings (percept).
• Updates its state (e.g., “The kitchen is dirty”).
• If it has no planned actions, it:
– Formulates a goal (e.g., "Clean the kitchen").
– Defines the problem (e.g., “Find the best way to clean the kitchen”).
– Searches for a solution (e.g., “Go to the kitchen, vacuum the floor, wipe the
table”).
• Executes the actions one by one until the kitchen is clean.
• Once done, it formulates a new goal, like moving to another dirty
room.
• This process ensures that the agent always works towards achieving
its goal
1.1 Well-defined problems and solutions

• A problem in AI is defined using key components, which help an agent find a solution systematically:
• Initial State
• Actions
• Transition Model
• State Space
• Goal Test
• Path Cost Function
Well-Defined Problems and Solutions (Explained with
Romania Map Example)
• 1. Initial State
• This is the starting position of the agent.
• In this example, suppose our agent starts in
Arad, so the initial state is In(Arad).
• 2. Actions
• The agent can perform certain actions in each
state.
• In any given city, the possible actions are moving
to connected cities.
• If the agent is in Arad, the available actions are:
– Go(Sibiu)
– Go(Timisoara)
– Go(Zerind)
• 3. Transition Model
• This model defines what happens when an
action is taken. It describes how states change.
• If the agent is in Arad and takes the action
Go(Zerind), it reaches Zerind.
• Mathematically, this can be written as:
– RESULT(In(Arad), Go(Zerind)) = In(Zerind)
• Each action leads to a new state (successor
state).
• 4. State Space
• The state space includes all possible states the
agent can reach through a sequence of actions.
• The Romania roadmap (Figure 3.2) represents
the state space as a graph.
– Cities = States (Nodes)
– Roads = Actions (Edges)
• A path in the state space is a sequence of actions
connecting cities.
• 5. Goal Test
• This test checks if the agent has reached the
goal.
• Suppose the agent's goal is to reach
Bucharest.
• The goal test checks:
– Is the current state In(Bucharest)?
– If yes, the agent has found a solution.
• 6. Path Cost Function
• The cost function helps the agent find the most efficient
path.
• The cost of a path is the sum of distances between cities.
• In Figure 3.2, each road is labeled with its distance in
kilometers.
– Example: The distance from Arad → Sibiu is 140 km.
– If the agent takes the path Arad → Sibiu → Fagaras →
Bucharest, the total cost is:
• 140 + 99 + 211 = 450 km
• The agent chooses the lowest-cost path to optimize travel.
• Summary
• The agent starts at a city (Initial State).
• It chooses actions (moving to connected cities).
• The transition model defines where it reaches after an
action.
• The state space is the road network.
• It reaches the goal (e.g., Bucharest).
• The best path is selected based on minimum distance.
• This structure helps the agent efficiently solve navigation
problems like finding the shortest route in Romania!
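The six components above can be written down as plain data and functions. This sketch uses only the three road distances quoted in the text (Arad → Sibiu 140, Sibiu → Fagaras 99, Fagaras → Bucharest 211); the rest of the map is omitted for brevity.

```python
# Romania route-finding problem as data. Only the roads quoted in the text
# are included here.
ROADS = {
    ("Arad", "Sibiu"): 140,
    ("Sibiu", "Fagaras"): 99,
    ("Fagaras", "Bucharest"): 211,
}

def result(state, action):
    """Transition model: RESULT(In(city), Go(dest)) = In(dest)."""
    return action

def goal_test(state, goal="Bucharest"):
    """Goal test: is the current state In(Bucharest)?"""
    return state == goal

def path_cost(path):
    """Path cost: sum of road distances along a sequence of cities."""
    return sum(ROADS[(a, b)] for a, b in zip(path, path[1:]))
```

For example, `path_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"])` gives 140 + 99 + 211 = 450.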
1.2 Formulating problems.
• 1. Problem Formulation Components
To solve a problem, we define it using:
Initial state: Where we start (e.g., "In Arad").
Actions: The possible moves (e.g., "Drive to Sibiu").
Transition model: How actions change the state (e.g.,
"Driving to Sibiu puts us in Sibiu").
Goal test: How we check if the goal is reached (e.g., "Are
we in Bucharest?").
Path cost: A measure of how expensive the solution is
(e.g., distance, time, fuel).
• 2. The Role of Abstraction
Since the real world is full of details, we remove
irrelevant information to simplify the problem.
This makes problem-solving tractable for AI.
Instead of tracking every aspect of the journey
(weather, music, road conditions), we only
focus on locations.
Actions are also abstracted: We don't model
every steering movement, just the high-level
action ("Drive to Sibiu").
• 3. Validity and Usefulness of Abstraction
A good abstraction should:

Be valid: Every abstract solution (e.g., "Arad → Sibiu → Rimnicu Vilcea → Bucharest") must have a corresponding real-world execution.

Be useful: The abstract actions must be easy to perform without further complex decision-making (e.g., driving between two cities is straightforward).
2. EXAMPLE PROBLEMS(Toy problems)

Vacuum World in Simple Words


Imagine you have a small robot vacuum cleaner in a two-room
house. The robot's job is to clean both rooms. Here's how the
problem works:

1. The World (States)

The house has two rooms (Left and Right).
Each room can be dirty or clean.
The vacuum can be in either room.
So, there are 8 possible situations (combinations of location and dirt).
2. What the Robot Can Do (Actions)
The robot has three actions:
Move Left (L): If in the right room, move to the
left.
Move Right (R): If in the left room, move to the
right.
Suck (S): Clean the current room if it’s dirty.
3. How Actions Change the World (Transition
Model)
Moving left or right changes the robot’s location
(unless it’s already at the edge).
Sucking removes dirt from the room.
If the robot tries to suck in a clean room,
nothing happens.
4. The Goal
The goal is to clean both rooms.
5. How We Measure the Best Solution (Path
Cost)
Each step (L, R, or S) costs 1.
The fewer steps the robot takes, the better the
solution.
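The vacuum-world formulation above fits in a few lines. In this sketch (an assumption about representation, not the only choice), a state is a pair of the robot's location and the set of dirty rooms, giving the 2 × 4 = 8 states described above.

```python
# Two-room vacuum world. A state is (location, dirt) where location is
# "Left" or "Right" and dirt is a frozenset of dirty rooms.

def vacuum_result(state, action):
    loc, dirt = state
    if action == "L":
        return ("Left", dirt)       # moving left at the left edge has no effect
    if action == "R":
        return ("Right", dirt)      # likewise at the right edge
    if action == "S":
        return (loc, dirt - {loc})  # sucking in a clean room changes nothing
    raise ValueError(action)

def vacuum_goal(state):
    return not state[1]             # goal: no dirty rooms remain

# Suck, move right, suck: three steps (path cost 3) from fully dirty.
state = ("Left", frozenset({"Left", "Right"}))
for a in ["S", "R", "S"]:
    state = vacuum_result(state, a)
```

After the three actions the goal test succeeds, and each of the three steps contributed a cost of 1.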
2.2. 8-puzzle Problem
• 1. The Puzzle Board
The board is a 3×3 grid with 8 numbered tiles
and one blank space.
The tiles can slide into the blank space.
• 2. How the Problem is Defined
States: A state tells us where each tile and the blank space are
located.
Initial State: Any arrangement of tiles can be the starting point.
Actions: You can move the blank Left, Right, Up, or Down, if
possible.
Transition Model: Moving the blank swaps it with the adjacent tile.
Goal Test: The puzzle is solved when the tiles match the goal state
(shown on the right in Figure 3.4).
Path Cost: Each move costs 1 step, so the total cost is the number
of moves taken.
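The actions and transition model above can be sketched directly, assuming (as an illustration choice) that a state is a 9-tuple read row by row with 0 marking the blank.

```python
# 8-puzzle actions and transition model. A state is a tuple of 9 entries;
# moving the blank swaps it with the adjacent tile.

MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def puzzle_actions(state):
    i = state.index(0)               # position of the blank, 0..8
    acts = []
    if i >= 3: acts.append("Up")     # not on the top row
    if i < 6: acts.append("Down")    # not on the bottom row
    if i % 3 > 0: acts.append("Left")
    if i % 3 < 2: acts.append("Right")
    return acts

def puzzle_result(state, action):
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]          # swap blank with the adjacent tile
    return tuple(s)
```

With the blank in the center, all four actions are available; each move costs 1 step.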
• 3. Why is the 8-Puzzle Important?
It helps in testing AI search algorithms (like BFS,
A*, and heuristics).
The puzzle belongs to a group called sliding-
block puzzles, which are hard to solve (NP-
complete).
• 4. Bigger Versions
15-Puzzle (4×4 board): 1.3 trillion states, solved
in milliseconds with good algorithms.
24-Puzzle (5×5 board): 10²⁵ states, takes hours
to solve optimally.
2.3. 8-queens problem
• The 8-Queens Problem is a puzzle where we
need to place 8 queens on a chessboard so
that no queen can attack another.
1. How Do Queens Attack?
• A queen in chess can attack in three ways:
Same row (left or right)
Same column (up or down)
Diagonally (both directions)
So, when placing 8 queens on the board, we must
make sure none of them are in the same row,
column, or diagonal.
Incremental Approach
• Two Ways to Solve the Problem
• 1.Incremental Approach (Adding Queens One
by One)
• Start with an empty board.
• Place one queen at a time in any empty square.
• Repeat until all 8 queens are placed correctly.
• Problem: Too many possible ways (about 1.8 ×
10¹⁴), making it slow.
Complete-State Formulation
• Smarter Approach (Placing Queens More
Efficiently)
• Always place a queen in a column where it
won't be attacked.
• This reduces the number of possibilities to
just 2,057, making it much faster to find a
solution.
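The "no queen attacks another" test behind the complete-state formulation can be sketched as follows: with one queen per column, a candidate is a list giving each column's row, and two queens conflict if they share a row or a diagonal.

```python
# Attack test for n-queens candidates represented as rows[column] = row.

def conflicted(rows):
    n = len(rows)
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            if rows[c1] == rows[c2]:                  # same row
                return True
            if abs(rows[c1] - rows[c2]) == c2 - c1:   # same diagonal
                return True
    return False
```

Representing a candidate as one row per column already rules out same-column attacks, which is part of what shrinks the search space in the smarter formulation.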
2.4. Toy problem devised by Donald Knuth
This problem was created by Donald Knuth in 1964 to show how infinite state spaces can appear. The idea is that starting with 4, we can reach any positive integer by using three mathematical operations:

1. Factorial (!) → (only for whole numbers)
2. Square root (√)
3. Floor function (⌊x⌋) → (rounds down to the nearest whole number)


• Example: Getting 5 from 4
• Start with 4 and apply factorial twice: (4!)! = 24! ≈ 6.2 × 10²³
• Take square roots repeatedly (five of them), bringing the value down to about 5.54
• Apply the floor function to round down to 5
• Knuth suggested that using these three operations, you can reach ANY positive integer starting from 4!
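The chain can be checked directly. Note that the factorial must be applied twice, since √24 ≈ 4.9 is already below 5; it is (4!)! = 24! whose repeated square roots land just above 5.

```python
import math

# Knuth's example: floor of five successive square roots of (4!)! equals 5.
x = math.factorial(math.factorial(4))   # (4!)! = 24!
for _ in range(5):
    x = math.sqrt(x)                    # five square roots: ~5.54
print(math.floor(x))                    # prints 5
```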
2.5. Real-world problems
• Example 1: Booking a Flight (Airline Travel Problem)

• When using a travel website to book a flight, the system solves a problem with these parts:

✅ States – Your current location, time, and details about past flights (e.g., economy/business class, domestic/international).
✅ Initial State – Your starting airport and time (given by the user).
✅ Actions – Choosing a flight (destination, time, seat class).
✅ Transition Model – After taking a flight, you land in a new location at a new time.
✅ Goal Test – Have you reached your final destination?
✅ Path Cost – Factors like ticket price, travel time, layovers, comfort, and frequent flyer miles affect the best choice.


• Example 2: Visiting Multiple Cities (Touring Problem &
TSP)
• Suppose you need to visit all cities in a country and return
home.
• Touring Problem: You must visit every city at least once
but can revisit cities.
• Traveling Salesperson Problem (TSP): You must visit each
city exactly once and find the shortest route.
TSP is very hard (NP-hard), but solving it is important for:
✔ Planning sales trips
✔Optimizing delivery routes
✔Factory automation (e.g., drilling circuit boards)
• Example 3: Robot Navigation
• Robots move differently than planes or cars.
• Instead of fixed roads, they move in
continuous space (any direction).
• If a robot has arms or wheels, movement gets
even more complex!
• Smart algorithms simplify the problem so
robots can navigate efficiently.
3. SEARCHING FOR SOLUTIONS
After defining a problem, we need to find a
solution. A solution is a sequence of actions
that leads from the initial state to the goal
state. Search algorithms help us find the best
sequence by exploring different possibilities.
Search Tree and State Space
• The state space consists of all possible states
and actions.
• A search tree is formed from the state space,
starting from the initial state (root) and
expanding actions (branches) to reach new
states (nodes).The search tree helps visualize
how we move from the initial state to the
goal.
• Example: Route from Arad to Bucharest
• Figure 3.6 shows a search tree where the initial
state is In(Arad).
• First, we check if Arad is the goal state.
• If not, we expand it by applying all possible
actions (moving to connected cities).
• This creates new states: In(Sibiu), In(Timișoara),
and In(Zerind).
• We then choose which state to explore next.
• Tree-Search Algorithm (Figure 3.7)
• Starts from the initial state and maintains a
frontier (list of unexplored states).
• It repeatedly selects and expands a node until
it finds a goal.
• The search strategy determines which node to
expand first.
• TREE-SEARCH Algorithm (Explores all paths without avoiding repeated states)

• function TREE-SEARCH(problem) returns a solution, or failure
→ This function tries to find a solution to the given problem. If it cannot, it returns failure.

• initialize the frontier using the initial state of problem
→ The frontier is the set of nodes waiting to be explored. It starts with the initial state of the problem.

• loop do
→ The algorithm keeps running until it finds a solution or fails.

• if the frontier is empty then return failure
→ If there are no more nodes left to explore, there is no solution, so return failure.

• choose a leaf node and remove it from the frontier
→ Pick one of the nodes from the frontier and remove it (so we can process it).

• if the node contains a goal state then return the corresponding solution
→ If this node is the goal (the solution to the problem), return it as the answer.

• expand the chosen node, adding the resulting nodes to the frontier
→ If the node is not the goal, generate its child nodes and add them to the frontier to be explored later.
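The pseudocode above can be rendered in a few lines of Python. This is a sketch under assumptions: `successors` and `is_goal` stand in for the problem's action and goal-test functions, and the FIFO frontier happens to give breadth-first order.

```python
# Minimal TREE-SEARCH: no bookkeeping of visited states.

def tree_search(initial, successors, is_goal):
    frontier = [(initial, [initial])]     # initialize with the initial state
    while frontier:                       # loop until solution or failure
        state, path = frontier.pop(0)     # choose a leaf node, remove it
        if is_goal(state):                # goal test on the chosen node
            return path                   # the corresponding solution
        for s in successors(state):       # expand, adding resulting nodes
            frontier.append((s, path + [s]))
    return None                           # frontier empty: failure
```

Because nothing is remembered, the same state can re-enter the frontier many times on a graph with cycles.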
• GRAPH-SEARCH Algorithm (Avoids exploring the same state multiple times)

• function GRAPH-SEARCH(problem) returns a solution, or failure
→ This function also tries to find a solution, but it avoids repeating the same states.

• initialize the frontier using the initial state of problem
→ The frontier starts with the initial state of the problem.

• initialize the explored set to be empty
→ Create an explored set, which will keep track of already visited states.

• loop do
→ The algorithm keeps running until it finds a solution or fails.

• if the frontier is empty then return failure
→ If there are no more nodes to explore, return failure.

• choose a leaf node and remove it from the frontier
→ Pick a node from the frontier and remove it.

• if the node contains a goal state then return the corresponding solution
→ If this node is the goal, return it as the answer.

• add the node to the explored set
→ Mark this node as explored to avoid visiting it again.

• expand the chosen node, adding the resulting nodes to the frontier, only if not in the frontier or explored set
→ Important improvement: generate the child nodes, but add them to the frontier only if they have not been explored before and are not already in the frontier. This prevents repeating the same paths.
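GRAPH-SEARCH can be sketched similarly (again assuming problem-supplied `successors` and `is_goal` helpers); the explored set and the frontier-membership check are exactly what keeps each state from being expanded twice.

```python
# GRAPH-SEARCH: TREE-SEARCH plus an explored set and a frontier check.

def graph_search(initial, successors, is_goal):
    frontier = [(initial, [initial])]
    explored = set()                          # states already expanded
    while frontier:
        state, path = frontier.pop(0)
        if is_goal(state):
            return path
        explored.add(state)                   # mark as explored
        in_frontier = {s for s, _ in frontier}
        for s in successors(state):
            if s not in explored and s not in in_frontier:
                frontier.append((s, path + [s]))
    return None
```

On a graph with a cycle (e.g., a road running both ways between two cities), this version terminates where plain tree search would loop.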
• Avoiding Repeated States (Graph-Search
Algorithm)
• Tree-Search does not track visited states, so it
may repeat paths.
• Graph-Search introduces an explored set
(closed list) to remember visited nodes and
avoid redundant exploration.
• Figure 3.8 shows how Graph-Search prevents
revisiting states like Oradea (a dead-end).
• Frontier in State-Space Graph (Figure 3.9)
• The frontier separates explored and
unexplored states.
• Each step moves a state from the frontier to
the explored region and adds new states to
the frontier.
• This systematic process continues until a
solution is found.
3.1. Infrastructure for search algorithms
• Understanding Search for Problem Solving
• When solving a problem, we need to find a sequence of actions that
leads to the goal. Search algorithms explore possible actions step by
step to find the best solution.

• Key Concepts:
• State Space: Represents all possible states and actions.
• Search Tree: Represents the paths we explore from an initial state to
a goal state.

• Search Tree vs. State Space


• State Space: The actual world of possible configurations (states and
actions).
• Search Tree: The way we explore paths to find solutions.
• Example: Finding a Route from Arad to Bucharest
• We start from Arad.
• From Arad, we can go to Sibiu, Timisoara, or
Zerind.
• Each move creates a new state in the search tree.
• The search continues until we reach the goal
(Bucharest).
• Expanding a node means checking all possible
next moves.
• Search Algorithms
• Tree Search: Expands states but does not
remember visited states.
• Graph Search: Keeps track of visited states to
avoid redundant paths.
• Graph search is more efficient because it
prevents loops and redundant paths.
Nodes in Search Trees

Each node in a search tree has:

1. State (current situation)
2. Parent (previous node)
3. Action (how we reached this node)
4. Path Cost (cost to reach this node)

• Example: In a puzzle, if you move a tile right, the new node stores the updated puzzle configuration.
• Managing the Search Process
• Search algorithms use a queue to track which
nodes to explore next:
• Insert(): Add a new node to the queue.
• Pop(): Remove a node for expansion.
• Empty()?: Check if the queue is empty.
• The strategy for picking nodes from the queue
defines different search algorithms (e.g., breadth-
first search, depth-first search, A* search).
• This diagram explains how nodes are structured in a search tree.

• A node represents a state in the problem space (e.g., a tile puzzle state in the left image).

• Each node has:
– STATE – The current state of the problem (e.g., the tile arrangement in
the puzzle).
– PARENT – The node that generated this node (helps track the path).
– ACTION – The move taken to reach this state (e.g., moving a tile
Right).
– PATH-COST – The total cost of reaching this node (e.g., 6 in the image).

• The arrows in the search tree show how nodes are linked: each
child node points back to its parent, forming a structure that
allows path tracking.
Take input → A problem, a parent node, and an action.
Find the new state → Apply the action to the parent's state.
Set the parent → Store the given parent node.
Record the action → Store the action used to reach this node.
Calculate the path cost → Add the step cost to the parent's path cost.
Return the new node → The child node is created with the updated values.
This function explains how new nodes are created during a search.
Steps of the Function:
The STATE of the child is determined by applying an action to the parent’s
state.

Example: If the parent is at a tile arrangement and the action is move right, the
new state reflects that move.

The PARENT of the new node is set as the parent node.


The ACTION taken to reach this node is recorded.
The PATH-COST is updated:

g(n)=g(parent)+step_cost
The step cost is the cost of moving from the parent state to the new state.

This function ensures that each new node contains all necessary
information for tracking the search process.
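The CHILD-NODE construction described above can be sketched with nodes as plain dictionaries; `result` and `step_cost` are assumed to be supplied by the problem, as in the pseudocode.

```python
# CHILD-NODE: each node records STATE, PARENT, ACTION, and PATH-COST.

def child_node(problem, parent, action):
    state = problem["result"](parent["state"], action)  # apply the action
    return {
        "state": state,
        "parent": parent,          # back-link enables path tracking
        "action": action,          # how this node was reached
        "path_cost": parent["path_cost"]                # g(n) = g(parent)
                     + problem["step_cost"](parent["state"], action),  # + step cost
    }
```

Following the `parent` links from a goal node back to the root recovers the solution path.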
3.2. Measuring problem-solving performance

Before selecting a search algorithm, we must evaluate how well it performs using four key criteria:
1. Completeness
Definition: An algorithm is complete if it always finds a solution
whenever one exists.
Example:
Breadth-First Search (BFS) is complete because it explores all
possible paths systematically.
Depth-First Search (DFS) may fail in infinite state spaces if it
keeps going deeper indefinitely.
2. Optimality
Definition: An algorithm is optimal if it always
finds the best (shortest/cheapest) solution.
Example:
BFS is optimal if all steps have equal cost.
Uniform Cost Search (UCS) is always optimal, even
with different step costs.
DFS is not necessarily optimal because it may find a
longer path before the shortest one.
3.Time Complexity
Definition: Time complexity refers to how long the
algorithm takes to find a solution.
Measured by: The number of nodes generated
during the search.
Example:
BFS explores all nodes at each level before going deeper
→ Time Complexity: O(b^d)
DFS may go deep quickly but can get stuck → Time
Complexity: O(b^m)
• 4. Space Complexity
• Definition: Space complexity refers to how much
memory the algorithm needs to store nodes.
• Measured by: The maximum number of nodes
stored in memory at any point.
• Example:
– BFS keeps all nodes in memory at each level → Space
Complexity: O(b^d) (can be huge!).
– DFS stores only the current path → Space Complexity:
O(bm) (much better for deep searches).
• Measuring Complexity in Graph Search
• The size of a search problem depends on its
state space graph:
– |V| = number of nodes (states).
– |E| = number of edges (links between states).
– Total graph size = |V| + |E|.
• Three Key Quantities for Complexity Analysis
• b = Branching Factor
– The maximum number of successors (children) each node can have.
– Higher b → More nodes generated → More time and space needed.
• d = Depth of the Shallowest Goal Node
– The shortest path from the initial state to the goal.
– If d is small, BFS and UCS (Uniform Cost Search) work well.
– If d is large, DFS might be more efficient (in space).
• m = Maximum Path Length
– The longest possible path in the state space.
– If m is infinite, DFS may never finish searching.
4. UNINFORMED SEARCH STRATEGIES

• Uninformed search strategies do not use any extra information beyond what is given in the problem itself. These strategies can only generate new possible states and check whether they have reached the goal.
1. Breadth-First Search (BFS)
How It Works:
BFS starts at the root (starting point) and
explores all its direct children (successors)
first.
Then, it moves to the next level and expands all
those nodes before going even deeper.
It continues expanding level by level until it finds
the goal.
Steps in BFS:
Start from the root node.
Expand all its direct children.
Move to the next level and expand all those
nodes.
Repeat until you find the goal.
Key Concept:
BFS always expands the shallowest (least deep) node first
before moving deeper.
This is done using a FIFO (First-In, First-Out) queue, meaning
the first node added will be the first to be expanded.
When a new node is created, it is added at the back of the
queue, while older nodes are removed first and expanded.
The goal test (checking if we reached the solution) happens as
soon as we generate a node, not when we expand it.
• Complexity Analysis
• Time Complexity
– If the branching factor (number of children per node) is b, and the solution is at depth d, the worst-case time complexity is O(b^d).
– This means that if the search tree grows too deep, the time required becomes very large.
• Space Complexity
– BFS stores all nodes at the current level before moving deeper.
– This requires a lot of memory, making BFS memory-intensive.
• Is BFS Always the Best?
• ✔ Advantages:
• BFS always finds a solution if one exists.
• If multiple solutions exist, BFS finds the shortest
solution (the one with the fewest steps).
• ✖ Disadvantages:
• Requires a lot of memory because it keeps track of all
nodes at a level before moving deeper.
• If the solution is far from the root, BFS takes a long
time to reach it.
Figure 3.12 (Binary Tree) shows BFS
working level-by-level.
What is Happening in the Image?
The diagram shows a binary tree, where BFS explores level by level.
The dark-colored nodes are those that have already been expanded.
The triangular marker (▶) indicates the next node to be expanded.
Step-by-Step Explanation:
Start at the Root (A): BFS begins from node A and marks it as expanded.
Expand Level 1: Nodes B and C (children of A) are added to the queue. B is chosen first
for expansion.
Expand Level 2: The algorithm moves to B, expanding its children D and E.
Move to C: Now, C is expanded, and its children F and G are added.
Continue Level by Level: The process continues until all nodes are expanded or the
goal is found.
Key Takeaway:
BFS explores all nodes at a given depth before moving to the next level.
It uses a FIFO queue, ensuring shallow nodes are expanded first.
Figure 3.11 (Algorithm) explains how BFS systematically expands
nodes using a queue.
Step-by-Step Explanation
• Start with the initial node
• Create a node with the initial state.
• If this node is already the goal, return the solution
immediately.
• Set up the search structure
• Use a FIFO (First In, First Out) queue (frontier) to store nodes
that need to be explored.
• Use an explored set to keep track of visited nodes.
• Loop until a solution is found or all possibilities are checked
• If the queue (frontier) is empty, return failure (no solution
found).
• Remove a node from the queue (this is the shallowest node).
• Add this node to the explored set (to avoid visiting it again)..
• Expand the current node
• Get all possible actions for the current state.
• Generate a child node for each possible action.
• If the child node is not already explored or in the queue,
check:
– If it is the goal, return the solution.
– Otherwise, add it to the queue to explore later.
• Repeat until a solution is found or the queue is empty
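The steps above can be sketched in Python, including the detail that the goal test runs when a child is generated rather than when it is expanded (a hedged sketch; `successors` and `is_goal` again stand in for problem-supplied functions).

```python
from collections import deque

# BFS in the style of Figure 3.11: FIFO frontier, explored set, and a
# goal test at generation time.

def breadth_first_search(initial, successors, is_goal):
    if is_goal(initial):
        return [initial]                       # initial state may be the goal
    frontier = deque([(initial, [initial])])   # FIFO queue
    explored = set()
    while frontier:
        state, path = frontier.popleft()       # shallowest node first
        explored.add(state)
        for s in successors(state):
            if s in explored or any(s == f for f, _ in frontier):
                continue                       # skip already-seen states
            if is_goal(s):                     # test when the node is generated
                return path + [s]
            frontier.append((s, path + [s]))
    return None                                # failure
```

Testing at generation time saves expanding one whole extra level compared with testing at expansion time.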
4.2 Depth-first search:
• How DFS Works
• Start from the root node.
• Move to the deepest possible node (keep
expanding the last visited node).
• If a node has no children (successors), backtrack to
the previous node and explore other unexplored
paths.
• Continue this process until the goal node (M in the
figure) is found or all possibilities are exhausted.
• Key Features
• Memory Efficient: DFS only stores nodes along the
current path, unlike BFS (which stores all nodes at a
level).
• Uses a Stack (LIFO): The last explored node is
processed first, ensuring deep exploration before
backtracking.
• Time Complexity: O(b^m), where b is the branching factor and m is the maximum depth.
Advantages
• Requires less memory since it doesn’t store all
nodes.
• Can be faster than BFS if the goal node is on a
deep path.
Disadvantages
• May get stuck in an infinite loop if cycles exist.
• Does not guarantee the shortest path to the
goal.
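DFS with an explicit LIFO stack can be sketched as follows; the path-membership check is one simple way to sidestep the infinite-loop problem just mentioned (an addition for the sketch, not part of plain DFS).

```python
# DFS with an explicit LIFO stack; only nodes along current paths are stored.

def depth_first_search(initial, successors, is_goal):
    stack = [(initial, [initial])]             # LIFO: last in, first out
    while stack:
        state, path = stack.pop()              # deepest node first
        if is_goal(state):
            return path
        for s in reversed(successors(state)):  # reversed keeps left-to-right order
            if s not in path:                  # avoid cycling along this path
                stack.append((s, path + [s]))
    return None
```

Note the result is whatever deep path is found first, not necessarily the shortest one.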
4.3. Iterative deepening depth-first search:
• Iterative Deepening Search (IDS) is a combination of Depth-
First Search (DFS) and Breadth-First Search (BFS). It starts with
a depth limit of 0 and gradually increases the limit (1, 2, 3, etc.)
until it finds the goal node.

• How IDS Works


• Start with Depth 0: Check only the root node.
• Increase Depth Limit: Expand nodes up to depth 1, then depth
2, and so on.
• Repeat DFS for Each Depth: Apply Depth-Limited Search (DLS)
at each level.
• Stop when Goal is Found: The search ends when the goal node
is reached.
• Understanding Figure 3.18 (IDS Algorithm)
• The algorithm runs Depth-Limited Search
(DLS) at increasing depths.
• If the search finds the goal, it returns the
result.
• If it reaches a cutoff (depth limit) without
finding the goal, it expands further in the next
iteration.
• Understanding Figure 3.19 (IDS Example on a Binary
Tree)
• This figure shows how IDS searches a binary tree with
increasing depth limits:
• Limit = 0: Only the root node (A) is checked.
• Limit = 1: Expands A’s children (B, C).
• Limit = 2: Expands B and C’s children (D, E, F, G).
• Limit = 3: Expands further, covering all nodes at depth 3.
• The goal is found in four iterations, showing how IDS
gradually expands the search while using memory
efficiently.
• Key Features of IDS
• Combines DFS and BFS: Like DFS, it uses minimal
memory. Like BFS, it finds the shortest path.
• Memory Requirement: O(bd), where b is the branching factor and d is the depth of the shallowest goal.
• Time Complexity: O(b^d) (same order as BFS).
• Completeness: Always finds a solution if one exists
(when branching is finite).
• Optimality: Ensures the shortest path when cost
increases with depth.
• Advantages of IDS
• ✔Fast search like BFS.
✔ Memory-efficient like DFS.
✔ Finds optimal solution if the cost function is
uniform.
• Disadvantages of IDS
• ❌ Nodes are generated multiple times, increasing
computational effort.
❌ Not efficient for extremely large graphs due to
repeated work.
IDS is a smart way to balance between DFS (low
memory, risk of getting stuck) and BFS (high
memory, guaranteed shortest path). It
gradually expands the depth, ensuring that
the search is efficient and finds the best
solution.
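IDS can be sketched as repeated depth-limited DFS with limits 0, 1, 2, …; in this hedged sketch `successors` and `is_goal` are assumed helpers and `max_depth` is an arbitrary safety bound.

```python
# Iterative deepening: depth-limited DFS at increasing limits.

def dls(state, successors, is_goal, limit):
    if is_goal(state):
        return [state]
    if limit == 0:
        return None                          # cannot go deeper at this limit
    for s in successors(state):
        result = dls(s, successors, is_goal, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening_search(initial, successors, is_goal, max_depth=50):
    for limit in range(max_depth + 1):       # limit = 0, then 1, 2, ...
        result = dls(initial, successors, is_goal, limit)
        if result is not None:
            return result
    return None
```

Shallow levels are re-generated on every iteration, which is the "nodes generated multiple times" overhead noted above; it is modest because the deepest level dominates the node count.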
4.4. Depth-limited search

Depth-Limited Search (DLS) is a variation of Depth-First Search (DFS) where we set a maximum depth limit (l) to avoid infinite loops in large or infinite graphs.
• How DLS Works
• Set a Depth Limit (l): Nodes beyond this depth are treated
as if they have no children.
• Apply DFS Up to the Limit: The search explores deeper
nodes but stops at the set limit.
• Handle Two Cases of Failure:
– If the goal is not found, return failure.
– If the search hits the depth limit without finding the goal,
return cutoff (indicating the goal might be deeper).
• DLS can be Recursive: The function calls itself while
reducing the depth limit.
• Understanding Figure 3.17 (Recursive DLS Algorithm)
• DEPTH-LIMITED-SEARCH(problem, limit) → Starts the search
with a depth limit.
• RECURSIVE-DLS(node, problem, limit) → Expands nodes
recursively until the limit is reached.
• Goal Check (problem.GOAL-TEST(node.STATE)) → If the node is
the goal, return the solution.
• If limit = 0, return cutoff (meaning the goal may be deeper).
• For Each Child Node:
– Call RECURSIVE-DLS with the limit reduced by one.
– If the child’s result is cutoff, record that a cutoff occurred.
– If it is failure, continue exploring the remaining children.
• Final Step: If any cutoff occurred, return cutoff; otherwise return failure.
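The walkthrough above maps almost line-for-line onto code. Below is a sketch in the spirit of Figure 3.17, under the assumption that a problem object exposes `initial`, `goal_test(state)`, and `successors(state)`; these names, and the tiny `TreeProblem` used to exercise it, are ours.

```python
CUTOFF, FAILURE = "cutoff", "failure"

def depth_limited_search(problem, limit):
    return recursive_dls(problem.initial, problem, limit, [problem.initial])

def recursive_dls(state, problem, limit, path):
    if problem.goal_test(state):
        return path                          # solution found
    if limit == 0:
        return CUTOFF                        # the goal may lie deeper
    cutoff_occurred = False
    for child in problem.successors(state):
        result = recursive_dls(child, problem, limit - 1, path + [child])
        if result == CUTOFF:
            cutoff_occurred = True           # remember a cutoff happened
        elif result != FAILURE:
            return result                    # propagate the solution up
    return CUTOFF if cutoff_occurred else FAILURE

class TreeProblem:
    """Minimal problem wrapper around an explicit tree (illustrative only)."""
    def __init__(self, tree, initial, goal):
        self.tree, self.initial, self.goal = tree, initial, goal
    def goal_test(self, s):
        return s == self.goal
    def successors(self, s):
        return self.tree.get(s, [])

p = TreeProblem({"A": ["B", "C"], "B": ["D"], "C": []}, "A", "D")
print(depth_limited_search(p, 1))  # 'cutoff' — D sits at depth 2
print(depth_limited_search(p, 2))  # ['A', 'B', 'D']
```

The cutoff/failure distinction is what makes this reusable by IDS: cutoff means "try again with a larger limit," while failure means "no solution exists at any depth."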
• Key Features of DLS
• Solves the infinite-path problem of DFS.
• Time complexity: O(b^l) (where b is the branching
factor, l is the depth limit).
• Space complexity: O(bl) (linear, like DFS).
• DFS is a special case of DLS with l = ∞ (no limit).
• DLS can be implemented recursively, as shown in
Figure 3.17.
• Advantages of DLS
• ✔ Prevents infinite loops in large or infinite
graphs.
✔ Uses less memory compared to BFS.
• Disadvantages of DLS
• ❌ Incomplete if the depth limit (l) is smaller than
the depth of the shallowest goal.
❌ Non-optimal if the depth limit (l) is too large: a
deeper solution may be returned first, and extra
work is done.
• Depth-Limited Search is useful when we need
to limit search depth to avoid infinite loops.
However, choosing the right depth limit is
crucial—too small and we miss the goal, too
large and we waste resources.
Uniform-Cost Search (UCS)

• Uniform-Cost Search (UCS) is a graph search
algorithm that expands the node with the
lowest path cost g(n). It is similar to Breadth-
First Search (BFS), but instead of expanding
nodes in order of depth, it expands them in
order of least path cost from the start node.
Key Features of UCS
✅ Uses a priority queue (ordered by path cost).
✅ Expands the least-cost node first.
✅ Checks for shorter paths and updates if a
better one is found.
✅ Finds the optimal path if all costs are
positive.
Example: Romania Map (Figure 3.15)
Goal: Find the shortest path from Sibiu to Bucharest
1.Start at Sibiu
Successors: Rimnicu Vilcea (cost = 80), Fagaras (cost = 99)
Priority queue: [Rimnicu Vilcea (80), Fagaras (99)]
2.Expand the least-cost node: Rimnicu Vilcea (80)
New successor: Pitesti (cost = 80 + 97 = 177)
Updated priority queue: [Fagaras (99), Pitesti (177)]
3.Expand the least-cost node: Fagaras (99)
New successor: Bucharest (cost = 99 + 211 = 310)
Updated priority queue: [Pitesti (177), Bucharest (310)]
• 4.Expand the least-cost node: Pitesti (177)
• New path to Bucharest: cost = 80 + 97 + 101 =
278
• Since 278 is less than 310, replace the
previous path
• Updated priority queue: [Bucharest (278)]
• 5. Expand Bucharest (278) → Goal reached! 🎯
• Final Solution Path:
• ✅ Sibiu → Rimnicu Vilcea → Pitesti →
Bucharest (Total Cost = 278)
Differences from BFS
1. UCS applies the goal test only when a node
is expanded, not when it is generated.
2. UCS replaces a path in the frontier if a
cheaper path to the same state is found.
3. The priority queue orders nodes by path
cost instead of depth.
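The worked example above can be replayed in code. Here is a minimal UCS sketch with a `heapq` priority queue, using only the edge costs quoted in the example (the graph encoding and function name are ours):

```python
import heapq

# Romania fragment from the example: Sibiu-Rimnicu Vilcea 80,
# Sibiu-Fagaras 99, Rimnicu Vilcea-Pitesti 97,
# Pitesti-Bucharest 101, Fagaras-Bucharest 211.
graph = {
    "Sibiu": [("Rimnicu Vilcea", 80), ("Fagaras", 99)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Fagaras": [("Bucharest", 211)],
    "Pitesti": [("Bucharest", 101)],
    "Bucharest": [],
}

def ucs(graph, start, goal):
    frontier = [(0, start, [start])]      # priority queue keyed on g(n)
    best = {start: 0}                     # cheapest known cost per state
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:                 # goal test on expansion, not generation
            return cost, path
        for nxt, step in graph[state]:
            new_cost = cost + step
            if nxt not in best or new_cost < best[nxt]:
                best[nxt] = new_cost      # cheaper path replaces the old one
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None

print(ucs(graph, "Sibiu", "Bucharest"))
# (278, ['Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Tracing it reproduces the hand-run above: Bucharest is first reached at cost 310 via Fagaras, then replaced by the cheaper 278 path via Pitesti before it is ever expanded.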
Bidirectional Search
• What is Bidirectional Search?
• It runs two searches simultaneously:
– Forward search from the start node
– Backward search from the goal node
• The search stops when both searches meet in
the middle.
• It reduces the number of explored nodes,
making it faster than BFS or DFS alone.
• Why is This Efficient?
• Normal BFS explores O(b^d) nodes.
• Bidirectional Search explores O(b^(d/2)) nodes
in each direction, and b^(d/2) + b^(d/2) is far
smaller than b^d, making it much faster!
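A minimal sketch of bidirectional BFS on an undirected graph, expanding the smaller frontier one layer at a time and stopping when the two searches meet (the function name and example graph are ours; this returns the shortest path length rather than the path itself):

```python
from collections import deque

def bidirectional_bfs(graph, start, goal):
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}          # distances from each end
    front_f, front_b = deque([start]), deque([goal])
    while front_f and front_b:
        # Expand whichever frontier is currently smaller.
        if len(front_f) <= len(front_b):
            frontier, dist, other = front_f, dist_f, dist_b
        else:
            frontier, dist, other = front_b, dist_b, dist_f
        for _ in range(len(frontier)):              # one full BFS layer
            node = frontier.popleft()
            for nbr in graph[node]:
                if nbr in other:                    # the two searches meet
                    return dist[node] + 1 + other[nbr]
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    frontier.append(nbr)
    return None                                     # no path exists

# Chain A-B-C-D-E: shortest path from A to E has length 4.
g = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
     "D": ["C", "E"], "E": ["D"]}
print(bidirectional_bfs(g, "A", "E"))  # 4
```

Each side only ever explores to roughly depth d/2 before meeting, which is where the O(b^(d/2)) bound above comes from.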