Module 2 Chapter 3

Chapter 3

Solving Problems by Searching


A reflex agent is simple
⚫ it bases its actions on
⚫ a direct mapping from states to actions

⚫ but it cannot work well in environments

⚫ where this mapping would be too large to store
⚫ and would take too long to learn

Hence, a goal-based agent is used


Problem-solving agent
⚫ A kind of goal-based agent
⚫ It solves a problem by
⚫ finding sequences of actions that lead to
desirable states (goals)
⚫ To solve a problem,
⚫ the first step is goal formulation, based on
the current situation
Goal formulation
The goal is formulated
⚫ as a set of world states, in which the goal is
satisfied
Reaching from initial state → goal state
⚫ requires actions
Actions are the operators
⚫ causing transitions between world states
⚫ Actions should be kept at a suitable level of
abstraction, rather than made very detailed
⚫ E.g., "turn left" vs. "turn left 30 degrees", etc.
Problem formulation
The process of deciding
⚫ what actions and states to consider
E.g., driving Amman → Zarqa
⚫ in-between states and actions are defined
⚫ States: some places in Amman & Zarqa
⚫ Actions: turn left, turn right, go straight,
accelerate & brake, etc.
Search
Because there are many ways to achieve
the same goal
⚫ those ways are together expressed as a tree
⚫ with multiple options of unknown value at a point
⚫ The agent can examine the different possible
sequences of actions, and choose the best one
⚫ This process of looking for the best sequence
is called search
⚫ The best sequence is then a list of actions,
called the solution
Search algorithm
Defined as
⚫ taking a problem
⚫ and returning a solution

Once a solution is found

⚫ the agent follows the solution
⚫ and carries out the list of actions –
the execution phase
Design of an agent
⚫ "Formulate, search, execute"
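The "formulate, search, execute" loop can be sketched as follows; the three callables are hypothetical placeholders for whatever goal/problem formulation and search procedure a particular agent uses:

```python
def simple_problem_solving_agent(state, formulate_goal, formulate_problem, search):
    """Sketch of the "formulate, search, execute" design.

    Returns the solution (a list of actions) found by `search`,
    or None if no sequence of actions reaches the goal.
    The caller then executes the actions one by one.
    """
    goal = formulate_goal(state)              # 1. formulate the goal
    problem = formulate_problem(state, goal)  #    ... and the problem
    return search(problem)                    # 2. search (3. execute is up to the caller)
```

As a usage example, a toy "search" that only knows one route:

```python
actions = simple_problem_solving_agent(
    "Ajlun",
    lambda s: "Amman",                        # goal formulation
    lambda s, g: (s, g),                      # problem = (initial, goal)
    lambda p: ["Jarash", "Amman"] if p == ("Ajlun", "Amman") else None,
)
```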
Well-defined problems and solutions
A problem is defined by 5 components:
⚫ Initial state
⚫ Actions
⚫ Transition model
(successor function)
⚫ Goal test
⚫ Path cost
Well-defined problems and solutions
A problem is defined by 5 components:
⚫ The initial state
⚫ that the agent starts in
⚫ The set of possible actions
⚫ Transition model: a description of what each action
does
⚫ (successor function): returns the states reachable from a
given state by a single action
⚫ The initial state, actions and transition model define the
state space
⚫ the set of all states reachable from the initial state by any
sequence of actions
⚫ A path in the state space:
⚫ any sequence of states connected by a sequence of actions
Well-defined problems and solutions
The goal test
⚫ applied to the current state to test
⚫ whether the agent has reached its goal
⚫ Sometimes there is an explicit set of possible goal states
⚫ (example: being in Amman)
⚫ Sometimes the goal is described by a property
⚫ instead of stating explicitly the set of states
⚫ Example: Chess
⚫ the agent wins if it can capture the opponent's KING on the
next move (checkmate),
⚫ no matter what the opponent does
Well-defined problems and solutions
A path cost function
⚫ assigns a numeric cost to each path
⚫ = the performance measure
⚫ denoted by g
⚫ used to distinguish the best path from the others

Usually the path cost is

⚫ the sum of the step costs of the individual
actions (in the action list)
Well-defined problems and solutions
Together a problem is defined by
⚫ Initial state
⚫ Actions
⚫ Successor function
⚫ Goal test
⚫ Path cost function
The solution of a problem is then
⚫ a path from the initial state to a state satisfying the goal
test
Optimal solution
⚫ the solution with lowest path cost among all solutions
Formulating problems
Besides the five components for problem
formulation
⚫ anything else?
Abstraction
⚫ the process of taking out irrelevant information
⚫ leaving only the parts essential to the description of the
states
⚫ (removing detail from the representation)
⚫ Conclusion: only the most important parts, those that
contribute to searching, are used
Problem-Solving Agents
agents whose task is to solve a particular
problem (steps)
⚫ goal formulation
⚫ what is the goal state
⚫ what are important characteristics of the goal state
⚫ how does the agent know that it has reached the goal
⚫ are there several possible goal states
⚫ are they equal or are some more preferable
⚫ problem formulation
⚫ what are the possible states of the world relevant for solving
the problem
⚫ what information is accessible to the agent
⚫ how can the agent progress from state to state
Example
[Map figure: road network of Jordanian cities – Ramtha, Irbed, Ajlun,
Mafraq, Jarash, Zarqa, Salat, Azraq, Amman, Madaba, Karak, Aqaba –
with road distances in km marked on the edges]
From the Example
1. Formulate Goal

- Be in Amman

2. Formulate Problem

- States: cities
- Actions: drive between cities

3. Find Solution

- Sequence of cities: Ajlun – Jarash – Amman


Our Example

1. Problem: go from Ajlun to Amman

2. Initial State: Ajlun

3. Operator: go from one city to another

4. State Space: {Jarash, Salat, Irbed, …}

5. Goal Test: is the agent in Amman?

6. Path Cost Function: get the cost from the map

7. Solutions: { {Aj → Ja → Ir → Ma → Za → Am}, {Aj → Ir → Ma → Za → Am}, …,

{Aj → Ja → Am} }
8. State Set Space: {Ajlun → Jarash → Amman}
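The formulation above maps directly onto code. A minimal sketch, with the road network stored as a plain adjacency dict and the solution found by breadth-first search over paths; the distances below are illustrative placeholders, not values read off the map figure:

```python
from collections import deque

# Illustrative road graph: city -> {neighbor: distance in km}.
# The edge set and distances are placeholder assumptions for this sketch.
roads = {
    "Ajlun":  {"Jarash": 35, "Irbed": 25},
    "Jarash": {"Ajlun": 35, "Amman": 45},
    "Irbed":  {"Ajlun": 25, "Zarqa": 75},
    "Zarqa":  {"Irbed": 75, "Amman": 15},
    "Amman":  {"Jarash": 45, "Zarqa": 15},
}

def find_route(start, goal):
    """Breadth-first search over the city graph; returns a list of cities."""
    frontier = deque([[start]])   # each frontier entry is a whole path
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in roads[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None
```

With these placeholder roads, `find_route("Ajlun", "Amman")` returns the shortest-in-steps route Ajlun – Jarash – Amman, matching the solution listed above.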
Example: Romania
On holiday in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest

Formulate goal:
⚫ be in Bucharest

Formulate problem:
⚫ states: various cities
⚫ actions: drive between cities

Find solution:
⚫ sequence of cities, e.g., Arad, Sibiu, Fagaras,
Bucharest
Single-state problem formulation
A problem is defined by four items:
1. initial state, e.g., "at Arad"
2. actions or successor function S(x) = set of action–state pairs
⚫ e.g., S(Arad) = {<Arad → Zerind, Zerind>, …}

3. goal test, which can be

⚫ explicit, e.g., x = "at Bucharest"
⚫ implicit, e.g., Checkmate(x)

4. path cost (additive)

⚫ e.g., sum of distances, number of actions executed, etc.
⚫ c(x,a,y) is the step cost, assumed to be ≥ 0

A solution is a sequence of actions leading from the initial state

to a goal state
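These components can be packaged as a simple data structure; this is a sketch, not any particular library's API. In the usage example, the 140 km step cost is the Arad–Sibiu road distance from the book's Romania map:

```python
class Problem:
    """A search problem: initial state, actions, transition model
    (result), goal test, and step cost."""

    def __init__(self, initial, actions, result, goal_test, step_cost):
        self.initial = initial
        self.actions = actions        # state -> iterable of actions
        self.result = result          # (state, action) -> successor state
        self.goal_test = goal_test    # state -> bool
        self.step_cost = step_cost    # (state, action, successor) -> cost >= 0
```

A tiny one-step instance of the Romania problem:

```python
p = Problem(
    initial="Arad",
    actions=lambda s: ["Arad->Sibiu"] if s == "Arad" else [],
    result=lambda s, a: "Sibiu" if a == "Arad->Sibiu" else s,
    goal_test=lambda s: s == "Sibiu",
    step_cost=lambda s, a, t: 140,    # Arad-Sibiu road distance
)
```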
Example problems
Toy problems
⚫ those intended to illustrate or exercise
various problem-solving methods
⚫ E.g., puzzle, chess, etc.

Real-world problems
⚫ tend to be more difficult and whose
solutions people actually care about
⚫ E.g., Design, planning, etc.
Toy problem (1)
Example: vacuum world
Number of states: 8
Initial state: Any
Number of actions: 4
⚫ left, right, suck,
noOp
Goal: clean up all dirt
⚫ Goal states: {7, 8}

Path Cost:
⚫ Each step costs 1
Toy problem (1)
Vacuum world
⚫ States: 8 = 2 * 2^2, or in general n * 2^n
⚫ the first n is the number of agent locations; each of the
n locations is either clean or dirty (2^n dirt configurations)
⚫ Initial state: any
⚫ Actions: Left, Right, Suck
⚫ Transition model – shown in the next slide
⚫ Goal test: whether all squares are clean
⚫ Path cost: each step costs 1
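The n·2^n state count and the transition model can be checked directly; a minimal sketch in which a state is a pair (agent position, tuple of dirt flags):

```python
from itertools import product

def vacuum_states(n):
    """All states of an n-location vacuum world: n positions x 2^n dirt patterns."""
    return [(pos, dirt) for pos in range(n)
            for dirt in product((False, True), repeat=n)]

def result(state, action):
    """Transition model for Left / Right / Suck (anything else is a NoOp)."""
    pos, dirt = state
    if action == "Left":
        return (max(pos - 1, 0), dirt)              # bumping the wall is allowed
    if action == "Right":
        return (min(pos + 1, len(dirt) - 1), dirt)
    if action == "Suck":
        return (pos, dirt[:pos] + (False,) + dirt[pos + 1:])
    return state
```

For n = 2 this gives the 8 states quoted on the slide, and sucking in a dirty square cleans exactly that square.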
The 8-puzzle
States:
⚫ a state description specifies the location of each of
the eight tiles and the blank in one of the nine squares
⚫ Number of states: 9!/2 = 181,440 reachable states
(half of all 9! configurations are reachable)
Initial State:
⚫ Any state in state space
Actions
⚫ the blank moves Left, Right, Up, or Down
Goal test:
⚫ current state matches the goal configuration
Path cost:
⚫ each step costs 1, so the path cost is just the length
of the path
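The actions and transition model above can be sketched with the board stored as a 9-tuple read row by row, and the blank encoded as 0 (a common convention, assumed here):

```python
def puzzle_actions(state):
    """Legal moves of the blank (the 0) in a 3x3 board stored as a 9-tuple."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    return moves

def puzzle_result(state, action):
    """Transition model: swap the blank with the adjacent tile."""
    i = state.index(0)
    delta = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}[action]
    j = i + delta
    board = list(state)
    board[i], board[j] = board[j], board[i]
    return tuple(board)
```

With the blank in the centre all four moves are legal; in a corner only two are, which is where the branching factor of roughly 3 quoted for the 8-puzzle comes from.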
The 8-queens
There are two ways to formulate the
problem
Both share the following:
⚫ Goal test: 8 queens on the board, none attacking
each other
⚫ Path cost: zero
The 8-queens
(1) Incremental formulation
⚫ Initial state: no queens on the board
⚫ States: any arrangement of 0 to 8 queens on the
board
⚫ Actions: each action adds a queen to the state
⚫ Transition model: returns the board with a
queen added to the specified square
⚫ Goal test: 8 queens on the board, none
attacked
⚫ 64 * 63 * 62 * … * 57 ≈ 1.8 * 10^14
possible sequences to explore
The 8-queens
(2) Complete-state formulation
⚫ Initial state: starts with all 8 queens on the
board
⚫ States: any arrangement of 8 queens, one per column in
the leftmost columns; the queens are then moved
individually around
⚫ Actions: add a queen to any square in the
leftmost empty column such that it is not
attacked by any other queen
⚫ This reduces the 8-queens search space from 1.8 *
10^14 to just 2,057
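One way to see that the column-by-column, no-attacks formulation is manageable is to enumerate it. This backtracking sketch counts the complete non-attacking placements reached that way (the well-known answer for n = 8 is 92 solutions):

```python
def queens_solutions(n=8):
    """Count complete placements: one queen per column, each new queen
    placed only on a square not attacked by those already on the board."""
    def extend(cols):                 # cols[c] = row of the queen in column c
        col = len(cols)
        if col == n:
            return 1                  # all n queens placed without attacks
        total = 0
        for row in range(n):
            # safe iff no shared row and no shared diagonal with earlier queens
            if all(row != r and abs(row - r) != col - c
                   for c, r in enumerate(cols)):
                total += extend(cols + [row])
        return total
    return extend([])
```

Because attacked squares are pruned immediately, the search never touches the vast majority of the 1.8 × 10^14 unconstrained sequences.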
The 8-queens
Conclusion:
⚫ the right formulation makes a big difference
to the size of the search space
Real-world problems
Route finding
Touring and traveling salesperson problem
VLSI layout
Robot navigation
Assembly sequencing
Protein design

3.3 Searching for solutions
Finding a solution is done by
⚫ searching through the state space
All problems are transformed
⚫ into a search tree
⚫ generated by the initial state and the
successor function
Search tree
Initial state
⚫ The root of the search tree is a search node
Expanding
⚫ applying the successor function to the current state
⚫ thereby generating a new set of states

Leaf nodes
⚫ the states having no successors
Fringe: the set of search nodes that have not been
expanded yet
Refer to the next figure
Tree search example
Search tree
The essence of searching
⚫ in case the first choice is not correct:
⚫ choose one option and keep the others for later
inspection
Hence we have the search strategy
⚫ which determines the choice of which state to
expand
⚫ good choice → less work → faster

Important:
⚫ state space ≠ search tree
Search tree
State space
⚫ has unique states {A, B}
⚫ while a search tree may have cyclic paths:
A-B-A-B-A-B- …
A good search strategy should avoid
such paths
Search tree
A node has five components:
⚫ STATE: which state it is in the state space
⚫ PARENT-NODE: from which node it is generated

⚫ ACTION: which action applied to its parent-node


to generate it
⚫ PATH-COST: the cost, g(n), from initial state to
the node n itself
⚫ DEPTH: number of steps along the path from the
initial state
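The five components map directly onto a node data structure; a minimal sketch:

```python
class Node:
    """A search-tree node with the five components listed above."""

    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state            # STATE in the state space
        self.parent = parent          # PARENT-NODE that generated this node
        self.action = action          # ACTION applied to the parent
        self.path_cost = path_cost    # PATH-COST g(n) from the initial state
        self.depth = 0 if parent is None else parent.depth + 1  # DEPTH

    def path(self):
        """States from the initial state down to this node."""
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))
```

Once a goal node is found, following the parent pointers (as `path()` does) recovers the solution.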
Measuring problem-solving performance
The evaluation of a search strategy
⚫ Completeness:
⚫ is the strategy guaranteed to find a solution when
there is one?
⚫ Optimality:
⚫ does the strategy find the highest-quality solution
when there are several different solutions?
⚫ Time complexity:
⚫ how long does it take to find a solution?
⚫ Space complexity:
⚫ how much memory is needed to perform the search?
Measuring problem-solving performance
In AI, complexity is expressed in
⚫ b, branching factor, maximum number of
successors of any node
⚫ d, the depth of the shallowest goal node.
(depth of the least-cost solution)
⚫ m, the maximum length of any path in the state
space
Time and space are measured in
⚫ the number of nodes generated during the search
⚫ the maximum number of nodes stored in memory
Measuring problem-solving performance

For the effectiveness of a search algorithm

⚫ we can consider the total cost
⚫ total cost = path cost (g) of the solution
found + search cost
⚫ search cost = the time necessary to find the solution
Tradeoff:
⚫ (long time, optimal solution with least g)
⚫ vs. (shorter time, solution with slightly larger
path cost g)
3.4 Uninformed search strategies
Uninformed search
⚫ no information about the number of steps
⚫ or the path cost from the current state to
the goal
⚫ search the state space blindly

Informed search, or heuristic search


⚫ a cleverer strategy that searches toward
the goal,
⚫ based on the information from the current
state so far
Uninformed search strategies
Breadth-first search
⚫ Uniform cost search
Depth-first search
⚫ Depth-limited search
⚫ Iterative deepening search

Bidirectional search
Breadth-first search
The root node is expanded first (FIFO)
All the nodes generated by the root
node are then expanded
And then their successors and so on
Breadth-first search
[Figure: breadth-first expansion of a search tree rooted at S with
children A and D; the numbers below the deeper nodes are path costs]
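The level-by-level expansion can be reproduced with a FIFO fringe; a minimal generic sketch, applying the goal test when a node is generated:

```python
from collections import deque

def breadth_first_search(initial, successors, is_goal):
    """Generic BFS: the fringe is a FIFO queue, so shallow nodes
    are expanded before deeper ones. Returns a list of states."""
    if is_goal(initial):
        return [initial]
    fringe = deque([(initial, [initial])])   # (state, path to it)
    explored = {initial}
    while fringe:
        state, path = fringe.popleft()       # FIFO: oldest (shallowest) first
        for s in successors(state):
            if s in explored:
                continue
            if is_goal(s):                   # goal test on generation
                return path + [s]
            explored.add(s)
            fringe.append((s, path + [s]))
    return None
```

On a small example tree with goal G four levels deep, the search visits every node of each level before moving to the next.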
Breadth-first search (Analysis)
Breadth-first search
⚫ Complete – find the solution eventually
⚫ Optimal, if step cost is 1

The disadvantage
⚫ if the branching factor of a node is large,
⚫ the space complexity and the time complexity
are enormous
Properties of breadth-first search
Complete? Yes (if b is finite)

Time? 1 + b + b^2 + b^3 + … + b^d = O(b^d) (worst case)

If the goal test is applied when a node is selected for expansion,
rather than when it is generated, this becomes O(b^(d+1))

Space? O(b^(d+1)) (keeps every node in memory)

Optimal? Yes (if cost = 1 per step)

Space is the bigger problem (more than time)


Breadth-first search (Analysis)
[Table: time and memory requirements of breadth-first search by depth,
assuming 10,000 nodes processed per second and 1,000 bytes of storage
per node]
Uniform cost search – Sibiu to Bucharest
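The slides step through uniform-cost search from Sibiu to Bucharest; a minimal sketch using a priority queue ordered by path cost g, with the road distances taken from the book's Romania map. It finds the cheaper route via Rimnicu Vilcea and Pitesti (80 + 97 + 101 = 278 km) instead of the shorter-looking hop via Fagaras (99 + 211 = 310 km):

```python
import heapq

# Road distances (km) from the book's Romania map, Sibiu-to-Bucharest fragment.
graph = {
    "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Bucharest", 211)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Pitesti": [("Bucharest", 101)],
    "Bucharest": [],
}

def uniform_cost_search(start, goal):
    """Expand the fringe node with the lowest path cost g;
    the goal test is applied on expansion, so the result is optimal."""
    frontier = [(0, start, [start])]           # (g, state, path)
    best = {}                                  # cheapest g seen per state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if state in best and best[state] <= g:
            continue                           # already expanded more cheaply
        best[state] = g
        for nxt, cost in graph[state]:
            heapq.heappush(frontier, (g + cost, nxt, path + [nxt]))
    return None
```

Note that Bucharest is first *generated* via Fagaras at cost 310, but it is not *expanded* until the cheaper 278 km entry reaches the front of the queue; testing on expansion is what makes uniform-cost search optimal.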
Depth-first search
Always expands one of the nodes at the
deepest level of the tree
Only when the search hits a dead end does it
⚫ go back and expand nodes at shallower levels
⚫ Dead end → a leaf node that is not the goal
Backtracking search
⚫ only one successor is generated on expansion
⚫ rather than all successors
⚫ uses less memory
Depth-first search
Expand deepest unexpanded node
Implementation:
⚫ fringe = LIFO queue (i.e., a stack): put successors at the front
Depth-first search
[Figure: depth-first expansion of the same search tree rooted at S;
the search follows one branch down to its deepest node before
backtracking to shallower levels]
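The same expansion can be driven by a LIFO fringe; a minimal sketch that also avoids the cyclic paths (A-B-A-B-…) mentioned earlier by never revisiting a state already on the current path:

```python
def depth_first_search(start, successors, is_goal):
    """Generic DFS: the fringe is a LIFO stack, so the deepest
    node is always expanded next. Returns a list of states."""
    fringe = [[start]]                       # each entry is a whole path
    while fringe:
        path = fringe.pop()                  # LIFO: deepest node first
        state = path[-1]
        if is_goal(state):
            return path
        for s in reversed(successors(state)):
            if s not in path:                # avoid looping back along this path
                fringe.append(path + [s])
    return None
```

The `reversed` call makes successors come off the stack in their original left-to-right order, matching the usual slide animation; only the current path (plus its siblings on the stack) is held in memory, which is where the linear space bound comes from.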
Depth-first search (Analysis)
Not complete
⚫ because a path may be infinite or looping
⚫ then that path never fails, and the search never goes
back to try another option
Not optimal
⚫ it does not guarantee the best solution
It overcomes
⚫ the space complexity problem of breadth-first search
Properties of depth-first search
Complete? No: fails in infinite-depth spaces and
spaces with loops
→ complete in finite spaces

Time? O(b^m): terrible if m is much larger than d

⚫ but if solutions are dense, it may be much faster
than breadth-first search

Space? O(bm), i.e., linear space!

Optimal? No
