Chapter 3: Uninformed Search

Chapter 3 discusses problem-solving in Artificial Intelligence (AI) through search algorithms, emphasizing the importance of efficiently finding solutions to various tasks. It outlines the process of problem formulation, including goal formulation and the use of heuristics to limit search space, as well as the distinction between uninformed and informed search strategies. The chapter also highlights the components of well-defined problems and the significance of search performance metrics such as completeness, time complexity, and optimality.


Chapter 3- Part 1

Problem Solving - Search Algorithms


Uninformed Search
What is a problem in Artificial Intelligence?
Understanding problems in AI is essential because AI agents must solve
problems efficiently to reach a goal.

2
Imagine a World Without Search Algorithms

 "Imagine you’re using Google, but instead of getting relevant


search results instantly, it shows you millions of random
pages—most of them completely unrelated to your query.
How useful would Google be?"
 The truth is, Search is at the heart of AI.
From finding the fastest route on Google Maps to making
medical diagnoses, AI must search for the best possible
solution among many choices.

3
Why Is Problem Solving Important in AI?

 AI exists to solve problems—whether it’s recognizing speech, driving


a car, or playing chess.
 But before an AI system can act, it needs to:
Understand the problem.
Define possible solutions.
Choose the best one efficiently.
 Real-Life Examples:
A self-driving car needs to find the safest and shortest path to
its destination.
A game-playing AI must evaluate millions of possible moves
before making the best choice.
A robot vacuum must determine the most efficient way to
clean a room.
 "In all of these cases, AI must ‘search’ for the best solution."

4
Solving Problems by Searching

 Since AI agents do not always know the correct answer immediately, they must
search for a solution among possible alternatives.
 Problems are solved by searching among alternative choices
 A reflex agent is simple
◼ it bases its actions on a direct mapping from states to actions, but this cannot work
well in environments:
 in which this mapping would be too large to store
 or would take too long to learn
 Hence, a goal-based agent is used in solving problems.

5
Which Agent Type?

6
Tic Tac Toe

7
How do human beings think?

 "Human beings do not search the entire state space (exhaustive search)."
• Exhaustive search means checking every possible option before making a decision.
• Humans don’t do this because it would take too much time and effort.
• Instead, humans use experience and intuition to focus on relevant choices rather than
checking all possibilities.
 Example:
• Imagine you are searching for a book in a library.
• Exhaustive search: Checking every single book one by one.
• Human thinking: Going directly to the category or section where the book is likely
to be.

8
Humans Explore Only Useful Alternatives

 "Only alternatives that experience has shown to be effective


are explored."
• People rely on past experiences to make better decisions.
• Instead of trying everything, we focus on what has worked
before.
 Example:
• When solving a maze, instead of testing every possible path, a
person might look for patterns or remember previous
successful paths.

9
Humans Use Heuristics (Judgmental Rules)

 "Human problem solving is based on judgmental rules that


limit the exploration of search space to those portions of
state space that seem somehow promising."
• Humans filter out unlikely choices using experience and
reasoning.
• These judgmental rules help reduce unnecessary effort.
 Example:
• When looking for a lost item at home, you don’t check the
ceiling because you know it’s unlikely to be there. Instead,
you check places where you usually put it.

10
What Are Heuristics?

 "These judgmental rules are known as heuristics."


• Heuristics are mental shortcuts that help humans make
decisions quickly.
• AI systems also use heuristics to make search algorithms more
efficient.
 Example of Heuristics in AI:
• Google Maps: Instead of checking all possible roads, it
prioritizes highways and main roads to find the fastest route.
• Chess AI: Instead of checking all possible moves, it focuses on
moves that give strategic advantages.

11
Problems in AI

12
Problem-solving agent

 A problem is really a collection of information that the agent


will use to decide what to do.
 A problem-solving agent (a kind of goal-based agent) decides
what to do by finding sequences of actions that lead to
desirable states.
 A problem-solving agent focuses on the following main points to solve
a problem:
1. The first step is goal formulation, based on the current situation
2. The next step is problem formulation: the process of deciding what
actions and states to consider, given the goal
3. The final step is finding a solution and then executing it.

13
Problem-solving agent terminologies

14
Goal formulation (modeling)

 This is the first phase of agent design


 The goal is formulated as a set of world states, in which the goal is
satisfied(reached)
 To reach from initial state → goal state, Actions are required.
 Actions are the operators that cause transitions between world states.
(for example driving to reach one city)

15
Problem formulation

 Second phase of agent design


 The process of deciding what actions and states to consider.
 E.g., driving Yanbu → Dammam
◼ States: Some places between Yanbu & Dammam
◼ Actions: Turn left, Turn right, go straight, accelerate & brake, etc.

16
Search & Execution
 Third phase of agent design
 The state space of the problem is the set of all states reachable from initial state by any
sequence of actions.
 The states can be represented as a directed graph where the nodes are the states and links
between nodes are actions.
 Because there are many ways to achieve the same goal
◼ those ways together are expressed as a tree
◼ at a point with multiple options of unknown value,
 the agent can examine the different possible sequences of actions and choose the best
◼ This process of looking for the best sequence is called search
◼ The best sequence is then a list of actions, called the solution

17
Search algorithm

 "Defined as taking a problem and returning a solution."


• A search algorithm is a step-by-step process that AI uses to find solutions to problems.
• It works by exploring different possibilities to find the best path or action.
 Example:
• A GPS navigation system finds the best route from point A to point B by searching among multiple
paths.
 "Once a solution is found:"
• The agent follows the solution by executing actions step by step.
The AI follows three key phases:
1. Formulation (Define the Problem)
1. Identify initial state, goal state, actions, and constraints.
2. Example: A self-driving car needs to define its starting point, destination, and available
roads.
2. Search (Find the Best Solution)
1. Explore different possible solutions.
2. Use uninformed search (BFS, DFS, UCS) or informed search (A*, Greedy Search).
3. Example: The self-driving car evaluates multiple routes to find the fastest one.
3. Execution (Carry Out the Actions)
1. Follow the computed solution in the real world.
2. Example: The car drives step by step using the selected route.
18
1-Well-defined problems and solutions
 A problem is defined by 4 components: initial state, successor function, goal
test, and path cost.
1. The initial state
 that the agent starts in
2. The set of possible actions (successor functions)
 The state space, is the set of all states reachable from the initial
state
❑ A path in the state space: any sequence of actions leading from one
state to another
3. The goal test, applied to the current state to test whether the agent has reached its goal.
◼ Sometimes the goal is described by properties instead of stating
explicitly the set of states
◼ Example: chess, where the agent wins if it can capture the opponent's KING
on the next move, no matter what the opponent does

19
Well-defined problems and solutions cont.

4. A path cost function assigns a numeric cost to each path.
◼ It reflects the agent's performance measure
◼ denoted by g
◼ Used to distinguish the best path from others.
Usually the path cost is the sum of the step costs of the
individual actions (in the action list)
◼ The solution of a problem is then a path from the initial state to
a state satisfying the goal test
◼ Optimal solution is the solution with lowest path cost among all
solutions
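The four components can be collected into a minimal Python structure. This is a sketch; the class and field names are illustrative, not from the chapter:

```python
# A minimal sketch of a well-defined problem: initial state,
# successor function, goal test, and path cost (uniform step costs).
class Problem:
    def __init__(self, initial, successors, goal_states, step_cost=1):
        self.initial = initial          # the state the agent starts in
        self.successors = successors    # state -> list of (action, next_state)
        self.goal_states = goal_states  # explicit set of goal states
        self.step_cost = step_cost      # cost of each individual action

    def goal_test(self, state):
        return state in self.goal_states

    def path_cost(self, path):
        # g = sum of the step costs of the individual actions on the path
        return self.step_cost * (len(path) - 1)

# Tiny illustrative instance: states A..D on a line, goal D.
succ = {"A": [("right", "B")], "B": [("right", "C")],
        "C": [("right", "D")], "D": []}
p = Problem("A", succ, {"D"})
```

An optimal solution is then simply a goal-satisfying path whose `path_cost` is lowest among all solutions.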

20
Example:

21
Romania’s problem formulation:
A problem is defined by four items:

1. initial state e.g., "at Arad"


2. actions or successor function S(x) = set of action–state
pairs
◼ e.g., S(Arad) = {<Arad → Zerind>, <Arad → Sibiu>, … }
3. goal test, can be
◼ explicit, e.g., x = “at Bucharest”, or “checkmate” in chess
◼ implicit, e.g., NoDirt(x)
4. path cost (additive)
◼ e.g., sum of distances, number of actions executed, etc.
◼ c(x,a,y) is the step cost, assumed to be ≥ 0
 A solution is a sequence of actions leading from the initial
state to a goal state

22
The successor function

 Successor function: for a given state, returns a set of


action/new-state pairs.

 Vacuum-cleaner world: (A, dirty, clean) → (’Left’, (A, dirty,
clean)), (’Right’, (B, dirty, clean)), (’Suck’, (A, clean, clean)),
(’NoOp’, (A, dirty, clean))

 Romania: In(Arad) → {(Go(Timisoara), In(Timisoara)),
(Go(Sibiu), In(Sibiu)), (Go(Zerind), In(Zerind))}

23
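The vacuum-world successor function can be sketched in Python. The state encoding `(location, status_of_A, status_of_B)` is an assumption made for illustration:

```python
# Successor function sketch for the vacuum-cleaner world.
# A state is (agent_location, status_of_square_A, status_of_square_B).
def successors(state):
    loc, a, b = state
    result = [("NoOp", state)]                 # do nothing
    result.append(("Left", ("A", a, b)))       # moving left ends in A
    result.append(("Right", ("B", a, b)))      # moving right ends in B
    if loc == "A":
        result.append(("Suck", ("A", "clean", b)))
    else:
        result.append(("Suck", ("B", a, "clean")))
    return result
```

Calling `successors(("A", "dirty", "clean"))` yields the four action/new-state pairs listed on the slide.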
Abstraction

 Besides the four components for problem formulation


◼ anything else?
 Abstraction
◼ the process of removing irrelevant information, leaving
only the parts essential to describing the states
❑ Conclusion: only the most important parts, those
contributing to the search, are used

24
Example of problems

 Toy problems
◼ those intended to illustrate or exercise various problem-solving
methods
◼ E.g., puzzle, chess, etc.
 Real-world problems
◼ tend to be more difficult and whose solutions people actually
care about
◼ E.g., Design, planning, etc.

25
Toy problems: Example vacuum world

Number of states: 8
Initial state: Any
Number of actions: 4
⚫ left, right, suck,
noOp
Goal: clean up all dirt
⚫ Goal states: {7, 8}

⚫ Path Cost:
⚫ Each step costs 1

26
27
The 8-puzzle

28
The 8-puzzle

 States:
◼ a state description specifies the location of each of the
eight tiles and blank in one of the nine squares
 Initial State:
◼ Any state in state space
 Successor function:
◼ the blank moves Left, Right, Up, or Down
 Goal test:
◼ current state matches the goal configuration
 Path cost:
◼ each step costs 1, so the path cost is just the length of
the path
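The 8-puzzle successor function described above can be sketched as follows; encoding a state as a 9-tuple with 0 for the blank is an assumption for illustration:

```python
# Successor function sketch for the 8-puzzle: a state is a tuple of 9
# entries read row by row, with 0 marking the blank square.
def puzzle_successors(state):
    i = state.index(0)                  # position of the blank
    row, col = divmod(i, 3)
    moves = []
    for action, dr, dc in (("Up", -1, 0), ("Down", 1, 0),
                           ("Left", 0, -1), ("Right", 0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:   # blank must stay on the board
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]     # slide the neighbouring tile
            moves.append((action, tuple(s)))
    return moves
```

A blank in a corner has two successors, on an edge three, and in the centre four, which is why the effective branching factor of the 8-puzzle is below 4.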
29
A portion of the state space of an 8-puzzle
problem (the blank is shown as _):

Root:     5 _ 4
          6 1 8
          7 3 2

Moving the blank Left, Right, or Down yields the successors:

_ 5 4     5 4 _     5 1 4
6 1 8     6 1 8     6 _ 8
7 3 2     7 3 2     7 3 2
30
2- Searching for solutions

 After defining and formulating the problem well,


we should search for a solution.
 Finding a solution is done by searching
through the state space.
 All problems are transformed (using a search
strategy) into a search tree generated by the initial
state and successor function.

31
Search tree
 Initial state
◼ The root of the search tree is a search node
 Expanding
◼ applying successor function to the current state, thereby
generating a new set of states
 leaf nodes
◼ states that have no successors, or that have not yet been
expanded (the fringe)
 Refer to next figure

32
Tree search example

33
Tree search example

34
Search tree

 The essence of searching, in case the first choice is not
correct, is:
◼ choosing one option and keeping the others for later inspection
 Hence we have the search strategy, which determines the
choice of which state to expand
◼ good choice → less work → faster
 Important:
◼ state space ≠ search tree
 State space has unique states {A, B}
 while a search tree may have cyclic paths: A-B-A-B-A-B- …
 A good search strategy should avoid such paths.

35
2.1 Infrastructure for search algorithms

 A node has five components


◼ STATE: which state it is in the state space

◼ PARENT-NODE: from which node it is generated

◼ ACTION: which action applied to its parent-node to generate it

◼ PATH-COST: the cost, g(n), from initial state to the node n itself

◼ DEPTH: number of steps along the path from the initial state
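The five node components can be sketched as a small class; the `solution` helper, which walks the PARENT-NODE links back to the root, is an added convenience, not part of the slide:

```python
# A node carries: STATE, PARENT-NODE, ACTION, PATH-COST g(n), and DEPTH.
class Node:
    def __init__(self, state, parent=None, action=None, step_cost=1):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = 0 if parent is None else parent.path_cost + step_cost
        self.depth = 0 if parent is None else parent.depth + 1

    def solution(self):
        # Recover the action list by following parent links to the root.
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

root = Node("A")
child = Node("B", parent=root, action="go-B")
grand = Node("C", parent=child, action="go-C")
```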

36
Implementation: states vs. nodes
A state is a (representation of) a physical configuration
A node is a data structure constituting part of a search tree
includes state, parent node, action, path cost g(x), depth

Figure 3.10

◼ The Expand function creates new nodes, filling in the various
fields and using the SuccessorFn of the problem to create the
corresponding states.
2.2 Measuring problem-solving performance

 Completeness
◼ Guarantees finding a solution whenever one exists
 Time Complexity
◼ How long (worst or average case) does it take to find
a solution? Usually measured in terms of the number
of nodes expanded
 Space Complexity
◼ How much space is used by the algorithm? Usually
measured in terms of the maximum size that the
“OPEN" list becomes during the search
 Optimality/Admissibility
◼ If a solution is found, is it guaranteed to be an
optimal one, for example the one with minimum
cost?
Measuring problem-solving performance cont.

 In AI, complexity is expressed in


◼ b, the branching factor: the maximum number of successors of any
node
◼ d, the depth of the shallowest goal node (the least-cost solution)
◼ m, the maximum length of any path in the state space (i.e., the
maximum depth of the state space); may be infinite.
 Time and Space is measured in
◼ number of nodes generated during the search
◼ maximum number of nodes stored in memory

39
Search strategies

 Uninformed (blind) search


◼ no information about the number of steps
◼ or the path cost from the current state to the goal
◼ search the state space blindly
 Informed Search
◼ a cleverer strategy that searches toward the goal,
◼ based on the information from the current state so far
 Adversarial Search (Game Theory)

40
Uninformed search strategies

 Uninformed search strategies use only the information


available in the problem definition
 Breadth-first search
◼ Uniform cost search
 Depth-first search
◼ Depth-limited search
◼ Iterative deepening search
 Bidirectional search

41
42
Breadth-first search:

 Move downwards, level by level, until the goal is reached.
[Figure: a search tree whose nodes are expanded level by level]
It explores the space in a level-by-level fashion.


43
Breadth-first search

 BFS is complete: if a solution exists, one will be found


 Expand shallowest unexpanded node
 Implementation:
◼ fringe is a FIFO queue, i.e., new successors go at end

Queue(fringe): A
Expanded:0
Level: 0

44
Breadth-first search (trace)

 Expanding the shallowest node first (new successors go at the end of the FIFO queue):
Queue: B C — Expanded: A
Queue: C D E — Expanded: A B
Queue: D E F G — Expanded: A B C
… and so on, level by level.

47
Breadth-first search (Analysis)

 Breadth-first search
◼ Complete – find the solution eventually
◼ Optimal, if the path cost is a non-decreasing function of the
depth of the node
 The disadvantage
◼ if the branching factor of a node is large,
◼ even for small instances (e.g., chess)
 the space complexity and the time complexity are enormous

48
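A minimal BFS sketch with a FIFO fringe, matching the slides above; the example graph and helper names are illustrative:

```python
from collections import deque

# Breadth-first search sketch: the fringe is a FIFO queue, so the
# shallowest unexpanded node is always expanded next.
def bfs(initial, successors, goal_test):
    fringe = deque([(initial, [initial])])   # (state, path so far)
    visited = {initial}
    while fringe:
        state, path = fringe.popleft()       # shallowest node first
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                fringe.append((nxt, path + [nxt]))  # successors go at the end
    return None

# Illustrative tree: A's children are B and C, and so on.
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": [], "E": [], "F": [], "G": []}
```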
49
Uniform cost search

 Breadth-first search finds the shallowest goal state

◼ but that is not necessarily the least-cost solution
◼ it is optimal only if all step costs are equal
 Uniform cost search
◼ modifies breadth-first strategy
 by always expanding the lowest-cost node
◼ The lowest-cost node is measured by the path cost g(n)
◼ UCS follows the cheapest path without knowing where the goal is.
50
Uniform cost search

 The first solution found is guaranteed to be the cheapest

◼ any solution found later costs at least as much
◼ but this requires path costs to be non-decreasing along a path
◼ unsuitable for operators with negative cost

51
Example: Finding the Shortest Path in a Graph

 Imagine we have the following weighted


graph where nodes are cities and edges
represent distances in km:
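The weighted graph from the figure is not reproduced here, so the sketch below uses a small hypothetical road graph (names and distances made up) to show how uniform-cost search always expands the node with the lowest path cost g(n):

```python
import heapq

# Uniform-cost search sketch: the fringe is a priority queue ordered
# by path cost g(n); the cheapest node is always expanded next.
def ucs(initial, successors, goal_test):
    fringe = [(0, initial, [initial])]        # (g, state, path)
    best = {initial: 0}                       # cheapest known g per state
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if goal_test(state):
            return g, path
        for nxt, cost in successors(state):
            ng = g + cost
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(fringe, (ng, nxt, path + [nxt]))
    return None

# Hypothetical weighted graph: edges as (neighbour, cost) pairs.
roads = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
```

Note that the direct edge A→C costs 5, yet UCS returns the two-step route A→B→C with cost 2, because it follows the cheapest path without knowing where the goal is.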

52
Depth-first search

 Always expands one of the nodes at the deepest level of the tree
 Only when the search hits a dead end
◼ goes back and expands nodes at shallower levels
◼ Dead end → leaf nodes but not the goal
 Backtracking search
◼ only one successor is generated on expansion
◼ rather than all successors
◼ uses less memory

53
DFS Example:
Graph Traversal

54
Depth-first search

 Expand deepest unexpanded node


 Implementation:
◼ fringe = LIFO queue (Stack), i.e., put successors at
front

55
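A minimal DFS sketch with a LIFO fringe (stack), mirroring the slide above; the example graph is illustrative:

```python
# Depth-first search sketch: the fringe is a LIFO stack, so the deepest
# unexpanded node is always expanded next.
def dfs(initial, successors, goal_test):
    fringe = [(initial, [initial])]     # (state, path so far)
    visited = set()
    while fringe:
        state, path = fringe.pop()      # deepest node first
        if goal_test(state):
            return path
        if state in visited:
            continue
        visited.add(state)
        # Push successors in reverse so the leftmost child is expanded first.
        for nxt in reversed(successors(state)):
            if nxt not in visited:
                fringe.append((nxt, path + [nxt]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
```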
Depth-first search (Analysis)

 Not complete
◼ because a path may be infinite or contain a loop
◼ in which case the search never fails back to try another option
 Not optimal
◼ it does not guarantee the best solution
 It overcomes
◼ the severe space demands of breadth-first search: only the
current path and its unexpanded siblings must be stored

68
DFS vs BFS

69
Depth-limited search

 It is depth-first search
◼ with a predefined maximum depth limit
◼ However, it is usually not easy to choose a suitable
maximum depth
◼ too small → no solution can be found
◼ too large → the same problems as plain depth-first search
 With a suitable limit, the search is
◼ complete
◼ but still not optimal

70
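Depth-limited search can be sketched as a recursive depth-first search that refuses to expand nodes below a fixed limit (names are illustrative):

```python
# Depth-limited search sketch: DFS that stops at a fixed depth limit.
# Returns the path to a goal, or None if none exists within the limit.
def dls(state, successors, goal_test, limit, path=None):
    path = path or [state]
    if goal_test(state):
        return path
    if limit == 0:
        return None                    # depth limit reached: cut off
    for nxt in successors(state):
        if nxt not in path:            # avoid trivial loops on this path
            found = dls(nxt, successors, goal_test, limit - 1, path + [nxt])
            if found:
                return found
    return None

chain = {"A": ["B"], "B": ["C"], "C": []}
```

With limit 2 the goal C (at depth 2) is found; with limit 1 it is cut off, illustrating the "too small → no solution" failure mode.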
Example: (BFS and DFS)
Initial state: A
Level 1: B C D E F
Level 2: G H I J K L M N O P   (goal state: L)
Level 3: Q R S T U V W X Y Z
71
[Animation: breadth-first search of the example tree. Each node is removed from the FRONT of the queue and its revealed children are added to the END, so nodes are expanded level by level: A, then B, C, D, E, F, then G, H, I, J, K. Node L is located after 11 node expansions and the search returns a solution.]
BREADTH-FIRST SEARCH PATTERN
72
[Animation: depth-first search of the same tree. Each expanded node is removed from the stack and its revealed children are pushed on the FRONT, so the search runs deep along one path and backtracks at dead ends. Node L is located after 14 node expansions and the search returns a solution.]
73
DEPTH-FIRST SEARCH PATTERN
Iterative deepening search

 No choosing of the best depth limit


 It tries all possible depth limits:
◼ first 0, then 1, 2, and so on
◼ combines the benefits of depth-first and breadth-first search

74
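Iterative deepening can be sketched by rerunning a depth-limited DFS with limits 0, 1, 2, … (helper names are illustrative):

```python
# Iterative deepening sketch: depth-limited DFS with limits 0, 1, 2, ...
# until a solution appears; combines BFS completeness with DFS memory use.
def ids(initial, successors, goal_test, max_limit=50):
    def dls(state, path, limit):
        if goal_test(state):
            return path
        if limit == 0:
            return None                 # cut off at this limit
        for nxt in successors(state):
            if nxt not in path:         # avoid loops on the current path
                found = dls(nxt, path + [nxt], limit - 1)
                if found:
                    return found
        return None

    for limit in range(max_limit + 1):  # try 0, then 1, then 2, ...
        result = dls(initial, [initial], limit)
        if result:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
```

Re-expanding the shallow levels on every iteration looks wasteful, but since most nodes of a tree sit at the deepest level, the overhead is modest.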
Iterative deepening search

75
Iterative deepening search (Analysis)

 optimal
 complete
 Time and space complexities
◼ reasonable
 suitable for problems
◼ with a large search space
◼ where the depth of the solution is not known

76
Iterative lengthening search

 IDS uses depth as its limit

 ILS uses path cost as its limit
◼ an iterative version for uniform cost search
◼ has the advantages of uniform cost search
 while avoiding its memory requirements
◼ but ILS incurs substantial overhead
 compared to uniform cost search

77
78
Bidirectional search

 Run two simultaneous searches


◼ one forward from the initial state
◼ another backward from the goal
◼ stop when the two searches meet
 However, searching backward is difficult
◼ there may be a huge number of goal states
◼ at a goal state, which actions were used to reach it?
◼ are the actions reversible, so that predecessors can be computed?

79
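Bidirectional search can be sketched as two breadth-first frontiers; this assumes every action is reversible, so a single neighbour function serves both directions (names are illustrative):

```python
from collections import deque

# Bidirectional search sketch: one BFS wave forward from the start and
# one backward from the goal, stopping when the frontiers meet.
def bidirectional(start, goal, neighbours):
    if start == goal:
        return [start]
    seen_f, seen_b = {start: None}, {goal: None}   # state -> parent
    frontier_f, frontier_b = deque([start]), deque([goal])

    def path_through(meet):
        forward, node = [], meet
        while node is not None:                    # walk back to start
            forward.append(node)
            node = seen_f[node]
        forward.reverse()
        node = seen_b[meet]
        while node is not None:                    # walk on to the goal
            forward.append(node)
            node = seen_b[node]
        return forward

    while frontier_f and frontier_b:
        # Advance each direction by one expansion per round.
        for frontier, seen, other in ((frontier_f, seen_f, seen_b),
                                      (frontier_b, seen_b, seen_f)):
            state = frontier.popleft()
            for nxt in neighbours(state):
                if nxt not in seen:
                    seen[nxt] = state
                    if nxt in other:               # frontiers meet here
                        return path_through(nxt)
                    frontier.append(nxt)
    return None

# Undirected chain A-B-C-D, listed both ways so actions are reversible.
grid = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
```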
80
81
When to use what

 Depth-First Search:
◼ Many solutions exist
◼ Know (or have a good estimate of) the depth of solution
 Breadth-First Search:
◼ Some solutions are known to be shallow
 Uniform-Cost Search:
◼ Actions have varying costs
◼ Least cost solution is required
This is the only uninformed search that worries about costs.
 Iterative-Deepening Search:
◼ Space is limited and the shortest solution path is required

82
Avoiding repeated states

 for all search strategies

◼ there is a possibility of expanding states
 that have already been encountered and expanded before, on some
other path
◼ this may cause the search to loop forever
◼ Algorithms that forget their history
 are doomed to repeat it

83
Avoiding repeated states

 Three ways to deal with this possibility


◼ Do not return to the state the search just came from
 Refuse generation of any successor identical to its parent state
◼ Do not create paths with cycles
 Refuse generation of any successor identical to any of its ancestor states
◼ Do not generate any previously generated state
 Not only ancestor states, but all other expanded states must
be checked against

84
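The third, strictest option, checking every generated state, is essentially graph search. A sketch (names are illustrative):

```python
from collections import deque

# Graph-search sketch: a closed set of every state ever generated
# guarantees no state is added twice, so cycles cannot trap the search.
def graph_search(initial, successors, goal_test):
    frontier = deque([initial])
    parents = {initial: None}     # doubles as the "already generated" set
    while frontier:
        state = frontier.popleft()
        if goal_test(state):
            path = []             # rebuild the path via parent links
            while state is not None:
                path.append(state)
                state = parents[state]
            return list(reversed(path))
        for nxt in successors(state):
            if nxt not in parents:   # skip any state generated before
                parents[nxt] = state
                frontier.append(nxt)
    return None

# A graph with the cycle A-B-A that would loop a naive tree search.
cyclic = {"A": ["B"], "B": ["A", "C"], "C": []}
```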
