This document discusses heuristic search algorithms. It begins by stating that uninformed search methods lack problem-specific knowledge and are inefficient, while informed search methods use heuristics to improve search speed. A heuristic function estimates the value of a state and is used to minimize the search process. The document then discusses concepts like heuristic functions, heuristic knowledge, and how heuristic functions estimate the cost from the current node to the goal. It provides examples of heuristic functions for problems like the 8-puzzle and chess. Finally, it discusses heuristic search algorithms like hill climbing, best-first search, and A* search that make use of heuristic functions.


PREETI SONI
Lecturer (CSE)
RSRRCET, Bhilai, C.G.

Uninformed search methods lack problem-specific knowledge. Such methods are prohibitively inefficient in many cases. Using problem-specific knowledge can dramatically improve the search speed. In this Unit-II we will study some informed search algorithms that use problem-specific heuristics. At the heart of such algorithms is the concept of a heuristic function.

In heuristic search or informed search, heuristics are used to identify the most promising search path.

A heuristic function is a function that ranks alternatives at each branching step of a search algorithm, based on the available information, in order to decide which branch to follow during the search. A heuristic function estimates the value of a state; it is an approximation used to reduce the search effort.

Heuristic knowledge: knowledge of approaches that are likely to work, or of properties that are likely to be true (but not guaranteed).

A heuristic function is also known as an objective function.


A heuristic function at a node n is an estimate of the optimum cost from the current node to a goal. It is denoted by h(n):

h(n) = estimated cost of the cheapest path from node n to a goal node

HEURISTIC FUNCTIONS:
f: States --> Numbers
f(T) expresses the quality of the state T.
Heuristic functions allow problem-specific knowledge to be expressed, and can be incorporated in a generic way into the search algorithms.
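As a minimal sketch (the names below are illustrative, not from the slides), a heuristic is simply a function mapping a state to a number, and a search algorithm can use it generically:

from typing import Callable, Hashable

State = Hashable
Heuristic = Callable[[State], float]   # h(n): estimated cost from n to a goal

def greedy_choice(successors: list, h: Heuristic) -> State:
    """Pick the successor whose heuristic estimate is smallest."""
    return min(successors, key=h)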


Example 1: We want a path from Kolkata to Guwahati.

A heuristic for this problem may be the straight-line distance from the current city to Guwahati:
h(Kolkata) = euclideanDistance(Kolkata, Guwahati)
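A small sketch of such a straight-line heuristic, assuming we have coordinates for each city (the coordinate values and function names below are illustrative, not from the slides):

import math

coords = {"Kolkata": (88.36, 22.57), "Guwahati": (91.74, 26.14)}   # approximate (lon, lat)

def h(city, goal="Guwahati"):
    (x1, y1), (x2, y2) = coords[city], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)   # Euclidean distance as the estimate

print(round(h("Kolkata"), 2))   # straight-line estimate, for illustration only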


f1(T) = the number of correctly placed tiles on the board.

f2(T) = the number of incorrectly placed tiles on the board: this gives a (rough!) estimate of how far we are from the goal.

(For the example board shown on the slide, f2 = 4.)

Most often, distance-to-goal heuristics are more useful!

f3(T) = the sum of (the horizontal + vertical distance that each tile is away from its final destination): this gives a better estimate of the distance from the goal node.

(For the example board shown on the slide, f3 = 1 + 4 + 2 + 3 = 10.)
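As a sketch of these two 8-puzzle heuristics (the goal layout below is an assumption, since the slide's board image did not survive extraction):

GOAL = (1, 2, 3,
        8, 0, 4,
        7, 6, 5)   # assumed goal layout; 0 stands for the blank

def f2_misplaced(state, goal=GOAL):
    """Number of tiles (ignoring the blank) not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def f3_manhattan(state, goal=GOAL):
    """Sum of the horizontal + vertical distances of each tile from its goal cell."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        gidx = goal.index(tile)
        total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total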


For a game such as chess, an evaluation function can be a simple material count:

F(T) = (value count of black pieces) - (value count of white pieces)

(The slide expands this as the sum of the piece values v(piece) over the black pieces minus the same sum over the white pieces; the piece symbols did not survive extraction.)
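A minimal material-count sketch, using conventional piece values (the values and names here are a common convention, not taken from the slides):

PIECE_VALUE = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def evaluate(black_pieces, white_pieces):
    """F(T) = total value of black pieces minus total value of white pieces."""
    value = lambda pieces: sum(PIECE_VALUE[p] for p in pieces)
    return value(black_pieces) - value(white_pieces)

print(evaluate(["queen", "rook"], ["rook", "knight"]))   # 14 - 8 = 6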


Heuristic: a rule of thumb used to help guide search; often something learned experientially and recalled when needed.

Given a search space, a current state and a goal state:
generate all successor states and evaluate each with our heuristic function;
select the move that yields the best heuristic value.

Methods that use a heuristic function to provide specific knowledge about the problem:

Hill Climbing
Best First Search
A*
AO*
Beam Search
Greedy Search

In the simple hill climbing method, the first successor state that is better than the current state is selected.

In gradient search (steepest-ascent hill climbing), we consider all the moves from the current state and select the best one as the next state.

There is a trade-off between the time required to select a move and the number of moves required to reach a solution, which must be considered while deciding which method will work better for a particular problem.

Usually, the time required to select a move is longer for gradient search, while the number of moves required to reach a solution is usually larger for basic hill climbing.

(Slide figure: an example search tree rooted at Root, with heuristic values A: 2.7, B: 8, C: 2.9 and D: 2 on its nodes; D is the goal node.)

Hill climbing algorithm (a Python sketch follows the steps):

1. Put the initial node on a list START.
2. If (START is empty) or (START == GOAL), terminate the search.
3. Remove the first node from START. Call this node a.
4. If (a == GOAL), terminate the search with success.
5. Else, if node a has successors, generate all of them. Find out how far they are from the goal node, sort them by remaining distance from the goal, and add them to the beginning of START.
6. Go to step 2.
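A Python sketch of the listed procedure (function and variable names are mine; h gives the estimated remaining distance to the goal):

def hill_climbing(initial, goal, successors, h):
    start = [initial]                            # step 1: put the initial node on START
    while start:                                 # step 2: stop when START is empty
        a = start.pop(0)                         # step 3: remove the first node, call it a
        if a == goal:                            # step 4: success
            return a
        children = sorted(successors(a), key=h)  # step 5: generate and sort the successors
        start = children + start                 #         add them to the beginning of START
    return None                                  # search failed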

Both simple hill climbing and gradient search may fail to find a solution: the algorithm may terminate not by finding the goal state, but by reaching a state from which no better state can be generated.

This will happen when the program has reached either:


1. Local maximum

2. Plateau

3. Ridge

1. LOCAL MAXIMUM

A local maximum is a state that is better than all its neighbours but not better than some other states further away.

At a local maximum, all moves appear to make things worse.

Remedy for a local maximum:
Backtrack to some earlier node and try to go in a different direction.

2. PLATEAU

A flat area of the search space in which all neighbouring states have the same value.

On a plateau it is not possible to determine the best direction in which to move by making a local comparison.

Remedy:
Make a big jump in some direction and try to get to a new section of the search space.

3. RIDGE

A ridge is a special kind of local maximum.

The orientation of the high region, compared to the set of available moves, makes it impossible to climb up.

Remedy:
Apply two or more rules before doing the test.

Advantages of DFS:

1. Requires less memory.
2. It may find a solution without examining all the nodes.

Advantages of BFS:

1. It does not get trapped at a dead end.
2. If there is a solution, it will definitely find it, and if there is more than one solution it will find the minimal one.

Best First Search combines the advantages of BFS and DFS: it follows a single path at a time, but switches paths whenever some competing path looks more promising than the current one.

In Best First Search one move is selected, but the other nodes are kept under consideration so that they can be examined or expanded later if the selected path becomes less promising.


f(n) = g(n) + h(n)

g(n): a measure of the cost of getting from the initial state to the current node

h(n): an estimate of the cost of getting from the current node to the goal node

Best First Search algorithm (a Python sketch follows the steps):

1. Put the initial node on a list START.
2. If (START is empty) or (START == GOAL), terminate the search.
3. Remove the first node from START. Call this node a.
4. If (a == GOAL), terminate the search with success.
5. Else, if node a has successors, generate all of them. Find out how far they are from the goal node, and sort all the children generated so far by the remaining distance from the goal.
6. Name this list START1.
7. Replace START with START1.
8. Go to step 2.
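A sketch of this procedure using a priority queue ordered by h(n) (names are mine; the visited set is an addition to avoid re-expanding states, which the slide's steps do not mention):

import heapq
from itertools import count

def best_first_search(initial, goal, successors, h):
    tie = count()                          # tie-breaker so the heap never compares nodes
    start = [(h(initial), next(tie), initial)]
    visited = {initial}
    while start:                           # fail when START is exhausted
        _, _, a = heapq.heappop(start)     # take the most promising node
        if a == goal:
            return a                       # success
        for child in successors(a):
            if child not in visited:       # keep every generated child for later
                visited.add(child)
                heapq.heappush(start, (h(child), next(tie), child))
    return None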

(Slide figure: an example search tree for Best First Search, with heuristic values on the nodes; the search starts from the Start Node, and L, with a heuristic value of 0, is the goal node. The trace of the search on this tree is shown in the table below.)

S.No. | Node being expanded | Children             | Available Nodes                                          | Node Chosen
1.    | Start Node          | (A:3), (B:6), (C:5)  | (A:3), (B:6), (C:5)                                      | (A:3)
2.    | A                   | (D:9), (E:8)         | (B:6), (C:5), (D:9), (E:8)                               | (C:5)
3.    | C                   | (H:7)                | (B:6), (D:9), (E:8), (H:7)                               | (B:6)
4.    | B                   | (F:12), (G:14)       | (D:9), (E:8), (H:7), (F:12), (G:14)                      | (H:7)
5.    | H                   | (I:5), (J:6)         | (D:9), (E:8), (F:12), (G:14), (I:5), (J:6)               | (I:5)
6.    | I                   | (K:1), (L:0), (M:2)  | (D:9), (E:8), (F:12), (G:14), (J:6), (K:1), (L:0), (M:2) | (L:0)

The search stops as the goal node L is reached.

A* utilizes both evaluation function values and cost function values.

Fitness number = evaluation function value (the estimated cost from the current node to the target node) + cost function value (the cost incurred from the start node to the current node).

(Slide figure: the example search tree again, now with each node annotated with its fitness value f(n) = g(n) + h(n).)

A* algorithm (a Python sketch follows the steps):

1. Put the initial node on a list START.
2. If (START is empty) or (START == GOAL), terminate the search.
3. Remove the first node from START. Call this node a.
4. If (a == GOAL), terminate the search with success.
5. Else, if node a has successors, generate all of them. Estimate the fitness number of the successors by totalling the evaluation function value and the cost function value, and sort the list by fitness number.
6. Name this list START1.
7. Replace START with START1.
8. Go to step 2.
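A sketch of A* ordered by the fitness number f(n) = g(n) + h(n). Names are mine, successors(a) is assumed to yield (child, step_cost) pairs, and the best_g bookkeeping is an addition beyond the listed steps so the sketch stays correct when a node is reachable by several paths:

import heapq
from itertools import count

def a_star(initial, goal, successors, h):
    tie = count()
    start = [(h(initial), next(tie), 0, initial)]    # (fitness f, tie-breaker, cost g, node)
    best_g = {initial: 0}
    while start:
        f, _, g, a = heapq.heappop(start)
        if a == goal:
            return g                                 # cost of the path that was found
        for child, step_cost in successors(a):
            g_child = g + step_cost
            if g_child < best_g.get(child, float("inf")):
                best_g[child] = g_child
                heapq.heappush(start, (g_child + h(child), next(tie), g_child, child))
    return None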

An arc connecting different branches (drawn as a curved link across the branches on the slide) marks an AND relationship; a tree containing such arcs is called an AND tree (or AND-OR tree).

Between a complex problem and its sub-problems, two kinds of relationships can exist:

AND relationship

OR relationship

In an AND relationship, the solution to the problem is obtained by solving all of the sub-problems.

In an OR relationship, the solution to the problem is obtained by solving any one of the sub-problems.

(Slide figure: an example AND-OR tree rooted at A, with cost estimates on the nodes.)

Step 1: Create an initial graph GRAPH with a single node NODE. Compute the evaluation function value of NODE.

Step 2: Repeat until NODE is solved or its cost reaches a value so high that it cannot be expanded.

Step 2.1: Select a node NODE1 from NODE. Keep track of the path.
Step 2.2: Expand NODE1 by generating its children. For the children which are not ancestors of NODE1, evaluate the evaluation function value. If a child node is a terminal node, label it END_NODE.
Step 2.3: Generate a set of nodes DIFF_NODES containing only NODE1.
Step 2.4: Repeat until DIFF_NODES is empty.

Step 2.4.1: Choose a node CHOOSE_NODE from DIFF_NODES such that none of the descendants of CHOOSE_NODE is in DIFF_NODES.
Step 2.4.2: Estimate the cost of each arc emerging from CHOOSE_NODE. This cost is the total of the evaluation function values and the cost of the arc.
Step 2.4.3: Find the minimal value and mark the connector through which the minimum is achieved, overwriting the previous mark if it is different.
Step 2.4.4: If all the output nodes of the marked connector are labelled END_NODE, label CHOOSE_NODE as OVER.
Step 2.4.5: If CHOOSE_NODE has been marked OVER or its cost has changed, add all the ancestors of CHOOSE_NODE to the set DIFF_NODES.

(Slide figures: a step-by-step AO* worked example on a small AND-OR graph rooted at A, showing how the cost estimates of the nodes are revised as the graph is expanded.)

Many AI problems can be viewed as problems of constraint satisfaction.

As compared with a straightforward search procedure, viewing a problem as one of constraint satisfaction can substantially reduce the amount of search.

Constraint satisfaction operates in a space of constraint sets.

The initial state contains the original constraints given in the problem.

A goal state is any state that has been constrained "enough".

Two-step process:
1. Constraints are discovered and propagated as far as possible.
2. If there is still not a solution, then search begins, adding new constraints.

Two kinds of rules:
1. Rules that define valid constraint propagation.
2. Rules that suggest guesses when necessary.

Cryptarithmetic puzzle:

  S E N D
+ M O R E
---------
M O N E Y

Rules for propagating constraints generate the following constraints:

M = 1, since two single-digit numbers plus a carry cannot total more than 19.

S = 8 or 9, since S + M + C3 > 9 (to generate the carry) and M = 1, so S + 1 + C3 > 9, hence S + C3 > 8 and C3 is at most 1.

O = 0, since S + M(1) + C3(<=1) must be at least 10 to generate a carry and can be at most 11. But M is already 1, so O must be 0.

N = E or E + 1, depending on the value of C2. But N cannot have the same value as E, so N = E + 1 and C2 is 1.

In order for C2 to be 1, the sum N + R + C1 must be greater than 9, so N + R must be greater than 8.

N + R cannot be greater than 18, even with a carry in, so E cannot be 9.

Suppose E is assigned the value 2.

The constraint propagator now observes that:

N = 3, since N = E + 1.

R = 8 or 9, since R + N(3) + C1(1 or 0) = 2 or 12. But since N is already 3, the sum of these non-negative numbers cannot be less than 3. Thus R + 3 + (0 or 1) = 12, and R = 8 or 9.

2 + D = Y or 2 + D = 10 + Y, from the sum in the rightmost column.

(Slide figure: the constraint-propagation search tree for SEND + MORE = MONEY. The initial state records that no two letters have the same value and that the sum of the digits must be as shown. Propagation yields M = 1, S = 8 or 9, O = 0, N = E + 1, C2 = 1, N + R > 8, E ≠ 9. Guessing E = 2 then gives N = 3, R = 8 or 9, and 2 + D = Y or 2 + D = 10 + Y. One branch takes C1 = 0, so 2 + D = Y and N + R = 10 + E, giving R = 9 and S = 8; the other takes C1 = 1, so 2 + D = 10 + Y and D = 8 + Y, i.e. D = 8 or 9, but both D = 8, Y = 0 and D = 9, Y = 1 conflict with values already assigned.)
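For comparison, a brute-force search sketch (no constraint propagation, just trying digit assignments) that finds the standard solution to the puzzle; the function name is mine:

from itertools import permutations

def solve_send_more_money():
    letters = "SENDMORY"                             # the 8 distinct letters in the puzzle
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a["S"] == 0 or a["M"] == 0:               # no leading zeros
            continue
        num = lambda word: int("".join(str(a[c]) for c in word))
        if num("SEND") + num("MORE") == num("MONEY"):
            return a

print(solve_send_more_money())   # S=9, E=5, N=6, D=7, M=1, O=0, R=8, Y=2 (9567 + 1085 = 10652)

Constraint propagation reaches an answer with far less search than this exhaustive enumeration, which is the point of the two-step process described above.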

Why has game playing been a focus of AI?

Games have well-defined rules, which can be implemented in programs.

The rules of a game are limited, hence extensive amounts of domain-specific knowledge are seldom needed.

Games provide a structured task in which success or failure can be measured with little effort.

The interfaces required are usually simple.

For the human expert, it is easy to explain the rationale for a move, unlike in many other domains.

Usual conditions:

Each player has a global view of the board

Zero-sum game: any gain for one player is a loss for the other

Components of a game:

an initial state

for each state, a list of legal moves and the consequent states

a test to determine whether a state is a terminal state (the end of the game)

a utility function: computes a single numeric value for a terminal state
(win, lose, or draw, and sometimes by how much)

Minimax is a depth-first, depth-limited search procedure.

The first player, MAX, tries to maximize the utility function.

The second player, MIN, tries to minimize the utility function.

Minimax assumes the opponent always makes the best possible move (which is not always true of a human player); under this assumption it gives the best possible outcome, i.e. it maximizes the worst-case outcome.

Let A be the initial state of the game.

The plausible-move generator generates three children for that state, and the static evaluation function assigns the values shown alongside each of the states.

It is assumed that the static evaluation function returns a value between -20 and +20, where a value of +20 indicates a win for the maximizer and a value of -20 a win for the minimizer.

A value of 0 (zero) indicates a tie or draw.

The maximizer always tries to move to a position where the static evaluation function value is the maximum positive value.

(Slide figure: the example game tree, with static evaluation values on the children, e.g. C = -6 and D = 7.)

The current node is the root: a MAX node.

We want to make a decision at the root.

Look ahead to the consequent states resulting from legal moves:

edges are legal moves

nodes are game states

leaves are terminal states

When a leaf node is evaluated, a large value is good for player MAX; a small value is good for player MIN.

Which player is making the move alternates between adjacent levels (level 0 MAX, level 1 MIN, level 2 MAX, etc.).

minimaxValue(n) =
  utility(n)                                    if n is a terminal state
  max of minimaxValue(s) over all successors s  if n is a MAX node
  min of minimaxValue(s) over all successors s  if n is a MIN node
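A direct sketch of this recurrence (successors and utility are assumed to be supplied by the game; is_max says whose turn it is):

def minimax_value(n, is_max, successors, utility):
    children = successors(n)
    if not children:                   # terminal state
        return utility(n)
    values = [minimax_value(c, not is_max, successors, utility) for c in children]
    return max(values) if is_max else min(values)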

The purpose of exploring the game tree with minimax is to avoid potential loss.

Due to time constraints, it may not be possible to explore a path all the way to a leaf.

Going one level deeper may have revealed a trap: a sudden negative turn of events!

Game tree with depth d and branching factor b:

full minimax requires O(b^d) evaluations

Minimax on a chess game:

average number of alternative legal moves: 35
average number of moves for one player over the course of a game: 50
number of nodes: 35^100, which is about 10^154
number of distinct nodes: about 10^40
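A quick arithmetic check of that size estimate:

import math
print(round(100 * math.log10(35)))   # 154, i.e. 35**100 is roughly 10**154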

Minimax looks at every line of play, no matter how unlikely; smarter searching can retain optimality without this drawback.

Alpha-beta pruning improves the minimax algorithm by pruning needless evaluations.

It computes the same result without searching the entire tree.

Don't explore a move that is inferior to a known alternative.

If we cannot search to a terminal state, use a heuristic to approximate the value of the eventual terminal state.

Alpha: the minimal score that player MAX is guaranteed to attain (the best value known so far; improvement is still possible, so it is the minimum attainable).

Beta: the best score that player MIN can attain so far (the lowest score known so far; a lower score may yet be found, so it is the maximum attainable).
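A sketch of minimax with alpha-beta pruning, using the same assumed game interface as the plain minimax sketch above:

def alphabeta(n, is_max, successors, utility, alpha=float("-inf"), beta=float("inf")):
    children = successors(n)
    if not children:                      # terminal state
        return utility(n)
    if is_max:
        value = float("-inf")
        for c in children:
            value = max(value, alphabeta(c, False, successors, utility, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:             # MIN will never allow this line: prune
                break
        return value
    else:
        value = float("inf")
        for c in children:
            value = min(value, alphabeta(c, True, successors, utility, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:             # MAX will never allow this line: prune
                break
        return value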

(Slide figures: an alpha-beta example on a Max/Min game tree. The first figure illustrates Beta, the best score that MIN can attain so far, with Beta = 6 at a MIN node; the second illustrates Alpha, the best score that MAX can attain so far, with Alpha = 6 at a MAX node.)

Typically, alpha-beta pruning can reduce the effective branching factor to the square root of the full branching factor, i.e. from b^d to b^(d/2) nodes examined.

Deep Blue (the chess-playing program that beat Garry Kasparov): alpha-beta pruning reduced its effective branching factor from 35 to about 6.

Move ordering: if possible, consider the best successors first.

Cutoffs will occur earlier, which makes it possible to search a smaller portion of the tree.

This requires evaluation of interior nodes: sort on the evaluation function (perhaps using a neural net to learn a better evaluation function?).

Each player knows the total game state.

tic-tac-toe
connect four
checkers
chess
amazons

nim
othello
go
hex

backgammon (dice)

No player knows the total game state.

scrabble

bridge

poker

battleship

kriegspiel

Waiting for Quiescence

Secondary Search

Using Book Moves

Alternatives to Minimax
