02-Problem Decomposition and Planning

The document outlines the vision and mission of a department focused on empowering students for professional competence and social responsibility in techno-economic development. It details course outcomes related to Artificial Intelligence, problem-solving processes, and methodologies such as problem decomposition and planning. Additionally, it discusses rule-based systems and knowledge-based system architecture, emphasizing the importance of reasoning and knowledge representation in problem-solving.

 Departmental Vision –
Empowering the students to be professionally competent & socially responsible for the techno-economic development of society.

 Departmental Mission –
a) To provide quality education enabling students for higher studies, research and entrepreneurship.
b) To inculcate professionalism and ethical values through day-to-day practices.
AI & R COURSE OUTCOMES:
 CO1: Able to identify and apply the concepts of AI and various search strategies in various AI applications.
 CO2: Able to apply Artificial Intelligence techniques for problem solving.
 CO3: Able to design a knowledge-based system.
 CO4: Able to apply the methods to new NLP problems and to solve problems outside NLP.
 CO5: Able to understand and design robot system control.
 CO6: Able to understand and analyze various types of robots in practice.
UNIT 2 : PROBLEM DECOMPOSITION AND PLANNING
CONTENTS:
 Problem Decomposition: Goal Trees, Rule Based Systems, Rule Based Expert Systems.
 Planning: STRIPS, Forward and Backward State Space Planning, Goal Stack Planning, Plan Space Planning, A Unified Framework for Planning.
 Constraint Satisfaction: N-Queens, Constraint Propagation, Scene Labeling, Higher Order and Directional Consistencies, Backtracking and Look-ahead Strategies.
Problem Solving Process
 Problem solving is the process of generating solutions for a given situation. As shown in the following figure, this process consists of a sequence of well-defined methods that can handle doubt, inconsistency, uncertainty and ambiguity, and help in achieving the desired goal.
Framework for Problem Solving
 The following four phases can be identified in the process of solving problems:
(1) Understanding the problem.
(2) Making a plan of solution.
(3) Carrying out the plan.
(4) Looking back, i.e., verifying.
The problem-solving process consists of a sequence of sections that fit together depending on the type of problem to be solved.
These are:
 Problem Definition.

 Problem Analysis.

 Generating possible Solutions.

 Analyzing the Solutions.

 Selecting the best Solution(s).

 Planning the next course of action (Next Steps)


Formulating Problem
 The first step is to identify the problem; many questions have to be answered while defining the problem.
 Next, we need to specify the problem space along with the target that we want to achieve.
 After that, the candidate solution is measured against the problem's objectives or requirements.
 The next step is to analyse and represent the task.
 We start with the state-space approach for problem solving, using a very well-known puzzle.
 Here the target is the machine for solving the problem; the following figure shows the steps involved in formulating a problem.
Figure: problem formulation steps
Problem analysis & representation
 Knowledge representation and reasoning is dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks, such as diagnosing a medical condition or having a dialog in a natural language.
 Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets.
The Problem Definition should satisfy the
following criteria
 Compactness
 Utility
 Soundness
 Completeness
 Generality
 Transparency
Following is the example of the Tower of Hanoi problem, which is used to illustrate knowledge representation.
 Problem: The puzzle consists of three rods and a number of disks of different sizes which can slide onto any rod. The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the top, thus making a conical shape.
Objectives
 The objective of the puzzle is to move the entire stack to another rod, obeying the following simple rules:
 Only one disk can be moved at a time.
 Each move consists of taking the upper disk from one of the stacks and placing it on top of another stack, i.e. a disk can only be moved if it is the uppermost disk on a stack.
 No disk may be placed on top of a smaller disk.
 With three disks, the puzzle can be solved in seven moves. The minimum number of moves required to solve a Tower of Hanoi puzzle is 2^n − 1, where n is the number of disks.
Solution
 For an even number of disks:
 make the legal move between pegs A and B
 make the legal move between pegs A and C
 make the legal move between pegs B and C
 repeat until complete
 For an odd number of disks:
 make the legal move between pegs A and C
 make the legal move between pegs A and B
 make the legal move between pegs C and B
 repeat until complete
 In each case, a total of 2^n − 1 moves are made.
Problem Reduction Space example
 In a problem reduction space, the nodes represent problems to be solved or goals to be achieved, and the edges represent the decomposition of the problem into sub-problems.
 This is best illustrated by the example of the Towers of Hanoi problem.

Problem Reduction Space

[Figure: the problem-reduction tree for the three-disk Towers of Hanoi, built up step by step. Writing "nXY" for "move the top n disks from peg X to peg Y", the root problem 3AC decomposes into 2AB, 1AC and 2BC; 2AB decomposes into 1AC, 1AB and 1CB; and 2BC decomposes into 1BA, 1BC and 1AC.]
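A small code sketch of the same decomposition may help. The function below (a standard recursive formulation, not taken from the slides) reduces the n-disk problem to two (n−1)-disk subproblems and one single-disk move, mirroring the tree above; the peg names and move representation are arbitrary choices for illustration.

def hanoi(n, src, dst, spare):
    """Return the list of moves that solves an n-disk Towers of Hanoi puzzle."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, spare, dst)      # e.g. 2AB: move n-1 disks out of the way
            + [(n, src, dst)]                  # e.g. 1AC: move the remaining disk
            + hanoi(n - 1, spare, dst, src))   # e.g. 2BC: move the n-1 disks back on top

moves = hanoi(3, "A", "C", "B")
print(len(moves), moves)                       # 7 moves, i.e. 2**3 - 1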


GOAL TREES
AO* ALGORITHM
 AO* is a generalization of A* to AND-OR graphs: it expands the graph top-down and revises cost estimates bottom-up.
 When a problem can be divided into a set of sub-problems, where each sub-problem can be solved separately, a combination of these solutions will be a solution to the whole problem.
 AND-OR graphs/trees are used for representing the solution.
 The decomposition of the problem, or problem reduction, generates AND arcs.
 One AND arc may point to any number of successor nodes, all of which must be solved for the arc to point to a solution.
 Just as in an OR graph, several arcs may emerge from a single node, indicating several possible ways of solving the problem.
 Hence the graph is known as an AND-OR graph instead of an AND graph. The figure shows an AND-OR graph.
SEARCHING AND/OR GRAPH
 A solution in an AND-OR tree is a subtree whose leaves are included in the goal set.
 Cost function: the cost of an AND node is the sum of the costs of its parts, f(n) = f(n1) + f(n2) + ... + f(nk).
 How can we extend A* to search AND/OR trees? The AO* algorithm.
AO* ALGORITHM
1. Initialise the graph to the start node.
2. Traverse the graph following the current best path, accumulating the nodes that have not yet been expanded or solved.
3. Pick one of these nodes and expand it. If it has no successors, assign it the value FUTILITY; otherwise calculate the value of f' for each of its successors.
4. If f' is 0, then mark the node as SOLVED.
5. Change the value of f' of the newly expanded node to reflect its successors, by back-propagation.
6. Wherever possible use the most promising routes, and if a node is marked as SOLVED then mark its parent node as SOLVED.
7. If the start node is SOLVED or its value is greater than FUTILITY, stop; otherwise repeat from step 2.
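The following sketch is not the full AO* algorithm (there is no incremental expansion or back-propagation of revised estimates); it only illustrates the cost rule above on a fully known AND-OR tree: an OR choice takes the cheapest alternative, and an AND arc sums the costs of all its successors. The graph, costs and encoding are made-up assumptions for illustration.

def best_cost(node, graph, leaf_cost, edge_cost=1):
    """Cost of the cheapest solution subtree rooted at node.

    graph maps a node to a list of AND-groups; each group is a list of
    successors that must all be solved (one AND arc).  A node that is not
    in graph is a primitive problem whose cost comes from leaf_cost.
    """
    if node not in graph:                          # primitive / leaf problem
        return leaf_cost[node]
    return min(                                    # OR: pick the cheapest arc
        sum(edge_cost + best_cost(s, graph, leaf_cost) for s in group)
        for group in graph[node])                  # AND: sum over the whole group

# Hypothetical AND-OR graph mirroring the Towers of Hanoi reduction tree:
graph = {"3AC": [["2AB", "1AC", "2BC"]],
         "2AB": [["1AC", "1AB", "1CB"]],
         "2BC": [["1BA", "1BC", "1AC"]]}
leaf_cost = {"1AC": 1, "1AB": 1, "1CB": 1, "1BA": 1, "1BC": 1}
print(best_cost("3AC", graph, leaf_cost))          # 16 = 7 unit moves + 9 unit arcs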
AND/OR SEARCH
 We must examine several nodes simultaneously when choosing the next move.

[Figure: example AND-OR graphs with cost estimates attached to the nodes and AND arcs joining the successors that must all be solved.]
Thus we can see that to search an AND-OR graph, the following three things must be done:
1. Traverse the graph starting at the initial node and following the current best path, and accumulate the set of nodes that are on the path and have not yet been expanded.
2. Pick one of these unexpanded nodes and expand it. Add its successors to the graph and compute f' (the cost of the remaining distance) for each of them.
3. Change the f' estimate of the newly expanded node to reflect the new information produced by its successors. Propagate this change backward through the graph, and at each node decide which of its successor arcs is the most promising, marking it as part of the current best path.
RULE-BASED SYSTEMS
KBS ARCHITECTURE (1)
 The typical architecture of a KBS is often described as follows:

[Figure: user - user interface - inference engine - knowledge base]
KBS ARCHITECTURE (1)
 The inference engine and knowledge base
are separated because:
 the reasoning mechanism needs to be as stable
as possible;
 the knowledge base must be able to grow and
change, as knowledge is added;
 this arrangement enables the system to be built
from, or converted to, a shell.
KBS ARCHITECTURE (2)
 It is reasonable to produce a richer, more elaborate description of the typical KBS.
 A more elaborate description, which still includes the components that are to be found in almost any real-world system, would look like this:

KBS ARCHITECTURE (2)
[Figure: the elaborated KBS architecture.]
The system holds a collection of general
principles which can potentially be applied
to any problem - these are stored in the
knowledge base.
The system also holds a collection of
specific details that apply to the current
problem (including details of how the
current reasoning process is progressing) -
these are held in working memory.
Both these sorts of information are
processed by the inference engine.
KBS ARCHITECTURE (2)
 Any practical expert system needs an
explanatory facility. It is essential that an
expert system should be able to explain its
reasoning. This is because:
 it gives the user confidence in the system;

 it makes it easier to debug the system.


KBS ARCHITECTURE (2)
 It is not unreasonable to include an expert
interface & a knowledge base editor, since
any practical KBS is going to need a
mechanism for efficiently building and
modifying the knowledge base.
KBS ARCHITECTURE (2)
 As mentioned earlier, a reliable expert should
be able to explain and justify his/her advice
and actions.
RULE-BASED REASONING
 One can often represent the expertise that
someone uses to do an expert task as rules.
 A rule means a structure which has an if

component and a then component.


 This is actually a very old idea indeed -
THE EDWIN SMITH PAPYRUS
 The Edwin Smith papyrus is a 3700-year-old
ancient Egyptian text.

[Image: a section of the Edwin Smith papyrus.]
THE EDWIN SMITH PAPYRUS
 It contains medical descriptions of 48
different types of head wound.
 There is a fixed format for each problem

description: Title - symptoms - diagnosis -


prognosis - treatment.
THE EDWIN SMITH PAPYRUS
 There's a fixed style for the parts of each
problem description. Thus, the prognosis
always reads "It is an injury that I will cure",
or "It is an injury that I will combat", or "It is
an injury against which I am powerless".

 An example taken from the Edwin Smith


papyrus:
THE EDWIN SMITH PAPYRUS
Title:
Instructions for treating a fracture of the
cheekbone.
Symptoms:
If you examine a man with a fracture of the
cheekbone, you will find a salient and red
fluxion, bordering the wound.
THE EDWIN SMITH PAPYRUS
Diagnosis and prognosis:
Then you will tell your patient: "A fracture of the
cheekbone. It is an injury that I will cure."
Treatment:
You shall tend him with fresh meat the first day. The
treatment shall last until the fluxion resorbs. Next
you shall treat him with raspberry, honey, and
bandages to be renewed each day, until he is cured.
RULE-BASED REASONING: RULES
 examples:
if - the leaves are dry, brittle and discoloured
then - the plant has been attacked by red
spider mite

if - the customer closes the account


then - delete the customer from the database
RULE-BASED REASONING: RULES
 The statement, or set of statements, after
the word if represents some pattern which
you may observe.

 The statement, or set of statements, after


the word then represents some conclusion
that you can draw, or some action that you
should take.
RULE-BASED REASONING: RULES
 A rule-based system, therefore, either
identifies a pattern and draws
conclusions about what it means,
or
identifies a pattern and advises
what should be done about it,
or
identifies a pattern and takes
appropriate action.
RULE-BASED REASONING: RULES
 The essence of a rule-based reasoning system is that it goes
through a series of cycles.
 In each cycle, it attempts to pick an appropriate rule from its
collection of rules, depending on the present circumstances,
and to use it as described above.
 Because using a rule produces new information, it's possible
for each new cycle to take the reasoning process further than
the cycle before. This is rather like a human following a chain
of ideas in order to come to a conclusion.
TERMINOLOGY
 A rule as described above is often referred to
as a production rule.

 A set of production rules, together with


software that can reason with them, is known
as a production system.
TERMINOLOGY
 There are several different terms for the statements that come after the word if, and those that come after the word then.
 The statements after if may be called the conditions, those
after then may be called the conclusions.
 The statements after if may be called the premises, those
after then may be called the actions.
 The statements after if may be called the antecedents, those
after then may be called the consequents.
TERMINOLOGY

 Some writers just talk about the if-part and the then-part.
TERMINOLOGY
 If a production system chooses a particular
rule, because the conditions match the
current state of affairs, and puts the
conclusions into effect, this is known as firing
the rule.
TERMINOLOGY
 In a production system, the rules are stored
together, in an area called the rulebase.
HISTORICAL NOTE
 Mathematicians, linguists, psychologists and
artificial intelligence specialists explored the
possibilities of production rules during the
40s, 50s and 60s.
 When the first expert systems were invented

in the 70s, it seemed natural to use


production rules as the knowledge
representation formalism for the knowledge
base.
HISTORICAL NOTE
 Production rules have remained the most
popular form of knowledge representation for
expert systems ever since.
CONDITIONAL BRANCHING
 Is a production rule the same as a conditional
branching statement?

 A production rule looks similar to the


if (statement to be evaluated) then (action)
pattern which is a familiar feature of all
conventional programming languages.
CONDITIONAL BRANCHING
 e.g. The following fragment from a C
program:
CONDITIONAL BRANCHING
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int magic;
    int guess;
    magic = rand();                      /* the "magic" number to guess */
    printf("guess the magic number: ");
    scanf("%d", &guess);
    if (guess == magic) printf("** Right **");
    else {
        printf("Wrong, ");
        if (guess > magic) printf("too high");
        else printf("too low");
    }
    return 0;
}
CONDITIONAL BRANCHING VS.
PRODUCTION RULES
 However, the similarity is misleading. There
is a radical difference between a production
system and a piece of conventional software.
In a conventional program, the
if...then... structure is an
integral part of the code, and
represents a point where the
execution can branch in one of
two (or more) directions.
CONDITIONAL BRANCHING VS.
PRODUCTION RULES

In a production system, the


if...then... rules are gathered
together in a rule base, and the
controlling part of the system
has some way of choosing a
rule from this knowledge base
which is appropriate to the
current circumstances, and then
using it.
REASONING WITH
PRODUCTION RULES
 The statements forming the conditions, or
the conclusions, in such rules, may be
structures, following some syntactic
convention (such as three items enclosed in
brackets).
REASONING WITH
PRODUCTION RULES
 Very often, these structures will include
variables - such variables can, of course, be
given a particular value, and variables with
the same name in the same rule will share
the same value.
REASONING WITH
PRODUCTION RULES
 For example (assuming words beginning with
capital letters are variables, and other words
are constants):
if [Person, age, Number] &
   [Person, employment, none] &
   [Number, greater_than, 18] &
   [Number, less_than, 65]
then [Person, can_claim, unemployment_benefit].
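A minimal sketch of how a rule like this could be matched against working memory is shown below. The triple encoding, the "capitalised names are variables" convention and the special handling of greater_than / less_than are assumptions made for this illustration, not features of any particular production-system language.

def is_var(term):
    """By convention in this sketch, a string starting with a capital letter is a variable."""
    return isinstance(term, str) and term[:1].isupper()

def match(pattern, fact, bindings):
    """Try to extend bindings so that pattern equals fact; return None on failure."""
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if is_var(p):
            if p in b and b[p] != f:
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def satisfied(conditions, working_memory):
    """Return every set of variable bindings under which all conditions hold."""
    results = [{}]
    for cond in conditions:
        new_results = []
        for b in results:
            subj, rel, obj = (b.get(t, t) for t in cond)   # substitute known bindings
            if rel == "greater_than":                      # built-in comparisons
                if subj > obj:
                    new_results.append(b)
            elif rel == "less_than":
                if subj < obj:
                    new_results.append(b)
            else:                                          # match against working memory
                for fact in working_memory:
                    nb = match((subj, rel, obj), fact, b)
                    if nb is not None:
                        new_results.append(nb)
        results = new_results
    return results

rule_if = [("Person", "age", "Number"),
           ("Person", "employment", "none"),
           ("Number", "greater_than", 18),
           ("Number", "less_than", 65)]
rule_then = ("Person", "can_claim", "unemployment_benefit")

wm = {("john", "age", 29), ("john", "employment", "none")}
for b in satisfied(rule_if, wm):
    print("fire:", tuple(b.get(t, t) for t in rule_then))
    # -> fire: ('john', 'can_claim', 'unemployment_benefit')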
REASONING WITH PRODUCTION RULES
 Architecture of a typical production system:

[Figure: observed data / new information flows into working memory; the inference engine (interpreter) selects a rule from rule memory, fires it, executes its actions, modifies working memory and produces output.]
ARCHITECTURE OF A TYPICAL PRODUCTION SYSTEM
 Has a working memory. It holds items of data; their presence, or their absence, causes the inference engine to trigger certain rules.
 e.g. W.M. contains [john, age, 29] & [john, employment, none]
 The system decides: does this match any rules in the rulebase? If so, choose the rule.
ARCHITECTURE OF A TYPICAL PRODUCTION SYSTEM
 Has an inference engine. Behaviour of the inference engine:
 the system is started by putting a suitable data item into working memory.
 recognise-act cycle: when data in the working memory matches the conditions of one of the rules in the system, the rule fires (i.e. is brought into action).
ADVANTAGES OF PRODUCTION SYSTEMS ... AT FIRST GLANCE
 The principal advantage of production rules is notational convenience - it's easy to express suitable pieces of knowledge in this way.
 The principal disadvantage of production rules is their restricted power of expression - many useful pieces of knowledge don't fit this pattern.
ADVANTAGES OF PRODUCTION
SYSTEMS ... AT FIRST GLANCE
 This would seem to be a purely declarative form of
knowledge representation. One gathers pieces of
knowledge about a particular subject, and puts them
into a rulebase. One doesn't bother about when or
how or in which sequence the rules are used; the
production system can deal with that.
 When one wishes to expand the knowledge, one just

adds more rules at the end of the rulebase.


ADVANTAGES OF PRODUCTION
SYSTEMS ... AT FIRST GLANCE
 The rules themselves are very easy to understand,
and for someone (who is expert in the specific
subject the system is concerned with) to criticise
and improve.
ADVANTAGES OF PRODUCTION
SYSTEMS ... AT FIRST GLANCE
 It's fairly straightforward to implement a production
system interpreter. Following the development of
the Rete Matching Algorithm, and other
improvements, quite efficient interpreters are now
available.
ADVANTAGES OF PRODUCTION
SYSTEMS ... AT FIRST GLANCE

 However, it isn't that simple. See


"advantages reconsidered" later on.
OPERATION OF A PRODUCTION SYSTEM IN MORE DETAIL
 The recognise-act cycle (forward-chaining):
1. Put the word "start" in working memory and set the cycle going.
2. Pick rules eligible to fire on the basis of what is in working memory. If no rules are eligible, halt.
3. Use a conflict resolution strategy to cut the eligible rules down to one rule.
4. Put the right-hand side of that rule into effect, using information from working memory (and from the user and other information sources and recipients), and produce any output.
5. If the rule has the command "halt" at the end, halt; otherwise repeat from step 2.
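A small sketch of this cycle in code is given below. The rule format (a set of conditions, a set of conclusions), the "first eligible rule" conflict-resolution strategy and the example rules themselves are simplifying assumptions made for illustration.

def recognise_act(rules, facts):
    wm = set(facts) | {"start"}                         # put "start" in working memory
    fired = []
    while "halt" not in wm:
        eligible = [r for r in rules
                    if r[0] <= wm and not r[1] <= wm]   # rules whose if-part holds
        if not eligible:                                # no rule can fire: stop
            break
        conditions, conclusions = eligible[0]           # trivial conflict resolution
        wm |= conclusions                               # fire: put the then-part into effect
        fired.append(conclusions)
    return wm, fired

rules = [
    ({"leaves dry", "leaves brittle", "leaves discoloured"},
     {"red spider mite attack", "halt"}),
    ({"start"}, {"leaves dry", "leaves brittle", "leaves discoloured"}),
]
wm, fired = recognise_act(rules, set())
print(fired)        # the observation rule fires first, then the diagnosis rule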
GOAL-STACK PLANNING
EXAMPLE (SUSSMAN'S ANOMALY)

[Figure: start state - C on top of A, with B on the table; goal state - A on B on C.]

Start state: ON(C, A) ∧ ONTABLE(A) ∧ ONTABLE(B) ∧ CLEAR(B) ∧ CLEAR(C)
Goal state: ON(A, B) ∧ ON(B, C) ∧ ONTABLE(C)
GOAL-STACK PLANNING ALGORITHM
π := []                         // plan is initially empty
C := start state                // C is the current state
Push the goal state on the stack
Push all its subgoals on the stack (in any order)
Repeat until the stack is empty:
    X := pop the top of the stack
    IF X is a compound goal THEN
        push its subgoals which are unsatisfied in C on the stack
    ELSE IF X is a single goal FALSE in C THEN
        push an action Q that satisfies X
        push all preconditions of Q
    ELSE IF X is an action THEN
        execute X in the current state C, compute the new current state C using the action's effects, and add X to the plan π
    ELSE IF X is a goal which is TRUE in the current state C THEN do nothing
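Below is a minimal Python sketch of this algorithm for the blocks world, written for this example rather than taken from any textbook implementation. The operator encoding (precondition / add / delete sets) and the "pick the operator with the fewest unsatisfied preconditions" selection rule are assumptions; they are enough to reproduce the hand trace that follows, but this is not a general-purpose planner.

def ops(blocks):
    """Yield (name, preconditions, add-list, delete-list) for every blocks-world operator."""
    for x in blocks:
        yield (("PICKUP", x),
               {("ONTABLE", x), ("CLEAR", x), ("ARMEMPTY",)},
               {("HOLDING", x)},
               {("ONTABLE", x), ("CLEAR", x), ("ARMEMPTY",)})
        yield (("PUTDOWN", x),
               {("HOLDING", x)},
               {("ONTABLE", x), ("CLEAR", x), ("ARMEMPTY",)},
               {("HOLDING", x)})
        for y in blocks:
            if x == y:
                continue
            yield (("STACK", x, y),
                   {("CLEAR", y), ("HOLDING", x)},
                   {("ON", x, y), ("CLEAR", x), ("ARMEMPTY",)},
                   {("CLEAR", y), ("HOLDING", x)})
            yield (("UNSTACK", x, y),
                   {("ON", x, y), ("CLEAR", x), ("ARMEMPTY",)},
                   {("HOLDING", x), ("CLEAR", y)},
                   {("ON", x, y), ("CLEAR", x), ("ARMEMPTY",)})

def goal_stack_plan(start, goals, operators):
    state, plan = set(start), []
    stack = [("AND",) + tuple(goals)]          # the compound goal at the bottom
    stack.extend(reversed(goals))              # subgoals on top, goals[0] popped first
    while stack:
        x = stack.pop()
        if x[0] == "AND":                      # compound goal
            unsatisfied = [g for g in x[1:] if g not in state]
            if unsatisfied:                    # re-check it after achieving the parts
                stack.append(x)
                stack.extend(unsatisfied)
        elif x[0] == "DO":                     # an action: execute it and record it
            _, name, add, dele = x
            state = (state - dele) | add
            plan.append(name)
        elif x in state:                       # single goal already true: do nothing
            continue
        else:                                  # single goal, false: choose an action
            candidates = [op for op in operators if x in op[2]]
            name, pre, add, dele = min(
                candidates, key=lambda op: len(op[1] - state))
            stack.append(("DO", name, add, dele))
            stack.append(("AND",) + tuple(pre))
            stack.extend(pre)
    return plan

start = {("ON", "C", "A"), ("ONTABLE", "A"), ("ONTABLE", "B"),
         ("CLEAR", "B"), ("CLEAR", "C"), ("ARMEMPTY",)}
goal = [("ON", "A", "B"), ("ON", "B", "C"), ("ONTABLE", "C")]
print(goal_stack_plan(start, goal, list(ops("ABC"))))

For the start and goal states above this sketch produces the same ten-step plan as the hand trace that follows.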
Trace of the algorithm on Sussman's anomaly (the stack is listed top first; C is the current state and π the plan built so far):

1. Initially the stack holds the goal ON(A, B) ∧ ON(B, C) ∧ ONTABLE(C), with its unsatisfied subgoals ON(B, C) and ON(A, B) pushed above it, ON(A, B) on top. π := [].
2. X := ON(A, B): a single goal, FALSE in C. ON(A, B) is in the effects of the action STACK(A, B), so push STACK(A, B), its precondition CLEAR(B) ∧ HOLDING(A), and the subgoals CLEAR(B) and HOLDING(A).
3. X := HOLDING(A): FALSE in C. HOLDING(A) is in the effects of PICKUP(A) (Q could be an UNSTACK action as well), so push PICKUP(A), its precondition ARMEMPTY ∧ CLEAR(A) ∧ ONTABLE(A), and the subgoals ARMEMPTY, CLEAR(A) and ONTABLE(A).
4. X := ONTABLE(A): TRUE in C, so do nothing.
5. X := CLEAR(A): FALSE in C (C is on A). CLEAR(A) is in the effects of UNSTACK(C, A), so push UNSTACK(C, A), its precondition ARMEMPTY ∧ ON(C, A) ∧ CLEAR(C), and its subgoals.
6. X := CLEAR(C), ON(C, A), ARMEMPTY and then the compound precondition: all TRUE in C, so do nothing.
7. X := UNSTACK(C, A): an action. Execute it in C, compute the new current state and add it to the plan. π := [UNSTACK(C, A)]. The arm now holds C and A is clear.
8. X := ARMEMPTY (a precondition of PICKUP(A)): FALSE in C, since we are holding C. ARMEMPTY is in the effects of PUTDOWN(C), so push PUTDOWN(C) and its precondition HOLDING(C). HOLDING(C) is TRUE; PUTDOWN(C) is executed. π := [UNSTACK(C, A), PUTDOWN(C)].
9. The compound precondition of PICKUP(A) is now TRUE, so PICKUP(A) is executed. π := [UNSTACK(C, A), PUTDOWN(C), PICKUP(A)].
10. X := CLEAR(B) and then CLEAR(B) ∧ HOLDING(A): TRUE in C, so do nothing. X := STACK(A, B): an action; execute it. π := [UNSTACK(C, A), PUTDOWN(C), PICKUP(A), STACK(A, B)]. ON(A, B) has been achieved.
11. X := ON(B, C): FALSE in C. Achieving ON(B, C) is very similar to achieving ON(A, B), which we have already done: B must be picked up, but A is now on B, so A has to be unstacked and put down first. After this sub-plan the plan is
    π := [UNSTACK(C, A), PUTDOWN(C), PICKUP(A), STACK(A, B), UNSTACK(A, B), PUTDOWN(A), PICKUP(B), STACK(B, C)]
    and in the current state B is on C and A is on the table.
12. X := ON(A, B) ∧ ON(B, C) ∧ ONTABLE(C): the subgoal ON(A, B) is FALSE in this state (it was undone in step 11), so it is pushed back on the stack, above the compound goal.
13. X := ON(A, B): FALSE in C, so push STACK(A, B), its precondition HOLDING(A) ∧ CLEAR(B) and the subgoals. CLEAR(B) is TRUE; HOLDING(A) is FALSE, so PICKUP(A) and its (already satisfied) preconditions are pushed and PICKUP(A) is executed, followed by STACK(A, B).
14. X := ON(A, B) ∧ ON(B, C) ∧ ONTABLE(C): TRUE in C. The stack is empty, so STOP.

Final plan:
π := [UNSTACK(C, A), PUTDOWN(C), PICKUP(A), STACK(A, B), UNSTACK(A, B), PUTDOWN(A), PICKUP(B), STACK(B, C), PICKUP(A), STACK(A, B)]
• Note that the resultant plan is inefficient!
• ON(A, B) was first done and then undone.
• Could we fix it? For example, suppose in the first step
we pushed the subgoals so that ON(B, C) was on the
top of stack (instead of ON(A, B)).
– HOMEWORK: Check what happens!
• Sussman proved that the Goal-Stack Planning algorithm
does not always produce the most efficient plan.
– This is because Goal-Stack Planning is a linear planning
algorithm: given a conjunction of goals, it achieves the goals
in a linear sequential manner.
– Achieving one goal (ON(A, B)) destroys the preconditions of
an action to achieve the second goal (CLEAR(B) to achieve
ON(B, C)). So we have to redo the first goal.
CONSTRAINT SATISFACTION PROBLEM
CONSTRAINT SATISFACTION PROBLEMS (CSPS)
 Standard search problem:
 state is a "black box" - any data structure that supports the successor function, heuristic function, and goal test
 CSP:
 state is defined by variables Xi with values from domain Di
 a set of constraints Ci specifies allowable combinations of values for subsets of variables
 a consistent state violates none of the constraints C
 a complete assignment has values assigned to all variables
 A solution is a complete, consistent assignment.
 Simple example of a formal representation language
 Allows useful general-purpose algorithms with more power than standard search algorithms
Based on Hwee Tou Ng, aima.eecs.berkeley.edu/slides-ppt, and CSC 8520 Spring 2010, Paula Matuszek, which are based on Russell, aima.eecs.berkeley.edu/slides-pdf.
EXAMPLE: MAP-COLORING
 Variables: WA, NT, Q, NSW, V, SA, T
 Domains: Di = {red, green, blue}
 Constraints: adjacent regions must have different colors
 e.g., WA ≠ NT, or (WA, NT) in {(red,green), (red,blue), (green,red), (green,blue), (blue,red), (blue,green)}
EXAMPLE: MAP-COLORING
 Solutions are complete and consistent assignments, e.g., WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = green
CONSTRAINT GRAPH
 Binary CSP: each constraint relates two variables
 Constraint graph: nodes are variables, arcs are constraints
VARIETIES OF CSPS
 Discrete variables
 finite domains:
 n variables, domain size d → O(d^n) complete assignments
 e.g., Boolean CSPs, incl. Boolean satisfiability (NP-complete)
 infinite domains:
 integers, strings, etc.
 e.g., job scheduling; variables are start/end days for each job
 need a constraint language, e.g., StartJob1 + 5 ≤ StartJob3
 Continuous variables
 e.g., start/end times for Hubble Space Telescope observations
 linear constraints solvable in polynomial time by linear programming algorithms from operations research
VARIETIES OF CONSTRAINTS
 Unary constraints involve a single variable,
 e.g., SA ≠ green
 Binary constraints involve pairs of variables,
 e.g., SA ≠ WA
 Higher-order constraints involve 3 or more variables,
 e.g., cryptarithmetic column constraints
 Global constraints involve an arbitrary number of variables, not necessarily all the variables in a problem
 e.g., AllDiff: all values must be different. Sudoku rows, columns and squares
EXAMPLE: CRYPTARITHMETIC
 Puzzle: TWO + TWO = FOUR
 Variables: F T U W R O X1 X2 X3
 Domains: {0,1,2,3,4,5,6,7,8,9}
 Constraints: Alldiff(F, T, U, W, R, O)
 O + O = R + 10 · X1
 X1 + W + W = U + 10 · X2
 X2 + T + T = O + 10 · X3
 X3 = F, T ≠ 0, F ≠ 0
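A brute-force check of these constraints is shown below, purely for illustration: the carry variables X1-X3 are handled implicitly by the arithmetic, and a real CSP solver would propagate constraints rather than enumerate assignments.

from itertools import permutations

solutions = []
for F, T, U, W, R, O in permutations(range(10), 6):   # Alldiff(F, T, U, W, R, O)
    if T == 0 or F == 0:                               # no leading zeros
        continue
    two = 100 * T + 10 * W + O
    four = 1000 * F + 100 * O + 10 * U + R
    if two + two == four:
        solutions.append((T, W, O, F, U, R))

print(solutions)             # the assignment for 734 + 734 = 1468 is among them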
REAL-WORLD CSPS
 Common problems:
 Assignment problems
 e.g., who teaches what class
 Timetabling problems
 e.g., which class is offered when and
where?
 Transportation scheduling
 Factory scheduling
 Notice that many real-world problems
involve real-valued variables
 May also include preference constraints:
constraint optimization
STANDARD SEARCH FORMULATION (INCREMENTAL)
Let's start with the straightforward approach, then fix it:
States are defined by the values assigned so far
 Initial state: the empty assignment { }
 Successor function: assign a value to an unassigned variable that does not conflict with the current assignment
 fail if no legal assignments exist
 Goal test: the current assignment is complete
1. This is the same for all CSPs
2. Every solution appears at depth n with n variables
 use depth-first search
3. Path is irrelevant, so can also use a complete-state formulation
4. b = (n − l)·d at depth l, hence n! · d^n leaves
BACKTRACKING SEARCH
 Variable assignments are commutative, i.e., [WA = red then NT = green] is the same as [NT = green then WA = red]
 Only need to consider assignments to a single variable at each node
 b = d and there are d^n leaves
 Depth-first search for CSPs with single-variable assignments is called backtracking search
 Backtracking search is the basic uninformed algorithm for CSPs
 Can solve n-queens for n ≈ 25
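A minimal backtracking search for the map-colouring CSP above is sketched below. The adjacency list encodes the constraint graph shown earlier; the plain "first unassigned variable, values in a fixed order" strategy is a deliberate simplification (the variable- and value-ordering heuristics discussed next improve on it).

neighbours = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
colours = ["red", "green", "blue"]

def consistent(var, value, assignment):
    """A value is allowed if no already-assigned neighbour has the same colour."""
    return all(assignment.get(n) != value for n in neighbours[var])

def backtrack(assignment):
    if len(assignment) == len(neighbours):          # complete assignment: a solution
        return assignment
    var = next(v for v in neighbours if v not in assignment)   # pick a variable
    for value in colours:                           # try each value in turn
        if consistent(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]                     # undo the assignment and backtrack
    return None                                     # no value worked: fail

print(backtrack({}))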
BACKTRACKING SEARCH
BACKTRACKING EXAMPLE
[Figure: backtracking search assigning the map-colouring variables one at a time.]
IMPROVING BACKTRACKING
EFFICIENCY
 General-purpose methods can give huge
gains in speed:
 Which variable should be assigned next?
 In what order should its values be tried?
 Can we detect inevitable failure early?
MOST CONSTRAINED VARIABLE
 Most constrained variable:
choose the variable with the fewest legal values

 a.k.a. minimum remaining values (MRV)


heuristic
MOST CONSTRAINING VARIABLE
 Tie-breaker among most constrained
variables
 Most constraining variable:
 choosethe variable with the most constraints
on remaining variables
LEAST CONSTRAINING VALUE
 Given a variable, choose the least constraining
value:
 theone that rules out the fewest values in the
remaining variables

 Combining these heuristics makes 1000


queens feasible
N-QUEENS PROBLEM
 Find a configuration of n queens not
attacking each other
 What is the maximum number of queens that
can be placed on an n x n chessboard such
that no two attack one another?
 The answer is n queens, which gives eight

queens for the usual 8x8 board


12 UNIQUE SOLUTIONS
8-QUEENS PROBLEM
8-QUEENS PROBLEM
 There are 92 distinct solutions
 There are 12 unique solutions, discounting symmetrical answers (rotations/reflections)

 How many solutions for 4-Queens? N-


Queens?

Wikipedia.org
PROBLEMS N < 4
 For N < 4 the puzzle cannot be solved (apart from the trivial N = 1 case): there is no arrangement of 2 or 3 non-attacking queens.
 What is the minimum number of queens needed to occupy or attack all squares of an 8x8 board?
SEARCH
 Solving problems by searching
 Some problems have a straightforward solution
 Just apply the formula, or follow a standardized procedure
 Example: solution of the quadratic equation
 Hardly a sign of intelligence

 More interesting problems require search:


 more than one possible alternative needs to be explored
before the problem is solved
 the number of alternatives to search among can be very
large, even infinite.
SEARCH PROBLEMS
A search problem is defined by:

 Search space:
– The set of objects among which we search for the
solution
Example: objects = routes between cities, or N-queen
configurations

 Goal condition
– What are the characteristics of the object we want
to find in the search space?
– Examples:
 Path between cities A and B
 Path between A and B with the smallest number of links
 Path between A and B with the shortest distance
 Non-attacking n-queen configuration
N-QUEENS PROBLEM
Problem:
-- How do we represent the search space?
N-QUEENS PROBLEM
Problem:
-- How do we represent the search
space?

N-Queens
– We look for a target configuration,
not a sequence of moves
– No distinguished initial state, no
operators (moves)
-- Don’t use a graph
A SOLUTION: KNOW YOUR DOMAIN
 Since there are N columns and N queens, there must be exactly one queen in each column (why?).
 So put a queen in each column; we only need to pick a row for each queen.
 Randomize the rows at the beginning, and then, in each iteration, move the one queen that reduces the number of threatened queens the most.
 We do this in each iteration until the number of threatened queens reaches 0, or until we reach a situation where we cannot improve any further because we had a bad start - in that case we start over, randomizing the locations again and repeating the whole process, until the queens reach a safe configuration.
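In code, this iterative-repair idea is essentially the min-conflicts heuristic. The sketch below is an illustration under the usual simplifying assumptions (one queen per column, a randomly chosen conflicted queen is repaired each step, and a restart after an arbitrary step limit), not a tuned implementation.

import random

def conflicts(rows, col, row):
    """Number of queens attacking a queen placed at (row, col); rows[c] is the row of column c."""
    return sum(1 for c, r in enumerate(rows)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts_queens(n, max_steps=10_000):
    while True:                                   # restart on a bad start
        rows = [random.randrange(n) for _ in range(n)]
        for _ in range(max_steps):
            conflicted = [c for c in range(n) if conflicts(rows, c, rows[c]) > 0]
            if not conflicted:
                return rows                       # no queen is threatened: done
            col = random.choice(conflicted)
            # move this queen to the row with the fewest conflicts in its column
            rows[col] = min(range(n), key=lambda r: conflicts(rows, col, r))

print(min_conflicts_queens(8))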
SOLUTION: BACKTRACKING
 Place a queen in the top row, then note the column and diagonals it occupies.
 Then place a queen in the next row down, taking care not to place it in the same column or diagonal. Keep track of the occupied columns and diagonals and move on to the next row.
 If no position is open in the next row, we backtrack to the previous row, move that queen over to the next available spot in its row, and the process starts over again.
 Demo: http://www.math.utah.edu/~alfeld/queens/queens.html
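A compact sketch of this row-by-row backtracking, keeping the occupied columns and diagonals in sets as just described (the data structures and variable names are illustrative choices):

def queens(n):
    cols, diag1, diag2 = set(), set(), set()       # occupied columns / diagonals
    placement = []                                  # placement[row] = chosen column

    def place(row):
        if row == n:                                # all rows filled: a solution
            return True
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                            # square is attacked: try the next column
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            placement.append(col)
            if place(row + 1):
                return True
            cols.discard(col); diag1.discard(row - col); diag2.discard(row + col)
            placement.pop()                         # backtrack to the previous row
        return False                                # no column works in this row

    return placement if place(0) else None

print(queens(8))    # column index of the queen in each row of one solution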
