Planning in AI
The complexity of classical
planning
• PlanSAT is the question of whether there exists
any plan that solves a planning problem.
• Bounded PlanSAT asks whether there is a
solution of length k or less; by iterating over k,
this can be used to find an optimal plan.
• Both PlanSAT and Bounded PlanSAT are in the
complexity class PSPACE,
• a class that is larger (and hence more difficult)
than NP and contains problems that can be solved
by a deterministic Turing machine with a
polynomial amount of space.
Algorithms For Planning As
State-space Search
• State space consists of the following:
• the initial state,
• set of goal states,
• set of actions or operations,
• set of states and
• the path cost.
• This state space needs to be searched to find a sequence of actions
leading to the goal state.
• This can be done in the forward or
backward direction.
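As a sketch of searching such a state space in the forward direction, the following Python models states as sets of ground facts and actions in STRIPS style (preconditions, add list, delete list). The class and action names are illustrative assumptions, not from any planning library.

```python
from collections import deque

class Action:
    """A STRIPS-style action: applicable when its preconditions hold;
    applying it removes the delete-list and adds the add-list."""
    def __init__(self, name, precond, add, delete):
        self.name = name
        self.precond, self.add, self.delete = map(frozenset, (precond, add, delete))

    def applicable(self, state):
        return self.precond <= state

    def apply(self, state):
        return (state - self.delete) | self.add

def forward_search(initial, goal, actions):
    """Breadth-first progression search from the initial state to a goal state."""
    goal = frozenset(goal)
    start = frozenset(initial)
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan  # sequence of action names
        for a in actions:
            if a.applicable(state):
                nxt = a.apply(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [a.name]))
    return None  # no plan exists

# Toy domain in the spirit of the shoes example used later in these notes.
acts = [Action("RightSock", ["Barefoot"], ["RightSockOn"], []),
        Action("RightShoe", ["RightSockOn"], ["RightShoeOn"], []),
        Action("LeftSock", ["Barefoot"], ["LeftSockOn"], []),
        Action("LeftShoe", ["LeftSockOn"], ["LeftShoeOn"], [])]
plan = forward_search({"Barefoot"}, {"RightShoeOn", "LeftShoeOn"}, acts)
# A shortest plan has four steps, each sock put on before its shoe.
```

Because breadth-first search explores plans in order of length, the first plan found is a shortest one, echoing the Bounded PlanSAT idea above.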
Forward State Space Planning
(FSSP)
• FSSP behaves in the same way as forward state-space search.
• Given an initial state S in any domain, we perform the necessary
actions and obtain a new state S' (which also contains some new
terms), called a progression.
• This continues until we reach the goal state.
• An action is chosen only if it is applicable in the current state.
• Disadvantage: large branching factor
• Advantage: the algorithm is sound
Problem formulation:
• Initial state: the start state.
• Actions: each action has a precondition that must be satisfied
before the action can be performed, and an effect that the action
will have on the environment.
• Goal test: check whether the current state is the goal state.
• Step cost: the cost of each step, assumed to be 1.
Air cargo problem
Backward State Space Planning
(BSSP)
• BSSP behaves similarly to backward state-space search.
• In this, we move from the goal state g back to a sub-goal g', tracing the action needed to achieve that
goal.
• This process is called regression (going back to the previous goal or sub-goal).
• These sub-goals must also be checked for consistency.
• An action is chosen only if it is relevant to the current goal.
• Disadvantage: not a sound algorithm (inconsistent states can sometimes be generated)
• Advantage: small branching factor (much smaller than FSSP)
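The regression step itself can be written down compactly. In the sketch below (the `Act` class and shoe-style literals are illustrative assumptions, not a standard API), an action is relevant to a goal if it achieves at least one goal literal and deletes none; regression then replaces the achieved effects with the action's preconditions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Act:
    name: str
    precond: frozenset
    add: frozenset
    delete: frozenset

def regress(goal, a):
    """Regress goal set `goal` through action `a`; returns the predecessor
    sub-goal, or None when `a` is irrelevant (achieves no goal literal)
    or inconsistent (deletes a goal literal)."""
    if not (a.add & goal) or (a.delete & goal):
        return None
    return (goal - a.add) | a.precond

right_shoe = Act("RightShoe", frozenset({"RightSockOn"}),
                 frozenset({"RightShoeOn"}), frozenset())
left_sock = Act("LeftSock", frozenset({"Barefoot"}),
                frozenset({"LeftSockOn"}), frozenset())
goal = frozenset({"RightShoeOn", "LeftShoeOn"})
sub_goal = regress(goal, right_shoe)   # {"LeftShoeOn", "RightSockOn"}
irrelevant = regress(goal, left_sock)  # None: achieves no goal literal
```

Only relevant actions generate predecessors, which is why the backward branching factor stays small.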
Total Order Planning
• FSSP and BSSP are examples of total order planning (TOP).
• They explore only linear sequences of actions from the start state to the goal state.
• Hence they cannot take advantage of problem decomposition,
• i.e. splitting the problem into smaller sub-problems and solving them
individually.
Partial Order Planning (POP)
• POP works on problem decomposition.
• It divides the problem into parts and achieves these sub-goals
independently.
• It solves the sub-problems with sub-plans, then combines these
sub-plans and reorders them based on the requirements.
• In POP, the ordering of the actions is partial.
• The plan does not specify which of two unordered actions
comes first.
POP
• Partially ordered collection of steps with
• Start step has the initial state description as its effect
• Finish step has the goal description as its precondition
• causal links from outcome of one step to precondition of another
• temporal ordering between pairs of steps
• Open condition = precondition of a step not yet causally linked
• A plan is complete iff every precondition is achieved
• A precondition is achieved iff it is the effect of an earlier step and no
possibly intervening step undoes it
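A partial plan can be represented directly from these ingredients. The dictionary layout below is an illustrative sketch, not a standard representation; the helper computes the open conditions, so a plan is complete exactly when this set is empty.

```python
# Steps, causal links (producer, condition, consumer), and temporal orderings.
plan = {
    "steps": {"Start", "RightSock", "RightShoe", "Finish"},
    "links": {("Start", "Barefoot", "RightSock"),
              ("RightSock", "RightSockOn", "RightShoe")},
    "order": {("Start", "RightSock"), ("RightSock", "RightShoe"),
              ("RightShoe", "Finish")},
}

# Preconditions of each step (Finish carries the goal description).
preconds = {"RightSock": {"Barefoot"},
            "RightShoe": {"RightSockOn"},
            "Finish": {"RightShoeOn", "LeftShoeOn"}}

def open_conditions(plan, preconds):
    """Preconditions of steps not yet supported by a causal link."""
    supported = {(cond, consumer) for (_, cond, consumer) in plan["links"]}
    return {(c, s) for s in plan["steps"]
            for c in preconds.get(s, ())
            if (c, s) not in supported}

# Both shoe literals are still open at Finish, so this plan is incomplete.
open_now = open_conditions(plan, preconds)
```

The planner's job is to drive this set to empty by adding links and steps, as described next.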
Planning Process
• Operators on partial plans:
• add a link from an existing action to an open condition
• add a step to fulfill an open condition
• order one step wrt another to remove possible conflicts
• Gradually move from incomplete/vague plans to complete, correct
plans
• Backtrack if an open condition is unachievable or if a conflict is
unresolvable
Example: The problem of wearing shoes can be
performed through total order or partial order
planning.
• Init: Barefoot
• Goal: RightShoeOn ∧ LeftShoeOn
• Actions:
1. RightShoe
Precondition: RightSockOn
Effect: RightShoeOn
2. LeftShoe
Precondition: LeftSockOn
Effect: LeftShoeOn
3. LeftSock
Precondition: Barefoot
Effect: LeftSockOn
4. RightSock
Precondition: Barefoot
Effect: RightSockOn
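One partial-order plan for this problem orders each sock before its shoe and nothing else. Enumerating the total orders consistent with those constraints (a small brute-force sketch with illustrative action names) shows that the single partial-order plan stands for six total-order plans.

```python
from itertools import permutations

actions = ["RightSock", "RightShoe", "LeftSock", "LeftShoe"]
before = [("RightSock", "RightShoe"),  # each sock must precede its shoe
          ("LeftSock", "LeftShoe")]

plans = [seq for seq in permutations(actions)
         if all(seq.index(a) < seq.index(b) for a, b in before)]
# 4! = 24 orderings; each independent constraint keeps half: 24 / 4 = 6.
```

A total-order planner would have to consider these six linearizations separately; POP keeps them as one plan.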
Planning Graphs
• Planning graphs play a vital role in AI planning by visually
representing possible states and actions that aid in decision-making.
• A Planning Graph is a data structure primarily used in automated
planning and artificial intelligence to find solutions to planning
problems.
• It represents a planning problem's progression through a series of
levels that describe states of the world and the actions that can be
taken.
A planning graph is a directed graph organized into levels: first a level S0 for
the initial state, consisting of nodes representing each fact that holds in S0;
then a level A0 consisting of nodes for each ground action that might be
applicable in S0; then alternating levels Si followed by Ai, until we reach a
termination condition.
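The level construction can be sketched for the cake example discussed later in these notes; the string literals with `¬` for negation and the tuple encoding of actions are illustrative assumptions.

```python
# Each action is (name, preconditions, effects).
ACTIONS = [("Eat(Cake)", {"Have(Cake)"}, {"¬Have(Cake)", "Eaten(Cake)"})]

def expand(state_level):
    """One expansion step: A_i holds every applicable ground action plus a
    persistence (no-op) action per literal; S_{i+1} is the union of all
    their effects."""
    a_level = [("NoOp(%s)" % lit, {lit}, {lit}) for lit in state_level]
    a_level += [a for a in ACTIONS if a[1] <= state_level]
    next_state = set()
    for _, _, effects in a_level:
        next_state |= effects
    return a_level, next_state

S0 = {"Have(Cake)", "¬Eaten(Cake)"}
A0, S1 = expand(S0)
# A0 = two persistence actions + Eat(Cake); S1 holds all four literals.
```

Persistence actions are what make literals carry forward from one state level to the next.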
Main components and their functions
• Levels: A Planning graph has two alternating types of levels: action levels and
state levels.
• The first level is always a state level, representing the initial state of the planning
problem.
• State Levels: These levels consist of nodes representing logical propositions or
facts about the world.
• Each successive state level contains all the propositions of the previous level plus any
that can be derived by the actions of the intervening action levels.
• Action Levels: These levels contain nodes representing actions.
• An action node connects to a state level if the state contains all the preconditions
necessary for that action.
• Actions in turn can create new state conditions, influencing the subsequent state level.
Main components and their functions…
• Edges: The graph has two types of edges:
• one connecting state nodes to action nodes (indicating that the state meets
the preconditions for the action), and
• another connecting action nodes to state nodes (indicating the effects of the
action).
• Mutual Exclusion (Mutex) Relationships: At each level, certain pairs
of actions or states might be mutually exclusive, meaning they cannot
coexist or occur together due to conflicting conditions or effects.
• These mutex relationships are critical for reducing the complexity of the
planning problem by limiting the combinations of actions and states that
need to be considered.
The phrase "have cake and eat it too" refers to a common paradox or dilemma where one desires to have two mutually
exclusive things at the same time.
In a problem context, it can be understood as a situation where one wishes to achieve two goals that are inherently conflicting.
Here are a few examples of this paradox in different fields:
1. Decision-Making and Trade-offs
• In decision theory or economics, this problem often surfaces when an individual wants to optimize multiple goals that are
inherently contradictory (like maximizing profit while minimizing risk).
• Solution strategies involve finding a compromise or optimizing one goal while accepting a certain loss in another.
2. Optimization Problems
• In computer science, particularly in optimization problems, the "have cake and eat it too" problem shows up when trying to
maximize or minimize competing objectives simultaneously (e.g., time vs. resources, speed vs. accuracy).
• Techniques like multi-objective optimization, Pareto efficiency, or weighted sums can be used to handle such trade-offs.
3. Resource Allocation
• In project management, one might want to use limited resources to achieve a high-quality outcome while also finishing quickly.
Balancing quality, time, and cost, often called the Project Management Triangle, is a classic "have cake and eat it too"
scenario.
4. Ethical Dilemmas
• In ethics or philosophy, it may refer to moral situations where one desires to do what is ethically right while still benefiting
personally, even if the two are at odds.
5. Theoretical Computer Science
• In theoretical computer science, it could relate to problems involving contradictions or inherent computational limits, where
one might wish to have a perfect solution that is both efficient and exact, a problem that might be impossible under certain
theoretical constraints.
“Have cake and eat cake too” problem
The planning graph for the “have cake and eat cake too” problem
up to level S2.
Note: Not all mutex links are shown, because the graph would be too
cluttered. In general, if two literals are mutex at Si, then the persistence
actions for those literals will be mutex at Ai, and we need not draw that
mutex explicitly.
Planning Graph for the Cake Problem
Note: At state level S1, all literals are obtained by considering any subset of actions at A0. In simple
terms, state level S1 holds all possible outcomes after the actions in A0 are considered. In our
example, since we only have the Eat(Cake) action at A0, S1 will list all possible outcomes with and
without the action being taken.
Mutual Exclusion
Mutual exclusion occurs when a conflict arises between literals, indicating that the two literals
cannot occur together. These conflicts are represented by mutex links, which reveal mutually
exclusive propositions in S1.
For instance, if we eat the cake, we cannot have the cake at the same time.
Thus, Have(Cake) and Eaten(Cake) would be mutually exclusive. For example, Eat(Cake) is
mutually exclusive with the persistence of either Have(Cake) or ¬Eaten(Cake). The mutex links
define the set of states, revealing which combinations of literals are not possible together.
For example:
•If Eat(Cake) is performed, Have(Cake) and ¬Eaten(Cake) cannot be true simultaneously.
•If Eat(Cake) is not performed, ¬Have(Cake) and Eaten(Cake) cannot be true together.
Two actions at the same level are mutex if any of the following holds:
• Inconsistent Effects: one action negates an effect of the other.
• Interference: one action deletes a precondition or an add-effect of the other.
• Competing Needs: a precondition of action a and a precondition of action b
cannot be true simultaneously.
Two literals at the same level are mutex if:
• Negation of Each Other: one literal is the negation of the other.
• Achieved by Mutually Exclusive Actions: no pair of non-mutex actions can
make both literals true at the same level.
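The action-mutex tests can be written down directly; the tuple encoding of actions and the `¬` convention are illustrative assumptions. (A full implementation of Competing Needs would also consult the literal mutexes of the previous state level; this sketch only catches directly contradictory preconditions.)

```python
def negate(lit):
    return lit[1:] if lit.startswith("¬") else "¬" + lit

def actions_mutex(a, b):
    """Action-mutex tests; each action is (name, preconds, effects)."""
    _, pre_a, eff_a = a
    _, pre_b, eff_b = b
    inconsistent_effects = any(negate(e) in eff_b for e in eff_a)
    interference = (any(negate(e) in pre_b for e in eff_a) or
                    any(negate(e) in pre_a for e in eff_b))
    competing_needs = any(negate(p) in pre_b for p in pre_a)
    return inconsistent_effects or interference or competing_needs

eat = ("Eat(Cake)", {"Have(Cake)"}, {"¬Have(Cake)", "Eaten(Cake)"})
noop_have = ("NoOp(Have(Cake))", {"Have(Cake)"}, {"Have(Cake)"})
noop_not_eaten = ("NoOp(¬Eaten(Cake))", {"¬Eaten(Cake)"}, {"¬Eaten(Cake)"})
# Eat(Cake) is mutex with both persistence actions, but the two
# persistence actions are compatible with each other.
```

This matches the cake example above: Eat(Cake) conflicts with the persistence of Have(Cake) and of ¬Eaten(Cake).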
1. Expand the Graph: Alternately add action and state levels until every
goal literal appears, non-mutex, in a state level, or the graph levels off.
2. Check for Plan Existence: If the planning graph levels off before
all goals are present and non-mutex, the algorithm fails.
3. Search for a Valid Plan: The algorithm performs a backward search from
the last level to the initial state to find a sequence of actions leading
to the goals without violating mutex constraints.
The elements in the planning graph are described as increasing or decreasing monotonically:
• Literals Increase Monotonically
• Actions Increase Monotonically
• Mutexes Decrease Monotonically
Because there are only finitely many actions and literals, these properties
guarantee that the planning graph eventually levels off.
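Leveling off can be sketched by expanding literal levels to a fixed point, using the same illustrative cake encoding as before: persistence carries every literal forward, so state levels only ever grow.

```python
# Each action is (name, preconditions, effects).
ACTIONS = [("Eat(Cake)", {"Have(Cake)"}, {"¬Have(Cake)", "Eaten(Cake)"})]

def next_literals(s):
    """S_{i+1}: every literal persists, plus effects of applicable actions."""
    nxt = set(s)
    for _, pre, eff in ACTIONS:
        if pre <= s:
            nxt |= eff
    return nxt

def levels_until_fixed(s0):
    """Expand until the literal level stops growing, i.e. the graph levels off."""
    levels = [set(s0)]
    while True:
        nxt = next_literals(levels[-1])
        if nxt == levels[-1]:
            return levels
        levels.append(nxt)

levels = levels_until_fixed({"Have(Cake)", "¬Eaten(Cake)"})
# The cake graph levels off at S1, which already holds all four literals.
```

Termination of this loop is exactly the monotonicity argument above: a growing subset of a finite literal set must reach a fixed point.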