Unit 4-1
● We have explored the idea that problems can be solved by searching in a space of
states.
● We use a factored representation for each state: a set of variables, each of which has
a value.
● A problem is solved when each variable has a value that satisfies all the constraints on
the variable.
● A problem described this way is called a constraint satisfaction problem, or CSP.
DEFINING CONSTRAINT SATISFACTION PROBLEMS
Each constraint Ci consists of a pair ⟨scope, rel⟩, where scope is a tuple of variables that participate in the
constraint and rel is a relation that defines the values that those variables can take on.
A relation can be represented as an explicit list of all tuples of values that satisfy the constraint, or as an
abstract relation that supports two operations: testing if a tuple is a member of the relation and enumerating the
members of the relation.
For example, if X1 and X2 both have the domain {A, B}, then the constraint saying the two variables must have
different values can be written as ⟨(X1, X2), [(A, B), (B, A)]⟩ or as ⟨(X1, X2), X1 ≠ X2⟩.
● Explicit listing is useful when the number of valid tuples is small and known in advance (e.g.,
finite domains like {A,B}).
⟨(X, Y), [(1,2), (1,3), (2,1), (2,3), (3,1), (3,2)]⟩
● Abstract relations are more efficient for complex constraints (e.g., arithmetic constraints like
X+Y<10) where enumerating all possible tuples would be impractical.
⟨(X, Y), X ≠ Y⟩
# Enumerate the members of the relation X ≠ Y over the domain {1, 2, 3}
def members():
    for X in {1, 2, 3}:
        for Y in {1, 2, 3}:
            if X != Y:
                yield (X, Y)
Example problem: Map coloring
We are given the task of coloring each region either red, green, or blue in such a way that
no neighboring regions have the same color. To formulate this as a CSP, we define the
variables to be the regions
X = {WA, NT, Q, NSW, V, SA, T}.
Each variable has the domain {red, green, blue}, and for every pair of neighboring regions
there is a constraint that the two must receive different colors (e.g., SA ≠ WA).
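This formulation translates directly into data. A minimal sketch in Python (the adjacency below is the standard Australia map of this example):

variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}
# Each pair of neighbors carries the constraint X ≠ Y.
neighbors = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "Q": ["NT", "SA", "NSW"], "NSW": ["Q", "SA", "V"],
    "V": ["SA", "NSW"], "SA": ["WA", "NT", "Q", "NSW", "V"],
    "T": [],  # Tasmania has no neighbors
}

def consistent(var, value, assignment):
    # The only constraint type here: neighboring regions must differ.
    return all(assignment.get(n) != value for n in neighbors[var])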
Backtracking search is the foundational algorithm for solving Constraint Satisfaction Problems
(CSPs). It builds a solution one variable at a time:
1. Select an unassigned variable (using a variable-ordering heuristic, e.g., minimum remaining values).
2. Assign a value from its domain (using a value-ordering heuristic, e.g., least constraining value).
3. If the assignment violates a constraint, undo it and try another value; if no value works, backtrack to the previously assigned variable.
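A minimal recursive sketch, reusing the variables, domains, and consistent helper from the map-coloring sketch above (variable selection is plain left-to-right; an MRV heuristic would slot in at step 1):

def backtrack(assignment):
    # Every variable assigned: a solution has been found.
    if len(assignment) == len(variables):
        return assignment
    # Step 1: select an unassigned variable.
    var = next(v for v in variables if v not in assignment)
    # Step 2: try each value in its domain.
    for value in domains[var]:
        if consistent(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]   # Step 3: undo and try the next value
    return None                   # no value worked: backtrack

solution = backtrack({})   # e.g., {'WA': 'red', 'NT': 'green', ...}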
Forward Checking is an enhancement to the basic backtracking search for solving Constraint Satisfaction Problems (CSPs).
It reduces the search space by dynamically pruning the domains of unassigned variables whenever a new assignment is
made.
Problem Statement:
Color the regions (A, B, C, D) using Red (R), Green (G), Blue (B) such that no two adjacent regions have the same color.
Constraints:
● A≠B
● A≠C
● B≠C
● B≠D
● C≠D
Initial Domains:
● A: {R, G, B}
● B: {R, G, B}
● C: {R, G, B}
● D: {R, G, B}
Step-by-Step Forward Checking Execution
Step 1: Assign A = R
● Assign: A = R
● Prune neighboring domains:
○ B ≠ R → B: {G, B}
○ C ≠ R → C: {G, B}
○ D remains unchanged (not directly constrained by A).
Domains after pruning:
● A: {R}
● B: {G, B}
● C: {G, B}
● D: {R, G, B}
Step 2: Assign B = G
● Assign: B = G
● Prune neighboring domains:
○ A is already assigned (A ≠ B is already satisfied).
○ C ≠ G → C: {B}
○ D ≠ G → D: {R, B}
Domains after pruning:
● A: {R}
● B: {G}
● C: {B}
● D: {R, B}
Step 3: Assign C = B
● Assign: C = B
● Prune neighboring domains:
○ B is already assigned (B ≠ C is satisfied).
○ D ≠ B → D: {R}
Domains after pruning:
● A: {R}
● B: {G}
● C: {B}
● D: {R}
Step 4: Assign D = R
● Assign: D = R
● Check constraints:
○ B ≠ D (B=G, D=R) → Valid
○ C ≠ D (C=B, D=R) → Valid
● No pruning needed (all variables assigned).
Final Assignment:
● A=R
● B=G
● C=B
● D=R
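The pruning rule used at each step above can be sketched as a small function; the adjacency encodes the constraints A≠B, A≠C, B≠C, B≠D, C≠D from the problem statement:

adjacent = {"A": ["B", "C"], "B": ["A", "C", "D"],
            "C": ["A", "B", "D"], "D": ["B", "C"]}
domains = {r: {"R", "G", "B"} for r in "ABCD"}

def forward_check(var, value, domains, assignment):
    # Remove `value` from the domains of var's unassigned neighbors.
    pruned = []
    for n in adjacent[var]:
        if n not in assignment and value in domains[n]:
            domains[n].discard(value)
            pruned.append((n, value))   # remembered so it can be undone on backtrack
            if not domains[n]:
                return None             # dead end: a domain became empty
    return pruned

# Step 1 of the trace: assign A = R.
assignment = {"A": "R"}
domains["A"] = {"R"}
forward_check("A", "R", domains, assignment)
print(domains)   # B: {G, B}, C: {G, B}, D unchanged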
What If Forward Checking Detects a Dead End?
If pruning ever leaves an unassigned variable with an empty domain, the current partial
assignment cannot be extended to a solution, so the algorithm backtracks immediately and
tries a different value, without exploring any deeper assignments.
AC-3 on the Map-Coloring Example
Initial Setup
Running AC-3 before any assignment reduces no domains in this simple case, because all
regions still have all colors available. Now let's make an assignment and see how AC-3
propagates constraints.
Assign WA = Red:
● Remove R from neighbors NT and SA
● New domains:
○ WA: {R}
○ NT: {G, B}
○ SA: {G, B}
○ Q: {R, G, B}
Now add affected arcs back to queue:
(NT,WA), (NT,SA), (NT,Q),
(SA,WA), (SA,NT), (SA,Q),
(Q,NT), (Q,SA)
Process with WA assigned:
Revising these arcs removes no further values, since every remaining value of NT, SA, and Q
still has a consistent value in each neighboring domain. Domains after propagation:
● WA: {R}
● NT: {G, B}
● SA: {G, B}
● Q: {R, G, B}
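A compact sketch of AC-3 itself, assuming the neighbors mapping from the earlier map-coloring sketch and the single constraint X ≠ Y on every arc:

from collections import deque

def revise(domains, x, y):
    # Remove values of x with no consistent value left in y (here: x ≠ y).
    removed = {vx for vx in domains[x]
               if all(vx == vy for vy in domains[y])}
    domains[x] -= removed
    return bool(removed)

def ac3(domains, neighbors):
    queue = deque((x, y) for x in neighbors for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y):
            if not domains[x]:
                return False    # a domain emptied: inconsistency detected
            # x's domain changed, so re-check every arc pointing at x.
            queue.extend((z, x) for z in neighbors[x] if z != y)
    return True

# Reproducing the propagation above:
domains = {v: {"R", "G", "B"} for v in neighbors}
domains["WA"] = {"R"}
ac3(domains, neighbors)
print(domains["NT"], domains["SA"])   # R removed from NT and SA, as in the trace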
Planning
We saw that an agent can consider the consequences of a sequence of actions even before
acting, in order to arrive at the best first move.
Planning in its most abstract form can be seen as problem solving: the agent uses its beliefs
about its actions and their consequences to reach a solution by searching through an abstract
space of plans.
Planning is therefore one of the most useful ways an intelligent agent can exploit its
knowledge and its ability to reason about actions and their consequences. Such an agent has
both sensors and actuators: the sensors perceive the environment, and the actuators act on it.
What is Planning?
Planning is reasoning about future events in order to establish a series of actions
to accomplish a goal.
Plans are created by searching through a space of possible actions until the
sequence necessary to accomplish the task is discovered.
Planning Components
The key components of planning include:
State Representation
● Defines the current state of the world and how actions modify it.
● Can be represented using:
○ Logical representations (e.g., propositional logic, first-order logic)
○ Structured representations (e.g., STRIPS, PDDL – Planning Domain Definition
Language)
○ Numerical/Probabilistic representations (for uncertain environments)
Actions
Action: Move(A, B) The agent (e.g., a robot) moves from location A to location B.
Preconditions: At(A) ∧ Connected(A, B) The agent must currently be at location A. There must be a
direct path (e.g., a door, road) between A and B.
Effects: ¬At(A) ∧ At(B) The agent is no longer at A. The agent is now at B
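An action schema like this is naturally represented by its precondition, add, and delete lists. A minimal sketch, assuming states are sets of true atoms written as strings:

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset   # atoms that must hold before the action
    add_list: frozenset        # atoms the action makes true
    delete_list: frozenset     # atoms the action makes false

move_a_b = Action(
    name="Move(A, B)",
    preconditions=frozenset({"At(A)", "Connected(A, B)"}),
    add_list=frozenset({"At(B)"}),
    delete_list=frozenset({"At(A)"}),
)

def apply(state, action):
    assert action.preconditions <= state   # applicable only if preconditions hold
    return (state - action.delete_list) | action.add_list

print(apply(frozenset({"At(A)", "Connected(A, B)"}), move_a_b))
# frozenset({'Connected(A, B)', 'At(B)'})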
Example 1: Propositional Logic (Boolean variables)
● Problem: A robot needs to pick up a package in Room A and deliver it to
Room B.
● State Representation:
○ In(RoomA) → Robot is in Room A (True/False)
○ Holding(Package) → Robot is holding the package (True/False)
● Action: Pick(Package)
○ Precondition: In(RoomA) ∧ ¬Holding(Package)
○ Effect: Holding(Package)
Example 2: First-Order Logic (Variables, quantifiers)
● Problem: A blocks world where a robot stacks blocks.
● State Representation:
○ On(BlockA, BlockB) → BlockA is on BlockB.
○ Clear(BlockX) → Nothing is on top of BlockX.
● Action: Move(x, y, z) (Move block x from y to z)
○ Precondition: On(x, y) ∧ Clear(x) ∧ Clear(z)
○ Effect: ¬On(x, y) ∧ On(x, z) ∧ Clear(y) ∧ ¬Clear(z)
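Grounding the schema makes the state update a pure set operation. An illustrative instance, Move(A, B, C), moving block A from B onto C:

# State: A is on B, B and C are on the table, A and C are clear.
state = {"On(A, B)", "On(B, Table)", "On(C, Table)", "Clear(A)", "Clear(C)"}
preconditions = {"On(A, B)", "Clear(A)", "Clear(C)"}
assert preconditions <= state   # the action is applicable
# Apply the effect: ¬On(A, B), On(A, C), Clear(B), ¬Clear(C).
state = (state - {"On(A, B)", "Clear(C)"}) | {"On(A, C)", "Clear(B)"}
print(state)
# {'On(B, Table)', 'On(C, Table)', 'Clear(A)', 'On(A, C)', 'Clear(B)'}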
Goal
The goal is the desired state (or set of conditions) that the planner must achieve by executing a sequence of
actions. It defines what the world should look like after the plan is successfully carried out.
Possible Path:
1. State 1 → [PickUp(Key)] → State 2
2. State 2 → [Move(RoomA, RoomB)] → State 3 (Goal)
Planner (Planning Algorithm)
● The computational method used to find a valid plan.
● Common approaches:
○ Forward (Progression) Planning – Starts from the initial state and applies actions
forward.
○ Backward (Regression) Planning – Works backward from the goal state.
○ Heuristic-Based Planning – Uses estimates (heuristics) to guide search (e.g., A*, greedy
search).
○ Hierarchical Planning – Breaks problems into subgoals (e.g., HTN – Hierarchical Task
Networks).
○ Partial-Order Planning – Allows flexible ordering of actions (e.g., least commitment
planning).
○ Probabilistic Planning – For uncertain environments (e.g., MDPs, POMDPs).
Example of Forward (Progression) Planning
Goal Conditions:
1. At(Robot, RoomB)
2. Holding(Key)
Step 0: Initial State
State:
{
At(Robot, RoomA),          # Robot is in RoomA
KeyIn(RoomA),              # Key is in RoomA
¬Holding(Key),             # Robot isn't holding the key
DoorOpen(RoomA, RoomB)     # Door between rooms is open
}
Possible Actions:
1. PickUp(Key)
○ Preconditions:
■ At(Robot, RoomA) ✔
■ KeyIn(RoomA) ✔
■ ¬Holding(Key) ✔
○ Valid Action ✅
2. Move(RoomA, RoomB)
○ Preconditions:
■ At(Robot, RoomA) ✔
■ DoorOpen(RoomA, RoomB) ✔
○ Valid Action ✅
Decision Point:
The planner must choose which action to apply first.
● Option 1: Pick up the key → leads to holding the key.
● Option 2: Move to RoomB → leaves the key behind (invalid for the goal).
Choice: PickUp(Key) (only this leads to the goal).
Step 1: After PickUp(Key)
Action Applied: PickUp(Key)
Effects:
● Adds: Holding(Key)
● Removes: KeyIn(RoomA)
New State:
{
At(Robot, RoomA),
Holding(Key),              # Robot now has the key
DoorOpen(RoomA, RoomB),
¬KeyIn(RoomA)              # Key no longer in RoomA
}
Possible Actions Now:
1. Move(RoomA, RoomB)
○ Preconditions:
■ At(Robot, RoomA) ✔
■ DoorOpen(RoomA, RoomB) ✔
○ Valid Action ✅
2. Drop(Key)
○ (Hypothetical action; not useful here)
Choice: Move(RoomA, RoomB) (only productive action).
Step 2: After Move(RoomA, RoomB)
Action Applied: Move(RoomA, RoomB)
New State:
{
At(Robot, RoomB),
Holding(Key),
DoorOpen(RoomA, RoomB),
¬KeyIn(RoomA)
}
Both goal conditions, At(Robot, RoomB) and Holding(Key), now hold, so the plan
[PickUp(Key), Move(RoomA, RoomB)] achieves the goal.
Key Takeaways:
1. Order Matters:
○ If the robot moved first (Move → PickUp), it would be in RoomB without the key
→ goal fails.
○ Correct order: PickUp → Move ensures the key is carried to the destination.
2. Dead Ends Avoided:
○ Moving first leads to an unrecoverable state (key left behind).
○ Forward planning explores all paths, eventually discarding dead ends.
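The whole forward search can be written as a breadth-first progression planner over set-of-atoms states. A minimal sketch of the key-and-rooms domain above, reusing the Action class from the Actions section (negative literals are handled by absence, i.e., a closed-world assumption):

from collections import deque

actions = [
    Action("PickUp(Key)",
           preconditions=frozenset({"At(Robot, RoomA)", "KeyIn(RoomA)"}),
           add_list=frozenset({"Holding(Key)"}),
           delete_list=frozenset({"KeyIn(RoomA)"})),
    Action("Move(RoomA, RoomB)",
           preconditions=frozenset({"At(Robot, RoomA)", "DoorOpen(RoomA, RoomB)"}),
           add_list=frozenset({"At(Robot, RoomB)"}),
           delete_list=frozenset({"At(Robot, RoomA)"})),
]

def progression_plan(initial, goal):
    # Breadth-first search over the states reachable from `initial`.
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for a in actions:
            if a.preconditions <= state:
                nxt = (state - a.delete_list) | a.add_list
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [a.name]))
    return None   # no sequence of actions reaches the goal

initial = {"At(Robot, RoomA)", "KeyIn(RoomA)", "DoorOpen(RoomA, RoomB)"}
goal = frozenset({"At(Robot, RoomB)", "Holding(Key)"})
print(progression_plan(initial, goal))
# ['PickUp(Key)', 'Move(RoomA, RoomB)']

Note that the move-first branch is still generated, but it never reaches the goal and is discarded, exactly as described in the takeaways.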
Backward (Regression) Planning
Key Difference:
● Forward Planning: Starts at the initial state and simulates actions forward.
● Backward Planning: Starts at the goal state and works backward to find required
preconditions.
Goal State:
{
At(Robot, RoomB),
Holding(Key)
}
Step 1: Regress the Goal
Goal State:
{
At(Robot, RoomB),
Holding(Key)
}
The only action that achieves At(Robot, RoomB) is Move(RoomA, RoomB), so it must be the
last action. Replacing its effect with its preconditions (and keeping the rest of the goal) gives:
New Subgoal:
{
At(Robot, RoomA),
DoorOpen(RoomA, RoomB),
Holding(Key)
}
Step 2: Regress Further
Holding(Key) is achieved by PickUp(Key), whose preconditions are At(Robot, RoomA) and
KeyIn(RoomA). Regressing through it yields the subgoal { At(Robot, RoomA), KeyIn(RoomA),
DoorOpen(RoomA, RoomB) }, which already holds in the initial state, so regression stops;
read forward, the plan is [PickUp(Key), Move(RoomA, RoomB)].
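Regression through a STRIPS-style action is itself a small set operation; a sketch using the actions list from the progression example:

def regress(goal, action):
    # Usable only if the action achieves part of the goal and deletes none of it.
    if not (action.add_list & goal) or (action.delete_list & goal):
        return None
    return (goal - action.add_list) | action.preconditions

goal = frozenset({"At(Robot, RoomB)", "Holding(Key)"})
print(regress(goal, actions[1]))   # regress through Move(RoomA, RoomB)
# frozenset({'Holding(Key)', 'At(Robot, RoomA)', 'DoorOpen(RoomA, RoomB)'})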
Goal Stack Planning is a linear planning technique (used in the STRIPS planner) that uses a stack to manage goals
and subgoals. It works by decomposing the main goal into smaller subgoals, solving them sequentially, and
maintaining a stack to track pending actions and preconditions.
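The core loop can be sketched as follows, reusing the Action class and actions list from the progression example. The achiever choice is naive, and a full implementation would also re-verify earlier goals after each action; this is only a sketch:

def goal_stack_plan(state, goals):
    state, stack, plan = set(state), list(goals), []   # end of list = top of stack
    while stack:
        top = stack.pop()
        if isinstance(top, Action):
            # Its preconditions were satisfied above: execute it.
            state = (state - top.delete_list) | top.add_list
            plan.append(top.name)
        elif top in state:
            continue   # goal already holds
        else:
            # Push an achieving action, then its preconditions on top,
            # so the preconditions are solved before the action runs.
            achiever = next(a for a in actions if top in a.add_list)
            stack.append(achiever)
            stack.extend(achiever.preconditions)
    return plan

# Goal ordering matters: Holding(Key) is placed on top of the stack so it
# is achieved before At(Robot, RoomB).
print(goal_stack_plan(
    {"At(Robot, RoomA)", "KeyIn(RoomA)", "DoorOpen(RoomA, RoomB)"},
    ["At(Robot, RoomB)", "Holding(Key)"]))
# ['PickUp(Key)', 'Move(RoomA, RoomB)']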
Planning in Artificial Intelligence (AI) can be viewed as a state space search problem, where:
● The initial state represents the starting conditions.
● The goal state represents the desired outcome.
● Actions (operators) transform one state into another.
● The solution is a sequence of actions (a path) from the initial state to the goal state.
● Search Tree: the search can be visualized as a tree whose root node is the initial state;
branches correspond to actions, and child nodes to the states those actions produce.
Key Components:
1. State – A representation of the world at a given time.
2. Actions – Possible moves or operations that change the state.
3. Transition Model – Defines how actions affect states.
4. Goal Test – Checks if the current state is the goal.
5. Path Cost – Measures the cost of a solution (optional).
Example of State Space Search
In the 8-puzzle, for instance, each arrangement of the tiles is a state, sliding a tile into the
blank square is an action, the goal test checks for the target arrangement, and the path cost
is the number of moves made.
Multi-Agent Planning involves coordinating multiple autonomous agents to achieve a shared or individual
goal while handling:
● Dependencies between agents' actions.
● Resource conflicts (e.g., two robots needing the same tool).
● Communication constraints (limited/uncertain information sharing).
Scenario:
● Two robots (R1 and R2) in a warehouse must deliver packages to targets (T1 and T2).
● Obstacles (shelves) limit movement.
● Goal: Minimize delivery time without collisions.
Initial State:
[R1] → Starts at (1,1), needs to reach T1 at (3,3).
[R2] → Starts at (1,3), needs to reach T2 at (3,1).
Shelves at (2,2) and (2,1).
Possible Plans:
1. Naive Plan (No Coordination):
○ R1 moves right to (3,1), then up to (3,3).
○ R2 moves down to (3,3), then left to (3,1).
○ Conflict: Both robots try to occupy (3,3) at the same time → Collision!
2. Coordinated Plan (MAP Solution):
○ R1's Path: (1,1) → (1,2) → (1,3) → (2,3) → (3,3).
○ R2's Path: (1,3) → (2,3) → (3,3) → (3,2) → (3,1).
○ Conflict Avoidance:
■ Agents communicate to reserve space-time slots.
■ R1 waits at (1,3) until R2 passes (2,3).
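Checking such space-time reservations is straightforward once each path is a list of cells indexed by timestep. A minimal sketch (the wait-at-goal padding is an illustrative assumption):

def has_conflict(path1, path2):
    # Pad the shorter path: a finished robot waits at its final cell.
    T = max(len(path1), len(path2))
    p1 = path1 + [path1[-1]] * (T - len(path1))
    p2 = path2 + [path2[-1]] * (T - len(path2))
    for t in range(T):
        if p1[t] == p2[t]:   # vertex conflict: same cell at the same time
            return True
        if t > 0 and p1[t] == p2[t - 1] and p2[t] == p1[t - 1]:
            return True      # edge conflict: the robots swap cells
    return False

r1 = [(1, 1), (1, 2), (1, 3), (2, 3), (3, 3)]
r2 = [(1, 3), (2, 3), (3, 3), (3, 2), (3, 1)]
print(has_conflict(r1, r2))   # False: the coordinated paths never collide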
Hierarchical Planning
Hierarchical Planning is an AI planning technique where complex problems are broken down into smaller,
manageable sub-problems organized in a hierarchy.
Key Concepts
1. Abstraction Levels:
○ High-level (Abstract): Broad goals (e.g., "Build a house").
○ Mid-level: Sub-goals (e.g., "Lay foundation", "Construct walls").
○ Low-level (Concrete): Executable actions (e.g., "Pour concrete", "Install bricks").
2. Advantages:
○ Reduces search space by ignoring irrelevant details early.
○ Enables reusability of sub-plans (e.g., "Construct walls" can be reused in different buildings).
○ Improves scalability for complex problems.
3. Disadvantages:
○ May miss optimal solutions due to abstraction.
○ Requires careful design of hierarchy levels
Planning a Trip from Delhi to Goa
Level 1: High-Level Goal
"Travel from Delhi to Goa"
Level 2: Mid-Level Decomposition
Choose a travel mode:
1. By Air → Book flight, go to airport, board plane.
2. By Train → Book train, go to station, board train.
3. By Road → Rent car, drive via highway.
Assume we pick "By Air".
Level 3: Low-Level Actions
1. Book Flight:
○ Search for flights.
○ Compare prices.
○ Make payment.
2. Go to Airport:
○ Pack luggage.
○ Take a taxi to IGI Airport.
○ Check in at counter.
3. Board Plane:
○ Pass security.
○ Wait at gate.
○ Enter flight.
Hierarchy Visualization: (figure: task tree expanding "Travel from Delhi to Goa" through the mid-level choice "By Air" down to the low-level actions above)
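The decomposition can be sketched as an HTN-style method table mapping each compound task to an ordered list of subtasks; the task names mirror the example and are illustrative:

methods = {
    "Travel(Delhi, Goa)": ["BookFlight", "GoToAirport", "BoardPlane"],  # "By Air"
    "BookFlight": ["SearchFlights", "ComparePrices", "MakePayment"],
    "GoToAirport": ["PackLuggage", "TaxiToIGIAirport", "CheckInAtCounter"],
    "BoardPlane": ["PassSecurity", "WaitAtGate", "EnterFlight"],
}

def expand(task):
    # Recursively expand compound tasks; primitive tasks have no method.
    if task not in methods:
        return [task]
    plan = []
    for subtask in methods[task]:
        plan.extend(expand(subtask))
    return plan

print(expand("Travel(Delhi, Goa)"))
# ['SearchFlights', 'ComparePrices', 'MakePayment', 'PackLuggage', ...]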
After a planner generates a sequence of actions (the plan), the agent must execute it in the real world. However, real-world environments are
often dynamic—unexpected changes can occur, requiring monitoring and replanning to ensure success.
Plan Execution
The agent carries out the planned actions step-by-step.
Example of Failure
● During Move(RoomA, RoomB), the robot senses:
○ ¬DoorOpen(RoomA, RoomB) (Door is now closed!).
● Plan is now invalid because the action’s precondition (DoorOpen) is false.
Replanning
If monitoring detects a mismatch, the agent must:
1. Stop execution.
2. Update its world state (e.g., mark the door as closed).
3. Generate a new plan from the current state.
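These three steps form a loop around plan execution. A minimal sketch, assuming the progression planner and actions list from earlier; sense is a purely illustrative stand-in for the agent's perception:

def execute_with_monitoring(state, goal, sense):
    state = frozenset(state)
    plan = progression_plan(state, goal)
    if plan is None:
        return None
    while plan:
        action = next(a for a in actions if a.name == plan[0])
        state = frozenset(sense(state))        # observe the world before acting
        if not (action.preconditions <= state):
            # Precondition violated (e.g., the door closed): stop, keep the
            # updated state, and generate a new plan from here.
            plan = progression_plan(state, goal)
            if plan is None:
                return None                    # no recovery possible
            continue
        state = (state - action.delete_list) | action.add_list
        plan = plan[1:]
    return state

# With a static world (perception confirms the plan's assumptions),
# no replanning is triggered:
print(execute_with_monitoring(
    {"At(Robot, RoomA)", "KeyIn(RoomA)", "DoorOpen(RoomA, RoomB)"},
    frozenset({"At(Robot, RoomB)", "Holding(Key)"}),
    sense=lambda s: s))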