Unit-3 Module - 1
2. Goal-Directed Agent
• A Goal-Directed Agent is designed to achieve specific objectives (goal states).
• Characteristics:
1. Perception: Identifies the current state of the environment.
2. Decision-Making: Chooses actions to progress toward the goal.
3. Action Execution: Performs the selected action.
3. Search Problem
• A Search Problem is defined by:
1. State Space: The collection of all possible states.
2. Transition Model: Describes how an action changes the state.
3. Goal Test: Determines whether a state is the goal.
4. Cost Function: Evaluates the efficiency of reaching a state.
• Example: The 8-puzzle problem, where the task is to rearrange tiles to match a goal
configuration.
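The four components above can be sketched for a toy 2x2 sliding puzzle (a scaled-down 8-puzzle). This is a minimal illustration, not a standard API; the names `successors`, `goal_test`, and `step_cost` are assumptions.

```python
GOAL = (1, 2, 3, 0)  # goal state; 0 marks the blank tile on a 2x2 board

# State space: all orderings of (0, 1, 2, 3); each tuple is one state.

def successors(state):
    """Transition model: swap the blank with an adjacent tile."""
    adjacent = {0: (1, 2), 1: (0, 3), 2: (0, 3), 3: (1, 2)}
    blank = state.index(0)
    for target in adjacent[blank]:
        s = list(state)
        s[blank], s[target] = s[target], s[blank]
        yield tuple(s)

def goal_test(state):
    """Goal test: is this state the goal configuration?"""
    return state == GOAL

def step_cost(state, next_state):
    """Cost function: every move costs 1 here."""
    return 1
```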
4. Illustration of the Search Process
The search process involves systematically exploring states to reach the goal:
1. Initialization: Start with the initial state.
2. Expansion: Generate all successor states.
3. Selection: Choose the next state to explore (based on the strategy).
4. Goal Test: Check if the selected state meets the goal criteria.
5. Repeat until the goal state is found or all states are exhausted.
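The five steps above can be sketched as one generic loop; the problem supplies `successors` and `goal_test`, and the choice of frontier data structure determines the selection strategy (a FIFO queue is used here, which makes the loop breadth-first).

```python
from collections import deque

def generic_search(start, successors, goal_test):
    frontier = deque([(start, [start])])   # 1. initialisation
    explored = {start}
    while frontier:                        # 5. repeat until exhausted
        state, path = frontier.popleft()   # 3. selection (FIFO here)
        if goal_test(state):               # 4. goal test
            return path
        for nxt in successors(state):      # 2. expansion
            if nxt not in explored:
                explored.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                            # all states exhausted, no goal
```

Swapping the `deque` for a stack or a priority queue turns the same loop into depth-first or best-first search.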
5. Search Strategies
1. Uninformed Search:
o Breadth-First Search (BFS): Explores all nodes at one depth before moving
deeper.
o Depth-First Search (DFS): Explores as far as possible along a branch before
backtracking.
2. Informed Search:
o Greedy Search: Prioritizes states closest to the goal (heuristics).
o A* Search: Combines path cost and heuristics for optimal results.
Example: Maze Pathfinding
Find a path from S (start) to G (goal); O cells are open and X cells are walls.
Maze Grid:
[S, O, X]
[O, O, X]
[X, O, G]
Steps:
1. Start at S (0, 0).
2. Move to (0, 1).
3. Move to (1, 1).
4. Move to (2, 1).
5. Goal (2, 2) reached.
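The steps above can be reproduced with a breadth-first search over the grid; this is a minimal sketch, assuming O cells are open and X cells are walls as in the maze shown.

```python
from collections import deque

GRID = [["S", "O", "X"],
        ["O", "O", "X"],
        ["X", "O", "G"]]

def bfs_maze(grid):
    rows, cols = len(grid), len(grid[0])
    start = goal = None
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == "S": start = (r, c)
            if grid[r][c] == "G": goal = (r, c)
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        # Expand open neighbours: right, down, left, up.
        for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "X" and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None
```

With this move ordering the search finds the same route as the steps above: (0, 0) → (0, 1) → (1, 1) → (2, 1) → (2, 2).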
2. Tic-Tac-Toe
Tic-Tac-Toe is a two-player game where the objective is to align three symbols (X or O) in a row,
column, or diagonal. It can be framed as a search problem for optimal strategies.
State Space Representation
• States: Configurations of the 3x3 board.
• Initial State: An empty board.
• Actions: Place an X or O in an empty square.
• Goal State: A board configuration where one player wins or all squares are filled (draw).
Search Process
1. Start with an empty board.
2. Players take turns placing X or O.
3. After each move, check if the goal state is reached (win or draw).
4. Use a search strategy (e.g., Minimax) to determine the best move.
Example Execution:
1. Initial Board:
[_, _, _]
[_, _, _]
[_, _, _]
2. Player X places X at (1, 1):
[_, _, _]
[_, X, _]
[_, _, _]
3. Player O places O at (0, 0):
[O, _, _]
[_, X, _]
[_, _, _]
4. Players continue until a win or draw is reached.
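The Minimax strategy mentioned in step 4 can be sketched as follows, with the board as a tuple of 9 cells holding "X", "O", or None; the helper names `winner` and `minimax` are illustrative.

```python
# All eight winning lines: rows, columns, diagonals.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Goal test, part 1: return 'X' or 'O' if a line is complete."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable score for X with perfect play: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if all(board):
        return 0  # goal test, part 2: full board is a draw
    scores = []
    for i in range(9):
        if board[i] is None:  # expand: try every legal placement
            nxt = board[:i] + (player,) + board[i + 1:]
            scores.append(minimax(nxt, "O" if player == "X" else "X"))
    # X maximises the score, O minimises it.
    return max(scores) if player == "X" else min(scores)
```

Running it from the empty board confirms the well-known result that perfect play by both sides ends in a draw (score 0).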
2. Search Tree
A search tree is a graphical representation of the exploration process in state space search. It
captures:
1. Nodes: Represent states.
2. Edges: Represent actions leading from one state to another.
3. Root Node: Represents the initial state.
4. Leaf Nodes: Nodes with no successors (end of a path).
Structure of a Search Tree
• Levels/Depths: Correspond to the number of actions taken from the root.
• Branches: Represent possible choices at a state.
Example: Pathfinding
Find the shortest path from S to G using A*.
Graph:
        S
       / \
      A   B
     / \ / \
    C   D E  G
Edge Costs (g(n)):
• S → A: 2, S → B: 3.
• A → C: 3, A → D: 2.
• B → E: 2, B → G: 5.
Heuristic Values (h(n)):
• S: 6, A: 4, B: 3, C: 5, D: 2, E: 3, G: 0.
Solution Path: S → B → G, with total cost g = 3 + 5 = 8. (G is reachable only through B
in this graph; A* expands S, then A, whose f-value ties with B's, then B, where it
discovers G with f = g + h = 8 + 0 = 8.)
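The example can be checked with a short A* implementation over the graph and heuristic values given above; this is a sketch, not a library API.

```python
import heapq

# Edge costs g(n) and heuristic values h(n) from the example.
EDGES = {"S": {"A": 2, "B": 3}, "A": {"C": 3, "D": 2},
         "B": {"E": 2, "G": 5}, "C": {}, "D": {}, "E": {}, "G": {}}
H = {"S": 6, "A": 4, "B": 3, "C": 5, "D": 2, "E": 3, "G": 0}

def a_star(start, goal):
    # Priority queue ordered by f(n) = g(n) + h(n).
    frontier = [(H[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in EDGES[node].items():
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(frontier, (ng + H[nbr], ng, nbr, path + [nbr]))
    return None, float("inf")
```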
5. Applications
• Hill Climbing:
o Shortest path problems.
o Scheduling and resource allocation.
• Simulated Annealing:
o Traveling Salesperson Problem (TSP).
o Machine learning model tuning.
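Hill climbing itself can be sketched in a few lines; here is a minimal deterministic version that maximises a toy objective over the integers (the objective and the ±1 neighbourhood are illustrative assumptions). Simulated annealing differs in that it sometimes accepts downhill moves, with a probability that shrinks as a "temperature" parameter cools, which helps it escape local maxima.

```python
def hill_climb(f, x):
    """Repeatedly move to the better of the two neighbours x-1, x+1."""
    while True:
        best = max((x - 1, x + 1), key=f)
        if f(best) <= f(x):
            return x  # no uphill neighbour: a local maximum
        x = best
```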