
State Space Search - Goal-Directed Agent - Search Problem - Illustration of Search Process

1. Introduction to State Space Search


• State Space Search is a framework used to solve problems by exploring possible states
and transitions systematically.
• Components of State Space Search:
1. State: A configuration of the problem at any point.
2. Initial State: The starting configuration of the problem.
3. Goal State(s): The desired outcome or solution to the problem.
4. Actions/Operators: Set of operations that move the system from one state to
another.
5. Path Cost: The cumulative cost of the actions leading to a state.
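A minimal sketch of these components in Python (the class and parameter names are illustrative, not from the source); the later examples in this module can all be phrased against this interface:

class SearchProblem:
    """Bundles the five components listed above."""
    def __init__(self, initial_state, goal_states, actions, result, step_cost):
        self.initial_state = initial_state   # 2. Initial State
        self.goal_states = set(goal_states)  # 3. Goal State(s)
        self.actions = actions               # 4. state -> available actions
        self.result = result                 # 4. (state, action) -> next state
        self.step_cost = step_cost           # 5. (state, action) -> numeric cost

    def is_goal(self, state):
        return state in self.goal_states

    def path_cost(self, steps):
        """5. Path Cost: cumulative cost of a sequence of (state, action) pairs."""
        return sum(self.step_cost(s, a) for s, a in steps)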

2. Goal-Directed Agent
• A Goal-Directed Agent is designed to achieve specific objectives (goal states).
• Characteristics:
1. Perception: Identifies the current state of the environment.
2. Decision-Making: Chooses actions to progress toward the goal.
3. Action Execution: Performs the selected action.
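These three characteristics form a sense-decide-act loop. A hedged sketch, assuming a hypothetical environment object with perceive() and execute() methods:

def goal_directed_agent(environment, choose_action, is_goal):
    state = environment.perceive()        # 1. Perception
    while not is_goal(state):
        action = choose_action(state)     # 2. Decision-Making
        environment.execute(action)       # 3. Action Execution
        state = environment.perceive()
    return state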

3. Search Problem
• A Search Problem is defined by:
1. State Space: The collection of all possible states.
2. Transition Model: Describes how an action changes the state.
3. Goal Test: Determines whether a state is the goal.
4. Cost Function: Evaluates the efficiency of reaching a state.
• Example: The 8-puzzle problem, where the task is to rearrange tiles to match a goal
configuration.
4. Illustration of the Search Process
The search process involves systematically exploring states to reach the goal:
1. Initialization: Start with the initial state.
2. Expansion: Generate all successor states.
3. Selection: Choose the next state to explore (based on the strategy).
4. Goal Test: Check if the selected state meets the goal criteria.
5. Repeat until the goal state is found or all states are exhausted.
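These five steps collapse into one loop. A minimal sketch, assuming the SearchProblem interface sketched earlier and a first-in-first-out frontier:

from collections import deque

def search(problem):
    frontier = deque([problem.initial_state])         # 1. Initialization
    explored = set()
    while frontier:                                   # 5. Repeat until exhausted
        state = frontier.popleft()                    # 3. Selection (FIFO here)
        if problem.is_goal(state):                    # 4. Goal Test
            return state
        explored.add(state)
        for action in problem.actions(state):         # 2. Expansion
            successor = problem.result(state, action)
            if successor not in explored and successor not in frontier:
                frontier.append(successor)
    return None                                       # goal not reachable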

5. Search Strategies
1. Uninformed Search:
o Breadth-First Search (BFS): Explores all nodes at one depth before moving
deeper.
o Depth-First Search (DFS): Explores as far as possible along a branch before
backtracking.
2. Informed Search:
o Greedy Search: Prioritizes the state that appears closest to the goal according to a heuristic.
o A* Search: Combines path cost and heuristic estimates for optimal results.

6. Example: Solving a Maze Problem


Problem: Find the shortest path from the start (S) to the goal (G) in a maze.
State Space Representation:
• States: Each cell in the maze.
• Actions: Move Up, Down, Left, Right.
• Initial State: Starting cell (S).
• Goal State: Goal cell (G).
Search Process:
1. Initialization: Start at (S).
2. Expansion: Explore neighboring cells (actions).
3. Selection: Choose the next cell based on a BFS strategy.
4. Goal Test: Check if the selected cell is (G).
Example Execution:
Maze:
S - Start
G - Goal
X - Blocked
O - Open

Maze Grid:
[S, O, X]
[O, O, X]
[X, O, G]

Steps:
1. Start at S (0, 0).
2. Move to (0, 1).
3. Move to (1, 1).
4. Move to (2, 1).
5. Goal (2, 2) reached.
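A runnable BFS sketch for exactly this grid; the direction order is chosen so the returned path matches the steps above (an equally short path through (1, 0) also exists):

from collections import deque

MAZE = [["S", "O", "X"],
        ["O", "O", "X"],
        ["X", "O", "G"]]

def bfs_maze(maze, start=(0, 0), goal=(2, 2)):
    frontier = deque([[start]])                  # each entry is a whole path
    visited = {start}
    while frontier:
        path = frontier.popleft()
        row, col = path[-1]
        if (row, col) == goal:                   # Goal Test
            return path
        for dr, dc in ((0, 1), (1, 0), (-1, 0), (0, -1)):  # Right, Down, Up, Left
            r, c = row + dr, col + dc
            if (0 <= r < len(maze) and 0 <= c < len(maze[0])
                    and maze[r][c] != "X" and (r, c) not in visited):
                visited.add((r, c))
                frontier.append(path + [(r, c)])
    return None

print(bfs_maze(MAZE))    # [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]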

Examples of Search Problems: Eight Queens Problem and Tic-Tac-Toe

1. Eight Queens Problem


The Eight Queens problem involves placing 8 queens on a chessboard so that no two queens
threaten each other. It is a classic constraint satisfaction problem that can be framed as a search
problem.
State Space Representation
• States: Configurations of the chessboard with some queens placed.
• Initial State: An empty chessboard.
• Actions: Place a queen on a valid square in the next row.
• Goal State: A configuration where 8 queens are placed without threatening each other.
Search Process
1. Start with an empty board.
2. Place a queen in the first row in a valid position.
3. Move to the next row and place another queen in a non-threatening position.
4. Repeat until all rows are filled, or backtrack if no valid position is found.
Example Execution (Partial Solution):
1. Place Q1 at (1,1).
2. Place Q2 at (2,3).
3. Place Q3 at (3,5).
4. Place Q4 at (4,2).
5. Continue until all 8 queens are placed.
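A compact backtracking sketch of this process (rows and columns 0-indexed; placed[r] holds the column of the queen in row r):

def solve_queens(n=8, placed=()):
    row = len(placed)
    if row == n:                                  # all rows filled: goal state
        return placed
    for col in range(n):
        # a square is valid if no earlier queen shares its column or a diagonal
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            solution = solve_queens(n, placed + (col,))
            if solution is not None:
                return solution
    return None                                   # no valid square: backtrack

print(solve_queens())    # (0, 4, 7, 5, 2, 6, 1, 3)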

2. Tic-Tac-Toe
Tic-Tac-Toe is a two-player game where the objective is to align three symbols (X or O) in a row,
column, or diagonal. It can be framed as a search problem for optimal strategies.
State Space Representation
• States: Configurations of the 3x3 board.
• Initial State: An empty board.
• Actions: Place an X or O in an empty square.
• Goal State: A board configuration where one player wins or all squares are filled (draw).
Search Process
1. Start with an empty board.
2. Players take turns placing X or O.
3. After each move, check if the goal state is reached (win or draw).
4. Use a search strategy (e.g., Minimax) to determine the best move.
Example Execution:
1. Initial Board:
[_, _, _]
[_, _, _]
[_, _, _]
2. Player X places X at (1, 1):
[_, _, _]
[_, X, _]
[_, _, _]
3. Player O places O at (0, 0):
[O, _, _]
[_, X, _]
[_, _, _]
4. Players continue until a win or draw is reached.
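A minimal Minimax sketch for this game (the board is a tuple of 9 cells indexed 0-8, None for empty; X is the maximizing player). It searches the full game tree, so it is exhaustive rather than fast:

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),     # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),     # columns
         (0, 4, 8), (2, 4, 6)]                # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (value, move): +1 if X can force a win, -1 if O can, 0 for a draw."""
    w = winner(board)
    if w is not None:                             # Goal Test: a player has won
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:                                 # Goal Test: board full, draw
        return 0, None
    results = []
    for m in moves:
        child = board[:m] + (player,) + board[m + 1:]
        value, _ = minimax(child, "O" if player == "X" else "X")
        results.append((value, m))
    best = max if player == "X" else min
    return best(results, key=lambda vm: vm[0])

value, move = minimax((None,) * 9, "X")
print(value, move)    # value 0: perfect play from an empty board is a draw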

General State Space Search - Search Tree and Terminology

1. Introduction to State Space Search


• State Space Search is a systematic method of solving problems by exploring all possible
configurations (states) that the problem can take and identifying a path to the goal state.
• A search tree represents the process of exploring these states.

2. Search Tree
A search tree is a graphical representation of the exploration process in state space search. It
captures:
1. Nodes: Represent states.
2. Edges: Represent actions leading from one state to another.
3. Root Node: Represents the initial state.
4. Leaf Nodes: Nodes with no successors (end of a path).
Structure of a Search Tree
• Levels/Depths: Correspond to the number of actions taken from the root.
• Branches: Represent possible choices at a state.

3. Terminology of Search Tree


1. State: A configuration or situation in the problem domain.
o Example: In the 8-puzzle, each arrangement of tiles is a state.
2. Initial State: The starting point of the search.
o Example: The initial configuration of tiles in the 8-puzzle.
3. Goal State: The desired final configuration or solution.
o Example: The solved configuration in the 8-puzzle.
4. Action: A move that transitions the system from one state to another.
o Example: Moving a tile in the 8-puzzle.
5. Path: A sequence of states connected by actions, leading from the root to a particular
node.
6. Cost: A numerical value representing the effort to transition from one state to another (if
applicable).
7. Parent Node: The node that leads to the current node via an action.
8. Child Node: A node that can be reached from the current node via an action.
9. Depth: The number of edges from the root to the current node.
10. Breadth (branching factor): The number of child nodes branching out from a node.
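This terminology maps directly onto a small node structure. A hedged sketch (the names are illustrative, keyed to the numbered terms above):

class Node:
    """One node of a search tree."""
    def __init__(self, state, parent=None, action=None, cost=0.0):
        self.state = state       # 1. the configuration this node represents
        self.parent = parent     # 7. parent node (None for the root)
        self.action = action     # 4. action that led here from the parent
        self.cost = cost         # 6. cumulative cost from the root
        self.depth = 0 if parent is None else parent.depth + 1   # 9. depth

    def path(self):
        """5. Path: sequence of states from the root to this node."""
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))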

4. Example of a Search Tree


Problem: Solving the 5-puzzle (a 2x3 simplified version of the 8-puzzle).
• Initial State:
[1, 2, 3]
[_, 4, 5]
• Goal State:
[1, 2, 3]
[4, 5, _]
• Actions: Move the blank (_) up, down, left, or right.
Search Tree Representation:
              [1, 2, 3]
              [_, 4, 5]
             /          \
      (blank up)    (blank right)
      [_, 2, 3]       [1, 2, 3]
      [1, 4, 5]       [4, _, 5]
                           |
                     (blank right)
                       [1, 2, 3]
                       [4, 5, _]   <- goal
• Nodes: Represent the states of the puzzle.
• Edges: Represent actions, such as moving the blank up or right.
• Goal State: [1, 2, 3] [4, 5, _], reached by moving the blank right twice.
5. General State Space Search Algorithm
1. Initialization: Start with the initial state as the root node.
2. Expansion: Generate successors for the current node.
3. Selection: Choose a node to expand (based on strategy like BFS or DFS).
4. Goal Test: Check if the selected node is the goal state.
5. Repeat: Continue until the goal is found or all possibilities are exhausted.
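The only step that differs between strategies is Selection: with a FIFO frontier the loop is BFS, with a LIFO frontier it is DFS. A sketch assuming the SearchProblem interface from earlier:

from collections import deque

def general_search(problem, strategy="bfs"):
    frontier = deque([problem.initial_state])      # 1. Initialization
    explored = set()
    while frontier:                                # 5. Repeat
        # 3. Selection: FIFO (popleft) gives BFS, LIFO (pop) gives DFS
        state = frontier.popleft() if strategy == "bfs" else frontier.pop()
        if state in explored:
            continue
        if problem.is_goal(state):                 # 4. Goal Test
            return state
        explored.add(state)
        for action in problem.actions(state):      # 2. Expansion
            frontier.append(problem.result(state, action))
    return None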

Informed Search - Best-First Search and A* Search


Example: Pathfinding
Find the shortest path from A to G in the graph below using Best-First Search.
Graph:
        A
       / \
      B   C
     / \   \
    D   E   F
             \
              G
Heuristic Values (h(n)):
• A: 6, B: 4, C: 2, D: 5, E: 3, F: 2, G: 0.
Steps:
1. Start at A. Frontier: [A (6)].
2. Expand A. Add B (4) and C (2) to the frontier. Frontier: [C (2), B (4)].
3. Expand C. Add F (2). Frontier: [F (2), B (4)].
4. Expand F. Add G (0). Frontier: [G (0), B (4)].
5. Expand G. Goal reached.
Solution Path: A → C → F → G.
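A runnable sketch of exactly this run, ordering the frontier by h(n) alone with a priority queue:

import heapq

GRAPH = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "F": ["G"]}
H = {"A": 6, "B": 4, "C": 2, "D": 5, "E": 3, "F": 2, "G": 0}

def greedy_best_first(start, goal):
    frontier = [(H[start], start, [start])]        # ordered by h(n) only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for succ in GRAPH.get(node, []):
            heapq.heappush(frontier, (H[succ], succ, path + [succ]))
    return None

print(greedy_best_first("A", "G"))    # ['A', 'C', 'F', 'G']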

Example: Pathfinding
Find the shortest path from S to G using A*.
Graph:
         S
        / \
       A   B
      / \ / \
     C  D E  G
Edge Costs (g(n)):
• S → A: 2, S → B: 3.
• A → C: 3, A → D: 2.
• B → E: 2, B → G: 5.
Heuristic Values (h(n)):
• S: 6, A: 4, B: 3, C: 5, D: 2, E: 3, G: 0.
Solution Path: S → A → D → G.
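The source gives no cost for a D → G edge even though the stated solution path passes through it; the sketch below assumes D → G = 3 (a hypothetical value) so the run is consistent, and under that assumption A* indeed returns S → A → D → G with total cost 7:

import heapq

GRAPH = {"S": {"A": 2, "B": 3}, "A": {"C": 3, "D": 2},
         "B": {"E": 2, "G": 5}, "D": {"G": 3}}    # D -> G cost is an assumption
H = {"S": 6, "A": 4, "B": 3, "C": 5, "D": 2, "E": 3, "G": 0}

def a_star(start, goal):
    frontier = [(H[start], 0, start, [start])]    # ordered by f(n) = g(n) + h(n)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue
        best_g[node] = g
        for succ, cost in GRAPH.get(node, {}).items():
            g2 = g + cost
            heapq.heappush(frontier, (g2 + H[succ], g2, succ, path + [succ]))
    return None

print(a_star("S", "G"))    # (['S', 'A', 'D', 'G'], 7)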

4. Comparison: Best-First Search vs. A*

Feature               Best-First Search     A* Search
Evaluation Function   f(n) = h(n)           f(n) = g(n) + h(n)
Focus                 Only heuristic cost   Both path cost and heuristic cost
Optimality            Not guaranteed        Guaranteed if h(n) is admissible
Efficiency            Less efficient        More efficient due to balanced evaluation


Hill Climbing Search and Simulated Annealing

1. Introduction to Optimization in Search


In search problems, optimization involves finding the best solution from a set of possible
solutions. Two key approaches in heuristic optimization are Hill Climbing and Simulated
Annealing.

2. Hill Climbing Search


2.1. Concept
• Hill Climbing is a local search algorithm that continuously moves toward higher (or
lower) values of the objective function.
• It selects the best neighboring state as the next move, aiming to reach the optimal
solution.
2.2. Characteristics
• Works best for problems with a clear gradient or path to the goal.
• Can get stuck in:
o Local Maxima/Minima: Peaks or valleys that are not the global solution.
o Ridges: Narrow paths that are difficult to navigate.
o Plateaus: Flat areas with no gradient.
2.3. Variants
1. Simple Hill Climbing: Chooses the first improvement found.
2. Steepest-Ascent Hill Climbing: Evaluates all neighbors and selects the best.
3. Stochastic Hill Climbing: Randomly selects among better neighbors.
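A steepest-ascent sketch (the neighbours and score functions are assumed placeholders supplied by the problem):

def hill_climb(state, neighbours, score, max_steps=1000):
    """Move to the best neighbour until no neighbour improves the score."""
    for _ in range(max_steps):
        candidates = neighbours(state)
        if not candidates:
            break
        best = max(candidates, key=score)
        if score(best) <= score(state):     # local maximum, ridge, or plateau
            break
        state = best
    return state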
3. Simulated Annealing
3.1. Concept
• Simulated Annealing is inspired by the process of annealing in metallurgy, where
materials are heated and then cooled slowly to settle into a stable state.
• It explores the state space by occasionally accepting worse solutions to escape local
optima.
3.2. Characteristics
• Balances exploration (searching new areas) and exploitation (refining current solutions).
• Uses a probabilistic acceptance function: a worse solution is accepted with probability P = exp(-ΔE / T), where ΔE is the decrease in solution quality and T is the current temperature, lowered gradually by a cooling schedule.
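A minimal sketch for a maximization problem, assuming a random_neighbour placeholder and a geometric cooling schedule (both illustrative choices, not from the source):

import math
import random

def simulated_annealing(state, random_neighbour, score,
                        t_start=1.0, t_min=1e-3, cooling=0.95):
    current, t = state, t_start
    while t > t_min:
        candidate = random_neighbour(current)
        delta = score(candidate) - score(current)   # negative for a worse move
        # always accept improvements; accept worse moves with P = exp(delta / t)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
        t *= cooling                                # cool down gradually
    return current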
4. Comparison: Hill Climbing vs. Simulated Annealing


Feature          Hill Climbing                    Simulated Annealing
Search Method    Deterministic or stochastic      Probabilistic
Exploration      Limited to neighbors             Allows jumps to worse states
Optimality       Prone to local optima            Escapes local optima
Speed            Generally faster                 Slower due to exploration
Best Used For    Problems with smooth gradients   Problems with rugged landscapes

5. Applications
• Hill Climbing:
o Shortest path problems.
o Scheduling and resource allocation.
• Simulated Annealing:
o Traveling Salesperson Problem (TSP).
o Machine learning model tuning.
