Unit II (Part I)

The document discusses various search algorithms in artificial intelligence, focusing on bidirectional search, informed search algorithms (including Best First Search and A*), and hill climbing algorithms. It outlines their completeness, time and space complexities, advantages, and disadvantages, as well as specific algorithmic steps for implementation. Additionally, it addresses challenges faced by hill climbing algorithms and introduces simulated annealing as a potential solution to local maxima issues.


Completeness: Bidirectional search is complete if BFS is used in both directions.

Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)).

Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).

Optimal: Bidirectional search is optimal when both searches use BFS and all step costs are equal.
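
As a rough sketch of the idea (the graph, node names, and helper function below are illustrative assumptions, not from the notes), bidirectional search runs one BFS from the start and one from the goal and stops when the two frontiers meet:

# A minimal sketch of bidirectional search using BFS from both ends.
from collections import deque

def bidirectional_bfs(graph, start, goal):
    """Return a start-to-goal path, or None, by growing two BFS frontiers."""
    if start == goal:
        return [start]
    # parents_f / parents_b remember how each node was reached from either end
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        node = frontier.popleft()
        for nbr in graph.get(node, []):
            if nbr not in parents:
                parents[nbr] = node
                frontier.append(nbr)
                if nbr in other_parents:      # the two searches have met
                    return nbr
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b) or \
               expand(frontier_b, parents_b, parents_f)
        if meet:
            # stitch the two half-paths together at the meeting node
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n)
                n = parents_b[n]
            return path
    return None

# Example (hypothetical undirected graph given as adjacency lists):
g = {'S': ['A', 'B'], 'A': ['S', 'C'], 'B': ['S', 'D'],
     'C': ['A', 'G'], 'D': ['B', 'G'], 'G': ['C', 'D']}
print(bidirectional_bfs(g, 'S', 'G'))   # e.g. ['S', 'A', 'C', 'G']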

Informed Search Algorithms

So far we have talked about uninformed search algorithms, which look through the search space for all possible solutions to the problem without any additional knowledge about that space. An informed search algorithm, by contrast, uses additional knowledge such as how far we are from the goal, the path cost, and how to reach the goal node. This knowledge helps the agent explore less of the search space and find the goal node more efficiently.

Informed search algorithms are more useful for large search spaces. Because an informed search algorithm uses the idea of a heuristic, it is also called heuristic search.

Heuristic function: A heuristic is a function used in informed search to find the most promising path. It takes the current state of the agent as input and produces an estimate of how close the agent is to the goal.

The heuristic method might not always give the best solution, but it usually finds a good solution in reasonable time. The heuristic function estimates how close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path from the given state to the goal. The value of the heuristic function is always non-negative.

Admissibility of the heuristic function is given as: h(n) <= h*(n)

Here h(n) is the heuristic (estimated) cost, and h*(n) is the actual cost of an optimal path from n to the goal.

Hence the heuristic cost should be less than or equal to the actual cost; in other words, an admissible heuristic never overestimates.
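
For a concrete illustration (the grid setting and coordinates below are assumptions for this sketch, not from the notes): on a grid where the agent moves one cell up, down, left, or right per step, the Manhattan distance to the goal is an admissible heuristic, because at least that many moves are always required, so h(n) <= h*(n) holds.

# Manhattan distance as an admissible heuristic on a 4-connected grid.
def manhattan_h(state, goal):
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan_h((0, 0), (3, 4)))   # 7: no path can need fewer than 7 moves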
Pure Heuristic Search:

Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their heuristic value h(n). It maintains two lists, OPEN and CLOSED: the CLOSED list holds nodes which have already been expanded, and the OPEN list holds nodes which have not yet been expanded.

On each iteration, the node n with the lowest heuristic value is expanded, all of its successors are generated, and n is placed on the CLOSED list. The algorithm continues until a goal state is found.

In informed search we will discuss two main algorithms, which are given below:

o Best First Search Algorithm (Greedy Search)

o A* Search Algorithm

1.) Best-first Search Algorithm (Greedy Search):

Greedy best-first search always selects the path which appears best at that moment. It combines aspects of depth-first search and breadth-first search, using a heuristic function to guide the search, and so lets us take advantage of both algorithms. With the help of best-first search, at each step we can choose the most promising node. In the best-first search algorithm we expand the node which is closest to the goal node, where the closeness is estimated by the heuristic function, i.e.

f(n) = h(n)

where h(n) = estimated cost from node n to the goal.

The greedy best-first algorithm is implemented using a priority queue.

Best first search algorithm:

o Step 1: Place the starting node into the OPEN list.

o Step 2: If the OPEN list is empty, stop and return failure.

o Step 3: Remove the node n with the lowest value of h(n) from the OPEN list and place it in the CLOSED list.

o Step 4: Expand node n and generate its successors.

o Step 5: Check each successor of node n to see whether any of them is a goal node. If any successor is a goal node, return success and terminate the search; otherwise proceed to Step 6.

o Step 6: For each successor node, the algorithm evaluates the function f(n) and checks whether the node is already in the OPEN or CLOSED list. If it is in neither list, add it to the OPEN list.

o Step 7: Return to Step 2.
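
These steps can be sketched in Python roughly as follows. The graph and heuristic values are illustrative assumptions (the notes' table is not reproduced), and for brevity the goal test is applied when a node is removed from the OPEN list rather than when successors are generated:

# A minimal sketch of greedy best-first search (f(n) = h(n)) with the OPEN
# list as a priority queue and a CLOSED set.
import heapq

def greedy_best_first(graph, h, start, goal):
    open_list = [(h[start], start, [start])]   # (h value, node, path so far)
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)      # lowest h(n) first
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in graph.get(node, []):
            if succ not in closed:
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None   # OPEN list exhausted: failure

# Hypothetical graph and heuristic table (assumed values):
graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G'],
         'A': [], 'E': [], 'I': [], 'G': []}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))   # ['S', 'B', 'F', 'G']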

Advantages:

o Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms.
o This algorithm can be more efficient than BFS and DFS.

Disadvantages:

o It can behave like an unguided depth-first search in the worst case.
o It can get stuck in a loop, like DFS.
o This algorithm is not optimal.

Example:

Consider the search problem below; we will traverse it using greedy best-first search. At each iteration, nodes are expanded using the evaluation function f(n) = h(n), whose values are given in the accompanying table (the graph and table are not reproduced here).

In this search example we use two lists, the OPEN and CLOSED lists. The iterations for traversing the example are as follows.

Expand node S, put its successors in the OPEN list, and move S to the CLOSED list:

Initialization: Open [A, B], Closed [S]

Iteration 1: Open [A], Closed [S, B]

Iteration 2: Open [E, F, A], Closed [S, B]; then Open [E, A], Closed [S, B, F]

Iteration 3: Open [I, G, E, A], Closed [S, B, F]; then Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path is: S ---> B ---> F ---> G

Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.

Complete: Greedy best-first search is not complete; it can get stuck in loops even if the given state space is finite.

Optimal: Greedy best-first search is not optimal.

2.) A* Search Algorithm:

A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) together with g(n), the cost to reach node n from the start state. It combines features of UCS (uniform-cost search) and greedy best-first search, which lets it solve problems efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic function. It expands a smaller search tree and provides an optimal result faster.

The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).

In the A* search algorithm we use the search heuristic as well as the cost to reach the node. Hence we can combine both costs as follows, and this sum, f(n) = g(n) + h(n), is called the fitness number (the evaluation function).
Algorithm of A* search:

Step 1: Place the starting node in the OPEN list.

Step 2: Check whether the OPEN list is empty; if it is, return failure and stop.

Step 3: Select the node n from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is a goal node, return success and stop; otherwise continue.

Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute its evaluation function and place it into the OPEN list.

Step 5: If node n' is already in OPEN or CLOSED, attach it to the back pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.
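
A rough Python sketch of this procedure is given below. The edge costs and heuristic values are assumptions reconstructed to be consistent with the f-values in the worked example later in this section; instead of maintaining explicit back pointers, the sketch simply re-inserts a node whenever a cheaper g value is found:

# A minimal sketch of A* search with f(n) = g(n) + h(n).
import heapq

def a_star(graph, h, start, goal):
    # OPEN entries: (f, g, node, path); best_g maps node -> cheapest g found
    open_list = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):   # keep the cheapest way in
                best_g[succ] = g2
                heapq.heappush(open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None, float('inf')

# Hypothetical weighted graph: node -> list of (successor, step cost)
graph = {'S': [('A', 1), ('G', 10)], 'A': [('B', 2), ('C', 1)],
         'B': [('D', 5)], 'C': [('D', 3), ('G', 4)], 'D': [('G', 2)], 'G': []}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (['S', 'A', 'C', 'G'], 6)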

Advantages:
o The A* search algorithm performs better than many other search algorithms.
o The A* search algorithm is optimal and complete (under the conditions given below).
o This algorithm can solve very complex problems.

Disadvantages:
o It does not always produce the shortest path, since it relies on heuristics and approximation (an inadmissible heuristic can mislead it).
o The A* search algorithm has some complexity issues.
o The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for many large-scale problems.

Example:

In this example we will traverse the given graph using the A* algorithm. The heuristic value of every state is given in the accompanying table (not reproduced here), so we calculate f(n) for each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach that node from the start state.

Here we will use the OPEN and CLOSED lists.

Solution:

Initialization: {(S, 5)}

Iteration 1: {(S --> A, 4), (S --> G, 10)}

Iteration 2: {(S --> A --> C, 4), (S --> A --> B, 7), (S --> G, 10)}

Iteration 3: {(S --> A --> C --> G, 6), (S --> A --> C --> D, 11), (S --> A --> B, 7), (S --> G, 10)}

Iteration 4 gives the final result: S ---> A ---> C ---> G, the optimal path with cost 6.

Points to remember:
o The A* algorithm returns the first path found to the goal and does not search all remaining paths.
o The efficiency of the A* algorithm depends on the quality of the heuristic.
o The A* algorithm expands every node which satisfies the condition f(n) < C*, where C* is the cost of the optimal solution.

Complete: The A* algorithm is complete as long as:

o the branching factor is finite, and
o every action has a fixed (positive) cost.

Optimal: The A* search algorithm is optimal if it satisfies the following two conditions:

o Admissible: the first condition required for optimality is that h(n) should be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature; it never overestimates the true cost.
o Consistency: the second condition, consistency, is required only for A* graph search.

If the heuristic function is admissible, then A* tree search will always find the least-cost path.

Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function; in the worst case the number of nodes expanded is exponential in the depth of the solution d, so the time complexity is O(b^d), where b is the branching factor.

Space Complexity: The space complexity of the A* search algorithm is O(b^d).

Hill Climbing Algorithm in Artificial Intelligence

o The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value in order to find the peak of the mountain, i.e. the best solution to the problem. It terminates when it reaches a peak where no neighbor has a higher value.

o Hill climbing is a technique used for optimizing mathematical problems. One of the widely discussed examples is the Travelling Salesman Problem, in which we need to minimize the distance travelled by the salesman.

o It is also called greedy local search, as it only looks at its immediate neighbor states and not beyond them.

o A node of the hill climbing algorithm has two components: state and value.

o Hill climbing is mostly used when a good heuristic is available.

o In this algorithm, we do not need to maintain a search tree or graph, as it keeps only a single current state.

Features of Hill Climbing:

Following are some main features of Hill Climbing Algorithm:

o Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.

o Greedy approach: Hill climbing search moves in the direction which optimizes the cost.

o No backtracking: It does not backtrack through the search space, as it does not remember previous states.
State-space Diagram for Hill Climbing:

The state-space landscape is a graphical representation of the hill climbing algorithm, showing the relationship between the various states of the algorithm and the objective function/cost.

On the Y-axis we plot the function, which can be an objective function or a cost function, and the state space lies on the X-axis. If the function on the Y-axis is cost, then the goal of the search is to find the global minimum (local minima are traps to be avoided). If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum (local maxima are traps to be avoided).

Different regions in the state space landscape:

Local maximum: A local maximum is a state which is better than its neighbor states, but there is another state in the landscape which is higher still.

Global maximum: The global maximum is the best possible state in the state-space landscape. It has the highest value of the objective function.

Current state: The state in the landscape diagram where the agent is currently present.

Flat local maximum: A flat region of the landscape where all the neighbor states of the current state have the same value.

Shoulder: A plateau region which has an uphill edge.

Types of Hill Climbing Algorithm:


o Simple hill climbing
o Steepest-ascent hill climbing
o Stochastic hill climbing

1. Simple Hill Climbing:

Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates only one neighbor node state at a time, selects the first one which improves on the current state, and sets it as the current state. It checks only one successor state; if that state is better than the current state it moves there, otherwise it stays in the same state.

This algorithm has the following features:

o Less time consuming
o Less optimal solution, and the solution is not guaranteed

Algorithm for Simple Hill Climbing:


o Step 1: Evaluate the initial state; if it is a goal state then return success and stop.
o Step 2: Loop until a solution is found or there is no new operator left to apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check the new state:
a. If it is a goal state, then return success and quit.
b. Else, if it is better than the current state, then make the new state the current state.
c. Else, if it is not better than the current state, then return to Step 2.
o Step 5: Exit.
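
A minimal sketch of simple hill climbing is given below; the objective function and neighbour generator are illustrative assumptions (maximising f(x) = -(x - 3)^2 over the integers):

# Simple hill climbing: move to the first neighbour that is better than the
# current state; stop when no neighbour improves on it.
def simple_hill_climbing(start, value, neighbours):
    current = start
    while True:
        moved = False
        for n in neighbours(current):
            if value(n) > value(current):   # first improving neighbour wins
                current = n
                moved = True
                break
        if not moved:                       # no better neighbour: stop at a peak
            return current

value = lambda x: -(x - 3) ** 2             # assumed objective for illustration
neighbours = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(0, value, neighbours))   # 3 (the maximum)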

2. Steepest-Ascent hill climbing:

The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. It examines all the neighboring nodes of the current state and selects the neighbor node which is closest to the goal state. It consumes more time because it evaluates multiple neighbors.

Algorithm for Steepest-Ascent hill climbing:
o Step 1: Evaluate the initial state; if it is a goal state then return success and stop, else make the initial state the current state.
o Step 2: Loop until a solution is found or the current state does not change.
a. Let SUCC be a state worse than any possible successor of the current state (a placeholder for the best successor found so far).
b. For each operator that applies to the current state:
i. Apply the operator and generate a new state.
ii. Evaluate the new state.
iii. If it is a goal state, then return it and quit; else compare it to SUCC.
iv. If it is better than SUCC, then set SUCC to the new state.
c. If SUCC is better than the current state, then set the current state to SUCC.
o Step 3: Exit.
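
For comparison, a sketch of steepest-ascent hill climbing on the same illustrative objective evaluates every neighbour before moving (again an assumption-based example, not from the notes):

# Steepest-ascent hill climbing: SUCC is the best neighbour; move only if it
# beats the current state.
def steepest_ascent(start, value, neighbours):
    current = start
    while True:
        succ = max(neighbours(current), key=value)   # best neighbour = SUCC
        if value(succ) <= value(current):            # no uphill move left
            return current
        current = succ

value = lambda x: -(x - 3) ** 2             # same assumed objective as above
neighbours = lambda x: [x - 1, x + 1]
print(steepest_ascent(0, value, neighbours))   # 3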

3. Stochastic hill climbing:

Stochastic hill climbing does not examine all of its neighbors before moving. Instead, this search algorithm selects one neighbor node at random and decides whether to move to it or to examine another state.
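
A sketch of the stochastic variant is shown below; the fixed number of trials used as a stopping rule, and the objective, are assumptions for illustration:

# Stochastic hill climbing: pick one random neighbour and accept it only if
# it improves on the current state.
import random

def stochastic_hill_climbing(start, value, neighbours, trials=100):
    current = start
    for _ in range(trials):
        candidate = random.choice(neighbours(current))
        if value(candidate) > value(current):
            current = candidate
    return current

value = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(stochastic_hill_climbing(0, value, neighbours))   # usually 3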

Problems in Hill Climbing Algorithm:

1. Local Maximum: A local maximum is a peak state in the landscape which is better than each of its neighboring states, but another state exists in the landscape which is higher than it.

Solution: A backtracking technique can be a solution to the local maximum problem. Keep a list of promising paths so that the algorithm can backtrack in the search space and explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state have the same value. Because of this, the algorithm cannot find a best direction in which to move, and a hill-climbing search may get lost in the plateau area.

Solution: Take bigger (or occasionally very small) steps while searching, for example by randomly selecting a state far away from the current state, so that the algorithm has a chance of landing in a non-plateau region.

3. Ridges: A ridge is a special form of local maximum. It is an area which is higher than its surrounding areas but which itself has a slope, so it cannot be climbed by single moves along the available directions.

Solution: Using bidirectional search, or moving in several different directions at once, can alleviate this problem.
Simulated Annealing:

A hill-climbing algorithm which never makes a move towards a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum. If instead the algorithm performs a pure random walk, moving to a successor chosen at random, it may be complete but is very inefficient.

Simulated annealing is an algorithm which aims to provide both efficiency and completeness.

In metallurgy, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state. Simulated annealing uses the same idea: the algorithm picks a random move instead of the best move. If the random move improves the state, it is accepted. Otherwise, the algorithm accepts the downhill move with some probability less than 1; this probability decreases as the move gets worse and as the temperature is lowered.
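
A minimal sketch of simulated annealing is given below; the objective, neighbour rule, starting temperature, and geometric cooling schedule are all illustrative assumptions:

# Simulated annealing: always accept an improvement; accept a worse move with
# probability exp(delta / T), where the temperature T falls over time.
import math
import random

def simulated_annealing(start, value, neighbours, t0=10.0, cooling=0.95,
                        t_min=1e-3):
    current, temperature = start, t0
    while temperature > t_min:
        candidate = random.choice(neighbours(current))
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = candidate
        temperature *= cooling          # gradual "cooling"
    return current

value = lambda x: -(x - 3) ** 2         # assumed objective for illustration
neighbours = lambda x: [x - 1, x + 1]
print(simulated_annealing(0, value, neighbours))   # usually close to 3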
MCQ

1. Problem solving agents are also called as

a) Simple agent
b) Reflex agent
c) Rational agent
d) Goal based agent

2. Which represents a set of possible solutions which a system may have?

a) Search space
b) Start state
c) Search tree
d) Goal test

3. If a solution has the lowest cost among all solutions, then it is called

a) Optimal solution
b) Path cost
c) Transition model
d) None of the above

4. Which search does not contain any domain knowledge such as closeness or the location of the goal?

a) Uninformed search
b) Informed search
c) Blind search
d) Both A and C

5. Which algorithm is a combination of DFS and BFS algorithms?

a) Iterative deepening depth-first Search
b) Simple Search
c) Complex search
d) Bidirectional search
6. If the environment is not fully observable or deterministic, then
which type of problems will occur?

a) Contingency problem
b) Conformant problem
c) Sensorless problems
d) All the above

7. The Estimated cost of cheapest solution f(n) =

a) h(n)
b) g(n)
c) h(n) * g(n)
d) h(n) + g(n)

8. Which is defined by the value of the objective function or heuristic cost function?

a) Location
b) Elevation
c) Both
d) None of the Above

9. Which type of Search Algorithm requires less computation?

a) Informed search
b) Uninformed search
c) Both
d) None of the above

10. A node of hill climbing algorithm has

a) State components
b) Value components
c) Both
d) None of the above
