
UNIT 2. AI part 1

The document discusses problem-solving agents in artificial intelligence, highlighting their goal-driven nature and the steps involved in problem formulation. It covers various search techniques, including uninformed and informed search algorithms, such as Depth First Search, Breadth First Search, and A* Search, detailing their methodologies, complexities, and advantages. The A* Search algorithm is emphasized for its optimality, completeness, and efficiency in finding the shortest path in weighted graphs.

Uploaded by

Manas Thakur

UNIT 2

PROBLEM-SOLVING APPROACH IN ARTIFICIAL INTELLIGENCE

Reflex agents are known as the simplest agents because they directly map states
into actions. Unfortunately, these agents fail to operate in environments where the mapping
is too large to store and learn. A goal-based agent, on the other hand, considers future actions
and the desired outcomes.
Here, we will discuss one type of goal-based agent known as a problem-solving agent,
which uses atomic representation with no internal states visible to the problem-solving
algorithms.
Problem-solving agent
The problem-solving agent performs precisely by defining problems and their several
solutions.
According to psychology, “problem solving refers to a state where we wish to reach
a definite goal from a present state or condition.”
According to computer science, problem solving is a part of artificial intelligence
that encompasses a number of techniques, such as algorithms and heuristics, to solve a
problem.
Therefore, a problem-solving agent is a goal-driven agent and focuses on satisfying the goal.
Steps performed by Problem-solving agent
• Goal Formulation: It is the first and simplest step in problem-solving. It
organizes the steps/sequence required to formulate one goal out of multiple goals, as well
as the actions needed to achieve that goal. Goal formulation is based on the current situation
and the agent’s performance measure.
• Problem Formulation: It is the most important step of problem-solving, as it
decides what actions should be taken to achieve the formulated goal. The following
five components are involved in problem formulation:
• Initial State: It is the starting state or initial step of the agent towards its goal.
• Actions: It is the description of the possible actions available to the agent.
• Transition Model: It describes what each action does.
• Goal Test: It determines if the given state is a goal state.
• Path cost: It assigns a numeric cost to each path. The
problem-solving agent selects a cost function that reflects its performance measure.
Remember, an optimal solution has the lowest path cost among all the solutions.
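The five components above can be sketched as a small Python class. This is a minimal illustration, not a standard API: the class name, the toy route map, and the method names are all assumptions made for this example.

```python
# A minimal sketch of the five problem-formulation components; the
# class name, toy map, and method names are illustrative assumptions.

class RouteProblem:
    def __init__(self, initial, goal, graph):
        self.initial = initial        # Initial State
        self.goal = goal              # target used by the Goal Test
        self.graph = graph            # transition costs: state -> {state: cost}

    def actions(self, state):
        # Actions: the moves available in this state
        return list(self.graph[state])

    def result(self, state, action):
        # Transition Model: what each action does (here, move to a neighbor)
        return action

    def goal_test(self, state):
        # Goal Test: is the given state a goal state?
        return state == self.goal

    def path_cost(self, path):
        # Path cost: numeric cost of a path; lower is better
        return sum(self.graph[a][b] for a, b in zip(path, path[1:]))

problem = RouteProblem("S", "G", {"S": {"A": 1, "B": 4},
                                  "A": {"G": 5}, "B": {"G": 1}, "G": {}})
print(problem.path_cost(["S", "B", "G"]))   # 5
```

In this toy map the optimal solution is the path with the lowest path cost: S → B → G (cost 5) beats S → A → G (cost 6).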

Searching Techniques
Artificial Intelligence is the study of building agents that act rationally. Most of the
time, these agents perform some kind of search algorithm in the background in
order to achieve their tasks.
 A search problem consists of:

o A State Space. Set of all possible states where you can be.
o A Start State. The state from where the search begins.
o A Goal Test. A function that looks at the current state and returns whether
or not it is the goal state.
 The Solution to a search problem is a sequence of actions, called
the plan that transforms the start state to the goal state.
 This plan is achieved through search algorithms.
Uninformed Search Algorithms:
The search algorithms in this section have no additional information on the goal
node other than the one provided in the problem definition. The plans to reach the
goal state from the start state differ only by the order and/or length of actions.
Uninformed search is also called blind search. These algorithms can only generate
the successors and differentiate between goal and non-goal states.
Depth First Search:
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph
data structures. The algorithm starts at the root node (selecting some arbitrary node
as the root node in the case of a graph) and explores as far as possible along each
branch before backtracking. It uses a last-in, first-out (LIFO) strategy and hence is
implemented using a stack.
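The stack-based strategy can be sketched as follows; the adjacency list is a hypothetical graph in which the deepest-first route from S to G is S → A → B → C → G, mirroring the example below.

```python
# A short sketch of graph DFS using an explicit stack (LIFO);
# the example adjacency list is a hypothetical graph.

def dfs(graph, start, goal):
    stack = [(start, [start])]          # each entry: (node, path so far)
    visited = set()
    while stack:
        node, path = stack.pop()        # last in, first out
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push neighbors; the one pushed last is explored first
        for neighbor in reversed(graph[node]):
            if neighbor not in visited:
                stack.append((neighbor, path + [neighbor]))
    return None                         # stack exhausted: no path exists

graph = {"S": ["A", "D"], "A": ["B"], "B": ["C"], "C": ["G"], "D": ["G"], "G": []}
print(dfs(graph, "S", "G"))   # ['S', 'A', 'B', 'C', 'G']
```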
Example:
Question. Which solution would DFS find to move from node S to node G if
run on the graph below?

Solution. The equivalent search tree for the above graph is as follows. As
DFS traverses the tree “deepest node first”, it would always pick the deeper
branch until it reaches the solution (or it runs out of nodes, and goes to the
next branch). The traversal is shown in blue arrows.
Path: S -> A -> B -> C -> G

Let d = the depth of the search tree (the number of levels), and n = the branching
factor, so that n^i is the number of nodes in level i.

Time complexity: equivalent to the number of nodes traversed in DFS,
T(n) = 1 + n + n^2 + … + n^d = O(n^d).
Space complexity: equivalent to how large the fringe can get,
S(n) = O(n × d).
Completeness: DFS is complete if the search tree is finite, meaning for a
given finite search tree, DFS will come up with a solution if it exists.
Optimality: DFS is not optimal; the number of steps taken to reach the
solution, or the cost spent in reaching it, may be high.
Breadth First Search:
Breadth-first search (BFS) is an algorithm for traversing or searching tree or
graph data structures. It starts at the tree root (or some arbitrary node of a
graph, sometimes referred to as a ‘search key’), and explores all of the
neighbor nodes at the present depth prior to moving on to the nodes at the
next depth level. It is implemented using a queue.
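The queue-based traversal can be sketched the same way; the adjacency list is a hypothetical graph in which the shallowest route from S to G is S → D → G, mirroring the example below.

```python
from collections import deque

# A short sketch of graph BFS using a FIFO queue;
# the example adjacency list is a hypothetical graph.

def bfs(graph, start, goal):
    queue = deque([(start, [start])])    # each entry: (node, path so far)
    visited = {start}
    while queue:
        node, path = queue.popleft()     # first in, first out
        if node == goal:
            return path
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + [neighbor]))
    return None                          # queue exhausted: no path exists

graph = {"S": ["A", "D"], "A": ["B"], "B": ["C"], "C": ["G"], "D": ["G"], "G": []}
print(bfs(graph, "S", "G"))   # ['S', 'D', 'G']
```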

Example:
Question. Which solution would BFS find to move from node S to node G if
run on the graph below?

Solution. The equivalent search tree for the above graph is as follows. As
BFS traverses the tree “shallowest node first”, it would always pick the
shallower branch until it reaches the solution (or it runs out of nodes, and
goes to the next branch). The traversal is shown in blue arrows.
Path: S -> D -> G
Let s = the depth of the shallowest solution, and n = the branching factor, so that
n^i is the number of nodes in level i.
Time complexity: equivalent to the number of nodes traversed in BFS until
the shallowest solution, T(n) = 1 + n + n^2 + … + n^s = O(n^s).
Space complexity: equivalent to how large the fringe can get,
S(n) = O(n^s).
Completeness: BFS is complete, meaning for a given search tree, BFS will
come up with a solution if it exists.

Optimality: BFS is optimal as long as the costs of all edges are equal.
Informed Search Algorithms:

Generate and Test Search


Generate and Test Search is a heuristic search technique based on Depth First Search
with Backtracking which guarantees to find a solution if done systematically and
there exists a solution. In this technique, all the solutions are generated and tested for
the best solution. It ensures that the best solution is checked against all possible
generated solutions.
It is also known as the British Museum Search Algorithm, as it is like looking for an
exhibit or finding an object in the British Museum by wandering randomly.
All the solutions are generated systematically in the generate and test algorithm, and
the evaluation is carried out by the heuristic function; paths that are most unlikely to
lead us to the result are not considered. The heuristic does this by ranking all the
alternatives, and it is often effective in doing so.
Systematic generate and test may prove to be ineffective while solving complex
problems. But performance can be improved in complex cases as well by
combining generate and test search with other techniques so as to reduce the search
space. For example, the artificial intelligence program DENDRAL uses
two techniques: constraint satisfaction techniques followed by the
generate and test procedure, which then works on a reduced search space, i.e., it yields
an effective result by working on the smaller number of candidate lists generated in the
first step.

Algorithm

1. Generate a possible solution. For example, generate a particular point
in the problem space or a path from the start state.
2. Test to see if this is an actual solution by comparing the chosen point or
the endpoint of the chosen path to the set of acceptable goal states.
3. If a solution is found, quit. Otherwise, go to Step 1.
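The three steps above can be sketched as a generic loop; the 3-digit combination target and the helper names are hypothetical stand-ins for a real problem space.

```python
import itertools

# A sketch of the generate-and-test loop; the secret combination and
# the generator are hypothetical examples of a problem space.

def generate_and_test(candidates, is_solution):
    for candidate in candidates:     # Step 1: take the next generated solution
        if is_solution(candidate):   # Step 2: test it against the goal
            return candidate         # Step 3: quit on success
    return None                      # generator exhausted: no solution exists

secret = (4, 2, 7)
candidates = itertools.product(range(10), repeat=3)   # systematic generation
print(generate_and_test(candidates, lambda c: c == secret))   # (4, 2, 7)
```

Here `itertools.product` makes the generator complete and non-redundant: every 3-digit combination appears exactly once.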
Properties of Good Generators:

The good generators need to have the following properties:


 Complete: Good generators need to be complete, i.e., they should
generate all the possible solutions and cover all the possible states. In
this way, we can guarantee that our algorithm converges to the correct
solution at some point in time.
 Non Redundant: Good Generators should not yield a duplicate solution
at any point of time as it reduces the efficiency of algorithm thereby
increasing the time of search and making the time complexity exponential.
In fact, it is often said that if solutions appear several times in the depth-
first search then it is better to modify the procedure to traverse a graph
rather than a tree.
 Informed: Good Generators have the knowledge about the search space
which they maintain in the form of an array of knowledge. This can be
used to search how far the agent is from the goal, calculate the path cost
and even find a way to reach the goal.
 Let us take a simple example to understand the importance of a good
generator. Consider a pin made up of three 2 digit numbers i.e. the
numbers are of the form,


 In this case, one way to find the required pin is to generate all the
solutions in a brute force manner for example,
The total number of solutions in this case is 100^3, which is 1 million. So if we
do not use any informed search technique, the search space grows exponentially.
Now let’s say we generate 5 solutions every minute. Then the total number
generated in 1 hour is 5 × 60 = 300, and the total number of solutions to be generated
is 1 million. Consider a brute-force search technique, for example linear search, whose
average time complexity is N/2. Then, on average, the total number of
solutions to be generated is approximately 5 lakh (500,000). Using this technique,
even if you work for about 24 hrs a day, you will still need 10 weeks to
complete the task.
Now consider using a heuristic function with the domain knowledge
that every number is a prime number between 0 and 99. The possible
number of solutions is then 25^3, which is approximately 15,000. In the same
setting, generating 5 solutions every minute and working
24 hrs a day, you can find the solution in less than 2 days, versus the
10 weeks needed in the case of uninformed search.
We can conclude from here that if we can find a good heuristic, the time
complexity can be reduced considerably. But in the worst case, time and space
complexity will still be exponential. It all depends on the generator: the better the
generator, the lower the time complexity.
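The counts used in this example can be verified with a short script (a sketch; the prime test is a naive trial-division check):

```python
# A quick check of the arithmetic in this example: the brute-force
# space of three 2-digit numbers versus the prime-constrained space.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

primes = [n for n in range(100) if is_prime(n)]
print(100 ** 3)          # 1000000 candidates by brute force
print(len(primes))       # 25 primes below 100
print(len(primes) ** 3)  # 15625 candidates with the heuristic
```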

Best first Search


Best First Search falls under the category of Heuristic Search or Informed
Search.
Implementation of Best First Search:
We use a priority queue or heap to store nodes, ordered by their evaluation
function value, and always expand the node with the lowest value. The
implementation is thus a variation of BFS: we just need to change the Queue
to a PriorityQueue.

Illustration:
Let us consider the below example:

 We start from source “S” and search for goal “I” using given costs and
Best First search.

 pq initially contains S.
o We remove S from pq and add the unvisited neighbors of S to
pq.
o pq now contains {A, C, B} (C is put before B because C has
a lower cost).

 We remove A from pq and add the unvisited neighbors of A to pq.

o pq now contains {C, B, E, D}.

 We remove C from pq and add the unvisited neighbors of C to pq.

o pq now contains {B, H, E, D}.

 We remove B from pq and add the unvisited neighbors of B to pq.

o pq now contains {H, E, D, F, G}.
 We remove H from pq.
 Since our goal “I” is a neighbor of H, we return.
Analysis:
 The worst-case time complexity for Best First Search is O(n * log n)
where n is the number of nodes. In the worst case, we may have to visit
all nodes before we reach the goal. Note that the priority queue is implemented
using a Min (or Max) Heap, and insert and remove operations take O(log n)
time.
 The performance of the algorithm depends on how well the cost or
evaluation function is designed.
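The illustration above can be sketched with Python's heapq module. The adjacency list and cost table below are assumptions chosen so that the expansion order reproduces the walkthrough (S, A, C, B, H, then goal I); the costs in the original figure are not given.

```python
import heapq

# A greedy best-first sketch keyed on an assumed per-node cost table;
# graph and costs are illustrative, chosen to mirror the walkthrough.

def best_first_search(graph, cost, start, goal):
    pq = [(cost[start], start)]
    visited = set()
    while pq:
        _, node = heapq.heappop(pq)      # always take the cheapest node
        if node == goal:
            return True
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                heapq.heappush(pq, (cost[neighbor], neighbor))
    return False

graph = {"S": ["A", "B", "C"], "A": ["D", "E"], "B": ["F", "G"],
         "C": ["H"], "H": ["I"], "D": [], "E": [], "F": [], "G": [], "I": []}
cost = {"S": 0, "A": 3, "B": 6, "C": 5, "D": 9, "E": 8,
        "F": 12, "G": 14, "H": 7, "I": 5}
print(best_first_search(graph, cost, "S", "I"))   # True
```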

A* Search Algorithm in Artificial Intelligence


An Introduction to A* Search Algorithm in AI
A* (pronounced "A-star") is a powerful graph traversal and pathfinding algorithm widely
used in artificial intelligence and computer science. It is mainly used to find the shortest
path between two nodes in a graph, given the estimated cost of getting from the current
node to the destination node. The main advantage of the algorithm is its ability to provide
an optimal path by exploring the graph in a more informed way compared to traditional
search algorithms such as Dijkstra's algorithm.

A* combines the advantages of two other search algorithms: Dijkstra's
algorithm and Greedy Best-First Search. Like Dijkstra's algorithm, A* ensures that the path
found is as short as possible but does so more efficiently by directing its search through
a heuristic similar to Greedy Best-First Search. A heuristic function, denoted h(n), estimates
the cost of getting from any given node n to the destination node.

The main idea of A* is to evaluate each node based on two parameters:

1. g(n): the actual cost to get from the initial node to node n. It represents the sum of the
costs of the edges along the path from the start node to n.
2. h(n): the heuristic cost (also known as "estimation cost") from node n to the destination
node. This problem-specific heuristic function must be admissible, meaning it never
overestimates the actual cost of reaching the goal. The evaluation function of node n is
defined as f(n) = g(n) + h(n).

A* selects the nodes to be explored based on the lowest value of f(n), preferring the
nodes with the lowest estimated total cost to reach the goal. The A* algorithm works as follows:

1. Create an open list of found but not-yet-explored nodes.
2. Create a closed list to hold already-explored nodes.
3. Add the starting node to the open list with an initial value of g = 0.
4. Repeat the following steps until the open list is empty or you reach the target node:
a. Find the node with the smallest f-value (i.e., the node with the smallest g(n) +
h(n)) in the open list.
b. Move the selected node from the open list to the closed list.
c. Generate all valid successors of the selected node.
d. For each successor, calculate its g-value as the sum of the current node's g-value
and the cost of moving from the current node to the successor node.
Update the successor's g-value whenever a better path is found.
e. If the successor is not in the open list, add it with the calculated g-value and
calculate its h-value. If it is already in the open list, update its g-value if the
new path is better.
f. Repeat the cycle. A* terminates when the target node is reached
or when the open list empties, indicating there is no path from the start node to
the target node. The A* search algorithm is widely used in various fields such
as robotics, video games, network routing, and design problems because it
is efficient and can find optimal paths in graphs or networks.

However, choosing a suitable and admissible heuristic function is essential so that the
algorithm performs correctly and provides an optimal solution.
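Putting the steps together, here is a compact A* sketch over a small weighted graph. The graph and the admissible heuristic table are illustrative assumptions; the closed list is kept implicitly via a table of the cheapest g-value found per node.

```python
import heapq

# A compact A* sketch; the graph and the admissible heuristic table h
# are illustrative assumptions, not taken from any figure above.

def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]    # entries: (f, g, node, path)
    best_g = {start: 0}                            # cheapest g found per node
    while open_list:
        f, g, node, path = heapq.heappop(open_list)   # smallest f first
        if node == goal:
            return path, g
        for neighbor, cost in graph[node].items():
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g           # better path: record it
                heapq.heappush(open_list, (new_g + h[neighbor], new_g,
                                           neighbor, path + [neighbor]))
    return None, float("inf")                      # open list empty: no path

graph = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 12},
         "B": {"G": 3}, "G": {}}
h = {"S": 5, "A": 4, "B": 2, "G": 0}   # admissible: never overestimates
print(a_star(graph, h, "S", "G"))      # (['S', 'A', 'B', 'G'], 6)
```

Note how A* rejects the direct but expensive edge A → G (cost 12) in favor of the cheaper route through B, because f-values steer expansion toward the lowest estimated total cost.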

Advantages of A* Search Algorithm in Artificial Intelligence
The A* search algorithm offers several advantages in artificial intelligence and problem-
solving scenarios:

1. Optimal solution: A* ensures finding the optimal (shortest) path from the start
node to the destination node in a weighted graph, given an admissible heuristic
function. This optimality is a decisive advantage in many applications where finding
the shortest path is essential.
2. Completeness: If a solution exists, A* will find it, provided the graph has finite,
positive edge costs. This completeness property ensures that A* will find
a solution whenever one exists.
3. Efficiency: A* is efficient if an efficient and admissible heuristic function is used.
Heuristics guide the search toward the goal by focusing on promising paths and avoiding
unnecessary exploration, making A* more efficient than uninformed search
algorithms such as breadth-first search or depth-first search.
4. Versatility: A* is widely applicable to various problem areas, including wayfinding,
route planning, robotics, game development, and more. A* can be used to find
optimal solutions efficiently as long as a meaningful heuristic can be defined.
5. Optimized search: A* maintains a priority queue to select the node with the smallest
f(n) value (g(n) + h(n)) for expansion. This allows it to explore promising paths
first, which reduces the search space and leads to faster convergence.
6. Memory efficiency: Unlike some other search algorithms, such as breadth-first
search, A* stores only a limited number of nodes in the priority queue, which makes
it memory efficient, especially for large graphs.
7. Tunable heuristics: A*'s performance can be fine-tuned by selecting different
heuristic functions. More informed heuristics can lead to faster convergence and
fewer expanded nodes.
8. Extensively researched: A* is a well-established algorithm with decades of
research and practical applications. Many optimizations and variations have been
developed, making it a reliable and well-understood problem-solving tool.
9. Online search: A* can be adapted for online path search, where the algorithm
continually updates the path according to changes in the environment or the
appearance of new information. It enables real-time decision-making in dynamic scenarios.

Disadvantages of A* Search Algorithm in Artificial Intelligence
Although the A* (pronounced "A-star") search algorithm is a widely used and powerful technique for
solving AI pathfinding and graph traversal problems, it has disadvantages and limitations.
Here are some of the main disadvantages of the algorithm:

1. Heuristic accuracy: The performance of the A* algorithm depends heavily on the accuracy
of the heuristic function used to estimate the cost from the current node to the goal. If the
heuristic is inadmissible (it overestimates the actual cost) or inconsistent (it violates the
triangle inequality), A* may not find an optimal path or may explore more nodes than
necessary, affecting its efficiency and accuracy.
2. Memory usage: A* requires that all visited nodes be kept in memory to keep track of
explored paths. Memory usage can sometimes become a significant issue, especially when
dealing with an ample search space or limited memory resources.
3. Time complexity: Although A* is generally efficient, its time complexity can be a concern
for vast search spaces or graphs. In the worst case, A* can take exponentially long to find
the optimal path if the heuristic is inappropriate for the problem.
4. Bottleneck at the destination: In specific scenarios, the A* algorithm needs to explore
nodes far from the destination before finally reaching the destination region. This
problem occurs when the heuristic fails to direct the search toward the goal effectively early on.
5. Tie-breaking: A* faces difficulties when multiple nodes have the same f-value (the sum of
the actual cost and the heuristic cost). The tie-breaking strategy used can affect the optimality and
efficiency of the discovered path. If not handled correctly, it can lead to unnecessary nodes
being explored and slow down the algorithm.
6. Complexity in dynamic environments: In dynamic environments where the cost of
edges or nodes may change during the search, A* may not be suitable because it does not
adapt well to such changes. Replanning from scratch can be computationally expensive,
and D* (Dynamic A*) algorithms were designed to solve this problem.
7. Completeness in infinite spaces: A* may not find a solution in an infinite state space. In such
cases, it can run indefinitely, exploring an ever-increasing number of nodes without finding
a solution. Despite these shortcomings, A* is still a robust and widely used algorithm
because it can effectively find optimal paths in many practical situations if the heuristic
function is well-designed and the search space is manageable. Various variants
of A* have been proposed to alleviate some of its limitations.

Applications of the A* Search Algorithm in Artificial Intelligence
The A* (pronounced "A-star") search algorithm is a widely used and robust pathfinding algorithm in
artificial intelligence and computer science. Its efficiency and optimality make it suitable
for various applications. Here are some typical applications of the A* search algorithm in
artificial intelligence:

1. Pathfinding in games: A* is often used in video games for character movement, enemy
AI navigation, and finding the shortest path from one location to another on the game
map. Its ability to find the optimal path based on cost and heuristics makes it ideal for
real-time applications such as games.
2. Robotics and autonomous vehicles: A* is used in robotics and autonomous vehicle
navigation to plan an optimal route for robots to reach a destination, avoiding obstacles
and considering terrain costs. This is crucial for efficient and safe movement in natural
environments.
3. Maze solving: A* can efficiently find the shortest path through a maze, making it valuable
in many maze-solving applications, such as solving puzzles or navigating complex
structures.
4. Route planning and navigation: In GPS systems and mapping applications, A* can be
used to find the optimal route between two points on a map, considering factors such as
distance, traffic conditions, and road network topology.
5. Puzzle-solving: A* can solve various grid puzzles, such as sliding puzzles, Sudoku,
and the 8-puzzle problem.
6. Resource allocation: In scenarios where resources must be optimally allocated, A*
can help find the most efficient allocation path, minimizing cost and maximizing efficiency.
7. Network routing: A* can be used in computer networks to find the most efficient route
for data packets from a source to a destination node.
8. Natural Language Processing (NLP): In some NLP tasks, A* can generate coherent and
contextual responses by searching over possible word sequences based on their likelihood
and relevance.
9. Path planning in robotics: A* can be used to plan the path of a robot from one point to
another, considering various constraints, such as avoiding obstacles or minimizing energy
consumption.
10. Game AI: A* is also used to make intelligent decisions for non-player characters (NPCs),
such as determining the best way to reach an objective or coordinating movements in a
team-based game.

These are just a few examples of how the A* search algorithm finds applications in various
areas of artificial intelligence. Its flexibility, efficiency, and optimization make it a valuable
tool for many problems.

AO* Algorithm
The AO* algorithm performs a best-first search. The AO* method divides any
given difficult problem into a smaller group of problems that are then resolved
using the AND-OR graph concept. AND-OR graphs are specialized graphs used
for problems that can be divided into smaller subproblems. The AND side of the
graph represents a set of tasks that must all be completed to achieve the main goal,
while the OR side of the graph represents alternative methods for accomplishing the
same main goal.

The figure above is an example of a simple AND-OR graph in which buying a car is
broken down into smaller problems or tasks that can be accomplished to achieve the
main goal. One alternative (OR) is to steal a car, which would accomplish the main
goal; the other is to use your own money to purchase a car, which would also
accomplish it. The AND symbol marks the AND part of the graph and indicates that
all the subproblems it connects must be resolved before the preceding node or issue
can be considered finished.
In the AO* algorithm, a knowledge-based search strategy, the start state and the
target state are already known, and the best path is identified using
heuristics. The informed search technique considerably reduces the algorithm’s
time complexity. The AO* algorithm is far more effective at searching AND-OR
trees than the A* algorithm.
Working of AO* algorithm:
The evaluation function in AO* looks like this:
f(n) = g(n) + h(n)
f(n) = Actual cost + Estimated cost
here,
f(n) = the estimated total cost of traversal through node n,
g(n) = the actual cost from the initial node to the current node,
h(n) = the estimated cost from the current node to the goal state.
Difference between the A* Algorithm and AO* algorithm
 The A* algorithm and the AO* algorithm both work on best-first search.
 They are both informed searches and work on given heuristic values.
 A* always gives the optimal solution, but AO* does not guarantee the
optimal solution.
 Once AO* finds a solution, it does not explore all possible paths, whereas A*
explores all paths.
 When compared to the A* algorithm, the AO* algorithm uses less memory.
 Unlike the A* algorithm, the AO* algorithm cannot go into an endless
loop.

Example:

In the example above, the value given below each node is its heuristic value,
i.e., h(n). Each edge length is taken as 1.
Step 1

With help of f(n) = g(n) + h(n) evaluation function,


Start from node A,
f(A⇢B) = g(B) + h(B)
= 1 + 5 ……here g(n)=1 is taken by
default for path cost
= 6

f(A⇢C+D) = g(c) + h(c) + g(d) + h(d)


= 1 + 2 + 1 + 4 ……here we have added C & D
because they are in AND
= 8
So, by calculation A⇢B path is chosen which is the minimum path,
i.e f(A⇢B)

Step 2
According to the answer of step 1, explore node B
Here the value of E & F are calculated as follows,

f(B⇢E) = g(e) + h(e)


f(B⇢E) = 1 + 7
= 8

f(B⇢f) = g(f) + h(f)


f(B⇢f) = 1 + 9
= 10
So, by the above calculation, the B⇢E path is chosen as the minimum path,
i.e., f(B⇢E).
Because B's heuristic value differs from its computed actual value, the
heuristic is updated and the minimum-cost path is selected. The minimum
value in our situation is 8.
Therefore, the heuristic for A must be updated due to the change in
B's heuristic.
So we need to calculate it again.

f(A⇢B) = g(B) + updated h(B)


= 1 + 8
= 9
We have Updated all values in the above tree.

Step 3

By comparing f(A⇢B) & f(A⇢C+D)


f(A⇢C+D) is shown to be smaller. i.e 8 < 9
Now explore f(A⇢C+D)
So, the current node is C

f(C⇢G) = g(g) + h(g)


f(C⇢G) = 1 + 3
= 4

f(C⇢H+I) = g(h) + h(h) + g(i) + h(i)


f(C⇢H+I) = 1 + 0 + 1 + 0 ……here we have added H & I
because they are in AND
= 2

f(C⇢H+I) is selected as the path with the lowest cost, and the
heuristic is left unchanged because it matches the actual cost. Paths
H & I are solved because the heuristic for those paths is 0,
but path A⇢D must also be calculated because D is part of an AND arc with C.

f(D⇢J) = g(j) + h(j)


f(D⇢J) = 1 + 0
= 1
the heuristic of node D needs to be updated to 1.

f(A⇢C+D) = g(c) + h(c) + g(d) + h(d)


= 1 + 2 + 1 + 1
= 5

As we can see, the path f(A⇢C+D) is now solved, and this tree has
become a solved tree.
In simple words, the main flow of this algorithm is to first compute the
heuristic values at level 1, then at level 2, and then propagate the
updated values upward toward the root node.
In the above tree diagram, we have updated all the values.
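The bottom-up cost revision in this example can be reproduced with a small recursive sketch. The encoding is an assumption made for illustration: each node maps to its OR options, where a two-element option models an AND arc (both children must be solved); the leaf heuristics are the h(n) values from the example, and every edge costs 1.

```python
# A small sketch reproducing the AO* cost revision in the example above.
# options: node -> list of OR options; a two-element option is an AND arc.

options = {
    "A": [["B"], ["C", "D"]],
    "B": [["E"], ["F"]],
    "C": [["G"], ["H", "I"]],
    "D": [["J"]],
}
h = {"E": 7, "F": 9, "G": 3, "H": 0, "I": 0, "J": 0}  # leaf heuristics

def revised_cost(node):
    if node not in options:          # leaf: its heuristic value stands
        return h[node]
    # cost of an option = sum over its children of (edge cost 1 + child cost)
    return min(sum(1 + revised_cost(child) for child in opt)
               for opt in options[node])

print(revised_cost("B"))   # 8: h(B) is revised upward from 5
print(revised_cost("C"))   # 2: via the AND arc H + I
print(revised_cost("D"))   # 1: via J
print(revised_cost("A"))   # 5: solved via C + D, matching the solved tree
```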

LOCAL SEARCH ALGORITHMS


A local search algorithm in artificial intelligence is a type of optimization
algorithm used to find the best solution to a problem by repeatedly making minor
adjustments to an initial solution.
When trying to find an exact solution to a problem or when doing so would be too
computationally expensive, local search algorithms come in particularly handy.
A local search algorithm in artificial intelligence works by starting with an initial
solution and then making minor adjustments to it in the hopes of discovering a
better one. Every time the algorithm iterates, the current solution is assessed, and a
small modification to the current solution creates a new solution. The current
solution is then compared to the new one, and if the new one is superior, it replaces
the old one. This process keeps going until a satisfactory answer is discovered or a
predetermined stopping criterion is satisfied.
1) HILL CLIMBING ALGORITHM

o Hill climbing algorithm is a local search algorithm which continuously moves in the
direction of increasing elevation/value to find the peak of the mountain or best
solution to the problem. It terminates when it reaches a peak value where no
neighbor has a higher value.
o Hill climbing algorithm is a technique used for optimizing
mathematical problems. One of the widely discussed examples of the hill climbing
algorithm is the Traveling Salesman Problem, in which we need to minimize the
distance traveled by the salesman.
o It is also called greedy local search as it only looks to its good immediate neighbor
state and not beyond that.
o A node of hill climbing algorithm has two components which are state and value.
o Hill Climbing is mostly used when a good heuristic is available.
o In this algorithm, we don't need to maintain and handle the search tree or graph
as it only keeps a single current state.
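A minimal hill-climbing sketch follows; the objective function (a single peak at x = 7) and the neighborhood function are toy assumptions chosen so greedy uphill moves reach the top.

```python
# A minimal hill-climbing sketch; the objective and neighborhood
# functions are toy assumptions with a single peak.

def hill_climb(start, objective, neighbors):
    current = start
    while True:
        best = max(neighbors(current), key=objective)
        if objective(best) <= objective(current):
            return current           # no neighbor is higher: we are at a peak
        current = best               # greedy move in the uphill direction

objective = lambda x: -(x - 7) ** 2  # elevation, maximized at x = 7
neighbors = lambda x: [x - 1, x + 1] # only the immediate neighbor states
print(hill_climb(0, objective, neighbors))   # 7
```

Because this landscape has a single peak, the greedy loop reaches the global maximum; with several peaks the same loop would stop at whichever local maximum is nearest the start.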

Advantages of Hill climb algorithm:


The merits of Hill Climbing algorithm are given below.

1. Simplicity: Hill climbing is easy to understand and implement, which makes it a
convenient first choice for complex optimization problems.
2. Low memory use: It stores only the current problem state and the solutions located
around it, unlike a tree search method that may need to inspect large parts of the
entire tree. Consequently, the total memory resources used are much smaller.
3. Speed: Hill climbing usually converges quickly to a nearby local maximum. This is
the route to take when quickly getting a reasonable solution matters more than
acquiring the global maximum.

Disadvantages of Hill Climbing Algorithm

1. Hill climbing can fail to find the optimum point and remain stuck at a local peak,
particularly when the optimization must be done in complex environments with many
objective functions.
2. It is short-sighted: it only examines the surrounding solutions and does not look
farther than that. It can therefore commit to a course based on a locally optimal
solution even when the global optimum lies far away from the current position.
3. The end result depends largely on the initial setup and state, which is often the
most sensitive factor: a poor starting point can determine the success or failure
of the search before it begins.

Features of Hill Climbing:


Following are some main features of Hill Climbing Algorithm:

o Generate and Test variant: Hill Climbing is a variant of the Generate and Test method. The
Generate and Test method produces feedback which helps to decide which direction to
move in the search space.
o Greedy approach: Hill-climbing algorithm search moves in the direction which optimizes
the cost.
o No backtracking: It does not backtrack the search space, as it does not remember the
previous states.
o Deterministic Nature:
Hill Climbing is a deterministic optimization algorithm, which means that given the same
initial conditions and the same problem, it will always produce the same result. There is no
randomness or uncertainty in its operation.
o Local Neighborhood:
Hill Climbing is a technique that operates within a small area around the current solution.
It explores solutions that are closely related to the current state by making small, gradual
changes. This approach allows it to find a solution that is better than the current one
although it may not be the global optimum.
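Because the algorithm keeps only a single current state and greedily moves to the best neighbor, it fits in a few lines. A minimal Python sketch (the objective function and integer neighborhood below are illustrative choices, not part of the text):

```python
def hill_climb(f, start, neighbors, max_steps=1000):
    """Greedy hill climbing: keep a single current state and move to the
    best neighbor while it improves the objective f."""
    current = start
    for _ in range(max_steps):
        best = max(neighbors(current), key=f, default=current)
        if f(best) <= f(current):   # no uphill move left: local maximum
            return current
        current = best              # greedy step toward a higher value of f
    return current

# Example: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]     # local neighborhood of x
print(hill_climb(f, start=-10, neighbors=step))   # → 3
```

Note the deterministic, no-backtracking behavior described above: the same start state always produces the same result, and once no neighbor improves f, the search stops.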

Different regions in the state space landscape:


Local Maximum: Local maximum is a state which is better than its neighbor states, but
there is also another state which is higher than it.

Global Maximum: Global maximum is the best possible state of state space landscape.
It has the highest value of objective function.

Current state: It is a state in a landscape diagram where an agent is currently present.

Flat local maximum: It is a flat space in the landscape where all the neighbor states of
the current state have the same value.

Shoulder: It is a plateau region which has an uphill edge.


Problems in Hill Climbing Algorithm:
1. Local Maximum: A local maximum is a peak state in the landscape which is better than
each of its neighboring states, but there is another state also present which is higher than
the local maximum.

Solution: Backtracking can be a solution to the local maximum problem in the state space
landscape. Maintain a list of promising paths so that the algorithm can backtrack in the
search space and explore other paths as well.

2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of
the current state contain the same value, so the algorithm cannot find a best
direction to move. A hill-climbing search might get lost in the plateau area.

Solution: The solution for the plateau is to take bigger (or very small) steps while
searching. For example, randomly select a state far away from the current state, so that
the algorithm may land in a non-plateau region.
3. Ridges: A ridge is a special form of the local maximum. It has an area which is higher
than its surrounding areas, but itself has a slope, and cannot be reached in a single move.

Solution: This problem can be improved by using bidirectional search, or by moving in
several different directions at once.
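The remedies above share one idea: restart or redirect the search when it stalls. A common form is random-restart hill climbing, sketched below (the landscape and its values are illustrative, not from the text):

```python
import random

# A small 1-D landscape: a local maximum at index 2, the global one at index 6.
landscape = [1, 2, 3, 2, 5, 6, 7, 6, 2]
f = lambda i: landscape[i]
nbrs = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]

def climb(i):
    """Plain hill climbing: move to the best neighbor while it improves f."""
    while True:
        best = max(nbrs(i), key=f)
        if f(best) <= f(i):
            return i
        i = best

def random_restart(restarts=25):
    """Restart the climb from random states and keep the best peak found."""
    starts = [random.randrange(len(landscape)) for _ in range(restarts)]
    return max((climb(s) for s in starts), key=f)

random.seed(0)
print(climb(0))                       # → 2 (stuck on the local maximum)
print(landscape[random_restart()])    # → 7 (the global maximum)
```

A single climb from index 0 stops at the local peak; restarting from many random states makes finding the global maximum overwhelmingly likely.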

Simulated Annealing
Simulated Annealing is a flexible and effective optimization algorithm inspired by the
physical process of annealing in metallurgy. It is widely used to solve combinatorial
optimization problems across numerous domains, including engineering, operations
research, machine learning, and artificial intelligence. In this section, we will look at
the principles behind Simulated Annealing, its applications, and how it finds
near-optimal solutions to complex optimization problems.

Simulated Annealing is based on the idea of mimicking the annealing process used to
reach the lowest energy state in solid materials. In metallurgy, annealing involves
heating a material to a high temperature and then gradually cooling it to reduce defects
and optimize its crystalline structure. Similarly, in simulated annealing, the algorithm
starts with an initial solution and iteratively explores the solution space, gradually
reducing the "temperature" to converge toward an optimal or near-optimal solution.
Working principle of simulated annealing
The algorithm begins by setting the temperature and creating an initial solution. It then
iteratively performs the steps below:

1. Perturbation: A neighboring solution is created by making a minor random alteration to


the existing one. This disturbance adds exploration to the search process.
2. Evaluation: The new solution's energy is determined using the energy function. The
candidate is accepted if it has lower energy (i.e., higher quality) than the present
solution. Otherwise, it may still be accepted probabilistically, based on the
temperature and the energy difference.
3. Temperature Update: The temperature is updated based on the cooling schedule,
progressively lowering its value over iterations. This lowers the likelihood of accepting
poorer alternatives as the search advances.
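The three steps above can be sketched in Python. The acceptance rule exp(-delta / T) and the geometric cooling schedule are standard choices; the energy function, step size, and cooling parameters below are illustrative assumptions, not prescribed by the text.

```python
import math
import random

def simulated_annealing(f, start, neighbor, t0=10.0, cooling=0.99, steps=2000):
    """Minimize an energy function f.  A worse candidate is accepted with
    probability exp(-delta / T); T is lowered geometrically each iteration."""
    current, temp = start, t0
    for _ in range(steps):
        candidate = neighbor(current)              # 1. perturbation
        delta = f(candidate) - f(current)          # 2. evaluation
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate                    #    accept (possibly worse)
        temp *= cooling                            # 3. temperature update
    return current

# Example: a bumpy 1-D energy with several local minima.
random.seed(1)
energy = lambda x: (x - 3) ** 2 + 3 * math.sin(5 * x)
move = lambda x: x + random.uniform(-0.5, 0.5)
best = simulated_annealing(energy, start=-5.0, neighbor=move)
# best typically lands in a low-energy basin near the global minimum
```

Early on, the high temperature lets the search accept uphill (worse) moves and escape local minima; as the temperature drops, the behavior approaches greedy hill climbing.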

Difference Between Hill Climbing and Simulated Annealing Algorithm


o Introduction: Hill Climbing is a heuristic optimization process that iteratively
advances towards a better solution at each step in order to find the best solution in a
given search space. Simulated Annealing is a probabilistic optimization algorithm that
simulates the metallurgical annealing process in order to discover the best solution in
a given search area by accepting less-than-ideal solutions with a predetermined
probability.

o Objective: Hill Climbing seeks to locate the ideal solution within a predetermined
search space by iteratively progressing towards a better solution at each stage.
Simulated Annealing seeks the global optimum in a given search space by accepting
poorer answers with a predetermined probability, which allows it to bypass local
optima.

o Strategy: Hill Climbing employs a greedy method, accepting only solutions that are
superior to the current one. Simulated Annealing explores the search space and avoids
local optima by probabilistically accepting a worse solution with a given probability;
as the algorithm advances, the likelihood of accepting an inferior answer diminishes.

o Local vs. Global Optima: Hill Climbing may not locate the global optimum because it is
susceptible to becoming caught in local optima. Simulated Annealing has a chance of
escaping the local optimum and locating the global optimum.

o Stopping Criteria: Hill Climbing comes to an end after a certain number of iterations
or when it achieves a local optimum. Simulated Annealing comes to an end when the
temperature hits a predetermined level or after the maximum number of repetitions.

o Performance: Hill Climbing is quick and easy, but it has the potential to become locked
in local optima and miss the overall best solution. Simulated Annealing is more
efficient at locating the global optimum, particularly for complicated situations with
numerous local optima, but it is slower than Hill Climbing.

o Tuning Parameters: Hill Climbing has no tuning parameters. Simulated Annealing has
several, including the beginning temperature, the cooling schedule, and the acceptance
probability function.

o Applications: Hill Climbing is used in many different applications, including image
processing, machine learning, and gaming. Simulated Annealing is used in several
fields, including logistics, scheduling, and circuit design.
