Principles of AI Unit II

Searching for solutions

• A problem consists of an initial state, a set of actions (successor function), a goal test function, and a path cost function.
• Together, the initial state and successor function implicitly define the state space of the problem.
• Searching for solutions uses an explicit search tree that is generated from the initial
state by the successor function, which together define the state space.
Eg: Expansion of the search tree for finding a route from Arad to Bucharest.
• The root of the search tree is a search node corresponding to the initial state, In(Arad).
• The first step is to test whether this is the goal state; clearly it is not.
• A search problem consists of:
• A State Space: the set of all possible states you can be in.
• A Start State: the state from which the search begins.
• A Goal Test: a function that looks at the current state and returns whether or not it is the goal state.
• The Solution to a search problem is a sequence of actions, called the plan, that transforms the start state into the goal
state.
• This plan is found by search algorithms.
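The components listed above can be sketched as a small problem definition. This is a minimal sketch; the road-map fragment, city names, and costs below are illustrative assumptions, not part of the notes.

```python
# A minimal search-problem interface: initial state, successor function,
# goal test, and step cost, as listed above.
ROMANIA = {  # state -> {neighbor: step cost} (illustrative fragment)
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Bucharest": 211},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Pitesti": {"Bucharest": 101},
}

class RouteProblem:
    def __init__(self, initial, goal, graph):
        self.initial, self.goal, self.graph = initial, goal, graph

    def successors(self, state):
        """Yield the states reachable in one action from `state`."""
        yield from self.graph.get(state, {})

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, next_state):
        return self.graph[state][next_state]

problem = RouteProblem("Arad", "Bucharest", ROMANIA)
print(problem.goal_test("Arad"))  # False: the root node In(Arad) is not the goal
```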
Uninformed /Blind search
➢Uninformed search does not use any domain knowledge,
such as closeness to or the location of the goal.
➢It operates in a brute-force way, as it only includes information
about how to traverse the tree and how to identify leaf and goal
nodes.
➢Uninformed search explores the search tree without any information
about the search space beyond the initial state, the operators,
and the goal test, so it is called blind search.
➢It examines each node until it reaches the goal node.
Informed Search
➢It uses domain knowledge; problem information is available that
can guide the search.
➢Informed search strategies can find a solution more efficiently than
uninformed ones. Informed search is also called heuristic search.
➢A heuristic is a technique that is not always guaranteed to find the best
solution, but is designed to find a good solution in reasonable time.
➢It can solve complex problems that could not be solved any other
way.
Breadth-First Search (BFS)
➢It is the most common search strategy for traversing a tree or graph.
➢This algorithm searches breadthwise in a tree or graph, so it is called
breadth-first search.
➢The BFS algorithm starts searching from the root node of the tree and expands
all successor nodes at the current level before moving to the next level.
➢The BFS algorithm is an example of a general graph-search algorithm.
➢BFS is implemented using a FIFO queue data structure.
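The level-by-level expansion above can be sketched with a FIFO queue of paths. The small example graph is an illustrative assumption.

```python
from collections import deque

# Illustrative tree-shaped graph: A's children are B and C, and so on.
GRAPH = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F", "G"],
    "D": [], "E": [], "F": [], "G": [],
}

def bfs(graph, start, goal):
    """Return a path from start to goal, expanding level by level."""
    frontier = deque([[start]])      # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()    # shallowest path first
        node = path[-1]
        if node == goal:
            return path
        for child in graph[node]:
            if child not in visited:
                visited.add(child)
                frontier.append(path + [child])
    return None

print(bfs(GRAPH, "A", "G"))  # ['A', 'C', 'G']
```

Because the queue is first-in first-out, every node at depth d is expanded before any node at depth d+1, which is exactly the breadthwise order described above.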
Merits of BFS
▪ BFS will provide a solution if any solution exists.
▪ If there is more than one solution for a given problem, then BFS will provide
the minimal solution, i.e. the one requiring the fewest steps.
Demerits of BFS
▪ It requires a lot of memory, since each level of the tree must be saved
in memory in order to expand the next level.
▪ BFS needs a lot of time if the solution is far away from the root node.
DEPTH-FIRST SEARCH
➢It is a recursive algorithm for traversing a tree or graph data structure.
➢It is called DFS because it starts from the root node and follows each
path to its greatest depth before moving to the next path.
➢DFS uses a stack data structure for its implementation.
➢The overall process of the DFS algorithm is similar to that of the BFS algorithm.
Demerits of DFS
▪ There is a possibility that many states keep re-occurring, and there is no
guarantee of finding a solution.
▪ The DFS algorithm goes deep down into the search and may sometimes enter an
infinite loop.

Merits of DFS
▪ DFS requires very little memory, as it only needs to store a stack of the nodes
on the path from the root node to the current node.
▪ It takes less time to reach the goal node than the BFS algorithm (if it traverses
the right path).
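The stack-based traversal above can be sketched the same way as the BFS example, swapping the FIFO queue for a LIFO stack. The example graph is again an illustrative assumption.

```python
# Illustrative tree-shaped graph (same shape as the BFS example).
GRAPH = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F", "G"],
    "D": [], "E": [], "F": [], "G": [],
}

def dfs(graph, start, goal):
    """Follow each path to its greatest depth before backtracking."""
    stack = [[start]]                # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()           # deepest path first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in reversed(graph[node]):  # push right-to-left so the
            stack.append(path + [child])     # leftmost child is explored first
    return None

print(dfs(GRAPH, "A", "E"))  # ['A', 'B', 'E']
```

Note that only the current path (plus pending siblings) is stored, which is why DFS needs far less memory than BFS.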
(Diagram: Breadth-First Search and Depth-First Search traversal order)
Heuristic Search
• Heuristic search techniques are one of the core methods AI systems use to
navigate problem-solving.
• These techniques are essential for tasks that involve finding the best path
from a starting point to a goal state, such as in navigation systems, game
playing, and optimization problems.
• This section covers what heuristic search is, its significance, and the
various techniques employed in AI.
• The primary benefit of using heuristic search techniques in AI is their ability
to handle large search spaces.
• Heuristics help to prioritize which paths are most likely to lead to a solution,
significantly reducing the number of paths that must be explored.
• This not only speeds up the search process but also makes it feasible to
solve problems that are otherwise too complex to handle with exact
algorithms.
Example of Heuristic search
The purpose of a heuristic function is to guide the search process along the most
profitable path among all those that are available.

Start state        Goal state
7 2 4              - 1 2
5 - 6              3 4 5
8 3 1              6 7 8

Two common heuristics for this 8-puzzle:
h1 = number of misplaced tiles = 8 (every tile is out of place).
h2 = sum of the Manhattan distances of each tile from its goal position
   = 3+1+2+2+2+3+3+2 = 18.

Neither estimate overestimates the true number of moves required, so both can
safely guide the search toward the goal.
"The main purpose of heuristic search is to find an optimized path."
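The two heuristics above, the count of misplaced tiles (h1) and the sum of Manhattan distances (h2), can be computed as a quick sketch; the tuple encoding and the use of 0 for the blank are illustrative choices.

```python
# 8-puzzle states as 3x3 tuples; 0 stands for the blank.
START = ((7, 2, 4),
         (5, 0, 6),
         (8, 3, 1))
GOAL  = ((0, 1, 2),
         (3, 4, 5),
         (6, 7, 8))

def positions(state):
    """Map each tile value to its (row, col) position."""
    return {v: (r, c) for r, row in enumerate(state)
                      for c, v in enumerate(row)}

def h1(state, goal):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for r in range(3) for c in range(3)
               if state[r][c] != 0 and state[r][c] != goal[r][c])

def h2(state, goal):
    """Sum of Manhattan distances of each tile from its goal position."""
    s, g = positions(state), positions(goal)
    return sum(abs(s[t][0] - g[t][0]) + abs(s[t][1] - g[t][1])
               for t in range(1, 9))

print(h1(START, GOAL), h2(START, GOAL))  # 8 18
```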
Hill Climbing
• The hill climbing algorithm is a local search algorithm that continuously
moves in the direction of increasing elevation/value to find the peak of the
mountain, or the best solution to the problem. It terminates when it reaches a
peak where no neighbor has a higher value.
• The hill climbing algorithm is a technique used for optimizing
mathematical problems. One of the widely discussed examples of the hill
climbing algorithm is the Traveling Salesman Problem, in which we need to
minimize the distance traveled by the salesman.
• It is also called greedy local search, as it only looks at its immediate
neighbor states and not beyond them.
• A node of the hill climbing algorithm has two components: state and
value.
• Hill climbing is mostly used when a good heuristic is available.
• In this algorithm, we don't need to maintain and handle a search tree or
graph, as it only keeps a single current state.
Features of Hill Climbing:
Following are some main features of the Hill Climbing Algorithm:
• Generate and Test variant: Hill climbing is a variant of the Generate and
Test method. The Generate and Test method produces feedback that
helps to decide which direction to move in the search space.
• Greedy approach: Hill-climbing search moves in the direction
that optimizes the cost.
• No backtracking: It does not backtrack in the search space, as it does not
remember previous states.
State-space Diagram for Hill Climbing:
• The state-space landscape is a graphical representation of the hill-climbing
algorithm, showing a graph between the various states of the algorithm and the
objective function/cost.
• On the Y-axis we take the function, which can be an objective function or a cost
function, and the state space on the X-axis. If the function on the Y-axis is cost, the
goal of the search is to find the global minimum. If the function on the Y-axis is an
objective function, then the goal of the search is to find the global maximum.
• Different regions in the state-space landscape:
• Local Maximum: a state which is better than its neighbor
states, but for which there is also another state higher than it.
• Global Maximum: the best possible state of the state-space
landscape. It has the highest value of the objective function.
• Current state: the state in the landscape diagram where the agent is
currently present.
• Flat local maximum: a flat region of the landscape where all the
neighbor states of the current state have the same value.
• Shoulder: a plateau region which has an uphill edge.
Types of Hill Climbing Algorithm:
1.Simple hill Climbing:
2.Steepest-Ascent hill-climbing:
3.Stochastic hill Climbing:
1. Simple Hill Climbing:
➢Simple hill climbing is the simplest way to implement a hill climbing
algorithm. It evaluates only one neighbor node state at a time and selects the
first one that improves the current cost, setting it as the current state. It only
checks one successor state; if that is better than the current state,
it moves, else it stays in the same state. This algorithm has the following features:
➢Less time-consuming
➢Less optimal solution, and the solution is not guaranteed
Algorithm for Simple Hill Climbing:
▪ Step 1: Evaluate the initial state; if it is the goal state then return success
and stop.
▪ Step 2: Loop until a solution is found or there is no new operator left
to apply.
▪ Step 3: Select and apply an operator to the current state.
▪ Step 4: Check the new state:
▪ If it is the goal state, then return success and quit.
▪ Else if it is better than the current state, then make the new state the current
state.
▪ Else if it is not better than the current state, then return to Step 2.
▪ Step 5: Exit.
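The steps above can be sketched as follows; the one-dimensional objective function and the left/right step operators are illustrative assumptions.

```python
def objective(x):
    """A one-dimensional hill with its peak at x = 5 (illustrative)."""
    return -(x - 5) ** 2

def neighbors(x, step=1):
    """Operators: move one step left or right."""
    return [x - step, x + step]

def simple_hill_climb(start):
    current = start
    while True:
        moved = False
        for candidate in neighbors(current):      # Step 3: apply an operator
            if objective(candidate) > objective(current):
                current = candidate               # Step 4: accept the FIRST improvement
                moved = True
                break
        if not moved:                             # no operator improves: stop
            return current

print(simple_hill_climb(0))  # 5 (the peak)
```

Note the `break`: simple hill climbing commits to the first better neighbor it finds rather than comparing all of them.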
2. Steepest-Ascent hill climbing:
➢The steepest-ascent algorithm is a variation of the simple hill climbing algorithm.
This algorithm examines all the neighboring nodes of the current state and
selects the neighbor node which is closest to the goal state. This algorithm
consumes more time, as it evaluates multiple neighbors.
Algorithm for Steepest-Ascent hill climbing:
• Step 1: Evaluate the initial state; if it is the goal state then return success and stop,
else make the initial state the current state.
• Step 2: Loop until a solution is found or the current state does not change.
• Let SUCC be the best successor found so far.
• For each operator that applies to the current state:
• Apply the operator and generate a new state.
• Evaluate the new state.
• If it is the goal state, then return it and quit; else compare it to SUCC.
• If it is better than SUCC, then set SUCC to the new state.
• If SUCC is better than the current state, then set the current state to SUCC.
• Step 3: Exit.
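For contrast with simple hill climbing, the steps above can be sketched with the same illustrative one-dimensional objective; here all neighbors are examined and the best one (SUCC) is taken.

```python
def objective(x):
    """A one-dimensional hill with its peak at x = 5 (illustrative)."""
    return -(x - 5) ** 2

def neighbors(x, step=1):
    """Operators: move one step left or right."""
    return [x - step, x + step]

def steepest_ascent(start):
    current = start
    while True:
        # SUCC: the best successor among ALL neighbors of the current state
        succ = max(neighbors(current), key=objective)
        if objective(succ) <= objective(current):
            return current       # no neighbor is better: we are at a peak
        current = succ           # move to SUCC

print(steepest_ascent(0))  # 5
```

The only difference from the simple variant is that every neighbor is scored before a move is made, which costs more evaluations per step.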
3. Stochastic hill climbing:
• Stochastic hill climbing does not examine all its neighbors before moving.
Rather, this search algorithm selects one neighbor node at random and decides
whether to move to it or to examine another.
A* search Algorithm
• The A* (A-star) algorithm is a powerful and versatile search method used in
computer science to find the most efficient path between nodes in a graph.
• A* search algorithm finds the shortest path through the search space using
the heuristic functions.

• The core of the A* algorithm is based on cost functions and heuristics. It uses
two main parameters:
g(n): The actual cost from the starting node to any node n.
h(n): The heuristic estimated cost from node n to the goal.
This is where A* integrates knowledge beyond the graph to guide the
search.
• The sum f(n) = g(n) + h(n) represents the total estimated cost of the cheapest
solution through n. The A* algorithm functions by maintaining a priority
queue (or open set) of all candidate paths through the graph, prioritizing them
by their f values.
Applications of A*
The A* algorithm’s ability to find the most efficient path with a given
heuristic makes it suitable for various practical applications:
▪ Pathfinding in Games and Robotics: A* is extensively used in the gaming
industry to control characters in dynamic environments, as well as in
robotics for navigating between points.
▪ Network Routing: In telecommunications, A* helps in determining the
shortest routing path that data packets should take to reach the destination.
▪ AI and Machine Learning: A* can be used in planning and decision-making
algorithms, where multiple stages of decisions and movements need to be
evaluated.
Advantages of A*
• Optimality: When equipped with an admissible heuristic, A* is guaranteed
to find the shortest path to the goal.
• Completeness: A* will always find a solution if one exists.
• Flexibility: By adjusting heuristics, A* can be adapted to a wide range of
problem settings and constraints.
The steps of the A* algorithm are as follows:
1.Initialization: Start by adding the initial node to the open set with
its f(n).
2.Loop: While the open set is not empty, the node with the lowest
f(n) value is removed from the queue.
3.Goal Check: If this node is the goal, the algorithm terminates and
returns the discovered path.
4.Node Expansion: Otherwise, expand the node (find all its
neighbors), calculating g, h, and f values for each neighbor. Add
each neighbor to the open set if it’s not already present, or if a
better path to this neighbor is found.
5.Repeat: The loop repeats until the goal is reached or if there are
no more nodes in the open set, indicating no available path.
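The steps above can be sketched as a compact A* implementation over a priority queue; the weighted graph and heuristic table below are illustrative assumptions, not from the notes.

```python
import heapq

# Illustrative weighted graph and heuristic h(n) (estimated cost to goal G).
GRAPH = {
    "S": {"A": 1, "B": 4},
    "A": {"B": 2, "C": 5, "G": 12},
    "B": {"C": 2},
    "C": {"G": 3},
    "G": {},
}
H = {"S": 7, "A": 6, "B": 2, "C": 1, "G": 0}

def a_star(graph, h, start, goal):
    # Step 1: open set seeded with the initial node, ordered by f = g + h.
    open_set = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)   # Step 2: lowest f first
        if node == goal:                             # Step 3: goal check
            return path, g
        for nbr, cost in graph[node].items():        # Step 4: node expansion
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):   # better path to nbr found
                best_g[nbr] = g2
                heapq.heappush(open_set, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")                        # Step 5: open set empty

print(a_star(GRAPH, H, "S", "G"))  # (['S', 'A', 'B', 'C', 'G'], 8)
```

Because h never overestimates here, the first time G is popped its path is guaranteed to be the cheapest one.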
(AND-OR) AO* Search Algorithm
• The AO* method divides any given difficult problem into a smaller group of problems that
are then resolved using the AND-OR graph concept.
• The problem is divided into a set of sub-problems , where each sub-problem can be solved
seperately
• AND OR graphs are specialized graphs that are used in problems that can be divided into
smaller problems.
• The AND side of the graph represents a set of tasks that must be completed to achieve the
main goal, while the OR side of the graph represents different methods for accomplishing the
same main goal.
• For example, the goal of buying a car may be broken down into smaller
problems or tasks in a simple AND-OR graph. One task is to steal a car, which
on its own would accomplish the main goal; the other is to use your own
money to purchase a car, which would also accomplish the main goal. The
AND symbol is used to indicate the AND part of the graph, which means
that all sub-problems connected by the AND must be resolved before the
parent node or issue can be considered finished.
• Working of the AO* algorithm:
• The evaluation function in AO* looks like this:
f(n) = g(n) + h(n)
f(n) = actual cost so far + estimated cost to go
where,
f(n) = the estimated total cost of the traversal,
g(n) = the cost from the initial node to the current node,
h(n) = the estimated cost from the current node to the goal state.
AO* Algorithm
Let’s briefly understand what the AO* algorithm is:
➢AO* is a variant of the A* algorithm. It is designed to be more flexible and
capable of adapting to changing environments.
➢AO* can repair its solution whenever it encounters a change in the
environment without having to start the search from scratch.
➢AO* is like a supercharged version of A*. It is designed to handle situations
where things might change like a dynamic environment.
➢For example, imagine a robot moving around a busy room. If furniture gets
rearranged AO* can quickly adjust its plan without starting from scratch.
➢One of the cool things about AO* is that it uses a combination of OR and AND
operations. This means it can consider multiple paths simultaneously, making it
very good at adapting to new information.
➢It’s like being able to plan a route while also keeping an eye on alternative
options. This makes AO* a powerful tool for tasks that involve uncertainty and
change.
Problem Reduction
• In problem reduction, the main problem is divided into smaller sub-problems,
known as subgoals, that can be tackled individually.
• Each subgoal represents a part of the main problem that needs to be solved
in order to reach a solution.
• By solving these subgoals one by one, the overall problem can be solved by
combining the solutions of each subgoal.
• Problem reduction is a technique used in artificial intelligence to simplify
complex problems by breaking them down into smaller, more manageable
sub-problems. This approach is particularly useful when dealing with
problems that are difficult to solve directly or require a significant amount of
computational resources.
THE PROCESS OF PROBLEM REDUCTION
The problem reduction approach involves the following steps:
➢Identifying the main problem:
The first step in problem reduction is to identify the main problem that needs to be solved.
➢Breaking down the problem:
Once the main problem is identified, it is broken down into smaller sub-
problems. This is done by identifying the key components or factors that
contribute to the main problem.
➢Solving the sub-problems:
Each sub-problem is solved individually using appropriate techniques or
algorithms.
➢Combining the solutions:
Finally, the solutions to the sub-problems are combined to obtain the
solution to the main problem.
BENEFITS OF PROBLEM REDUCTION
The problem reduction approach offers several benefits:
➢Complex problems become more manageable: by breaking down a complex
problem into smaller sub-problems, each piece can be tackled on its own.
➢Efficiency and speed: problem reduction allows systems to solve problems
efficiently and quickly by dividing the problem-solving process into smaller,
more manageable tasks.
➢Applicability to various domains: the problem reduction approach is widely
applicable to various domains, including natural language processing, computer
vision, robotics, and expert systems.
Game playing
Game playing is a popular application of artificial intelligence that involves the
development of computer programs to play games, such as chess, checkers etc.
The goal of game playing in artificial intelligence is to develop algorithms that can learn how
to play games and make decisions that will lead to winning outcomes.
One of the earliest examples of successful game playing AI is the chess program Deep Blue,
developed by IBM, which defeated the world champion Garry Kasparov in 1997.
Since then, AI has been applied to a wide range of games, including two-player games,
multiplayer games, and video games.
There are two main approaches to game playing in AI, rule-based systems and machine learning-based
systems.
1.Rule-based systems use a set of fixed rules to play the game.
2.Machine learning-based systems use algorithms to learn from experience and make decisions
based on that experience.
Game playing in AI is an active area of research and has many practical applications, including
game development, education, and military training. By simulating game playing scenarios,
AI algorithms can be used to develop more effective decision-making systems for real-world
applications.
Game playing is a search problem defined by
➢Initial state
➢ Successor function
➢Goal test
➢Path cost/utility/Pay off function
➢ AI has continued to improve, with aims set on a player being unable to tell
the difference between computer and human players.
A game must ‘feel’ natural
• Obey laws of the game
• Characters aware of the environment
• Path finding (A* algorithm)
• Decision making
• Planning
➢Game AI is about creating the illusion of human behaviour: being smart to a
certain extent, showing non-repeating behaviour, being subject to emotional
influences, and being integrated in the environment.
➢Game AI draws on various computer science disciplines:
• Knowledge-based systems
• Machine learning (which teaches a machine how to perform a specific task and
provide accurate results by identifying patterns)
• Multi-agent systems
• Computer graphics & animation
• Data structures
Computer Games Types
• Strategy games
• Role-Playing games
• Action games
• Sports games
• Adventure games
• Puzzle game
Advantages of Game Playing in Artificial Intelligence:

1.Advancement of AI: Game playing has been a driving force behind the development of artificial
intelligence and has led to the creation of new algorithms and techniques that can be applied to
other areas of AI.
2.Education and training: Game playing can be used to teach AI techniques and algorithms to
students and professionals, as well as to provide training for military and emergency response
personnel.
3.Research: Game playing is an active area of research in AI and provides an opportunity to study
and develop new techniques for decision-making and problem-solving.
4.Real-world applications: The techniques and algorithms developed for game playing can be
applied to real-world applications, such as robotics, autonomous systems, and decision support
systems.
Disadvantages of Game Playing in Artificial Intelligence:
1.Limited scope: The techniques and algorithms developed for game playing may not be well-suited
for other types of applications and may need to be adapted or modified for different domains.
2.Computational cost: Game playing can be computationally expensive, especially for complex
games such as chess or Go, and may require powerful computers to achieve real-time performance.
Adversarial search
➢The Adversarial search is a well-suited approach in a competitive environment, where two or
more agents have conflicting goals.
➢The adversarial search can be employed in two-player zero-sum games which means what is good
for one player will be the misfortune for the other.
➢ In such a case, there is no win-win outcome. In artificial intelligence, adversarial search plays a
vital role in decision-making, particularly in competitive environments associated with games and
strategic interactions.
➢By employing adversarial search, AI agents can make optimal decisions while anticipating the
actions of an opponent with their opposing objectives.
➢It aims to establish an effective decision for a player by considering the possible moves and the
counter-moves of the opponents.
➢The adversarial search in competitive environments can be utilized in the below scenarios where the
AI system can assist in determining the best course of action by both considering the possible
moves and counter-moves of the opponents.
➢Each agent seeks to boost their utility or minimize their loss.
➢One agent’s action impacts the outcomes and objectives of the other agents.
➢Additionally, strategic uncertainty arises when the agents may lack sufficient information about
each other’s strategies.
Role of Adversarial Search in AI
➢Game-playing: Adversarial search finds a significant
application in game-playing scenarios, including renowned games
like chess, Go, and poker. These games have a simplified
nature: the state of a game can be represented in a
straightforward way, and the agents are limited to
a small number of actions whose effects are governed by precise
rules.
➢Decision-making: Decision-making plays a central role in
adversarial search algorithms, where the goal is to find the best
possible move or strategy for a player in a competitive
environment against one or more opponents. This requires
strategic thinking, evaluation of potential outcomes, and adaptive
decision-making throughout the game.
Types of algorithms in Adversarial search
➢In a normal search, we follow a sequence of actions to reach the goal or to
finish the game optimally.
➢But in an adversarial search, the outcome also depends on the moves of the
opposing players.
➢The solution for the goal state should still be an optimal one, because each
player tries to win the game along the shortest path
and under limited time.
There are following types of adversarial search:
▪ Min-max algorithm.
▪ Alpha-beta pruning.
Min-Max algorithm
The Mini-Max algorithm is a decision-making algorithm used in artificial intelligence, particularly
in game theory and computer games.
It is designed to minimize the possible loss in a worst-case scenario (hence “min”) and maximize
the potential gain (therefore “max”).
The Min-max algorithm is a decision-making process used in artificial intelligence for two-player
games.
It involves two players: the maximizer and the minimizer, each aiming to optimize their own
outcomes.
It is a specialized search that returns the optimal sequence of moves for a player in a zero-sum game.
➢It is a recursive, backtracking algorithm used in game theory and decision-making.
➢It is mostly used for game playing in AI, such as Chess, Tic-tac-toe, Checkers, etc.
➢There are two players, Max and Min.
➢Max plays for the maximized value and Min for the minimized value.
➢Min-Max performs a depth-first traversal of the game tree.
Worked example (a depth-3 binary game tree):

Level 0 (Maximizer):          A
Level 1 (Minimizer):       B     C
Level 2 (Maximizer):     D  E   F  G
Terminal nodes:         H I J K L M N O
Terminal values:       -1 4 2 6 -3 -5 0 7

Step 1: initialize Maximizer = -∞ and Minimizer = +∞, then back up the
maximizer level from the terminal values:
At node D {-1, 4}: max(-1, 4) = 4
At node E {2, 6}: max(2, 6) = 6
At node F {-3, -5}: max(-3, -5) = -3
At node G {0, 7}: max(0, 7) = 7

Step 2: back up the minimizer level:
Node B = min(4, 6) = 4
Node C = min(-3, 7) = -3

Step 3: back up the root (maximizer):
Node A = max(4, -3) = 4

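The three steps above can be sketched as a recursive Min-Max over the same tree: a depth-first traversal alternating max and min levels, using the same terminal values.

```python
# Terminal values of the leaves H..O, left to right.
TERMINALS = [-1, 4, 2, 6, -3, -5, 0, 7]

def minimax(depth, index, is_max, values, max_depth):
    """Return the backed-up value of the node at (depth, index)."""
    if depth == max_depth:                 # terminal node: return its value
        return values[index]
    left = minimax(depth + 1, index * 2, not is_max, values, max_depth)
    right = minimax(depth + 1, index * 2 + 1, not is_max, values, max_depth)
    return max(left, right) if is_max else min(left, right)

# Root A is a maximizer; the tree has 3 levels below the root.
print(minimax(0, 0, True, TERMINALS, 3))  # 4
```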

Strengths of the Min-Max Algorithm
➢Optimal Decision Making: The Min-Max algorithm ensures optimal
decision making by considering all possible moves and their
outcomes. It provides a strategic advantage by predicting the
opponent's best responses and choosing moves that maximize the
player's benefit.
➢Simplicity and Clarity: The Min-Max algorithm is conceptually
simple and easy to understand. Its straightforward approach of
evaluating and propagating utility values through a game tree
makes it an accessible and widely taught algorithm in AI.
Weaknesses of the Min-Max Algorithm
➢Computational Complexity: The primary drawback of the Min-Max
algorithm is its computational complexity. As the depth and branching
factor of the game tree increase, the number of nodes to be evaluated
grows exponentially. This makes it computationally expensive and
impractical for games with deep and complex trees, like Go.
➢Depth Limitations: To manage computational demands, the Min-Max
algorithm often limits the depth of the game tree. However, this can lead
to suboptimal decisions if critical moves lie beyond the chosen depth.
Balancing depth and computational feasibility is a significant challenge.
➢Handling of Uncertain Environments: The Min-Max algorithm assumes
deterministic outcomes for each move, which may not be realistic in
uncertain or probabilistic environments. Real-world scenarios often
involve uncertainty and incomplete information, requiring modifications
to the basic Min-Max approach.
Limitation of the Min-Max Algorithm
It is slow for complex games such as chess, because
they have a large branching factor. This limitation of minimax can be
addressed with Alpha-beta pruning.
Alpha-Beta Pruning
➢Alpha-beta pruning is an optimization technique for the minimax algorithm.
➢It reduces the number of nodes evaluated in the game tree by eliminating
branches that cannot influence the final decision.
➢This is achieved by maintaining two values, alpha and beta, which represent
the minimum score that the maximizing player is assured of and the maximum
score that the minimizing player is assured of, respectively.
▪ Alpha: The best (highest) value that the maximizer can guarantee given the
current state.
▪ Beta: The best (lowest) value that the minimizer can guarantee given the
current state.
▪ As the algorithm traverses the tree, it updates these values. If it finds a move
that is worse than the current alpha for the maximizer or beta for the
minimizer, it prunes (cuts off) that branch, as it cannot affect the outcome
How Alpha-Beta Pruning Works
The Alpha-Beta pruning algorithm traverses the game tree
similarly to Minimax but prunes branches that do not need to be
explored.
The steps are discussed below:
1.Initialization: Start with alpha set to negative infinity and beta set to
positive infinity.
2.Max Node Evaluation. For each child of a Max node:
▪ Evaluate the child node using the Minimax algorithm with Alpha-Beta pruning.
▪ Update alpha: α = max(α, child value).
▪ If alpha is greater than or equal to beta, prune the remaining children (beta cutoff).
3.Min Node Evaluation. For each child of a Min node:
▪ Evaluate the child node using the Minimax algorithm with Alpha-Beta pruning.
▪ Update beta: β = min(β, child value).
▪ If beta is less than or equal to alpha, prune the remaining children (alpha cutoff).
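The cutoff steps above can be sketched by adding alpha and beta to a recursive minimax; the terminal values are the same illustrative ones used in the Min-Max example.

```python
# Terminal values of the leaves, left to right (illustrative).
TERMINALS = [-1, 4, 2, 6, -3, -5, 0, 7]

def alphabeta(depth, index, is_max, values, max_depth,
              alpha=float("-inf"), beta=float("inf")):
    if depth == max_depth:
        return values[index]
    if is_max:
        best = float("-inf")
        for i in range(2):
            best = max(best, alphabeta(depth + 1, index * 2 + i, False,
                                       values, max_depth, alpha, beta))
            alpha = max(alpha, best)       # update alpha
            if alpha >= beta:              # beta cutoff: prune remaining children
                break
        return best
    best = float("inf")
    for i in range(2):
        best = min(best, alphabeta(depth + 1, index * 2 + i, True,
                                   values, max_depth, alpha, beta))
        beta = min(beta, best)             # update beta
        if beta <= alpha:                  # alpha cutoff: prune remaining children
            break
    return best

print(alphabeta(0, 0, True, TERMINALS, 3))  # 4 (same result as minimax, fewer nodes)
```

Pruning never changes the root value; it only skips branches that provably cannot influence the final decision.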
Applications of Alpha-Beta Pruning
Alpha-Beta pruning is widely used in AI applications for
two-player games such as:
➢Chess: Enhances the efficiency of chess engines, allowing
them to evaluate deeper moves within time constraints.
➢Checkers: Optimizes move evaluation in checkers, making AI
opponents more challenging.
