AI - Unit 1
Introduction to AI: What is AI? Intelligent Agents: Agents and environment, the concept
of Rationality, the nature of the environment, the structure of agents; Problem-solving:
Problem-solving agents; Uninformed search strategies: DFS, BFS; Informed Search:
Best First Search, A* search, AO* search, Means End Analysis; Adversarial Search;
Games: Two-player zero-sum games, Minimax Search, Alpha-Beta pruning.
Artificial intelligence (AI) is the simulation of human intelligence in machines that are
programmed to think and act like humans. Some of its benefits include:
● It helps you reduce the amount of time needed to perform specific tasks.
● Making it easier for humans to interact with machines.
● Facilitating human-computer interaction in a way that is more natural and
efficient.
● Improving the accuracy and speed of medical diagnoses.
● Helping people learn new information more quickly.
● Enhancing communication between humans and machines.
Fuzzy Logic: Fuzzy logic is a many-valued form of logic in which the truth value of a
variable may be any real number between 0 and 1. It is used to handle the concept of
partial truth: in real life, we may encounter situations where we cannot decide whether
a statement is strictly true or false.
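For instance, the short Python sketch below defines a hypothetical membership function
warm() whose truth value can be any real number between 0 and 1; the temperature
thresholds are made-up values for illustration only.

# A hypothetical fuzzy membership function for the statement "it is warm".
def warm(temperature_c: float) -> float:
    """Return a truth value in [0, 1] rather than strictly True/False."""
    if temperature_c <= 10:
        return 0.0                        # definitely not warm
    if temperature_c >= 30:
        return 1.0                        # definitely warm
    return (temperature_c - 10) / 20      # partially true in between

print(warm(5))    # 0.0 -> false
print(warm(20))   # 0.5 -> half true
print(warm(30))   # 1.0 -> true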
Agents
In the context of the AI field, an “agent” is an independent program or
entity that interacts with its environment by perceiving its surroundings
via sensors, then acting through actuators or effectors.
Agents use their actuators to run through a cycle of perception, thought,
and action.
Software: A software agent has file contents, keystrokes, and received network
packets as sensory input; it then acts on those inputs, displaying the output on a
screen.
Human: Yes, we’re all agents. Humans have eyes, ears, and other organs
that act as sensors, and hands, legs, mouths, and other body parts act as
actuators.
Robotic: Robotic agents have cameras and infrared range finders that act
as sensors, and various servos and motors perform as actuators.
These are the main four rules all AI agents must adhere to:
Rule 1: An AI agent must be able to perceive the environment.
Rule 2: The observations must be used to make decisions.
Rule 3: The decisions should result in an action.
Rule 4: The action taken must be a rational action.
Intelligent agents
Intelligent agents in AI are autonomous entities that act upon an environment using
sensors and actuators to achieve their goals. In addition, intelligent agents may learn
from the environment to achieve those goals.
Driverless cars and the Siri virtual assistant are examples of intelligent agents in AI.
Intelligent agents are commonly grouped into five types:
Simple reflex agents. These agents act only on the current state, ignoring past history.
Responses are based on the event-condition-action rule, or ECA rule, where a user
initiates an event and the agent consults a list of preset rules and preprogrammed
outcomes (a minimal code sketch follows this list).
Model-based reflex agents. These agents take action in the same way as a reflex
agent, but they have a more comprehensive view of their environments. A model of
the world is programmed into the internal system that incorporates the agent's
history.
Goal-based agents. These agents, also referred to as rational agents, expand on the
information that model-based agents store by also including goal information or
information about desirable situations.
Utility-based agents. These agents are similar to goal-based agents, but they provide
an extra utility measurement that rates each possible scenario on its desired result,
and then choose the action that maximizes the outcome. Rating criteria examples
include the probability of success or the resources required.
Learning agents. These agents have the ability to gradually improve and become
more knowledgeable about an environment over time through an additional learning
algorithm or element. The learning element uses feedback on performance measures
to determine how performance elements should be changed to improve gradually.
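As a minimal sketch of the simple reflex agent idea, here is the classic two-square
vacuum world in Python; the percept format and rule set are illustrative assumptions.

# A simple reflex agent for a hypothetical two-square vacuum world.
# The percept is (location, status); the agent has no memory of past percepts
# and acts purely on condition-action rules.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"      # rule: current square dirty -> clean it
    elif location == "A":
        return "Right"     # rule: square A clean -> move right
    else:
        return "Left"      # rule: square B clean -> move left

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right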
Rationality
This concept describes how an AI system should operate. Rationality is concerned with
the expected actions and results, given what the agent has perceived. Performing
actions with the aim of obtaining useful information is an important part of rationality.
The Nature of the Environment
Full or Partial Observability? With full observability, the agent's sensors give it
complete access to the state of the environment, so it does not need to store any
information internally. Partial observability may stem from sensor inaccuracy or
insufficient environmental data, such as limited access to hostile territory.
Number of Agents — A single agent environment is used for the vacuum cleaner, but
for driverless taxis, each driverless cab is a different agent, resulting in a multi-agent
environment.
Deterministic — The number of unknowns in the environment affects its predictability.
For example, cleaning a floor space is generally predictable, since furniture stays in
place most of the time, while taxi driving on the road is not.
Static — How frequently does the surrounding environment change? A static environment
does not change while the agent is deliberating, whereas a dynamic environment can
change even as the agent acts.
Problem-Solving Agents In Artificial Intelligence
In artificial intelligence, a problem-solving agent is a type of intelligent agent
designed to address and solve complex problems or tasks in its environment. These
agents are a fundamental concept in AI and are used in various applications, from
game-playing algorithms to robotics and decision-making systems. A problem-solving
agent typically formulates a goal, formulates the problem, searches for a sequence of
actions that solves it, and then executes that solution.
There are far too many powerful search algorithms to cover here. Instead, this section
discusses six of the fundamental search algorithms, divided into two categories:
uninformed (blind) search and informed (heuristic) search.
Uninformed Search Algorithms:
The search algorithms in this section have no additional information about the goal
node beyond what is provided in the problem definition. The plans to reach the goal
state from the start state differ only in the order and/or length of actions.
Uninformed search is also called blind search. These algorithms can only generate
successors and distinguish a goal state from a non-goal state.
The following uninformed search algorithms are discussed in this section.
1. Depth First Search
2. Breadth First Search
3. Uniform Cost Search
Depth First Search
o Depth-first search is a recursive algorithm for traversing a tree or graph data
structure.
o It is called depth-first search because it starts from the root node and follows
each path to its greatest depth before moving to the next path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is similar to that of the BFS algorithm.
Advantage:
o DFS requires very little memory, as it only needs to store a stack of the nodes on
the path from the root node to the current node.
o It takes less time to reach the goal node than the BFS algorithm (if it traverses
the right path).
Disadvantage:
o There is a possibility that many states keep recurring, and there is no guarantee
of finding a solution.
o The DFS algorithm searches deep down a path and may sometimes enter an infinite
loop.
Example:
Question. Which solution would DFS find to move from node S to node G if run on
the graph below?
Solution. The equivalent search tree for the above graph is as follows. As DFS
traverses the tree “deepest node first”, it would always pick the deeper branch until
it reaches the solution (or it runs out of nodes, and goes to the next branch). The
traversal is shown in blue arrows.
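To make the strategy concrete, here is a minimal DFS sketch in Python. The
adjacency-dictionary graph is a hypothetical stand-in, not the exact graph from the
question above.

# Depth-first search using an explicit stack (LIFO).
def dfs(graph, start, goal):
    """Return a path from start to goal found depth-first, or None."""
    stack = [(start, [start])]          # each entry: (node, path taken so far)
    visited = set()
    while stack:
        node, path = stack.pop()        # always expand the deepest node first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push neighbours in reverse so the leftmost child is expanded first.
        for neighbour in reversed(graph.get(node, [])):
            stack.append((neighbour, path + [neighbour]))
    return None

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["G"], "C": [], "D": ["G"]}
print(dfs(graph, "S", "G"))  # ['S', 'A', 'D', 'G']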
Breadth First Search
o Breadth-first search is the most common search strategy for traversing a tree or
graph. It searches breadthwise, expanding all nodes at the current depth before
moving on to the nodes at the next level.
o BFS is implemented using a queue (FIFO) data structure.
Advantages:
o BFS will provide a solution if any solution exists.
o If there are several solutions to a given problem, BFS will find the minimal
solution, i.e., the one requiring the fewest steps.
Disadvantages:
o It requires lots of memory since each level of the tree must be saved into
memory to expand the next level.
o BFS needs lots of time if the solution is far away from the root node.
Example:
Question. Which solution would BFS find to move from node S to node G if run on
the graph below?
Solution. The equivalent search tree for the above graph is as follows. As BFS
traverses the tree “shallowest node first”, it would always pick the shallower branch
until it reaches the solution (or it runs out of nodes, and goes to the next branch).
The traversal is shown in blue arrows.
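A minimal BFS sketch in Python differs from the DFS sketch above only in using a FIFO
queue for the frontier; the graph is again a hypothetical stand-in.

# Breadth-first search using a FIFO queue.
from collections import deque

def bfs(graph, start, goal):
    """Return the shallowest path from start to goal, or None."""
    queue = deque([(start, [start])])   # expand shallowest nodes first
    visited = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, path + [neighbour]))
    return None

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["G"], "C": [], "D": ["G"]}
print(bfs(graph, "S", "G"))  # ['S', 'B', 'G'] -- the path with the fewest edges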
Uniform Cost Search
UCS is different from BFS and DFS because here the costs come into play. In other
words, traversing via different edges might not have the same cost. The goal is to
find a path where the cumulative sum of costs is the least.
The cost of a node is defined as:
cost(node) = cumulative cost of the edges on the path from the root to that node
cost(root) = 0
Example:
Question. Which solution would UCS find to move from node S to node G if run on
the graph below?
Solution. The equivalent search tree for the above graph is as follows. The cost of
each node is the cumulative cost of reaching that node from the root. Based on the
UCS strategy, the path with the least cumulative cost is chosen. Note that due to the
many options in the fringe, the algorithm explores most of them so long as their cost
is low, and discards them when a lower-cost path is found; these discarded
traversals are not shown below. The actual traversal is shown in blue.
Path: S -> A -> B -> G
Advantages:
● UCS is complete, provided the state space is finite and there is no cycle of
zero-cost edges.
● UCS is optimal, provided no edge has a negative cost.
Disadvantages:
● It explores options in every “direction”.
● It uses no information about the location of the goal.
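A minimal UCS sketch in Python keeps the frontier in a priority queue ordered by
cumulative path cost; the weighted graph below is a hypothetical example, not the one
from the question.

# Uniform-cost search with a priority queue (heapq), ordered by path cost.
import heapq

def ucs(graph, start, goal):
    """Return (cost, path) with the least cumulative cost, or None."""
    frontier = [(0, start, [start])]     # (cumulative cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest entry first
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbour, edge_cost in graph.get(node, []):
            heapq.heappush(frontier,
                           (cost + edge_cost, neighbour, path + [neighbour]))
    return None

graph = {"S": [("A", 1), ("D", 3)], "A": [("B", 2)],
         "B": [("G", 1)], "D": [("G", 6)]}
print(ucs(graph, "S", "G"))  # (4, ['S', 'A', 'B', 'G'])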
Informed Search Algorithms:
The search algorithms in this section use a heuristic, an estimate of how close a node
is to the goal, to guide the search toward it.
Greedy Search
In greedy search, we expand the node closest to the goal node. The “closeness” is
estimated by a heuristic h(x).
Heuristic: A heuristic h is defined as:
h(x) = estimate of the distance of node x from the goal node.
The lower the value of h(x), the closer the node is to the goal.
Strategy: Expand the node closest to the goal state, i.e. expand the node with the
lowest h value.
Example:
Question. Find the path from S to G using greedy search. The heuristic value h of
each node is written below the name of the node.
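As a sketch, here is greedy search in Python on a hypothetical graph with made-up h
values; the frontier is ordered purely by h(x), and edge costs are ignored.

# Greedy best-first search: the priority queue is ordered by h(x) alone.
import heapq

def greedy_search(graph, h, start, goal):
    """Return the path found by always expanding the node with the lowest h."""
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):
            heapq.heappush(frontier, (h[neighbour], neighbour, path + [neighbour]))
    return None

graph = {"S": ["A", "D"], "A": ["B"], "D": ["E"], "B": ["G"], "E": ["G"]}
h = {"S": 7, "A": 9, "D": 5, "B": 4, "E": 3, "G": 0}
print(greedy_search(graph, h, "S", "G"))  # ['S', 'D', 'E', 'G']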
A* Search
A* search combines uniform-cost search and greedy search: it expands the node with
the lowest value of f(x) = g(x) + h(x).
● Here, h(x) is called the forward cost and is an estimate of the distance of the
current node from the goal node.
● And g(x) is called the backward cost and is the cumulative cost of the node from
the root node.
● A* search is optimal only when, for all nodes, the forward cost h(x)
underestimates the actual cost h*(x) to reach the goal. This property of the
A* heuristic is called admissibility.
Admissibility: 0 ≤ h(x) ≤ h*(x) for every node x.
Strategy: Choose the node with the lowest f(x) value.
Example:
Question. Find the path to reach from S to G using A* search.
Solution. Starting from S, the algorithm computes g(x) + h(x) for all nodes in the
fringe at each step, choosing the node with the lowest sum. For example, after
expanding S, the fringe contains:
Path      h(x)   g(x)   f(x)
S -> A    9      3      12
S -> D    5      2      7
Note that in the fourth set of iterations, we get two paths with an equal summed cost
f(x), so we expand them both in the next set. The path with the lower cost on further
expansion is the chosen path.
We proceed in much the same way for the remaining iterations, but keep track of the
nodes already explored so that we don't re-explore them.
Path: S -> D -> B -> E -> G
Cost: 7
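A minimal A* sketch in Python has the same structure as UCS, but orders the frontier
by f(x) = g(x) + h(x). The graph, edge costs, and heuristic values below are
hypothetical, chosen so that the result matches the path and cost above.

# A* search: priority queue ordered by f(x) = g(x) + h(x).
import heapq

def a_star(graph, h, start, goal):
    """Return (cost, path) for the cheapest path under an admissible h."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    explored = set()                             # avoid re-expanding nodes
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for neighbour, edge_cost in graph.get(node, []):
            g2 = g + edge_cost
            heapq.heappush(frontier,
                           (g2 + h[neighbour], g2, neighbour, path + [neighbour]))
    return None

graph = {"S": [("A", 3), ("D", 2)], "A": [("B", 7)],
         "D": [("B", 1), ("E", 4)], "B": [("E", 2)], "E": [("G", 2)]}
h = {"S": 7, "A": 9, "D": 5, "B": 4, "E": 2, "G": 0}
print(a_star(graph, h, "S", "G"))  # (7, ['S', 'D', 'B', 'E', 'G'])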
Means End Analysis
Means end analysis solves a problem by repeatedly comparing the current state with the
target state and applying operators that reduce the differences between them. The
target goal is divided into sub-goals, which are then linked with executable actions.
Algorithm steps for Means End Analysis
The following are the algorithmic steps for means end analysis:
1. Conduct a study to assess the status of the current state. This can
be done at a macro or micro level.
2. Capture the problems in the current state and define the target
state. This can also be done at a macro or micro level.
3. Make a comparison between the current state and the end state that
you defined. If these states are the same, then perform no further
action. This is an indication that the problem has been tackled. If the
two states are not the same, then move to step 4.
4. Record the differences between the two states at the two
aforementioned levels (macro and micro).
5. Transform these differences into adjustments to the current state.
6. Determine the right action for implementing the adjustments in step
5.
7. Execute the changes and compare the results with the target goal.
8. If there are still some differences between the current state and the
target state, perform course correction until the end goal is
achieved.
When there is a difference between the current state and the target state, adjustments
must be made to the current state in order to reach the end goal.
The goal can be divided into sub-goals that are linked with executable
actions or operations.
The following are the three operators that can be used to solve the
problem.
1. Delete operator: The dot symbol at the top right corner in the initial
state does not exist in the goal state. The dot symbol can be removed by
applying the delete operator.
2. Move operator: We will then compare the new state with the end state.
The green diamond in the new state is inside the circle while the green
diamond in the end state is at the top right corner. We will move this
diamond symbol to the right position by applying the move operator.
3. Expand operator: After evaluating the new state generated in step 2,
we find that the diamond symbol is smaller than the one in the end state.
We can increase the size of this symbol by applying the expand operator.
After applying the three operators above, we will find that the state in step
3 is the same as the end state. There are no differences between these
two states, which means that the problem has been solved.
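The operator sequence above can be sketched as a simple difference-reduction loop. In
the Python sketch below, the state representation, property names, and operators are
hypothetical stand-ins for the dot/diamond example.

# Means end analysis as difference reduction over a hypothetical state dict.
def differences(state, goal):
    """List the properties in which the state differs from the goal."""
    return [k for k in goal if state.get(k) != goal[k]]

def means_end_analysis(state, goal, operators):
    """Repeatedly apply any operator that reduces the current/goal difference."""
    plan = []
    while differences(state, goal):
        before = len(differences(state, goal))
        for name, apply_op in operators:
            new_state = apply_op(state)
            if len(differences(new_state, goal)) < before:
                state, plan = new_state, plan + [name]
                break
        else:
            return None   # stuck: no operator reduces any difference
    return plan

start = {"dot": True,  "diamond_pos": "inside",    "diamond_size": "small"}
goal  = {"dot": False, "diamond_pos": "top_right", "diamond_size": "large"}
operators = [
    ("delete", lambda s: {**s, "dot": False}),
    ("move",   lambda s: {**s, "diamond_pos": "top_right"}),
    ("expand", lambda s: {**s, "diamond_size": "large"}),
]
print(means_end_analysis(start, goal, operators))  # ['delete', 'move', 'expand']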
Applications of Means End Analysis
Organizational planning
Means end analysis is used in organizations to facilitate general
management. It helps organizational managers to conduct planning to
achieve the objectives of the organization. The management reaches the
desired goal by dividing the main goals into sub-goals that are linked with
actionable tasks.
Business transformation
This technique is used to implement transformation projects. If there are
any desired changes in the current state of a business project, means
end analysis is applied to establish the new processes to be
implemented. The processes are split into sub-processes to enhance
effective implementation.
Gap analysis
Gap analysis is the comparison between the current performance and the
required performance. Means end analysis is applied in this field to
compare the existing technology and the desired technology in
organizations. Various operations are applied to fill the existing gap in
technology.
Adversarial Search
Adversarial search is a type of search in which we examine problems that arise when we
try to plan ahead in a world where other agents are planning against us.
o In previous topics, we studied search strategies that involve only a single agent
aiming to find a solution, often expressed as a sequence of actions.
o But there may be situations in which more than one agent is searching for a
solution in the same search space; this usually occurs in game playing.
o An environment with more than one agent is termed a multi-agent environment, in
which each agent is an opponent of the others and plays against them. Each agent
needs to consider the actions of the other agents and the effect of those actions
on its own performance.
o So, searches in which two or more players with conflicting goals try to explore
the same search space for a solution are called adversarial searches, often known
as games.
o Games are modelled as a search problem with a heuristic evaluation function; these
are the two main factors that help to model and solve games in AI.
o Perfect information: A game with the perfect information is that in which agents can
look into the complete board. Agents have all the information about the game, and
they can see each other moves also. Examples are Chess, Checkers, Go, etc.
o Imperfect information: If in a game the agents do not have all the information
about the game and are not aware of what's going on, it is called a game with
imperfect information. Examples are Battleship, blind tic-tac-toe, Bridge, etc.
o Deterministic games: Deterministic games are those games which follow a strict
pattern and set of rules for the games, and there is no randomness associated with
them. Examples are chess, Checkers, Go, tic-tac-toe, etc.
o Non-deterministic games: Non-deterministic games are those that have various
unpredictable events and a factor of chance or luck. This factor of chance or luck
is introduced by dice or cards. These games are random, and each action's outcome
is not fixed. Such games are also called stochastic games.
Example: Backgammon, Monopoly, Poker, etc.
Two-Player Zero-Sum Games
The two-player zero-sum game is a basic model in game theory. There are two players,
each with an associated set of strategies. While one player aims to maximize her
payoff, the other player attempts to take actions that minimize it. In fact, the gain
of one player is the loss of the other.
Minimax Search
Minimax is a kind of backtracking algorithm that is used in decision making and game
theory to find the optimal move for a player, assuming that the opponent also plays
optimally. It is widely used in two-player turn-based games such as Tic-Tac-Toe,
Backgammon, Mancala, Chess, etc.
In Minimax the two players are called maximizer and minimizer. The maximizer tries
to get the highest score possible while the minimizer tries to do the opposite and get
the lowest score possible.
Every board state has a value associated with it. In a given state, if the maximizer
has the upper hand, the score of the board will tend to be some positive value. If the
minimizer has the upper hand in that board state, it will tend to be some negative
value. The values of the board are calculated by heuristics that are unique to each
type of game.
Example:
Consider a game that has 4 final states, reached by paths from the root to the 4
leaves of a perfect binary tree (with leaf scores 3, 5, 2, and 9 from left to right).
Assume you are the maximizing player and you get the first chance to move, i.e., you
are at the root and your opponent is at the next level. Which move would you make as
the maximizing player, considering that your opponent also plays optimally?
Since this is a backtracking based algorithm, it tries all possible moves, then backtracks
and makes a decision.
● Maximizer goes LEFT: It is now the minimizer's turn. The minimizer has a choice
between 3 and 5. Being the minimizer, it will definitely choose the lesser of the
two, that is 3.
● Maximizer goes RIGHT: It is now the minimizer's turn. The minimizer has a choice
between 2 and 9. It will choose 2 as it is the lesser of the two values.
Being the maximizer, you would choose the larger value, that is 3. Hence the optimal
move for the maximizer is to go LEFT and the optimal value is 3.
The above tree shows the two possible scores when the maximizer makes left and right
moves.
Note: Even though there is a value of 9 in the right subtree, the minimizer will never
pick that. We must always assume that our opponent plays optimally.
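A minimal minimax sketch in Python for exactly this kind of perfect binary tree, with
the leaf scores [3, 5, 2, 9] stored from left to right:

# Minimax over a perfect binary tree whose leaves hold static scores.
def minimax(depth, node_index, is_maximizer, scores, height):
    if depth == height:                  # leaf: return its static score
        return scores[node_index]
    left  = minimax(depth + 1, node_index * 2,     not is_maximizer, scores, height)
    right = minimax(depth + 1, node_index * 2 + 1, not is_maximizer, scores, height)
    return max(left, right) if is_maximizer else min(left, right)

scores = [3, 5, 2, 9]                    # leaves of a tree of height 2
print(minimax(0, 0, True, scores, 2))    # 3, i.e. max(min(3, 5), min(2, 9))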
ALPHA-BETA PRUNING
Alpha-beta pruning is an optimization technique for the minimax algorithm: it cuts off
branches that cannot possibly influence the final decision. It maintains two values:
Alpha is the best value that the maximizer currently can guarantee at that level or
above.
Beta is the best value that the minimizer currently can guarantee at that level or
below.
● The initial call starts from A. The value of alpha here is -INFINITY and the value
of beta is +INFINITY. These values are passed down to subsequent nodes in the
tree. At A the maximizer must choose max of B and C, so A calls B first
● At B, the minimizer must choose the min of D and E, and hence calls D first.
● At D, it looks at its left child which is a leaf node. This node returns a value of 3.
Now the value of alpha at D is max( -INF, 3) which is 3.
● To decide whether it's worth looking at its right node or not, it checks the
condition beta <= alpha. This is false, since beta = +INF and alpha = 3. So it
continues the search.
● D now looks at its right child, which returns a value of 5. At D, alpha = max(3, 5),
which is 5. Now the value of node D is 5.
● D returns a value of 5 to B. At B, beta = min(+INF, 5), which is 5. The minimizer
is now guaranteed a value of 5 or less. B now calls E to see if it can get a lower
value than 5.
● At E the values of alpha and beta are not -INF and +INF but instead -INF and 5
respectively, because the value of beta was changed at B and that is
what B passed down to E.
● Now E looks at its left child which is 6. At E, alpha = max(-INF, 6) which is 6. Here
the condition becomes true. beta is 5 and alpha is 6. So beta<=alpha is true.
Hence it breaks and E returns 6 to B
● Note how it did not matter what the value of E's right child is. It could have been
+INF or -INF; it still wouldn't matter, because we never even had to look at it:
the minimizer was guaranteed a value of 5 or less. So as soon as the maximizer at E
saw the 6, it knew the minimizer would never come this way, because the minimizer
can get a 5 on the left side of B. This way we didn't have to look at that 9 and
hence saved computation time.
● E returns a value of 6 to B. At B, beta = min(5, 6), which is 5. The value of node B
is also 5.
So far this is how our game tree looks. The 9 is crossed out because it was never
computed.
● B returns 5 to A. At A, alpha = max( -INF, 5) which is 5. Now the maximizer is
guaranteed a value of 5 or greater. A now calls C to see if it can get a higher value
than 5.
● At C, alpha = 5 and beta = +INF. C calls F
● At F, alpha = 5 and beta = +INF. F looks at its left child, which is a 1.
alpha = max(5, 1), which is still 5.
● F looks at its right child, which is a 2. Hence the best value of this node is 2.
Alpha still remains 5.
● F returns a value of 2 to C. At C, beta = min(+INF, 2), which is 2. The condition
beta <= alpha becomes true, as beta = 2 and alpha = 5. So it breaks, and it does not
even have to compute the entire sub-tree of G.
● The intuition behind this cut-off is that at C the minimizer was guaranteed a
value of 2 or less. But the maximizer was already guaranteed a value of 5 if it
chose B. So why would the maximizer ever choose C and get a value of at most 2?
Again, you can see that it did not matter what those last two values were. We
also saved a lot of computation by skipping a whole sub-tree.
● C now returns a value of 2 to A. Therefore the best value at A is max( 5, 2) which
is a 5.
● Hence the optimal value that the maximizer can get is 5
This is how our final game tree looks. As you can see, G has been crossed out as it
was never computed.
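Finally, a minimal alpha-beta sketch in Python over the same kind of tree. The leaf
scores are assumptions matching the walkthrough above; the last two (under G) are
arbitrary, since they are pruned and never read.

# Minimax with alpha-beta pruning over a perfect binary tree of leaf scores.
import math

def alpha_beta(depth, node_index, is_maximizer, scores, height, alpha, beta):
    if depth == height:                  # leaf: return its static score
        return scores[node_index]
    best = -math.inf if is_maximizer else math.inf
    for child in (node_index * 2, node_index * 2 + 1):
        value = alpha_beta(depth + 1, child, not is_maximizer,
                           scores, height, alpha, beta)
        if is_maximizer:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if beta <= alpha:                # cut-off: this branch cannot matter
            break
    return best

scores = [3, 5, 6, 9, 1, 2, 0, -1]      # the 9, 0 and -1 end up pruned
print(alpha_beta(0, 0, True, scores, 3, -math.inf, math.inf))  # 5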