• A human agent has eyes, ears, and other organs for sensors, and hands, legs, vocal tract, and so on for actuators.
Figure 2.2 shows a configuration with just two squares, A and B. The
vacuum agent perceives which square it is in and whether there is
dirt in the square.
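The vacuum agent's behaviour can be captured in a few lines. A minimal sketch (the function name and percept encoding here are illustrative, not from the text):

```python
# A minimal sketch of the two-square vacuum agent.
# Percept: (location, status), e.g. ('A', 'Dirty').
def vacuum_agent(percept):
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:
        return 'Left'

print(vacuum_agent(('A', 'Dirty')))  # Suck
print(vacuum_agent(('A', 'Clean')))  # Right
```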
Performance Measure
A performance measure defines the criteria for the success of an agent's behaviour. For an automated taxi driver, for example, the desirable qualities include:
• Getting to the correct destination
• Minimizing fuel consumption and wear and tear
• Minimizing the trip time or cost
• Minimizing violations of traffic laws and disturbances to other drivers
• Maximizing safety and passenger comfort
• Maximizing profits
Environment
The environment is the agent's immediate surroundings at the time the agent is operating. Depending on the mobility of the agent, the environment may be static or dynamic. The sensors and behaviours the agent needs will also change in response to even a slight change in the surroundings.
Actuators
Agents rely on actuators to function in their surroundings.
Display boards, object-picking arms, track-changing devices, etc. are examples of actuators.
The environment can alter as a result of actions taken by agents.
Sensors
Sensors provide agents with a comprehensive collection of inputs from the environment. Various sensing devices, such as cameras, GPS receivers, and odometers, are examples of sensors.
2.3.2 PROPERTIES OF TASK ENVIRONMENTS
The range of task environments that might arise in AI is
obviously vast.
FULLY OBSERVABLE VS. PARTIALLY OBSERVABLE
If an agent's sensors give it access to the complete state of the environment at each point in time, the task environment is fully observable; otherwise it is partially observable.
Example
• Chess – the board is fully observable, and so are the opponent's moves.
DETERMINISTIC VS. NONDETERMINISTIC
If the next state of the environment is completely determined by the current state and the agent's action, the environment is deterministic; otherwise it is nondeterministic.
Example
• The vacuum world as we described it is deterministic.
• Taxi driving is clearly nondeterministic in this sense, because one can never predict the behaviour of traffic exactly.
EPISODIC VS. SEQUENTIAL
In an episodic environment, the agent's experience is divided into atomic episodes, and the next episode does not depend on the actions taken in previous ones; in a sequential environment, the current decision can affect all future decisions.
Example
• Consider a pick-and-place robot used to detect defective parts on a conveyor belt. Every time, the robot (agent) makes its decision based on the current part alone, i.e. there is no dependency between the current decision and previous ones.
Example
• Chess and taxi driving are sequential: in both cases, short-term actions can
have long-term consequences.
Episodic environments are much simpler than sequential environments.
STATIC VS. DYNAMIC
• An idle environment, with no change in its state while the agent is acting, is called a static environment; an environment that can change while the agent is deliberating is dynamic.
Example
• An empty house is static as there’s no change in the
surroundings when an agent enters.
Example
• A roller coaster ride is dynamic, as it is set in motion and the environment keeps changing every instant.
DISCRETE VS. CONTINUOUS
If there are a finite number of distinct states, percepts, and actions, the environment is discrete; otherwise it is continuous.
Example
• Chess is discrete: it has a finite number of distinct states, percepts, and actions.
Example
• Taxi driving is continuous: the speed and location of the taxi sweep through a range of continuous values.
KNOWN VS. UNKNOWN
In a known environment, the results of all actions are known to the agent; in an unknown environment, the agent has to learn how it works.
Example
• In solitaire card games, I know the rules but am still unable to see the cards that have not yet been turned over.
Example
• In a new video game, the screen may show the entire game state, but I still don't know what the buttons do until I try them.
2.4 THE STRUCTURE OF AGENTS
• To understand the structure of intelligent agents, we should be familiar with architecture and agent programs. The architecture is the machinery that the agent executes on: a device with sensors and actuators, for example a robotic car, a camera, or a PC. An agent is then the combination of the two: agent = architecture + program.
There are many kinds of agents in artificial intelligence.
Simple reflex agents
• They choose actions based only on the current percept. They are rational only if a correct decision can be made on the basis of the current percept alone, which requires the environment to be fully observable.
• For example, if a Mars lander needed to collect a rock found in a specific place, it would collect it. A simple reflex agent that then found the same rock in a different place would still pick it up, because it does not take into account that it has already collected one.
Model-based reflex agents
• Self-driving cars are a great example of a model-based reflex agent. The car is equipped with sensors that detect obstacles, such as the brake lights of cars in front of it or pedestrians walking on the sidewalk. As it drives, these sensors feed percepts into the car's memory and its internal model of the environment.
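A minimal sketch of this idea, with hypothetical percept keys and condition-action rules (none of these names come from the text): the agent first folds each percept into its internal model, then applies its rules against that model rather than against the raw percept alone.

```python
# A minimal sketch of a model-based reflex agent (all names are illustrative).
class ModelBasedReflexAgent:
    def __init__(self):
        self.model = {}  # internal model of the world, updated from percepts

    def update_model(self, percept):
        # Remember the latest reading for each sensed attribute.
        self.model.update(percept)

    def rule_match(self):
        # Condition-action rules consult the model, not just the raw percept.
        if self.model.get('obstacle_ahead'):
            return 'brake'
        if self.model.get('brake_lights_ahead'):
            return 'slow_down'
        return 'drive'

    def act(self, percept):
        self.update_model(percept)
        return self.rule_match()

agent = ModelBasedReflexAgent()
print(agent.act({'brake_lights_ahead': True}))  # slow_down
print(agent.act({'obstacle_ahead': True}))      # brake
```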
With that information, the agent can follow this four-phase problem-solving process.
Goal formulation
Goal Formulation: This is the first and simplest step in problem solving. It organizes the steps/sequence required to formulate one goal out of multiple goals, as well as the actions to achieve that goal. Goal formulation is based on the current situation and the agent's performance measure.
Problem formulation
Problem formulation is the process of deciding what actions and states to consider, given a
goal. The process of looking for a sequence of actions that reaches the goal is called
search. A search algorithm takes a problem as input and returns a solution in the form of
an action sequence.
Search
Before taking any action in the real world, the agent simulates sequences of actions in its
model, searching until it finds a sequence of actions that reaches the goal. Such a sequence
is called a solution. The agent might have to simulate multiple sequences that do not reach
the goal, but eventually it will find a solution or it will find that no solution is possible.
Execution
The agent can now execute the actions in the solution, one at a time.
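The four phases above assume the problem itself is represented somehow. A minimal sketch of such a representation (the class and method names are assumptions for illustration, not a fixed interface):

```python
# A minimal sketch of a problem formulation (names are illustrative).
class Problem:
    def __init__(self, initial, goal):
        self.initial = initial
        self.goal = goal

    def actions(self, state):
        """Return the actions applicable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Return the state reached by doing `action` in `state`."""
        raise NotImplementedError

    def action_cost(self, state, action, new_state):
        return 1  # default: every step costs 1

    def is_goal(self, state):
        return state == self.goal
```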
• node.STATE: the state to which the
node corresponds;
• node.PARENT: the node in the tree
that generated this node;
• node.ACTION: the action that was
applied to the parent’s state to
generate this node
• node.PATH-COST: the total cost of the
path from the initial state to this node
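These four components map directly onto a small data structure. A minimal sketch, assuming the Problem interface sketched earlier:

```python
# A minimal sketch of the node data structure described above.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # node.STATE
        self.parent = parent        # node.PARENT
        self.action = action        # node.ACTION
        self.path_cost = path_cost  # node.PATH-COST

def expand(problem, node):
    """Yield the child nodes of `node` (assumes the Problem sketch above)."""
    for action in problem.actions(node.state):
        s = problem.result(node.state, action)
        cost = node.path_cost + problem.action_cost(node.state, action, s)
        yield Node(s, node, action, cost)
```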
The appropriate choice is a queue of some kind, because the operations on a frontier are:
• IS-EMPTY(frontier): returns true only if there are no nodes in the frontier
• POP(frontier): removes the top node from the frontier and returns it
• TOP(frontier): returns (but does not remove) the top node of the frontier
• ADD(node, frontier): inserts node into its proper place in the queue
Breadth-first Search
Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search. The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving on to nodes of the next level.
Advantages
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e. the one that requires the least number of steps.
Disadvantages
• It requires lots of memory since each level of the tree must be saved into memory to
expand the next level.
• BFS needs lots of time if the solution is far away from the root node.
BFS traversal order for the example tree: S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K
Input:
V = 5, E = 4
adj = {{1,2,3},{},{4},{},{}}
Output:
0 1 2 3 4
Explanation:
• 0 is connected to 1, 2, 3; 2 is connected to 4.
• So, starting from 0, BFS will go to 1, then 2, then 3.
• After this, from 2 it goes to 4.
• Thus the BFS order is 0 1 2 3 4.
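A minimal BFS sketch over this adjacency-list example; the FIFO queue is what enforces the level-by-level expansion:

```python
from collections import deque

# A minimal sketch of BFS over the adjacency list from the example above.
def bfs(adj, start):
    visited = [False] * len(adj)
    order = []
    queue = deque([start])
    visited[start] = True
    while queue:
        u = queue.popleft()       # FIFO queue => level-by-level expansion
        order.append(u)
        for v in adj[u]:
            if not visited[v]:
                visited[v] = True
                queue.append(v)
    return order

adj = [[1, 2, 3], [], [4], [], []]
print(bfs(adj, 0))  # [0, 1, 2, 3, 4]
```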
2. Dijkstra's algorithm or Uniform cost search
Uniform cost search expands the node with the lowest path cost g(n), using a priority queue ordered by path cost.
Advantages
• Uniform cost search is optimal, because at every step the path with the least cost is chosen.
Disadvantages
• It does not care about the number of steps involved in the search, only about path cost. As a result, the algorithm may get stuck in an infinite loop (for example, when there are zero-cost actions).
Output:
Minimum cost from S to G is 3
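A minimal sketch of uniform cost search follows. The slide's graph figure is not reproduced here, so the graph below is an illustrative one chosen to give the same answer (minimum cost 3 from S to G):

```python
import heapq

# A minimal sketch of uniform cost search (the graph is illustrative).
def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]   # priority queue ordered by path cost
    explored = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        if state in explored:
            continue
        explored.add(state)
        for neighbour, step_cost in graph.get(state, []):
            if neighbour not in explored:
                heapq.heappush(frontier,
                               (cost + step_cost, neighbour, path + [neighbour]))
    return None

graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 2)], 'B': [('G', 1)]}
print(uniform_cost_search(graph, 'S', 'G'))  # (3, ['S', 'A', 'G'])
```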
3. Depth-first Search
• It moves through the whole depth of a branch, as far as it can go, and after that it backtracks to previous vertices to find a new path.
Advantages
• DFS requires very little memory, since it only stores the nodes on the current path.
• It can take less time to reach the goal node than BFS, if it happens to traverse the right path.
Disadvantages
• There is no guarantee of finding the solution.
• It may go into an infinite loop.
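A minimal recursive DFS sketch (the adjacency list is illustrative):

```python
# A minimal sketch of recursive depth-first search over an adjacency list.
def dfs(adj, start, visited=None, order=None):
    if visited is None:
        visited, order = set(), []
    visited.add(start)
    order.append(start)
    for v in adj.get(start, []):
        if v not in visited:
            dfs(adj, v, visited, order)   # go deep first, backtrack on return
    return order

adj = {'S': ['A', 'B'], 'A': ['C'], 'B': [], 'C': []}
print(dfs(adj, 'S'))  # ['S', 'A', 'C', 'B']
```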
4. A depth-limited search algorithm
• Depth-limited search can solve the drawback of the infinite path in the Depth-
first search.
• In this algorithm, the node at the depth limit is treated as if it has no further successor nodes.
Advantages
• Depth-limited search is memory efficient.
Disadvantages
• Depth-limited search also has the disadvantage of incompleteness.
• It may not be optimal if the problem has more than one solution.
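A minimal sketch of depth-limited search, using a 'cutoff' result to distinguish "the limit was reached" from outright failure (the names and the example tree are illustrative):

```python
# A minimal sketch of depth-limited search; 'cutoff' signals that the
# limit was reached, as distinct from outright failure.
def depth_limited_search(adj, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'   # treat the node as having no successors
    result = 'failure'
    for child in adj.get(node, []):
        outcome = depth_limited_search(adj, child, goal, limit - 1)
        if outcome == 'cutoff':
            result = 'cutoff'
        elif outcome != 'failure':
            return [node] + outcome
    return result

adj = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
print(depth_limited_search(adj, 'A', 'D', 2))  # ['A', 'B', 'D']
print(depth_limited_search(adj, 'A', 'D', 1))  # cutoff
```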
Iterative deepening depth-first Search
• A combination of the DFS and BFS algorithms.
• It performs depth-limited search with increasing limits until a goal is found.
• It retains DFS's memory efficiency.
• This iterative uninformed search is useful when the search space is large and the depth of the goal node is unknown.
Advantages
• It combines the benefits of the BFS and DFS search algorithms in terms of fast search and memory efficiency.
Disadvantages
• The main drawback of IDDFS is that it repeats all the work of the previous phase.
1st iteration --> A
2nd iteration --> A, B, C
3rd iteration --> A, B, D, E, C, F, G
4th iteration --> A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
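A minimal sketch of iterative deepening. The adjacency list encodes the tree from the iteration trace above (A's children are B and C, and so on), with K assumed, for illustration, to be the goal node:

```python
# A minimal sketch of iterative deepening depth-first search: repeat a
# depth-limited search with increasing limits until the goal is found.
def dls(adj, node, goal, limit):
    # depth-limited search, as sketched in the previous section
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'
    result = 'failure'
    for child in adj.get(node, []):
        outcome = dls(adj, child, goal, limit - 1)
        if outcome == 'cutoff':
            result = 'cutoff'
        elif outcome != 'failure':
            return [node] + outcome
    return result

def iddfs(adj, start, goal, max_depth=50):
    for limit in range(max_depth + 1):
        outcome = dls(adj, start, goal, limit)
        if outcome not in ('cutoff', 'failure'):
            return outcome   # a path to the goal
    return 'failure'

# Tree from the iteration trace above; K is assumed to be the goal node.
adj = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
       'D': ['H', 'I'], 'F': ['K']}
print(iddfs(adj, 'A', 'K'))  # ['A', 'C', 'F', 'K'], found at depth limit 3
```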
Bidirectional Search Algorithm
Bidirectional search replaces one single search graph with two smaller subgraphs, in which one search runs forward from the initial state and the other runs backward from the goal state. The search stops when these two graphs intersect each other. Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Advantages
• Bidirectional search is fast.
• It requires less memory.
Disadvantages
• The implementation of the bidirectional search tree is difficult.
• One should know the goal state in advance.
INFORMED (HEURISTIC) SEARCH STRATEGIES
• The algorithms of an informed search contain information regarding the goal state. This helps an AI agent make more efficient and accurate searches.
INFORMED SEARCH
[Diagram: informed search strategies, including iterative-deepening A* search and recursive best-first search.]
Greedy best-first search
Greedy best-first search expands the node that appears to be closest to the goal, evaluating nodes by the heuristic value alone: f(n) = h(n).
An example of the greedy best-first search algorithm is the graph below; suppose we have to find a path from P to S.
• In this example, the cost is measured strictly using the heuristic value; in other words, by how close a node is to the target.
• C has the lowest cost of 6. Therefore, the
search will continue like so
U has the lowest cost compared to M and R, so the search will continue by
exploring U. Finally, S has a heuristic value of 0 since that is the target node:
The total cost for the path (P -> C -> U -> S) evaluates to 11. The potential
problem with a greedy best-first search is revealed by the path (P -> R -> E -
> S) having a cost of 10, which is lower than (P -> C -> U -> S). Greedy best-first
search ignored this path because it does not consider the edge weights.
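A minimal sketch of greedy best-first search on this example. The heuristic values for C (6) and S (0) come from the text; the remaining values and the edge lists are assumptions chosen to be consistent with the described exploration order:

```python
import heapq

# A minimal sketch of greedy best-first search; h values for M, R, U, E
# and the edge lists are illustrative (only h(C)=6 and h(S)=0 are given).
def greedy_best_first(graph, h, start, goal):
    frontier = [(h[start], start, [start])]   # ordered by h(n) only
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for neighbour in graph.get(state, []):
            if neighbour not in visited:
                heapq.heappush(frontier, (h[neighbour], neighbour,
                                          path + [neighbour]))
    return None

graph = {'P': ['C', 'R'], 'C': ['M', 'U'], 'R': ['E'], 'U': ['S'], 'E': ['S']}
h = {'P': 10, 'C': 6, 'R': 8, 'M': 7, 'U': 4, 'E': 5, 'S': 0}
print(greedy_best_first(graph, h, 'P', 'S'))  # ['P', 'C', 'U', 'S']
```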
A* SEARCH ALGORITHM
• A* search is the most widely known form of best-first search. It evaluates nodes using
• f(n) = g(n) + h(n), where
• h(n) is the heuristic function, estimating the cost from node n to the goal, and
• g(n) is the cost of the path from the start node to n, i.e. the knowledge acquired so far while searching.
[Worked-example figures: exploring S, then exploring D, then exploring F.]
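A minimal A* sketch. The graph and heuristic below are illustrative (the slide's figure is not reproduced), chosen so that the nodes are explored in the order S, D, F as in the figures above:

```python
import heapq

# A minimal sketch of A* search; the graph and heuristic are illustrative.
def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # ordered by f(n) = g(n) + h(n)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for neighbour, step_cost in graph.get(state, []):
            new_g = g + step_cost
            if new_g < best_g.get(neighbour, float('inf')):
                best_g[neighbour] = new_g
                heapq.heappush(frontier, (new_g + h[neighbour], new_g,
                                          neighbour, path + [neighbour]))
    return None

graph = {'S': [('D', 1), ('F', 4)], 'D': [('F', 2), ('G', 6)], 'F': [('G', 1)]}
h = {'S': 4, 'D': 2, 'F': 1, 'G': 0}
print(a_star(graph, h, 'S', 'G'))  # (4, ['S', 'D', 'F', 'G'])
```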
4.1.1 HILL-CLIMBING SEARCH
• Hill climbing is a simple optimization algorithm used in
Artificial Intelligence (AI) to find the best possible solution
for a given problem.
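A minimal hill-climbing sketch on a toy one-dimensional objective (the objective and neighbourhood are illustrative): start somewhere, repeatedly move to the best neighbour, and stop when no neighbour is better, i.e. at a local maximum:

```python
# A minimal sketch of hill climbing on a toy 1-D objective.
def hill_climbing(objective, start, step=1):
    current = start
    while True:
        neighbours = [current - step, current + step]
        best = max(neighbours, key=objective)
        if objective(best) <= objective(current):
            return current            # local maximum: no uphill neighbour
        current = best

# Maximize f(x) = -(x - 3)^2, whose peak is at x = 3.
print(hill_climbing(lambda x: -(x - 3) ** 2, start=0))  # 3
```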