
INTRODUCTION

• Intelligence is important to us. For thousands of years, we have tried to understand how we think and act—that is, how our brain, a mere handful of matter, can perceive, understand, predict, and manipulate a world far larger and more complicated than itself.

• The field of artificial intelligence, or AI, is concerned with not just understanding but also building intelligent entities—machines that can compute how to act effectively and safely in a wide variety of novel situations.
AGENTS

An artificial intelligence (AI) agent is a software program that can interact with its environment, collect data, and use that data to perform self-determined tasks that meet predetermined goals. Humans set the goals, but the AI agent independently chooses the best actions to perform to achieve them.

An agent is anything that can be viewed as perceiving (sensing) its environment through sensors and acting upon that environment through actuators.
INTELLIGENT AGENTS
• A Human Agent has eyes, ears, and other organs for sensors, and hands, legs, a vocal tract, and so on for actuators.

• A Robotic Agent might have cameras and infrared range finders for sensors and various motors for actuators.

• A Software Agent receives file contents, network packets, and human input (keyboard/mouse/touch screen/voice) as sensory inputs and acts on the environment by writing files, sending network packets, and displaying information.
• Intelligent agents in AI are autonomous entities that act in an environment, using sensors and actuators to achieve their goals.

• In addition, intelligent agents may learn from the environment to achieve those goals. Example: driverless cars.

• In artificial intelligence, an agent is a computer program or system that is designed to perceive its environment, make decisions, and take actions to achieve a specific goal or set of goals.

• The agent operates autonomously, meaning it is not directly controlled by a human operator.
In this simple world, the vacuum cleaner agent has a location sensor and a dirt sensor, so it knows where it is (Room A or Room B) and whether the room is dirty.

It can go left, go right, suck, or idle.

A possible performance measure is to maximize the number of clean rooms over a certain period.

Figure 2.2 shows a configuration with just two squares, A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square.

The agent starts in square A.

The available actions are to move to the right, move to the left, suck up the dirt, or do nothing.

One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square.
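As a minimal sketch, this agent function can be written directly in Python (the percept encoding and action names are illustrative, not fixed by the text):

```python
def vacuum_agent(percept):
    """Simple reflex vacuum agent. percept = (location, status)."""
    location, status = percept
    if status == "Dirty":
        return "Suck"          # if the current square is dirty, suck
    return "Right" if location == "A" else "Left"  # otherwise move to the other square

print(vacuum_agent(("A", "Dirty")))  # Suck
print(vacuum_agent(("A", "Clean")))  # Right
```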
GOOD BEHAVIOR: THE CONCEPT OF RATIONALITY
A rational agent is one that does the right thing. Obviously, doing the right thing is better than doing the wrong thing, but what does it mean to do the right thing?

2.2.2 RATIONALITY
What is rational at any given time depends on four things:
1. The performance measure that defines the criterion of success.
2. The agent's prior knowledge of the environment.
3. The actions that the agent can perform.
4. The agent's percept sequence to date.

If clean squares can become dirty again, the agent should occasionally check and re-clean them if needed. If the geography of the environment is unknown, the agent will need to explore it.
LEARNING

A rational agent should not only gather information but also learn as much as possible from what it perceives.

The agent's initial configuration could reflect some prior knowledge of the environment, but as the agent gains experience this may be modified and augmented.

There are extreme cases in which the environment is completely known a priori and completely predictable. In such cases, the agent need not perceive or learn; it simply acts correctly.
2.3 THE NATURE OF ENVIRONMENTS
• To design an AI agent, the first step is to specify the task environment. The task environment is described by a PEAS (Performance, Environment, Actuators, Sensors) description.

Performance Measure
The performance measure defines the criterion of success for an agent. For an automated taxi, for example:
• Reaching the correct destination
• Minimizing fuel consumption and wear and tear
• Minimizing the trip time or cost
• Minimizing violations of traffic laws and disturbances to other drivers
• Maximizing safety and passenger comfort
• Maximizing profits
Environment
The environment refers to the agent's immediate surroundings at the time the agent is working in that environment. Depending on the mobility of the agent, it might be static or dynamic. The required sensors and behaviours of the agent will also change in response to even a slight change in the surroundings.

Actuators
Agents rely on actuators to act in their surroundings. Display boards, object-picking arms, track-changing devices, and so on are examples of actuators. The environment can change as a result of the actions taken by agents.

Sensors
Sensors provide agents with a comprehensive collection of inputs. Various sensing devices, such as cameras, GPS receivers, and odometers, are examples of sensors.
2.3.2 PROPERTIES OF TASK ENVIRONMENTS
The range of task environments that might arise in AI is obviously vast.

• Fully observable vs. partially observable
• Single-agent vs. multiagent
• Deterministic vs. nondeterministic
• Episodic vs. sequential
• Static vs. dynamic
• Discrete vs. continuous
• Known vs. unknown
FULLY OBSERVABLE VS. PARTIALLY OBSERVABLE

• Fully observable – the agent has complete information about the current state of the environment.

• Partially observable – the agent does not have complete information about the current state of the environment.

Example
• Chess – the board is fully observable, and so are the opponent's moves.

• Driving – partially observable, because the driver cannot see everything around the vehicle at once.
SINGLE-AGENT VS. MULTIAGENT

• A single-agent environment consists of only one agent. A person left alone in a maze is an example of a single-agent system.

• A multiagent environment involves more than one agent. The game of football is multiagent, as it involves 11 players in each team.

• Cooperative: all agents work towards a common goal.

• Competitive: agents work against each other, as in chess.
DETERMINISTIC VS. NONDETERMINISTIC

• A deterministic algorithm, for a given particular input, will always produce the same output, going through the same sequence of steps.

• A non-deterministic algorithm may produce different outputs for the same input on different runs: the next step is not uniquely determined, and there is a degree of randomness to its behaviour.

Example
• The vacuum world as we described it is deterministic.
• Taxi driving is clearly nondeterministic in this sense, because one can never predict the behaviour of traffic exactly.
EPISODIC VS. SEQUENTIAL

• In an episodic task environment, the agent's experience is divided into atomic incidents or episodes. There is no dependency between current and previous incidents.

Example
• Consider a pick-and-place robot that detects defective parts on a conveyor belt. The robot (agent) makes each decision based only on the current part; there is no dependency between current and previous decisions.

• In sequential environments, on the other hand, the current decision could affect all future decisions.

Example
• Chess and taxi driving are sequential: in both cases, short-term actions can have long-term consequences.

Episodic environments are much simpler than sequential environments.
STATIC VS. DYNAMIC
• An environment with no change in its state while the agent is deliberating is called a static environment.

Example
• An empty house is static, as there is no change in the surroundings when an agent enters.

• An environment that keeps changing while the agent is acting is said to be dynamic.

Example
• A roller coaster ride is dynamic, as it is set in motion and the environment keeps changing every instant.
DISCRETE VS. CONTINUOUS

• A discrete environment consists of a finite number of distinct percepts and actions.

Example
• Chess is discrete, as it has only a finite number of possible moves. The number of moves might vary with every game, but it is still finite.

• In a continuous environment, the percepts and actions cannot be enumerated; they range over continuous values.

Example
• Self-driving cars operate in a continuous environment.
KNOWN VS. UNKNOWN
Known and unknown are not actually features of an environment, but of the agent's state of knowledge about it.

In a known environment, the results of all actions are known to the agent. Note that a known environment can still be partially observable.

Example
• In solitaire card games, I know the rules but am still unable to see the cards that have not yet been turned over.

In an unknown environment, the agent has to gain knowledge about how the environment works before it can make good decisions.

Example
• In a new video game, the screen may show the entire game state, but I still don't know what the buttons do until I try them.
2.4 THE STRUCTURE OF AGENTS
• To understand the structure of intelligent agents, we should be familiar with the architecture and the agent program. The architecture is the machinery that the agent executes on: a device with sensors and actuators, for example a robotic car, a camera, or a PC.

• An agent program is an implementation of an agent function.

• An agent function is a map from the percept sequence to an action.

Agent = Architecture + Agent Program
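A minimal sketch of this in code: the agent program below implements an agent function by looking up the percept sequence seen so far in a table (a table-driven agent; the table entries for the vacuum world are illustrative):

```python
def table_driven_agent(table):
    """Build an agent program from a table mapping percept sequences to actions."""
    percepts = []                      # the percept sequence to date

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")

    return program

# Illustrative table for the two-square vacuum world
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = table_driven_agent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```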
There are many examples of agents in artificial intelligence:

Intelligent personal assistants: agents designed to help users with various tasks, such as scheduling appointments, sending messages, and setting reminders. Examples include Siri, Alexa, and Google Assistant.

Autonomous robots: agents designed to operate autonomously in the physical world. They can perform tasks such as cleaning, sorting, and delivering goods. Examples include the Roomba vacuum cleaner and the Amazon delivery robot.

Gaming agents: agents designed to play games, either against human opponents or other agents. Examples include chess-playing and poker-playing agents.
TYPES OF AGENTS
• Simple Reflex Agents
• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agents
Simple Reflex Agents

• They choose actions based only on the current percept. They are rational only if a correct decision can be made on the basis of the current percept alone, i.e., their environment is fully observable.

• Condition–Action Rule − a rule that maps a state (condition) to an action.

• For example, if a Mars lander found a rock in a specific place that it needed to collect, it would collect it. But a simple reflex agent finding the same kind of rock in a different place would still pick it up, because it does not take into account that it has already collected one.

Problems with simple reflex agents:

• Very limited intelligence.
• No knowledge of non-perceptual parts of the state.
• The rule table is usually too big to generate and store.
• If any change occurs in the environment, the collection of rules needs to be updated.
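As a sketch, a condition–action rule set can be represented as a list of (condition, action) pairs, where the first rule whose condition matches the current percept fires (the Mars-lander rules here are illustrative):

```python
def simple_reflex_agent(percept, rules):
    """Return the action of the first rule whose condition matches the percept."""
    for condition, action in rules:
        if condition(percept):
            return action
    return "NoOp"

# Illustrative rules: the agent reacts only to the current percept,
# so it would pick up the same kind of rock again and again.
rules = [
    (lambda p: p["sees_rock"], "PickUpRock"),
    (lambda p: p["obstacle_ahead"], "TurnLeft"),
]
print(simple_reflex_agent({"sees_rock": True, "obstacle_ahead": False}, rules))
# PickUpRock
```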
Model-Based Reflex Agents

• A model-based reflex agent uses internal memory and a percept history to build a model of the environment in which it is operating, and makes decisions based on that model. The term percept means something that has been observed or detected by the agent.

• Self-driving cars are a good example of a model-based reflex agent. The car is equipped with sensors that detect obstacles, such as the brake lights of cars in front of it or pedestrians walking on the sidewalk. As it drives, these sensors feed percepts into the car's memory and internal model of its environment.

• It works by finding a rule whose condition matches the current situation. A model-based agent can handle partially observable environments through its model of the world. Updating the state requires information about:

• How does the world evolve independently of the agent?

• How do the agent's actions affect the world?
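A minimal sketch of this structure, assuming a hypothetical update_state function that encodes both kinds of knowledge (how the world evolves and how the agent's actions affect it):

```python
class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = {}                   # internal model of the world
        self.last_action = None
        self.update_state = update_state  # assumed: folds percept + last action into the model
        self.rules = rules                # condition-action rules over the model

    def __call__(self, percept):
        # Unlike a simple reflex agent, the decision uses the internal state,
        # not just the current percept.
        self.state = self.update_state(self.state, self.last_action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "NoOp"
        return "NoOp"
```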
Goal-Based Agents
• A goal-based agent works to achieve a specific goal.
• It chooses the best strategy to achieve the goal based on the environment.
• Furthermore, it uses search algorithms to find the most efficient path to the goal.
• The goal-based agent's behaviour can easily be changed (by changing the goal).
• A simple example is a shopping list: our goal is to pick up everything on that list. This makes it easier to decide between milk and orange juice when we can only afford one. Since milk is on our shopping list and orange juice is not, we choose the milk.
Utility-Based Agents
• A utility-based agent acts based not only on what the goal is, but on the best way to reach that goal.
• When there are multiple possible alternatives, the agent has to choose the best action.
• A utility function measures how efficiently each action achieves the goals.
• For example, many action sequences will get the taxi to its destination (thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than others.
• Typically, a GPS system suggests the shortest path. However, unforeseen circumstances, such as traffic or roadblocks, can lead to an unhappy state where you are unable to reach your destination on time.
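In sketch form, a utility-based agent scores the predicted outcome of each candidate action with a utility function and picks the maximum (the route data and scoring weights below are invented for illustration):

```python
def best_action(actions, predict, utility):
    """Choose the action whose predicted resulting state has maximal utility."""
    return max(actions, key=lambda a: utility(predict(a)))

routes = ["highway", "back_roads"]
predicted = {"highway":    {"time": 20, "risk": 0.3},
             "back_roads": {"time": 35, "risk": 0.1}}
# Utility trades off speed against safety (weights are illustrative)
utility = lambda s: -s["time"] - 100 * s["risk"]
print(best_action(routes, predicted.__getitem__, utility))  # back_roads
```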
Learning Agent
• A learning agent learns from its past experiences, i.e., it has learning capabilities. It starts with basic knowledge and is then able to act and adapt automatically through learning.
• Example: a spam filter that learns from user feedback. It starts with basic knowledge and uses what it learns to act and adapt automatically.
A learning agent has four main conceptual components:
• Learning element: makes improvements by learning from the environment.
• Critic: provides feedback describing how well the agent is doing with respect to a fixed performance standard.
• Performance element: responsible for selecting external actions.
• Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
3.1 PROBLEM-SOLVING AGENTS
• A problem-solving agent solves complex problems or tasks in its environment.
• These agents are a fundamental concept in AI and are used in various applications, from game-playing algorithms to robotics and decision-making systems.
With that information, the agent can follow this four-phase problem-solving process.

Goal formulation
Goal formulation is the first and simplest step in problem solving. Based on the current situation and the agent's performance measure, it selects one goal out of (possibly) multiple goals, along with the actions needed to achieve that goal.

Problem formulation
Problem formulation is the process of deciding what actions and states to consider, given a goal.

Search
The process of looking for a sequence of actions that reaches the goal is called search. Before taking any action in the real world, the agent simulates sequences of actions in its model, searching until it finds a sequence that reaches the goal. Such a sequence is called a solution. The agent might have to simulate multiple sequences that do not reach the goal, but eventually it will either find a solution or find that no solution is possible. A search algorithm takes a problem as input and returns a solution in the form of an action sequence.

Execution
The agent can now execute the actions in the solution, one at a time.
Each node in the search tree keeps four components:
• node.STATE: the state to which the node corresponds;
• node.PARENT: the node in the tree that generated this node;
• node.ACTION: the action that was applied to the parent's state to generate this node;
• node.PATH-COST: the total cost of the path from the initial state to this node.
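These four fields map directly onto a small data structure; a minimal Python sketch, with a helper that walks PARENT pointers to recover the solution:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                        # node.STATE
    parent: Optional["Node"] = None   # node.PARENT
    action: Any = None                # node.ACTION
    path_cost: float = 0.0            # node.PATH-COST

def solution(node):
    """Follow PARENT pointers back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```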
The appropriate choice for the frontier is a queue of some kind, because the operations on a frontier are:

• IS-EMPTY(frontier): returns true only if there are no nodes in the frontier.
• POP(frontier): removes the top node from the frontier and returns it.
• TOP(frontier): returns (but does not remove) the top node of the frontier.
• ADD(node, frontier): inserts node into its proper place in the queue.
Three kinds of queues are used in search algorithms:

1. A priority queue first pops the node with the minimum cost according to some evaluation function, f. It is used in best-first search.

2. A FIFO queue (first-in-first-out queue) first pops the node that was added to the queue earliest; we shall see it is used in breadth-first search.

3. A LIFO queue (last-in-first-out queue, also known as a stack) pops first the most recently added node; we shall see it is used in depth-first search.

The reached states can be stored as a lookup table (e.g. a hash table) where each key is a state and each value is the node for that state.
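In Python, the three queue kinds map naturally onto heapq, collections.deque, and a plain list; a small sketch:

```python
import heapq
from collections import deque

# 1. Priority queue (best-first search): pop the node with minimum f-value.
frontier = []
heapq.heappush(frontier, (5, "B"))   # (f-value, state)
heapq.heappush(frontier, (2, "A"))
print(heapq.heappop(frontier))       # (2, 'A')

# 2. FIFO queue (breadth-first search): pop the node added earliest.
fifo = deque(["A", "B"])
fifo.append("C")
print(fifo.popleft())                # 'A'

# 3. LIFO queue / stack (depth-first search): pop the most recently added node.
stack = ["A", "B"]
stack.append("C")
print(stack.pop())                   # 'C'

# Reached states as a lookup table: each key is a state, each value its node.
reached = {"A": ("A", 0)}            # state -> (state, path_cost), illustrative
```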
3.3.2 Measuring problem-solving performance

• Completeness: Is the algorithm guaranteed to find a solution when there is one, and to correctly report failure when there is not?

• Cost optimality: Does it find a solution with the lowest path cost of all solutions?

• Time complexity: How long does it take to find a solution? This can be measured in seconds, or more abstractly by the number of states and actions considered.

• Space complexity: How much memory is needed to perform the search?
3.4 UNINFORMED SEARCH STRATEGIES
• Uninformed search is a class of general-purpose search algorithms that operate in a brute-force way.

• Uninformed search algorithms have no additional information about states or the search space other than how to traverse the tree, so they are also called blind search.
The following are the various types of uninformed search algorithms:
• Breadth-first search
• Dijkstra's algorithm (uniform cost search)
• Depth-first search
• Depth-limited search
• Iterative deepening depth-first search
• Bidirectional search
Breadth-first Search
Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, hence the name breadth-first search (BFS).

The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving on to nodes of the next level.

The breadth-first search algorithm is an example of a general graph-search algorithm.

Breadth-first search is implemented using a FIFO queue data structure.

Advantages
• BFS will find a solution if one exists.
• If there is more than one solution for a given problem, BFS will find the minimal solution, i.e. the one requiring the fewest steps.

Disadvantages
• It requires a lot of memory, since each level of the tree must be saved in memory in order to expand the next level.
• BFS needs a lot of time if the solution is far away from the root node.
Example expansion order (for the tree in the accompanying figure):
S → A → B → C → D → G → H → E → F → I → K
Input:
V = 5, E = 4
adj = {{1,2,3},{},{4},{},{}}

Output:
0 1 2 3 4

Explanation:
• 0 is connected to 1, 2, 3; 2 is connected to 4.
• Starting from 0, BFS visits 1, then 2, then 3.
• After this, it visits 4 (from 2).
• Thus the BFS order is 0 1 2 3 4.
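A sketch of BFS over an adjacency list, reproducing the example above (the function name is illustrative):

```python
from collections import deque

def bfs(adj, start):
    """Return vertices in breadth-first order using a FIFO queue."""
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        v = queue.popleft()          # expand the oldest node first (FIFO)
        order.append(v)
        for w in adj[v]:
            if w not in visited:     # enqueue each vertex at most once
                visited.add(w)
                queue.append(w)
    return order

adj = [[1, 2, 3], [], [4], [], []]   # the example graph above
print(bfs(adj, 0))                   # [0, 1, 2, 3, 4]
```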
2. Dijkstra's algorithm or Uniform Cost Search

• Used for traversing a weighted tree or graph.
• The primary goal of uniform cost search is to reach the goal node at the lowest cumulative cost.

Advantages
• Uniform cost search is optimal, because at every state the path with the least cost is chosen.

Disadvantages
• It does not care about the number of steps involved in the search and is only concerned with path cost. As a result, the algorithm may get stuck in an infinite loop (e.g. if there is a cycle of zero-cost actions).
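A sketch of uniform cost search with a priority queue ordered by cumulative path cost g(n); the weighted graph below is illustrative and reproduces the S-to-G result shown next:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """graph: dict mapping state -> list of (neighbor, step_cost) pairs."""
    frontier = [(0, start)]              # priority queue ordered by path cost
    best_cost = {start: 0}
    while frontier:
        cost, state = heapq.heappop(frontier)
        if state == goal:
            return cost                  # the first goal popped has the lowest cost
        if cost > best_cost.get(state, float("inf")):
            continue                     # stale entry: a cheaper path was found
        for neighbor, step in graph[state]:
            new_cost = cost + step
            if new_cost < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost, neighbor))
    return None

# Illustrative weighted graph in which the cheapest S -> G path costs 3
graph = {"S": [("A", 1), ("G", 12)], "A": [("G", 2)], "G": []}
print(uniform_cost_search(graph, "S", "G"))  # 3
```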
Example output (for the weighted graph in the accompanying figure):
Minimum cost from S to G is 3
3. Depth-first Search

• Depth-first search (DFS) is a recursive algorithm for traversing a tree or graph data structure.
• In this algorithm, a starting vertex is given, and when an adjacent vertex is found, DFS moves to that adjacent vertex first and tries to traverse in the same manner.
• It moves through the whole depth, as far as it can go; after that, it backtracks to previous vertices to find a new path.

Advantages
• DFS requires much less memory than BFS, since it only needs to store the nodes on the current path.
• It can take less time to reach the goal node than BFS if it happens to search the right branch first.

Disadvantages
• There is no guarantee of finding a solution.
• It may descend into an infinite loop (e.g. on infinite or cyclic state spaces).
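A recursive DFS sketch over an adjacency list; the visited set is what prevents the infinite-loop problem noted above on cyclic graphs:

```python
def dfs(adj, v, visited=None, order=None):
    """Recursive depth-first traversal starting from vertex v."""
    if visited is None:
        visited, order = set(), []
    visited.add(v)
    order.append(v)
    for w in adj[v]:                 # go as deep as possible first,
        if w not in visited:         # backtracking only when stuck
            dfs(adj, w, visited, order)
    return order

adj = [[1, 2], [3], [], [4], []]     # illustrative graph
print(dfs(adj, 0))                   # [0, 1, 3, 4, 2]
```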
4. Depth-limited Search

• A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit.
• Depth-limited search can solve the infinite-path drawback of depth-first search.
• In this algorithm, a node at the depth limit is treated as if it had no successor nodes.

Advantages
• Depth-limited search is memory efficient.

Disadvantages
• Depth-limited search has the disadvantage of incompleteness.
• It may not be optimal if the problem has more than one solution.
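A sketch of depth-limited search on a tree: a node at the depth limit is treated as if it had no successors, and a distinct 'cutoff' result signals that the limit, rather than the search space, was exhausted (the tree and names are illustrative):

```python
def depth_limited_search(successors, state, goal, limit):
    """Return a path to goal, 'cutoff' if the depth limit was hit, or None."""
    if state == goal:
        return [state]
    if limit == 0:
        return "cutoff"              # depth limit reached: expand no successors
    cutoff_occurred = False
    for child in successors(state):
        result = depth_limited_search(successors, child, goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return "cutoff" if cutoff_occurred else None

# Illustrative tree: A -> B, C and B -> D
tree = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
succ = lambda s: tree.get(s, [])
print(depth_limited_search(succ, "A", "D", limit=2))  # ['A', 'B', 'D']
print(depth_limited_search(succ, "A", "D", limit=1))  # cutoff
```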
Iterative Deepening Depth-first Search
• IDDFS is a combination of the DFS and BFS algorithms: it performs repeated depth-limited searches, increasing the limit until a goal is found.
• It retains DFS's memory efficiency.
• Iterative deepening is a useful uninformed search strategy when the search space is large and the depth of the goal node is unknown.

Advantages
• It combines the benefits of BFS and DFS in terms of fast search and memory efficiency.

Disadvantages
• The main drawback of IDDFS is that it repeats all the work of the previous phase.
1st iteration → A
2nd iteration → A, B, C
3rd iteration → A, B, D, E, C, F, G
4th iteration → A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm finds the goal node.
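IDDFS simply wraps depth-limited search in a loop over increasing limits; the sketch below reuses the depth_limited_search function from the previous example:

```python
def iterative_deepening_search(successors, start, goal, max_depth=50):
    """Run depth-limited search with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(successors, start, goal, limit)
        if result != "cutoff":       # either a solution or a proof of failure
            return result
    return None

tree = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(iterative_deepening_search(lambda s: tree.get(s, []), "A", "D"))
# ['A', 'B', 'D'] -- found at limit 2, after repeating the work of limits 0 and 1
```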
Bidirectional Search Algorithm

The bidirectional search algorithm runs two simultaneous searches:

1. One from the initial state, called the forward search, and
2. One from the goal node, called the backward search.

Bidirectional search replaces a single search graph with two smaller subgraphs:

• One starts the search from the initial vertex.
• The other starts from the goal vertex.

The search stops when these two graphs intersect. Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.

Advantages
• Bidirectional search is fast.
• Bidirectional search requires less memory.

Disadvantages
• Implementation of the bidirectional search tree is difficult.
• In bidirectional search, one must know the goal state in advance.
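A sketch of bidirectional search using BFS from both ends of an undirected graph; it reports the shortest-path length once the frontiers meet (a production version would alternate the searches level by level and reconstruct the full path):

```python
from collections import deque

def bidirectional_search(adj, start, goal):
    """Alternate BFS from start and goal; return the path length where they meet."""
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}     # distances from each end
    q_f, q_b = deque([start]), deque([goal])
    while q_f and q_b:
        for q, dist, other in ((q_f, dist_f, dist_b), (q_b, dist_b, dist_f)):
            v = q.popleft()
            for w in adj[v]:
                if w in other:                 # the two frontiers intersect
                    return dist[v] + 1 + other[w]
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
    return None

# Illustrative chain graph 0 - 1 - 2 - 3 - 4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(bidirectional_search(adj, 0, 4))  # 4
```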
UNIT II

INFORMED (HEURISTIC) SEARCH STRATEGIES
• Informed search algorithms contain information about the goal state. This helps the AI search more efficiently and accurately.

• A heuristic function uses this information to estimate how close a state is to the goal.

• HEURISTIC: solving a problem in a quick way that delivers a result good enough to be useful given the time constraints.
Informed search strategies include:
• Greedy best-first search
• A* search
• Memory-bounded search:
  – Iterative-deepening A* search
  – Recursive best-first search
Greedy best-first search

Greedy best-first search is a form of best-first search that expands first the node with the lowest h(n) value—the node that appears to be closest to the goal—on the grounds that this is likely to lead to a solution quickly. So the evaluation function is f(n) = h(n). Consider the graph given below.
An example of the best-first search algorithm is the graph below, where we have to find a path from P to S.

• In this example, the cost is measured strictly using the heuristic value, in other words, how close the node is to the target.

• C has the lowest cost of 6. Therefore, the search continues via C.

• U has the lowest cost compared to M and R, so the search continues by exploring U. Finally, S has a heuristic value of 0, since it is the target node.

The total cost for the path (P → C → U → S) evaluates to 11. The potential problem with greedy best-first search is revealed by the path (P → R → E → S), which has a cost of 10, lower than (P → C → U → S). Greedy best-first search ignored this path because it does not consider edge weights.
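A sketch of greedy best-first search ordering the frontier purely by h(n); the graph and heuristic values below are illustrative, loosely following the P → C → U → S example, and show how edge weights are ignored:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the node with the lowest heuristic value h(n); ignore edge costs."""
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for neighbor in graph[state]:
            if neighbor not in visited:
                heapq.heappush(frontier, (h[neighbor], neighbor, path + [neighbor]))
    return None

graph = {"P": ["C", "R"], "C": ["U", "M"], "R": ["E"], "U": ["S"],
         "M": [], "E": ["S"], "S": []}
h = {"P": 10, "C": 6, "R": 8, "U": 4, "M": 9, "E": 5, "S": 0}
print(greedy_best_first(graph, h, "P", "S"))  # ['P', 'C', 'U', 'S']
```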
A* SEARCH ALGORITHM

• It is mainly used to find the shortest path between two nodes in a graph, given the estimated cost of getting from the current node to the destination node.

• The A* algorithm combines the advantages of two other search algorithms: Dijkstra's algorithm and greedy best-first search.

• The A* evaluation function is
  f(n) = g(n) + h(n), where
  g(n) is the path cost from the start node to n (the past knowledge acquired while searching), and
  h(n) is the heuristic function estimating the cost from n to the goal.
Worked example (figures omitted): the search explores S first; A is then the most promising path, so it is explored next; then D is explored, and then F.
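A sketch of A* with a priority queue ordered by f(n) = g(n) + h(n); the graph and (admissible) heuristic values are illustrative:

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: state -> list of (neighbor, step_cost); h: heuristic estimates."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for neighbor, step in graph[state]:
            new_g = g + step                     # g(n): cost accumulated so far
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                new_f = new_g + h[neighbor]      # f(n) = g(n) + h(n)
                heapq.heappush(frontier, (new_f, new_g, neighbor, path + [neighbor]))
    return None

# Illustrative graph: two routes from S to G; A* picks the cheaper one
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 5, "B": 1, "G": 0}
print(a_star(graph, h, "S", "G"))  # (['S', 'B', 'G'], 5)
```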
4.1.1 HILL-CLIMBING SEARCH
• Hill climbing is a simple optimization algorithm used in Artificial Intelligence (AI) to find the best possible solution for a given problem.

• It belongs to the family of local search algorithms and is often used in optimization problems where the goal is to find the best solution from a set of possible solutions.

• In hill climbing, the algorithm starts with an initial solution and then iteratively makes small changes to it in order to improve it. These changes are based on a heuristic function that evaluates the quality of the solution.

• The algorithm continues to make these small changes until it reaches a local maximum, meaning that no further improvement can be made from the current solution.
• One widely discussed example of the hill-climbing algorithm is the travelling-salesman problem, in which we need to minimize the distance travelled by the salesman. Hill climbing is also called greedy local search, as it only looks at its immediate neighbour states and not beyond them.
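A generic hill-climbing sketch: repeatedly move to the best immediate neighbour and stop at a local maximum (the objective function and neighbour generator below are invented for illustration):

```python
def hill_climbing(initial, neighbors, value):
    """Greedy local search: climb while some neighbor improves the value."""
    current = initial
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current           # local maximum: no improving neighbor
        current = best

# Illustrative 1-D landscape f(x) = -(x - 3)^2, maximized at x = 3
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climbing(0, neighbors, value))  # 3
```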
