AI Chapter 2

The document discusses intelligent agents in artificial intelligence, defining them as systems that perceive their environment and act upon it using sensors and actuators. It categorizes agents into human, robotic, and software types, and outlines the fundamental rules and properties of AI agents, including their rationality and performance measures. Additionally, it describes different types of agent programs based on their intelligence levels, such as simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents.


MIZAN TEPI UNIVERSITY

SCHOOL OF COMPUTING AND INFORMATICS


DEPARTMENT OF SOFTWARE ENGINEERING

ARTIFICIAL INTELLIGENCE

Chapter 2
Intelligent
Agents

By: Melkamu D.
Intelligent agents

Agents in Artificial Intelligence

 AI can be described as the study of rational agents and their environments.
 An agent senses its environment through sensors and acts on that environment
through actuators.
 An AI agent can have mental properties such as knowledge, belief, and intention.
 An agent is anything that perceives its environment through sensors and acts
upon that environment through actuators.
 An agent runs in a cycle of perceiving, thinking, and acting.

 An agent can be:
 Human agent: a human agent has eyes, ears, and other organs that work as
sensors, and hands, legs, and the vocal tract that work as actuators.
 Robotic agent: a robotic agent can have cameras and infrared range finders
as sensors, and various motors as actuators.
 Software agent: a software agent can take keystrokes and file contents as
sensory input, act on those inputs, and display output on the screen.
 Before moving forward, we should first know about sensors, effectors, and actuators.

 Sensor: a sensor is a device that detects changes in the environment and sends the
information to other electronic devices. An agent observes its environment through sensors.

 Actuators: actuators are the components of a machine that convert energy into motion.
Actuators are responsible for moving and controlling a system. An actuator can be an
electric motor, gears, rails, etc.

 Effectors: effectors are the devices that affect the environment. Effectors can be legs,
wheels, arms, fingers, wings, fins, and a display screen.
 An intelligent agent is a system that perceives its environment, learns from it, and
interacts with it intelligently.
 Intelligent agents can be divided into two broad categories: software agents and
physical agents.

 Software agents
 A software agent is a set of programs designed to perform particular tasks.
 For example, a search engine is a software agent used to search the World Wide Web
and find sites that can provide information about a requested subject.
Physical agent

 A physical agent (robot) is a programmable system that can be used to perform
a variety of tasks.
 Simple robots can be used in manufacturing to do routine jobs such as
assembling, welding, or painting.
 Some organizations use mobile robots that do routine delivery jobs such as
distributing mail or correspondence to different rooms.
Cont…

 The following are the four main rules for an AI agent:

 Rule 1: An AI agent must have the ability to perceive the environment.
 Rule 2: The observations must be used to make decisions.
 Rule 3: Decisions should result in an action.
 Rule 4: The action taken by an AI agent must be a rational action.
Types of Intelligence

 Linguistic intelligence: the ability to speak, recognize, and use the mechanisms of
phonology (speech sounds), syntax (grammar), and semantics (meaning).
Examples: Narrators, Orators.
 Musical intelligence: the ability to create, communicate with, and understand
meanings made of sound, including an understanding of pitch and rhythm.
Examples: Musicians, Singers, Composers.
 Logical-mathematical intelligence: the ability to use and understand relationships
in the absence of action or objects, and to understand complex and abstract ideas.
Examples: Mathematicians, Scientists.
 Spatial intelligence: the ability to perceive visual or spatial information and
change it. Examples: Map readers, Astronauts, Physicists.
 Bodily-kinesthetic intelligence: the ability to use all or part of the body to solve
problems or fashion products, with control over fine and coarse motor skills and
the manipulation of objects. Examples: Players, Dancers.
 Intra-personal intelligence: the ability to distinguish among one’s own feelings,
intentions, and motivations. Example: Gautama Buddha.
 Interpersonal intelligence: the ability to recognize and make distinctions among
other people’s feelings, beliefs, and intentions. Examples: Mass Communicators,
Interviewers.
Agents and Environments

 An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators/effectors.

Figure 1: Agents interact with environments through sensors and effectors.
Terminologies

 Sensor: something that detects events and changes in the environment.
 Actuator / Effector: the movable parts of an agent.
 Perception: the way in which something is understood and interpreted.
 Percept: the agent’s perceptual input at a given instant.
 Percept sequence: the complete history of everything the agent has
perceived.
 Agent function: maps any given percept sequence to an action

[f: P* → A]
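As an illustration (not part of the slides), the mapping f: P* → A can be sketched in Python; the toy vacuum world, its percepts, and its actions below are illustrative assumptions:

```python
# Sketch of an agent function f: P* -> A for a toy vacuum world.
# The world, percepts, and actions here are illustrative assumptions.

def agent_function(percept_sequence):
    """Map the full percept sequence to an action.

    Each percept is a (location, status) pair, e.g. ("A", "Dirty").
    Only the latest percept matters for this toy agent, but the
    function still receives the whole history, matching f: P* -> A.
    """
    location, status = percept_sequence[-1]
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

action = agent_function([("A", "Dirty")])   # -> "Suck"
```

The function is total over percept sequences: for every history it returns exactly one action, which is what the notation f: P* → A expresses.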
Terminologies
 Behavior of an agent: the action that the agent
performs after any given sequence of percepts.
 The agent program runs on the physical architecture to
produce f.
Agent = architecture + program
Acting of Intelligent Agents (Rationality)

 An agent should strive to "do the right thing", based on what it can
perceive and the actions it can perform.
 The right action is the one that will cause the agent to be most
successful.
 Performance measure: an objective criterion for the success of an agent's
behavior.
Con’t…

 E.g. the performance measure of a vacuum-cleaner agent could be the amount of
dirt cleaned up, the amount of time taken, the amount of electricity consumed,
the amount of noise generated, etc.
 Rational agent: for each possible percept sequence, a rational
agent should select an action that is expected to maximize its
performance measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has.
Con’t…

 Agents can perform actions in order to modify future percepts so as to obtain
useful information (information gathering, exploration).
 An agent is autonomous if its behavior is determined by its own experience
(with the ability to learn and adapt).
 In summary, what is rational at any given point depends on the PEAS
(Performance measure, Environment, Actuators, Sensors) framework.
Con’t…

 Performance measure: defines the degree of success of the agent.
 Environment: the surroundings of the agent at every instant. It
keeps changing with time if the agent is set in motion.
 Knowledge: what the agent already knows about the environment.
 Actuators: generate actions; the actions that the agent can perform
on the environment.
 Sensors: receive percepts; perception is everything that the agent
has perceived so far.
Example 1: PEAS

 Consider the task of designing an automated taxi driver agent:

 Performance measure: safe, fast, legal, comfortable trip, maximize profits.

 Environment: roads, other traffic, pedestrians, customers.

 Actuators: steering wheel, accelerator, brake, signal, horn, speaker/display.

 Sensors: cameras, GPS, engine sensors, microphone.

 Goal: driving safely from the source to the destination point.

Example 2: PEAS (Medical Diagnosis System)

 Performance measure: healthy patient, minimized costs, no lawsuits.
 Environment: patient, hospital, staff.
 Actuators: display of questions, tests, diagnoses, treatments, referrals.
 Sensors: keyboard entry of symptoms, findings, patient’s answers.
Properties of Task Environments
 Fully observable (vs. partially observable):
 When an agent’s sensors can sense or access the
complete state of the environment at each point in time, the
environment is said to be fully observable; otherwise it is
partially observable.
 A fully observable environment is easy to deal with, as
there is no need to keep track of the history of the
surroundings.
 An environment is called unobservable when the agent
has no sensors at all.

Example:
 Chess: the board is fully observable, and so are the
opponent’s moves.
 Driving: the environment is partially observable.
Properties of Task Environments
 Deterministic (vs. stochastic):
 When the agent’s current state and action completely
determine the next state of the environment, the
environment is said to be deterministic.
 A stochastic environment is random in nature: the next
state is not unique and cannot be completely
determined by the agent.
Example:
 Chess: there are only a limited number of possible
moves for a piece in the current state, and
these moves can be determined.
 Self-driving cars: the actions of a self-driving
car are not unique; they vary from time to time.
Properties of Task Environments
 Episodic (vs. sequential):
 Episodic is an environment where each state is
independent of the others.
 The action in one state has nothing to do with the next state.
 Example: a support bot (agent) answers one question and
then answers another question, and so on. Each
question-answer is a single episode.
 A sequential environment is an environment where the
next state depends on the current action.
 So the agent’s current action can change all of the future states
of the environment.
Properties of Task Environments
 Static (vs. dynamic):
 If the environment can change itself while an agent is
deliberating, then the environment is called dynamic;
otherwise it is called static.
 Static environments are easy to deal with because an
agent does not need to keep looking at the world
while deciding on an action.
 However, in a dynamic environment, agents need to
keep looking at the world at each action.
 Taxi driving is an example of a dynamic environment,
whereas crossword puzzles are an example of a static environment.
Properties of Task Environments
Static (vs. dynamic), continued
 A static environment is completely unchanged while
an agent is perceiving the environment.
 Example: cleaning a room (environment) with a cleaning
robot (agent) is an example of a static environment,
where the room stays static while being cleaned.
 A dynamic environment can change while an agent
is perceiving it, so agents must keep looking at
the environment while taking actions.
 Example: playing soccer is a dynamic environment where
players’ positions keep changing throughout the game.
Properties of Task Environments
 Discrete (vs. continuous):
 A discrete environment consists of a finite number of
states, and agents have a finite number of actions.
 Example: the choices of a move (action) in a tic-tac-toe game
are finite, over a finite number of boxes on the board
(environment).
 In a continuous environment, the environment
can have an infinite number of states, so the possible
actions are also infinite.
 Example: in a basketball game, the positions of the players
(environment) keep changing continuously.
Properties of Task Environments
 Single agent (vs. multiagent):
 A single-agent environment is an environment that is
explored by a single agent; all
actions are performed by one agent in the
environment.
 Example: playing tennis against the ball is a
single-agent environment where there is only one
player.
 If two or more agents are taking actions in the
environment, it is known as a multi-agent environment.
Basic “skeleton” agent designs

Agent = Architecture + Program
 The job of AI is to design the agent program that implements the agent
function mapping percepts to actions.

 Aim: find a way to implement the rational agent function concisely.

 All agent programs share the same skeleton: take the current percept as input from the
sensors and return an action to the actuators.
Basic “skeleton” agent designs (Cont’d)
Agent Program vs. Agent Function
 The agent program takes the current percept as input
 Nothing more is available from the environment

 The agent function takes the entire percept history
 To do this, the agent must remember all the percepts
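This distinction can be sketched with a table-driven agent program: the program receives only the current percept, remembers the history itself, and looks the full sequence up in a table. The table entries below are toy assumptions:

```python
# Sketch of a table-driven agent program. The program gets only the
# current percept; it appends it to a remembered history and looks the
# whole sequence up in a table, so it realizes an agent FUNCTION over
# percept sequences. The table entries are illustrative assumptions.

class TableDrivenAgent:
    def __init__(self, table):
        self.table = table      # maps percept-sequence tuples to actions
        self.percepts = []      # remembered percept history

    def program(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")

table = {
    ("dark",): "turn-on-light",
    ("dark", "bright"): "do-nothing",
}
agent = TableDrivenAgent(table)
first = agent.program("dark")       # acts on the one-percept history
second = agent.program("bright")    # acts on the two-percept history
```

The table grows exponentially with the length of the percept sequence, which is why the agent designs that follow replace the table with more compact programs.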
Types of agent programs

 Agents can mainly be grouped into five classes based on their
degree of perceived intelligence and capability:

1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agents
1. Simple Reflex Agents

 Simple reflex agents are very simple.
 They ignore all percept history and act only on the current percept.
 Perception: the way in which something is understood or interpreted.
 They are based on condition-action rules (if - then),
i.e. if the condition is true, the action is taken; otherwise it is not.
 They work only in a fully observable environment.
Structure of a simple reflex agent
(current situation)
Example:

Rule:
 IF it is dark THEN turn on the light.
 Assume that the environment is your room, and your room is dark.

 Problems of simple reflex agents:

 Very limited intelligence.
 No knowledge about the non-perceptual parts of the state.
 If there is any change in the environment, the whole rule set must be updated.
 When operating in a partially observable environment, infinite loops are often
unavoidable.
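The dark-room rule above can be sketched as a minimal condition-action agent; the percept and action strings are illustrative assumptions:

```python
# Minimal simple reflex agent for the dark-room rule on this slide.
# It ignores percept history and fires a condition-action (if-then)
# rule on the current percept alone.

def simple_reflex_agent(percept):
    # Rule: IF it is dark THEN turn on the light.
    if percept == "dark":
        return "turn-on-light"
    return "do-nothing"
```

Note the agent keeps no state at all: calling it twice with the same percept always yields the same action, which is exactly why it struggles in partially observable environments.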
2. Model-Based Reflex Agent
 It works by finding a rule whose condition matches the
current situation/state.
 It can handle partially observable environments by keeping an
internal model of the world.
 E.g. driving a car and changing lanes.

 Updating the state requires information about:
 how the world evolves independently of the agent, and
 how the agent’s actions affect the world.
Structure of Model-Based reflex agent
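A minimal sketch of this idea, assuming a toy lane-change scenario (the state variables and the rule are illustrative, not from the slides):

```python
# Sketch of a model-based reflex agent: it keeps an internal model of
# the world state, updates it from each percept, and matches rules
# against the modelled state rather than the raw percept. The lane-
# change scenario and its single rule are illustrative assumptions.

class ModelBasedReflexAgent:
    def __init__(self):
        # Internal model: covers parts of the world not directly sensed.
        self.state = {"lane": "right", "car_ahead": False}

    def update_state(self, percept):
        # Fold the new percept into the model of the world.
        self.state.update(percept)

    def rule_match(self):
        # Condition-action rules evaluated over the modelled state.
        if self.state["car_ahead"] and self.state["lane"] == "right":
            return "change-to-left-lane"
        return "keep-lane"

    def program(self, percept):
        self.update_state(percept)
        return self.rule_match()

agent = ModelBasedReflexAgent()
action = agent.program({"car_ahead": True})   # -> "change-to-left-lane"
```

Because the rule reads the stored state rather than the raw percept, the agent can act sensibly even when a percept omits some state variables.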
3. Goal based agents
 The goal-based agent focuses only on reaching its goal, and hence the decisions made
by the agent are based on how far it currently is from its goal or desired state.
 Its every action is intended to reduce its distance from the goal.
 This agent is more flexible and exercises its decision-making skill by choosing the
right option from the various options available.
 Example: at a road junction, the taxi can turn left, turn right, or go straight on. The
correct decision depends on where the taxi is trying to get to.
 Achieving the goal becomes the criterion for judging the
rationality / correctness of actions.
Structure of a Goal-based agent
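A minimal sketch of the junction example, assuming grid coordinates and Manhattan distance as the measure of "distance from the goal" (both illustrative assumptions):

```python
# Sketch of a goal-based agent at a road junction: it evaluates each
# available action by how close the resulting position is to the goal
# and picks the one that minimizes that distance. Coordinates, moves,
# and the Manhattan-distance metric are illustrative assumptions.

def goal_based_agent(position, goal, moves):
    """Pick the move whose resulting position is nearest the goal.

    position, goal: (x, y) tuples; moves: dict name -> (dx, dy).
    """
    def distance_after(move):
        dx, dy = moves[move]
        nx, ny = position[0] + dx, position[1] + dy
        return abs(goal[0] - nx) + abs(goal[1] - ny)   # Manhattan distance

    return min(moves, key=distance_after)

moves = {"left": (-1, 0), "right": (1, 0), "straight": (0, 1)}
choice = goal_based_agent((0, 0), (3, 0), moves)   # -> "right"
```

Changing only the `goal` argument changes the behavior, which is the flexibility the slide describes: the goal knowledge is explicit and modifiable rather than baked into the rules.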
Con’t…

 Conclusion
 Goal-based agents are less efficient but more flexible, because
the knowledge that supports their decisions is represented explicitly
and can be modified.
 Search and planning are two sub-fields of AI for finding the
action sequences that achieve the agent’s goal.
4. Utility based agents
 These agents are similar to the goal-based agent but add an extra
component of utility measurement, which makes them different by providing a
measure of success in a given state.
 A utility-based agent considers not only the goal but also the best way to achieve it.
 The utility-based agent is useful when there are multiple possible alternatives and
the agent has to choose the best action to perform.

Example: many action sequences will get the taxi to its destination (thereby achieving
the goal), but some are quicker, safer, more reliable, or cheaper than others.
Structure of a utility-based agent
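A minimal sketch of the taxi example, assuming hypothetical routes and an illustrative utility function weighting speed, safety, and cost:

```python
# Sketch of a utility-based agent: several routes all reach the goal,
# but the agent scores each one with a utility function and picks the
# best. The routes, their attributes, and the weights are illustrative
# assumptions.

def utility(route):
    # Higher is better: reward speed and safety, penalize cost.
    return 2.0 * route["speed"] + 3.0 * route["safety"] - 1.0 * route["cost"]

def utility_based_agent(routes):
    # Choose the route name with the highest utility.
    return max(routes, key=lambda name: utility(routes[name]))

routes = {
    "highway":   {"speed": 9, "safety": 6, "cost": 5},
    "back_road": {"speed": 5, "safety": 8, "cost": 2},
}
best = utility_based_agent(routes)   # -> "back_road"
```

A pure goal-based agent would accept either route, since both reach the destination; the utility function is what lets the agent rank them.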
5. Learning Agents

 A learning agent can learn from its past experience; it has learning capability.

 A learning agent has four main conceptual components.
These are:
 Learning element: responsible for making improvements by learning from the
environment.
 Critic: the learning element takes feedback from the critic, which describes how
well the agent is doing with respect to a fixed performance standard.
 Performance element: responsible for selecting external actions.
 Problem generator: responsible for suggesting actions that will lead to new
and informative experiences.
Cont.

 Hence, learning agents are able to learn, analyze their performance, and look for
new ways to improve that performance.
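The four components named above can be wired together in a sketch; the toy two-action task, the reward values, and the update rule are illustrative assumptions:

```python
# Sketch wiring together the four learning-agent components:
# performance element (selects external actions), critic (feedback
# against a performance standard), learning element (improves the
# action estimates), and problem generator (suggests exploratory
# actions). The toy task, rewards, and update rule are illustrative
# assumptions.
import itertools

class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}     # learned action values
        self._explore = itertools.cycle(actions)    # deterministic exploration

    def performance_element(self):
        # Select the external action currently believed to be best.
        return max(self.values, key=self.values.get)

    def critic(self, action, reward):
        # Feedback: observed reward versus the current estimate.
        return reward - self.values[action]

    def learning_element(self, action, feedback):
        # Improve the estimate for the action using the critic's feedback.
        self.values[action] += 0.5 * feedback

    def problem_generator(self):
        # Suggest an action that yields new, informative experience.
        return next(self._explore)

    def step(self, reward_fn, explore=False):
        action = self.problem_generator() if explore else self.performance_element()
        self.learning_element(action, self.critic(action, reward_fn(action)))
        return action

# Explore for a while on a task where "right" pays off, then exploit.
agent = LearningAgent(["left", "right"])
for _ in range(20):
    agent.step(lambda a: 1.0 if a == "right" else 0.0, explore=True)
best = agent.performance_element()   # -> "right"
```

After the exploration phase the performance element reliably picks the rewarded action, illustrating how the critic and learning element gradually improve what the agent does.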
THANK YOU
Q&A
?
