Chapter 2: Intelligent Agents
Outline
• Introduction to agents
• Agent and environment
• Structure of agents
• Intelligent and rational agents
• Types of intelligent agents
Introduction to Agents
• An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that environment
through effectors.
• A human agent has eyes, ears, and other organs for sensors, and
hands, legs, mouth, and other body parts for effectors.
• A robotic agent substitutes cameras and infrared range finders
for the sensors and various motors for the effectors.
Agent and Environment
Agents
• Operate in an environment.
• Perceive their environment through sensors.
• Act upon their environment through actuators/effectors.
• Have goals.
Sensors & Effectors
Con…
• An agent can change the environment through its effectors.
• An operation involving an actuator is called an action.
• Actions can be grouped into action sequences.
• So an agent function implements a mapping from percept
sequences to actions (see the sketch after this list).
• The performance measure is the criterion that determines
how successful an agent is.
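For concreteness, the mapping from percept sequences to actions can be written as a small table-driven agent function. This is only an illustrative sketch; the percept names, actions, and table entries below are hypothetical and not part of the slides:

```python
# Sketch of an agent function: a mapping from percept sequences to actions.
# The percepts and actions here are hypothetical examples.

def make_table_driven_agent(table):
    percepts = []                              # percept history seen so far

    def agent_function(percept):
        percepts.append(percept)
        # the action is chosen from the whole percept sequence,
        # not just the most recent percept
        return table.get(tuple(percepts), "no-op")

    return agent_function

agent = make_table_driven_agent({
    ("dirty",): "clean",
    ("dirty", "clean"): "move",
})
print(agent("dirty"))   # -> clean
print(agent("clean"))   # -> move   (sequence is now ("dirty", "clean"))
```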
Task environment
• To design a rational agent, we need to specify a task environment
– a problem specification for which the agent is a solution
• Example: the task environment of an automated taxi (collected into a record in
the sketch after this list)
Performance measure:
– safe, fast, legal, comfortable, maximize profits
Environment:
– roads, other traffic, pedestrians, customers
Actuators:
– steering, accelerator, brake, signal, horn (for sounding a warning)
Sensors:
– cameras, sonar, speedometer, GPS
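A task environment like this is often written down as a PEAS description (Performance measure, Environment, Actuators, Sensors). The sketch below is just an illustrative container for the taxi specification listed on this slide:

```python
from dataclasses import dataclass

# A task environment written down as a PEAS record (automated taxi example).
@dataclass
class TaskEnvironment:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi = TaskEnvironment(
    performance_measure=["safe", "fast", "legal", "comfortable", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS"],
)
print(taxi.sensors)
```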
Examples of agent environments
Rational Agent
• AI is about building rational agents.
• A rational agent always does the right thing: for each percept sequence, it
selects an action expected to maximize its performance measure, given the
evidence from its percepts and its built-in knowledge.
Structure of Agents
• An agent’s structure can be viewed as:
Agent = Architecture + Agent Program
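This equation can be read directly as code: the architecture is the machinery with sensors and actuators, and the agent program maps each percept to an action. A minimal sketch, with all names below being hypothetical placeholders:

```python
# Architecture: the physical/computing machinery with sensors and actuators.
class Architecture:
    def sense(self):
        return "current-percept"            # placeholder percept

    def act(self, action):
        print("executing:", action)

# Agent program: maps a percept to an action.
def agent_program(percept):
    return "some-action"                    # placeholder decision logic

# Agent = Architecture + Agent Program: the architecture runs the program.
def run_agent(architecture, program, steps=3):
    for _ in range(steps):
        percept = architecture.sense()
        architecture.act(program(percept))

run_agent(Architecture(), agent_program)
```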
Intelligent Agents
An intelligent agent is an autonomous entity that acts upon
an environment using sensors and actuators to achieve
goals. An intelligent agent may learn from the environment to
achieve its goals.
An intelligent agent:
• must sense,
• must act,
• must be autonomous (to some extent),
• must be rational.
Properties of Environments
Fully observable vs. partially observable: if the agent's sensory apparatus
gives it access to the complete state of the environment, we say the
environment is fully observable to the agent. Ex.: a chess game.
Partially observable: the relevant features of the environment are only partially
observable. Ex.: a self-driving car.
Static vs. dynamic: if the environment can change while the agent is choosing
an action, the environment is dynamic; otherwise it is static.
Episodic vs. sequential: in an episodic environment, the next episode does not depend
on the actions taken in previous episodes; in a sequential environment, the agent
operates in a series of connected episodes, so current decisions can affect future ones.
Con…
Deterministic vs. stochastic: if the agent's current state and selected action
completely determine the next state of the environment, the environment is
called deterministic. A stochastic environment is random in nature and cannot
be completely determined by the agent.
Discrete vs. continuous: if the environment offers only a finite number of distinct
percepts and actions, it is called a discrete environment; otherwise it is a
continuous environment.
Single-agent vs. multi-agent: if only one agent is involved in an environment, it is a
single-agent environment; if other agents are also acting in it, it is multi-agent.
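As an illustration, the two running examples, chess and taxi driving, can be classified along the dimensions above. The values below follow the usual textbook treatment and are shown only to make the definitions concrete:

```python
# Illustrative classification of two example environments along the
# dimensions defined above (the usual textbook classification).
environment_properties = {
    "chess": {
        "observable": "fully", "deterministic": True, "episodic": False,
        "static": True, "discrete": True, "agents": "multi",
    },
    "taxi driving": {
        "observable": "partially", "deterministic": False, "episodic": False,
        "static": False, "discrete": False, "agents": "multi",
    },
}
```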
Types of Intelligent Agents
Intelligent agents are grouped into five classes based on their
degree of perceived intelligence and capability.
– Simple reflex agents
– Model based reflex agents
– Goal based agents
– Utility based agents
– Learning agents
Simple reflex agents
• Simple reflex agents act only on the basis of the current percept.
• No notion of percept history.
• The agent function is based on the condition-action rule: if condition then action
(see the sketch below).
• Succeeds when the environment is fully observable.
• E.g. email filtering, a security surveillance camera.
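A simple reflex agent is just a set of condition-action rules applied to the current percept. The rules below are hypothetical (in the spirit of the classic vacuum-cleaner world, which is not in the slides) and meant only to show the structure:

```python
# Simple reflex agent: chooses an action from the current percept only,
# using condition-action rules; it keeps no percept history.
def simple_reflex_agent(percept):
    location, status = percept            # e.g. ("A", "dirty")
    if status == "dirty":                 # if condition then action
        return "suck"
    if location == "A":
        return "move-right"
    return "move-left"

print(simple_reflex_agent(("A", "dirty")))   # -> suck
print(simple_reflex_agent(("B", "clean")))   # -> move-left
```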
Figure: Simple reflex agents
Model-based reflex agents
• A model-based agent can handle a partially observable environment.
• It needs memory for storing the percept history; it uses this history to help
reveal the currently unobservable aspects of the environment (its internal state).
• The agent combines the current percept with the internal state to generate an
updated description of the current state (see the sketch below).
• Updating the state requires information about:
– how the world evolves independently of the agent, and
– how the agent's actions affect the world.
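The update described above (old internal state + last action + new percept → new state, then a rule lookup) can be sketched as follows; the transition model and rules are hypothetical placeholders supplied by the caller:

```python
# Model-based reflex agent: maintains an internal state that summarizes
# the percept history, then applies condition-action rules to that state.
class ModelBasedReflexAgent:
    def __init__(self, transition_model, rules):
        self.state = {}                    # internal description of the world
        self.model = transition_model      # how the world evolves / how actions affect it
        self.rules = rules                 # list of (condition, action) pairs
        self.last_action = None

    def __call__(self, percept):
        # combine the old state, the last action and the new percept
        self.state = self.model(self.state, self.last_action, percept)
        # pick the first rule whose condition matches the updated state
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "no-op"
        return self.last_action
```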
Figure: Model-based reflex agents
Goal-based agents
• Goal-based agents further expand on the capabilities of
model-based agents by using "goal" information.
• Goal information describes situations that are desirable.
This gives the agent a way to choose among multiple
possibilities, selecting the one that reaches a goal state.
• Search and planning are the subfields of artificial
intelligence devoted to finding action sequences that
achieve the agent's goals (a minimal search sketch follows).
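Because goals only label states as desirable or not, a goal-based agent typically searches for an action sequence leading to a goal state. Below is a minimal breadth-first-search sketch; the toy state space and goal test are hypothetical:

```python
from collections import deque

# Goal-based agent core: search for an action sequence that reaches a goal state.
def plan(start, goal_test, successors):
    """successors(state) yields (action, next_state) pairs."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions                         # action sequence achieving the goal
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None                                    # no plan exists

# Toy example: reach position 3 on a line, starting from 0.
print(plan(0, lambda s: s == 3,
           lambda s: [("right", s + 1), ("left", s - 1)] if abs(s) <= 5 else []))
# -> ['right', 'right', 'right']
```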
Figure: Goal-based agents
Utility-based agents
• Goal-based agents only distinguish between goal states and non-goal states.
• It is possible to define a measure of how desirable a particular state is. This
measure can be obtained through the use of a utility function, which maps a
state to a measure of the utility of that state (see the sketch below).
• A more general performance measure should allow a comparison of different
world states according to exactly how happy they would make the agent. The
term utility can be used to describe how "happy" the agent is.
Figure: Utility-based agents
Learning agents
• Learning has the advantage that it allows an agent to operate initially in
unknown environments and to become more competent than its initial
knowledge alone might allow.
Figure: Learning agents
Con…
• E.g. an automated taxi: using the performance element, the taxi goes
out on the road and drives. The critic observes the shocking
language used by other drivers. From this experience, the
learning element is able to formulate a rule saying this was a
bad action, and the performance element is modified by
installing the new rule. The problem generator might identify
certain areas in need of improvement, such as trying out the
brakes on different roads under different conditions (see the sketch below).
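The four components in the taxi story map onto a simple loop: the performance element picks the action, the critic scores the outcome, the learning element revises the rules, and the problem generator occasionally proposes exploratory actions. The sketch below is purely illustrative; all names and the feedback scheme are hypothetical:

```python
import random

# Learning agent: performance element + critic + learning element + problem generator.
class LearningAgent:
    def __init__(self):
        self.rules = {}                                  # knowledge used by the performance element

    def performance_element(self, percept):
        return self.rules.get(percept, "default-action")

    def critic(self, percept, action):
        # hypothetical feedback standard: -1 marks a bad action, +1 a good one
        return -1 if action == "honk-at-everyone" else +1

    def learning_element(self, percept, action, feedback):
        if feedback < 0:
            self.rules[percept] = "avoid-" + action      # install a new, better rule

    def problem_generator(self, percept):
        # occasionally suggest an exploratory action (e.g. try the brakes) to learn from
        return "try-brakes" if random.random() < 0.1 else None

    def step(self, percept):
        action = self.problem_generator(percept) or self.performance_element(percept)
        feedback = self.critic(percept, action)
        self.learning_element(percept, action, feedback)
        return action
```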