Chapter 2
Intelligent Agents
Outline
➢ Introduction to agents
➢ Agents and environments
➢ Structure of agents
➢ Intelligent and rational agents
➢ Types of intelligent agents
11/18/2022 2
2.1. Introduction to Agents
Agents
• Operate in an environment.
• Perceive their environment through sensors.
• Act upon their environment through actuators (effectors).
• Have goals.
Sensors & Effectors
PEAS: Specifying an automated taxi driver
Performance measure:
– safe, fast, legal, comfortable, maximize profits
Environment:
– roads, other traffic, pedestrians, customers
Actuators:
– steering, accelerator, brake, signal, horn (for sounding a warning)
Sensors:
– cameras, sonar, speedometer, GPS
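The PEAS description above can be captured as a simple data structure. The sketch below is illustrative only; the class and field names are my own choice, not part of any standard library.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A PEAS task-environment description:
    Performance measure, Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

# The automated taxi driver from the slide, expressed as a PEAS record.
taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS"],
)
```

Writing the four lists out explicitly like this is a useful first step when designing any agent, before choosing an agent program.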
Example of agent environment
2.3. Structure of Agents
2.4. Intelligent and Rational Agent
Rational Agent
• AI is about building rational agents.
• A rational agent always does the right thing, i.e., it selects the action that is expected to maximize its performance measure, given what it has perceived so far.
Intelligent Agents
❖ An intelligent agent is an autonomous entity that acts
upon its environment using sensors and actuators to
achieve its goals. An intelligent agent may learn from the
environment in order to achieve those goals.
❖ An intelligent agent:
• must sense,
• must act,
• must be autonomous (to some extent), and
• must be rational.
Properties of Environments
1. Fully observable versus partially observable:
Fully observable
• If the agent's sensory apparatus gives it access to the complete state of the
environment, then we say that the environment is fully observable to the agent.
Ex: a chess game.
Partially observable
• Only some of the relevant features of the environment are observable. Ex: a
self-driving car.
Properties of Environments
2. Static vs. dynamic:
• If the environment can change while the agent is choosing an action, the
environment is dynamic; otherwise, it is static.
• Static environments are easy to deal with because the agent need not keep
looking at the world while it is deciding on an action, nor need it worry about
the passage of time.
• Dynamic environments, on the other hand, are continuously asking the agent
what it wants to do; if it hasn’t decided yet, that counts as deciding to do
nothing.
Properties of Environments
3. Episodic vs. sequential:
❑ Episodic:
• In an episodic task environment, the agent’s experience is divided into atomic
episodes.
• In each episode the agent receives a percept and then performs a single action.
Crucially, the next episode does not depend on the actions taken in previous
episodes.
❑ Sequential:
• In sequential environments, on the other hand, the current decision could affect
all future decisions. Chess and taxi driving are sequential: in both cases, short-
term actions can have long-term consequences.
❑ Deterministic vs. stochastic:
• If the next state of the environment is completely determined by the current state
and the action selected by the agent, then the environment is deterministic.
• A stochastic environment is random in nature and cannot be completely
determined by the agent.
❑ Discrete vs. continuous:
• If there are a finite number of distinct percepts and actions that can be performed
in an environment, then it is a discrete environment; otherwise it is a continuous
environment.
2.5. Types of Intelligent Agents
❖ Intelligent agents are grouped into five classes based on their
degree of perceived intelligence and capability.
1. Simple reflex agents
2. Model based reflex agents
3. Goal based agents
4. Utility based agents
5. Learning agents
1. Simple reflex Agents
❖ Simple reflex agents act only on the basis of the current
percept
❖ No notion of percept history
❖ The agent function is based on the
condition-action rule: if condition then action.
❖ Succeeds only when the environment is fully observable.
✓ E.g., email filtering, a security surveillance camera
Figure: Simple reflex Agents
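The condition-action rules described above can be sketched in a few lines. A minimal example, using the classic two-square vacuum world (the locations, percepts, and action names here are illustrative, not from the slide):

```python
def simple_reflex_agent(percept):
    """Act only on the current percept via condition-action rules.
    The percept is a (location, status) pair; no history is kept."""
    location, status = percept
    if status == "Dirty":       # rule: if dirty then suck
        return "Suck"
    elif location == "A":       # rule: if clean and at A then move right
        return "Right"
    else:                       # rule: if clean and at B then move left
        return "Left"

# Each call depends only on the percept passed in, never on past percepts.
action = simple_reflex_agent(("A", "Dirty"))   # → "Suck"
```

Because the agent keeps no state, it can only work well when the current percept alone reveals everything it needs, i.e., in a fully observable environment.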
2. Model based reflex agents
• A model-based agent can handle a partially observable environment.
• It needs memory for storing the percept history, which it uses to
help reveal the currently unobservable aspects of the environment
(its internal state).
• The agent combines the current percept with the internal state to generate an
updated description of the current state.
• Updating the state requires information about:
– how the world evolves independently of the agent, and
– how the agent's actions affect the world.
Figure: Model based reflex agents
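A minimal sketch of the idea, again using the two-square vacuum world as an assumed example (the world model here is a simple dictionary; a real agent would use a richer state representation):

```python
class ModelBasedReflexAgent:
    """Reflex agent that keeps an internal model of a partially
    observable two-square world (illustrative example)."""

    def __init__(self):
        # Internal state: last known status of each square.
        self.model = {"A": "Unknown", "B": "Unknown"}

    def act(self, percept):
        location, status = percept
        # Combine the current percept with the internal state.
        self.model[location] = status
        if status == "Dirty":
            # Model how our own action affects the world.
            self.model[location] = "Clean"
            return "Suck"
        # Use remembered state: visit the square not known to be clean.
        other = "B" if location == "A" else "A"
        if self.model[other] != "Clean":
            return "Right" if other == "B" else "Left"
        return "NoOp"
```

Unlike the simple reflex agent, two calls with the same percept can yield different actions, because the stored model fills in what the current percept does not show.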
3. Goal based agents
• Goal-based agents further expand on the capabilities of
the model-based agents, by using "goal" information.
• Goal information describes situations that are desirable.
• This gives the agent a way to choose among multiple
possibilities, selecting the one that reaches a goal
state.
• Search and planning are the subfields of artificial
intelligence devoted to finding action sequences that
achieve the agent's goals.
Figure: Goal based agents
4. Utility based agents
• Goal-based agents only distinguish between goal states and non-goal
states.
• It is possible to define a measure of how desirable a particular state is.
• This measure can be obtained through the use of a utility function which
maps a state to a measure of the utility of the state.
• A more general performance measure should allow a comparison of
different world states according to exactly how happy they would make
the agent.
• The term utility can be used to describe how "happy" the agent is.
Figure: Utility based agents
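The idea of comparing states by a utility function can be sketched as follows; the route data and the particular utility formula are invented purely for illustration:

```python
def utility_based_choice(state, actions, result, utility):
    """Choose the action whose resulting state has the highest utility."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy example: a taxi choosing between two routes to the same destination.
routes = {
    "highway":  {"time": 10, "profit": 8},
    "backroad": {"time": 25, "profit": 8},
}

def result(state, action):
    return routes[action]

def utility(s):
    # Assumed trade-off: profit minus a penalty for travel time.
    return s["profit"] - 0.2 * s["time"]

best = utility_based_choice(None, ["highway", "backroad"], result, utility)
# best == "highway": 8 - 0.2*10 = 6.0 beats 8 - 0.2*25 = 3.0
```

Note that both routes reach the goal, so a purely goal-based agent could not distinguish them; the utility function is what lets the agent prefer the faster one.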
5. Learning agents
• Learning has the advantage that it allows an agent to operate in initially
unknown environments and to become more competent than its initial
knowledge alone might allow.
Figure: Learning agents
• E.g., an automated taxi: using the performance element, the taxi goes out on
the road and drives. The critic observes the shocking language used
by other drivers.
• From this experience, the learning element is able to formulate a
rule saying this was a bad action, and the performance element is
modified by installing the new rule.
• The problem generator might identify certain areas in need of
improvement, such as trying out the brakes on different roads under
different conditions.
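The interplay of the four components in the taxi example can be sketched as a skeleton class. All names, rules, and feedback values below are illustrative assumptions, not a standard interface:

```python
class LearningAgent:
    """Skeleton of a learning agent's four components (illustrative)."""

    def __init__(self):
        # Condition-action rules used by the performance element.
        self.rules = {}

    def performance_element(self, percept):
        # Select an action; fall back to a default when no learned rule applies.
        return self.rules.get(percept, "default-action")

    def critic(self, feedback):
        # Judge the outcome against a fixed performance standard
        # (here: non-negative feedback counts as acceptable).
        return feedback >= 0

    def learning_element(self, percept, action, good):
        # Formulate and install a new rule when the critic flags a bad action.
        if not good:
            self.rules[percept] = "avoid-" + action

    def problem_generator(self):
        # Suggest exploratory actions that may lead to better rules,
        # e.g. trying the brakes under different road conditions.
        return "try-brakes-on-wet-road"

# One learning cycle: act, get criticized, install a better rule.
agent = LearningAgent()
action = agent.performance_element("wet-road")
good = agent.critic(feedback=-1)          # other drivers' "shocking language"
agent.learning_element("wet-road", action, good)
```

After this cycle, the performance element behaves differently on the same percept, which is the essence of the feedback loop described above.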