
Intelligent Agents

The document discusses different types of intelligent agents and their environments. It describes agents as entities that perceive their environment through sensors and act upon it through effectors. Environments can be fully or partially observable. The document outlines different types of agent programs including simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents. It also discusses properties of task environments such as deterministic vs stochastic, episodic vs sequential, static vs dynamic, discrete vs continuous, known vs unknown. Learning agents are described as improving their actions based on feedback from their environments.

Uploaded by

Malik Awan

INTELLIGENT AGENTS

Ayesha Irfan
Department of Information Technology
University Of The Punjab, Jhelum Campus
AGENTS AND ENVIRONMENTS
 Agent: “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.”
 E.g. a human agent has eyes, ears, and other organs for sensors, and hands, legs, vocal tract, and so on for actuators, while a robotic agent might have cameras and infrared range finders for sensors and various motors for actuators.
AGENTS AND ENVIRONMENTS

 Agents interact with environments through sensors and actuators.


AGENTS AND ENVIRONMENTS
 Percept: The agent’s perceptual inputs at any given instant.
 Percept sequence: The complete history of everything the agent has ever perceived.
 Agent function: The mapping that specifies the agent’s choice of action for every possible percept sequence. An agent’s action at any given instant can depend on the entire percept sequence observed to date, but not on anything it hasn’t perceived. The agent function is an abstract mathematical description.
 Agent program: The agent function for an artificial agent is implemented by an agent program. The agent program is a concrete implementation.
AGENTS AND ENVIRONMENTS

A vacuum-cleaner world with just two locations.


AGENTS AND ENVIRONMENTS

Partial tabulation of a simple agent function for the vacuum-cleaner world
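The distinction between the abstract agent function and the concrete agent program can be sketched in Python. This is a minimal, hypothetical sketch of the two-location vacuum world: the location names and the few table rows shown are assumptions, standing in for the (in principle unbounded) full tabulation.

```python
# A partial tabulation of the vacuum-world agent function: a lookup
# table from percept sequences to actions. Percepts are hypothetical
# (location, status) pairs; only a few rows of the table are shown.
TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

def table_driven_agent(table):
    """Agent program: records the percept sequence seen so far and
    looks up the action the agent function assigns to it."""
    percepts = []
    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))
    return program

agent = table_driven_agent(TABLE)
print(agent(("A", "Dirty")))  # Suck
```

The table *is* the agent function; `program` is the agent program that realizes it. The sketch also shows why table-driven agents are impractical: the table grows with every possible percept sequence.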
THE CONCEPT OF RATIONALITY
 Rational agent: A rational agent is one that does the right thing: for each possible percept sequence, it selects an action that is expected to maximize its performance measure.
 Performance measure: The sequence of actions taken by the agent causes the environment to go through a sequence of states. If the states are desirable, then the agent has performed well. This notion of desirability is captured by a performance measure.
RATIONALITY
 Rationality depends on four things:
1) The performance measure that defines the
criterion of success.
2) The agent’s prior knowledge of the environment.
3) The actions that the agent can perform.
4) The agent’s percept sequence to date.
THE CONCEPT OF RATIONALITY
 Omniscience: An omniscient agent knows the actual outcome of its actions and can act accordingly; omniscience is impossible in reality. Rationality is not the same as perfection: rationality maximizes expected performance, while perfection would maximize actual performance.
 Information gathering: Doing actions in order to modify future percepts.
 Learning: We require our rational agent not only to gather information but also to learn as much as possible from what it perceives.
THE NATURE OF ENVIRONMENTS
 In designing an agent, the first step must always be to
specify the task environment as fully as possible.
 PEAS (Performance, Environment, Actuators,
Sensors)

 Let us consider a problem: an automated taxi driver. What would be the PEAS description of the task environment for an automated taxi?
PEAS description of the task environment for an automated taxi:
• Performance measure: safe, fast, legal, comfortable trip; maximize profits
• Environment: roads, other traffic, pedestrians, customers
• Actuators: steering, accelerator, brake, signal, horn, display
• Sensors: cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard
PROPERTIES OF TASK ENVIRONMENTS

 Fully observable: If an agent’s sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable.
 Partially observable: If, because of noisy or inaccurate sensors, or because parts of the state are simply missing from the sensor data, the agent cannot access the complete state, then the task environment is partially observable.
 Unobservable: If the agent has no sensors at all, then the environment is unobservable.
PROPERTIES OF TASK ENVIRONMENTS

 Single agent: When an agent solves a problem by itself, it is in a single-agent environment. E.g. an agent solving a crossword puzzle by itself is clearly in a single-agent environment.
 Multi agent: When an agent solves a problem alongside other agents, it is in a multi-agent environment. E.g. an agent playing chess is in a two-agent environment.

 Types of multi-agent environments:
 Competitive: If maximizing one agent’s performance measure tends to minimize the other agents’ performance measures, as in chess, the environment is competitive.
 Cooperative: If maximizing one agent’s performance measure also helps the other agents, as in taxi driving, where avoiding collisions benefits all agents, the environment is cooperative.
PROPERTIES OF TASK ENVIRONMENTS

 Deterministic: If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic.
 Stochastic: If the next state cannot be determined from the current state and the agent’s action alone, then we say the environment is stochastic.

 Episodic: In an episodic task environment, the agent’s experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes.
 Sequential: In sequential environments, on the other hand, the current decision could affect all future decisions. In a sequential environment, the agent needs to think ahead.
PROPERTIES OF TASK ENVIRONMENTS

 Static: If the environment cannot change while the agent is deliberating, it is static. Static environments are easy to deal with: the agent need not worry about the passage of time. Crossword puzzles are static.
 Dynamic: If the environment can change while the agent is deliberating, it is dynamic. Dynamic environments are continuously asking the agent what it wants to do; if it hasn’t decided yet, that counts as deciding to do nothing. Taxi driving is dynamic.
 Semi-dynamic: If the environment itself does not change with the passage of time but the agent’s performance score does, then the environment is semi-dynamic. Chess, when played with a clock, is semi-dynamic.
PROPERTIES OF TASK ENVIRONMENTS

 Discrete: If the environment has a finite number of distinct states, and time, percepts and actions are handled in discrete steps, the environment is discrete. The chess environment has a finite number of distinct states.
 Continuous: If states, time, percepts or actions range over continuous values, the environment is continuous. Taxi driving is a continuous-state and continuous-time problem.

 Known: If the agent (or its designer) knows how the environment works, i.e. the outcomes (or outcome probabilities) of all actions, the environment is known.
 Unknown: If the agent does not know how the environment works and has to learn it, the environment is unknown.
Task environments and their characteristics

Task Environment     Observable   Agents   Deterministic   Episodic     Static    Discrete
Chess with a clock   Fully        Multi    Deterministic   Sequential   Semi      Discrete
Taxi driving         Partially    Multi    Stochastic      Sequential   Dynamic   Continuous
Medical diagnosis    Partially    Single   Stochastic      Sequential   Dynamic   Continuous
AGENT PROGRAMS
The agent program implements the agent function.
There exists a variety of basic agent-program designs
reflecting the kind of information used in the decision
process. The appropriate design of the agent program
depends on the nature of the environment.
 There are four basic kinds of agent programs.

 Simple reflex agents.

 Model-based reflex agents.

 Goal-based agents.

 Utility-based agents.
Simple reflex agents
 These agents select actions on the basis of the current percept, ignoring the rest of the percept history.

Schematic diagram of a simple reflex agent.
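The idea can be sketched as a simple reflex agent for the two-location vacuum world (a minimal sketch; the condition–action rules below are assumptions drawn from the vacuum-world example, not a prescribed rule set):

```python
def simple_reflex_vacuum_agent(percept):
    """Condition-action rules applied to the current percept only;
    no percept history or internal state is kept."""
    location, status = percept  # hypothetical (location, status) percept
    if status == "Dirty":       # rule: dirty square -> clean it
        return "Suck"
    if location == "A":         # rule: clean in A -> move right
        return "Right"
    return "Left"               # rule: clean in B -> move left
```

Because the agent ignores history, it works only when the correct action is always decidable from the current percept alone, which is exactly the limitation the slide describes.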


Model-based reflex agents
 The agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.
 Updating this state requires some information about how the world evolves independently of the agent, and
 some information about how the agent’s own actions affect the world.
Model-based reflex agent
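A model-based variant for the vacuum world can be sketched as below. This is a hypothetical sketch: the internal model (believed status of each square) and the stopping rule are illustrative assumptions, showing how internal state fills in unobserved aspects of the world.

```python
class ModelBasedVacuumAgent:
    """Keeps an internal model of which squares are believed clean,
    so it can stop once it believes both squares are clean."""

    def __init__(self):
        self.model = {"A": None, "B": None}  # believed status per square

    def program(self, percept):
        location, status = percept
        self.model[location] = status        # update state from percept
        if self.model["A"] == self.model["B"] == "Clean":
            return "NoOp"                    # believed goal state reached
        if status == "Dirty":
            self.model[location] = "Clean"   # model: Suck cleans the square
            return "Suck"
        return "Right" if location == "A" else "Left"
```

Unlike the simple reflex agent, this agent can decide to do nothing once its model says the job is done, even though no single percept tells it that both squares are clean.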
Goal-based agents
 Knowing something about the current state of the
environment is not always enough to decide what
to do. The agent needs some sort of goal
information that describes situations that are
desirable.
 The goal-based agent’s behaviour can easily be
changed to go to a different destination, simply by
specifying that destination as the goal.
Goal-based agents
 A model-based, goal-based agent. It keeps track of the world state as well as a set
of goals it is trying to achieve, and chooses an action that will (eventually) lead to
the achievement of its goals.
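The "change the destination, change the behaviour" point can be sketched as a goal-based agent that searches for a route to whatever goal it is given. The map graph and place names are hypothetical; breadth-first search stands in for whatever lookahead the agent uses.

```python
from collections import deque

def goal_based_route(graph, start, goal):
    """Breadth-first search for a sequence of moves reaching `goal`;
    passing a different goal changes the behaviour with no other edits."""
    frontier = deque([(start, [])])  # (state, plan so far)
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:            # goal test against the given goal
            return plan
        for nxt in graph.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [nxt]))
    return None                      # no route to the goal
```

The agent's "program" is fixed; only the goal argument varies, which is exactly why goal-based behaviour is easy to redirect.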
Utility-based agents
 Goals alone are not enough to generate high-quality behaviour in most environments; goals only distinguish goal states from non-goal states.
 Utility-based agents use a utility function that scores how desirable each state is, and try to maximize their own expected utility, i.e. their expected “happiness.”
Learning agents
 These agents observe the environment and improve actions
depending upon the feedback.
 All agents can improve their performance through learning.
 Four components of Learning agents are:
1. Learning Element: Responsible for making improvements
2. Performance Element: Responsible for selecting external
actions.
3. Critic: Responsible for giving feedback on how the agent is doing and how the performance element should be modified.
4. Problem Generator: Responsible for suggesting actions
that will lead to new information.
A general learning agent
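The four components can be sketched in a minimal learning agent. This is a hypothetical sketch, not the general architecture: learned action values, the exploration rate, and the update rule are all illustrative assumptions.

```python
import random

class LearningAgent:
    """Minimal sketch: the performance element picks the best-valued
    action, the critic supplies a reward, the learning element nudges
    the value of the action taken, and the problem generator
    occasionally proposes an untried action."""

    def __init__(self, actions, explore=0.1):
        self.q = {a: 0.0 for a in actions}  # learned action values
        self.explore = explore
        self.last = None

    def act(self, percept):
        if random.random() < self.explore:
            # Problem generator: try something possibly suboptimal
            self.last = random.choice(list(self.q))
        else:
            # Performance element: select the best-valued action
            self.last = max(self.q, key=self.q.get)
        return self.last

    def learn(self, reward):
        # Critic's feedback drives the learning element's update
        self.q[self.last] += 0.5 * (reward - self.q[self.last])
```

With `explore=0` the agent never experiments and can get stuck on its current best guess, which is why the problem generator is listed as a separate component.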
Assignment # 1
 Give the PEAS description for the starting player in tic-tac-toe.
 Classify the tic-tac-toe task environment according
to these properties:
1. Fully observable/partially observable
2. Deterministic/stochastic
3. Episodic/sequential
4. Static/dynamic/semi-dynamic
5. Discrete/continuous
