
CHAPTER 2
INTELLIGENT AGENTS

What is an agent?
 An agent is anything that perceives its environment through
sensors and acts upon that environment through actuators
 Examples:
 A human is an agent
 A robot is also an agent, with cameras as sensors and motors as actuators
 A thermostat detecting room temperature
INTELLIGENT AGENTS
DIAGRAM OF AN AGENT

What AI should fill in (the agent program)


SIMPLE TERMS

Percept
 Agent’s perceptual inputs at any given instant

Percept sequence
 Complete history of everything that the agent has ever perceived.
AGENT FUNCTION & PROGRAM

Agent’s behavior is mathematically described by


 Agent function
 A function mapping any given percept sequence to an action

Practically it is described by
 An agent program
 The real implementation
VACUUM-CLEANER WORLD

Percepts: which square the agent is in, and whether it is Clean or Dirty


Actions: Move left, Move right, Suck, Do nothing
VACUUM-CLEANER WORLD
PROGRAM IMPLEMENTS THE
AGENT FUNCTION TABULATED IN
FIG. 2.3
function Reflex-Vacuum-Agent([location, status])
returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
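The pseudocode above can be sketched directly in Python (a minimal sketch of the two-square world of Fig. 2.3, with locations 'A' and 'B'):

```python
def reflex_vacuum_agent(percept):
    """Map a (location, status) percept directly to an action."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    elif location == 'B':
        return 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))  # Suck
print(reflex_vacuum_agent(('B', 'Clean')))  # Left
```

Note that the program looks only at the current percept, not the whole percept sequence.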
RATIONALITY

What is rational at any given time depends on four things:


 The performance measure defining the criterion of success
 The agent’s prior knowledge of the environment
 The actions that the agent can perform
 The agent’s percept sequence up to now
RATIONAL AGENT

For each possible percept sequence,


 a rational agent should select
 an action expected to maximize its performance measure, given the
evidence provided by the percept sequence and whatever built-in
knowledge the agent has

E.g., an exam
 Maximize marks, based on
the questions on the paper & your knowledge
EXAMPLE OF A RATIONAL AGENT

Performance measure
 Awards one point for each clean square
 at each time step, over 10000 time steps

Prior knowledge about the environment


 The geography of the environment
 Only two squares
 The effect of the actions
EXAMPLE OF A RATIONAL AGENT

Actions that the agent can perform


 Left, Right, Suck and NoOp
Percept sequences
 Where is the agent?
 Does the location contain dirt?

Under these circumstances, the agent


is rational.
LEARNING

Does a rational agent depend only on the current percept?


 No, the past percept sequence should also be used
 This is called learning
 After experiencing an episode, the agent
 should adjust its behaviors to perform better for the same job
next time.
AUTONOMY

If an agent just relies on the prior knowledge


of its designer rather than its own percepts
then the agent lacks autonomy
A rational agent should be autonomous- it
should learn what it can to compensate
for partial or incorrect prior knowledge.
E.g., a clock
 No input (percepts)
 Runs only on its own algorithm (prior knowledge)
 No learning, no experience, etc.
SOFTWARE AGENTS

Sometimes, the environment may not be


the real world
 E.g., flight simulators, video games, the Internet
 They are all artificial but very complex
environments
 Agents working in these environments
are called
 Software agents (softbots)
 Because all parts of the agent are software
TASK ENVIRONMENTS

Task environments are the problems


 While the rational agents are the solutions
Specifying the task environment
 PEAS description as fully as possible
 Performance
 Environment

 Actuators

 Sensors

In designing an agent, the first step must always be to


specify the task environment as fully as possible.
Use automated taxi driver as an example
TASK ENVIRONMENTS

Performance measure
 How can we judge the automated driver?
 Which factors are considered?
 getting to the correct destination
 minimizing fuel consumption
 minimizing the trip time and/or cost
 minimizing the violations of traffic laws
 maximizing the safety and comfort, etc.
TASK ENVIRONMENTS

Environment
 A taxi must deal with a variety of roads
 Traffic lights, other vehicles,
pedestrians, stray animals, road
works, police cars, etc.
 Interact with the customer
TASK ENVIRONMENTS

Actuators (for outputs)


 Control over the accelerator, steering,
gear shifting and braking
 A display to communicate with the
customers
Sensors (for inputs)
 Detect other vehicles, road situations
 GPS (Global Positioning System) to know
where the taxi is
 Many more devices are necessary
TASK ENVIRONMENTS

A sketch of automated taxi driver


(PEAS)
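The PEAS sketch for the taxi can be captured as a plain data structure (a minimal sketch; the entries paraphrase the slides above):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A task-environment description: Performance, Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=['correct destination', 'low fuel use', 'low trip time/cost',
                 'few traffic-law violations', 'safety and comfort'],
    environment=['roads', 'traffic lights', 'other vehicles', 'pedestrians',
                 'stray animals', 'road works', 'police cars', 'customers'],
    actuators=['accelerator', 'steering', 'gear shifting', 'braking', 'display'],
    sensors=['cameras', 'GPS', 'vehicle detectors'],
)
```

Writing the description down this explicitly is the "first step" the slides call for before designing the agent itself.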
WHAT DID WE DO IN THE LAST CLASS?
(RECAP)

Intro to Intelligent System


Agents and environment.
Sensors and actuators
A few example discussions
Example discussion mapping percept
sequences to actions.
The concept of rationality.
Nature of Environment (PEAS)
WHAT WILL WE TRY TO COVER THIS
CLASS?

Doubts regarding :
Presentation and Projects.
----------------------------------------------------------
Properties of Task Environments
Structure of Agent
Types of Agent Program
PROPERTIES OF TASK
ENVIRONMENTS
Fully observable vs. Partially observable
Deterministic vs. Stochastic
Episodic vs. Sequential
Static vs. Dynamic
Discrete vs. Continuous
Single vs. Multi Agent
Known vs. Unknown
PROPERTIES OF TASK
ENVIRONMENTS

Fully observable vs. Partially observable


 If an agent’s sensors give it access to the complete
state of the environment at each point in time, then the
environment is effectively fully observable
 i.e., if the sensors detect all aspects
 that are relevant to the choice of action

Partially observable
 An environment might be partially observable
because of noisy and inaccurate sensors, or
because parts of the state are simply missing
from the sensor data.
 Example:
 A local dirt sensor of the cleaner cannot tell
 whether other squares are clean or not
PROPERTIES OF TASK
ENVIRONMENTS
Deterministic vs. stochastic
• Deterministic models produce the same output every time for a given set of
inputs, while stochastic models account for randomness and uncertainty
(general meaning)

 If the next state of the environment is completely
determined by the current state and the actions
executed by the agent, then the environment is
deterministic; otherwise, it is stochastic.
 Strategic environment: deterministic except for the
actions of other agents

 The cleaner and the taxi driver are:
 Stochastic because of some unobservable aspects
 noise or unknown factors
PROPERTIES OF TASK
ENVIRONMENTS
Episodic vs. sequential
 An episode = agent’s single pair of perception & action
 The quality of the agent’s action does not depend on
other episodes
 Every episode is independent of each other
 Episodic environment is simpler
 The agent does not need to think ahead

Sequential
 The current action may affect all future decisions
 E.g., taxi driving and chess
PROPERTIES OF TASK
ENVIRONMENTS
Static vs. dynamic
 A dynamic environment is always
changing over time
 E.g., the number of people in the street
 While a static environment does not change
 E.g., the destination

Semi-dynamic
 The environment does not change over time
 but the agent’s performance score does
PROPERTIES OF TASK
ENVIRONMENTS

Discrete vs. continuous


 If there are a limited number of distinct
states, clearly defined percepts and
actions, the environment is discrete
 E.g., Chess game
 Continuous: Taxi driving
PROPERTIES OF TASK
ENVIRONMENTS
Single agent vs. multiagent
 Playing a crossword puzzle – single agent
 Chess playing – two agents
 Competitive multiagent environment
 Chess playing
 Cooperative multiagent environment
 Automated taxi driver
 Avoiding collision
PROPERTIES OF TASK
ENVIRONMENTS
Known vs. unknown
 This distinction refers not to the environment itself
but to the agent’s (or designer’s) state of
knowledge about the environment.
 In a known environment, the outcomes for all
actions are given (example: solitaire card games).
 If the environment is unknown, the agent will
have to learn how it works in order to make
good decisions (example: a new video game).
EXAMPLES OF TASK
ENVIRONMENTS
STRUCTURE OF AGENTS

Agent = architecture + program


 Architecture = some sort of computing
device (sensors + actuators)
 (Agent) Program = function that
implements the agent mapping = “?”
 Agent Program = the job of AI
AGENT PROGRAMS

Input for Agent Program


 Only the current percept

Input for Agent Function


 The entire percept sequence
 The agent must remember all of them

One way to implement the agent program


 A lookup table (tabulating the agent function)
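The lookup-table idea can be sketched as follows (a minimal sketch; the table entries are illustrative):

```python
class TableDrivenAgent:
    """Implements the agent function as a table keyed by the whole percept sequence."""

    def __init__(self, table):
        self.table = table     # maps percept-sequence tuples to actions
        self.percepts = []     # the agent must remember every percept so far

    def __call__(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts))

# Illustrative table for the first two steps of the vacuum world.
table = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Dirty'), ('A', 'Clean')): 'Right',
}
agent = TableDrivenAgent(table)
```

The table grows with every possible percept sequence, which is exactly why this direct implementation is infeasible for any realistic environment and compact agent programs are needed instead.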
TYPES OF AGENT PROGRAMS

Four types
 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents
SIMPLE REFLEX AGENTS

It uses just condition-action rules


 The rules are of the form “if … then …”
 Efficient, but with a narrow range of applicability
 Because knowledge sometimes cannot be stated explicitly
 Works only
 if the environment is fully observable
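A generic simple reflex agent can be sketched as a list of condition-action rules tried in order (a minimal sketch; the rule set shown is just the vacuum example restated):

```python
def simple_reflex_agent(rules):
    """Build an agent that fires the first matching condition-action rule."""
    def agent(percept):
        for condition, action in rules:
            if condition(percept):   # "if ... then ..." rule matches
                return action
        return 'NoOp'                # no rule applies
    return agent

# Illustrative rules: percepts are dicts with 'location' and 'status'.
rules = [
    (lambda p: p['status'] == 'Dirty', 'Suck'),
    (lambda p: p['location'] == 'A', 'Right'),
    (lambda p: p['location'] == 'B', 'Left'),
]
vacuum = simple_reflex_agent(rules)
```

Each rule sees only the current percept, which is why such an agent needs a fully observable environment to act correctly.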
SIMPLE REFLEX AGENTS (2)
MODEL-BASED REFLEX AGENTS

For the world that is partially observable


 the agent has to keep track of an internal state
 That depends on the percept history
 Reflecting some of the unobserved aspects
 E.g., driving a car and changing lane

Requiring two types of knowledge


 How the world evolves independently of the agent
 How the agent’s actions affect the world
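The two types of knowledge above can be sketched in a model-based vacuum agent (a minimal sketch; the stopping rule once both squares are known clean is an illustrative assumption, not from the slides):

```python
class ModelBasedVacuumAgent:
    """Keeps an internal state of what each square's status is believed to be."""

    def __init__(self):
        self.model = {'A': None, 'B': None}   # internal state from percept history

    def __call__(self, percept):
        location, status = percept
        self.model[location] = status          # how the world evolves: update belief
        if status == 'Dirty':
            self.model[location] = 'Clean'     # how actions affect the world: Suck cleans
            return 'Suck'
        if self.model['A'] == self.model['B'] == 'Clean':
            return 'NoOp'                      # both squares believed clean: stop
        return 'Right' if location == 'A' else 'Left'
```

Unlike the simple reflex agent, this one behaves differently on the same percept depending on what it has already seen, which is what lets it cope with partial observability.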
MODEL-BASED REFLEX AGENTS
GOAL-BASED AGENTS

The current state of the environment alone is not always enough


The goal is another issue to achieve
 Judgment of rationality / correctness

Actions are chosen to achieve goals, based on


 the current state
 the current percept
GOAL-BASED AGENTS

Conclusion
 Goal-based agents are less efficient
 but more flexible
 Same agent  different goals  different tasks
 Search and planning
 These are two other sub-fields in AI
 They help find the action sequences that achieve the agent’s goal.
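Search, as mentioned above, can be sketched for the two-square vacuum world: the goal-based agent plans forward from the current state until the goal test succeeds (a minimal sketch; the state encoding as a (location, dirty-squares) pair is an illustrative assumption):

```python
from collections import deque

ACTIONS = ['Left', 'Right', 'Suck']

def result(state, action):
    """Transition model: how each action changes the world."""
    loc, dirt = state                 # dirt: frozenset of dirty squares
    if action == 'Suck':
        return (loc, dirt - {loc})
    if action == 'Left':
        return ('A', dirt)
    return ('B', dirt)

def goal_test(state):
    return not state[1]               # goal: no dirty squares left

def plan(start):
    """Breadth-first search for an action sequence that achieves the goal."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for action in ACTIONS:
            nxt = result(state, action)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
```

Changing only `goal_test` gives the same agent a different task, which is the flexibility the slide emphasizes.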
GOAL-BASED AGENTS
UTILITY-BASED AGENTS

Main focus is on Utility not Goal


An agent that acts based not only on what the goal is,
but the best way to reach that goal.
Example: a feature of getting fastest way to reach to your destination
using google maps.

A more general performance measure should allow us to


compare different world states with respect to “how
happy they would make the agent”.
The term utility can be used to describe how HAPPY the
agent is in that state.
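Choosing "the best way to reach the goal" can be sketched as picking the action whose resulting state has the highest utility (a minimal sketch; the route names and trip times are made-up numbers for illustration, not real map data):

```python
def utility_based_choice(actions, result, utility):
    """Return the action leading to the highest-utility successor state."""
    return max(actions, key=lambda a: utility(result(a)))

# Hypothetical routes to the same destination, with assumed trip times.
routes = {'highway': 25, 'downtown': 40, 'back_roads': 55}   # minutes

best = utility_based_choice(
    actions=list(routes),
    result=lambda a: routes[a],          # state reached = trip time here
    utility=lambda minutes: -minutes,    # "happier" with shorter trips
)
```

All three routes satisfy the goal (reach the destination); the utility function is what distinguishes the fastest one, mirroring the Google Maps example above.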
UTILITY-BASED AGENTS
LEARNING AGENTS

After an agent is programmed, can it work immediately?


 No, it still needs teaching
In AI,
 Once an agent is built
 We teach it by giving it a set of examples
 and test it using another set of examples
 We then say the agent learns
 A learning agent
LEARNING AGENTS

Four conceptual components


 Learning element
 Making improvement
 Performance element
 Selecting external actions
 Critic
 Tells the Learning element how well the agent is doing with
respect to fixed performance standard.
(Feedback from user or examples, good or not?)
 Problem generator
 Suggest actions that will lead to new and informative
experiences.
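Three of the four components above can be wired together in a toy sketch (the problem generator is omitted for brevity; the threshold rule, the update step of 0.1, and the fixed standard of 0.5 are all illustrative assumptions):

```python
class LearningAgent:
    def __init__(self):
        self.threshold = 0.0            # parameter the learning element tunes

    def performance_element(self, percept):
        """Selects external actions using the current rule."""
        return 'act' if percept > self.threshold else 'wait'

    def critic(self, percept, action):
        """Scores the action against a fixed performance standard."""
        correct = 'act' if percept > 0.5 else 'wait'
        return 1.0 if action == correct else -1.0

    def learning_element(self, action, feedback):
        """Makes improvements: nudges the rule after negative feedback."""
        if feedback < 0:
            self.threshold += 0.1 if action == 'act' else -0.1

    def step(self, percept):
        action = self.performance_element(percept)
        self.learning_element(action, self.critic(percept, action))
        return action
```

After a few bad episodes on the same percept, the agent adjusts its behavior to do better on the same job next time, as the LEARNING slide describes.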
LEARNING AGENTS
SUMMARY

• An agent is something that perceives and acts in


an environment.
• The agent function specifies the action.
• The performance measure evaluates the behaviour,
and a rational agent acts so as to maximize the
expected value of the performance measure.
• Task environment: described by PEAS (Performance,
Environment, Actuators, and Sensors)
SUMMARY

• We discussed the task environment properties to


be considered.
• Agent programs.
• Types of agents.
COMPLETED PART OF
SYLLABUS
PROBLEM IDENTIFICATION

• Performance measure assessment


• assessment of utility (happiness quotient)
• time complexity
• space complexity
