
COMP 469 - CH 2 - Intelligent Agents and Environments


Lesson 2

Intelligent Agents and Environments


Agents, environments, sensors, actuators, rational agents, intelligent
agents, performance measure, task environment, PEAS, task
environment’s properties, agent architecture, agent program
Outline

• Agents
• Environments
• Concept of rationality
• Task environments
• Properties of task environments
• Types of agents
• Types of environment states and their transitions
What is an Agent?

• An artificial entity that interacts with its environment to
achieve specific goals.
– the concept of an agent can be broadened to include human beings in certain
contexts
• An agent is an entity that perceives its environment through
sensors and acts upon that environment through actuators.
What is an Environment?

• An environment is a surrounding (external world) where an
agent operates and interacts.
• The environment can affect the agent’s behavior and actions,
and it is affected by the agent’s actions.
– Understanding the environment is crucial for designing AI agents that can
effectively interact with and adapt to their surroundings.
Percept, Percept Sequences,
Agent Function, Agent Program

• Percept
– The content an agent’s sensors are perceiving
• Percept sequence
– The complete history of content the agent has ever perceived
• Agent function
– The description of an agent’s behavior that maps any given percept
sequence to an action
• Agent program
– the implementation of the agent function
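The distinction can be made concrete in code: below, a hypothetical agent function maps a whole percept sequence to an action, while the agent program is the running implementation that accumulates percepts and applies that function. The specific rule used (act on the most recent percept) is purely illustrative.

```python
# Sketch of the agent-function / agent-program distinction.
# The percept format and the rule below are illustrative assumptions.

def agent_function(percept_sequence):
    """Abstract description: maps a percept sequence to an action."""
    location, status = percept_sequence[-1]   # here: only the latest percept matters
    return "Suck" if status == "Dirty" else "NoOp"

class AgentProgram:
    """Concrete implementation: accumulates percepts, applies the function."""
    def __init__(self):
        self.percepts = []

    def __call__(self, percept):
        self.percepts.append(percept)
        return agent_function(self.percepts)
```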
Agent, Environment, Sensors and
Actuators

[Diagram: percepts flow from the environment to the agent through its
sensors; actions flow from the agent to the environment through its
actuators. The "?" box is the agent program to be designed.]
Rational Agent, Intelligent Agent

• A (simple reflex) agent is anything
– that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators.
• A rational agent is an agent
– that acts to achieve the best possible outcome, or, when there is
uncertainty, the best expected outcome.
• An intelligent agent is a rational agent capable of learning,
adaptation, and autonomous behavior
– The focus is on adaptability, learning, and interacting with the
environment in a way that improves the agent's performance over
time.
Vacuum-cleaner World

Percepts: location and contents, e.g., [A, Dirty]

Actions: Left, Right, Suck, NoOp
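The vacuum-cleaner agent can be written as a handful of condition-action rules; this sketch assumes the two-square A/B world from the slide.

```python
# Simple reflex agent for the two-square vacuum world.
# Percepts are (location, status) pairs, e.g. ("A", "Dirty").

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"     # clean the current square first
    if location == "A":
        return "Right"    # otherwise move to the other square
    if location == "B":
        return "Left"
    return "NoOp"
```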


Good Behavior, Rationality,
Performance Measures

• Good behavior
– Refers to effective, efficient, adaptable, safe, and consistent actions by
an AI agent that align with its goals.
• Rationality
– The principle of making decisions that maximize the performance
measure based on goals and available information.
• Performance measures
– Metrics or criteria used to evaluate how well an AI agent achieves its
goals.
Definition of a Rational Agent

• “For each possible percept sequence, a rational agent should
select an action that is expected to maximize its performance
measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has.”
• “A rational agent is an agent that acts to achieve the best
possible outcome, or, when there is uncertainty, the best
expected outcome.”
Is the Simple Vacuum-Cleaner a
Rational Agent?

• It depends.
– We need to specify what the performance measure is, what is known
about the environment, and what sensors and actuators the agent
has.
Autonomy, Rational Agent,
Intelligent Agent
• Autonomy
– Autonomy in AI involves the capability of AI systems to perform tasks,
make decisions, and solve problems without continuous human oversight.
• Rational Agent
– A rational agent is an agent that acts to achieve the best possible outcome
according to its performance measure, given its knowledge and abilities.
Autonomy in this context means the agent can make independent
decisions, perceive and interpret, plan and execute, and learn and adapt.
• Intelligent Agent
– Intelligent agents are systems that exhibit some form of intelligence, such
as learning, reasoning, or problem-solving. Autonomy in intelligent agents
means these agents can operate independently, learn and improve, and
adapt and respond.
What is a Task Environment?

• A task environment is a specific type of environment that is
defined by the task an AI agent is designed to perform.
• Task environments are characterized by
– Goals
– Performance Measures
– Environment
– Actions
– Percept sequences
How to Specify the Task
Environment?
• To design a rational agent, we must specify the task
environment using PEAS which stands for Performance
measure, Environment, Actuators, and Sensors.
• This framework is used to describe the task environment for
intelligent agents, helping to design and understand the
functionality and context in which an agent operates.
– Performance measure
– Environment
– Actuators
– Sensors
An Example of PEAS Description
Examples of Agent Types and Their
PEAS Descriptions
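A PEAS description can be written down as plain data; the automated-taxi entries below follow the classic textbook example, and the exact lists are illustrative rather than exhaustive.

```python
# PEAS description of an automated taxi, recorded as plain data.
# The entries follow the standard textbook example; they are illustrative.

peas_taxi = {
    "Performance measure": ["safe", "fast", "legal", "comfortable trip"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators": ["steering", "accelerator", "brake", "signal", "horn"],
    "Sensors": ["cameras", "speedometer", "GPS", "odometer"],
}
```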
Properties of Task Environments
• A task environment is a specific type of environment that is defined by the
task an AI agent is designed to perform.
• Understanding the properties of these environments helps in designing
and evaluating intelligent systems
– Fully Observable vs. Partially Observable vs. Unobservable
– Deterministic vs. Nondeterministic vs. Stochastic
– Episodic vs. Sequential
– Static vs. Dynamic
– Discrete vs. Continuous
– Single-Agent vs. Multi-Agent
– Known vs. Unknown
• The distinction between known and unknown environments is not the
same as the one between fully and partially observable environments.
• The hardest case is partially observable, multiagent, nondeterministic,
sequential, dynamic, continuous, and unknown.
Examples of task environments and
their characteristics

• The environment type largely determines the agent design.
• The real world is (of course) partially observable, stochastic, sequential,
dynamic, continuous, and multi-agent.
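The property dimensions above can be recorded per environment; the two classifications below follow standard examples (a crossword puzzle versus taxi driving) and are illustrative.

```python
# Two example task environments classified along the property dimensions.
# The classifications follow the standard textbook examples.

environments = {
    "crossword puzzle": {
        "observable": "fully", "deterministic": True, "episodic": False,
        "static": True, "discrete": True, "agents": "single",
    },
    "taxi driving": {
        "observable": "partially", "deterministic": False, "episodic": False,
        "static": False, "discrete": False, "agents": "multi",
    },
}
```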
The Structure of Agents

• Agent
• Environment
• Agent function
• Agent program
• Agent architecture
• Agent = architecture + program
– In our class and the textbook, we focus on agent programs (functions)
Four Basic Kinds of Agents
(Programs)

• Four basic types in order of increasing generality:
– Simple reflex agent
– Model-based reflex agent
– Goal-based agent
– Utility-based agent

• All these can be turned into learning agents


Simple Reflex Agent
• Agents operate based on the current percept and a set of
predefined condition-action rules.
– They do not have memory of past percepts or states and make decisions purely based
on the current situation.
• Characteristics
– Reactive
– No memory
– Rule-based
• Suitable Task Environments
– Fully Observable, Deterministic, Episodic, Static, Discrete
• Pros
– Simple to design and implement, effective in predictable and fully observable
environments.
• Limitations
– Limited by lack of memory and learning capabilities, not suitable for complex, partially
observable, or dynamic environments.
Model-based Reflex Agent
• An extension of simple reflex agents that can work in a partially
observable environment and track the situation by maintaining internal
state and models
• Components
– Internal state
– Transition model
– Sensor model
– Rules
• Suitable Task Environments
– Observable or partially observable, static or dynamic, deterministic or stochastic,
sequential
• Pros
– Handling Partially Observable Environments
– Flexibility and Adaptability
• Limitations
– Complexity, model accuracy, limited learning, scalability, adaptability
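A minimal sketch of a model-based reflex agent for the vacuum world, under the assumption of a simple believed-status model: by remembering which squares it believes are clean, the agent can issue NoOp once both are clean, something a memoryless reflex agent cannot do.

```python
# Model-based reflex agent for the two-square vacuum world.
# The internal state and transition-model logic are illustrative assumptions.

class ModelBasedVacuumAgent:
    def __init__(self):
        self.model = {"A": None, "B": None}   # internal state: believed status

    def __call__(self, percept):
        location, status = percept
        self.model[location] = status          # sensor model: update belief
        if status == "Dirty":
            self.model[location] = "Clean"     # transition model: Suck cleans
            return "Suck"
        if all(v == "Clean" for v in self.model.values()):
            return "NoOp"                      # believed goal state reached
        return "Right" if location == "A" else "Left"
```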
Goal-based Agent
• An agent that takes actions with the objective of achieving specific goals.
– These agents use a combination of current state information and goal-related information to
determine the best course of action to reach their goals.
• Components
– Goal
– State
– Models
– Search and Planning Algorithms
• Suitable Task Environments
– static or dynamic, deterministic or stochastic, sequential
• Pros
– directed behavior, flexibility, improved decision-making, handling complex
environments, adaptation to new problems, and improved efficiency.
• Limitations
– Complexity, model accuracy, limited learning, scalability, adaptability
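The search-and-planning component can be sketched with a breadth-first search over a hypothetical transition model (state → {action: next_state}); the states and actions below are illustrative.

```python
# Goal-based action selection: search for an action sequence reaching the goal.
from collections import deque

def plan(start, goal, transitions):
    """Return a list of actions leading from start to goal, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in transitions.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

# Illustrative transition model over three locations.
transitions = {
    "A": {"Right": "B"},
    "B": {"Right": "C", "Left": "A"},
    "C": {"Left": "B"},
}
```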
Utility-based Agent
• Intelligent agents that aim to maximize a utility function, which quantifies
the desirability of different states or outcomes.
– These agents make decisions based on a measure of how "good" or "useful" a particular
state is, rather than merely achieving a goal or following predefined rules.
• Components
– Utility function
– State
– Models
– Search and Planning Algorithms
• Suitable Task Environments
– static or dynamic, nondeterministic or stochastic
• Pros
– Rational Decision-Making, Flexibility and Adaptability, Handling Uncertainty,
Multi-Objective Optimization, Scalability
• Limitations
– Complexity, Computationally Intensive, Subjectivity
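The selection rule can be sketched as maximizing expected utility, summing utility × probability over each action's possible outcomes; the states, probabilities, and utility values below are illustrative.

```python
# Utility-based action selection via expected utility.

def expected_utility(outcomes, utility):
    """outcomes: list of (state, probability) pairs."""
    return sum(p * utility[s] for s, p in outcomes)

def choose_action(actions, utility):
    """actions: {name: [(state, probability), ...]}; pick the best action."""
    return max(actions, key=lambda a: expected_utility(actions[a], utility))

# Illustrative utilities and outcome distributions for a driving decision.
utility = {"crash": -100.0, "arrive_late": 10.0, "arrive_on_time": 50.0}
actions = {
    "speed":    [("arrive_on_time", 0.7), ("crash", 0.3)],
    "cautious": [("arrive_on_time", 0.4), ("arrive_late", 0.6)],
}
```

Here the risky action loses despite its higher best-case payoff: 0.7·50 − 0.3·100 = 5, versus 0.4·50 + 0.6·10 = 26 for the cautious one.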
Learning Agents
• An agent that can improve its performance over time through
experience.
– These agents learn from feedback and adapt their behavior accordingly.
• Components
– Performance Element
– Learning Element
– Critic
– Problem Generator
• How Learning Agents Operate
– Interaction
– Feedback
– Learning
– Adaptation
• Pros
– Adaptability, improvement over time, handling complexity
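The loop above can be sketched minimally: the performance element picks the currently best-valued action, and the learning element folds the critic's reward into a running average per action. The incremental-average update is an illustrative choice, not the only one.

```python
# Minimal learning-agent loop: act, receive feedback, update estimates.

class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}   # learned action-value estimates
        self.counts = {a: 0 for a in actions}

    def act(self):
        # performance element: pick the currently best-valued action
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # learning element: incremental average of the critic's feedback
        self.counts[action] += 1
        n = self.counts[action]
        self.values[action] += (reward - self.values[action]) / n
```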
Three Ways to Represent States
and the Transitions between them
• Atomic
– Each state is indivisible and is treated as a single, unique entity
without any internal structure. Transitions between states are also
treated at this atomic level
• Factored
– In a factored representation, each state is described by a set of
variables (or factors), and the state is represented as a vector of these
variables. Transitions involve changes to the values of these variables.
• Structured
– Structured representation involves even more detailed and
sophisticated modeling such as hierarchical structures, graphs, and
relational models, where states are described using objects, their
properties, and the relationships between them.
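The three representations can be contrasted on the same vacuum-world state; all three encodings below are illustrative.

```python
# One vacuum-world state in atomic, factored, and structured form.

# Atomic: an opaque label with no internal structure.
atomic_state = "state_3"

# Factored: a vector of variables describing the state.
factored_state = {"location": "A", "A_dirty": True, "B_dirty": False}

# Structured: objects, their properties, and relations between them.
structured_state = {
    "objects": {"robot": {"at": "A"}, "A": {"dirty": True}, "B": {"dirty": False}},
    "relations": [("adjacent", "A", "B")],
}
```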
Summary
• Agents interact with environments through actuators and sensors
• The agent function describes what the agent does in all
circumstances
• The performance measure evaluates the environment sequence
• A perfectly rational agent maximizes expected performance
• Agent programs implement agent functions
• PEAS descriptions define task environments
• Environments are categorized along several dimensions:
– observable? deterministic? episodic? static? discrete? single-agent?
• Several basic agent architectures exist:
– reflex, reflex with state, goal-based, utility-based
• All agents can improve their performance through learning
– Learning agents
