COMP 469 - CH 2 - Intelligent Agents and Environments
• Agents
• Environments
• Concept of rationality
• Task environments
• Properties of task environments
• Type of agents
• Type of environment states and their transitions
What is an Agent?
• Percept
– The content an agent’s sensors are perceiving
• Percept sequence
– The complete history of content the agent has ever perceived
• Agent function
– The description of an agent’s behavior that maps any given percept
sequence to an action
• Agent program
– The implementation of the agent function (a minimal sketch follows below)
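To make the distinction concrete, here is a minimal Python sketch using a two-location vacuum world as an assumed setup (the locations, statuses, and action names are illustrative): the agent function is the abstract mapping from percept sequences to actions, while the agent program is the code that implements it.

```python
def vacuum_agent_program(percept):
    """One step of the agent program: maps the current percept to an action.

    Assumed percept format: (location, status), e.g. ('A', 'Dirty').
    """
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'

# The agent function is the (conceptually infinite) table this program
# induces over all percept sequences; for this agent, the most recent
# percept alone determines the action.
print(vacuum_agent_program(('A', 'Dirty')))   # -> 'Suck'
```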
Agent, Environment, Sensors and Actuators
[Figure: an agent receives percepts from the environment through sensors and acts on the environment through actuators; the "??" box is the agent program to be filled in]
Rational Agent, Intelligent Agent
• Good behavior
– Refers to effective, efficient, adaptable, safe, and consistent actions by
an AI agent that align with its goals.
• Rationality
– The principle of making decisions that maximize the performance
measure based on goals and available information.
• Performance measures
– Metrics or criteria used to evaluate how well an AI agent achieves its
goals.
Definition of a Rational Agent
• It depends.
– We need to specify the performance measure, what is known about the
environment, and what sensors and actuators the agent has; a compact
formulation follows below.
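One standard way to make this precise (notation assumed here, not taken from the slides): for each possible percept sequence, a rational agent should select the action that maximizes the expected value of its performance measure, given the evidence in the percept sequence and whatever built-in knowledge it has.

```latex
% Rational action a* for percept sequence e_{1:t}, built-in knowledge K,
% and performance measure P (all symbols are assumed notation):
a^{*} = \operatorname*{arg\,max}_{a \in A}
        \mathbb{E}\left[\, P \mid e_{1:t},\, K,\, a \,\right]
```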
Autonomy, Rational Agent, Intelligent Agent
• Autonomy
– Autonomy in AI involves the capability of AI systems to perform tasks,
make decisions, and solve problems without continuous human oversight.
• Rational Agent
– A rational agent is an agent that acts to achieve the best possible outcome
according to its performance measure, given its knowledge and abilities.
Autonomy in this context means the agent can make independent
decisions, perceive and interpret, plan and execute, and learn and adapt.
• Intelligent Agent
– Intelligent agents are systems that exhibit some form of intelligence, such
as learning, reasoning, or problem-solving. Autonomy in intelligent agents
means these agents can operate independently, learn and improve, and
adapt and respond to their environment.
What is an Environment?
• Agent
• Environment
• Agent function
• Agent program
• Agent architecture
• Agent = architecture + program
– In our class and the textbook, we focus on agent programs (functions)
Four Basic Kinds of Agents (Programs)
[Figure: the same agent-environment diagram as above; the agent program inside the agent can follow one of the four designs detailed below: simple reflex, model-based reflex, goal-based, or utility-based]
Simple Reflex Agent
• An agent that operates based on the current percept and a set of
predefined condition-action rules (see the sketch after this list).
– It has no memory of past percepts or states and makes decisions purely based
on the current situation.
• Characteristics
– Reactive
– No memory
– Rule-based
• Suitable Task Environments
– Fully Observable, Deterministic, Episodic, Static, Discrete
• Pros
– Simple to design and implement, effective in predictable and fully observable
environments.
• Limitations
– Limited by lack of memory and learning capabilities, not suitable for complex, partially
observable, or dynamic environments.
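A minimal sketch of a simple reflex agent in Python, again assuming the two-location vacuum world; the rule table and percept format are illustrative, not a prescribed design.

```python
# Condition-action rules: each percept maps directly to an action.
RULES = {
    ('A', 'Dirty'): 'Suck',
    ('B', 'Dirty'): 'Suck',
    ('A', 'Clean'): 'Right',
    ('B', 'Clean'): 'Left',
}

def simple_reflex_agent(percept):
    # No memory, no model: the current percept alone selects the rule.
    return RULES[percept]

print(simple_reflex_agent(('B', 'Dirty')))   # -> 'Suck'
```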
Model-based Reflex Agent
• An extension of simple reflex agents that can work in a partially
observable environment, tracking the current situation by maintaining an
internal state and models of the world (sketched below)
• Components
– Internal state
– Transition model
– Sensor model
– Rules
• Suitable Task Environments
– Fully or partially observable, static or dynamic, deterministic or stochastic,
sequential
• Pros
– Handling Partially Observable Environments
– Flexibility and Adaptability
• Limitations
– Complexity, model accuracy, limited learning, scalability, adaptability
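A sketch of how these components might fit together in Python; the belief-state dictionary, the trivial sensor model, and the omitted transition model are simplifying assumptions, not a prescribed implementation.

```python
class ModelBasedReflexAgent:
    """Keeps an internal state so it can act in a partially observable world."""

    def __init__(self):
        self.state = {}          # internal belief about each location's status
        self.last_action = None

    def update_state(self, percept):
        # Sensor model: fold the new percept into the belief state.
        # (A transition model would also predict the unseen effects of
        # self.last_action; omitted in this sketch.)
        location, status = percept
        self.state[location] = status

    def __call__(self, percept):
        self.update_state(percept)
        location, _ = percept
        # Rules now consult the maintained state, not just the raw percept.
        if self.state.get(location) == 'Dirty':
            action = 'Suck'
        else:
            action = 'Right' if location == 'A' else 'Left'
        self.last_action = action
        return action

agent = ModelBasedReflexAgent()
print(agent(('A', 'Dirty')))     # -> 'Suck'; belief state now records A's status
```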
Goal-based Agent
• An agent that takes actions with the objective of achieving specific goals
(see the sketch after this list).
– These agents use a combination of current state information and goal-related information to
determine the best course of action to reach their goals.
• Components
– Goal
– State
– Models
– Search and Planning Algorithms
• Suitable Task Environments
– static or dynamic, deterministic or stochastic, sequential
• Pros
– directed behavior, flexibility, improved decision-making, handling complex
environments, adaptation to new problems, and improved efficiency.
• Limitations
– Complexity, model accuracy, limited learning, scalability, adaptability
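One way a goal-based agent might choose actions, sketched as a breadth-first search over a made-up three-location map; any search or planning algorithm could play this role.

```python
from collections import deque

# Transition model: state -> {action: successor state} (illustrative map).
ACTIONS = {
    'A': {'right': 'B'},
    'B': {'left': 'A', 'right': 'C'},
    'C': {'left': 'B'},
}

def plan(start, goal):
    """Breadth-first search: return a list of actions reaching the goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for action, nxt in ACTIONS[state].items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None   # goal unreachable

print(plan('A', 'C'))   # -> ['right', 'right']
```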
Utility-based Agent
• An intelligent agent that aims to maximize a utility function, which quantifies
the desirability of different states or outcomes (see the sketch after this list).
– These agents make decisions based on a measure of how "good" or "useful" a particular
state is, rather than merely achieving a goal or following predefined rules.
• Components
– Utility function
– State
– Models
– Search and Planning Algorithms
• Suitable Task Environments
– static or dynamic, nondeterministic or stochastic
• Pros
– Rational Decision-Making, Flexibility and Adaptability, Handling Uncertainty, Multi-
Objective Optimization, Scalability
• Limitations
– Complexity, Computationally Intensive, Subjectivity
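A sketch of the core decision rule in Python: compute the expected utility of each action and pick the maximizer. The actions, outcome probabilities, and utility numbers are invented for illustration.

```python
# Each action leads to outcomes with assumed probabilities.
OUTCOMES = {
    'safe_route': [(1.0, 'arrive_late')],
    'fast_route': [(0.7, 'arrive_early'), (0.3, 'traffic_jam')],
}
# Utility function: how desirable each resulting state is (made-up values).
UTILITY = {'arrive_early': 10, 'arrive_late': 4, 'traffic_jam': 0}

def expected_utility(action):
    return sum(p * UTILITY[state] for p, state in OUTCOMES[action])

best = max(OUTCOMES, key=expected_utility)
print(best, expected_utility(best))   # -> fast_route 7.0
```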
Learning Agents
• An agent that can improve its performance over time through experience.
– Learning allows the agent to operate in initially unknown environments and to
become more competent over time (see the sketch after this list).
• Components
– Performance Element
– Learning Element
– Critic
– Problem Generator
• How Learning Agents Operate
– Interaction
– Feedback
– Learning
– Adaptation
• Pros
– Adaptability, improvement over time, handling complexity
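A toy sketch of that loop in Python, mapping the four components onto a two-action problem; the reward model, exploration rate, and action names are assumptions made only so the loop runs.

```python
import random

values = {'a1': 0.0, 'a2': 0.0}   # learned action-value estimates
counts = {'a1': 0, 'a2': 0}

def performance_element():
    # Problem generator: occasionally explore a random action ...
    if random.random() < 0.1:
        return random.choice(list(values))
    # ... otherwise exploit the current best estimate.
    return max(values, key=values.get)

def critic(action):
    # Feedback: a (noisy, assumed) reward judging how well the agent did.
    return random.gauss(1.0 if action == 'a2' else 0.5, 0.1)

def learning_element(action, reward):
    # Adaptation: move the estimate toward the observed reward.
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

for _ in range(200):              # interaction -> feedback -> learning
    a = performance_element()
    learning_element(a, critic(a))

print(max(values, key=values.get))   # usually 'a2' after adaptation
```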
Three Ways to Represent States and the Transitions between Them
• Atomic
– Each state is indivisible and is treated as a single, unique entity
without any internal structure. Transitions between states are also
treated at this atomic level.
• Factored
– In a factored representation, each state is described by a set of
variables (or factors), and the state is represented as a vector of these
variables. Transitions involve changes to the values of these variables.
• Structured
– Structured representation involves even more detailed and
sophisticated modeling such as hierarchical structures, graphs, and
relational models, where states are described using objects, their
properties, and the relationships between them.
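A small sketch contrasting the three representations for the same hypothetical vacuum-world situation (all names are illustrative):

```python
# Atomic: the state is an opaque label with no internal structure;
# transitions would simply be pairs of such labels.
atomic_state = 'S7'

# Factored: the state is a vector of variables; a transition changes
# the values of some variables.
factored_state = {'location': 'B', 'battery': 0.8, 'dirt_at_A': True}

# Structured: objects, their properties, and relations between them.
structured_state = {
    'objects': ['robot', 'roomA', 'roomB'],
    'relations': [('in', 'robot', 'roomB'), ('dirty', 'roomA')],
}
```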
Summary
• Agents interact with environments through actuators and sensors
• The agent function describes what the agent does in all
circumstances
• The performance measure evaluates the environment sequence
• A perfectly rational agent maximizes expected performance
• Agent programs implement agent functions
• PEAS descriptions define task environments
• Environments are categorized along several dimensions:
– observable? deterministic? episodic? static? discrete? single-agent?
• Several basic agent architectures exist:
– reflex, reflex with state, goal-based, utility-based
• All agents can improve their performance through learning
– Learning agents