The document discusses the concept of rationality in AI, defining a rational agent as one that maximizes its performance based on its perceptions and knowledge. It outlines the structure of an AI agent, including architecture, agent function, and agent program, and introduces the PEAS representation model for defining agent properties. Additionally, it describes various features of the environment in which agents operate, such as observability, determinism, and accessibility.
Rational Agent
Rational Agent and Agent Environment
Lecture 4

Rationality
• Rationality is the state of being reasonable and sensible, and having good judgment.
• Rationality is concerned with expected actions and results, depending on what the agent has perceived.
• Performing actions to obtain useful information is an important part of rationality.
• The rationality of an agent is measured by its performance measure.

Rationality can be judged based on the following points:
• The performance measure, which defines the success criterion.
• The agent's prior knowledge of its environment.
• The best possible actions that the agent can perform.
• The sequence of percepts.

Rational Agent
An ideal rational agent is one that is capable of performing the expected actions to maximize its performance measure, based on:
• Its percept sequence
• Its built-in knowledge base
• A rational agent has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.
• A rational agent is said to do the right thing.
• AI is about creating rational agents, drawing on game theory and decision theory for various real-world scenarios.
• For an AI agent, rational action is central: in a reinforcement learning algorithm, the agent receives a positive reward for each best possible action and a negative reward for each wrong action.

Structure of an AI Agent
• The task of AI is to design an agent program that implements the agent function.
• The structure of an intelligent agent is a combination of architecture and agent program. It can be viewed as:
Agent = Architecture + Agent Program
The following are the three main terms involved in the structure of an AI agent:
• Architecture: the machinery that the AI agent executes on.
• Agent function: maps a percept sequence to an action, f : P* → A.
• Agent program: an implementation of the agent function. The agent program runs on the physical architecture to produce f.

PEAS Representation
• PEAS is a model for describing an AI agent. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It stands for:
• P: Performance measure
• E: Environment
• A: Actuators
• S: Sensors
Performance measure is the objective for the success of an agent's behaviour.
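The mapping f : P* → A can be illustrated with a minimal table-driven agent program. The following is a sketch in Python; the two-square vacuum-world percepts and actions are illustrative assumptions, not part of the slides:

```python
# A minimal table-driven agent: the agent function f maps the
# percept sequence seen so far (P*) to an action (A) via an
# explicit lookup table. The vacuum world here is an assumed example.

def make_table_driven_agent(table):
    """Return an agent program that looks up the full percept sequence."""
    percepts = []  # the percept sequence P* observed so far

    def agent_program(percept):
        percepts.append(percept)
        # Unlisted percept sequences fall back to doing nothing.
        return table.get(tuple(percepts), "NoOp")

    return agent_program

# The agent function as a table: percept sequence -> action.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    (("A", "Clean"), ("B", "Clean")): "Left",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))   # Right
print(agent(("B", "Dirty")))   # Suck
```

The table grows exponentially with the length of the percept sequence, which is why practical agent programs compute f rather than tabulate it.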
PEAS for Self-Driving Cars
If we consider a self-driving car, then its PEAS representation will be:
• Performance: Safety, time, legal driving, comfort
• Environment: Roads, other vehicles, road signs, pedestrians
• Actuators: Steering, accelerator, brake, signal, horn
• Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar

Agent Environment in AI
• An environment is everything in the world that surrounds the agent, but is not part of the agent itself.
• An environment can be described as the situation in which an agent is present.
• The environment is where the agent lives and operates; it provides the agent with something to sense and act upon.
• An environment is usually non-deterministic.

Features of Environment
As per Russell and Norvig, an environment can have various features from the point of view of an agent:
1. Fully observable vs Partially observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs Sequential
7. Known vs Unknown
8. Accessible vs Inaccessible

Fully Observable vs Partially Observable:
• If an agent's sensors can access the complete state of the environment at each point in time, then it is a fully observable environment; otherwise it is partially observable.
• A fully observable environment is easier to deal with, as there is no need to maintain an internal state to keep track of the history of the world.
• If an agent has no sensors in an environment, that environment is called unobservable.
• Examples: Chess (fully observable), Poker (partially observable)

Deterministic vs Stochastic:
• If an agent's current state and selected action completely determine the next state of the environment, then the environment is called deterministic.
• Example: a chess game
• A stochastic environment is random in nature and cannot be completely determined by the agent.
• Example: weather forecasting
• In a deterministic, fully observable environment, the agent does not need to worry about uncertainty.

Episodic vs Sequential:
• In an episodic environment, there is a series of one-shot actions, and only the current percept is required to choose an action.
• Example: playing individual games of chess
• In a sequential environment, an agent requires memory of past actions to determine the next best action.
• Example: learning to drive a car

Single-agent vs Multi-agent:
• If only one agent is involved in an environment and operates by itself, then it is called a single-agent environment.
• Example: a chess puzzle solver
• If multiple agents are operating in an environment, then it is called a multi-agent environment.
• The agent design problems in a multi-agent environment are different from those in a single-agent environment.
• Example: a robotic soccer team

Static vs Dynamic:
• If the environment can change while the agent is deliberating, it is called a dynamic environment; otherwise it is static.
• Static environments are easier to deal with, because the agent does not need to keep observing the world while deciding on an action.
• Examples: tic-tac-toe, crossword puzzles
• In a dynamic environment, the agent needs to keep observing the world before each action.
• Example: taxi driving

Discrete vs Continuous:
• If an environment has a finite number of percepts and actions that can be performed within it, it is called a discrete environment.
• A chess game is a discrete environment, as there is a finite number of moves that can be performed.
• In a continuous environment, there is an infinite range of possible states, actions, or outcomes; the transitions between states, or the set of possible actions, are not countable and can take any value within a range.
• A self-driving car is an example of a continuous environment.

Known vs Unknown
• Known and unknown are not actually features of the environment; they describe the agent's state of knowledge about how to act in it.
• In a known environment, the results of all actions are known to the agent.
• Example: chess with perfect information
• In an unknown environment, the agent needs to learn how the environment works in order to act.
• Example: exploration of uncharted territory
• It is quite possible for a known environment to be partially observable, and for an unknown environment to be fully observable.

Accessible vs Inaccessible
• If an agent can obtain complete and accurate information about the environment's state, the environment is called accessible.
• An empty room whose state can be defined by its temperature is an example of an accessible environment.
• Example: a chess game with a visible board
• In an inaccessible environment, the agent does not have direct access to complete information about the current state or the available actions.
• Information about an event somewhere on Earth is an example of an inaccessible environment.
• Example: poker with hidden cards
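The feature dimensions above can be collected into a simple classification table. The sketch below restates the slide examples (chess and taxi driving) in Python; the dictionary layout and function names are my own, not a standard API:

```python
# Classify example environments from the lecture along the feature
# dimensions discussed above. Structure and names are illustrative.

ENVIRONMENTS = {
    "chess": {
        "observable": "fully",
        "deterministic": True,
        "static": True,
        "discrete": True,
    },
    "taxi driving": {
        "observable": "partially",
        "deterministic": False,   # stochastic: traffic, pedestrians
        "static": False,          # the world changes while deliberating
        "discrete": False,        # continuous positions and speeds
    },
}

def describe(name):
    """Render one environment's features as a readable summary line."""
    f = ENVIRONMENTS[name]
    return "{}: {} observable, {}, {}, {}".format(
        name,
        f["observable"],
        "deterministic" if f["deterministic"] else "stochastic",
        "static" if f["static"] else "dynamic",
        "discrete" if f["discrete"] else "continuous",
    )

print(describe("chess"))
# chess: fully observable, deterministic, static, discrete
print(describe("taxi driving"))
# taxi driving: partially observable, stochastic, dynamic, continuous
```

Laying the features out this way makes it easy to see why taxi driving is one of the hardest settings in the taxonomy: it sits on the difficult side of every dimension.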