
Week 2

Agents in Artificial Intelligence


Artificial intelligence is defined as the study of rational agents. A rational agent could be
anything that makes decisions, such as a person, firm, machine, or piece of software. It acts
to achieve the best outcome after considering past and current percepts (an agent's perceptual
inputs at a given instant). An AI system is composed of an agent and its environment. Agents
act in their environment, and the environment may contain other agents.
An agent is anything that can be viewed as:

• perceiving its environment through sensors, and
• acting upon that environment through actuators.

Note: Every agent can perceive its own actions (but not always their effects).

To understand the structure of intelligent agents, we should be familiar with architecture and
agent programs. Architecture is the machinery that the agent executes on: a device with
sensors and actuators, for example a robotic car, a camera, or a PC. An agent program is an
implementation of an agent function. An agent function is a map from the percept sequence
(the history of all that an agent has perceived to date) to an action.

Agent = Architecture + Agent Program
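
As a minimal sketch of this decomposition (the class and method names here are illustrative assumptions, not a standard API), the agent program can be written as a callable that maps the growing percept sequence to an action:

```python
# Minimal sketch: an agent program implements the agent function
# percept sequence -> action. Concrete agent types override decide().

class AgentProgram:
    def __init__(self):
        self.percept_sequence = []  # history of all percepts to date

    def __call__(self, percept):
        self.percept_sequence.append(percept)  # record the new percept
        return self.decide(self.percept_sequence)

    def decide(self, percepts):
        raise NotImplementedError  # each agent type supplies its own rule
```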
Examples of agents:

• A software agent has keystrokes, file contents, and received network packets as
sensors, and displays on the screen, files, and sent network packets as actuators.
• A human agent has eyes, ears, and other organs as sensors, and hands, legs,
mouth, and other body parts as actuators.
• A robotic agent has cameras and infrared range finders as sensors, and
various motors as actuators.
Types of Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and
capability:

• Simple Reflex Agents
• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agents

Simple reflex agents
Simple reflex agents ignore the rest of the percept history and act only on the current
percept. (The percept history is the history of all that an agent has perceived to date.) The
agent function is based on condition-action rules. A condition-action rule maps a state, i.e.,
a condition, to an action: if the condition is true, the action is taken; otherwise it is not.
This agent function succeeds only when the environment is fully observable. For simple reflex
agents operating in partially observable environments, infinite loops are often unavoidable,
though it may be possible to escape them if the agent can randomize its actions.
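
As an illustration, a simple reflex agent for the classic two-square vacuum world can be written as a handful of condition-action rules (the percept format and action names are assumptions for this sketch):

```python
# Illustrative simple reflex agent for a two-square vacuum world.
# The percept is assumed to be a (location, status) pair; only the
# current percept is consulted, never the history.

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":   # condition-action rule 1: clean a dirty square
        return "Suck"
    if location == "A":     # rule 2: move on from a clean square
        return "Right"
    return "Left"           # rule 3: location == "B"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
```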
Problems with simple reflex agents:

• Very limited intelligence.
• No knowledge of non-perceptual parts of the state.
• The set of rules is usually too big to generate and store.
• If the environment changes, the collection of rules needs to be updated.
Model-based reflex agents
A model-based agent works by finding a rule whose condition matches the current situation,
and it can handle partially observable environments by using a model of the world. The agent
keeps track of an internal state, adjusted by each percept and dependent on the percept
history. This state, stored inside the agent, maintains some kind of structure describing the
part of the world that cannot currently be seen.
Updating the state requires information about:

• how the world evolves independently of the agent, and
• how the agent's actions affect the world.
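
A minimal sketch of such internal state, extending the vacuum example above (the model layout is an assumption for illustration):

```python
# Illustrative model-based reflex agent. The internal model remembers the
# last known status of each square, compensating for partial observability.

class ModelBasedVacuumAgent:
    def __init__(self):
        self.model = {"A": None, "B": None}  # internal state: last known status

    def __call__(self, percept):
        location, status = percept
        self.model[location] = status  # update the model from the percept
        if status == "Dirty":
            return "Suck"
        # Consult the model, not just the current percept: stop once both
        # squares are known to be clean.
        if self.model["A"] == self.model["B"] == "Clean":
            return "NoOp"
        return "Right" if location == "A" else "Left"
```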
Goal-based agents
These agents take decisions based on how far they currently are from their goal (a
description of a desirable situation). Every action is intended to reduce the agent's distance
from the goal. This gives the agent a way to choose among multiple possibilities, selecting
the one that reaches a goal state. The knowledge that supports its decisions is represented
explicitly and can be modified, which makes these agents more flexible and their behavior
easy to change. They usually require search and planning.
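
As a sketch of the idea (greedy one-step selection stands in here for full search or planning; successors and distance_to_goal are assumed, problem-specific functions):

```python
# Illustrative goal-based action selection: pick the action whose outcome
# lies closest to the goal. A real agent would search or plan rather than
# look only one step ahead.

def goal_based_action(state, goal, successors, distance_to_goal):
    best_action, best_dist = None, float("inf")
    for action, next_state in successors(state):
        d = distance_to_goal(next_state, goal)
        if d < best_dist:           # keep the action nearest the goal
            best_action, best_dist = action, d
    return best_action
```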

Utility-based agents
Utility-based agents are used when there are multiple possible alternatives and the agent
must decide which one is best. They choose actions based on a preference (utility) for each
state. Sometimes achieving the desired goal is not enough: we may want a quicker, safer, or
cheaper trip to a destination, so the agent's happiness should be taken into consideration.
Utility describes how "happy" the agent is. Because of uncertainty in the world, a utility
agent chooses the action that maximizes the expected utility. A utility function maps a state
onto a real number describing the associated degree of happiness.
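
A sketch of maximizing expected utility, where outcomes(state, action) is an assumed function yielding (probability, next_state) pairs and utility is the agent's utility function:

```python
# Illustrative utility-based choice: weight each possible outcome's utility
# by its probability, then pick the action with the highest expectation.

def expected_utility(state, action, outcomes, utility):
    return sum(p * utility(s) for p, s in outcomes(state, action))

def best_action(state, actions, outcomes, utility):
    return max(actions,
               key=lambda a: expected_utility(state, a, outcomes, utility))
```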
Learning agents
A learning agent in AI is an agent that can learn from its past experiences; that is, it has
learning capabilities. It starts with basic knowledge and then acts and adapts automatically
through learning.
A learning agent has four main conceptual components:

• Learning element: responsible for making improvements by learning from the
environment.
• Critic: gives the learning element feedback describing how well the agent is doing
with respect to a fixed performance standard.
• Performance element: responsible for selecting external actions.
• Problem generator: responsible for suggesting actions that will lead to new and
informative experiences.
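
These four components can be pictured as a class skeleton (all names are illustrative; a real learning element might, for example, implement reinforcement learning):

```python
# Illustrative skeleton of a learning agent's four conceptual components.

class LearningAgent:
    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # improves the agent
        self.critic = critic                            # scores behavior against a standard
        self.problem_generator = problem_generator      # proposes exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)          # how well is the agent doing?
        self.learning_element.update(feedback)   # learn from the feedback
        suggestion = self.problem_generator.suggest(percept)
        # Try an informative experiment when one is suggested; otherwise let
        # the performance element choose the action.
        if suggestion is not None:
            return suggestion
        return self.performance_element(percept)
```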
