The document discusses key concepts in artificial intelligence including agents, environments, and problem solving. It defines agents as entities that can perceive and act, and describes different types of agents including intelligent agents, rational agents, and learning agents. It then outlines different types of environments an agent may operate in such as fully/partially observable, single/multi-agent, static/dynamic, and known/unknown. Finally, it characterizes problem solving in AI as a systematic search through possible actions to reach a predefined goal, often using means-end analysis.


MODULE 2 AI VIVA QUESTIONS

1. Agent:
"Anything that can be viewed as perceiving its environment through sensors and acting
upon that environment through effectors." [Russell, Norvig 1995]

Types of Agents in AI:

1. Intelligent Agent: An intelligent agent is an autonomous entity which acts upon an
environment using sensors and actuators to achieve its goals. An intelligent agent may
also learn from the environment in pursuit of those goals. A thermostat is a simple
example of an intelligent agent.

The four main rules for an AI agent are:

o Rule 1: An AI agent must have the ability to perceive the environment.
o Rule 2: The observation must be used to make decisions.
o Rule 3: A decision should result in an action.
o Rule 4: The action taken by an AI agent must be a rational action.

Simple Reflex Agents: These agents act on a set of predefined condition-action rules
that map the current percept directly to an action. They keep no internal state and
ignore percept history, so they work well only when the correct action can be chosen
from the current percept alone.
Model-Based Reflex Agents: These agents also use predefined rules, but they maintain
an internal model of the environment, which lets them track aspects of the current
state that are not directly observable and take past percepts and actions into account.

Goal-Based Agents: These agents have an explicit goal to achieve and choose actions
that move them toward that goal, typically by considering which action sequences lead
to it.
Utility-Based Agents: These agents choose actions based on the expected utility or
benefit of each action, weighing both the immediate and the long-term consequences.
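The thermostat mentioned above is a natural example of a simple reflex agent: the current percept (the temperature) is mapped straight to an action by condition-action rules, with no state or history. A minimal sketch, with illustrative action names and thresholds:

```python
# Simple reflex agent sketch: a thermostat. The action names and the
# temperature thresholds are invented for illustration.

def thermostat_agent(temperature_c):
    """Map the current percept directly to an action via condition-action rules."""
    if temperature_c < 18:       # too cold -> turn the heater on
        return "HEAT_ON"
    elif temperature_c > 24:     # too warm -> turn the heater off
        return "HEAT_OFF"
    else:                        # comfortable -> do nothing
        return "NO_OP"

# The agent reacts to each percept in isolation; no history is kept.
percepts = [15, 20, 26]
actions = [thermostat_agent(t) for t in percepts]
print(actions)  # ['HEAT_ON', 'NO_OP', 'HEAT_OFF']
```

Note that the agent cannot improve: the same percept always produces the same action, which is exactly the limitation the model-based and learning agents address.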

2. Rational Agent: A rational agent is an agent that has clear preferences, models
uncertainty, and acts so as to maximize its performance measure over all possible
actions.

A rational agent is said to do the right thing. AI draws on game theory and decision
theory to build rational agents for various real-world scenarios.

The rationality of an agent is measured by its performance measure. Rationality can
be judged on the basis of the following points:

o The performance measure, which defines the criterion of success.
o The agent's prior knowledge of its environment.
o The actions that the agent can perform.
o The agent's percept sequence to date.
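One common way to make "maximize its performance measure" concrete is expected-utility maximization: given a utility for each outcome and a probabilistic model of what each action leads to, a rational agent picks the action with the highest expected utility. A minimal sketch, with all outcomes, probabilities, and utilities invented for illustration:

```python
# Rational action selection sketch: choose the action maximizing expected
# utility. The routes, probabilities, and utility values are made up.

def expected_utility(action, outcome_model, utility):
    # outcome_model[action] is a list of (outcome, probability) pairs
    return sum(p * utility[o] for o, p in outcome_model[action])

def rational_choice(actions, outcome_model, utility):
    return max(actions, key=lambda a: expected_utility(a, outcome_model, utility))

utility = {"goal": 10, "detour": 2, "crash": -100}
outcome_model = {
    "fast_route": [("goal", 0.7), ("crash", 0.3)],   # EU = 7 - 30 = -23
    "safe_route": [("goal", 0.6), ("detour", 0.4)],  # EU = 6 + 0.8 = 6.8
}
print(rational_choice(["fast_route", "safe_route"], outcome_model, utility))
# -> safe_route
```

This is the same idea the utility-based agent uses: rationality is defined relative to the performance measure, not to the outcome that actually occurs.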

3. Learning Agent:
A learning agent is an intelligent agent that can improve its performance over time by
adapting to its environment through learning. The main components of a learning agent
are:
1) Learning element: The learning element is responsible for updating the agent's
knowledge or model of the environment based on the data it receives. It uses
various learning algorithms to learn from data or feedback.
2) Performance element: The performance element is responsible for selecting
actions that maximize the agent's expected utility or reward. It uses the agent's
current knowledge and objectives to determine which action to take.
3) Critic: The critic evaluates the agent's performance by providing feedback or a
reward signal based on the quality of the agent's action. It helps the agent to
learn from its experiences and improve its decision-making abilities.
4) Problem generator: The problem generator is responsible for generating new
problems or goals for the agent to solve. It helps the agent to explore new areas
of the environment and learn new skills.

Working of a learning agent:

1) The agent observes the environment through sensors or other means and
collects data about the current state of the environment.
2) The learning element processes the data and updates the agent's knowledge or
model of the environment.
3) The performance element selects an action based on the agent's current state
and the available information.
4) The agent receives feedback or a reward signal from the critic based on the
quality of its action.
5) The learning element uses the feedback to update the agent's knowledge or
model of the environment.
6) The problem generator generates new problems or goals for the agent to solve.
7) The agent repeats the process, observing the environment, selecting actions,
receiving feedback, updating its knowledge, and generating new problems.
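The loop above can be sketched as a minimal skeleton in which the four components are stubbed out. The "environment" here is a toy task invented for illustration: the agent must learn to output a hidden target value, and the critic's reward is the negative distance to it.

```python
# Learning-agent loop sketch. The task (matching a hidden target number),
# the learning rate, and all component implementations are illustrative stubs.
import random

class LearningAgent:
    def __init__(self):
        self.knowledge = {"estimate": 0.0}   # updated by the learning element

    def performance_element(self, percept):
        # select an action from current knowledge (here: output the estimate)
        return self.knowledge["estimate"]

    def critic(self, action, target):
        # feedback signal: negative distance between action and target
        return -abs(target - action)

    def learning_element(self, action, reward, target):
        # use the feedback to nudge the estimate toward the target
        self.knowledge["estimate"] += 0.5 * (target - action)

    def problem_generator(self):
        # propose a new goal to keep exploring (unused in this toy task)
        return random.uniform(0, 10)

agent = LearningAgent()
target = 8.0
for step in range(20):                       # observe -> act -> feedback -> learn
    action = agent.performance_element(None)
    reward = agent.critic(action, target)
    agent.learning_element(action, reward, target)
    _ = agent.problem_generator()
print(round(agent.knowledge["estimate"], 2))  # converges toward 8.0
```

Each pass through the loop is one cycle of steps 1-7 above: the performance element acts, the critic scores the action, and the learning element folds the feedback back into the agent's knowledge.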
2. Environments in AI:

1) Fully observable vs partially observable: A fully observable environment is one in
which the agent can access the complete state of the environment at each point in
time. A partially observable environment is one in which the agent can only observe
a limited part of the environment at any given time.
Example: playing chess is a fully observable environment, while playing a card game
in which the opponents' hands are hidden is a partially observable environment.

2) Single agent vs multiagent: A single agent environment is one in which only one
agent is operating. A multiagent environment is one in which multiple agents
operate and interact with the environment and with each other.
Example: solving a crossword puzzle is a single agent environment, while chess and
auctions are multiagent environments.

3) Static vs dynamic: A static environment is one in which the environment does not
change while the agent is deliberating. A dynamic environment is one in which the
environment can change while the agent is deliberating.
Example: playing chess on a fixed board is a static environment, while playing
football on a field with moving players is a dynamic environment.

4) Deterministic vs stochastic: In a deterministic environment, the next state is
entirely determined by the current state of the environment and the agent's action.
In a stochastic environment, the outcome of an action is uncertain or probabilistic.
Example: solving a mathematical equation is a deterministic task, while predicting
the weather is a stochastic one.

5) Episodic vs sequential: An episodic environment is one in which the agent's
experience is divided into atomic episodes, where each episode is self-contained and
has no effect on subsequent episodes. In a sequential environment, the current
decision can affect all future decisions.
Example: classifying images one at a time is an episodic task, while chess and
driving in traffic are sequential, since each move or maneuver constrains what
comes later.

6) Discrete vs continuous: A discrete environment is one in which the states, percepts,
actions, and time steps are finite or countable. A continuous environment is one in
which states, percepts, actions, or time vary continuously.
Example: playing tic-tac-toe is a discrete environment, while driving a car is a
continuous environment.

7) Known vs unknown: In a known environment, the agent (or its designer) knows the
environment's dynamics, that is, the outcomes of its actions. In an unknown
environment, the agent has incomplete or no knowledge of these dynamics and must
learn them.
Example: playing chess with known rules is a known environment, while exploring a
new planet is an unknown environment.

8) Accessible vs inaccessible: In an accessible environment, the agent can obtain
complete and accurate information about the environment's state. In an inaccessible
environment, the agent cannot access certain parts of the environment.
Example: exploring an open field is an accessible environment, while exploring the
deep sea is an inaccessible environment.
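These properties can be recorded as a profile per task, which makes it easy to compare environments along each dimension. A small sketch; the two example tasks (crossword puzzle and taxi driving) and their classifications are illustrative additions, not taken from the text above:

```python
# Environment-property profiles for two illustrative tasks. The
# classifications follow the standard agent-environment taxonomy.

ENV_PROPERTIES = {
    "crossword_puzzle": {
        "observable": "fully", "agents": "single", "deterministic": True,
        "episodic": False, "static": True, "discrete": True, "known": True,
    },
    "taxi_driving": {
        "observable": "partially", "agents": "multi", "deterministic": False,
        "episodic": False, "static": False, "discrete": False, "known": True,
    },
}

def describe(env):
    # render a profile as "property=value" pairs in alphabetical order
    props = ENV_PROPERTIES[env]
    return ", ".join(f"{k}={v}" for k, v in sorted(props.items()))

print(describe("taxi_driving"))
```

Taxi driving lands on the hard end of every dimension (partially observable, multiagent, stochastic, sequential, dynamic, continuous), which is why it is a much harder agent-design problem than a board game.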

3. Problem solving:
Problem solving, particularly in artificial intelligence, may be characterized as a
systematic search through a range of possible actions in order to reach some predefined
goal or solution. Problem-solving methods divide into special-purpose and general-
purpose. A special-purpose method is tailor-made for a particular problem and often
exploits very specific features of the situation in which the problem is embedded. In
contrast, a common general-purpose technique in AI is means-end analysis: a step-by-step,
or incremental, reduction of the difference between the current state and the final goal.
The program selects actions from a list of means - in the case of a simple robot this
might consist of PICKUP, PUTDOWN, MOVEFORWARD, MOVEBACK, MOVELEFT, and
MOVERIGHT - until the goal is reached.
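A toy sketch of means-end analysis for the simple robot described above: at each step the agent picks whichever move most reduces the remaining difference (here, Manhattan distance on a grid) between the current state and the goal. The grid world and the effect of each move are assumptions made for illustration; only the four MOVE actions from the list above are modeled.

```python
# Means-end analysis sketch: repeatedly apply the action that most reduces
# the difference between the current state and the goal state.

MOVES = {
    "MOVEFORWARD": (0, 1), "MOVEBACK": (0, -1),
    "MOVELEFT": (-1, 0), "MOVERIGHT": (1, 0),
}

def difference(state, goal):
    # the "difference" being reduced: Manhattan distance to the goal
    return abs(goal[0] - state[0]) + abs(goal[1] - state[1])

def means_end_analysis(start, goal):
    state, plan = start, []
    while difference(state, goal) > 0:
        # choose the means (action) whose result is closest to the goal
        action = min(MOVES, key=lambda a: difference(
            (state[0] + MOVES[a][0], state[1] + MOVES[a][1]), goal))
        state = (state[0] + MOVES[action][0], state[1] + MOVES[action][1])
        plan.append(action)
    return plan

print(means_end_analysis((0, 0), (2, 1)))
```

Because every move reduces the distance by exactly one, this greedy difference reduction reaches the goal here; in general, means-end analysis also has to handle actions whose preconditions are not yet met, which this sketch omits.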
