Module 2 AI Viva Questions
1. Agent:
"Anything that can be viewed as perceiving its environment through sensors and acting
upon that environment through effectors." [Russell & Norvig, 1995]
Simple Reflex Agents: These agents act on a set of predefined condition-action rules
that map the current percept directly to an action. They keep no internal state and
ignore percept history and the consequences of past actions.
Model-Based Reflex Agents: These agents also use condition-action rules, but they
maintain an internal model of the environment, which lets them track the current state
of the world and take past percepts and actions into account.
Goal-Based Agents: These agents have a goal to achieve and take actions that will
help them achieve that goal. They use a set of rules to determine which actions to take.
Utility-Based Agents: These agents take actions based on the expected utility or
benefit of each action. They consider both the immediate and long-term consequences
of each action.
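The simple reflex case above can be sketched concretely. The classic illustration is the two-square vacuum world; the percept format, rule set, and action names below are illustrative assumptions, not a fixed standard.

```python
# A simple reflex agent for the two-square vacuum world: the action depends
# only on the current percept (location, status), with no memory of the past.

def simple_reflex_agent(percept):
    """Map the current percept directly to an action via condition-action rules."""
    location, status = percept
    if status == "Dirty":      # rule: current square dirty -> clean it
        return "Suck"
    if location == "A":        # rule: square A clean -> move right
        return "Right"
    return "Left"              # rule: square B clean -> move left

print(simple_reflex_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_agent(("A", "Clean")))  # -> Right
```

Because the agent consults only the current percept, two identical percepts always produce the same action, which is exactly why such agents fail when the right action depends on history.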
2. Rational Agent: A rational agent is an agent that has clear preferences, models
uncertainty, and selects from its possible actions the one that maximizes its
performance measure.
A rational agent is said to do the right thing. Building rational agents is a central
goal of AI, and such agents are studied in game theory and decision theory for various
real-world scenarios.
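"Maximizing the performance measure" under uncertainty is usually formalized as picking the action with the highest expected utility. A minimal sketch, with made-up actions and probability tables purely for illustration:

```python
# Rational choice as expected-utility maximization.
# Each action maps to a list of (probability, utility) outcome pairs.

def expected_utility(outcomes):
    """Probability-weighted sum of utilities for one action's outcomes."""
    return sum(p * u for p, u in outcomes)

def rational_choice(action_outcomes):
    """Return the action whose expected utility is maximal."""
    return max(action_outcomes, key=lambda a: expected_utility(action_outcomes[a]))

actions = {
    "safe_move":  [(1.0, 5)],                # certain, modest payoff: EU = 5.0
    "risky_move": [(0.5, 20), (0.5, -15)],   # high variance: EU = 2.5
}
print(rational_choice(actions))  # -> safe_move
```

Note that the rational action is not the one with the best possible outcome but the one with the best outcome on average, given the agent's uncertainty.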
3. Learning Agent:
A learning agent is an intelligent agent that can improve its performance over time by
adapting to its environment through learning. The main components of a learning agent
are:
1) Learning element: The learning element is responsible for updating the agent's
knowledge or model of the environment based on the data it receives. It uses
various learning algorithms to learn from data or feedback.
2) Performance element: The performance element is responsible for selecting
actions that maximize the agent's expected utility or reward. It uses the agent's
current knowledge and objectives to determine which action to take.
3) Critic: The critic evaluates the agent's performance by providing feedback or a
reward signal based on the quality of the agent's action. It helps the agent to
learn from its experiences and improve its decision-making abilities.
4) Problem generator: The problem generator is responsible for generating new
problems or goals for the agent to solve. It helps the agent to explore new areas
of the environment and learn new skills.
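The four components above can be wired together in a minimal skeleton. The action set, reward values, and update rule below are illustrative assumptions chosen to keep the sketch short, not a standard implementation.

```python
# A minimal learning-agent skeleton showing the four components:
# learning element, performance element, critic, and problem generator.
import random

class LearningAgent:
    def __init__(self, actions):
        self.actions = actions
        # the agent's knowledge: an estimated value per action
        self.values = {a: 0.0 for a in actions}

    def performance_element(self):
        """Select the action believed best under current knowledge."""
        return max(self.values, key=self.values.get)

    def critic(self, action, reward):
        """Evaluate an outcome and produce a feedback signal."""
        return reward  # here the raw reward is used directly as feedback

    def learning_element(self, action, feedback, rate=0.5):
        """Update the agent's knowledge from the critic's feedback."""
        self.values[action] += rate * (feedback - self.values[action])

    def problem_generator(self):
        """Propose an exploratory action so new behavior gets tried."""
        return random.choice(self.actions)

agent = LearningAgent(["left", "right"])
feedback = agent.critic("right", reward=1.0)   # critic scores the outcome
agent.learning_element("right", feedback)      # learning element updates knowledge
print(agent.performance_element())             # -> right
```

The flow mirrors the text: the performance element acts, the critic scores the result, the learning element updates the knowledge the performance element relies on, and the problem generator occasionally injects exploration.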
4. Static vs dynamic: A static environment is one in which the environment does not
change while the agent is deliberating. A dynamic environment is one in which the
environment can change while the agent is deliberating.
Example: playing a game of chess on a fixed board is a static environment, while
playing a game of football on a changing field is a dynamic environment.
5. Problem solving:
Problem solving, particularly in artificial intelligence, may be characterized as a
systematic search through a range of possible actions in order to reach some predefined
goal or solution. Problem-solving methods divide into special purpose and general
purpose. A special-purpose method is tailor-made for a particular problem and often
exploits very specific features of the situations in which the problem is embedded. In
contrast, a general-purpose technique used in AI is means-ends analysis: a step-by-step,
or incremental, reduction of the difference between the current state and the final goal.
The program selects actions from a list of means - in the case of a simple robot this
might consist of PICKUP, PUTDOWN, MOVEFORWARD, MOVEBACK, MOVELEFT and
MOVERIGHT - until the goal is reached.
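The incremental difference-reduction idea can be sketched for a robot on a one-dimensional grid. The state model and the restriction to two of the listed moves are simplifying assumptions for illustration.

```python
# Means-ends analysis on a 1-D grid: at each step, pick the move (the "means")
# that most reduces the difference between the current state and the goal.

def means_ends_analysis(start, goal):
    state, plan = start, []
    moves = {"MOVEFORWARD": 1, "MOVEBACK": -1}  # means available to the robot
    while state != goal:
        # choose the move that minimizes the remaining difference to the goal
        best = min(moves, key=lambda m: abs(goal - (state + moves[m])))
        state += moves[best]
        plan.append(best)
    return plan

print(means_ends_analysis(0, 3))
# -> ['MOVEFORWARD', 'MOVEFORWARD', 'MOVEFORWARD']
```

Each iteration is one difference-reduction step; the loop terminates when the current state equals the goal, matching the description of means-ends analysis above.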