Artificial Intelligence Qna
1. Thinking Humanly: This involves mimicking human thought processes. Cognitive science
models how humans think, using psychological experiments, brain imaging, and
introspection.
2. Acting Humanly: Based on the Turing Test, this approach focuses on machines behaving in a
way indistinguishable from humans, requiring capabilities like natural language processing
and reasoning.
3. Thinking Rationally: The "laws of thought" approach, which uses logic and formal rules of inference to model correct reasoning.
4. Acting Rationally: The rational agent approach, where agents act to achieve the best outcome, even under uncertainty, focusing on decision-making and learning.
Strong AI vs. Weak AI

Definition: Strong AI is the concept that machines can genuinely have a mind, consciousness, and self-awareness. Weak AI focuses on building systems that behave intelligently, without the need for consciousness or understanding.
Goal: Strong AI aims to create machines that can actually think and have real intelligence. Weak AI aims to create systems that simulate human behavior for specific tasks.
Examples: Strong AI is theoretical and does not exist in practice yet. Weak AI examples include chatbots, virtual assistants like Siri, and chess-playing computers like Deep Blue.
Autonomy: Strong AI implies full autonomy with the ability to think and act like humans. Weak AI operates within a limited set of predefined tasks.
1. Agent Definition:
o An agent is any system or entity that perceives its environment through sensors and
acts upon that environment using actuators to achieve specific goals.
2. Environment:
o The environment refers to everything outside the agent that can influence the
agent’s decisions or behavior. This includes objects, conditions, and rules that the
agent interacts with.
3. Perception and Action:
o The agent perceives the environment by gathering information through sensors (like
a robot's camera) and acts on the environment through actuators (like robot arms or
wheels).
4. Interaction Cycle:
o The agent repeatedly perceives the environment, decides on an action, and acts; each cycle updates what the agent knows about its surroundings.
5. Environment Types:
o Environments can be classified along dimensions such as fully vs. partially observable and episodic vs. sequential, which affect how the agent must be designed.
o The agent’s goal is to take actions that move it toward achieving its objective based
on the information it perceives from the environment.
7. Feedback Loop:
o The environment responds to the agent’s actions, providing new information, which
the agent uses to update its understanding and choose the next actions.
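The perceive-decide-act feedback loop described above can be sketched as a short program. The one-dimensional "line world" below is an invented toy example, not a standard environment: the agent senses its position through a `percept()` method (its sensor) and changes the world through `execute()` (its actuator).

```python
class LineWorld:
    """Hypothetical 1-D environment: the agent occupies an integer position."""
    def __init__(self, start, goal):
        self.position = start
        self.goal = goal

    def percept(self):
        # Sensors: the agent can observe its own position and the goal location.
        return self.position, self.goal

    def execute(self, action):
        # Actuators: actions change the state of the environment.
        if action == "right":
            self.position += 1
        elif action == "left":
            self.position -= 1

class SimpleAgent:
    def choose_action(self, percept):
        position, goal = percept
        if position < goal:
            return "right"
        if position > goal:
            return "left"
        return "stay"

def run(agent, env, steps):
    """The interaction cycle: perceive, decide, act, repeat (the feedback loop)."""
    for _ in range(steps):
        env.execute(agent.choose_action(env.percept()))

env = LineWorld(start=0, goal=3)
run(SimpleAgent(), env, steps=5)
print(env.position)  # 3: once the goal is reached, extra cycles leave it there
```

Note how the loop itself is generic: any agent exposing `choose_action` and any environment exposing `percept`/`execute` can be plugged in.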
State-of-the-Art Methods in AI
AI has seen tremendous advancements with several state-of-the-art methods applied across various
fields:
1. Robotic Vehicles: AI-powered autonomous vehicles have navigated difficult terrains, such as
STANLEY, a driverless robotic car that finished first in DARPA's Grand Challenge. These
vehicles use sophisticated sensors and AI-driven decision-making.
2. Speech Recognition: Systems like those used by United Airlines handle customer service
tasks through automated voice interaction.
3. Autonomous Planning and Scheduling: NASA's spacecraft control programs like Remote
Agent and MAPGEN autonomously plan and monitor space operations, even detecting and
recovering from issues.
4. Game Playing: AI systems, like IBM's DEEP BLUE, defeated world chess champion Garry
Kasparov, showcasing advanced decision-making and learning in competitive environments.
5. Spam Filtering: AI-powered spam classification systems save users from billions of unwanted
emails daily using learning algorithms to adapt to evolving spamming techniques.
These examples demonstrate AI's application in critical areas, emphasizing learning, decision-making,
and adaptability.
Nature of AI Environments
AI environments can be categorized based on various characteristics that influence how agents
operate:
1. Observability:
o In fully observable environments, agents can access the complete state at each
moment.
o In partially observable environments, only part of the state is available due to sensor
limitations or noise.
2. Episodic vs. Sequential:
o In episodic tasks, agents' actions are divided into discrete episodes with no long-term effect on future actions.
o In sequential tasks, each action impacts future events, requiring agents to plan
several steps ahead.
5. Types of Agents
There are five basic types of agents in AI:
1. Simple Reflex Agents:
o Use condition-action rules (e.g., "if dirty, then suck" in a vacuum cleaner robot).
2. Model-Based Reflex Agents:
o They maintain an internal state to track parts of the environment that are not
directly observable.
o Use a model of how the world works and update the state based on their actions
and observations.
3. Goal-Based Agents:
o They evaluate various possible actions to determine which action brings them
closest to the goal.
o They require planning and search algorithms to choose the right action.
4. Utility-Based Agents:
o They use a utility function to evaluate different actions and decide based on
maximizing expected utility.
o These agents work well in environments where multiple actions may achieve the
same goal, but some actions are more preferable.
5. Learning Agents:
o Learning agents can modify their own behavior based on feedback from the
environment.
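The condition-action rule quoted above ("if dirty, then suck") can be written directly as a simple reflex agent. The two-square world driving it below is a toy assumption used only to exercise the rule.

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent: the decision depends only on the current percept."""
    location, dirty = percept
    if dirty:
        return "Suck"                       # condition-action rule: if dirty, then suck
    return "Right" if location == "A" else "Left"

# A minimal two-square environment (squares A and B, both initially dirty).
dirt = {"A": True, "B": True}
location = "A"
actions = []
for _ in range(4):
    action = reflex_vacuum_agent((location, dirt[location]))
    actions.append(action)
    if action == "Suck":
        dirt[location] = False
    elif action == "Right":
        location = "B"
    elif action == "Left":
        location = "A"

print(actions)  # ['Suck', 'Right', 'Suck', 'Left']
print(dirt)     # {'A': False, 'B': False}
```

Because the agent ignores percept history, it keeps shuttling between A and B even after both squares are clean; a model-based agent would track that in its internal state and stop.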
6. Four Phases of the Problem-Solving Process
1. Goal Formulation:
o The agent defines what its goal is based on its current situation and performance
measure. This is the first step in solving any problem.
2. Problem Formulation:
o The agent decides what actions and states to consider in order to reach the goal. It
must understand the possible actions available and their consequences.
3. Search:
o The agent searches for an action sequence that will lead to the goal. This is done by
exploring the state space to find the best path.
4. Execution:
o After finding a solution, the agent executes the action sequence to achieve the goal.
The agent might update its plan based on new information and feedback during this
phase.
7. Components of a Problem
1. Initial State: The state in which the agent starts. For example, if an agent is tasked with
navigating a map, the initial state could be the starting location on that map.
2. Actions: These are the possible actions available to the agent from any given state. An action
leads the agent to a new state. For instance, the action might be to move from one city to
another on a map.
3. Transition Model: Describes what happens when an action is performed, mapping a state
and action to a new state. This model helps the agent understand the result of its actions.
4. Goal Test: This determines whether a particular state is a goal state. The agent checks if the
current state satisfies the goal condition.
5. Path Cost: The function that assigns a numeric cost to each path, representing the cost of
getting from one state to another. It helps the agent optimize its actions by considering both
the path and the goal
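The five components above map directly onto a search-problem definition in code. The road map below is a small made-up example, and breadth-first search stands in for the Search phase; the dictionary encodes both the transition model (which cities an action reaches) and the step costs used by the path-cost function.

```python
from collections import deque

# Hypothetical map: from each state, each available action moves to a
# neighbouring city (transition model) with a numeric step cost.
ROADS = {
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"D": 8},
    "D": {},
}

def breadth_first_search(initial, goal):
    """Return the first path found from the initial state to the goal, with its path cost."""
    frontier = deque([[initial]])
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:                       # goal test
            cost = sum(ROADS[a][b] for a, b in zip(path, path[1:]))  # path cost
            return path, cost
        for action in ROADS[state]:             # actions available in this state
            frontier.append(path + [action])    # transition to the resulting state
    return None, float("inf")

path, cost = breadth_first_search("A", "D")
print(path, cost)  # ['A', 'B', 'D'] 9
```

Breadth-first search finds the path with the fewest steps, not necessarily the cheapest one; cost-sensitive algorithms such as uniform-cost search would use the path-cost function to guide the exploration instead.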
8. Domains of AI
AI encompasses a wide variety of subfields and application areas, including:
1. Learning and Perception: This involves creating systems that can learn from data and
perceive their environment through sensors or other means.
2. Natural Language Processing (NLP): NLP enables machines to understand, interpret, and
generate human languages. Examples include chatbots and translation systems.
3. Robotics: Robotics involves creating machines that can perform tasks autonomously or semi-
autonomously, including navigation, manipulation, and interaction with the environment.
4. Expert Systems: These are systems that use knowledge-based reasoning to solve specific
problems, often in fields like medicine or finance.
5. Speech Recognition: AI systems that can recognize and interpret spoken language, such as
virtual assistants like Siri and Alexa.
6. Machine Vision: Systems that allow machines to interpret and understand visual inputs,
enabling tasks such as object recognition and autonomous driving.
9. Rational Agents vs. Non-Rational Agents

Goal Orientation: A rational agent aims to achieve the best possible outcome or expected utility given the current state and environment. A non-rational agent may not have a goal or does not optimize for achieving any specific outcome.
Learning Capability: A rational agent can learn and adapt its behavior based on past experiences or feedback. A non-rational agent typically follows rigid or fixed patterns with no learning capability.
10. Types of Agents in Detail
1. Simple Reflex Agents: These agents select actions based solely on the current percept,
ignoring the rest of the percept history. They are limited to fully observable environments
because they make decisions using condition-action rules triggered by the current percepts.
2. Model-Based Reflex Agents: These agents use an internal model of the world to keep track
of parts of the environment that are not observable in the current state. This internal state
allows the agent to handle partially observable environments.
3. Goal-Based Agents: These agents go beyond just considering the current state of the
environment by aiming to achieve certain goals. The agent chooses actions that are expected
to lead towards the goal. The notion of a goal adds a layer of complexity and long-term
decision-making ability to the agent.
4. Utility-Based Agents: These agents aim to maximize their own expected "utility," a measure
of satisfaction. The agent chooses actions that optimize this utility function, enabling it to
handle trade-offs between competing goals or objectives.
5. Learning Agents: These agents improve their performance over time by learning from their
experiences. They consist of four main components: the learning element, performance
element, critic, and problem generator. Learning agents adjust their actions to maximize
performance based on feedback from the environment.
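The utility-based agent from point 4 can be sketched as choosing the action with the highest expected utility. The outcome probabilities and utility values below are invented for illustration; they model a trade-off between a fast but risky route and a slow but safe one.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    """Pick the action that maximizes expected utility (the rational choice)."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

# Two hypothetical routes to the same goal: the utility function resolves the trade-off.
actions = {
    "fast_route": [(0.7, 100), (0.3, -50)],  # expected utility: 0.7*100 + 0.3*(-50) = 55
    "safe_route": [(1.0, 60)],               # expected utility: 60
}
print(choose_action(actions))  # safe_route
```

This is exactly the situation described above: both routes achieve the same goal, but the utility function makes one of them preferable.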
11. History of AI
The history of artificial intelligence (AI) is divided into several key periods:
1. Early Beginnings (1943-1955): The idea of AI started when scientists like Warren McCulloch
and Walter Pitts created a model of artificial neurons, the building blocks of neural networks.
Donald Hebb also developed ideas about how neurons learn, which still influences AI today.
2. The Birth of AI (1956): AI officially started at a famous event called the Dartmouth
Conference, organized by John McCarthy. Key figures like Marvin Minsky and Claude
Shannon attended. This is where the term "Artificial Intelligence" was first used, and early AI
programs like the Logic Theorist were shown.
3. Excitement and High Hopes (1952-1969): Early AI programs, like the General Problem Solver,
gave people big hopes about AI’s potential. These programs were able to solve problems that
seemed to mimic human thinking.
4. Challenges (1966-1973): AI ran into difficulties when applied to more complex problems, and
excitement faded. Progress slowed down, leading to a period known as the "AI Winter,"
when funding and interest in AI decreased.
5. The Comeback (1986-present): AI made a strong return with the rediscovery of neural
networks, especially using a method called backpropagation. This led to advances in things
like speech recognition and computer vision, which are crucial in modern AI.