
ARTIFICIAL INTELLIGENCE

Unit 1 – Introduction to Artificial Intelligence


Course Curriculum Development
Content prepared by
Dheepan Raj, B.E.
Software Developer & Trainer
Specialized technologies
• Python full stack web development
• AI/ML
• Data science & Analytics
• DSA & Problem solving
OBJECTIVES:
• To understand the various characteristics of Intelligent agents
• To learn the different search strategies in AI
• To learn to represent knowledge in solving AI problems
• To understand the different ways of designing software agents and the
various applications of AI
OUTCOMES:
Upon completion of the course, the students should be able to
1. Infer agent characteristics and their problem-solving approaches. (K2)
2. Select appropriate search algorithms for any AI problem. (K1)
3. Apply the principles of AI in game playing. (K3)
4. Construct and solve a problem using first-order and predicate logic. (K3)
5. Identify the methods of solving problems using planning and learning. (K3)
6. Implement applications for Natural Language Processing that use Artificial Intelligence. (K3)
Unit 1 – Introduction to Artificial Intelligence
1.1 Introduction
1.2 Definition
1.3 Future of Artificial Intelligence
1.4 Characteristics of Intelligent Agents
1.5 Typical Intelligent Agents
1.6 Problem Solving Approach to Typical AI Problems
1.7 Search Strategies
1.8 Uninformed search
1.9 Informed search
1.10 Heuristics
1.1 Introduction to Artificial Intelligence
• Artificial Intelligence (AI) refers to the field of computer science that
aims to create machines and systems that can perform tasks that
would typically require human intelligence.
• These tasks include learning, reasoning, problem-solving, perception,
language understanding, and decision-making.
• The goal of AI is to develop systems that can simulate or enhance
human cognitive abilities to execute complex tasks autonomously or
with minimal human intervention.
1.2 Definition of Artificial Intelligence
Artificial Intelligence can be defined as:
• AI is the simulation of human intelligence in machines programmed
to think and learn like humans.
• It involves creating algorithms and models that allow machines to
understand, reason, adapt, and perform tasks typically associated
with human intelligence.
• AI can be divided into two categories: Narrow AI (designed for a
specific task, like voice assistants) and General AI (hypothetical
systems that possess general cognitive abilities).
Artificial Intelligence Systems
AI is about teaching computers to do things that humans currently do
better.
“Artificial Intelligence is the ability of a computer to act like a human
being”.
• Systems that think like humans
• Systems that act like humans
• Systems that think rationally (logically).
• Systems that act rationally (logically).
Key words
• Intelligence - Ability to learn, understand, think and apply knowledge in
order to perform better in an environment.
• Artificial Intelligence - Study and construction of agent programs that
perform well in a given environment, for a given agent architecture.
• Agent - An entity that takes action in response to percepts from an
environment.
• Rationality - property of a system which does the “right thing” given what
it knows.
• Logical Reasoning - Logical reasoning is a way of thinking that uses logic
and common sense to solve problems. It involves using information from a
given context to make connections and draw conclusions.
Four Approaches of Artificial Intelligence:
• Thinking humanly: The cognitive modelling approach.
• Acting humanly: The Turing test approach.
• Thinking rationally: The laws of thought approach.
• Acting rationally: The rational agent approach.
Think, Human-like: Cognitive Science Approach: "Machines that think like humans"
Think, Rational: Laws of Thought Approach: "Machines that think rationally"
Act, Human-like: Turing Test Approach: "Machines that behave like humans"
Act, Rational: Rational Agent Approach: "Machines that behave rationally"
Thinking Humanly: Cognitive Modelling

• This method tries to copy how humans think and understand things. It aims to
understand how humans think and solve problems. i.e. Cognitive science.
• “Cognitive science/modelling” refers to the scientific study of how humans think
and reason.
• Example; Cognitive psychology-inspired AI systems, such as virtual assistants like
Siri or Alexa, try to act like humans. They listen to what people say, try to
understand what they mean, and respond like a person would in a conversation.
• This approach is less concerned with whether a computer program solves a problem
correctly; it is more interested in how the program's steps compare to how a human
would solve the same problem.
• https://www.youtube.com/watch?v=mDntbGRPeEU&ab_channel=drrobertepstein
Acting Humanly: The Turing Test
• This approach aims to create AI systems
that perform tasks in a manner very
similar to human behavior. It focuses on
achieving human-like performance in
various tasks.
• The Turing Test, proposed by Alan Turing
(1950), was designed to provide a
satisfactory operational definition of
intelligence. A computer passes the test
if a human interrogator, after posing
some written questions, cannot tell
whether the written responses come
from a person or from a computer.
Example: Turing Test
1. Three rooms contain a person, a computer, and an interrogator.
2. The interrogator can communicate with the other two only by typed text
(so that the machine does not have to imitate the appearance or voice of the person).
3. The interrogator tries to determine which is the person and which is the
machine.
4. The machine tries to fool the interrogator into believing that it is the
human, and the person also tries to convince the interrogator that he or
she is the human.
5. If the machine succeeds in fooling the interrogator, we conclude
that the machine is intelligent and has passed the test.
Thinking Rationally: Laws of Thought
• This approach emphasizes designing AI systems that follow
principles of logic and rational (logical) decision-making, regardless of
whether they mirror human thought processes.
• In simple words, if your thoughts are based on facts and not
emotions, it is called rational thinking.
• Example: Expert systems, such as medical diagnosis systems, rely on
rules and logical inference to make decisions. These systems use
knowledge bases of medical expertise and logical reasoning to
diagnose diseases based on symptoms reported by patients.
Acting Rationally: Rational Agent
• This approach focuses on creating AI systems that make decisions to
achieve the best outcome, irrespective of whether the
decision-making process resembles human thinking.
• Acting rationally is more related to scientific development than
human-based approaches.
• Example: Autonomous vehicles, like self-driving cars, operate based
on the principle of acting rationally. They analyze sensor data in
real-time, process it using algorithms to detect obstacles, and make
decisions to navigate safely to their destination, optimizing factors
like speed, distance and safety.
1.3 Future of Artificial Intelligence
• Healthcare: AI will revolutionize healthcare by enabling early diagnosis, personalized
treatment plans, and advanced medical research. AI-powered tools can analyze medical
images, predict disease outbreaks, and assist in drug discovery.
• Autonomous Vehicles: Self-driving cars and drones will become more prevalent,
improving transportation efficiency and safety. AI will enable vehicles to navigate complex
environments, reduce traffic congestion, and lower accident rates.
• Education: AI will transform education by providing personalized learning experiences,
intelligent tutoring systems, and automated grading. AI-driven tools can adapt to
individual learning styles, identify areas for improvement, and offer tailored resources.
• Finance: AI will enhance financial services by automating trading, fraud detection, and risk
management. AI algorithms can analyze market trends, predict economic shifts, and
optimize investment strategies.
• Customer Service: AI-powered chatbots and virtual assistants will improve customer
service by providing instant, accurate responses and handling routine inquiries. This will
free up human agents to focus on more complex tasks.
• Manufacturing: AI will optimize manufacturing processes through predictive maintenance,
quality control, and supply chain management. AI systems can monitor equipment, detect
anomalies, and streamline production.
• Environmental Sustainability: AI will contribute to environmental sustainability by optimizing
energy usage, monitoring natural resources, and predicting climate change impacts. AI tools can
help in conservation efforts and promote sustainable practices.
• Entertainment and Media: AI will revolutionize the entertainment industry by creating realistic
virtual environments, enhancing content recommendation systems, and generating new forms of
interactive media.
• Ethical and Societal Impact: As AI becomes more integrated into daily life, ethical considerations
will gain importance. Issues such as data privacy, bias in AI algorithms, and job displacement will
need to be addressed through policy and regulation.
• AI and Human Collaboration: The future will see increased collaboration between humans and
AI. AI will augment human capabilities, assist in decision-making, and provide valuable insights
across various fields.
1.4 Characteristics of Intelligent Agents
Intelligent agents are systems that perceive their environment and take
actions to maximize their chances of success.
The key characteristics include:
• Autonomy
• Reactivity
• Proactiveness
• Social ability
• Mobility
• Rationality
• Learning
• Cooperation
• Coordination
CHARACTERISTICS OF INTELLIGENT AGENTS
● Autonomy is the most important property of an IA and is defined as the
ability of an agent to make decisions and control its actions and internal
states without direct intervention from other entities (human or machine).
In other words, an IA is independent and makes its own decisions.
● Reactivity refers to the ability of an agent to perceive and react to
environmental changes in order to achieve the goal(s).
● Proactivity is the ability of an agent to take initiative, plan and perform
the required actions to achieve its goal(s).
● Social Ability enables agents to communicate and interact with each
other and other entities in the environment. This interaction can be in the
form of coordination, cooperation, negotiation, and even competition.
● Mobility is the agent’s ability to move from its origin to other machines across a
network and perform design objectives locally on remote hosts. Mobile agents can
increase the processing speeds of the system as a whole and reduce network traffic and
communication costs.

● Rationality is the ability of an agent to make decisions dynamically based on
the state of the environment. This notion of rationality forms the basis of the
Beliefs, Desires, and Intentions (BDI) model for software agents.
● Learning is the ability of an agent to learn from interactions and changes in the
environment through experience in order to improve its performance over time. With a
learning ability, an agent is able to add and improve its features dynamically.
● Cooperation is establishing a voluntary relationship with another agent: the
two agents adopt mutual goals and form a combined team.
● Coordination is the ability to manage the interdependencies between humans or other
agents and form a team with them. Depending on the application and purpose of where
and how agents are used, these properties can be desirable or undesirable.
1.5 Agents and their types
An agent is anything that perceives its environment through sensors and
acts upon that environment through actuators. An agent runs in a cycle of
perceiving, thinking, and acting. An agent can be:
○ Human-Agent: A human agent has eyes, ears, and other organs which
work for sensors and hand, legs, vocal tract work for actuators.
○ Robotic Agent: A robotic agent can have cameras and infrared range
finders for sensors and various motors for actuators.
○ Software Agent: Software agent can have keystrokes, file contents as
sensory input and act on those inputs and display output on the screen.
Sensors, Actuators, Effectors
• Sensor: Sensor is a device which detects the change in the
environment and sends the information to other electronic devices.
An agent observes its environment through sensors.
• Actuators: Actuators are the components of machines that convert
energy into motion. The actuators are responsible for moving
and controlling a system. An actuator can be an electric motor, gears,
rails, etc.
• Effectors: Effectors are the devices which affect the environment.
Effectors can be legs, wheels, arms, fingers, wings, fins, and display
screen.
Rules of AI agent
○ Rule 1: An AI agent must have the ability to perceive the
environment.
○ Rule 2: The observation must be used to make decisions.
○ Rule 3: Decision should result in an action.
○ Rule 4: The action taken by an AI agent must be a rational action.
Environment
• An environment is everything in the world which surrounds the agent, but it is not a part
of an agent itself. An environment can be described as a situation in which an agent is
present.
• The environment is where the agent lives and operates, and it provides the
agent with something to sense and act upon.
• Fully observable vs Partially Observable:
• If an agent's sensors can sense or access the complete state of the environment
at each point in time, it is a fully observable environment; otherwise it is partially observable.
• A fully observable environment is easy to handle because the agent need not maintain
an internal state to keep track of the history of the world.
• If an agent has no sensors at all, the environment is unobservable.
• Example: chess – the board is fully observable, as are opponent’s moves. Driving – what is
around the next bend is not observable and hence partially observable.
PEAS: Performance Measure, Environment,
Actuators, Sensors
• Performance: The output we get from the agent. All the
results that an agent gives after processing come under its
performance.
• Environment: All the surrounding things and conditions of an agent
fall in this section. It basically consists of all the things under which
the agents work.
• Actuators: The devices, hardware or software through which the
agent performs any actions or processes any information to produce
a result are the actuators of the agent.
• Sensors: The devices through which the agent observes and perceives
its environment are the sensors of the agent.
Rational Agent
• Rational Agent - A system is rational if it does the "right thing", given
what it knows.
• Characteristics of a Rational Agent:
• The agent's prior knowledge of the environment.
• The performance measure that defines the criterion of success.
• The actions that the agent can perform.
• The agent's percept sequence to date.
• For every possible percept sequence, a rational agent should select
an action that is expected to maximize its performance measure,
given the evidence provided by the percept sequence and whatever
built-in knowledge the agent has.
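The selection rule above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the slides: the action names and the `expected_performance` scoring function are hypothetical stand-ins for the agent's built-in knowledge and evidence.

```python
# A minimal sketch of rational action selection: the agent picks the
# action with the highest expected performance measure, given what it
# knows. `expected_performance` is an assumed scoring function.

def rational_action(actions, expected_performance):
    """Return the action that maximizes the expected performance measure."""
    return max(actions, key=expected_performance)

# Hypothetical example: a thermostat-like agent scoring three actions.
scores = {"heat": 0.9, "cool": 0.1, "off": 0.5}
print(rational_action(list(scores), scores.get))  # heat
```

In a real agent the scores would be computed from the percept sequence; here they are fixed numbers purely for illustration.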
The Structure of Intelligent Agents
• Agent = Architecture + Agent Program
• Architecture = the machinery that an agent executes on. (Hardware)
• Agent Program = an implementation of an agent function. (Algorithm,
Logic – Software)
Types of agents
Agents can be grouped into classes based on their degree of perceived intelligence and capability.
All of these agents can improve their performance and generate better actions over time.

○ Simple reflex agents
○ Model-based reflex agents
○ Goal-based agents
○ Utility-based agents
○ Learning agents
○ Multi-agent systems
○ Hierarchical agents
Simple reflex agent
○ Simple reflex agents are the simplest agents. They make decisions on the basis of the current
percept and ignore the rest of the percept history (past states).
○ A simple reflex agent does not consider any part of the percept history during its decision and action
process.
○ They have very limited intelligence.
○ They have no knowledge of the non-perceptual parts of the current state.
○ Simple reflex agents work only in a fully observable environment.
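A simple reflex agent for the classic two-square vacuum world can be sketched as condition-action rules. The percept encoding below is an assumption for illustration; the point is that the choice of action depends only on the current percept, never the percept history.

```python
# A simple reflex agent for the two-square vacuum world: the action is
# chosen from the current percept (location, status) alone, matching the
# rules "if dirty then suck; otherwise move to the other square".

def simple_reflex_vacuum_agent(percept):
    location, status = percept       # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"                # rule 1: dirty square -> clean it
    if location == "A":
        return "Right"               # rule 2: A is clean -> move right
    return "Left"                    # rule 3: B is clean -> move left

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # Left
```

Because the rules look only at the current percept, the agent succeeds only when the environment is fully observable, as the slide notes.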
Model-based reflex agent
The Model-based agent can work in a partially
observable environment, and track the situation.
●Model: It is knowledge about "how things
happen in the world," so it is called a
Model-based agent.
●Internal State: It is a representation of the
current state based on percept history.
Goal based agents
○ Knowledge of the current state of the environment is not always sufficient for an
agent to decide what to do.
○ The agent needs to know its goal, which describes desirable situations.
○ Goal-based agents expand the capabilities of the model-based agent by adding
the "goal" information.
○ They consider possible actions before deciding whether the goal is achieved or not. Such
consideration of different scenarios is called searching and planning, which
makes an agent proactive.
Utility based agents
○ These agents are similar to goal-based agents but add an extra
component of utility measurement (a "level of happiness"), which sets
them apart by providing a measure of success in a given state.
○ Utility-based agents act based not only on goals but also on the best way to
achieve them.
Learning agents
○ A learning agent in AI is a type of agent which can learn from its past experiences; that is, it has learning capabilities.
○ It starts acting with basic knowledge and is then able to act and adapt automatically through learning.
○ A learning agent has four main conceptual components:
a. Learning element: It is responsible for making improvements by learning from the environment.
b. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a
fixed performance standard.
c. Performance element: It is responsible for selecting external actions.
d. Problem generator: This component is responsible for suggesting actions that will lead to new and informative
experiences.
Multi-agent systems
• A multi-agent system consists of multiple decision-making agents
which interact in a shared environment to achieve common or
conflicting goals.
Hierarchical agents
● Hierarchical agents are organized into levels: higher-level agents set
broader goals and delegate subtasks to lower-level agents, which carry
out specific actions. This structure is useful for decomposing complex tasks.
1.5 Typical Examples of Intelligent Agents
• Some typical examples of intelligent agents include:
1. Autonomous Vehicles: Self-driving cars that perceive their surroundings
and make driving decisions without human input.
2. Personal Assistants: AI-based assistants like Siri, Alexa, or Google Assistant
that interact with users and help manage tasks.
3. Robots: Robots in manufacturing or medical surgeries that perform specific
tasks autonomously.
4. Chatbots: AI-based software agents that communicate with humans to
solve problems or provide services.
5. Recommendation Systems: Systems like Netflix or Amazon that suggest
products or content based on user preferences.
AI Problem & Techniques
The following categories of problems can be solved using AI and are
considered AI problems.
Ordinary Problems
1.Perception
➢ Vision
➢ Voice Recognition
➢ Speech Recognition
2.Natural Language
➢ Understanding
➢ Generation
➢ Translation
3.Robot Control
Formal Problems
➢ Game Playing
➢ Solving complex mathematical Problem
Expert Problems
➢ Design
➢ Fault Finding
➢ Scientific Analysis
➢ Medical Diagnosis
➢ Financial Analysis
There are three important AI techniques:
Search — Provides a way of solving problems for which no direct approach is
available. It also provides a framework into which any direct techniques that are
available can be embedded.
Use of knowledge — Provides a way of solving complex problems by exploiting
the structure of the objects that are involved.
Abstraction — Provides a way of separating important features and variations
from many unimportant ones that would otherwise overwhelm any process.
1.6 Steps to Solve a Problem Using AI
1. Defining the problem: The problem must be defined precisely, including the possible initial and final
situations that would constitute an acceptable solution.
2. Analyzing the problem: The problem and its requirements must be analyzed, since a few features can have an
immense impact on the resulting solution.
3. Identifying solutions: This phase generates a reasonable number of candidate solutions to the given problem in a
particular range.
4. Choosing a solution: From all the identified solutions, the best one is chosen based on the results
produced by the respective solutions.
5. Implementation: After the best solution is chosen, it is implemented.
Example: Maze-solving problem
• In a maze-solving problem:
• Initial State: Start position in the maze.
• Goal State: Exit of the maze.
• Actions: Moving up, down, left, or right.
• The algorithm searches for the best path through the maze, evaluates
possible routes, and selects the most efficient one to reach the exit.
• By following this structured approach, AI systems can solve complex
problems, like navigation, scheduling, and game playing, effectively.
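The maze formulation above can be sketched with a breadth-first search. The small maze below is a made-up illustration ('S' start, 'G' goal, '#' wall); since every step costs 1, the first path BFS finds is also a shortest one.

```python
from collections import deque

# Breadth-first search for the maze example: states are (row, col) cells
# and actions are the four moves Up, Down, Left, Right.

MAZE = ["S.#",
        ".##",
        "..G"]

def solve_maze(maze):
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if maze[r][c] == "S")
    frontier = deque([(start, [start])])   # (cell, path so far)
    visited = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if maze[r][c] == "G":              # goal test: reached the exit
            return path
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # up/down/left/right
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != "#" and (nr, nc) not in visited):
                visited.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None                            # no path exists

print(solve_maze(MAZE))  # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```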
AI Problem as State Space Search
➢ The set of all possible states for a given problem is known as the state space of the problem.
➢ Representation of states is highly beneficial in AI because it provides all possible states, the operations, and
the goals.
➢ If the entire set of possible states is given, it is possible to trace the path from the initial state to the goal
state and identify the sequence of operators necessary for doing so.
➢ Representation allows for a formal definition of a problem, using a set of permissible operations, as the need
to convert some given situation into some desired situation.
➢ We are free to define the process of solving a particular problem as a combination of known techniques,
each represented as a rule defining a single step in the space, and search, the general technique of
exploring the space to try to find some path from the current state to a goal state.
Search and Control strategies
Problem-solving agents:
• In Artificial Intelligence, Search techniques are universal problem-solving methods.
• Rational agents, or problem-solving agents, in AI mostly use these search strategies or
algorithms to solve a specific problem and provide the best result.
• Problem-solving agents are goal-based agents and use an atomic representation.
• Some of the problems most popularly solved with the help of artificial intelligence
are:
o Chess problem.
o Travelling Salesman Problem.
o Tower of Hanoi Problem.
o Water-Jug Problem.
o N-Queen Problem.
o Vacuum world
o 8 – Puzzle Problem
Problem Searching
• In general, searching refers to finding the information one needs.
• Searching is the most commonly used technique for problem solving in
artificial intelligence.
• A searching algorithm helps us to search for the solution to a particular
problem.
• Problem: Problems are the issues which come across any system. A
solution is needed to solve a particular problem.
Measuring problem-solving performance
• We can evaluate an algorithm's performance in four ways:
• Completeness: Is the algorithm guaranteed to find a solution when there is one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the search?
Search Algorithm Terminologies
Search: Searching is a step by step procedure to solve a
search-problem in a given search space. A search problem can have
three main factors:
1. Search Space: Search space represents a set of possible solutions,
which a system may have.
2. Start State: It is the state from which the agent begins the search.
3. Goal test: It is a function which observes the current state and
returns whether the goal state has been achieved or not.
Search Algorithm Terminologies
• Search tree: A tree representation of a search problem is called a search
tree. The root of the search tree is the root node, which corresponds
to the initial state.
• Actions: It gives the description of all the actions available to the agent.
• Transition model: A description of what each action does; it can be
represented as a transition model.
• Path cost: It is a function which assigns a numeric cost to each path.
• Solution: It is an action sequence which leads from the start node to the
goal node.
• Optimal solution: A solution that has the lowest cost among all solutions.
Example Problems
• A toy problem is intended to illustrate or exercise various
problem-solving methods. A real-world problem is one whose
solutions people actually care about.
• Toy Problems
• Vacuum world problem
• 8- Puzzle Problem
• Queens Problem
• Water jug problem
• Tower of Hanoi
Vacuum world problem
• States: The state is determined by both the agent location and the dirt
locations. The agent is in one of 2 locations, each of which might or might
not contain dirt. Thus there are 2 × 2² = 8 possible world states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left,
Right, and Suck. Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving
Left in the leftmost square, moving Right in the rightmost square, and Sucking
in a clean square have no effect. The complete state space is shown in the figure.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
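This formulation translates almost directly into code. The sketch below (an illustration, not code from the slides) enumerates the 2 × 2² = 8 states and implements the transition model and goal test as described; a state is (agent location, dirt in A?, dirt in B?).

```python
from itertools import product

# Vacuum world as formulated above: 2 locations x 2^2 dirt configurations
# gives 8 states. The transition model mirrors the text: Left/Right at the
# edge squares and Suck in a clean square have no effect.

STATES = list(product("AB", [True, False], [True, False]))

def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)      # moving Left in A: no effect
    if action == "Right":
        return ("B", dirt_a, dirt_b)      # moving Right in B: no effect
    if action == "Suck":
        if loc == "A":
            return (loc, False, dirt_b)   # sucking a clean square: no effect
        return (loc, dirt_a, False)
    raise ValueError(action)

def goal_test(state):
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b      # all squares clean

print(len(STATES))                        # 8
print(result(("A", True, True), "Suck"))  # ('A', False, True)
```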
8-Puzzle problem
• States: A state description specifies the location of each of the eight tiles and
the blank in one of the nine squares.
• Initial state: Any state can be designated as the initial state. Note that any given
goal can be reached from exactly half of the possible initial states.
• Actions: The simplest formulation defines the actions as movements of the blank space
Left, Right, Up, or Down. Different subsets of these are possible depending on
where the blank is.
• Transition model: Given a state and action, this returns the resulting state; for
example, if we apply Left to the start state in Figure 3.4, the resulting state has
the 5 and the blank switched.
• Goal test: This checks whether the state matches the goal configuration shown
in Figure.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
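A sketch of this formulation in code, assuming the standard textbook start state (7 2 4 / 5 0 6 / 8 3 1, blank shown as 0): a state is a tuple of nine entries read row by row, and applying Left does indeed swap the 5 and the blank, as described above.

```python
# 8-puzzle actions and transition model. An action names the direction
# the blank moves; it is legal only when the blank stays on the board.

MOVES = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}  # index offsets

def actions(state):
    i = state.index(0)                   # blank position 0..8
    acts = []
    if i % 3 > 0: acts.append("Left")
    if i % 3 < 2: acts.append("Right")
    if i // 3 > 0: acts.append("Up")
    if i // 3 < 2: acts.append("Down")
    return acts

def result(state, action):
    i = state.index(0)
    j = i + MOVES[action]                # tile that slides into the blank
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)      # blank in the center: 4 actions
print(actions(start))
print(result(start, "Left"))             # (7, 2, 4, 0, 5, 6, 8, 3, 1)
```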
8-Queens problem
• States: Any arrangement of 0 to 8 queens on the board is a state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the
specified square.
• Goal test: 8 queens are on the board, none attacked.
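This incremental formulation can be sketched as a small backtracking search: a state is a tuple of row indices (one per filled column), each action adds a queen to the next column, and the goal test is 8 queens with none attacked. The code is an illustrative sketch, not from the slides.

```python
# Incremental 8-queens: add one queen per column, keeping only
# non-attacking placements (same row or same diagonal = attack).

def attacks(rows, row):
    col = len(rows)                       # column the new queen would fill
    return any(r == row or abs(r - row) == abs(c - col)
               for c, r in enumerate(rows))

def place_queens(n=8, rows=()):
    if len(rows) == n:
        return rows                       # goal: n queens, none attacked
    for row in range(n):
        if not attacks(rows, row):
            solution = place_queens(n, rows + (row,))
            if solution:
                return solution
    return None                           # dead end: backtrack

print(place_queens())  # one of the 92 solutions, e.g. (0, 4, 7, 5, 2, 6, 1, 3)
```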
Water jug problem
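The slide figures for this problem are not reproduced here, so the sketch below assumes the common textbook formulation: a 4-gallon jug and a 3-gallon jug, neither with markings, and the goal of measuring exactly 2 gallons in the 4-gallon jug. A state is (x, y), the gallons in each jug, and breadth-first search over fill/empty/pour actions finds a shortest plan.

```python
from collections import deque

# Water jug problem as state-space search: states (x, y), six actions
# (fill either jug, empty either jug, pour one into the other).

def water_jug(cap_x=4, cap_y=3, goal=2):
    start = (0, 0)
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if x == goal:                     # goal test: 2 gallons in big jug
            return path
        successors = [
            (cap_x, y), (x, cap_y),                          # fill a jug
            (0, y), (x, 0),                                  # empty a jug
            (x - min(x, cap_y - y), y + min(x, cap_y - y)),  # pour x -> y
            (x + min(y, cap_x - x), y - min(y, cap_x - x)),  # pour y -> x
        ]
        for s in successors:
            if s not in visited:
                visited.add(s)
                frontier.append((s, path + [s]))
    return None

print(water_jug())
# [(0, 0), (4, 0), (1, 3), (1, 0), (0, 1), (4, 1), (2, 3)]
```

The plan found reads: fill the 4-gallon jug, pour it into the 3-gallon jug, empty the small jug, pour the remaining gallon into it, refill the big jug, and top up the small jug, leaving 2 gallons behind.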
Problem solving by search
• An important aspect of intelligence is goal-based problem solving.
• The solution of many problems can be described by finding a
sequence of actions that lead to a desirable goal. Each action
changes the state and the aim is to find the sequence of actions and
states that lead from the initial (start) state to a final (goal) state.
Problem solving by search
• A well-defined problem can be described by:
• Initial state
• Operator or successor function: for any state x, returns s(x), the set
of states reachable from x with one action
• State space: all states reachable from the initial state by any sequence of
actions
• Path: a sequence through the state space
• Path cost: a function that assigns a cost to a path; the cost of a path is
the sum of the costs of the individual actions along the path
• Goal test: a test to determine whether a state is a goal state
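These components can be collected into one small problem skeleton. This is a generic sketch, not code from the slides; the class name and the toy counting example at the end are illustrative assumptions.

```python
# A generic search-problem skeleton: initial state, successor function
# s(x), goal test, and an additive path cost.

class SearchProblem:
    def __init__(self, initial, successors, is_goal, step_cost=lambda s, t: 1):
        self.initial = initial            # initial state
        self.successors = successors      # s(x): states reachable in one action
        self.is_goal = is_goal            # goal test
        self.step_cost = step_cost        # cost of one action from s to t

    def path_cost(self, path):
        # Cost of a path = sum of the costs of its individual actions.
        return sum(self.step_cost(s, t) for s, t in zip(path, path[1:]))

# Hypothetical example: reach 3 from 0 using +1 or +2 steps.
problem = SearchProblem(0, lambda s: {s + 1, s + 2}, lambda s: s == 3)
print(problem.successors(0))         # {1, 2}
print(problem.path_cost([0, 2, 3]))  # 2
```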
What is search?
• Search is the systematic examination of states to find path from the
start/root state to the goal state.
• The set of possible states, together with operators defining their
connectivity constitute the search space.
• The output of a search algorithm is a solution, that is, a path from
the initial state to a state that satisfies the goal test.
Problem solving agents
• A problem-solving agent is a goal-based agent. It decides what to do by
finding sequences of actions that lead to desirable states. The agent can
adopt a goal and aim at satisfying it.
• To illustrate the agent’s behavior, let us take an example where our agent
is in the city of Arad, which is in Romania. The agent has to adopt a goal
of getting to Bucharest.
• Goal formulation, based on the current situation and the agent’s
performance measure, is the first step in problem solving.
• The agent’s task is to find out which sequence of actions will get to a goal
state.
• Problem formulation is the process of deciding what actions and states to
consider given a goal.
Example problems
Toy problems
• Vacuum world
• 8 puzzle
• 8 queens
Real world problems
• Route-finding problem
• Airline travel problem
• Touring problems
• The travelling salesperson problem (tsp)
• VLSI Layout
• Robot navigation
• Automatic assembly sequencing
Vacuum world
• States: The agent is in one of two locations, each of which might or
might not contain dirt. Thus there are 2 × 2² = 8 possible world
states.
• Initial state: Any state can be designated as initial state.
• Successor function: This generates the legal states that result from
trying the three actions (Left, Right, Suck). The complete state space
is shown in the figure.
• Goal Test: This tests whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number
of steps in the path.
8-Puzzle
• An 8-puzzle consists of a 3x3 board with eight numbered tiles and a
blank space. A tile adjacent to the blank space can slide into the
space. The object is to reach the goal state, as shown in Figure 2.4.
8-Puzzle
• The problem formulation is as follows:
• o States : A state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
• o Initial state : Any state can be designated as the initial state. It can be noted that any given goal can be reached from
exactly half of the possible initial states.
• o Successor function : This generates the legal states that result from trying the four actions (blank moves Left, Right, Up, or
Down).
• o Goal Test : This checks whether the state matches the goal configuration shown in Figure(Other goal configurations are
possible)
• o Path cost : Each step costs 1,so the path cost is the number of steps in the path.
• The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as test problems for new search algorithms
in AI. This general class is known to be NP-complete. The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved.
• The 15-puzzle (on a 4 x 4 board) has around 1.3 trillion states, and random instances can be solved optimally in a few
milliseconds by the best search algorithms.
• The 24-puzzle (on a 5 x 5 board) has around 10^25 states, and random instances are still quite difficult to solve optimally with
current machines and algorithms.
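A sketch of the successor function for this formulation, assuming states are stored as 9-tuples in row-major order with 0 marking the blank (an encoding chosen here for illustration):

```python
# Successor function for the 8-puzzle: each action slides the blank,
# i.e., an adjacent tile moves into the blank square.
def successors(state):
    i = state.index(0)                   # position of the blank
    row, col = divmod(i, 3)
    moves = []
    # (action, row delta, col delta) for moving the blank
    for action, dr, dc in [('Up', -1, 0), ('Down', 1, 0),
                           ('Left', 0, -1), ('Right', 0, 1)]:
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:    # stay on the board
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]      # swap blank with adjacent tile
            moves.append((action, tuple(s)))
    return moves
```

A corner blank has 2 legal moves, an edge blank 3, and the centre blank all 4.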
8-Queens problem
8-Queens problem
• The goal of the 8-queens problem is to place 8 queens on the
chessboard such that no queen attacks any other. (A queen attacks
any piece in the same row, column, or diagonal.)
• Figure 2.3 shows an attempted solution that fails: the queen in the
rightmost column is attacked by the queen at the top left.
• An incremental formulation involves operators that augment the
state description, starting with an empty state. For the 8-queens
problem, this means each action adds a queen to the state. A
complete-state formulation starts with all 8 queens on the board
and moves them around. In either case the path cost is of no interest
because only the final state counts.
8-Queens problem
• The first incremental formulation one might try is the following:
• o States: Any arrangement of 0 to 8 queens on board is a state.
• o Initial state: No queen on the board.
• o Successor function: Add a queen to any empty square.
• o Goal Test: 8 queens are on the board, none attacked.
• In this formulation, we have 64 · 63 · … · 57 ≈ 1.8 × 10^14 possible
sequences to investigate.
8-Queens problem
• A better formulation would prohibit placing a queen in any square that is
already attacked.
• o States : Arrangements of n queens (0 <= n <= 8), one per column in the
leftmost n columns, with no queen attacking another.
• o Successor function : Add a queen to any square in the leftmost empty
column such that it is not attacked by any other queen.
• This formulation reduces the 8-queens state space from 1.8 × 10^14 to just
2,057, and solutions are easy to find.
• For 100 queens, the initial formulation has roughly 10^400 states,
whereas the improved formulation has about 10^52 states. This is a huge
reduction, but the improved state space is still too big for the algorithms
to handle.
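The improved incremental formulation can be implemented as a short backtracking search. The sketch below places one queen per column, left to right, and only on unattacked squares:

```python
# Incremental 8-queens: queens[c] is the row of the queen in column c.
# A new queen is only placed on a square not attacked by earlier queens.
def solve(n=8):
    solutions = []

    def extend(queens):
        col = len(queens)               # next column to fill
        if col == n:
            solutions.append(tuple(queens))
            return
        for row in range(n):
            # safe iff no earlier queen shares the row or a diagonal
            if all(row != r and abs(row - r) != col - c
                   for c, r in enumerate(queens)):
                extend(queens + [row])

    extend([])
    return solutions
```

For n = 8 this finds all 92 solutions almost instantly, which matches the slide's point that the pruned state space (2,057 states) is easy to search.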
Real world problems
Route finding problem
• Route-finding problem is defined in terms of specified locations and
transitions along links between them. Route-finding algorithms are
used in a variety of applications, such as routing in computer
networks, military operations planning, and airline travel planning
systems.
Airline Travel problem
• The airline travel problem is specified as follows:
• o States: Each is represented by a location (e.g., an airport) and the
current time.
• o Initial state: This is specified by the problem.
• o Successor function: This returns the states resulting from taking any
scheduled flight (further specified by seat class and location), leaving later
than the current time plus the within-airport transit time, from the
current airport to another.
• o Goal Test: Are we at the destination by some prespecified time?
• o Path cost: This depends upon the monetary cost, waiting time, flight
time, customs and immigration procedures, seat quality, time of day,
type of airplane, frequent-flyer mileage awards, and so on.
Touring problems
• Touring problems are closely related to route-finding problems, but
with an important difference. Consider, for example, the problem
“Visit every city at least once”, as shown in the Romania map.
• As with route-finding the actions correspond to trips between
adjacent cities. The state space, however, is quite different.
• The initial state would be “In Bucharest; visited {Bucharest}”.
• A typical intermediate state would be “In Vaslui; visited {Bucharest,
Urziceni, Vaslui}”.
• The goal test would check whether the agent is in Bucharest and all
20 cities have been visited
The Travelling Salesperson Problems(TSP)
• The TSP is a touring problem in which each city must be visited exactly
once. The aim is to find the shortest tour. The problem is known to be
NP-hard. Enormous efforts have been expended to improve the
capabilities of TSP algorithms. These algorithms are also used in
tasks such as planning the movements of automatic circuit-board drills
and of stocking machines on shop floors.
VLSI Layout
• A VLSI layout problem requires positioning millions of components
and connections on a chip to minimize area, minimize circuit delays,
minimize stray capacitances, and maximize manufacturing yield.
The layout problem is split into two parts: cell layout and channel
routing.
Robot Navigation
• Robot navigation is a generalization of the route-finding problem.
Rather than a discrete set of routes, a robot can move in a
continuous space with an infinite set of possible actions and states.
For a circular robot moving on a flat surface, the space is essentially
two-dimensional. When the robot has arms and legs or wheels that
also must be controlled, the search space becomes
multi-dimensional. Advanced techniques are required to make the
search space finite.
Automatic Assembly Sequence
• Examples include the assembly of intricate objects such as electric
motors. The aim in assembly problems is to find an order in which
to assemble the parts of some object. If the wrong order is
chosen, there will be no way to add some part later without
undoing some work already done. Another important assembly
problem is protein design, in which the goal is to find a sequence of
amino acids that will fold into a three-dimensional protein with
the right properties to cure some disease.
Internet searching
• In recent years there has been increased demand for software
robots that perform Internet searching, looking for answers to
questions, for related information, or for shopping deals. The
searching techniques consider the Internet as a graph of nodes (pages)
connected by links.
Different search algorithms
1.7 Search Strategies
• Search strategies in AI help find solutions to problems by navigating
through the state space. These strategies can be divided into two
main categories: uninformed and informed.
• Uninformed search (BFS, DFS, etc.)
• Informed search (A* search, greedy best-first)
1.8 Uninformed Search Strategies
• Uninformed search strategies (blind search) are those that explore the
state space without any additional information other than the current
state. Common types include:
1. Breadth-First Search (BFS): Explores all nodes at the current depth level
before moving on to the next level.
2. Depth-First Search (DFS): Explores as far down a branch of the search tree
as possible before backtracking.
3. Uniform Cost Search: Expands the node with the lowest path cost,
ensuring the least costly solution is found first.
4. Depth-Limited Search: A variant of DFS that imposes a limit on how deep
the search can go.
5. Iterative Deepening Search: Combines BFS's completeness and DFS's space
efficiency by performing DFS repeatedly with increasing depth limits.
Breadth-First Search
• ✓Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm
searches breadthwise in a tree or graph, so it is called breadth-first search.
• ✓The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the
current level before moving to nodes of the next level.
• ✓The breadth-first search algorithm is an example of a general-graph search algorithm.
• ✓Breadth-first search is implemented using a FIFO queue data structure.
Advantages:
• ✓BFS will provide a solution if any solution exists.
• ✓If there is more than one solution for a given problem, then BFS will provide the minimal solution,
which requires the least number of steps.
Disadvantages:
• ✓It requires lots of memory since each level of the tree must be saved into memory to expand the next
level.
• ✓BFS needs lots of time if the solution is far away from the root node.
Breadth-First Search
In the below tree structure, we have shown the traversal of the tree using the BFS algorithm from the root node S
to the goal node K. The BFS algorithm traverses in layers, so it will follow the path shown by the dotted
arrow, and the traversed path will be: S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
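As a sketch of the idea (assuming the graph is given as a plain adjacency dict, which the slides leave implicit), BFS keeps a FIFO queue of paths and returns the first path that reaches the goal:

```python
from collections import deque

# Breadth-first search: FIFO frontier, layer by layer.
def bfs(graph, start, goal):
    frontier = deque([[start]])          # queue of partial paths
    explored = {start}
    while frontier:
        path = frontier.popleft()        # oldest path first (FIFO)
        node = path[-1]
        if node == goal:
            return path                  # shortest path in number of steps
        for nbr in graph.get(node, []):
            if nbr not in explored:
                explored.add(nbr)
                frontier.append(path + [nbr])
    return None                          # no solution exists
```

Because whole layers are expanded before moving deeper, the returned path always has the least number of steps, matching the advantage listed above.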
Depth first search
• ✓Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
• ✓It is called the depth-first search because it starts from the root node and follows each path to its greatest depth
node before moving to the next path.
• ✓DFS uses a stack data structure for its implementation.
• ✓The process of the DFS algorithm is similar to the BFS algorithm.
Advantage:
• ✓DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root node to the
current node.
• ✓It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).
Disadvantage:
• ✓There is the possibility that many states keep re-occurring, and there is no
guarantee of finding the solution.
• ✓The DFS algorithm goes deep down in its search and sometimes it may go
into an infinite loop.
Depth first search
In the below search tree, we have shown the flow of depth-first search, and it will follow the order:
root node ---> left node ---> right node.
It will start searching from the root node S and traverse A, then B, then D and E; after traversing E, it will
backtrack the tree, as E has no other successor and the goal node is still not found. After backtracking it will
traverse node C and then G, and there it will terminate, as it has found the goal node.
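The same idea as a sketch, again over an assumed adjacency-dict graph; the stack (LIFO) frontier is what makes the search dive deep before backtracking:

```python
# Depth-first search: LIFO frontier (an explicit stack of paths).
def dfs(graph, start, goal):
    frontier = [[start]]                 # stack of partial paths
    explored = set()
    while frontier:
        path = frontier.pop()            # newest path first (LIFO)
        node = path[-1]
        if node == goal:
            return path
        if node not in explored:
            explored.add(node)
            # push children reversed so the leftmost child is expanded first
            for nbr in reversed(graph.get(node, [])):
                frontier.append(path + [nbr])
    return None
```

The `explored` set guards against the re-occurring-state problem mentioned in the disadvantages; without it, DFS on a graph with cycles could loop forever.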
Depth Limited Search
• A depth-limited search algorithm is similar to depth-first search with a predetermined limit.
Depth-limited search can solve the drawback of the infinite path in depth-first search. In this
algorithm, the node at the depth limit is treated as if it has no further successor nodes.
• Depth-limited search can be terminated with two conditions of failure:
• Standard failure value: It indicates that the problem does not have any solution.
• Cutoff failure value: It indicates that there is no solution for the problem within the given depth limit.
Advantages:
• Depth-limited search is memory efficient.
Disadvantages:
• Depth-limited search is also incomplete.
• It may not be optimal if the problem has more than one solution.
Depth Limited Search
Uniform cost search
✓Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph.
✓This algorithm comes into play when a different cost is available for each edge. The primary goal of uniform-cost
search is to find a path to the goal node which has the lowest cumulative cost.
✓Uniform-cost search expands nodes according to their path costs from the root node.
✓It can be used to solve any graph/tree where the optimal cost is in demand.
✓A uniform-cost search algorithm is implemented using a priority queue.
✓It gives maximum priority to the lowest cumulative cost.
✓Uniform-cost search is equivalent to the BFS algorithm if the path cost of all edges is the same.
Advantages:
✓Uniform-cost search is optimal because at every state the path with the least cost is chosen.
Disadvantages:
✓It does not care about the number of steps involved in the search and is only concerned with path cost, due to which this
algorithm may be stuck in an infinite loop.
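A sketch using Python's `heapq` as the priority queue, with edge costs stored alongside each neighbour (an assumed representation, `graph[node] = [(neighbour, step_cost), ...]`):

```python
import heapq

# Uniform-cost search: the frontier is a priority queue ordered by
# cumulative path cost g(n) from the root.
def ucs(graph, start, goal):
    frontier = [(0, start, [start])]     # (path_cost, node, path)
    best = {start: 0}                    # cheapest known cost to each node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path            # first goal popped is optimal
        for nbr, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best.get(nbr, float('inf')):
                best[nbr] = new_cost
                heapq.heappush(frontier, (new_cost, nbr, path + [nbr]))
    return None
```

With all edge costs equal, the expansion order degenerates to BFS, matching the equivalence noted above.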
Uniform cost search
Iterative Deepening Depth-First search
✓The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds the
best depth limit by gradually increasing the limit until a goal is found.
✓This algorithm performs depth-first search up to a certain "depth limit", and it keeps increasing the depth limit after
each iteration until the goal node is found.
✓This search algorithm combines the benefits of breadth-first search's completeness and depth-first search's memory
efficiency.
✓The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of
the goal node is unknown.
Advantages:
✓It combines the benefits of the BFS and DFS search algorithms in terms of fast search and memory efficiency.
Disadvantages:
✓The main drawback of IDDFS is that it repeats all the work of the previous phase.
Iterative Deepening Depth-First search
The following tree structure shows the iterative deepening depth-first search. The IDDFS algorithm performs
various iterations until it finds the goal node. The iterations performed by the algorithm are given as:
• 1'st Iteration-----> A
• 2'nd Iteration----> A, B, C
• 3'rd Iteration------>A, B, D, E, C, F, G
• 4'th Iteration------>A, B, D, H, I, E, C, F, K, G
• In the fourth iteration, the algorithm will find the goal node.
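The iteration pattern above can be sketched as a depth-limited DFS called with limits 0, 1, 2, … (cycle checking is omitted, so this sketch assumes a tree-shaped graph like the one in the figure):

```python
# Iterative deepening DFS: repeat depth-limited DFS with growing limits.
def iddfs(graph, start, goal, max_depth=50):
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None                  # depth limit reached
        for nbr in graph.get(node, []):
            found = dls(nbr, limit - 1, path + [nbr])
            if found:
                return found
        return None

    # each outer iteration redoes the previous phase's work, as noted above
    for limit in range(max_depth + 1):
        result = dls(start, limit, [start])
        if result:
            return result
    return None
```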
Bidirectional Search
✓The bidirectional search algorithm runs two simultaneous searches, one from the initial state, called forward search,
and the other from the goal node, called backward search, to find the goal node.
✓Bidirectional search replaces one single search graph with two small subgraphs, one of which starts the
search from the initial vertex and the other from the goal vertex.
✓The search stops when these two graphs intersect each other.
✓Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Advantages:
✓Bidirectional search is fast.
✓Bidirectional search requires less memory
Disadvantages:
✓Implementation of the bidirectional search tree is difficult.
✓In bidirectional search, one should know the goal state in advance.
Bidirectional Search
In the below search tree, the bidirectional search algorithm is applied. This algorithm divides one graph/tree into
two sub-graphs. It starts traversing from node 1 in the forward direction and from goal node 16 in the
backward direction. The algorithm terminates at node 9, where the two searches meet.
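A sketch of the idea for an undirected graph: two BFS frontiers are expanded a layer at a time, and the final path is stitched together at the node where they meet:

```python
from collections import deque

# Bidirectional BFS: forward search from start, backward search from goal.
def bidirectional_search(graph, start, goal):
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def path_through(meet):
        # walk back from the meeting node to start, then forward to goal
        path, n = [], meet
        while n is not None:
            path.append(n)
            n = parents_f[n]
        path.reverse()
        n = parents_b[meet]
        while n is not None:
            path.append(n)
            n = parents_b[n]
        return path

    while frontier_f and frontier_b:
        for frontier, parents, others in ((frontier_f, parents_f, parents_b),
                                          (frontier_b, parents_b, parents_f)):
            for _ in range(len(frontier)):       # expand one full layer
                node = frontier.popleft()
                for nbr in graph.get(node, []):
                    if nbr not in parents:
                        parents[nbr] = node
                        if nbr in others:        # the two searches meet
                            return path_through(nbr)
                        frontier.append(nbr)
    return None
```

The implementation bookkeeping (two parent maps, stitching the path) is exactly why the slides list "implementation is difficult" as a disadvantage.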
Backtracking search
• A variant of depth-first search called backtracking search uses less
memory: only one successor is generated at a time rather than all
successors. Only O(m) memory is needed rather than O(bm).
Searching with partial information
Different types of incompleteness lead to three distinct problem
types:
• Sensorless problems (conformant): the agent has no sensors at all.
• Contingency problems: the environment is partially observable or the
outcomes of actions are uncertain (adversarial).
• Exploration problems: the states and actions of the environment are
unknown.
Searching with partial information
• Partial knowledge of states and actions:
sensorless or conformant problem
• – The agent may have no idea where it is; the solution (if any) is a sequence.
contingency problem
• – Percepts provide new information about the current state; the solution is a tree
or policy; search and execution are often interleaved.
• – If the uncertainty is caused by the actions of another agent: adversarial
problem.
exploration problem
• – The states and actions of the environment are unknown.
1.10 Informed Search Strategies
• Informed search strategies use additional knowledge about the
problem to make the search process more efficient. They rely on
heuristics, which provide estimates of how close a given state is to
the goal. Common informed search strategies include:
1. Greedy Best-First Search: Expands the node that appears to be
closest to the goal based on a heuristic.
2. A* Search: Combines the advantages of BFS and greedy search by
considering both the cost to reach a node and the heuristic estimate
of the cost from that node to the goal.
Best-first search
• The best-first search algorithm always selects the path which appears best at that
moment.
• The aim is to reach the goal from the initial state via the shortest path.
Heuristic
• A heuristic is an approximate measure of how close you are to the target.
• It must be zero if the node represents a goal state.
Greedy best first search
• Greedy best-first search is a combination of depth-first search and breadth-first search.
• In the greedy best-first search algorithm, we expand the node which is closest to
the goal node, as estimated by the heuristic function h(n):
f(n) = h(n)
where
h(n) = estimated cost from node n to the goal.
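A sketch that orders the frontier purely by h(n), assuming the heuristic is given as a lookup table (the graph and table below are illustrative):

```python
import heapq

# Greedy best-first search: expand the node with the smallest h(n),
# ignoring the path cost g(n) entirely.
def greedy_best_first(graph, h, start, goal):
    frontier = [(h[start], start, [start])]      # ordered by h only
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for nbr in graph.get(node, []):
            if nbr not in explored:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None
```

Because g(n) is ignored, the path found is not guaranteed to be the cheapest, only the one that looked closest to the goal at each step.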
Greedy best first search
A* search
• In the A* search algorithm, we use the search heuristic h(n) as well as the
cost to reach the node, g(n). Hence we can combine both costs as
follows, and this sum is called the fitness number f(n).
• It has the combined features of uniform-cost search and greedy
best-first search.
• f(n) = g(n) + h(n)
where,
• g(n) -> cost of the path from the start node to node n
• h(n) -> estimated cost of the path from node n to the goal state (heuristic function)
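The formula above can be sketched directly, combining the uniform-cost-style frontier with a greedy-style heuristic; the weighted graph representation and heuristic table are assumptions for illustration:

```python
import heapq

# A* search: the frontier is ordered by f(n) = g(n) + h(n).
def astar(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                          # cheapest known g(n)
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path                       # optimal if h is admissible
        for nbr, step in graph.get(node, []):
            g2 = g + step
            if g2 < best_g.get(nbr, float('inf')):
                best_g[nbr] = g2
                heapq.heappush(frontier,
                               (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None
```

Setting h(n) = 0 everywhere reduces this to uniform-cost search, and dropping g(n) from the ordering gives greedy best-first search, which is the "combined features" point made above.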
A* search
Unit 1 - Completed
