
UNIT 4

PROBABILISTIC
REASONING

1
Introduction
• Probabilistic reasoning is a way of knowledge representation
where we apply the concept of probability to indicate the
uncertainty in knowledge.
• In probabilistic reasoning, we combine probability theory
with logic to handle the uncertainty.
• In AI, probabilistic models are used to examine data using
statistical methods.
• Probabilistic modeling was one of the first machine learning
approaches.

2
Introduction
• Probabilistic models provide a framework for understanding what
learning is.
• The probabilistic framework specifies how to represent and
manipulate uncertainty about models.
• AI makes use of probabilistic reasoning:
When we are uncertain about the premises
When the number of possible predicates becomes
unmanageable
When it is known that an experiment contains an error

3
Introduction
• A probability model that is completely determined by the
joint distribution over all of its random variables is called a
full joint probability distribution.
• Independence and conditional independence relationships
among variables can greatly reduce the number of
probabilities that need to be specified in order to define the
full joint distribution.

4
Joint Probability Distribution
• If we have variables x1, x2, x3, ..., xn, then the probabilities of every
combination of values of x1, x2, ..., xn are known as the joint
probability distribution P[x1, x2, x3, ..., xn].
• By the chain rule, the joint probability distribution can be expanded
as follows:
• P[x1, x2, ..., xn] = P[x1 | x2, x3, ..., xn] P[x2, x3, ..., xn]
• = P[x1 | x2, x3, ..., xn] P[x2 | x3, ..., xn] .... P[xn-1 | xn] P[xn]
• In general, for each variable Xi (with its parents chosen from the
preceding variables), we can write the equation as:
• P(Xi | Xi-1, ..., X1) = P(Xi | Parents(Xi))
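
As a quick illustration, here is a minimal Python sketch of the chain rule for two hypothetical binary variables (rain and sprinkler); the probabilities are made up purely for illustration:

```python
# Chain rule: P(rain, sprinkler) = P(sprinkler | rain) * P(rain)
# The numbers below are illustrative only.
p_rain = {True: 0.2, False: 0.8}
p_sprinkler_given_rain = {
    True:  {True: 0.01, False: 0.99},   # P(sprinkler | rain=True)
    False: {True: 0.40, False: 0.60},   # P(sprinkler | rain=False)
}

def joint(rain, sprinkler):
    """P(rain, sprinkler) via the chain-rule factorization."""
    return p_sprinkler_given_rain[rain][sprinkler] * p_rain[rain]

print(joint(True, False))   # 0.99 * 0.2 = 0.198
```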

5
Uncertainty
• Uncertainty arises when there is not enough information, or when
there is ambiguity in the data or in decision-making.
• It is a fundamental concept in AI, as real-world data is often noisy
and incomplete.
• AI systems must account for uncertainty to make informed
decisions.
• AI deals with uncertainty by using models and methods that
assign probabilities to different outcomes.
• Managing uncertainty is important for AI applications like
self-driving cars and medical diagnosis, where safety and accuracy
are key.

6
Uncertainty
• Techniques for Addressing Uncertainty in AI:-
Probabilistic Logic Programming
Fuzzy Logic Programming
Nonmonotonic Logic Programming
Paraconsistent Logic Programming
Hybrid Logic Programming

7
Uncertainty
• Probability plays a central role in AI by providing a formal
framework for handling uncertainty.
• AI systems use probabilistic models and reasoning to make
informed decisions, assess risk, and quantify uncertainty,
allowing them to operate effectively in complex and
uncertain real-world scenarios.
• In probability, there are two ways to solve problems of
uncertain information:
• Bayes’ rule: P(A|B) = (P(B|A) * P(A)) / P(B)
• Bayesian statistics: a branch of statistics that uses probability to
express and update beliefs when analyzing data.
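
For instance, the following short Python sketch applies Bayes’ rule to a hypothetical diagnostic test; all the numbers (prior, likelihoods) are illustrative assumptions, not values from this unit:

```python
# Bayes' rule: P(A | B) = P(B | A) * P(A) / P(B)
# Hypothetical diagnostic-test numbers, for illustration only.
p_disease = 0.01                 # prior P(A)
p_pos_given_disease = 0.95       # likelihood P(B | A)
p_pos_given_no_disease = 0.05    # P(B | not A)

# Total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_no_disease * (1 - p_disease))

p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))   # ~0.161
```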

8
Representing Knowledge in an Uncertain
Domain
• AI systems operate in environments where uncertainty is a
fundamental aspect.
• Representing and reasoning about knowledge in such uncertain
domains is crucial for building robust and intelligent systems.
• In real-world applications, AI systems frequently encounter
incomplete, ambiguous, or noisy information.
• Traditional deterministic approaches fall short in such scenarios,
necessitating the use of probabilistic and fuzzy methods to handle
uncertainty effectively.
• These methods enable AI systems to make informed decisions,
predict outcomes, and adapt to changing environments.

9
Representing Knowledge in an
Uncertain Domain
• For representing knowledge in an uncertain domain, we use a data
structure called a Bayesian network.
• It is used to represent the dependencies among variables.
• Bayesian networks can represent essentially any full joint
probability distribution and in many cases can do so very
concisely.

10
Representing Knowledge in an Uncertain
Domain
• A Bayesian network is a directed graph in which each node is
annotated with quantitative probability information.
• The full specification is as follows:
• 1. Each node corresponds to a random variable, which may be
discrete or continuous.
• 2. A set of directed links or arrows connects pairs of nodes.
• If there is an arrow from node X to node Y , X is said to be a parent
of Y.
• The graph has no directed cycles and hence is a directed acyclic
graph, or DAG.
• 3. Each node Xi has a conditional probability distribution P(Xi
|Parents(Xi)) that quantifies the effect of the parents on the node.

11
Representing Knowledge in an Uncertain
Domain
• The topology of the network i.e. the set of nodes and links
specifies the conditional independence relationships that hold in
the domain.
• The intuitive meaning of an arrow is typically that X has a direct
influence on Y.

12
Representing Knowledge in an Uncertain
Domain
• Example: Harry installed a new burglar alarm at his home to
detect burglary. The alarm responds reliably to a burglary but also
responds to minor earthquakes. Harry has two neighbors, David and
Sophia, who have taken the responsibility to inform Harry at work
when they hear the alarm. David always calls Harry when he hears
the alarm, but sometimes he confuses the telephone ringing with
the alarm and calls then too. On the other hand, Sophia likes to
listen to loud music, so sometimes she misses the alarm. Here we
would like to compute the probability of the Burglary Alarm.
• Problem:
• Calculate the probability that the alarm has sounded, but neither a
burglary nor an earthquake has occurred, and both David and Sophia
called Harry.

13
Representing Knowledge in an Uncertain
Domain
• Solution:
• The Bayesian network structure shows that Burglary and
Earthquake are the parent nodes of Alarm and directly affect the
probability of the alarm going off, while David's and Sophia's calls
depend only on the alarm.
• The network thus represents our assumptions: the neighbors do not
perceive the burglary directly, do not notice the minor earthquake,
and do not confer with each other before calling.
• The conditional distribution for each node is given as a
conditional probability table, or CPT.
• Each row in the CPT must sum to 1 because the entries in a row
represent an exhaustive set of cases for the variable.
• In a CPT, a Boolean variable with k Boolean parents has 2^k rows,
i.e. 2^k independently specifiable probabilities. Hence, if there are
two parents, the CPT contains 4 probability values.

14
Representing Knowledge in an Uncertain
Domain
• List of all events occurring in this network:
• Burglary (B)
• Earthquake(E)
• Alarm(A)
• David Calls(D)
• Sophia calls(S)

15
Representing Knowledge in an Uncertain
Domain
• We can write the joint event of the problem statement as the
probability P[D, S, A, B, E].
• Using the chain rule and the conditional independences encoded in
the network, this can be rewritten in terms of the joint probability
distribution:
• P[D, S, A, B, E] = P[D | S, A, B, E] P[S, A, B, E]
• = P[D | S, A, B, E] P[S | A, B, E] P[A, B, E]
• = P[D | A] P[S | A, B, E] P[A, B, E]
• = P[D | A] P[S | A] P[A | B, E] P[B, E]
• = P[D | A] P[S | A] P[A | B, E] P[B | E] P[E]
• = P[D | A] P[S | A] P[A | B, E] P[B] P[E], since Burglary and
Earthquake are independent.

16
Representing Knowledge in an Uncertain
Domain

17
Representing Knowledge in an Uncertain
Domain
• Let's take the observed probability for the Burglary and
earthquake component:
• P(B= True) = 0.002, which is the probability of burglary.
• P(B= False)= 0.998, which is the probability of no
burglary.
• P(E= True)= 0.001, which is the probability of a minor
earthquake.
• P(E= False)= 0.999, which is the probability that no
earthquake occurred.

18
Representing Knowledge in an Uncertain
Domain
• We can provide the conditional probabilities as per the
below tables:
• Conditional probability table for Alarm A:
• The conditional probability of Alarm A depends on Burglary and
Earthquake:

B       E       P(A= True)   P(A= False)
True    True    0.94         0.06
True    False   0.95         0.05
False   True    0.31         0.69
False   False   0.001        0.999

19
Representing Knowledge in an Uncertain
Domain
• Conditional probability table for David Calls:
• The conditional probability that David calls depends on its parent
node Alarm:

A       P(D= True)   P(D= False)
True    0.91         0.09
False   0.05         0.95

20
Representing Knowledge in an Uncertain
Domain
• Conditional probability table for Sophia Calls:
• The conditional probability that Sophia calls depends on its parent
node Alarm:

A       P(S= True)   P(S= False)
True    0.75         0.25
False   0.02         0.98

21
Representing Knowledge in an Uncertain
Domain
• From the formula of joint distribution, we can write the
problem statement in the form of probability distribution:
• P(S, D, A, ¬B, ¬E) = P(S|A) * P(D|A) * P(A|¬B ∧ ¬E) * P(¬B) * P(¬E)
• = 0.75 * 0.91 * 0.001 * 0.998 * 0.999
• = 0.00068045.
• Hence, a Bayesian network can answer any query about the
domain by using Joint distribution.
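
The same calculation can be reproduced in a few lines of Python. This is a minimal sketch that hard-codes the CPT entries from the tables above and multiplies them according to the network factorization:

```python
# CPT entries from the burglary example above.
P_B = {True: 0.002, False: 0.998}
P_E = {True: 0.001, False: 0.999}
P_A = {  # P(Alarm=True | Burglary, Earthquake)
    (True, True): 0.94, (True, False): 0.95,
    (False, True): 0.31, (False, False): 0.001,
}
P_D = {True: 0.91, False: 0.05}   # P(DavidCalls=True | Alarm)
P_S = {True: 0.75, False: 0.02}   # P(SophiaCalls=True | Alarm)

def joint(d, s, a, b, e):
    """P(D=d, S=s, A=a, B=b, E=e) as the product of CPT entries."""
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    pd = P_D[a] if d else 1 - P_D[a]
    ps = P_S[a] if s else 1 - P_S[a]
    return pd * ps * pa * P_B[b] * P_E[e]

# P(S, D, A, not B, not E) from the slide:
print(joint(d=True, s=True, a=True, b=False, e=False))  # ≈ 0.00068045
```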

22
The Semantics of Bayesian Networks

• There are two ways in which one can understand the semantics of
Bayesian networks.
• 1. The network as a representation of the joint probability
distribution.
• 2. View it as an encoding of a collection of conditional
independence statements.
• The two views are equivalent, but the first turns out to be
helpful in understanding how to construct networks,
whereas the second is helpful in designing inference
procedures.

23
The Semantics of Bayesian Networks
• Representing the full joint distribution:-
• A Bayesian network is a directed acyclic graph with some numeric
parameters attached to each node.
• One way to define what the network means, i.e. its semantics, is to
define the way in which it represents a specific joint distribution
over all the variables.
• To do this, we first need to retract (temporarily) the parameters
associated with each node.
• Those parameters correspond to the conditional probabilities
P(Xi | Parents(Xi)); this is a true statement, but until we assign
semantics to the network as a whole, we should think of them just
as numbers θ(Xi | Parents(Xi)).

24
The Semantics of Bayesian Networks
• A generic entry in the joint distribution is the probability of a
conjunction of particular assignments to each variable, such as
P(X1 = x1 ∧ . . . ∧ Xn = xn). We use the notation P(x1, . . . , xn) as an
abbreviation for this. The value of this entry is given by the formula
• P(x1, . . . , xn) = ∏ (i = 1..n) θ(xi | parents(Xi))
• where parents(Xi) denotes the values of Parents(Xi) that appear in
x1, . . . , xn. Thus, each entry in the joint distribution is
represented by the product of the appropriate elements of the
conditional probability tables (CPTs) in the Bayesian network.

25
The Semantics of Bayesian Networks
• From this definition, it is easy to prove that the parameters
θ(Xi |Parents(Xi)) are exactly the conditional probabilities
P(Xi |Parents(Xi)) implied by the joint distribution.
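
To make this concrete, here is a small hedged sketch using a hypothetical two-node network X → Y with made-up parameters: it builds the full joint as the product of the node parameters θ and then checks that the conditional probability P(Y | X) recovered from that joint matches the θ values we started from:

```python
from itertools import product

# Hypothetical parameters theta for a two-node network X -> Y.
theta_X = {True: 0.3, False: 0.7}                 # θ(X)
theta_Y = {True: {True: 0.9, False: 0.1},         # θ(Y | X=True)
           False: {True: 0.2, False: 0.8}}        # θ(Y | X=False)

# Full joint as the product of the parameters.
joint = {(x, y): theta_X[x] * theta_Y[x][y]
         for x, y in product([True, False], repeat=2)}

# Recover P(Y=True | X) from the joint by conditioning.
for x in (True, False):
    p_x = joint[(x, True)] + joint[(x, False)]
    p_y_given_x = joint[(x, True)] / p_x
    print(x, round(p_y_given_x, 3), theta_Y[x][True])  # the two values match
```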

26
Dempster Shafer Theory
• Dempster-Shafer Theory (DST) is a theory of evidence: it combines
evidence about all possible outcomes of the problem.
• It is used to solve problems where different pieces of evidence may
point to different results.
• Uncertainty in this model is handled by:
• Considering all possible outcomes.
• Belief, which measures the support that the available evidence
gives to a possibility.
• Plausibility, which measures how compatible the evidence is with a
possible outcome.

27
Dempster Shafer Theory
• Example:
• Let us consider a room where four people are present, A, B,
C, and D. Suddenly the lights go out and when the lights
come back, B has been stabbed in the back by a knife,
leading to his death. No one came into the room and no one
left the room. We know that B has not committed suicide.
Now we have to find out who the murderer is.
• To solve this, there are the following possibilities:
• Either {A} or {C} or {D} has killed him.
• Either {A, C} or {C, D} or {A, D} have killed him.
• Or all three of them have killed him, i.e. {A, C, D}.
• None of them has killed him, i.e. the empty set ∅.

28
Dempster Shafer Theory
• There will be possible evidence by which we can find the
murderer by the measure of plausibility.
• Set of possible conclusions (P): {p1, p2, ..., pn}
where P is the set of possible conclusions and must be exhaustive,
i.e. at least one pi must be true.
• The pi must be mutually exclusive.
• The power set will contain 2^n elements, where n is the number of
elements in the set of possible conclusions.
For example:
If P = {a, b, c}, then the power set is given as
{∅, {a}, {b}, {c}, {a, b}, {b, c}, {a, c}, {a, b, c}}, i.e. 2^3 = 8 elements.

29
Dempster Shafer Theory
• Mass function m(K): m(K) is the amount of evidence assigned
exactly to the set K, i.e. evidence that supports K but cannot be
divided among more specific subsets of K.
• Belief in K: The belief in an element K of the power set is the sum
of the masses of all sets that are subsets of K. This can be
explained through an example:
Let's say K = {a, d, c}
Bel(K) = m(a) + m(d) + m(c) + m(a, d) + m(a, c) + m(d, c) + m(a, d, c)
• Plausibility in K: It is the sum of the masses of all sets that
intersect K,
i.e. Pl(K) = m(a) + m(d) + m(c) + m(a, d) + m(d, c) + m(a, c) + m(a, d, c)
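
A compact Python sketch of these two definitions is shown below; the mass assignment over the frame {a, c, d} is hypothetical (invented numbers that sum to 1), chosen only to illustrate the subset and intersection sums:

```python
# Illustrative mass assignment over subsets of the frame {a, c, d};
# the numbers are hypothetical and sum to 1.
mass = {
    frozenset({"a"}): 0.3,
    frozenset({"c"}): 0.2,
    frozenset({"d"}): 0.1,
    frozenset({"a", "c"}): 0.15,
    frozenset({"a", "c", "d"}): 0.25,
}

def bel(K):
    """Belief: sum of the masses of all sets that are subsets of K."""
    K = frozenset(K)
    return sum(m for s, m in mass.items() if s <= K)

def pl(K):
    """Plausibility: sum of the masses of all sets that intersect K."""
    K = frozenset(K)
    return sum(m for s, m in mass.items() if s & K)

print(round(bel({"a", "c"}), 2), round(pl({"a", "c"}), 2))   # 0.65 0.9
```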

30
Characteristics of DST

• Uncertainty Representation: DST is designed to handle situations
where information is uncertain, and it provides a way to represent
and reason with incomplete evidence.
• Conflict of Evidence : The DST allows for the combination of
multiple sources of evidence. It provides a rule, Dempster’s
rule of combination, to combine belief functions from
different sources.
• Decision-Making Ability : By deriving measures such as belief,
probability and plausibility from the combined belief function
it helps in decision making.

31
Advantages of DST
• DST has a much lower level of ignorance.
• Diagnostic hierarchies can be represented using it.
• A person dealing with such problems is free to reason about the
evidence.
• As more information is added, the uncertainty interval reduces.

32
Fuzzy Set
• In artificial intelligence, fuzzy set is a fundamental construct
that facilitates the modeling of vague and ambiguous data.
• It reflects the complexities of human cognition and linguistic
expressions.
• It offers a more sophisticated approach to handling
uncertainty.
• Lotfi A. Zadeh pioneered the development of fuzzy set theory in the
1960s; it aims to provide a mathematical framework for
representing and manipulating ambiguous data points within a set.

33
Fuzzy Set
• The integration of fuzzy set theory in artificial intelligence has
moved beyond traditional binary logic systems, enabling AI
algorithms to interpret and process imprecise data with enhanced
precision.
• Its significance lies in its ability to bridge the gap between
human cognitive reasoning and computational
methodologies.
• Fuzzy set theory plays a crucial role in facilitating human-like
reasoning within AI frameworks, providing a flexible and
adaptive approach to handling uncertain and ambiguous
inputs.

34
Fuzzy Set
• Advantages:-
• Flexibility: Fuzzy set theory offers a flexible framework for
modeling and reasoning under uncertainty, allowing for the
nuanced representation of imprecise data.
• Human-Centric Reasoning: It enables AI systems to emulate
human-like reasoning, enhancing their adaptability in
complex decision-making scenarios influenced by subjective
interpretations.
• Real-World Applicability: Fuzzy set theory's real-world
applications span diverse domains, including medicine,
engineering, finance, and control systems, highlighting its
practical utility.

35
Fuzzy Logic
• Fuzzy logic is a type of multi-valued logic in which the truth
value of a variable can be any real number between 0 and 1,
and not just the traditional values of true or false.
• It is used when dealing with imprecise or uncertain
information, and is a mathematical technique that
represents ambiguity or uncertainty in decision-making.
• Fuzzy logic allows for partial truths, where a statement is not
completely true or false, but rather partly true or false.
• Fuzzy logic is used in a wide range of applications, including
control systems, image processing, natural language
processing, medical diagnostics, and artificial intelligence.

36
Fuzzy Logic
• Fuzzy logic is a mathematical technique for expressing
ambiguity and uncertainty in decision making, allowing
partial truths and used in a wide range of applications.
• It is based on the concept of membership functions and its
implementation is done using fuzzy rules, which are if-then
statements that express the relationships between input
and output variables in a fuzzy way.

37
Fuzzy Logic
• Its Architecture contains four parts :
• RULE BASE: It contains the set of rules and the IF-THEN
conditions provided by the experts to govern the
decision-making system, on the basis of linguistic
information. Recent developments in fuzzy theory offer
several effective methods for the design and tuning of fuzzy
controllers. Most of these developments reduce the
number of fuzzy rules.
• FUZZIFICATION: It is used to convert inputs i.e. crisp
numbers into fuzzy sets. Crisp inputs are basically the exact
inputs measured by sensors and passed into the control
system for processing, such as temperature, pressure,
rpm’s, etc.

38
Fuzzy Logic
• INFERENCE ENGINE: It determines the matching
degree of the current fuzzy input with respect to each
rule and decides which rules are to be fired according
to the input field. Next, the fired rules are combined to
form the control actions.
• DEFUZZIFICATION: It is used to convert the fuzzy sets
obtained by the inference engine into a crisp value.
There are several defuzzification methods available
and the best-suited one is used with a specific expert
system to reduce the error.
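
As a rough illustration of how these four components fit together, here is a minimal Python sketch of a hypothetical fuzzy controller mapping temperature to fan speed; the membership sets, the rules, and the simple weighted-average defuzzification are all made-up choices, not a prescribed design:

```python
# A minimal fuzzy-controller sketch (illustrative temperature -> fan-speed
# example; the sets, rules and numbers are hypothetical).

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def control(temp):
    # 1. FUZZIFICATION: crisp temperature -> membership degrees.
    cold = tri(temp, 0, 10, 20)
    warm = tri(temp, 15, 25, 35)
    hot  = tri(temp, 30, 40, 50)

    # 2. RULE BASE + 3. INFERENCE: IF temp is X THEN fan speed is Y;
    #    each rule fires with the strength of its antecedent.
    rules = [(cold, 10), (warm, 50), (hot, 90)]  # consequents as fan %

    # 4. DEFUZZIFICATION: weighted average (a simple centroid-style method).
    total = sum(strength for strength, _ in rules)
    return sum(strength * out for strength, out in rules) / total if total else 0.0

print(control(33))   # 74.0, between the 'warm' (50) and 'hot' (90) outputs
```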

39
Fuzzy Logic

40
Fuzzy Logic
• Membership Functions
• A graph that defines how each point in an input space
maps to a membership value between 0 and 1.
• The input space is often called the universe of discourse or
the universal set (U) and contains all elements that may
be of interest in each particular application.
• There are three main types of fuzzifiers:
• Singleton fuzzifier
• Gaussian fuzzifier
• Trapezoidal or triangular fuzzifier
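
These fuzzifier shapes are easy to express as functions. The triangular form was already sketched as tri() in the controller example above; a hedged sketch of the singleton and Gaussian forms, with arbitrary example parameters, follows:

```python
import math

def singleton(x, x0):
    """Singleton fuzzifier: full membership only at the single point x0."""
    return 1.0 if x == x0 else 0.0

def gaussian(x, mean, sigma):
    """Gaussian fuzzifier: bell-shaped membership centred on the mean."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

print(singleton(25, 25))              # 1.0
print(round(gaussian(25, 20, 5), 3))  # 0.607
```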

41
Fuzzy Control
• This is a technology that brings human-like thinking to
control systems.
• It may not be designed to make accurate inferences, but it is
designed to make acceptable inferences.
• It can emulate human deductive thinking - the process
people use to infer a conclusion from what they know.
• Any uncertainty can be easily dealt with using fuzzy logic.

42
Fuzzy Logic
• Advantages of Fuzzy Logic Systems
• The system works with any type of input, even if the input
information is inaccurate, distorted, or noisy.
• Fuzzy logic systems are easy to build and understand.
• Fuzzy logic is based on the mathematical concepts of set
theory, and the reasoning behind it is quite simple.
• It mimics human reasoning and decision-making, providing
highly efficient solutions to complex problems in all areas of
life.
• The algorithms can be written with small amounts of data and
therefore require small amounts of memory.

43
Applications of Fuzzy Logic
• In the aerospace industry, it is used for altitude control of
spacecraft and satellites.
• Used in automotive systems such as speed control and traffic
control.
• It is used as a decision support system and personnel evaluation
system in large business enterprises.
• In the chemical industry, applications include pH control, drying,
and chemical distillation processes.
• It is used in various intensive applications of natural language
processing and artificial intelligence.
• It is widely used in modern control systems such as expert
systems.
• Fuzzy logic is used in neural networks because it mimics the way
humans make decisions and is much faster. It does this by
aggregating data and forming partial truths as fuzzy sets to
transform the data into something more meaningful.
44
Planning in AI
• Planning is an important part of Artificial Intelligence which deals
with the tasks and domains of a particular problem.
• Planning is considered the logical side of acting, i.e. it is about
deciding the tasks to be performed by the artificial intelligence
system and how the system functions under domain-independent
conditions.
• Planning in AI is the process of coming up with a series of actions
or procedures to accomplish a particular goal.
• It requires assessing the existing situation, identifying the intended
outcome, and developing a strategy that specifies the steps to
take to get there.

45
Planning in AI
• We require domain description, task specification, and goal
description for any planning system.
• A plan is considered a sequence of actions, and each action has
preconditions that must be satisfied before it can be applied, and
effects that can be positive or negative.

46
Planning in AI
• AI planning is of different types, each suitable for a particular
situation.
• Classical Planning: In this style of planning, a series of actions is
created to accomplish a goal in a predetermined setting. It assumes
that everything is static and predictable.
• Hierarchical Planning: By dividing large problems into smaller ones,
hierarchical planning makes planning more effective. A hierarchy of
plans is established, with higher-level plans supervising the
execution of lower-level plans.
• Temporal Planning: Temporal planning considers time restrictions
and interdependencies between actions. It ensures that the plan is
workable within a certain time limit by taking into account the
duration of tasks.

47
Components of Planning System in AI
• A planning system in AI is made up of many crucial parts that
cooperate to produce successful plans. These components of
planning system in AI consist of:
• Representation: The component that describes how the planning
problem is represented is called representation. The state space,
actions, objectives, and limitations must all be defined.
• Search: The search component explores the state space. To locate
good plans, a variety of search techniques, including depth-first
search and A* search, can be used.
• Heuristics: Heuristics are used to guide the search and estimate the
benefit of particular actions. They aid in identifying promising
paths and enhancing the effectiveness of the planning process.

48
1. Classical Planning
• 1. Forward State Space Planning (FSSP)
• Forward State Space Planning (FSSP) is a subset of classical planning.
• In this approach, the planning system starts from the initial state and
explores all possible actions to progress toward the goal.
• It systematically explores the state space by applying actions and
transitions from one state to another until the goal is achieved.
• It starts from the initial state and moves forward toward the goal.
• FSSP can suffer from a large branching factor, meaning the number
of potential actions can grow exponentially, leading to high
computational costs.
• Example: Solving a maze by exploring all possible paths from the
starting point.
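
A toy forward search can be sketched in a few lines of Python. This hypothetical example runs breadth-first search over an abstract state graph; the states, action names, and goal are invented for illustration:

```python
from collections import deque

# Hypothetical state space: state -> list of (action, next_state).
transitions = {
    "start": [("a", "s1"), ("b", "s2")],
    "s1":    [("c", "goal")],
    "s2":    [("d", "s1")],
    "goal":  [],
}

def forward_plan(initial, goal):
    """Breadth-first forward state-space search for a plan."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in transitions[state]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

print(forward_plan("start", "goal"))   # ['a', 'c']
```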

49
1. Classical Planning
• 2. Backward State Space Planning (BSSP)
• Backward State Space Planning (BSSP) is a variant of classical
planning, but it works in reverse.
• It starts from the goal state and works backward to find the
sequence of actions that leads to the initial state.
• This approach is often more efficient in certain cases where the
goal state is clearly defined and specific actions need to be
reversed to reach the initial state.
• It starts from the goal and works backward toward the initial
state.
• BSSP often has a smaller branching factor, making it more
computationally efficient.
• The algorithm may not always be sound, and inconsistencies
might arise, leading to failure in finding a solution.
• Example: Planning the steps needed to achieve a goal in chess,
working backward from the checkmate position.
50
2. Probabilistic Planning
• Probabilistic planning is designed to handle environments that
contain uncertainty.
• The AI system must account for the fact that actions may have
different possible outcomes with associated probabilities.
• It handles uncertainty by taking into account the likelihood of
various outcomes for each action.
• It requires more complex computations due to the need to
consider all possible action outcomes and their probabilities.
• Example: Autonomous vehicles navigating traffic, where road
conditions and other drivers’ behaviors are uncertain.

51
3. Reactive Planning
• Reactive planning is suitable for highly dynamic and unpredictable
environments.
• Rather than following a pre-defined plan, the AI agent continuously
reacts to changes in the environment in real-time.
• This approach doesn’t rely on creating a full plan ahead of time but
focuses on immediate responses to the current situation.
• It provides real-time adaptation: the AI reacts dynamically to
changes in the environment.
• It focuses on immediate actions rather than long-term planning.
• It faces challenges such as a lack of long-term strategy or foresight,
focusing only on immediate responses.
• Example: A robot avoiding obstacles in an unknown environment or
video game AI adapting to player actions.
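
The idea can be caricatured in a few lines; this hypothetical condition-action agent simply maps the current percept to an action, with no stored plan or lookahead (the rules and percept keys are invented for illustration):

```python
# A minimal reactive agent: condition-action rules, no lookahead.
def reactive_agent(percept):
    """Map the current percept directly to an action (hypothetical rules)."""
    if percept.get("obstacle_ahead"):
        return "turn_left"
    if percept.get("goal_visible"):
        return "move_toward_goal"
    return "move_forward"

print(reactive_agent({"obstacle_ahead": True}))    # turn_left
print(reactive_agent({"goal_visible": True}))      # move_toward_goal
```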

52
Goal Stack Planning (GSP)
• It is an AI technique that organizes multiple goals in a
specific order to manage them.
• It's useful when multiple goals or actions need to be
completed in sequence.
• In GSP, goals are stacked on top of each other, with the most
immediate goal at the top.
• The method works by starting at the goal state and working
backwards to the initial state.
• The preconditions for each state are solved using a stack
until the initial state is reached.

53
Goal Stack Planning (GSP)
• Some key features of GSP:
• Systematic approach: GSP breaks down complex tasks into
smaller, more manageable steps.
• Stacking: Goals are stacked on top of each other, with the
most immediate goal at the top.
• Working backwards: GSP starts at the goal state and works
backwards to the initial state.
• Solving goals and sub-goals: GSP uses a stack to solve goals
and sub-goals until the initial state is reached.
• Validating plans: The plan can be tested by applying each
action in the thread and seeing if the goal state is reached.

54
Goal Stack Planning (GSP)
• We start at the goal state and we try fulfilling the
preconditions required to achieve the initial state.
• These preconditions in turn have their own set of
preconditions, which are required to be satisfied first.
• We keep solving these goals and sub-goals until we finally
arrive at the Initial State.
• We make use of a stack to hold these goals that need to be
fulfilled, as well as the actions that we need to perform for the
same.

55
Goal Stack Planning (GSP)
• Apart from the Initial State and the Goal State, we maintain
a World State configuration as well.
• Goal Stack uses this world state to work its way from Goal
State to Initial State.
• World State on the other hand starts off as the Initial State
and ends up being transformed into the Goal state.
• At the end of this algorithm we are left with an empty stack
and a set of actions which helps us navigate from the Initial
State to the Goal State.

56
Goal Stack Planning (GSP)
• Predicates can be thought of as a statement which helps us
convey the information about a configuration in Blocks World.
• Given below is the list of predicates as well as their intended
meanings:
ON(A,B) : Block A is on B
ONTABLE(A) : A is on table
CLEAR(A) : Nothing is on top of A
HOLDING(A) : Arm is holding A.
ARMEMPTY : Arm is holding nothing

57
Goal Stack Planning (GSP)
• Using these predicates, initial state and goal state are as
follows:
• Initial State — ON(B,A) ∧ ONTABLE(A) ∧ ONTABLE(C) ∧
ONTABLE(D) ∧ CLEAR(B) ∧ CLEAR(C) ∧ CLEAR(D) ∧
ARMEMPTY

58
Goal Stack Planning (GSP)
• Goal State — ON(C,A) ∧ ON(B,D) ∧ ONTABLE(A) ∧
ONTABLE(D) ∧ CLEAR(B) ∧ CLEAR(C) ∧ ARMEMPTY

59
Goal Stack Planning (GSP)
• Thus a configuration can be thought of as a list of predicates
describing the current scenario.
• Operations performed by the robot arm:-
The Robot Arm can perform 4 operations:
STACK(X,Y) : Stacking Block X on Block Y
UNSTACK(X,Y) : Picking up Block X which is on top of Block Y
PICKUP(X) : Picking up Block X which is on top of the table
PUTDOWN(X) : Put Block X on the table
• All the four operations have certain preconditions which need
to be satisfied to perform the same.
• These preconditions are represented in the form of
predicates.

60
Goal Stack Planning (GSP)
• The effect of these operations is represented using two
lists ADD and DELETE.
• DELETE List contains the predicates which will cease to
be true once the operation is performed.
• ADD List on the other hand contains the predicates
which will become true once the operation is
performed.
• The Precondition, Add and Delete lists for each
operation are rather intuitive and are listed
below.

61
Goal Stack Planning (GSP)

62
Goal Stack Planning (GSP)
• For example, to perform the STACK(X,Y) operation i.e. to Stack
Block X on top of Block Y, No other block should be on top of
Y (CLEAR(Y)) and the Robot Arm should be holding the Block X
(HOLDING(X)).
• Once the operation is performed, these predicates will cease to
be true, thus they are included in DELETE List as well. (Note : It is
not necessary for the Precondition and DELETE List to be the
exact same).
• On the other hand, once the operation is performed, The robot
arm will be free (ARMEMPTY) and the block X will be on top of Y
(ON(X,Y)).
• The other 3 Operators follow similar logic, and this part is the
cornerstone of Goal Stack Planning.
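
The four operators can be sketched in Python as dictionaries of precondition, add and delete lists. The STACK entry follows the description above; the UNSTACK, PICKUP and PUTDOWN entries are filled in with the standard blocks-world definitions and should be read as a plausible reconstruction rather than a copied table:

```python
# STRIPS-style blocks-world operators: preconditions, ADD and DELETE lists.
# Predicates are represented as tuples, e.g. ("ON", "X", "Y").
def operators(x, y=None):
    return {
        "STACK": {
            "pre":    [("CLEAR", y), ("HOLDING", x)],
            "delete": [("CLEAR", y), ("HOLDING", x)],
            "add":    [("ARMEMPTY",), ("ON", x, y)],
        },
        "UNSTACK": {
            "pre":    [("ON", x, y), ("CLEAR", x), ("ARMEMPTY",)],
            "delete": [("ON", x, y), ("ARMEMPTY",)],
            "add":    [("HOLDING", x), ("CLEAR", y)],
        },
        "PICKUP": {
            "pre":    [("ONTABLE", x), ("CLEAR", x), ("ARMEMPTY",)],
            "delete": [("ONTABLE", x), ("ARMEMPTY",)],
            "add":    [("HOLDING", x)],
        },
        "PUTDOWN": {
            "pre":    [("HOLDING", x)],
            "delete": [("HOLDING", x)],
            "add":    [("ONTABLE", x), ("ARMEMPTY",)],
        },
    }

def apply_op(op, state, x, y=None):
    """Apply an operator to a state (a set of predicates) if its
    preconditions hold; return the new state, or None if they do not."""
    spec = operators(x, y)[op]
    if not all(p in state for p in spec["pre"]):
        return None
    return (state - set(spec["delete"])) | set(spec["add"])

state = {("ON", "B", "A"), ("ONTABLE", "A"), ("CLEAR", "B"), ("ARMEMPTY",)}
print(apply_op("UNSTACK", state, "B", "A"))
# {('ONTABLE', 'A'), ('CLEAR', 'B'), ('HOLDING', 'B'), ('CLEAR', 'A')}
```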

63
Hierarchical Planning
• Hierarchical planning allows AI systems to effectively handle
complex tasks and environments, and make decisions at
many levels of abstraction.
• Hierarchical planning allows AI systems to efficiently handle
relationships, prioritize tasks, and allocate resources in a
structured way, which can be extremely useful in complex
situations.

64
Components of a Hierarchy Plan
• An artificial intelligence (AI) hierarchy plan typically includes these
key elements:
• High-level goals: High-level goals provide the initial direction for
the planning process and guide you in breaking down tasks into
smaller sub-goals.
• Tasks: Tasks are actions that must be taken to achieve a high-level
goal.
• Sub-goals: Sub-goals are intermediate goals that contribute to the
achievement of a higher-level goal. Sub-goals are obtained by
breaking down a higher-level goal into smaller, more manageable
tasks.
• Hierarchical Structure: Hierarchical planning involves organizing
tasks and goals into a hierarchical structure, where higher-level
goals are broken down into sub-goals, which are further broken
down to basic actions that can be taken directly.

65
Components of a Hierarchy Plan
• Task dependencies and constraints : It considers dependencies
and constraints between tasks and sub-goals. These dependencies
determine the order in which tasks are performed and what
prerequisites must be met before a task can be performed.
• Plan Representation : In hierarchical planning, plans are
represented as hierarchical structures that capture the sequence
of tasks and sub-goals required to achieve a high-level goal. This
representation makes it easy to generate, execute, and monitor
efficient plans.
• Evaluate and optimize the plan : Hierarchical planning evaluates
and optimize the plan to ensure it meets desired criteria such as
efficiency, feasibility, resource utilization, etc. This may involve
iteratively refining the plan structure and adjusting task priorities
to improve performance.

66
Hierarchical planning methods in AI
• 1. Hierarchical Task Network (HTN)
• They are used to represent and reason about hierarchical task
decompositions.
• HTNs consist of a set of tasks organized into a hierarchy, where
higher-level tasks are decomposed into a set of lower-level tasks.
• HTNs provide a structured framework for planning and execution,
allowing the efficient generation of plans that satisfy complex
goals and constraints.
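
A toy HTN decomposition can be sketched as a recursive expansion of tasks into subtasks; the task network below (a hypothetical travel domain) is invented purely for illustration:

```python
# Hypothetical HTN: each compound task maps to an ordered list of subtasks;
# tasks with no entry are primitive actions.
methods = {
    "travel":      ["get_ticket", "board_train", "ride", "alight"],
    "get_ticket":  ["go_to_counter", "pay"],
}

def decompose(task):
    """Recursively expand a task into a sequence of primitive actions."""
    if task not in methods:          # primitive task
        return [task]
    plan = []
    for subtask in methods[task]:
        plan.extend(decompose(subtask))
    return plan

print(decompose("travel"))
# ['go_to_counter', 'pay', 'board_train', 'ride', 'alight']
```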

67
Hierarchical planning methods in AI
• 2. Hierarchical Reinforcement Learning (HRL)
• It is an extension of reinforcement learning that leverages
hierarchical structures to facilitate learning and decision-making in
complex environments.
• In HRL, tasks are organized into a hierarchy of sub-goals, and the
agent learns policies for achieving these sub-goals at different
levels of abstraction.
• By learning a hierarchy of policies, HRL enables more efficient
exploration and exploitation of the environment, leading to faster
learning and improved performance.

68
Hierarchical planning methods in AI
• 3. Hierarchical state space exploration
• It is a planning technique that explores the state space of a
problem hierarchically.
• Rather than directly exploring individual states, hierarchical
state-space search organizes the states into a hierarchical
structure, where higher-level states represent abstract
representations of the problem space.
• This hierarchical search results in more efficient state-space
searching and pruning, resulting in faster convergence and
improved scalability.

69
Example of Hierarchical Planning
• Hierarchical Planning in Autonomous Driving
• Consider the example of a self-driving car, where hierarchical
planning is employed as follows:
1. High-level goal: Follow the traffic rules and get from point A to
point B safely.
2. Main steps:
– Route planning: Determine the best route to B
– Obstacle Avoidance: Identify obstacles such as vehicles and
people.
– Traffic Light Recognition: Detects traffic lights and signs
– Lane Keeping: Stays in designated lane and adjusts vehicle
position to avoid collisions

70
Example of Hierarchical Planning
3. Minor steps:
– Route planning:
• Map Analysis: Analyze the map to find the best route
• Traffic Prediction: Predict traffic patterns to avoid traffic
jams.
• Obstacle Avoidance:
– Sensor Data Processing: Processes data from on-board sensors
to detect nearby objects
– Path Planning: Generating a path to avoid obstacles
• Traffic signal recognition:
– Image Recognition: Analyze images to detect traffic lights
– Interpreting traffic rules: interpreting and detecting traffic
signals and deciding what to do
71
Example of Hierarchical Planning
• Lane Keeping:
– Lane Detection: Detects lane markings using computer vision
algorithms
– Control system: Adjusts speed, steering and braking commands
to keep the vehicle within the detected lane.
4. Hierarchical planning:
– First-level planning : defines high-level goals and key steps,
such as route planning, obstacle avoidance, traffic signal
recognition, and lane keeping.
– Second level planning : As mentioned above, break down each
major step into subtasks and minor steps to address the
complexity of each component.
– Third Level Planning : Break down the small steps even further
to create the detailed actions and algorithms needed to execute
effectively.

72
Hierarchical Planning Technology for
Autonomous Driving
• In autonomous driving, hierarchical planning techniques are
important for safe navigation.
• HTN Planning : Decomposes route planning into subtasks such as
map analysis and traffic prediction to ensure optimal routes.
• Hierarchical Reinforcement Learning (HRL) : Learns hierarchical
policies for obstacle avoidance and adjusts the vehicle's trajectory
to avoid collisions.
• Hierarchical Task Network (HTN) : Decomposes traffic signal
recognition into subtasks for accurate detection and rule
interpretation.
• Hierarchical State Space Exploration: Explores the lane keeping
state space and adjusts vehicle commands for an effective lane
keeping strategy.

73
Benefits of Hierarchical Planning
• User capabilities: It enables planning and reasoning at different
levels of abstraction, enabling users to deal effectively with
complex tasks and situations.
• Internal flexibility: It allows the plan to be adjusted to reflect
changes in the environment or goals, making the plan more robust
and adaptive.
• Personal Reuse and Abstraction: Adopting a hierarchy of activities
or sub-goals allows plans to be reused and abstracted, improving
the effectiveness of plans and reducing the need for duplicate
plans.
• High-level reasoning adaptability: AI systems can make strategic
decisions and coordinate actions at higher levels of abstraction,
thus facilitating high-level reasoning and decision-making.

74
