AI Unit 4 Digital Notes
Please read this disclaimer before proceeding:
This document is confidential and intended solely for the educational purpose of RMK Group of Educational Institutions. If you have received this document through email in error, please notify the system manager. This document contains proprietary information and is intended only for the respective group / learning community as intended. If you are not the addressee you should not disseminate, distribute or copy through e-mail. Please notify the sender immediately by e-mail if you have received this document by mistake and delete this document from your system. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited.
22AI301 ARTIFICIAL INTELLIGENCE
(Lab Integrated)
Department: CSE
Batch/Year: 2023-2027 / II
Created by:
Dr. R. Sasikumar
Dr. M. Arun Manicka Raja
Dr. G. Indra
Dr. M. Raja Suguna
Mr. K. Mohanasundaram
Mrs. S. Logesswari
Date: 25.01.2024
Table of Contents
1. Course Objectives
2. Pre-Requisites
3. Syllabus
4. Course Outcomes
6. Lecture Plan
8. Lecture Notes
9. Assignments
11. Part B Questions
1. COURSE OBJECTIVES
3. PRE-REQUISITES
PRE-REQUISITE CHART
21CS201 - Data Structures
21MA402 - Probability and Statistics
21CS502 - Artificial Intelligence
4. SYLLABUS
22AI301 ARTIFICIAL INTELLIGENCE (Lab Integrated)
L T P C: 3 0 2 4
Lab Programs:
1. Implement basic search strategies – 8-Puzzle, 8-Queens problem.
2. Implement Breadth-First Search & Depth-First Search algorithms (a sketch follows this list).
3. Implement the Water Jug problem.
4. Solve the Tic-Tac-Toe problem.
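As a starting point for lab program 2, the following is a minimal Python sketch of BFS and DFS on a toy graph. The graph, its adjacency-dict encoding, and the function names are illustrative assumptions, not the prescribed lab solution.

from collections import deque

def bfs(start, goal, neighbors):
    # Breadth-first search: explores level by level, so the first path
    # that reaches the goal has the fewest steps.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

def dfs(start, goal, neighbors, path=None, visited=None):
    # Depth-first search: dives along one branch before backtracking,
    # so the returned path is not necessarily the shortest.
    path = path or [start]
    visited = visited or {path[0]}
    if path[-1] == goal:
        return path
    for nxt in neighbors(path[-1]):
        if nxt not in visited:
            visited.add(nxt)
            found = dfs(start, goal, neighbors, path + [nxt], visited)
            if found:
                return found
    return None

# Hypothetical state graph: each state maps to its successor states.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E'], 'E': []}
print(bfs('A', 'E', lambda n: graph[n]))  # ['A', 'B', 'D', 'E']
print(dfs('A', 'E', lambda n: graph[n]))  # ['A', 'B', 'D', 'E']

The same skeleton extends to the 8-puzzle or the water-jug problem by swapping in the appropriate successor function.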
Course Outcomes (COs) mapped to Programme Outcomes (PO1-PO12) and Programme Specific Outcomes (PSO1-PSO3); '-' denotes no correlation:

CO        | K-Level | PO1 | PO2 | PO3 | PO4-PO12 | PSO1-PSO3
21AI401.1 | K2      | 3   | 3   | 1   | -        | -
21AI401.2 | K3      | 3   | 2   | 1   | -        | -
21AI401.3 | K3      | 3   | 2   | 1   | -        | -
21AI401.4 | K3      | 3   | 3   | 2   | -        | -
21AI401.5 | K2      | 3   | 2   | 2   | -        | -
LECTURE PLAN – UNIT IV

Sl. No | Topic                                                  | Pertaining CO(s) | Taxonomy Level | Mode of Delivery
1      | Knowledge Representation - Ontological Engineering    | CO4              | K3             | MD1, MD5
2      | Categories & Objects, Events                           | CO4              | K3             | MD1, MD5
3      | Mental Objects and Modal Logic                         | CO4              | K3             | MD1, MD5
4      | Reasoning Systems - Reasoning with Default Information | CO4              | K3             | MD1, MD5
9      | Non-Deterministic Domains                              | CO4              | K3             | MD1, MD5
10     | Time, Schedule & Resources, Analysis                   | CO4              | K3             | MD1, MD5
Activity Based Learning
1. Play a game where the other person thinks of an animal you must identify. Does it have feathers? Does it have fur? Does it walk on four legs? Is it black? In a small number of questions we narrow down the possibilities until we know what it is. The more we know about the features of animals, the easier it is to narrow down. A machine learning algorithm learns about the features of the specific things it classifies. Each feature rules out some possibilities but leaves others. Returning to our robot that makes expressions: Was it a loud sound? Yes. Was it sudden? No. … The robot should look unhappy. The right combination of features allows the algorithm to narrow the possibilities down to one thing. The more data the algorithm was trained on, the more patterns it can spot. The more patterns it spots, the more rules about features it can create. The more rules about features it has, the more questions it can base its decision on. A small sketch of this feature-elimination idea follows.
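To make the narrowing-down idea concrete, here is a tiny Python sketch of feature elimination over a hand-made animal table. The animals and features are invented for illustration; a real learner would induce such rules from data.

# Hypothetical feature table: each animal mapped to the features it has.
animals = {
    'parrot': {'feathers', 'two legs'},
    'dog':    {'fur', 'four legs'},
    'cat':    {'fur', 'four legs', 'black'},
}

def narrow_down(candidates, feature, answer):
    # Keep only candidates consistent with one yes/no feature question.
    if answer:
        return {a for a in candidates if feature in animals[a]}
    return {a for a in candidates if feature not in animals[a]}

candidates = set(animals)
candidates = narrow_down(candidates, 'fur', True)     # rules out parrot
candidates = narrow_down(candidates, 'black', False)  # rules out cat
print(candidates)  # {'dog'}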
Materials
LECTURE NOTES
UNIT IV SYLLABUS
Ontological Engineering - Categories and Objects - Events - Mental Events and Mental Objects - Reasoning Systems for Categories - Reasoning with Default Information - Classical Planning - Algorithms for Classical Planning - Heuristics for Planning - Hierarchical Planning - Non-Deterministic Domains - Time, Schedule and Resources - Analysis
4.1 ONTOLOGICAL ENGINEERING
In “toy” domains, the choice of representation is not that important; many choices
will work. Complex domains such as shopping on the Internet or driving a car in
traffic require more general and flexible representations. This chapter shows how to
create these representations, concentrating on general concepts—such as Events,
Time, Physical Objects, and Beliefs— that occur in many different domains.
Before considering the ontology further, we should state one important caveat. We
have elected to use first-order logic to discuss the content and organization of
knowledge, although certain aspects of the real world are hard to capture in FOL.
The principal difficulty is that most generalizations have exceptions or hold only to a
degree. For example, although “tomatoes are red” is a useful rule, some tomatoes
are green, yellow, or orange. Similar exceptions can be found to almost all the rules
in this chapter. The ability to handle exceptions and uncertainty is extremely
important, but is orthogonal to the task of understanding the general ontology.
For any special-purpose ontology, it is possible to make changes like these to move
toward greater generality. An obvious question then arises: do all these ontologies
converge on a general-purpose ontology? After centuries of philosophical and
computational investigation, the answer is “Maybe.” In this section, we present one
general-purpose ontology that synthesizes ideas from those centuries. Two major
characteristics of general-purpose ontologies distinguish them from collections of
special-purpose ontologies:
• A general-purpose ontology should be applicable in more or less any special-purpose domain, with the addition of domain-specific axioms.
• In any sufficiently demanding domain, different areas of knowledge must be unified, because reasoning and problem solving could involve several areas simultaneously.
Those ontologies that do exist have been created along four routes:
1. By a team of trained ontologists/logicians, who architect the ontology and write axioms. The CYC system was mostly built this way (Lenat and Guha, 1990).
2. By importing categories, attributes, and values from an existing database or databases. DBPEDIA was built by importing structured facts from Wikipedia (Bizer et al., 2007).
3. By parsing text documents and extracting information from them. TEXTRUNNER was built by reading a large corpus of Web pages (Banko and Etzioni, 2008).
4. By enticing unskilled amateurs to enter commonsense knowledge. The OPENMIND system was built by volunteers who proposed facts in English (Singh et al., 2002; Chklovski and Gil, 2005).
4.2 CATEGORIES AND OBJECTS
There are two choices for representing categories in first-order logic: predicates and objects. That is, we can use the predicate Basketball(b), or we can reify the category as an object, Basketballs. We could then say Member(b, Basketballs), which we will abbreviate as b Є Basketballs, to say that b is a member of the category of basketballs. We say Subset(Basketballs, Balls), abbreviated as Basketballs ⊂ Balls, to say that Basketballs is a subcategory of Balls. We will use subcategory, subclass, and subset interchangeably.
Categories serve to organize and simplify the knowledge base through inheritance.
If we say that all instances of the category Food are edible, and if we assert that
Fruit is a subclass of Food and Apples is a subclass of Fruit , then we can infer that
every apple is edible. We say that the individual apples inherit the property of
edibility, in this case from their membership in the Food category. Subclass relations
organize categories into a taxonomy, or taxonomic hierarchy.
First-order logic makes it easy to state facts about categories, either by relating objects to categories or by quantifying over their members:
• An object is a member of a category: BB9 Є Basketballs
• A category is a subclass of another category: Basketballs ⊂ Balls
• All members of a category have some properties: (x Є Basketballs) ⇒ Spherical(x)
• A category as a whole has some properties: Dogs Є DomesticatedSpecies
Notice that because Dogs is a category and is a member of Domesticated Species ,
the latter must be a category of categories. Of course there are exceptions to many
of the above rules (punctured basketballs are not spherical); we deal with these
exceptions later. Although subclass and member relations are the most important
ones for categories, we also want to be able to state relations between categories
that are not subclasses of each other.
For example, if we just say that Males and Females are subclasses of Animals, then
we have not said that a male cannot be a female. We say that two or more
categories are
disjoint if they have no members in common. And even if we know that males and
females are disjoint, we will not know that an animal that is not a male must be a
female, unless we say that males and females constitute an exhaustive
decomposition of the animals. A disjoint exhaustive decomposition is known as a
partition.
Disjoint({Animals, Vegetables})
ExhaustiveDecomposition({Americans, Canadians, Mexicans}, NorthAmericans)
Partition({Males, Females}, Animals)
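These three assertions can be checked extensionally if we temporarily treat categories as plain sets of their members. The following Python sketch is a simplified illustration (the set encoding and names are assumptions):

def disjoint(categories):
    # True if no two categories share a member.
    seen = set()
    for c in categories:
        if seen & c:
            return False
        seen |= c
    return True

def exhaustive_decomposition(categories, whole):
    # True if the categories together cover every member of 'whole'.
    return whole <= set().union(*categories)

def partition(categories, whole):
    # A partition is a disjoint exhaustive decomposition.
    return disjoint(categories) and exhaustive_decomposition(categories, whole)

males, females = {'rex', 'tom'}, {'bella'}
animals = {'rex', 'tom', 'bella'}
print(partition([males, females], animals))  # True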
Categories can also be defined by providing necessary and sufficient conditions for membership. For example, a bachelor is an unmarried adult male:
x Є Bachelors ⇔ Unmarried(x) ∧ Adult(x) ∧ Male(x) .
Objects can also be grouped into PartOf hierarchies, for example:
PartOf(Bucharest, Romania), PartOf(Romania, EasternEurope), PartOf(EasternEurope, Europe), PartOf(Europe, Earth) .
The PartOf relation is transitive and reflexive; that is,
PartOf(x, y) ∧ PartOf(y, z) ⇒ PartOf(x, z) .
PartOf(x, x) .
Therefore, we can conclude PartOf(Bucharest, Earth). Categories of composite objects are often characterized by structural relations among parts. An object is composed of the parts in its PartPartition and can be viewed as deriving some properties from those parts. For example, the mass of a composite object is the sum of the masses of the parts. It is also useful to define composite objects with definite parts but no particular structure; for this we use the notion of a bunch. For example, if the apples are Apple1, Apple2, and Apple3, then BunchOf({Apple1, Apple2, Apple3}) denotes the composite object with the three apples as parts (not elements).
We can then use the bunch as a normal, albeit unstructured, object. Notice that BunchOf({x}) = x. Furthermore, BunchOf(Apples) is the composite object consisting of all apples—not to be confused with Apples, the category or set of all apples. We can define BunchOf in terms of the PartOf relation. Obviously, each element of s is part of BunchOf(s):
∀x x Є s ⇒ PartOf(x, BunchOf(s)) .
4.2.2 Measurements
In both scientific and commonsense theories of the world, objects have height, mass, cost, and so on. The values that we assign for these properties are called measures. Ordinary quantitative measures are quite easy to represent. We imagine that the universe includes abstract “measure objects,” such as the length of a given line segment. If the line segment is called L1, we can write
Length(L1) = Inches(1.5) = Centimeters(3.81) .
Conversion between units is expressed by axioms such as
Centimeters(2.54 × d) = Inches(d) .
Similar axioms can be written for pounds and kilograms, seconds and days, and
dollars and cents. Measures can be used to describe objects as follows:
d Є Days ⇒ Duration(d)=Hours(24)
Note that $(1) is not a dollar bill! One can have two dollar bills, but there is only one
object named $(1). Note also that, while Inches(0) and Centimeters(0) refer to the
same zero length, they are not identical to other zero measures, such as
Seconds(0). The field of qualitative physics is a subfield of AI that investigates how
to reason about physical systems without plunging into detailed equations and
numerical simulations.
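One way to see how measure objects behave is to normalize every measure to a base unit, so that Inches(1.5) and Centimeters(3.81) turn out to denote the same abstract object. A minimal Python sketch, assuming metres as the base unit (the table name and structure are illustrative):

# Conversion factors to the base unit, matching Centimeters(2.54 x d) = Inches(d).
TO_METRES = {'inches': 0.0254, 'centimeters': 0.01, 'metres': 1.0}

def length(magnitude, unit):
    # Return the measure as a length in metres, our stand-in measure object.
    return magnitude * TO_METRES[unit]

# Length(L1) = Inches(1.5) = Centimeters(3.81): both denote one measure object.
assert abs(length(1.5, 'inches') - length(3.81, 'centimeters')) < 1e-9
# Inches(0) and Centimeters(0) denote the same zero length.
assert length(0, 'inches') == length(0, 'centimeters')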
For example, suppose I have some butter and an aardvark in front of me. I can say
there is one aardvark, but there is no obvious number of “butter-objects,” because
any part of a butter-object is also a butter-object, at least until we get to very small
parts indeed. This is the major distinction between stuff and things. If we cut an
aardvark in half, we do not get two aardvarks (unfortunately). The English language
distinguishes clearly between stuff and things. We say “an aardvark,” but, except in
pretentious California restaurants, one cannot say “a butter.” Linguists distinguish
between count nouns, such as aardvarks, holes, and theorems, and mass nouns,
such as butter, water, and energy.
With some caveats about very small parts that we omit for now, any part of a butter-object is also a butter-object:
b Є Butter ∧ PartOf(p, b) ⇒ p Є Butter .
We can also say that butter melts at around 30 degrees centigrade:
b Є Butter ⇒ MeltingPoint(b, Centigrade(30)) .
We could go on to say that butter is yellow, is less dense than water, is soft at room
temperature, has a high fat content, and so on. On the other hand, butter has no
particular size, shape, or weight. We can define more specialized categories of
butter such as UnsaltedButter , which is also a kind of stuff. Note that the category
PoundOfButter , which includes as members all butter-objects weighing one pound,
is not a kind of stuff. If we cut a pound of butter in half, we do not, alas, get two
pounds of butter. What is actually going on is this: some properties are intrinsic:
they belong to the very substance of the object, rather than to the object as a
whole. When you cut an instance of stuff in half, the two pieces retain the intrinsic
properties—things like density, boiling point, flavor, color, ownership, and so on. On
the other hand, their extrinsic properties—weight, length, shape, and so on—are not
retained under subdivision.
A category of objects that includes in its definition only intrinsic properties is then a
substance, or mass noun; a class that includes any extrinsic properties in its
definition is a count noun. The category Stuff is the most general substance
category, specifying no intrinsic properties. The category Thing is the most general
discrete object category, specifying no extrinsic properties.
4.3 EVENTS
Consider a continuous action, such as filling a bathtub. Situation calculus can say
that the tub is empty before the action and full when the action is done, but it can’t
talk about what happens during the action. It also can’t describe two actions
happening at the same time—such as brushing one’s teeth while waiting for the tub
to fill. To handle such cases we introduce an alternative formalism known as event
calculus, which is based on points of time rather than on situations.
Event calculus reifies fluents and events. The fluent At(Shankar , Berkeley) is an object that
refers to the fact of Shankar being in Berkeley, but does not by itself say anything about
whether it is true. To assert that a fluent is actually true at some point in time we use the
predicate T, as in T(At(Shankar , Berkeley), t). Events are described as instances of event
categories. The event E1 of Shankar flying from San Francisco to Washington, D.C. is
described as
E1 Є Flyings(Shankar, SF, DC) .
We then use Happens(E1, i) to say that the event E1 took place over the time interval i, and
we say the same thing in functional form with Extent(E1)=i. We represent time intervals by
a (start, end) pair of times; that is, i = (t1, t2) is the time interval that starts at t1 and ends
at t2. The complete set of predicates for one version of the event calculus is:

T(f, t) — Fluent f is true at time t
Happens(e, i) — Event e happens over the time interval i
Initiates(e, f, t) — Event e causes fluent f to start to hold at time t
Terminates(e, f, t) — Event e causes fluent f to cease to hold at time t
Clipped(f, i) — Fluent f ceases to be true at some point during interval i
Restored(f, i) — Fluent f becomes true sometime during interval i

The axioms relating these predicates are:
Happens(e, (t1, t2)) ∧ Initiates(e, f, t1) ∧ ¬Clipped(f, (t1, t)) ∧ t1 < t ⇒ T(f, t)
Happens(e, (t1, t2)) ∧ Terminates(e, f, t1) ∧ ¬Restored(f, (t1, t)) ∧ t1 < t ⇒ ¬T(f, t)
Clipped(f, (t1, t2)) ⇔ ∃ e, t, t3 Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Terminates(e, f, t)
Restored(f, (t1, t2)) ⇔ ∃ e, t, t3 Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Initiates(e, f, t)

We can extend T to say that a fluent holds over an interval if it holds on every point within the interval:
T(f, (t1, t2)) ⇔ [∀t t1 ≤ t < t2 ⇒ T(f, t)]
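The first axiom can be turned into a small executable check: a fluent holds at t if some earlier event initiated it and no event terminated it in between. In this simplified Python sketch the event list, times, and fluent names are invented for illustration:

INITIATES, TERMINATES = 'initiates', 'terminates'

# Each entry is (event, fluent, effect, time).
events = [
    ('E1', 'At(Shankar, Berkeley)', INITIATES, 1),
    ('E2', 'At(Shankar, Berkeley)', TERMINATES, 5),
]

def clipped(fluent, t1, t2):
    # Fluent ceases to hold at some point in the interval (t1, t2).
    return any(f == fluent and eff == TERMINATES and t1 <= t < t2
               for _, f, eff, t in events)

def holds(fluent, t):
    # T(f, t): some earlier event initiated f, and f was not clipped since.
    return any(f == fluent and eff == INITIATES and t0 < t
               and not clipped(fluent, t0, t)
               for _, f, eff, t0 in events)

print(holds('At(Shankar, Berkeley)', 3))  # True
print(holds('At(Shankar, Berkeley)', 6))  # False, terminated at time 5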
4.3.1 Processes
The events we have seen so far are what we call discrete events—they have a
definite structure. Shankar’s trip has a beginning, middle, and end. If interrupted
halfway, the event would be something different—it would not be a trip from San
Francisco to Washington, but instead a trip from San Francisco to somewhere over
Kansas. On the other hand, the category of events denoted by Flyings has a
different quality. If we take a small interval of Shankar’s flight, say, the third 20-minute segment (while he waits anxiously for a bag of peanuts), that event is still a member of Flyings. In fact, this is true for any subinterval. Categories of events with this property are called process categories or liquid event categories. Any process e that happens over an interval also happens over any subinterval:
(e Є Processes) ∧ Happens(e, (t1, t4)) ∧ (t1 < t2 < t3 < t4) ⇒ Happens(e, (t2, t3))
4.3.2 Time Intervals
Two intervals Meet if the end time of the first equals the start time of the second. The complete set of interval relations, as proposed by Allen (1983), is shown graphically in Figure 3.14 and logically below:
Meet(i, j) ⇔ End(i) = Begin(j)
Before(i, j) ⇔ End(i) < Begin(j)
After(j, i) ⇔ Before(i, j)
During(i, j) ⇔ Begin(j) < Begin(i) ∧ End(i) < End(j)
Overlap(i, j) ⇔ Begin(i) < Begin(j) < End(i) < End(j)
Begins(i, j) ⇔ Begin(i) = Begin(j)
Finishes(i, j) ⇔ End(i) = End(j)
Equals(i, j) ⇔ Begin(i) = Begin(j) ∧ End(i) = End(j)
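These definitions translate almost directly into code. A simplified Python sketch, covering only the basic relations (inverse relations are lumped together), classifies two intervals given as (begin, end) pairs:

def allen_relation(i, j):
    # Classify the Allen relation between intervals i and j.
    (s1, e1), (s2, e2) = i, j
    if e1 < s2:
        return 'Before(i, j)'
    if e1 == s2:
        return 'Meet(i, j)'
    if s1 == s2 and e1 == e2:
        return 'Equals(i, j)'
    if s1 == s2 and e1 < e2:
        return 'Begins(i, j)'
    if e1 == e2 and s1 > s2:
        return 'Finishes(i, j)'
    if s1 > s2 and e1 < e2:
        return 'During(i, j)'
    if s1 < s2 < e1 < e2:
        return 'Overlap(i, j)'
    return 'inverse of one of the basic relations'

print(allen_relation((1, 3), (3, 5)))  # Meet(i, j)
print(allen_relation((1, 4), (2, 6)))  # Overlap(i, j)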
4.4 MENTAL EVENTS AND MENTAL OBJECTS
The knowledge discussed so far is knowledge about the world; agents also need knowledge about their own knowledge and about the mental states of other agents. For some questions an agent can derive the answer by thinking harder; on the other hand, if the question were “Is your mother sitting down right now?” then the agent should realize that thinking harder is unlikely to help. We begin with the propositional attitudes that an agent can have toward mental objects: attitudes such as Believes, Knows, Wants, Intends, and Informs. The difficulty is that these attitudes do not behave like “normal” predicates. For example, suppose we try to assert that Lois knows that Superman can fly:
Knows(Lois, CanFly(Superman)) .
One minor issue with this is that we normally think of CanFly(Superman) as a sentence, but here it appears as a term. That issue can be patched up just by reifying CanFly(Superman), making it a fluent. A more serious problem is that, if it is true that Superman is Clark Kent, then we must conclude that Lois knows that Clark can fly:
(Superman = Clark) ∧ Knows(Lois, CanFly(Superman)) ⊨ Knows(Lois, CanFly(Clark)) .
This is an unwanted conclusion: Lois may know that Superman can fly without knowing that Clark can fly.
Modal logic is designed to address this problem. Regular logic is concerned with a
single modality, the modality of truth, allowing us to express “P is true.” Modal logic
includes special modal operators that take sentences (rather than terms) as
arguments. For example, “A knows P” is represented with the notation KAP, where K
is the modal operator for knowledge. It takes two arguments, an agent (written as
the subscript) and a sentence. The syntax of modal logic is the same as first-order
logic, except that sentences can also be formed with modal operators.
In general, a knowledge atom KAP is true in world w if and only if P is true in every
world accessible from w. The truth of more complex sentences is derived by
recursive application of this rule and the normal rules of first-order logic. That
means that modal logic can be used to reason about nested knowledge sentences:
what one agent knows about another agent’s knowledge. Figure 3.15 shows some
possible worlds for this domain, with accessibility relations for Lois and Superman.
In the TOP-LEFT diagram, it is common knowledge that Superman knows his own
identity, and neither he nor Lois has seen the weather report. So in w0 the worlds
w0 and w2 are accessible to Superman; maybe rain is predicted, maybe not. For Lois
all four worlds are accessible from each other; she doesn’t know anything about the
report or whether Clark is Superman. But she does know that Superman knows whether he is Clark, because in every world that is accessible to Lois, either Superman knows I (the proposition that he is Clark), or he knows ¬I. Lois does not know which is the case, but either way she knows Superman knows.
In the TOP-RIGHT diagram it is common knowledge that Lois has seen the weather report. So in w4 she knows rain is predicted and in w6 she knows rain is not predicted. Superman does not know the report, but he knows that Lois knows, because in every world that is accessible to him, either she knows R or she knows ¬R.
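The accessibility-based truth condition for K can be prototyped directly. In the following Python sketch the worlds, propositions, and accessibility sets form a small invented model (not the exact model of the figure): I stands for “Superman is Clark” and R for “rain is predicted”.

# Each world assigns truth values to the propositions.
worlds = {
    'w0': {'R': False, 'I': True},
    'w2': {'R': True,  'I': True},
}
# access[agent][world] = the worlds the agent considers possible from there.
access = {
    'Superman': {'w0': {'w0', 'w2'}, 'w2': {'w0', 'w2'}},
}

def knows(agent, prop, world):
    # K_agent(prop) holds at 'world' iff prop is true in every accessible world.
    return all(worlds[w][prop] for w in access[agent][world])

print(knows('Superman', 'I', 'w0'))  # True: I holds in both w0 and w2
print(knows('Superman', 'R', 'w0'))  # False: R is false in w0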
Modal logic solves some tricky issues with the interplay of quantifiers and
knowledge. The English sentence “Bond knows that someone is a spy” is
ambiguous. The first reading is that there is a particular someone who Bond knows
is a spy; we can write this as
∃x KBond Spy(x) ,
which in modal logic means that there is an x that, in all accessible worlds, Bond
knows to be a spy. The second reading is that Bond just knows that there is at least
one spy:
KBond ∃x Spy(x) .
The modal logic interpretation is that in each accessible world there is an x that is a
spy, but it need not be the same x in each world. Now that we have a modal
operator for knowledge, we can write axioms for it. First, we can say that agents are
able to draw deductions; if an agent knows P and knows that P implies Q, then the
agent knows Q:
(KaP ∧ Ka(P ⇒ Q)) ⇒ KaQ.
From this (and a few other rules about logical identities) we can establish that KA(P
∨ ¬P) is a tautology; every agent knows every proposition P is either true or false.
On the other hand, (KAP) ∨ (KA¬P) is not a tautology; in general, there will be lots
of propositions that an agent does not know to be true and does not know to be
false. It is said (going back to Plato) that knowledge is justified true belief. That is, if
it is true, if you believe it, and if you have an unassailably good reason, then you
know it. That
means that if you know something, it must be true, and we have the axiom: KaP ⇒
P.
Furthermore, logical agents should be able to introspect on their own knowledge. If
they know something, then they know that they know it:
KaP ⇒ Ka(KaP) .
4.5 REASONING SYSTEMS FOR CATEGORIES
Semantic networks
The notation that semantic networks provide for certain kinds of sentences is often
more convenient, but if we strip away the “human interface” issues, the underlying
concepts—objects, relations, quantification, and so on—are the same. There are
many variants of semantic networks, but all are capable of representing individual
objects, categories of objects, and relations among objects. A typical graphical
notation displays object or category names in ovals or boxes, and connects them
with labeled links.
For example, Figure 3.15 has a MemberOf link between Mary and FemalePersons, corresponding to the logical assertion Mary Є FemalePersons; similarly, the SisterOf link between Mary and John corresponds to the assertion SisterOf(Mary, John). We can connect categories using SubsetOf links, and so on. It is such fun drawing bubbles and arrows that one can get carried away. For example, we know that persons have female persons as mothers, so can we draw a HasMother link from Persons to FemalePersons? The answer is no, because HasMother is a relation between a person and his or her mother, and categories do not have mothers. For this reason, we have used a special notation—the double-boxed link—in Figure 3.15. We might also want to assert that persons have two legs—that is,
∀x x Є Persons ⇒ Legs(x, 2) .
As before, we need to be careful not to assert that a category has legs; the single-boxed link in Figure 3.15 is used to assert properties of every member of a category.
Thus, to find out how many legs Mary has, the inheritance algorithm follows the
MemberOf link from Mary to the category she belongs to, and then follows SubsetOf
links up the hierarchy until it finds a category for which there is a boxed Legs link—
in this case, the Persons category. The simplicity and efficiency of this inference
mechanism, compared with logical theorem proving, has been one of the main
attractions of semantic networks.
Figure 3.15 A semantic network with four objects (John, Mary, 1, and 2) and four
categories.
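The inheritance algorithm just described—follow the MemberOf link, then climb SubsetOf links until a boxed property link is found—can be sketched in a few lines of Python. The dictionaries encode a fragment of the figure and are illustrative:

member_of = {'Mary': 'FemalePersons', 'John': 'MalePersons'}
subset_of = {'FemalePersons': 'Persons', 'MalePersons': 'Persons'}
boxed = {'Persons': {'Legs': 2}}  # boxed links: properties of every member

def inherit(obj, prop):
    # Follow MemberOf, then SubsetOf links upward until the property is found.
    category = member_of.get(obj)
    while category is not None:
        if prop in boxed.get(category, {}):
            return boxed[category][prop]
        category = subset_of.get(category)
    return None

print(inherit('Mary', 'Legs'))  # 2, inherited from the Persons category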
Description logics
The syntax of first-order logic is designed to make it easy to say things about
objects. Description logics are notations that are designed to make it easier to
describe definitions and properties of categories. Description logic systems evolved
from semantic networks in response to pressure to formalize what the networks
mean while retaining the emphasis on taxonomic structure as an organizing
principle.
The principal inference tasks for description logics are subsumption (checking if one category is a subset of another by comparing their definitions) and classification (checking whether an object belongs to a category). Some systems also include consistency of a category definition—whether the membership criteria are logically satisfiable. As an example of the syntax of a subset of the CLASSIC language: to say that bachelors are unmarried adult males we would write
Bachelor = And(Unmarried, Adult, Male) .
The equivalent in first-order logic would be
Bachelor(x) ⇔ Unmarried(x) ∧ Adult(x) ∧ Male(x) .
CLASSIC can also describe quite elaborate categories concisely—for example, the set of men with at least three sons who are all unemployed and married to doctors, and at most two daughters who are all professors in physics or math departments. We leave it as an exercise to translate this into first-order logic. Perhaps the most important aspect of description logics is their emphasis on tractability of inference. A problem instance is solved by describing it and then asking if it is subsumed by one of several possible solution categories. This sounds wonderful in principle, until one realizes that it can only have one of two consequences: either hard problems cannot be stated at all, or they require exponentially large descriptions!
However, the tractability results do shed light on what sorts of constructs cause problems and thus help the user to understand how different representations behave. For example, description logics usually lack negation and disjunction. Each forces first-order logical systems to go through a potentially exponential case analysis in order to ensure completeness. CLASSIC allows only a limited form of disjunction in the Fills and OneOf constructs, which permit disjunction over explicitly enumerated individuals but not over descriptions. With disjunctive descriptions, nested definitions can lead easily to an exponential number of alternative routes by which one category can subsume another.
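For purely conjunctive descriptions with no negation or disjunction, subsumption really is cheap, which is the tractability point made above. A deliberately simplified Python sketch—real description logics are far richer than this—treats a description as a set of atomic conjuncts:

# Bachelor = And(Unmarried, Adult, Male) as a set of atomic concepts.
Bachelor = frozenset({'Unmarried', 'Adult', 'Male'})
Male = frozenset({'Male'})

def subsumes(general, specific):
    # 'general' subsumes 'specific' if every conjunct of 'general'
    # also appears in 'specific': a simple superset test.
    return general <= specific

print(subsumes(Male, Bachelor))  # True: every bachelor is a male
print(subsumes(Bachelor, Male))  # False: not every male is a bachelor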
4.6 REASONING WITH DEFAULT INFORMATION
Suppose we conclude, by default, that a parked car has four wheels even though only three are visible. If new evidence arrives—for example, if one sees the owner carrying a wheel and notices that the car is jacked up—then the conclusion can be retracted. This kind of reasoning is said to exhibit nonmonotonicity, because the set of beliefs does not grow monotonically over time as new evidence arrives. Nonmonotonic logics have been devised with modified notions of truth and entailment in order to capture such behavior. We will look at two such logics that have been studied extensively: circumscription and default logic.
Circumscription can be seen as a more powerful and precise version of the closed
world assumption. The idea is to specify particular predicates that are assumed to
be “as false as possible”—that is, false for every object except those for which they
are known to be true. For example, suppose we want to assert the default rule that
birds fly. We would introduce a predicate, say Abnormal1(x), and write
Bird(x) ∧ ¬Abnormal1(x) ⇒ Flies(x) .
If we say that Abnormal1 is to be circumscribed, a circumscriptive reasoner is entitled to assume ¬Abnormal1(x) unless Abnormal1(x) is known to be true. Default logic is a formalism in which default rules can be written to generate contingent, nonmonotonic conclusions. A default rule looks like this:
Bird(x) : Flies(x)/Flies(x) .
This rule means that if Bird(x) is true, and if Flies(x) is consistent with the
knowledge base, then Flies(x) may be concluded by default. In general, a default
rule has the form P : J1, . . . , Jn/C where P is called the prerequisite, C is the
conclusion, and Ji are the justifications—if any one of them can be proven false,
then the conclusion cannot be drawn. Any variable that appears in Ji or C must also
appear in P. The Nixon-diamond example can be represented in default logic with one fact and two default rules:
Republican(Nixon) ∧ Quaker(Nixon) .
Republican(x) : ¬Pacifist(x)/¬Pacifist(x) .
Quaker(x) : Pacifist(x)/Pacifist(x) .
To interpret what the default rules mean, we define the notion of an extension of a
default theory to be a maximal set of consequences of the theory. That is, an
extension S consists of the original known facts and a set of conclusions from the
default rules, such that no additional conclusions can be drawn from S and the
justifications of every default conclusion in S are consistent with S.
This sounds easy enough. A complication arises, though, when a knowledge base must retract a sentence P that later proves wrong. The obvious “solution”—retracting all sentences inferred from P—fails because such sentences may have other justifications besides P. For example, if R and R ⇒ Q are also in the KB, then Q does not have to be removed after all. Truth maintenance systems, or TMSs, are designed to handle exactly these kinds of complications.
One simple approach to truth maintenance is to keep track of the order in which
sentences are told to the knowledge base by numbering them from P1 to Pn. When
the call RETRACT(KB, Pi) is made, the system reverts to the state just before Pi was
added, thereby removing both Pi and any inferences that were derived from Pi. The
sentences Pi+1 through Pn can then be added again. A more efficient approach is the justification-based truth maintenance system, or JTMS. In a JTMS, each sentence in the knowledge base is annotated with a justification consisting of the set of sentences from which it was inferred.
The JTMS assumes that sentences that are considered once will probably be
considered again, so rather than deleting a sentence from the knowledge base
entirely when it loses all justifications, we merely mark the sentence as being out of
the knowledge base. If a subsequent assertion restores one of the justifications,
then we mark the sentence as being back in. In addition to handling the retraction
of incorrect information, TMSs can be used to speed up the analysis of multiple
hypothetical situations. Suppose, for example, that the Romanian Olympic
Committee is choosing sites for the swimming, athletics, and equestrian events at
the 2048 Games to be held in Romania.
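A minimal sketch of the JTMS labeling idea in Python: each sentence keeps its justifications, retraction merely marks a sentence out, and a sentence is IN while it is an unretracted premise or at least one justification is fully IN. The sentences P, Q, R echo the example above; the encoding is an assumption:

premises = {'P', 'R'}
# Q was inferred from P, and independently from R.
justifications = {'Q': [{'P'}, {'R'}]}

def is_in(sentence, retracted):
    # A sentence is IN if it is a live premise or some justification
    # consists entirely of IN sentences.
    if sentence in retracted:
        return False
    if sentence in premises:
        return True
    return any(all(is_in(s, retracted) for s in just)
               for just in justifications.get(sentence, []))

print(is_in('Q', retracted=set()))       # True
print(is_in('Q', retracted={'P'}))       # still True, via the R justification
print(is_in('Q', retracted={'P', 'R'}))  # False: Q is now marked out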
Truth maintenance systems also provide a mechanism for generating explanations. Technically, an explanation of a sentence P is a set of sentences E such that E entails P. If the sentences in E are already known to be true, then E simply provides a sufficient basis for proving that P must be the case. But explanations can also include assumptions—sentences that are not known to be true, but would suffice to prove P if they were true. The exact algorithms used to implement truth maintenance systems are a little complicated, and we do not cover them here. The computational complexity of the truth maintenance problem is at least as great as that of propositional inference—that is, NP-hard. When used carefully, however, a TMS can provide a substantial increase in the ability of a logical system to handle complex environments and hypotheses.
4.7 CLASSICAL PLANNING
∙ Noninterleaved planners of the early 1970s were unable to solve the blocks-world problem below; hence it is considered anomalous (the Sussman anomaly).
∙ In the blocks-world problem, three blocks labeled 'A', 'B', 'C' are allowed to rest on the flat surface. The given condition is that only one block can be moved at a time to achieve the goal.
∙ The start state and goal state are shown in the following diagram.
∙ Choose the best rule to apply next, based on the best available heuristics.
∙ Apply the chosen rule to compute the new problem state.
Goal stack planning: This is one of the most important planning algorithms, which is specifically used by STRIPS.
∙ The stack is used in the algorithm to hold the actions and unsatisfied goals. A knowledge base is used to hold the current state and actions.
∙ The goal stack is similar to a node in a search tree, where branches are created if there is a choice of actions.
∙ If the top of the stack is an action, pop it from the stack, execute it, and update the knowledge base with the effects of the action.
Algorithm
1. Choose a goal 'g' from the goal set.
2. If 'g' does not match the current state, then:
∙ Choose an operator 'o' whose add-list matches goal 'g'
∙ Solve the preconditions of 'o'
∙ plan = [plan; o]
A minimal sketch of this goal-stack procedure follows.
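Below is a minimal Python sketch of the goal-stack loop above, on a two-operator blocks-world fragment. The operator definitions are illustrative, and, as in the simple algorithm above, preconditions are not re-verified at execution time:

# Each operator: preconditions, an add-list, and a delete-list (STRIPS style).
operators = {
    'Stack(A,B)': {'pre': {'Clear(B)', 'Holding(A)'},
                   'add': {'On(A,B)'}, 'del': {'Clear(B)', 'Holding(A)'}},
    'Pickup(A)':  {'pre': {'Clear(A)', 'OnTable(A)'},
                   'add': {'Holding(A)'}, 'del': {'Clear(A)', 'OnTable(A)'}},
}

def goal_stack_plan(state, goals):
    plan, stack = [], list(goals)
    while stack:
        top = stack.pop()
        if top in operators:                 # an action: execute it
            op = operators[top]
            state = (state - op['del']) | op['add']
            plan.append(top)
        elif top not in state:               # an unsatisfied goal
            # Choose an operator whose add-list matches the goal
            # (raises StopIteration if no operator can achieve it).
            name = next(n for n, o in operators.items() if top in o['add'])
            stack.append(name)                    # the action itself...
            stack.extend(operators[name]['pre'])  # ...under its preconditions
    return plan

state = {'Clear(A)', 'Clear(B)', 'OnTable(A)', 'OnTable(B)'}
print(goal_stack_plan(state, ['On(A,B)']))  # ['Pickup(A)', 'Stack(A,B)']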
4.7.3 Planners: Forward Planners
A forward planner, or progression planner, starts at the initial state and applies actions in an effort to find a path to the goal state. Until about the year 2000, forward planning was thought to be too inefficient to be practical.
o E.g., the textbook gives the example of buying a book online using the action scheme Buy(isbn), where isbn is an ISBN number for a book.
o An ISBN has at least 10 digits, so a planner would have to essentially enumerate all 10 billion ISBN numbers to choose among Buy(isbn) actions.
Another problem with forward planning is that the search space can be huge, since many applicable actions are irrelevant to the goal. Surprisingly, forward planning's reputation changed in the early 2000s as good heuristics were found that led to the creation of efficient forward planning systems.
For many decades, backward planners were thought to be inherently more efficient than forward planners because they only consider actions that are relevant to the goal. The reason for their presumed superiority was that they result in smaller branching factors because they focus on relevant states:
∙ when you start at the goal state, you can focus your search just on the actions that are relevant to achieving that state
∙ however, it does require that you apply actions “in reverse”, which results in having to deal with sets of states instead of individual states (as in forward planning)
For example, one good general-purpose forward planning heuristic is to ignore the preconditions of actions: this gives a relaxed version of the planning problem that is still hard to solve, but for which pretty good approximation algorithms exist (a small sketch follows this list). Another useful technique is to create a so-called planning graph, which can give good estimates of how many actions might be needed to reach a goal.
∙ If you are curious, see the textbook for details on how to create planning graphs, and how a planning graph can be used to make a complete planner known as GraphPlan.
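As a sketch of the ignore-preconditions idea: if preconditions are dropped and each action is assumed to achieve one goal fluent, the number of unsatisfied goal fluents estimates the remaining cost. The full heuristic must also handle actions that achieve several fluents at once (a set-cover problem); this simplification is an assumption made for brevity.

def ignore_preconditions_h(state, goals):
    # Count goal fluents not yet true; 0 means the goal is satisfied.
    return len(goals - state)

state = {'On(A,Table)', 'On(B,Table)'}
goals = {'On(A,B)', 'On(B,C)'}
print(ignore_preconditions_h(state, goals))  # 2 goal fluents still to achieve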
4.8 HIERARCHICAL PLANNING
In HTN planning, the initial plan is viewed as a very high level description of what is to be done. This plan is refined by applying decomposition actions. Each action decomposition reduces a higher level action to a partially ordered set of lower level actions. This decomposition continues until only primitive actions remain in the plan.
∙ For example, a high-level plan to travel from Mumbai to Goa might first be refined into a “Take-Bus” plan. The “Take-Bus” plan can then be further broken down into a set of actions: Goto Mumbai Bus Stop, Buy-Ticket for Bus, Hop-on Bus, & Leave for Goa.
∙ Now, each of the four actions in the previous point can be individually broken down. Take “Buy-Ticket for Bus”: it can be decomposed into Go to Bus Stop Counter, Request Ticket & Pay for Ticket.
∙ Thus, each of these actions can be decomposed further, until we reach the level of actions that can be executed without deliberation to generate the required motor control sequences.
Here, we also use the concept of a “one-level partial-order planner”: if you plan to take a trip, you need to decide on a location first. This can be done by a one-level planner as:
Switch on computer > Start web browser > Open Redbus website > Select date > Select class > Select bus > …
• HTN methods can create the very large plans required by many real-world applications.
• Many HTN planners are unable to handle uncertain outcomes of actions.
A minimal decomposition sketch appears after this list.
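The bus-trip refinement above can be expressed as a tiny HTN sketch in Python; the method table and task names are illustrative paraphrases of the bullets:

# Methods map a compound task to an ordered list of subtasks.
methods = {
    'Travel(Mumbai, Goa)': ['Goto(BusStop)', 'BuyTicket(Bus)',
                            'HopOn(Bus)', 'Leave(Goa)'],
    'BuyTicket(Bus)': ['Goto(Counter)', 'RequestTicket', 'PayForTicket'],
}

def decompose(task):
    # Recursively refine a task until only primitive actions remain.
    if task not in methods:   # primitive: executable without deliberation
        return [task]
    plan = []
    for subtask in methods[task]:
        plan.extend(decompose(subtask))
    return plan

print(decompose('Travel(Mumbai, Goa)'))
# ['Goto(BusStop)', 'Goto(Counter)', 'RequestTicket', 'PayForTicket',
#  'HopOn(Bus)', 'Leave(Goa)']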
Assignment
2. Behavior-Based Robotics
Students implement a behavior-based simulated tank agent in the AutoTank
environment with a reactive behavior-based architecture built using the Unified
Behavior Framework (UBF).
Part - A Question & Answers
1. Define ontological engineering (K1, CO4)
Ontological engineering is the process of creating general and flexible knowledge representations for complex domains (such as shopping on the Internet or driving a car in traffic), concentrating on general concepts—such as Events, Time, Physical Objects, and Beliefs—that occur in many different domains, and organizing them into an ontology.
Part - B Questions
Supportive Online Certification Courses
nanodegree--nd898
https://nptel.ac.in/courses/106/102/106102220/
15. ASSESSMENT SCHEDULE

S.No | Name of the Assessment | Start Date | End Date | Portion
5    | Revision 1             | -          | -        | Unit 5, 1 & 2
6    | Revision 2             | -          | -        | Unit 3 & 4
7    | Model                  | 03.04.2025 | 17.04.2025 | All 5 Units
17. PRESCRIBED TEXT BOOKS & REFERENCE BOOKS
TEXT BOOKS:
1. Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, Fourth Edition, Pearson, 2020.
REFERENCES:
1. Elaine Rich, Kevin Knight and S. B. Nair, Artificial Intelligence, Third Edition, McGraw Hill, 2017.
2. Nils J. Nilsson, The Quest for Artificial Intelligence, Cambridge University Press, 2009.
3. Dan W. Patterson, Introduction to Artificial Intelligence and Expert Systems, First Edition, Pearson, India, 2015.
Mini Project Suggestions
Ontological engineering: The project idea is to find the optimal path for a vehicle to travel so that cost and time can be minimized. This is a business problem that needs a solution.
Thank you