AI Unit 4 Digital Notes

Please read this disclaimer before proceeding:
This document is confidential and intended solely for the educational purpose of RMK Group of Educational Institutions. If you have received this document through email in error, please notify the system manager. This document contains proprietary information and is intended only for the respective group / learning community as addressed. If you are not the addressee, you should not disseminate, distribute or copy it through e-mail. Please notify the sender immediately by e-mail if you have received this document by mistake and delete it from your system. If you are not the intended recipient, you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited.

22AI301 ARTIFICIAL INTELLIGENCE
(Lab Integrated)
Department: CSE
Batch/Year: 2023-2027 / II
Created by:
Dr. R. Sasikumar
Dr. M. Arun Manicka Raja
Dr. G. Indra
Dr. M. Raja Suguna
Mr. K. Mohanasundaram
Mrs. S. Logesswari

Date: 25.01.2024
Table of Contents

1. Course Objectives
2. Pre-Requisites
3. Syllabus
4. Course Outcomes
5. CO-PO/PSO Mapping
6. Lecture Plan
7. Activity Based Learning
8. Lecture Notes
9. Assignments
10. Part A Q & A
11. Part B Qs
12. Supportive Online Certification Courses
13. Real-Time Applications in Day-to-Day Life and to Industry
14. Assessment Schedule
15. Prescribed Text Books & Reference Books
16. Mini Project Suggestions
1. COURSE OBJECTIVES

∙ To explain the foundations of AI and various intelligent agents
∙ To discuss problem-solving search strategies and game playing
∙ To describe logical agents and first-order logic
∙ To illustrate problem-solving strategies with knowledge representation mechanisms for hard problems
∙ To explain the basics of learning and expert systems
3. PRE-REQUISITES

PRE-REQUISITE CHART

∙ 21CS202 – Python Programming (Lab Integrated)
∙ 21CS201 – Data Structures
∙ 21MA402 – Probability and Statistics
∙ 21CS502 – Artificial Intelligence
4. SYLLABUS
22AI301 - ARTIFICIAL INTELLIGENCE (Lab Integrated)
L T P C: 3 0 2 4

Unit-I ARTIFICIAL INTELLIGENCE AND INTELLIGENT AGENTS 9+6


Introduction to AI–Foundations of Artificial Intelligence-Intelligent Agents-Agents and
Environment-Concept of rationality – Nature of environments – Structure of agents –
Problem Solving Agents–Example Problems – Search Algorithms – Uninformed Search

Strategies
Lab Programs:
1. Implement basic search strategies – 8-Puzzle, 8-Queens problem.
2. Implement Breadth First Search & Depth First Search algorithms.
3. Implement Water Jug problem.
4. Solve Tic-Tac-Toe problem.

Unit II : PROBLEM SOLVING 9+6


Heuristic Search Strategies – Heuristic Functions - Game Playing – Minimax Algorithm - Optimal Decisions in Games – Alpha-Beta Search – Monte Carlo Search for Games - Constraint Satisfaction Problems – Constraint Propagation - Backtracking Search for CSP - Local Search for CSP - Structure of CSP
Lab Programs:
1. Implement A* and memory bounded A* algorithms.
2. Implement Minimax algorithm & Alpha-Beta pruning for game playing.
3. Constraint Satisfaction Problem
4. Mini Project – Chess / Sudoku

Unit III : LOGICAL AGENTS 9+6


Knowledge-based agents – Logic - Propositional logic – Propositional theorem proving – Propositional model checking – Agents based on propositional logic – First-Order Logic – Syntax and semantics – Using First-Order Logic - Knowledge representation and engineering – Inferences in first-order logic – Propositional vs First-Order Inference - Unification and First-Order Inference - Forward chaining – Backward chaining – Resolution.
Lab Programs:
1. Implement Unification algorithm for the given logic.
2. Implement forward chaining and backward chaining using Python.

Unit IV : KNOWLEDGE REPRESENTATION AND PLANNING 9+6


Ontological engineering – Categories and objects – Events – Mental objects and modal logic – Reasoning systems for categories – Reasoning with default information - Classical planning – Algorithms for classical planning – Heuristics for planning – Hierarchical planning – Non-deterministic domains – Time, schedule, and resources – Analysis
Lab Programs:
1. Implementation of object detection.
2. Implement classical planning algorithms

Unit V : LEARNING AND EXPERT SYSTEMS 9+6

Forms of Learning – Developing Machine Learning systems – Statistical Learning - Deep Learning: Simple feed-forward network - Neural Networks – Reinforcement Learning: Learning from rewards – Passive and active Reinforcement Learning. Expert Systems: Functions – Main structure – If-then rules for representing knowledge – Developing the shell – Dealing with uncertainty
Lab Programs:
1. Develop an Expert System.
2. Mini-Project – Develop Machine Learning based classification models.
6. CO-PO/PSO MAPPING

Programme Outcomes (POs) and Programme Specific Outcomes (PSOs)

Course Outcome (CO) | K Level | PO1 | PO2 | PO3 | PO4-PO12 | PSO1-PSO3
21AI401.1           | K2      | 3   | 3   | 1   | -        | -
21AI401.2           | K3      | 3   | 2   | 1   | -        | -
21AI401.3           | K3      | 3   | 2   | 1   | -        | -
21AI401.4           | K3      | 3   | 3   | 2   | -        | -
21AI401.5           | K2      | 3   | 2   | 2   | -        | -
LECTURE PLAN – UNIT IV

ASSESSMENT COMPONENTS:
AC 1. Unit Test
AC 2. Assignment
AC 3. Course Seminar
AC 4. Course Quiz
AC 5. Case Study
AC 6. Record Work
AC 7. Lab / Mini Project
AC 8. Lab Model Exam
AC 9. Project Review

MODE OF DELIVERY:
MD 1. Oral Presentation
MD 2. Tutorial
MD 3. Seminar
MD 4. Hands On
MD 5. Videos
MD 6. Field Visit
S.No | Topic                                                  | Pertaining CO(s) | Taxonomy Level | Mode of Delivery
1    | Knowledge Representation - Ontological Engineering    | CO4              | K3             | MD1, MD5
2    | Categories & Objects, Events                           | CO4              | K3             | MD1, MD5
3    | Mental Objects and Modal Logic                         | CO4              | K3             | MD1, MD5
4    | Reasoning Systems - Reasoning with Default Information | CO4              | K3             | MD1, MD5
5    | Classical Planning                                     | CO4              | K3             | MD1, MD5
6    | Algorithms for Classical Planning                      | CO4              | K3             | MD1, MD5
7    | Heuristics for Planning / Planning Graph               | CO4              | K3             | MD1, MD5
8    | Hierarchical Planning                                  | CO4              | K3             | MD1, MD5
9    | Non-deterministic Domains                              | CO4              | K3             | MD1, MD5
10   | Time, Schedule & Resources, Analysis                   | CO4              | K3             | MD1, MD5

7. ACTIVITY BASED LEARNING

1. Play a game where the other person thinks of an animal you must identify. Does it have feathers? Does it have fur? Does it walk on four legs? Is it black? In a small number of questions we narrow down the possibilities until we know what it is. The more we know about the features of animals, the easier it is to narrow down. A machine learning algorithm learns about the features of the specific things it classifies. Each feature rules out some possibilities but leaves others. Returning to our robot that makes expressions: Was it a loud sound? Yes. Was it sudden? No. ... The robot should look unhappy. The right combination of features allows the algorithm to narrow the possibilities down to one thing. The more data the algorithm was trained on, the more patterns it can spot. The more patterns it spots, the more rules about features it can create. The more rules about features it has, the more questions it can base its decision on.

2. The Intelligent Piece of Paper Activity


This activity aims to introduce the topic of what a computer program is and how
everything computers do simply involves following instructions written by (creative)
computer programmers. It also aims to start a discussion about what intelligence is
and whether something that just blindly follows rules can be considered intelligent.

Materials

• A whiteboard or flipchart to write on so all can see.
• Two flip chart / whiteboard pens
• A copy of the intelligent piece of paper (possibly laminated)
• (Optional) A musical greeting card that plays some appropriately horrible song. Choose one that is recognizable by people of different age groups.

Activity Document Link

LECTURE NOTES

UNIT IV SYLLABUS
Ontological Engineering -Categories and Objects – Events –
Mental Events and Mental Objects - Reasoning Systems for
Categories - Reasoning with Default Information-Classical
planning-Algorithms for Classical Planning- Heuristics for planning-
Hierarchical planning - Non-Deterministic domains- Time,
Schedule and resources-Analysis
4.1 ONTOLOGICAL ENGINEERING
In “toy” domains, the choice of representation is not that important; many choices
will work. Complex domains such as shopping on the Internet or driving a car in
traffic require more general and flexible representations. This chapter shows how to
create these representations, concentrating on general concepts—such as Events,
Time, Physical Objects, and Beliefs— that occur in many different domains.

Representing these abstract concepts is sometimes called ontological engineering.


The prospect of representing everything in the world is daunting. Of course, we
won’t actually write a complete description of everything—that would be far too
much for even a 1000-page textbook—but we will leave placeholders where new
knowledge for any domain can fit in.

Before considering the ontology further, we should state one important caveat. We
have elected to use first-order logic to discuss the content and organization of
knowledge, although certain aspects of the real world are hard to capture in FOL.
The principal difficulty is that most generalizations have exceptions or hold only to a
degree. For example, although “tomatoes are red” is a useful rule, some tomatoes
are green, yellow, or orange. Similar exceptions can be found to almost all the rules
in this chapter. The ability to handle exceptions and uncertainty is extremely
important, but is orthogonal to the task of understanding the general ontology.

For any special-purpose ontology, it is possible to make changes like these to move
toward greater generality. An obvious question then arises: do all these ontologies
converge on a general-purpose ontology? After centuries of philosophical and
computational investigation, the answer is “Maybe.” In this section, we present one
general-purpose ontology that synthesizes ideas from those centuries. Two major
characteristics of general-purpose ontologies distinguish them from collections of
special-purpose ontologies:

• A general-purpose ontology should be applicable in more or less any special-purpose domain (with the addition of domain-specific axioms). This means that no representational issue can be finessed or brushed under the carpet.

• In any sufficiently demanding domain, different areas of knowledge must be unified, because reasoning and problem solving could involve several areas simultaneously. A robot circuit-repair system, for instance, needs to reason about circuits in terms of electrical connectivity and physical layout, and about time, both for circuit timing analysis and estimating labor costs.

Those ontologies that do exist have been created along four routes:

1. By a team of trained ontologists/logicians, who architect the ontology and write axioms. The CYC system was mostly built this way (Lenat and Guha, 1990).
2. By importing categories, attributes, and values from an existing database or databases. DBPEDIA was built by importing structured facts from Wikipedia (Bizer et al., 2007).
3. By parsing text documents and extracting information from them. TEXTRUNNER was built by reading a large corpus of Web pages (Banko and Etzioni, 2008).
4. By enticing unskilled amateurs to enter commonsense knowledge. The OPENMIND system was built by volunteers who proposed facts in English (Singh et al., 2002; Chklovski and Gil, 2005).

4.2 CATEGORIES AND OBJECTS


The organization of objects into categories is a vital part of knowledge representation. Although interaction with the world takes place at the level of individual objects, much reasoning takes place at the level of categories. For example, a shopper would normally have the goal of buying a basketball, rather than a particular basketball such as BB9. Categories also serve to make predictions about objects once they are classified. There are two choices for representing categories in first-order logic: predicates and objects. That is, we can use the predicate Basketball(b), or we can reify the category as an object, Basketballs. We could then say Member(b, Basketballs), which we will abbreviate as b ∈ Basketballs, to say that b is a member of the category of basketballs. We say Subset(Basketballs, Balls), abbreviated as Basketballs ⊆ Balls, to say that Basketballs is a subcategory of Balls. We will use subcategory, subclass, and subset interchangeably.

Categories serve to organize and simplify the knowledge base through inheritance.
If we say that all instances of the category Food are edible, and if we assert that
Fruit is a subclass of Food and Apples is a subclass of Fruit , then we can infer that
every apple is edible. We say that the individual apples inherit the property of
edibility, in this case from their membership in the Food category. Subclass relations
organize categories into a taxonomy, or taxonomic hierarchy.
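The inheritance inference just described is easy to sketch in code. The following is a minimal illustration (not from the source text) of looking a property up a taxonomy, using the Food / Fruit / Apples example; the dictionary-based representation is an assumption made here for simplicity:

# Property inheritance over a taxonomy (Python sketch).
subclass_of = {"Apples": "Fruit", "Fruit": "Food"}     # subclass links
properties = {"Food": {"edible": True}}                # Food members are edible

def lookup(category, prop):
    # Walk up the subclass chain until the property is found.
    while category is not None:
        if prop in properties.get(category, {}):
            return properties[category][prop]
        category = subclass_of.get(category)
    return None

print(lookup("Apples", "edible"))   # True: apples inherit edibility from Food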

Taxonomies are also an important aspect of general commonsense knowledge. First-order logic makes it easy to state facts about categories, either by relating objects to categories or by quantifying over their members. Here are some types of facts, with examples of each:

• An object is a member of a category: BB9 ∈ Basketballs
• A category is a subclass of another category: Basketballs ⊆ Balls
• All members of a category have some properties: (x ∈ Basketballs) ⇒ Spherical(x)
• Members of a category can be recognized by some properties: Orange(x) ∧ Round(x) ∧ Diameter(x) = 9.5 ∧ x ∈ Balls ⇒ x ∈ Basketballs
• A category as a whole has some properties: Dogs ∈ DomesticatedSpecies

Notice that because Dogs is a category and is a member of Domesticated Species ,
the latter must be a category of categories. Of course there are exceptions to many
of the above rules (punctured basketballs are not spherical); we deal with these
exceptions later. Although subclass and member relations are the most important
ones for categories, we also want to be able to state relations between categories
that are not subclasses of each other.

For example, if we just say that Males and Females are subclasses of Animals, then
we have not said that a male cannot be a female. We say that two or more
categories are

disjoint if they have no members in common. And even if we know that males and
females are disjoint, we will not know that an animal that is not a male must be a
female, unless we say that males and females constitute an exhaustive
decomposition of the animals. A disjoint exhaustive decomposition is known as a
partition.

The following examples illustrate these three concepts:

Disjoint({Animals, Vegetables})
ExhaustiveDecomposition({Americans, Canadians, Mexicans}, NorthAmericans)
Partition({Males, Females}, Animals)

(Note that the ExhaustiveDecomposition of NorthAmericans is not a Partition, because some people have dual citizenship.) The three predicates are defined as follows:

Disjoint(s) ⇔ (∀c1, c2  c1 ∈ s ∧ c2 ∈ s ∧ c1 ≠ c2 ⇒ Intersection(c1, c2) = { })
ExhaustiveDecomposition(s, c) ⇔ (∀i  i ∈ c ⇔ ∃c2  c2 ∈ s ∧ i ∈ c2)
Partition(s, c) ⇔ Disjoint(s) ∧ ExhaustiveDecomposition(s, c)
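These three predicates translate naturally into executable checks if categories are modelled as finite sets. The sketch below is a toy illustration under that assumption, not a general first-order reasoner:

# Disjoint / ExhaustiveDecomposition / Partition over finite sets (Python sketch).
from itertools import combinations

def disjoint(s):
    # No two distinct categories in s share a member.
    return all(not (c1 & c2) for c1, c2 in combinations(s, 2))

def exhaustive_decomposition(s, c):
    # The categories in s cover exactly the members of c.
    return set().union(*s) == set(c)

def partition(s, c):
    return disjoint(s) and exhaustive_decomposition(s, c)

males, females = {"Rex", "Tom"}, {"Molly"}
animals = {"Rex", "Tom", "Molly"}
print(partition([males, females], animals))   # True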

Categories can also be defined by providing necessary and sufficient conditions for
membership. For example, a bachelor is an unmarried adult male:

x ∈ Bachelors ⇔ Unmarried(x) ∧ x ∈ Adults ∧ x ∈ Males


As we discuss in the sidebar on natural kinds on page 443, strict logical definitions
for categories are neither always possible nor always necessary.

4.2.1 Physical composition


We use the general PartOf relation to say that one thing is part of another. Objects can be grouped into PartOf hierarchies, reminiscent of the Subset hierarchy:

PartOf(Bucharest, Romania)    PartOf(Romania, EasternEurope)
PartOf(EasternEurope, Europe)    PartOf(Europe, Earth)

The PartOf relation is transitive and reflexive; that is,

PartOf(x, y) ∧ PartOf(y, z) ⇒ PartOf(x, z)
PartOf(x, x)

Therefore, we can conclude PartOf(Bucharest, Earth). Categories of composite objects are often characterized by structural relations among parts. An object is composed of the parts in its PartPartition and can be viewed as deriving some properties from those parts. For example, the mass of a composite object is the sum of the masses of the parts. As another example, if the apples are Apple1, Apple2, and Apple3, then BunchOf({Apple1, Apple2, Apple3}) denotes the composite object with the three apples as parts (not elements).

We can then use the bunch as a normal, albeit unstructured, object. Notice that BunchOf({x}) = x. Furthermore, BunchOf(Apples) is the composite object consisting of all apples—not to be confused with Apples, the category or set of all apples. We can define BunchOf in terms of the PartOf relation. Obviously, each element of s is part of BunchOf(s):

∀x  x ∈ s ⇒ PartOf(x, BunchOf(s))

Furthermore, BunchOf(s) is the smallest object satisfying this condition. In other words, BunchOf(s) must be part of any object that has all the elements of s as parts:

∀y  [∀x  x ∈ s ⇒ PartOf(x, y)] ⇒ PartOf(BunchOf(s), y)

These axioms are an example of a general technique called logical minimization, which means defining an object as the smallest one satisfying certain conditions.

4.2.2 Measurements

In both scientific and commonsense theories of the world, objects have height, mass, cost, and so on. The values that we assign for these properties are called measures. Ordinary quantitative measures are quite easy to represent. We imagine that the universe includes abstract "measure objects," such as the length that is the length of this line segment. If the line segment is called L1, we can write

Length(L1) = Inches(1.5) = Centimeters(3.81)

Conversion between units is done by equating multiples of one unit to another:

Centimeters(2.54 × d) = Inches(d)

Similar axioms can be written for pounds and kilograms, seconds and days, and dollars and cents. Measures can be used to describe objects as follows:

Diameter(Basketball12) = Inches(9.5)
ListPrice(Basketball12) = $(19)
d ∈ Days ⇒ Duration(d) = Hours(24)

Note that $(1) is not a dollar bill! One can have two dollar bills, but there is only one object named $(1). Note also that, while Inches(0) and Centimeters(0) refer to the same zero length, they are not identical to other zero measures, such as Seconds(0). The field of qualitative physics is a subfield of AI that investigates how to reason about physical systems without plunging into detailed equations and numerical simulations.
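The conversion axiom Centimeters(2.54 × d) = Inches(d) can be mirrored by storing every length in one canonical unit. The following small sketch (an assumption of this write-up, not part of the source) uses centimeters as the canonical measure:

# "Measure objects" as canonical centimeter values (Python sketch).
import math

def inches(d):
    return 2.54 * d        # Centimeters(2.54 * d) = Inches(d)

def centimeters(d):
    return d

L1 = inches(1.5)           # Length(L1) = Inches(1.5)
print(math.isclose(L1, centimeters(3.81)))   # True: = Centimeters(3.81)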

4.2.3 Objects: Things and stuff


The real world can be seen as consisting of primitive objects (e.g., atomic particles)
and composite objects built from them. By reasoning at the level of large objects
such as apples and cars, we can overcome the complexity involved in dealing with
vast numbers of primitive objects individually. There is, however, a significant portion
of reality that seems to defy any obvious individuation—division into distinct objects.
We give this portion the generic name stuff.

For example, suppose I have some butter and an aardvark in front of me. I can say
there is one aardvark, but there is no obvious number of “butter-objects,” because
any part of a butter-object is also a butter-object, at least until we get to very small
parts indeed. This is the major distinction between stuff and things. If we cut an
aardvark in half, we do not get two aardvarks (unfortunately). The English language
distinguishes clearly between stuff and things. We say “an aardvark,” but, except in
pretentious California restaurants, one cannot say “a butter.” Linguists distinguish
between count nouns, such as aardvarks, holes, and theorems, and mass nouns,
such as butter, water, and energy.

With some caveats about very small parts that we will omit for now, any part of a butter-object is also a butter-object:

b ∈ Butter ∧ PartOf(p, b) ⇒ p ∈ Butter

We can now say that butter melts at around 30 degrees centigrade:

b ∈ Butter ⇒ MeltingPoint(b, Centigrade(30))
We could go on to say that butter is yellow, is less dense than water, is soft at room
temperature, has a high fat content, and so on. On the other hand, butter has no
particular size, shape, or weight. We can define more specialized categories of
butter such as UnsaltedButter , which is also a kind of stuff. Note that the category
PoundOfButter , which includes as members all butter-objects weighing one pound,
is not a kind of stuff. If we cut a pound of butter in half, we do not, alas, get two
pounds of butter. What is actually going on is this: some properties are intrinsic:
they belong to the very substance of the object, rather than to the object as a
whole. When you cut an instance of stuff in half, the two pieces retain the intrinsic
properties—things like density, boiling point, flavor, color, ownership, and so on. On
the other hand, their extrinsic properties—weight, length, shape, and so on—are not
retained under subdivision.

A category of objects that includes in its definition only intrinsic properties is then a
substance, or mass noun; a class that includes any extrinsic properties in its
definition is a count noun. The category Stuff is the most general substance
category, specifying no intrinsic properties. The category Thing is the most general
discrete object category, specifying no extrinsic properties.

4.3 EVENTS
Consider a continuous action, such as filling a bathtub. Situation calculus can say
that the tub is empty before the action and full when the action is done, but it can’t
talk about what happens during the action. It also can’t describe two actions
happening at the same time—such as brushing one’s teeth while waiting for the tub
to fill. To handle such cases we introduce an alternative formalism known as event
calculus, which is based on points of time rather than on situations.

Event calculus reifies fluents and events. The fluent At(Shankar, Berkeley) is an object that refers to the fact of Shankar being in Berkeley, but does not by itself say anything about whether it is true. To assert that a fluent is actually true at some point in time we use the predicate T, as in T(At(Shankar, Berkeley), t). Events are described as instances of event categories. The event E1 of Shankar flying from San Francisco to Washington, D.C. is described as

E1 ∈ Flyings ∧ Flyer(E1, Shankar) ∧ Origin(E1, SF) ∧ Destination(E1, DC)


If this is too verbose, we can define an alternative three-argument version of the category
of flying events and say

E1 ∈ Flyings(Shankar, SF, DC)
We then use Happens(E1, i) to say that the event E1 took place over the time interval i, and we say the same thing in functional form with Extent(E1) = i. We represent time intervals by a (start, end) pair of times; that is, i = (t1, t2) is the time interval that starts at t1 and ends at t2. The complete set of predicates for one version of the event calculus is:

T(f, t): Fluent f is true at time t
Happens(e, i): Event e happens over the time interval i
Initiates(e, f, t): Event e causes fluent f to start to hold at time t
Terminates(e, f, t): Event e causes fluent f to cease to hold at time t
Clipped(f, i): Fluent f ceases to be true at some point during time interval i
Restored(f, i): Fluent f becomes true sometime during time interval i

We assume a distinguished event, Start, that describes the initial state by saying which fluents are initiated or terminated at the start time. We define T by saying that a fluent holds at a point in time if the fluent was initiated by an event at some time in the past and was not made false (clipped) by an intervening event. A fluent does not hold if it was terminated by an event and not made true (restored) by another event. Formally, the axioms are:

Happens(e, (t1, t2)) ∧ Initiates(e, f, t1) ∧ ¬Clipped(f, (t1, t)) ∧ t1 < t ⇒ T(f, t)
Happens(e, (t1, t2)) ∧ Terminates(e, f, t1) ∧ ¬Restored(f, (t1, t)) ∧ t1 < t ⇒ ¬T(f, t)

where Clipped and Restored are defined by

Clipped(f, (t1, t2)) ⇔ ∃e, t, t3  Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Terminates(e, f, t)
Restored(f, (t1, t2)) ⇔ ∃e, t, t3  Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Initiates(e, f, t)

It is convenient to extend T to work over intervals as well as time points; a fluent holds over an interval if it holds on every point within the interval:

T(f, (t1, t2)) ⇔ [∀t  (t1 ≤ t < t2) ⇒ T(f, t)]


Fluents and actions are defined with domain-specific axioms that are similar to successor-state axioms. For example, we can say that the only way a wumpus-world agent gets an arrow is at the start, and the only way to use up an arrow is to shoot it:

Initiates(e, HaveArrow(a), t) ⇔ e = Start
Terminates(e, HaveArrow(a), t) ⇔ e ∈ Shootings(a)
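The two T axioms above can be exercised directly on a small set of event records. The sketch below is a toy model (the event data and dictionaries are hypothetical) of checking whether a fluent holds at a time point:

# Event-calculus holds/clipped check (Python sketch).
happens = [("Start", 0, 0), ("Shoot", 5, 5)]      # (event, t_start, t_end)
initiates = {"Start": {"HaveArrow"}}              # fluents each event initiates
terminates = {"Shoot": {"HaveArrow"}}             # fluents each event terminates

def clipped(fluent, t1, t2):
    # Some event terminates the fluent within [t1, t2).
    return any(fluent in terminates.get(e, set()) and t1 <= s < t2
               for e, s, _ in happens)

def holds(fluent, t):
    # Initiated at some s < t and not clipped in between.
    return any(fluent in initiates.get(e, set()) and s < t
               and not clipped(fluent, s, t)
               for e, s, _ in happens)

print(holds("HaveArrow", 3))   # True: initiated by Start at time 0
print(holds("HaveArrow", 7))   # False: clipped by Shoot at time 5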

4.3.1 Processes
The events we have seen so far are what we call discrete events—they have a
definite structure. Shankar’s trip has a beginning, middle, and end. If interrupted
halfway, the event would be something different—it would not be a trip from San
Francisco to Washington, but instead a trip from San Francisco to somewhere over
Kansas. On the other hand, the category of events denoted by Flyings has a
different quality. If we take a small interval of Shankar’s flight, say, the third 20-
minute segment (while he waits anxiously for a bag of peanuts), that event is still a
member of Flyings. In fact, this is true for any subinterval.

4.3.2 Time intervals

Two intervals Meet if the end time of the first equals the start time of the second. The complete set of interval relations, as proposed by Allen (1983), is shown graphically in Figure 3.14.

Figure 3.14: Predicates on time intervals.
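With intervals represented as (start, end) pairs, a few of Allen's relations can be written down directly. This is a small sketch with hypothetical data, following the definition of Meet given above:

# A few of Allen's interval relations (Python sketch); i = (start, end).
def meet(i, j):
    return i[1] == j[0]

def before(i, j):
    return i[1] < j[0]

def overlap(i, j):
    return i[0] < j[0] < i[1] < j[1]

def during(i, j):
    return j[0] < i[0] and i[1] < j[1]

reign = (1040, 1057)       # hypothetical interval
lifetime = (1000, 1060)    # hypothetical containing interval
print(during(reign, lifetime))   # True
print(meet((1, 3), (3, 7)))      # True: end of first = start of second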

4.3.3 Fluents and objects


Physical objects can be viewed as generalized events, in the sense that a physical
object is a chunk of space–time. For example, USA can be thought of as an event
that began in, say, 1776 as a union of 13 states and is still in progress today as a
union of 50. We can describe the changing properties of USA using state fluents,
such as Population(USA). A property of the USA that changes every four or eight
years, barring mishaps, is its president. One might propose that President(USA) is a
logical term that denotes a different object at different times.

4.4 MENTAL EVENTS AND MENTAL OBJECTS


The agents we have constructed so far have beliefs and can deduce new beliefs. Yet
none of them has any knowledge about beliefs or about deduction. Knowledge
about one’s own knowledge and reasoning processes is useful for controlling
inference. For example, suppose Alice asks “what is the square root of 1764” and
Bob replies “I don’t know.” If Alice insists “think harder,” Bob should realize that with
some more thought, this question can in fact be answered.

On the other hand, if the question were “Is your mother sitting down right now?”
then Bob should realize that thinking harder is unlikely to help. We begin with the
propositional attitudes that an agent can have toward mental objects: attitudes such
as Believes, Knows, Wants, Intends, and Informs. The difficulty is that these
attitudes do not behave like “normal” predicates. For example, suppose we try to
assert that Lois knows that Superman can fly:

Knows(Lois, CanFly(Superman))

One minor issue with this is that we normally think of CanFly(Superman) as a sentence, but here it appears as a term. That issue can be patched up just by reifying CanFly(Superman), making it a fluent. A more serious problem is that, if it is true that Superman is Clark Kent, then we must conclude that Lois knows that Clark can fly:

(Superman = Clark) ∧ Knows(Lois, CanFly(Superman)) |= Knows(Lois, CanFly(Clark))
This is a consequence of the fact that equality reasoning is built into logic. Normally
that is a good thing; if our agent knows that 2 + 2 = 4 and 4 < 5, then we want our
agent to know that 2 + 2 < 5. This property is called referential transparency—it
doesn’t matter what term a logic uses to refer to an object, what matters is the
object that the term names. But for propositional attitudes like believes and knows,
we would like to have referential opacity—the terms used do matter, because not all
agents know which terms are co-referential.

Modal logic is designed to address this problem. Regular logic is concerned with a single modality, the modality of truth, allowing us to express "P is true." Modal logic includes special modal operators that take sentences (rather than terms) as arguments. For example, "A knows P" is represented with the notation K_A P, where K is the modal operator for knowledge. It takes two arguments, an agent (written as the subscript) and a sentence. The syntax of modal logic is the same as first-order logic, except that sentences can also be formed with modal operators.

In general, a knowledge atom K_A P is true in world w if and only if P is true in every world accessible from w. The truth of more complex sentences is derived by recursive application of this rule and the normal rules of first-order logic. That means that modal logic can be used to reason about nested knowledge sentences: what one agent knows about another agent's knowledge. Figure 3.15 shows some possible worlds for this domain, with accessibility relations for Lois and Superman.

In the top-left diagram, it is common knowledge that Superman knows his own identity, and neither he nor Lois has seen the weather report. So in w0 the worlds w0 and w2 are accessible to Superman; maybe rain is predicted, maybe not. For Lois all four worlds are accessible from each other; she doesn't know anything about the report or whether Clark is Superman. But she does know that Superman knows whether he is Clark (call this proposition I), because in every world that is accessible to Lois, either Superman knows I, or he knows ¬I. Lois does not know which is the case, but either way she knows Superman knows.

In the top-right diagram it is common knowledge that Lois has seen the weather report. So in w4 she knows rain is predicted (call this R) and in w6 she knows rain is not predicted. Superman does not know the report, but he knows that Lois knows, because in every world that is accessible to him, either she knows R or she knows ¬R.
Modal logic solves some tricky issues with the interplay of quantifiers and
knowledge. The English sentence “Bond knows that someone is a spy” is
ambiguous. The first reading is that there is a particular someone who Bond knows
is a spy; we can write this as

∃x K_Bond Spy(x) ,

which in modal logic means that there is an x that, in all accessible worlds, Bond knows to be a spy. The second reading is that Bond just knows that there is at least one spy:

K_Bond ∃x Spy(x) .

The modal logic interpretation is that in each accessible world there is an x that is a
spy, but it need not be the same x in each world. Now that we have a modal
operator for knowledge, we can write axioms for it. First, we can say that agents are
able to draw deductions; if an agent knows P and knows that P implies Q, then the
agent knows Q:

(K_a P ∧ K_a(P ⇒ Q)) ⇒ K_a Q

From this (and a few other rules about logical identities) we can establish that K_A(P ∨ ¬P) is a tautology; every agent knows every proposition P is either true or false. On the other hand, (K_A P) ∨ (K_A ¬P) is not a tautology; in general, there will be lots of propositions that an agent does not know to be true and does not know to be false. It is said (going back to Plato) that knowledge is justified true belief. That is, if it is true, if you believe it, and if you have an unassailably good reason, then you know it. That means that if you know something, it must be true, and we have the axiom:

K_a P ⇒ P

Furthermore, logical agents should be able to introspect on their own knowledge. If they know something, then they know that they know it:

K_a P ⇒ K_a(K_a P)

4.5 REASONING SYSTEMS FOR CATEGORIES


Categories are the primary building blocks of large-scale knowledge representation
schemes. This section describes systems specially designed for organizing and
reasoning with categories. There are two closely related families of systems:
semantic networks provide graphical aids for visualizing a knowledge base and
efficient algorithms for inferring properties of an object on the basis of its category
membership; and description logics provide a formal language for constructing and
combining category definitions and efficient algorithms for deciding subset and
superset relationships between categories.

Semantic networks
The notation that semantic networks provide for certain kinds of sentences is often
more convenient, but if we strip away the “human interface” issues, the underlying
concepts—objects, relations, quantification, and so on—are the same. There are
many variants of semantic networks, but all are capable of representing individual
objects, categories of objects, and relations among objects. A typical graphical
notation displays object or category names in ovals or boxes, and connects them
with labeled links.

For example, Figure 3.15 has a MemberOf link between Mary and FemalePersons, corresponding to the logical assertion Mary ∈ FemalePersons; similarly, the SisterOf link between Mary and John corresponds to the assertion SisterOf(Mary, John). We can connect categories using SubsetOf links, and so on. It is such fun drawing bubbles and arrows that one can get carried away. For example, we know that persons have female persons as mothers, so can we draw a HasMother link from Persons to FemalePersons? The answer is no, because HasMother is a relation between a person and his or her mother, and categories do not have mothers. For this reason, we have used a special notation—the double-boxed link—in Figure 3.15. This link asserts that

∀x  x ∈ Persons ⇒ [∀y  HasMother(x, y) ⇒ y ∈ FemalePersons]

We might also want to assert that persons have two legs—that is,

∀x  x ∈ Persons ⇒ Legs(x, 2)

As before, we need to be careful not to assert that a category has legs; the single-
boxed link in Figure 3.15 is used to assert properties of every member of a category.
Thus, to find out how many legs Mary has, the inheritance algorithm follows the
MemberOf link from Mary to the category she belongs to, and then follows SubsetOf
links up the hierarchy until it finds a category for which there is a boxed Legs link—
in this case, the Persons category. The simplicity and efficiency of this inference
mechanism, compared with logical theorem proving, has been one of the main
attractions of semantic networks.

Figure 3.15: A semantic network with four objects (John, Mary, 1, and 2) and four categories. Relations are denoted by labeled links.

Inheritance becomes complicated when an object can belong to more than one category or when a category can be a subset of more than one other category; this is called multiple inheritance. In such cases, the inheritance algorithm might find two or more conflicting values answering the query. For this reason, multiple inheritance is banned in some object-oriented programming (OOP) languages, such as Java, that use inheritance in a class hierarchy. The reader might have noticed an obvious drawback of semantic network notation, compared to first-order logic: the fact that links between bubbles represent only binary relations.
Reification of propositions makes it possible to represent every ground, function-free atomic sentence of first-order logic in the semantic network notation. Certain kinds of universally quantified sentences can be asserted using inverse links and the singly boxed and doubly boxed arrows applied to categories, but that still leaves us a long way short of full first-order logic. Negation, disjunction, nested function symbols, and existential quantification are all missing. One of the most important aspects of semantic networks is their ability to represent default values for categories. We say that the default is overridden by the more specific value. Notice that we could also override the default number of legs by creating a category of OneLeggedPersons, a subset of Persons of which John is a member. We can retain a strictly logical semantics for the network if we say that the Legs assertion for Persons includes an exception for John:

∀x  x ∈ Persons ∧ x ≠ John ⇒ Legs(x, 2)
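The inheritance algorithm with default overriding can be sketched as follows; the link tables are hypothetical stand-ins for the boxed links of Figure 3.15. The most specific Legs value found on the way up the hierarchy wins:

# Semantic-network inheritance with default override (Python sketch).
member_of = {"Mary": "FemalePersons", "John": "OneLeggedPersons"}
subset_of = {"FemalePersons": "Persons", "OneLeggedPersons": "Persons"}
legs = {"Persons": 2, "OneLeggedPersons": 1}   # default and its override

def legs_of(obj):
    cat = member_of[obj]
    while cat is not None:
        if cat in legs:
            return legs[cat]        # first (most specific) match wins
        cat = subset_of.get(cat)
    return None

print(legs_of("Mary"))   # 2, inherited from Persons
print(legs_of("John"))   # 1, overridden by OneLeggedPersons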

Description logics
The syntax of first-order logic is designed to make it easy to say things about
objects. Description logics are notations that are designed to make it easier to
describe definitions and properties of categories. Description logic systems evolved
from semantic networks in response to pressure to formalize what the networks
mean while retaining the emphasis on taxonomic structure as an organizing
principle.

The principal inference tasks for description logics are subsumption (checking if one category is a subset of another by comparing their definitions) and classification (checking whether an object belongs to a category). Some systems also include consistency of a category definition—whether the membership criteria are logically satisfiable. The syntax of descriptions in a subset of the CLASSIC language is illustrated by the following example.

For example, to say that bachelors are unmarried adult males we would write

Bachelor = And(Unmarried, Adult, Male)

The equivalent in first-order logic would be

Bachelor(x) ⇔ Unmarried(x) ∧ Adult(x) ∧ Male(x)


Notice that the description logic has an algebra of operations on predicates, which of course we can't do in first-order logic. Any description in CLASSIC can be translated into an equivalent first-order sentence, but some descriptions are more straightforward in CLASSIC. For example, to describe the set of men with at least three sons who are all unemployed and married to doctors, and at most two daughters who are all professors in physics or math departments, we would use

And(Man, AtLeast(3, Son), AtMost(2, Daughter),
    All(Son, And(Unemployed, Married, All(Spouse, Doctor))),
    All(Daughter, And(Professor, Fills(Department, Physics, Math))))

We leave it as an exercise to translate this into first-order logic. Perhaps the most
important aspect of description logics is their emphasis on tractability of inference. A
problem instance is solved by describing it and then asking if it is subsumed by one
of several possible solution categories. This sounds wonderful in principle, until one
realizes that it can only have one of two consequences: either hard problems cannot
be stated at all, or they require exponentially large descriptions!

However, the tractability results do shed light on what sorts of constructs cause
problems and thus help the user to understand how different representations
behave. For example, description logics usually lack negation and disjunction. Each forces first-order logical systems to go through a potentially exponential case analysis in order to ensure completeness. CLASSIC allows only a limited form of disjunction in the Fills and OneOf constructs, which permit disjunction over explicitly enumerated individuals but not over descriptions. With disjunctive descriptions, nested definitions can lead easily to an exponential number of alternative routes by which one category can subsume another.

4.6 REASONING WITH DEFAULT INFORMATION

Circumscription and default logic

Simple introspection suggests that such failures of monotonicity are widespread in commonsense reasoning. It seems that humans often "jump to conclusions." For
commonsense reasoning. It seems that humans often “jump to conclusions.” For
example, when one sees a car parked on the street, one is normally willing to
believe that it has four wheels even though only three are visible. Now, probability
theory can certainly provide a conclusion that the fourth wheel exists with high
probability, yet, for most people, the possibility of the car’s not having four wheels
does not arise unless some new evidence presents itself. Thus, it seems that the
four-wheel conclusion is reached by default, in the absence of any reason to doubt
it.

If new evidence arrives—for example, if one sees the owner carrying a wheel and
notices that the car is jacked up—then the conclusion can be retracted. This kind of
reasoning is said to exhibit nonmonotonicity, because the set of beliefs does not
grow monotonically over time as new evidence arrives. Nonmonotonic logics have
been devised with modified notions of truth and entailment in order to capture such
behavior. We will look at two such logics that have been studied extensively:
circumscription and default logic.

Circumscription can be seen as a more powerful and precise version of the closed
world assumption. The idea is to specify particular predicates that are assumed to
be “as false as possible”—that is, false for every object except those for which they
are known to be true. For example, suppose we want to assert the default rule that
birds fly. We would introduce a predicate, say Abnormal1(x), and write

Bird(x) ∧ ¬Abnormal1(x) ⇒ Flies(x)

If we say that Abnormal1 is to be circumscribed, a circumscriptive reasoner is entitled to assume ¬Abnormal1(x) unless Abnormal1(x) is known to be true. This allows the conclusion Flies(Tweety) to be drawn from the premise Bird(Tweety), but the conclusion no longer holds if Abnormal1(Tweety) is asserted. The standard example for which multiple inheritance is problematic is called the "Nixon diamond." It arises from the observation that Richard Nixon was both a Quaker (and hence by default a pacifist) and a Republican (and hence by default not a pacifist). We can write this as follows:

Republican(Nixon) ∧ Quaker(Nixon)
Republican(x) ∧ ¬Abnormal2(x) ⇒ ¬Pacifist(x)
Quaker(x) ∧ ¬Abnormal3(x) ⇒ Pacifist(x)

If we circumscribe Abnormal2 and Abnormal3, there are two preferred models: one in which Abnormal2(Nixon) and Pacifist(Nixon) hold and one in which Abnormal3(Nixon) and ¬Pacifist(Nixon) hold. Thus, the circumscriptive reasoner remains properly agnostic as to whether Nixon was a pacifist. If we wish, in addition, to assert that religious beliefs take precedence over political beliefs, we can use a formalism called prioritized circumscription to give preference to models where Abnormal3 is minimized.
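A toy version of circumscription can be coded by treating the Abnormal predicate as a closed set: anything not listed is assumed normal. This sketch (the data is hypothetical) shows the default conclusion being drawn and then blocked:

# Circumscribing Abnormal1: assume it false unless known true (Python sketch).
birds = {"Tweety", "Opus"}
abnormal1 = {"Opus"}               # e.g. Opus is known to be a penguin

def flies(x):
    return x in birds and x not in abnormal1

print(flies("Tweety"))   # True: concluded by default
print(flies("Opus"))     # False: the default is blocked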

Default logic is a formalism in which default rules can be written to generate contingent, nonmonotonic conclusions. A default rule looks like this:

Bird(x) : Flies(x) / Flies(x)

This rule means that if Bird(x) is true, and if Flies(x) is consistent with the knowledge base, then Flies(x) may be concluded by default. In general, a default rule has the form

P : J1, ..., Jn / C

where P is called the prerequisite, C is the conclusion, and the Ji are the justifications—if any one of them can be proven false, then the conclusion cannot be drawn. Any variable that appears in Ji or C must also appear in P. The Nixon-diamond example can be represented in default logic with one fact and two default rules:

Republican(Nixon) ∧ Quaker(Nixon)
Republican(x) : ¬Pacifist(x) / ¬Pacifist(x)
Quaker(x) : Pacifist(x) / Pacifist(x)

To interpret what the default rules mean, we define the notion of an extension of a
default theory to be a maximal set of consequences of the theory. That is, an
extension S consists of the original known facts and a set of conclusions from the
default rules, such that no additional conclusions can be drawn from S and the
justifications of every default conclusion in S are consistent with S.
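The order-dependence of extensions can be seen in a small sketch of default-rule application over a set of string facts (a toy consistency check, not a full default-logic prover):

# Applying default rules P : J / C against a knowledge base (Python sketch).
kb = {"Republican(Nixon)", "Quaker(Nixon)"}

def neg(s):
    return s[4:] if s.startswith("not ") else "not " + s

def apply_default(prereq, justification, conclusion):
    # Conclude C if P is known and the justification is consistent with kb.
    if prereq in kb and neg(justification) not in kb:
        kb.add(conclusion)

# Applying the Quaker rule first yields the "pacifist" extension:
apply_default("Quaker(Nixon)", "Pacifist(Nixon)", "Pacifist(Nixon)")
apply_default("Republican(Nixon)", "not Pacifist(Nixon)", "not Pacifist(Nixon)")
print("Pacifist(Nixon)" in kb)       # True
print("not Pacifist(Nixon)" in kb)   # False: blocked by the first conclusion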

Truth maintenance systems


We have seen that many of the inferences drawn by a knowledge representation
system will have only default status, rather than being absolutely certain. Inevitably,
some of these inferred facts will turn out to be wrong and will have to be retracted
in the face of new information. This process is called belief revision. Suppose that a
knowledge base KB contains a sentence P—perhaps a default conclusion recorded
by a forward-chaining algorithm, or perhaps just an incorrect assertion—and we
want to execute TELL(KB, ¬P). To avoid creating a contradiction, we must first
execute RETRACT(KB, P).

This sounds easy enough. The obvious “solution”—retracting all sentences inferred
from P—fails because such sentences may have other justifications besides P. For
example, if R and R ⇒ Q are also in the KB, then Q does not have to be removed
after all. Truth maintenance systems, or TMSs, are designed to handle exactly these
kinds of complications.

One simple approach to truth maintenance is to keep track of the order in which sentences are told to the knowledge base by numbering them from P1 to Pn. When the call RETRACT(KB, Pi) is made, the system reverts to the state just before Pi was added, thereby removing both Pi and any inferences that were derived from Pi. The sentences Pi+1 through Pn can then be added again. A more efficient approach is the justification-based truth maintenance system, or JTMS. In a JTMS, each sentence in the knowledge base is annotated with a justification consisting of the set of sentences from which it was inferred.

The JTMS assumes that sentences that are considered once will probably be
considered again, so rather than deleting a sentence from the knowledge base
entirely when it loses all justifications, we merely mark the sentence as being out of
the knowledge base. If a subsequent assertion restores one of the justifications,
then we mark the sentence as being back in. In addition to handling the retraction
of incorrect information, TMSs can be used to speed up the analysis of multiple
hypothetical situations. Suppose, for example, that the Romanian Olympic
Committee is choosing sites for the swimming, athletics, and equestrian events at
the 2048 Games to be held in Romania.

For example, let the first hypothesis be Site(Swimming, Pitesti ),


Site(Athletics,Bucharest ), and Site(Equestrian, Arad). A great deal of reasoning
must then be done to work out the logistical consequences and hence the
desirability of this selection. If we want to consider Site(Athletics, Sibiu) instead, the
TMS avoids the need to start again from scratch. Instead, we simply retract
Site(Athletics,Bucharest ) and assert Site(Athletics, Sibiu) and the TMS takes care of
the necessary revisions. An assumption-based truth maintenance system, or ATMS,
makes this type of context switching between hypothetical worlds particularly
efficient. In a JTMS, the maintenance of justifications allows you to move quickly
from one state to another by making a few retractions and assertions, but at any
time only one state is represented. An ATMS represents all the states that have ever
been considered at the same time. Whereas a JTMS simply labels each sentence as
being in or out, an ATMS keeps track, for each sentence, of which assumptions
would cause the sentence to be true.
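The in/out labelling of a JTMS can be sketched with each sentence carrying a list of justifications, where each justification is a set of supporting sentences. The data below is hypothetical and mirrors the R, R ⇒ Q example earlier in this section:

# JTMS-style in/out labelling (Python sketch).
justifications = {
    "P": [set()],             # premise: empty justification, always in
    "R": [set()],
    "Q": [{"P"}, {"R"}],      # Q is supported by P, and independently by R
}

def is_in(s, retracted=frozenset()):
    if s in retracted:
        return False
    return any(all(is_in(sup, retracted) for sup in j)
               for j in justifications.get(s, []))

print(is_in("Q"))                        # True
print(is_in("Q", retracted={"P"}))       # still True: R also justifies Q
print(is_in("Q", retracted={"P", "R"}))  # False: Q goes "out"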

Truth maintenance systems also provide a mechanism for generating explanations. Technically, an explanation of a sentence P is a set of sentences E such that E entails P. If the sentences in E are already known to be true, then E simply provides a sufficient basis for proving that P must be the case. But explanations can also include assumptions—sentences that are not known to be true, but would suffice to prove P if they were true. The exact algorithms used to implement truth maintenance systems are a little complicated, and we do not cover them here. The computational complexity of the truth maintenance problem is at least as great as that of propositional inference—that is, NP-hard. When used carefully, however, a TMS can provide a substantial increase in the ability of a logical system to handle complex environments and hypotheses.

4.7 Classical Planning

What is planning in AI?

∙ Planning in Artificial Intelligence is about the decision-making tasks performed by robots or computer programs to achieve a specific goal.

∙ The execution of planning is about choosing a sequence of actions with a high likelihood of completing the specific task.

Blocks-World planning problem

∙ The blocks-world problem is known as the Sussman Anomaly.
∙ Noninterleaved planners of the early 1970s were unable to solve this problem; hence it is considered anomalous.
∙ When two subgoals G1 and G2 are given, a noninterleaved planner produces either a plan for G1 concatenated with a plan for G2, or vice versa.
∙ In the blocks-world problem, three blocks labeled 'A', 'B', 'C' are allowed to rest on a flat surface. The given condition is that only one block can be moved at a time to achieve the goal.
∙ The start state and goal state are shown in the following diagram.

1. Components of Planning System

Planning consists of the following important steps:

∙ Choose the best rule to apply next, based on the best available heuristics.
∙ Apply the chosen rule to compute the new problem state.
∙ Detect when a solution has been found.
∙ Detect dead ends so that they can be abandoned and the system's effort is directed in more fruitful directions.
∙ Detect when an almost correct solution has been found.

Goal stack planning: This is one of the most important planning algorithms, and is specifically used by STRIPS.

∙ A stack is used in the algorithm to hold the actions and the goals to be satisfied. A knowledge base is used to hold the current state and the actions.
∙ The goal stack is similar to a node in a search tree, where branches are created if there is a choice of action.

The important steps of the algorithm are as stated below (a Python sketch follows the list):

i. Start by pushing the original goal on the stack. Repeat the following until the stack becomes empty. If the stack top is a compound goal, push its unsatisfied subgoals on the stack.
ii. If the stack top is a single unsatisfied goal, replace it by an action that achieves it, and push the action's preconditions on the stack.
iii. If the stack top is an action, pop it from the stack, execute it, and change the knowledge base by applying the effects of the action.
iv. If the stack top is a satisfied goal, pop it from the stack.

4.7.2 Non-linear planning

∙ This planning uses a goal stack, but includes in the search space all possible subgoal orderings. It handles goal interactions by interleaving.

Advantage of non-linear planning

∙ Non-linear planning may produce an optimal solution with respect to plan length (depending on the search strategy used).

Disadvantages of non-linear planning

∙ It takes a larger search space, since all possible goal orderings are taken into consideration.
∙ The algorithm is more complex to understand.

Algorithm:
1. Choose a goal 'g' from the goalset.
2. If 'g' does not match the state, then:
   ∙ Choose an operator 'o' whose add-list matches goal g
   ∙ Push 'o' on the opstack
   ∙ Add the preconditions of 'o' to the goalset
3. While all preconditions of the operator on top of opstack are met in state:
   ∙ Pop operator o from top of opstack
   ∙ state = apply(o, state)
   ∙ plan = [plan; o]
4.7.3 Planners: Forward Planners

A forward planner, or progression planner, starts at the initial state and applies actions in an effort to find a path to the goal state. Until about the year 2000, forward planning was thought to be too inefficient to be practical.

Some apparent problems with forward planning are:

∙ How can you choose what action to apply to a state when there are, say, thousands of actions to choose from?
  o E.g. the textbook gives the example of buying a book online using the action scheme Buy(isbn), where isbn is an ISBN number for a book.
  o An ISBN has at least 10 digits, so a planner would have to essentially enumerate all 10 billion ISBN numbers to choose among Buy(isbn) actions.

∙ Another problem with forward planning is that the search spaces can be huge even for problems that don't seem that big.
  o E.g. the textbook gives a problem with 10 airports, where each airport has 5 planes and 20 pieces of cargo.
  o The problem is to move all the cargo at airport A to airport B.
  o The solution is easy: put all 20 pieces of cargo at A into a plane, fly it to B, and unload all the cargo (for a total of 41 actions).
  o The difficulty is that the branching factor is very large: there are 50 planes in total that can fly to 9 different airports, and 200 pieces of cargo that can be loaded or unloaded.
    ▪ That is between 450 and 10,450 actions per state!
    ▪ If we assume 2000 actions per state on average, then the obvious 41-move solution sits in a search tree with about 2000^41 nodes.

Surprisingly, forward planning's reputation changed in the early 2000s as good heuristics were found that led to the creation of efficient forward planning systems.

Planners: Backward Planners

A backward planner, or regression planner, starts at the goal state and works backwards to try to find a path to the initial state.

For many decades, backward planners were thought to be inherently more efficient than forward planners because they only consider actions that are relevant to the goal. The reason for their presumed superiority was that they result in smaller branching factors because they focus on relevant states:

∙ When you start at the goal state, you can focus your search just on the actions that are relevant to achieving that state.

However, backward planning does require that you apply actions "in reverse", which results in having to deal with sets of states instead of individual states (as in forward planning).

4.7.4 Planning Heuristics

It appears now that the best planners are forward planners. The essential reason is that very good general-purpose and problem-specific heuristics have been found for forward planners, but similarly good heuristics have not been found for backward planners.

For example, one good general-purpose forward-planning heuristic is to ignore the preconditions of actions: this gives a relaxed version of the planning problem that is still hard to solve exactly, but for which pretty good approximation algorithms exist.
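One way to see the ignore-preconditions idea is as a greedy set-cover estimate: count how many actions are needed for their add-lists to cover the outstanding goals. The sketch below (the action format is hypothetical) gives an approximation, as the text notes:

# Ignore-preconditions heuristic via greedy set cover (Python sketch).
def ignore_preconditions_h(state, goals, actions):
    remaining = set(goals) - set(state)
    cost = 0
    while remaining:
        best = max(actions, key=lambda a: len(remaining & a["adds"]))
        covered = remaining & best["adds"]
        if not covered:
            return float("inf")    # no add-list achieves a remaining goal
        remaining -= covered
        cost += 1
    return cost

acts = [{"name": "Load",   "adds": {"In(C1,P1)"}},
        {"name": "Fly",    "adds": {"At(P1,B)"}},
        {"name": "Unload", "adds": {"At(C1,B)"}}]
print(ignore_preconditions_h({"At(C1,A)"}, {"At(C1,B)"}, acts))   # 1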

Another useful technique is to create a so-called planning graph, which can give good estimates of how many actions might be needed to reach a goal.

∙ If you are curious, see the textbook for details on how to create planning graphs, and how a planning graph can be used to make a complete planner known as GraphPlan.

4.7.5 Planners: SATPlan

Another efficient approach to basic planning is to convert the planning problem into a SAT problem, and then to use a SAT solver to find a solution. The trick here is to come up with a propositional encoding of the planning problem.

∙ If you are curious, section 10.4.1 of the textbook outlines how to do this conversion: it has many steps, but it is fairly straightforward.
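To make the idea concrete, here is a toy illustration (the encoding and problem are hypothetical): a one-step wumpus shot is encoded as CNF clauses over time-stamped boolean variables, and a brute-force search stands in for a real SAT solver:

# SATPlan in miniature: CNF encoding + brute-force "solver" (Python sketch).
from itertools import product

variables = ["have0", "shoot0", "dead1"]
clauses = [
    [("have0", True)],                     # initial state: arrow at t=0
    [("dead1", True)],                     # goal: wumpus dead at t=1
    [("shoot0", False), ("have0", True)],  # shoot0 => have0 (precondition)
    [("dead1", False), ("shoot0", True)],  # dead1 => shoot0 (only cause)
]

def satisfies(model, clause):
    return any(model[v] == val for v, val in clause)

for values in product([False, True], repeat=len(variables)):
    model = dict(zip(variables, values))
    if all(satisfies(model, c) for c in clauses):
        print(sorted(v for v in variables if model[v]))
        break   # the true action variables (shoot0) form the plan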
4.8 Hierarchical Planning in AI

Hierarchical planning is a planning method based on Hierarchical Task Networks (HTN), or HTN planning. It combines ideas from Partial-Order Planning and HTN planning. HTN planning is often formulated with a single "top level" action called Act, where the aim is to find an implementation of Act that achieves the goal.

In HTN planning, the initial plan is viewed as a very high-level description of what is to be done. This plan is refined by applying decomposition actions. Each action decomposition reduces a higher-level action to a partially ordered set of lower-level actions. This decomposition continues until only primitive actions remain in the plan.

Consider the example of a hierarchical plan to travel from a certain source to a
destination:

[Figure: Hierarchical plan to travel from a certain source to a destination by bus]
∙ In the above hierarchical planner diagram, suppose we are travelling from the
source “Mumbai” to the destination “Goa”.
∙ Then, you can plan how to travel: whether by plane, bus, or car. Suppose you
choose to travel by “Bus”.

∙ Then, the “Take-Bus” plan can be further broken down into a set of actions: Goto
Mumbai Bus-stop, Buy-Ticket for Bus, Hop-on Bus, and Leave for Goa.
∙ Now, each of the four actions in the previous point can be individually broken
down. Take “Buy-Ticket for Bus”.
∙ It can be decomposed into: Go to Bus stop counter, Request Ticket, and Pay for
Ticket.

∙ Thus, each of these actions can be decomposed further, until we reach the level
of actions that can be executed without deliberation to generate the required
motor control sequences (a minimal code sketch of this decomposition follows
below).
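A minimal sketch of this refinement process in Python; the single-method table below is an assumption built from the example, whereas real HTN planners allow multiple methods per task, with preconditions and ordering constraints:

# Hypothetical method table: each compound task maps to one ordered
# decomposition; tasks absent from the table are primitive actions.
methods = {
    "Travel(Mumbai, Goa)": ["Take-Bus"],
    "Take-Bus": ["Goto Mumbai Bus-stop", "Buy-Ticket for Bus",
                 "Hop-on Bus", "Leave for Goa"],
    "Buy-Ticket for Bus": ["Go to Bus stop counter",
                           "Request Ticket", "Pay for Ticket"],
}

def decompose(task):
    """Recursively refine a task until only primitive actions remain."""
    if task not in methods:            # primitive: execute directly
        return [task]
    plan = []
    for subtask in methods[task]:      # refine each subtask in order
        plan.extend(decompose(subtask))
    return plan

print(decompose("Travel(Mumbai, Goa)"))
# ['Goto Mumbai Bus-stop', 'Go to Bus stop counter', 'Request Ticket',
#  'Pay for Ticket', 'Hop-on Bus', 'Leave for Goa']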

Here, we also use the concept of a “one-level partial-order planner”: if you plan
to take a trip, you need to decide on a location first. This can be done by a one-
level planner as:

Switch on computer > Start web browser > Open Redbus website > Select date >
Select class > Select bus > …

Advantages of Hierarchical Planning:


• The key benefit of hierarchical structure is that, at each level of the hierarchy,
the plan is reduced to a small number of activities at the next lower level, so the
computational cost of finding the correct way to arrange those activities for the
current problem is small.

• HTN methods can create the very large plans required by many real-world
applications.

• Hierarchical structure makes it easy to fix problems in case things go wrong.


• For complex problems, hierarchical planning is much more efficient than single-
level planning.

Disadvantages of Hierarchical Planning:

• Many HTN planners require a deterministic environment.

• Many HTN planners are unable to handle uncertain outcomes of actions.
Assignment

1. Nearest Neighbor Classification


This assignment engages students in basic Machine Learning concepts and
implementation, including classification and similarity-based search, with minimal
background knowledge.

Nearest neighbor classification
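For orientation, a minimal 1-nearest-neighbour classifier can be written in a few lines of Python; the toy data set below is invented for illustration:

import math

def nearest_neighbor(query, examples):
    """Classify `query` with the label of the closest training example.
    `examples` is a list of (feature_vector, label) pairs."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(examples, key=lambda ex: dist(query, ex[0]))
    return label

train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((5.0, 5.1), "B")]
print(nearest_neighbor((0.9, 1.1), train))   # 'A'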

2. Behavior-Based Robotics
Students implement a behavior-based simulated tank agent in the AutoTank
environment with a reactive behavior-based architecture built using the Unified
Behavior Framework (UBF).

Behavior based robotics

Part - A Question & Answers
1. Define ontological engineering (K1, CO4)

Ontological engineering is the process of creating a choice of representations,
concentrating on general concepts, such as Actions, Time, Physical Objects, and
Beliefs, that occur in many different domains. Representing these abstract
concepts is sometimes called ontological engineering; it is related to the
knowledge engineering process, but operates on a grander scale. The prospect of
representing everything in the world is daunting.

2. Distinguish between predicate and propositional logic. (K1, CO4)


Propositional logic (also called sentential logic) is the logic that includes sentence
letters (A, B, C) and logical connectives, but not quantifiers. The semantics of
propositional logic uses truth assignments to the letters to determine whether a
compound propositional sentence is true.


Predicate logic is usually used as a synonym for first-order logic, but sometimes it
is used to refer to other logics that have similar syntax. Syntactically, first-order
logic has the same connectives as propositional logic, but it also has variables for
individual objects, quantifiers, symbols for functions, and symbols for relations.
The semantics include a domain of discourse for the variables and quantifiers to
range over, along with interpretations of the relation and function symbols.
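To make the contrast concrete, here is a small illustrative Python sketch: the propositional sentence is settled by a truth assignment alone, while the first-order sentence is checked against a domain and interpretations (all names below are invented for the example):

# Propositional: a truth assignment to letters settles the sentence
# A and (B or not C)
v = {"A": True, "B": False, "C": False}
print(v["A"] and (v["B"] or not v["C"]))        # True

# First-order: forall x. Person(x) -> Mortal(x), checked over a domain
domain = {"socrates", "plato", "rock"}
person = {"socrates", "plato"}                  # interpretation of Person
mortal = {"socrates", "plato"}                  # interpretation of Mortal
print(all(x not in person or x in mortal for x in domain))   # True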

3. What is data-driven search? (forward chaining) (K1, CO4)


In data-driven search, sometimes called forward chaining, the problem solver
begins with the given facts and a set of legal moves or rules for changing the
state. Search proceeds by applying rules to facts to produce new facts. This
process continues until (hopefully) it generates a path that satisfies the goal
condition.

Data-driven search uses knowledge and constraints found in the given data to
search along lines known to be true. Use data-driven search if:

• All or most of the data are given in the initial problem statement.
• There are a large number of potential goals, but there are only a few ways to use
the facts and the given information of a particular problem.

• It is difficult to form a goal or hypothesis.
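A minimal sketch of propositional forward chaining in Python; the (premises, conclusion) rule format and the toy rule base are assumptions for illustration:

def forward_chain(facts, rules, goal):
    """Fire any rule whose premises are all known facts, add its
    conclusion, and repeat until the goal appears or nothing new fires.
    `rules` is a list of (premises, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed and goal not in facts:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)           # a new fact is derived
                changed = True
    return goal in facts

rules = [({"croaks", "eats flies"}, "frog"),
         ({"frog"}, "green")]
print(forward_chain({"croaks", "eats flies"}, rules, "green"))   # True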



4. What is Goal-driven search? (Backward chaining) (K1, CO4)


The goal-driven approach focuses on the goal, and searches for the facts and rules
that could produce that goal. It works by processing in a backward direction, via
rules and subgoals, to reach the facts of the problem. It is also called backward
chaining.
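A matching sketch of backward chaining over the same assumed rule format as above (note the rule base is assumed acyclic, since there is no loop check):

def backward_chain(goal, facts, rules):
    """Prove `goal` backwards: it holds if it is a known fact, or if
    some rule concludes it and every premise of that rule can itself
    be proved as a subgoal."""
    if goal in facts:
        return True
    return any(conclusion == goal and
               all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules)

rules = [({"croaks", "eats flies"}, "frog"),
         ({"frog"}, "green")]
print(backward_chain("green", {"croaks", "eats flies"}, rules))   # True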

Part - B Questions

1. Describe the differences and similarities between problem solving and
planning. (K2, CO4)

2. Describe in detail about Classical Planning. (K2, CO4)

3. Discuss about FSP and BSP. (K3, CO4)

4. Elaborate in detail about Ontological Engineering. (K2, CO4)

5. What is reasoning? Explain reasoning systems for categories and reasoning
with default information. (K2, CO4)

6. Explain Hierarchical Planning. (K3, CO3)

7. Discuss about non-deterministic domains in terms of schedule, time, and
resource analysis. (K3, CO3)

56
Supportive Online
Certification
Courses

57
Supportive Online Certification Courses

1. Udacity: Artificial Intelligence


https://www.udacity.com/course/ai-artificial-intelligence-nanodegree--nd898

2. NPTEL: Artificial Intelligence

https://nptel.ac.in/courses/106/102/106102220/

15. ASSESSMENT SCHEDULE

Tentative schedule for the Assessment during the 2022-2023 ODD semester:

S.No | Name of the Assessment | Start Date | End Date | Portion
1 | Unit Test 1 | 27.01.2025 | 01.02.2025 | UNIT 1
2 | IAT 1 | 27.01.2025 | 01.02.2025 | UNIT 1 & 2
3 | Unit Test 2 | 10.03.2025 | 15.03.2025 | UNIT 3
4 | IAT 2 | 10.03.2025 | 15.03.2025 | UNIT 3 & 4
5 | Revision 1 | 03.04.2025 | 17.04.2025 | UNIT 5, 1 & 2
6 | Revision 2 | 03.04.2025 | 17.04.2025 | UNIT 3 & 4
7 | Model | 03.04.2025 | 17.04.2025 | ALL 5 UNITS
17. PRESCRIBED TEXT BOOKS & REFERENCE BOOKS

TEXT BOOKS:
1. Stuart Russell and Peter Norvig, "Artificial Intelligence: A Modern Approach",
Fourth Edition, Pearson, 2020.

2. Ivan Bratko, "Prolog: Programming for Artificial Intelligence", Fourth Edition,
Addison-Wesley Educational Publishers Inc., 2011.

REFERENCES:
1. Elaine Rich, Kevin Knight and B. Nair, "Artificial Intelligence", Third Edition,
McGraw Hill, 2017.

2. Melanie Mitchell, "Artificial Intelligence: A Guide for Thinking Humans",
Pelican Books, 2020.

3. Ernest Friedman-Hill, "Jess in Action: Rule-Based Systems in Java", Manning
Publications, 2003.

4. Nils J. Nilsson, "The Quest for Artificial Intelligence", Cambridge University
Press, 2009.

5. Dan W. Patterson, "Introduction to Artificial Intelligence and Expert Systems",
First Edition, Pearson, India, 2015.
Mini Project Suggestions

1. Create a Semantic Web Project

It can be a social network. Create a web portal that has semantic connectivity
with other social networks, through which you can explore the concepts of
ontological engineering.

2. Optimal Path Detection

AI Project Idea: One of the challenging tasks in AI is to find the optimal path
from one place to a destination.

The project idea is to find the optimal path for a vehicle to travel so that cost and
time can be minimized. This is a business problem that needs a solution (a
minimal shortest-path sketch follows below).
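As a starting point for this project, shortest-path search over a weighted road graph is the standard baseline; here is a minimal Dijkstra sketch in Python (the toy graph and edge costs are invented for illustration):

import heapq

def dijkstra(graph, start, goal):
    """Return (cost, path) for the cheapest route in a weighted graph.
    `graph` maps node -> list of (neighbor, edge_cost) pairs."""
    frontier = [(0, start, [start])]       # (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, step in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
    return float("inf"), []

roads = {"A": [("B", 4), ("C", 2)],
         "C": [("B", 1), ("D", 7)],
         "B": [("D", 3)]}
print(dijkstra(roads, "A", "D"))           # (6, ['A', 'C', 'B', 'D'])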

Thank you

