AI Unit 3 Digital Notes
Please read this disclaimer before proceeding:
This document is confidential and intended solely for the educational purpose of
RMK Group of Educational Institutions. If you have received this document
through email in error, please notify the system manager. This document
contains proprietary information and is intended only for the respective group /
learning community as intended. If you are not the addressee you should not
disseminate, distribute or copy through e-mail. Please notify the sender
immediately by e-mail if you have received this document by mistake and delete
this document from your system. If you are not the intended recipient you are
notified that disclosing, copying, distributing or taking any action in reliance on
the contents of this information is strictly prohibited.
22AI301 ARTIFICIAL INTELLIGENCE
(Lab Integrated)
Department: CSE
Batch/Year: 2023-2027/II
Created by: Dr. R. Sasikumar
Dr. M. Arun Manicka Raja
Dr. G. Indra
Dr. M. Raja Suguna
Mr. K. Mohanasundaram
Mrs. S. Logesswari
Date: 25.01.2024
Table of Contents
1. Contents
2. Course Objectives
3. Pre-Requisites
4. Syllabus
5. Course Outcomes
6. CO-PO/PSO Mapping
7. Lecture Plan
8. Activity Based Learning
9. Lecture Notes
10. Assignments
12. Part B Questions
1. COURSE OBJECTIVES
3. PRE-REQUISITES
PRE-REQUISITE CHART
21CS201 - Data Structures
21MA402 - Probability and Statistics
21CS502 - Artificial Intelligence
4. SYLLABUS
22AI301 - ARTIFICIAL INTELLIGENCE (L T P C : 3 0 2 4)
Lab Programs:
1. Implement basic search strategies – 8-Puzzle, 8-Queens problem.
2. Implement Breadth First Search & Depth First Search algorithms.
3. Implement Water Jug problem.
4. Solve Tic-Tac-Toe problem.
5. COURSE OUTCOMES
Course Code | Course Outcome Statement | Cognitive/Affective Level of the Outcome
Course Outcome Statements in Cognitive Domain
6.CO-PO/PSO MAPPING
Course P P P P P P P P P P P P PS PS PS
Outcomes
O O O O O O O O O O O O O O O
(Cos)
1 2 3 4 5 6 7 8 9 10 11 12 1 2 3
21AI401.1 K
2 3 3 1 - - - - - - - - - - - -
21AI401.2 K
3 3 2 1 - - - - - - - - - - - -
21AI401.3 K
3 3 2 1 - - - - - - - - - - - -
5
21AI401.4 K
3 3 3 2 - - - - - - - - - - - -
21AI401.5 K
2 3 2 2 - - - - - - - - - - - -
10
7. LECTURE PLAN – UNIT III
Sl.No | Topic | Periods | Proposed Date | Actual Date | CO | Taxonomy Level | Mode of Delivery
1 | Knowledge based agents - Logic - Propositional logic | 1 | 19.09.2023 | - | CO3 | K3 | MD1, MD5
2 | Propositional theorem proving - Propositional model checking | 1 | 20.09.2023 | - | CO3 | K3 | MD1, MD5
3 | Agents based on propositional logic - First order logic - Syntax and semantics | 1 | 21.09.2023 | - | CO3 | K3 | MD1, MD5
4 | Using first order logic | 1 | 22.09.2023 | - | CO3 | K3 | MD1, MD5
5 | Knowledge representation and engineering | 1 | 23.09.2023 | - | CO3 | K3 | MD1, MD5
6 | Inference in first order logic | 1 | 25.09.2023 | - | CO3 | K3 | MD1, MD5
8. Activity Based Learning
1. Play a game where the other person thinks of an animal you must identify. Does it
have feathers? Does it have fur? Does it walk on four legs? Is it black? In a small
number of questions we narrow down the possibilities until we know what it is.
The more we know about the features of animals, the easier it is to narrow it
down. A machine learning algorithm learns about the features of the specific
things it classifies. Each feature rules out some possibilities, but leaves others.
Returning to our robot that makes expressions. Was it a loud sound? Yes. Was it
sudden? No. … The robot should look unhappy. The right combination of features
allows the algorithm to narrow the possibilities down to one thing. The more data
the algorithm was trained on, the more patterns it can spot. The more patterns it
spots, the more rules about features it can create. The more rules about features
it has, the more questions it can base its decision on.
9. LECTURE NOTES-UNIT III
SYLLABUS
Knowledge Based Agents-Logic-Propositional logic- Propositional theorem proving-
Propositional model Checking- Agents based on propositional Logic-First Order Logic –
Propositional Vs First Order Inference – Unification and First Order Inference – Forward
Chaining- Backward Chaining – Resolution
Declarative approach: A knowledge-based agent can be built simply by telling it
what it needs to know. Starting with an empty knowledge base, the agent designer can
TELL sentences one by one until the agent knows how to operate in its environment.
3.1.1 Logic
A representation language is defined by its syntax, which specifies the structure of
sentences, and its semantics, which defines the truth of each sentence in each possible
world or model.
Syntax: The sentences in the KB are expressed according to the syntax of the representation
language, which specifies all the sentences that are well formed.
Semantics: The semantics defines the truth of each sentence with respect to
each possible world.
Models: We use the term model in place of “possible world” when we need to be
precise. Possible worlds might be thought of as (potentially) real environments that the
agent might or might not be in; models are mathematical abstractions, each of which
simply fixes the truth or falsehood of every relevant sentence.
If KB is true in the real world, then any sentence α derived from KB by a sound
inference procedure is also true in the real world.
Grounding: The connection between logical reasoning process and the real environment in
which the agent exists.
Syntax
The syntax defines the allowable sentences.
Atomic sentences: consist of a single proposition symbol; each such symbol stands for
a proposition that can be true or false. (e.g. W1,3 stands for the proposition that the
wumpus is in [1, 3].)
Complex sentences: constructed from simpler sentences, using parentheses and logical
connectives.
Semantics
The semantics defines the rules for determining the truth of a sentence with respect
to a particular model.
The semantics for propositional logic must specify how to compute the truth value of
any sentence, given a model.
For atomic sentences: The truth value of every other proposition symbol must be
specified directly in the model.
Modus Ponens: Whenever sentences of the form α ⇒ β and α are given, the
sentence β can be inferred.
• And-Elimination: from a conjunction, any of the conjuncts can be inferred.
e.g.
KB = (B1,1 ⟺ (P1,2 ∨ P2,1)) ∧ ¬B1,1
α = ¬P1,2
Notice: Any clause in which two complementary literals appear can be discarded,
because it is always equivalent to True.
e.g. B1,1 ∨ ¬B1,1 ∨ P1,2 = True ∨ P1,2 = True.
PL-RESOLUTION is complete.
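The resolution step and PL-RESOLUTION's saturation loop can be sketched in Python. This is a minimal sketch, not the book's pseudocode: clauses are frozensets of string literals with "~" marking negation, an encoding chosen here for illustration.

```python
def negate(lit):
    # complement of a literal: "~B11" <-> "B11"
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents obtainable from two clauses (frozensets of literals)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def pl_resolution(kb_clauses, neg_alpha_clauses):
    """KB entails α iff resolution derives the empty clause from KB ∧ ¬α."""
    clauses = set(kb_clauses) | set(neg_alpha_clauses)
    while True:
        new = set()
        for ci in clauses:
            for cj in clauses:
                if ci is not cj:
                    for r in resolve(ci, cj):
                        if not r:
                            return True      # empty clause: contradiction
                        new.add(r)
        if new <= clauses:
            return False                     # saturated without contradiction
        clauses |= new

# KB from the text, (B1,1 ⟺ (P1,2 ∨ P2,1)) ∧ ¬B1,1, already in CNF:
kb = [frozenset({"~B11", "P12", "P21"}), frozenset({"~P12", "B11"}),
      frozenset({"~P21", "B11"}), frozenset({"~B11"})]
# α = ¬P1,2, so ¬α contributes the unit clause P12
print(pl_resolution(kb, [frozenset({"P12"})]))   # True
```

Because the set of distinct clauses over a finite vocabulary is finite, the loop is guaranteed to terminate either with the empty clause or by saturation.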
Horn clause: A disjunction of literals of which at most one is positive. (All definite
clauses are Horn clauses.) In Horn form, the premise is called the body and the
conclusion is called the head. A sentence consisting of a single positive literal is
called a fact; it too can be written in implication form.
Horn clauses are closed under resolution: if you resolve two Horn clauses, you get back
a Horn clause. Inference with Horn clauses can be done through the forward-chaining
and backward-chaining algorithms.
Deciding entailment with Horn clauses can be done in time that is linear in the size of
the knowledge base.
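The linear-time entailment check is the idea behind PL-FC-ENTAILS: keep a count of unproved premises for each clause and fire the clause when its count reaches zero. A Python sketch, assuming clauses are encoded as (premise set, conclusion) pairs:

```python
def pl_fc_entails(clauses, query):
    """Forward chaining for propositional definite clauses.
    clauses: list of (premises, conclusion); known facts have empty premises."""
    count = {i: len(prem) for i, (prem, _) in enumerate(clauses)}
    inferred = set()
    agenda = [concl for prem, concl in clauses if not prem]
    while agenda:
        p = agenda.pop()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(clauses):
            if p in prem:
                count[i] -= 1                # one fewer premise left to prove
                if count[i] == 0:
                    agenda.append(concl)     # clause fires: head becomes known
    return False

# a small example: P⇒Q, L∧M⇒P, B∧L⇒M, A∧P⇒L, A∧B⇒L, plus facts A, B
kb = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
      ({"A", "P"}, "L"), ({"A", "B"}, "L"), (set(), "A"), (set(), "B")]
print(pl_fc_entails(kb, "Q"))   # True
```

Each symbol is processed at most once and each premise counter is decremented at most once per symbol, which is where the linear bound comes from.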
On every iteration, the WalkSAT algorithm picks an unsatisfied clause and chooses
randomly between two ways to pick a symbol to flip:
Either a. a “min-conflicts” step that minimizes the number of unsatisfied clauses in the
new state; or b. a “random walk” step that picks the symbol randomly.
When the algorithm returns a model, the input sentence is indeed satisfiable; when it
fails to find one within its flip budget, satisfiability cannot be concluded either way.
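The two-way flip choice can be sketched directly. This is an illustrative implementation, with clauses as sets of string literals ("~A" negating "A"); the encoding and parameter defaults are assumptions:

```python
import random

def walksat(clauses, p=0.5, max_flips=10_000):
    """WalkSAT sketch. Clauses are sets of string literals; "~A" negates "A"."""
    symbols = {lit.lstrip("~") for clause in clauses for lit in clause}
    model = {s: random.choice([True, False]) for s in symbols}

    def satisfied(clause):
        # literal "A" holds when model[A] is True, "~A" when it is False
        return any(model[l.lstrip("~")] != l.startswith("~") for l in clause)

    def num_unsat():
        return sum(not satisfied(c) for c in clauses)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return model                       # model satisfies the sentence
        clause = random.choice(unsat)
        if random.random() < p:                # random-walk step
            sym = random.choice(sorted(clause)).lstrip("~")
        else:                                  # min-conflicts step
            def cost(s):
                model[s] = not model[s]
                c = num_unsat()
                model[s] = not model[s]
                return c
            sym = min(sorted({l.lstrip("~") for l in clause}), key=cost)
        model[sym] = not model[sym]
    return None                                # failure: satisfiability unknown
```

Returning None on exhaustion mirrors the point above: WalkSAT can confirm satisfiability but never refute it.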
Satisfiability threshold conjecture: for every k ≥ 3, there is a threshold ratio rk such
that, as n goes to infinity, the probability that CNFk(n, rn) is satisfiable becomes 1 for
all values of r below the threshold, and 0 for all values above. (The conjecture remains
unproven.)
3.1.1.4 Agent based on propositional logic
Frame problem: some information is lost because the effect axioms fail to state
what remains unchanged as the result of an action.
Solution: add frame axioms explicitly asserting all the propositions that remain the
same.
We use a logical sentence involving the proposition symbols associated with the
current time step and the temporal symbols. Logical state estimation involves
maintaining a logical sentence that describes the set of possible states consistent
with the observation history. Each update step requires inference using the transition
model of the environment, which is built from successor-state axioms that specify
how each fluent changes.
State estimation: The process of updating the belief state as new percepts arrive.
Exact state estimation may require logical formulas whose size is exponential in the
number of symbols. One common scheme for approximate state estimation: to
represent belief state as conjunctions of literals (1-CNF formulas).
The agent simply tries to prove Xt and ¬Xt for each symbol Xt, given the belief state
at t-1.The conjunction of provable literals becomes the new belief state, and the
previous belief state is discarded.(This scheme may lose some information as time
goes along.)
The set of possible states represented by the 1-CNF belief state includes all states
that are in fact possible given the full percept history. The 1-CNF belief state acts as
a simple outer envelope, or conservative approximation.
4. Making plans by propositional inference
We can make plans by logical inference instead of A* search in Figure 7.20.
Basic idea:
1. Construct a sentence that includes:
a) Init0: a collection of assertions about the initial state;
b) Transition1, …, Transitiont: the successor-state axioms for all possible actions at
each time up to some maximum time t;
c) HaveGoldt ∧ ClimbedOutt: the assertion that the goal is achieved at time t.
2. Present the whole sentence to a SAT solver. If the solver finds a satisfying model,
the goal is achievable; else the planning is impossible.
3. Assuming a model is found, extract from the model those variables that represent
actions and are assigned true.
Together they represent a plan to achieve the goals.
Decisions within a logical agent can be made by SAT solving: finding possible models
specifying future action sequences that reach the goal. This approach works only for
fully observable or sensorless environments.
SATPLAN: a propositional planning algorithm. (It cannot be used in a partially
observable environment.)
SATPLAN finds models for a sentence containing the initial state, the goal, the
successor-state axioms, and the action exclusion axioms.
(Because the agent does not know how many steps it will take to reach the goal, the
algorithm tries each possible number of steps t up to some maximum conceivable
plan length Tmax.)
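The outer loop over plan lengths can be sketched end to end on a toy one-fluent world. Everything here is an illustrative assumption, not from the text: the symbol names (Have0, ActGet0), the toy encoding, and the brute-force solver standing in for a real SAT solver.

```python
from itertools import product

def brute_force_sat(clauses, symbols):
    """Toy SAT solver: try every assignment (fine for tiny encodings)."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(any(model[l.lstrip("~")] != l.startswith("~") for l in c)
               for c in clauses):
            return model
    return None

def satplan(encode, t_max):
    """Try plan lengths t = 0 .. t_max; extract action symbols set to true."""
    for t in range(t_max + 1):
        clauses, symbols = encode(t)
        model = brute_force_sat(clauses, symbols)
        if model is not None:
            return [s for s, v in model.items() if v and s.startswith("Act")]
    return None

def toy_encode(t):
    """Init: ¬Have0. Successor-state axiom: Have(i+1) ⟺ ActGet(i) ∨ Have(i).
    Goal: Have(t). Returns (CNF clauses, symbol list)."""
    clauses = [frozenset({"~Have0"})]
    symbols = ["Have0"]
    for i in range(t):
        clauses += [frozenset({f"~Have{i+1}", f"ActGet{i}", f"Have{i}"}),
                    frozenset({f"~ActGet{i}", f"Have{i+1}"}),
                    frozenset({f"~Have{i}", f"Have{i+1}"})]
        symbols += [f"Have{i+1}", f"ActGet{i}"]
    clauses.append(frozenset({f"Have{t}"}))
    return clauses, symbols

print(satplan(toy_encode, 3))   # ['ActGet0']: a one-step plan reaches the goal
```

At t = 0 the encoding is unsatisfiable (the goal contradicts the initial state), so the loop moves on and finds the shortest satisfying plan at t = 1.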
1. Logic
The knowledge bases consist of sentences and these sentences are
expressed according to the syntax of the representation language, which specifies all
the sentences that are well formed. The notion of syntax is clear enough in ordinary
Propositional Logic
The syntax of propositional logic and its semantics are the way in which
the truth of sentences is determined. Then we look at entailment, the relation
between a sentence and another sentence that follows from it, and see how this
leads to a simple algorithm for logical inference.
Truth tables for the five logical connectives are given below. To use the table to
compute, for example, the value of P ∨ Q when P is true and Q is false, first look on
the left for the row where P is true and Q is false (the third row). Then look in that
row under the P ∨ Q column to see that the result is true.
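Since each connective is just a Boolean function, the whole table can be generated mechanically. A short sketch; the row order matches the text, so the third row is P = true, Q = false:

```python
from itertools import product

CONNECTIVES = {
    "not P":   lambda P, Q: not P,
    "P and Q": lambda P, Q: P and Q,
    "P or Q":  lambda P, Q: P or Q,
    "P => Q":  lambda P, Q: (not P) or Q,   # false only when P true, Q false
    "P <=> Q": lambda P, Q: P == Q,
}

def truth_table():
    """Return one dict per row, covering all four models of P and Q."""
    rows = []
    for P, Q in product([False, True], repeat=2):
        row = {"P": P, "Q": Q}
        row.update({name: fn(P, Q) for name, fn in CONNECTIVES.items()})
        rows.append(row)
    return rows

third = truth_table()[2]          # P = true, Q = false, as in the text
print(third["P or Q"])            # True
```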
First-order logic has been so important to mathematics, philosophy, and artificial
intelligence precisely because those fields (and indeed much of everyday human
existence) can be usefully thought of as dealing with objects and the relations among
them. First-order logic can also express facts about some or all of the objects in the
universe.
• Functions: father of, best friend, third inning of, one more than, beginning of . . .
Models for first-order logic are much more interesting. The domain of a model is the
set of objects or domain elements it contains.
The domain is required to be nonempty: every possible world must
contain at least one object. The objects in the model may be related in various ways.
In the figure, Richard and John are brothers. Formally speaking, a relation is just the
set of tuples of objects that are related. (A tuple is a collection of objects arranged in
a fixed order and is written with angle brackets surrounding the objects.) Thus, the
brotherhood relation in this model is the set
{ ⟨Richard the Lionheart, King John⟩, ⟨King John, Richard the Lionheart⟩ }.
The crown is on King John’s head, so the “on head” relation contains just one tuple,
⟨the crown, King John⟩. The “brother” and “on head” relations are binary relations; that is, they
relate pairs of objects. The model also contains unary relations, or properties: the “person”
property is true of both Richard and John; the “king” property is true only of John
(presumably because Richard is dead at this point); and the “crown” property is true only of
the crown. For example, each person has one left leg, so the model has a unary “left leg”
function that includes the following mappings:
3.2.4 Symbols and interpretations of First-Order Logic
The basic syntactic elements of first-order logic are the symbols that stand
for objects, relations, and functions. The symbols, therefore, come in three kinds:
constant symbols, which stand for objects; predicate symbols, which stand for
relations; and function symbols, which stand for functions. We adopt the convention
that these symbols will begin with uppercase letters. For example, we might use the
constant symbols Richard and John; the predicate symbols Brother , OnHead,
Person, King, and Crown; and the function symbol LeftLeg. As with proposition
symbols, the choice of names is entirely up to the user. Each predicate and function
symbol comes with an arity that fixes the number of arguments. An interpretation
specifies exactly which objects, relations and functions are referred to by the
constant, predicate, and function symbols. One possible interpretation for our
example, which a logician would call the intended interpretation, is as follows:
• Richard refers to Richard the Lionheart and John refers to the evil King John.
• Brother refers to the brotherhood relation, that is, the set of tuples of objects.
OnHead refers to the “on head” relation that holds between the crown and King
John; Person, King, and Crown refer to the sets of objects that are persons, kings,
and crowns.
• LeftLeg refers to the “left leg” function, that is, the mapping .
A term is a logical expression that refers to an object. Constant symbols
are therefore terms, but it is not always convenient to have a distinct symbol to
name every object. For example, in English we might use the expression “King
John’s left leg” rather than giving a name to his leg. This is what function symbols
are for: instead of using a constant symbol, we use LeftLeg(John). In the general
case, a complex term is formed by a function symbol followed by a parenthesized list
of terms as arguments to the function symbol. An atomic sentence (or atom for
short) is formed from a predicate symbol optionally followed by a
parenthesized list of terms, such as
Brother(Richard, John).
This states, under the intended interpretation given earlier, that Richard the
Lionheart is the brother of King John.
We can use logical connectives to construct more complex sentences, with the same
syntax and semantics as in propositional calculus.
¬Brother(LeftLeg(Richard), John)
Brother(Richard, John) ∧ Brother(John, Richard)
King(Richard) ∨ King(John)
¬King(Richard) ⇒ King(John).
3.2.5 Quantifiers
First-order logic contains two standard quantifiers, called universal and existential.
∀x King(x) ⇒ Person(x).
∀ is usually pronounced “For all . . .”. (Remember that the upside-down A stands for
“all.”) Thus, the sentence says, “For all x, if x is a king, then x is a person.” The
symbol x is called a variable. By convention, variables are lowercase letters. A
variable is a term all by itself, and as such can also serve as the argument of a
function, for example LeftLeg(x). A term with no variables is called a ground term.
The sentence ∀x P, where P is any logical expression, says that P is true for every
object x. More precisely, ∀x P is true in a given model if P is true in all possible
extended interpretations constructed from the interpretation given in the model,
where each extended interpretation specifies a domain element to which x refers
x → the crown.
The universally quantified sentence ∀x King(x) ⇒ Person(x) is true in the original
model if the sentence King(x) ⇒ Person(x) is true under each of the five extended
interpretations. That is, the universally quantified sentence is equivalent to asserting
the following five sentences:
The fifth assertion is true in the model, so the original existentially quantified
sentence is true in the model.
Nested quantifiers
The simplest case is where the quantifiers are of the same type. For example,
“Brothers are siblings” can be written as
∀x ∀y Brother(x, y) ⇒ Sibling(x, y).
Mixtures of quantifiers are also possible: “Everybody loves somebody” is written
∀x ∃y Loves(x, y).
On the other hand, to say “There is someone who is loved by everyone,” we write
∃y ∀x Loves(x, y).
The order of quantification is therefore very important. It becomes clearer if we
insert parentheses. ∀x (∃y Loves(x, y)) says that everyone has a particular property,
namely, the property that they love someone.
We can go one step further: “Everyone likes ice cream” means that there is no one
who does not like ice cream:
∀x Likes(x, IceCream) is equivalent to ¬∃x ¬Likes(x, IceCream).
Because ∀ is really a conjunction over the universe of objects and ∃ is a disjunction,
it should not be surprising that they obey De Morgan’s rules. The De Morgan rules
for quantified and unquantified sentences are as follows:
∀x ¬P ≡ ¬∃x P      ¬(P ∨ Q) ≡ ¬P ∧ ¬Q
¬∀x P ≡ ∃x ¬P      ¬(P ∧ Q) ≡ ¬P ∨ ¬Q
∀x P ≡ ¬∃x ¬P      P ∧ Q ≡ ¬(¬P ∨ ¬Q)
∃x P ≡ ¬∀x ¬P      P ∨ Q ≡ ¬(¬P ∧ ¬Q)
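On a finite domain the quantified equivalences can be checked directly, with ∀ as all() and ∃ as any(). The predicate and domain below are illustrative:

```python
def de_morgan_holds(pred, domain):
    """Check the quantified De Morgan rules on a finite domain:
    ∀x P(x) ≡ ¬∃x ¬P(x)   and   ∃x P(x) ≡ ¬∀x ¬P(x)."""
    forall_p = all(pred(x) for x in domain)
    exists_p = any(pred(x) for x in domain)
    return (forall_p == (not any(not pred(x) for x in domain)) and
            exists_p == (not all(not pred(x) for x in domain)))

people = ["Richard", "John", "Crown"]
# "everyone likes ice cream" iff "no one does not like ice cream"
print(de_morgan_holds(lambda x: True, people))         # True
print(de_morgan_holds(lambda x: x == "John", people))  # True
```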
Equality
First-order logic includes one more way to make atomic sentences, other than using
a predicate and terms as described earlier. We can use the equality symbol to signify
that two terms refer to the same object. For example,
Father (John)=Henry
says that the object referred to by Father (John) and the object referred to by Henry
are the same. One proposal that is very popular in database systems works as
follows. First, we insist that every constant symbol refer to a distinct object (the
so-called unique-names assumption). Second, we assume that atomic sentences not
known to be true are in fact false (the closed-world assumption).
3.3 UNIFICATION
∀x King(x) ∧ Greedy(x) ⇒ Evil(x).
...
The rule of Universal Instantiation (UI for short) says that we can infer any sentence
obtained by substituting a ground term (a term without variables) for the variable.
To write out the inference rule formally, we use the notion of substitutions. In the
rule for Existential Instantiation, the variable is replaced by a single new constant
symbol. For example, from the sentence
King(John)
Greedy(John)
Then we apply UI to the first sentence using all possible ground-term substitutions
from the vocabulary of the knowledge base; in this case, {x/John} and {x/Richard}.
We obtain
Suppose that, instead of knowing Greedy(John), we know that everyone is greedy:
∀y Greedy(y).
Then we would still like to be able to conclude that Evil(John), because we know
that John is a king (given) and John is greedy (because everyone is greedy).
Applying the substitution {x/John, y/John} to the implication premises King(x) and
Greedy(x) and the knowledge-base sentences King(John) and Greedy(y) will make
them identical. Thus, we can infer the conclusion of the implication. This inference
process can be captured as a single inference rule that we call Generalized Modus
Ponens.
3.3.2 Unification
Lifted inference rules require finding substitutions that make different
logical expressions look identical. This process is called unification and is a key
component of all first-order inference algorithms. The UNIFY algorithm takes two
sentences and returns a unifier for them if one exists:
The last unification fails because x cannot take on the values John and Elizabeth at
the same time. Now, remember that Knows(x, Elizabeth) means “Everyone knows
Elizabeth,” so we should be able to infer that John knows Elizabeth. The problem
arises only because the two sentences happen to use the same variable name, x.
The problem can be avoided by standardizing apart one of the two sentences being
unified, which means renaming its variables to avoid name clashes. For example, we
can rename x in Knows(x, Elizabeth) to x17 (a new variable name) without changing
its meaning. Now the unification will work:
For example, UNIFY(Knows(John, x), Knows(y, z)) could return {y/John, x/z} or
{y/John, x/John, z/John}. The first unifier gives Knows(John, z) as the result of
unification, whereas the second gives Knows(John, John). The second result could
be obtained from the first by an additional substitution {z/John}; we say that the
first unifier is more general than the second, because it places fewer restrictions on
the values of the variables. It turns out that, for every unifiable pair of expressions,
there is a single most general unifier (MGU) that is unique up to
renaming and substitution of variables. (For example, {x/John} and {y/John} are
considered equivalent, as are {x/John, y/John} and {x/John, y/x}.) In this case it is
{y/John, x/z}.
The algorithm works by comparing the structures of the inputs, element by element.
The substitution θ that is the argument to UNIFY is built up along the way and is
used to make sure that later comparisons are consistent with bindings that were
established earlier. In a compound expression such as F(A,B), the OP field picks out
the function symbol F and the ARGS field picks out the argument list (A,B).
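The UNIFY algorithm described above (compare OP and ARGS element by element, building θ as you go) can be sketched in Python. In this sketch, variables are lowercase strings, constants are capitalized strings, and compound expressions are tuples whose first element is the symbol; the occur check is omitted, as the representation is an assumption for illustration:

```python
def is_variable(t):
    return isinstance(t, str) and t[:1].islower()

def unify(x, y, theta=None):
    """Return the most general unifier of x and y, or None if they fail to
    unify. Compound expressions are tuples: ("Knows", "John", "x")."""
    if theta is None:
        theta = {}
    if x == y:
        return theta
    if is_variable(x):
        return unify_var(x, y, theta)
    if is_variable(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):        # OP first, then ARGS, element by element
            theta = unify(xi, yi, theta)
            if theta is None:
                return None
        return theta
    return None

def unify_var(var, t, theta):
    if var in theta:                    # stay consistent with earlier bindings
        return unify(theta[var], t, theta)
    if is_variable(t) and t in theta:
        return unify(var, theta[t], theta)
    return {**theta, var: t}            # occur check omitted for brevity

print(unify(("Knows", "John", "x"), ("Knows", "y", "z")))  # {'y': 'John', 'x': 'z'}
```

Note that the returned substitution {y/John, x/z} is exactly the MGU discussed above, and that unifying Knows(John, x) with Knows(x, Elizabeth) fails until the sentences are standardized apart.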
3.3.3 Storage and retrieval
Underlying the TELL and ASK functions used to inform and interrogate a
knowledge base are the more primitive STORE and FETCH functions. STORE(s)
stores a sentence s into the knowledge base and FETCH(q) returns all unifiers such
that the query q unifies with some sentence in the knowledge base. The problem we
used to illustrate unification (finding all facts that unify with Knows(John, x)) is an
instance of fetching. The simplest way to implement STORE and FETCH is to keep all
the facts in one long list and unify each query against every element of the list. We
can make FETCH more efficient by ensuring that unifications are attempted only with
sentences that have some chance of unifying.
Predicate indexing is useful when there are many predicate symbols but only a few
clauses for each symbol. Sometimes, however, a predicate has many clauses. For
example, suppose that the tax authorities want to keep track of who employs whom,
using a predicate Employs(x, y). This would be a very large bucket with perhaps
millions of employers and tens of millions of employees. Answering a query such as
Employs(x, Richard) with predicate indexing would require scanning the entire
bucket.
For other queries, such as Employs(IBM, y), we would need to have indexed the
facts by combining the predicate with the first argument. Therefore, facts can be
stored under multiple index keys, rendering them instantly accessible to various
queries that they might unify with.
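A minimal STORE/FETCH with combined predicate + first-argument index keys might look like the following sketch; the class name, fact representation, and lowercase-variable convention are all assumptions for illustration:

```python
from collections import defaultdict

class FactBase:
    """Facts are tuples like ("Employs", "IBM", "Richard"); lowercase strings
    in queries are variables. Each fact is stored under a predicate-only key
    and under a predicate + first-argument key."""

    def __init__(self):
        self.index = defaultdict(list)

    def store(self, fact):
        pred, args = fact[0], fact[1:]
        self.index[(pred, None)].append(fact)
        if args:
            self.index[(pred, args[0])].append(fact)

    def fetch(self, query):
        pred, args = query[0], query[1:]
        if args and not args[0][:1].islower():     # first argument is ground:
            return self.index[(pred, args[0])]     # use the small bucket
        return self.index[(pred, None)]            # fall back to the big bucket

kb = FactBase()
kb.store(("Employs", "IBM", "Richard"))
kb.store(("Employs", "Acme", "John"))
print(kb.fetch(("Employs", "IBM", "y")))   # [('Employs', 'IBM', 'Richard')]
```

With millions of Employs facts, the (pred, first-arg) bucket for Employs(IBM, y) stays small even though the predicate-only bucket is huge, which is exactly the point of multiple index keys.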
Given a sentence to be stored, it is possible to construct indices for all possible
queries that unify with it. For the fact Employs(IBM, Richard), the queries are
These queries form a subsumption lattice, as shown in Figure 3.4(a). The lattice has
some interesting properties. For example, the child of any node in the lattice is
obtained from its parent by a single substitution; and the “highest” common
descendant of any two nodes is the result of applying their most general unifier. The
portion of the lattice above any ground fact can be constructed systematically. A
sentence with repeated constants has a slightly different lattice, as shown in Figure
3.4(b). Function symbols and variables in the sentences to be stored introduce still
more interesting lattice structures. For most AI systems, the number of facts to be
stored is small enough that efficient indexing is considered a solved problem. For
commercial databases, where facts number in the billions, the problem has been the
subject of intensive study and technology development.
3.4.1 First-order definite clauses
A definite clause either is atomic or is an implication whose antecedent is
a conjunction of positive literals and whose consequent is a single positive literal.
The following are first-order definite clauses:
King(John) .
Greedy(y) .
Unlike propositional literals, first-order literals can include variables, in which case
those variables are assumed to be universally quantified. (Typically, we omit universal
quantifiers when writing definite clauses.) Not every knowledge base can be
converted into a set of definite clauses because of the single-positive-literal
restriction, but many can. Consider the following problem:
The law says that it is a crime for an American to sell weapons to hostile
nations. The country Nono, an enemy of America, has some missiles, and all of its
missiles were sold to it by Colonel West, who is American.
We will prove that “West is a criminal”. First, we will represent these facts as
first-order definite clauses. The next section shows how the forward-chaining algorithm
solves the problem.
Owns(Nono,M1) --------------- 2
Missile(M1)-------------------- 3
“All of its missiles were sold to it by Colonel West”:
American(West) .---------------- 7
Enemy(Nono,America) . -------- 8
This knowledge base contains no function symbols and is therefore an instance of
the class of Datalog knowledge bases. Datalog is a language that is
restricted to first-order definite clauses with no function symbols. Datalog gets its
name because it can represent the type of statements typically made in relational
databases.
We use our crime problem to illustrate how FOL-FC-ASK works. The implication
sentences are (1), (4), (5), and (6). Two iterations are required:
The function STANDARDIZE-VARIABLES replaces all variables in its arguments with
new ones that have not been used before.
Step 1:
Step 2:
Step 3:
Forward-chaining Algorithm
Example:
Solution:
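A naive FOL-FC-ASK for Datalog-style rules can be sketched with simple pattern matching. This is an illustrative sketch, not the book's algorithm: lowercase strings act as variables, and the crime-example encoding below paraphrases the facts and rules stated earlier in this section:

```python
def match(pattern, fact, theta):
    """Extend substitution theta so pattern (lowercase strings are variables)
    matches the ground fact, or return None."""
    if len(pattern) != len(fact) or pattern[0] != fact[0]:
        return None
    theta = dict(theta)
    for p, f in zip(pattern[1:], fact[1:]):
        if p.islower():                       # variable
            if theta.get(p, f) != f:
                return None                   # clashes with an earlier binding
            theta[p] = f
        elif p != f:                          # mismatched constant
            return None
    return theta

def forward_chain(facts, rules):
    """Naive forward chaining for (premises, conclusion) rules:
    repeat until no new ground facts can be derived."""
    facts = set(facts)
    while True:
        new = set()
        for premises, conclusion in rules:
            thetas = [{}]
            for prem in premises:             # match conjuncts left to right
                thetas = [t2 for t in thetas for f in facts
                          for t2 in (match(prem, f, t),) if t2 is not None]
            for theta in thetas:
                fact = (conclusion[0],) + tuple(theta.get(a, a)
                                                for a in conclusion[1:])
                if fact not in facts:
                    new.add(fact)
        if not new:
            return facts
        facts |= new

facts = {("American", "West"), ("Missile", "M1"),
         ("Owns", "Nono", "M1"), ("Enemy", "Nono", "America")}
rules = [
    ([("American", "x"), ("Weapon", "y"), ("Sells", "x", "y", "z"),
      ("Hostile", "z")], ("Criminal", "x")),
    ([("Missile", "x")], ("Weapon", "x")),
    ([("Missile", "x"), ("Owns", "Nono", "x")], ("Sells", "West", "x", "Nono")),
    ([("Enemy", "x", "America")], ("Hostile", "x")),
]
print(("Criminal", "West") in forward_chain(facts, rules))   # True
```

As in the text, the first iteration derives Weapon(M1), Sells(West, M1, Nono) and Hostile(Nono); the second derives Criminal(West).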
3.4.3 Efficient forward chaining
There are three possible sources of inefficiency. First, the “inner loop” of
the algorithm involves finding all possible unifiers such that the premise of a rule
unifies with a suitable set of facts in the knowledge base. This is often called pattern
matching and can be very expensive. Second, the algorithm rechecks every rule on
every iteration to see whether its premises are satisfied, even if very few additions
are made to the knowledge base on each iteration. Finally, the algorithm might
generate many facts that are irrelevant to the goal.
Then we need to find all the facts that unify with Missile(x); in a suitably indexed
knowledge base, this can be done in constant time per fact. Now consider a rule
such as Missile(x) ∧ Owns(Nono, x) ⇒ Sells(West, x, Nono).
Again, we can find all the objects owned by Nono in constant time per object; then,
for each object, we could check whether it is a missile. If the knowledge base
contains many objects owned by Nono and very few missiles, however, it would be
better to find all the m issiles first and then check whether they are owned by Nono.
This is the conjunct ordering problem: find an ordering to solve the conjuncts of the
rule prem ise so that the total cost is minim ized. It turns out that finding the optimal
ordering is NP-hard, but good heuristics are available. The connection between
pattern matching and constraint satisfaction is actually very close. We can view each
conjunct as a constraint on the variables that it contains—for example, Missile(x) is a
unary constraint on x. Extending this idea, we can express every finite-domain CSP
as a single definite clause together with some associated ground facts.
Consider the map-coloring problem shown again in Figure 3.6(a). An equivalent
formulation as a single definite clause is given in Figure 3.7(b). Clearly, the
conclusion Colorable() can be inferred only if the CSP has a solution. Because CSPs
in general include 3-SAT problems as special cases, we can conclude that matching a
definite clause against a set of facts is NP-hard.
It might seem rather depressing that forward chaining has an NP-hard matching
problem in its inner loop. There are three ways to cheer ourselves up:
• We can remind ourselves that most rules in real-world knowledge bases are small
and simple (like the rules in our crime example) rather than large and complex (like
the CSP formulation in Figure 3.6).
Incremental forward chaining
When we showed how forward chaining works on the crime example, we
cheated; in particular, we omitted some of the rule matching done by the algorithm.
For example, on the second iteration, the rule
Missile(x) ⇒ Weapon(x)
matches against Missile(M1) (again), and of course the conclusion Weapon(M1) is
already known so nothing happens. Such redundant rule matching can be avoided if
we make the following observation: Every new fact inferred on iteration t must be
derived from at least one new fact inferred on iteration t − 1. This is true because
any inference that does not require a new fact from iteration t − 1 could have been
done at iteration t − 1 already. Typically, only a small fraction of the rules in the
knowledge base are actually triggered by the addition of a given fact. This means
that a great deal of redundant work is done in repeatedly constructing partial
matches that have some unsatisfied premises. Our crime example is rather too small
to show this effectively, but notice that a partial match is constructed on the first
iteration between the rule,
Irrelevant facts
The final source of inefficiency in forward chaining appears to be intrinsic to the
approach and also arises in the propositional context. Forward chaining makes all
allowable inferences based on the known facts, even if they are irrelevant to the goal
at hand.
In our crime example, there were no rules capable of drawing irrelevant
conclusions, so the lack of directedness was not a problem. In other cases (e.g., if
many rules describe the eating habits of Americans and the prices of missiles),
FOL-FC-ASK will generate many irrelevant conclusions.
The idea is to rewrite the rule set, using information from the goal, so that
only relevant variable bindings, those belonging to a so-called magic set,
are considered during forward inference. For example, if the goal is Criminal(West),
the rule that concludes Criminal(x) will be rewritten to include an extra conjunct that
constrains the value of x:
The process of constructing magic sets and rewriting the knowledge base is too
complex to go into here, but the basic idea is to perform a sort of “generic” backward
inference from the goal in order to work out which variable bindings need to be
constrained. The magic sets approach can therefore be thought of as a kind of hybrid
between forward inference and backward preprocessing.
5. BACKWARD CHAINING
The second major family of logical inference algorithms uses the backward
chaining approach. These algorithms work backward from the goal, chaining through
rules to find known facts that support the proof. We describe the basic algorithm,
and then we describe how it is used in logic programming, which is the most widely
used form of automated reasoning. We also see that backward chaining has some
disadvantages compared with forward chaining, and we look at ways to overcome
them. Finally, we look at the close connection between logic programming and
constraint satisfaction problems.
1. A backward-chaining algorithm
It also means that backward chaining (unlike forward chaining) suffers from
problems with repeated states and incompleteness.
Proof tree (Fig 3.7) constructed by backward chaining to prove that West
is a criminal. The tree should be read depth first, left to right. To prove Criminal(West), we have to prove the four conjuncts below it. Some of these are in the
knowledge base, and others require further backward chaining. Bindings for each
successful unification are shown next to the corresponding subgoal. Note that once
one subgoal in a conjunction succeeds, its substitution is applied to subsequent
subgoals. Thus, by the time FOL-BC-ASK gets to the last conjunct, originally
Hostile(z), z is already bound to Nono.
A simple backward-chaining algorithm
5. Forward chaining tests all the available rules. Backward chaining tests only the few required rules.
6. Forward chaining is suitable for planning, monitoring, control, and interpretation applications. Backward chaining is suitable for diagnostic, prescription, and debugging applications.
7. Forward chaining can generate an infinite number of possible conclusions. Backward chaining generates a finite number of possible conclusions.
8. Forward chaining operates in the forward direction. Backward chaining operates in the backward direction.
9. Forward chaining is aimed at any conclusion. Backward chaining is aimed only at the required data.
Example:
Goal state: Z
Facts: A, E, B, C
Rules: F & B -> Z, C & D -> F, A -> D
Solution: Working backward from Z, the rule F & B -> Z requires F and B. B is a given fact. F requires C and D (from C & D -> F); C is a fact, and D follows from fact A via A -> D. All subgoals succeed, so Z is proved.
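The example above can be checked with a small goal-driven sketch. The (premise set, conclusion) rule encoding is an illustrative choice, and the simple recursion does not guard against the repeated-state loops mentioned earlier.

```python
# Minimal propositional backward chaining.
# A goal succeeds if it is a known fact, or if some rule concludes it
# and all of that rule's premises can themselves be proved.

def backward_chain(goal, facts, rules):
    """Prove `goal` by working backward through the rules."""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(
                backward_chain(p, facts, rules) for p in premises):
            return True
    return False

facts = {"A", "E", "B", "C"}
rules = [({"F", "B"}, "Z"),   # F & B -> Z
         ({"C", "D"}, "F"),   # C & D -> F
         ({"A"}, "D")]        # A -> D
print(backward_chain("Z", facts, rules))  # True: Z <- F & B, F <- C & D, D <- A
```

Note that fact E is never touched: backward chaining only examines rules and facts relevant to the goal, in contrast to forward chaining.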
6. RESOLUTION
The last of our three families of logical systems is based on resolution.
1. Conjunctive normal form for first-order logic
As in the propositional case, first-order resolution requires that sentences
be in conjunctive normal form (CNF)—that is, a conjunction of clauses, where each
clause is a disjunction of literals. Literals can contain variables, which are assumed to
be universally quantified. For example, the sentence
becomes, in CNF,
• Eliminate implications: α ⇒ β becomes ¬α ∨ β.
• Move ¬ inwards: the rules for negated quantifiers are ¬∀x p becomes ∃x ¬p, and ¬∃x p becomes ∀x ¬p.
Our sentence goes through the following transformations:
∀x [∃y ¬(¬Animal(y) ∨ Loves(x, y))] ∨ [∃y Loves(y, x)]
∀x [∃y ¬¬Animal(y) ∧ ¬Loves(x, y)] ∨ [∃y Loves(y, x)]
∀x [∃y Animal(y) ∧ ¬Loves(x, y)] ∨ [∃y Loves(y, x)]
Notice how a universal quantifier (∀y) in the premise of the implication has become an existential quantifier.
• Standardize variables: For sentences like (∃x P(x)) ∨ (∃x Q(x)) which use the same variable name twice, change the name of one of the variables. This avoids confusion later when we drop the quantifiers. Thus, we have
[Animal(F(x)) ∧ ¬Loves(x, F(x))] ∨ Loves(G(z), x) .
• Distribute ∨ over ∧:
This step may also require flattening out nested conjunctions and disjunctions. The
sentence is now in CNF and consists of two clauses. It is quite unreadable. (It may
help to explain that the Skolem function F(x) refers to the animal potentially unloved
by x, whereas G(z) refers to someone who might love x.)
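The three rewriting steps just described (eliminate implications, move ¬ inwards, distribute ∨ over ∧) can be sketched for the propositional part of the transformation. Quantifier standardization and Skolemization are omitted, and the tuple encoding of formulas is an illustrative assumption.

```python
# Formulas are nested tuples, e.g. ('=>', 'P', 'Q') for P => Q;
# atoms are strings. Connectives: '=>', '&', '|', '~'.

def eliminate_implications(f):
    if isinstance(f, str):
        return f
    op, *args = f
    args = [eliminate_implications(a) for a in args]
    if op == '=>':                       # a => b  becomes  ~a | b
        return ('|', ('~', args[0]), args[1])
    return (op, *args)

def move_not_inward(f):
    """Push negation down to atoms (assumes implications already gone)."""
    if isinstance(f, str):
        return f
    op, *args = f
    if op == '~':
        a = args[0]
        if isinstance(a, tuple):
            if a[0] == '~':              # ~~p  becomes  p
                return move_not_inward(a[1])
            if a[0] == '&':              # ~(p & q)  becomes  ~p | ~q
                return ('|', *[move_not_inward(('~', x)) for x in a[1:]])
            if a[0] == '|':              # ~(p | q)  becomes  ~p & ~q
                return ('&', *[move_not_inward(('~', x)) for x in a[1:]])
        return f                         # negated atom: already a literal
    return (op, *[move_not_inward(a) for a in args])

def distribute(f):
    """Distribute | over & so the result is a conjunction of clauses."""
    if isinstance(f, str) or f[0] == '~':
        return f
    op, *args = f
    args = [distribute(a) for a in args]
    if op == '|':
        for i, a in enumerate(args):
            if isinstance(a, tuple) and a[0] == '&':
                rest = args[:i] + args[i + 1:]
                return distribute(('&', *[('|', c, *rest) for c in a[1:]]))
    return (op, *args)

f = ('=>', ('&', 'P', 'Q'), 'R')         # (P & Q) => R
print(distribute(move_not_inward(eliminate_implications(f))))
# ('|', ('|', ('~', 'P'), ('~', 'Q')), 'R')  i.e. the clause ~P | ~Q | R
```

The first-order procedure interleaves two further steps among these (standardize variables, Skolemize, drop universal quantifiers), but the propositional core is the same.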
Steps for Resolution:
• Conversion of facts into first-order logic.
• Convert FOL statements into CNF.
• Negate the statement which needs to be proved (proof by contradiction).
• Draw the resolution graph (unification).
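The proof-by-contradiction loop behind these steps can be sketched at the propositional level (so unification is not needed); representing clauses as frozensets of literals is an illustrative encoding.

```python
from itertools import combinations

# Propositional resolution by refutation: add the negated goal, then
# resolve pairs of clauses until the empty clause (a contradiction)
# appears, or no new clauses can be generated.

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of literals)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def entails(kb, goal):
    clauses = set(kb) | {frozenset({negate(goal)})}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:              # empty clause derived: contradiction
                    return True
                new.add(frozenset(r))
        if new <= clauses:             # nothing new: goal not entailed
            return False
        clauses |= new

kb = [frozenset({'~P', 'Q'}),          # P => Q in clause form
      frozenset({'P'})]
print(entails(kb, 'Q'))                # True
```

The first-order version replaces the literal-complement test in `resolve` with a call to UNIFY and applies the resulting substitution to the resolvent.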
Notice the structure: a single “spine” beginning with the goal clause, resolving against clauses from the knowledge base until the empty clause is generated. This is characteristic of resolution on Horn clause knowledge bases. In fact, the clauses along the main spine correspond exactly to the consecutive values of the goal variable in the backward-chaining algorithm. Our second example makes use of Skolemization and involves clauses that are not definite clauses. This results in a somewhat more complex proof structure. In English, the problem is as follows:
1. John likes all kinds of food.
In the first step we will convert all the given statements into first-order logic.
• Eliminate existential quantifier by Skolemization
In this step, we will eliminate the existential quantifier ∃; this process is known as Skolemization. But in this example problem, since there is no existential quantifier, all the statements will remain the same in this step.
Hence the negation of the conclusion has been proved as a complete contradiction with the given set of statements.
Explanation of the resolution graph:
o In the first step of the resolution graph, ¬likes(John, Peanuts) and likes(John, x) get resolved (canceled) by the substitution {Peanuts/x}, and we are left with ¬food(Peanuts).
o In the second step of the resolution graph, ¬food(Peanuts) and food(z) get resolved (canceled) by the substitution {Peanuts/z}, and we are left with ¬eats(y, Peanuts) ∨ killed(y).
o In the third step of the resolution graph, ¬eats(y, Peanuts) and eats(Anil, Peanuts) get resolved by the substitution {Anil/y}, and we are left with killed(Anil).
o In the fourth step of the resolution graph, killed(Anil) and ¬killed(k) get resolved by the substitution {Anil/k}, and we are left with ¬alive(Anil).
o In the last step of the resolution graph, ¬alive(Anil) and alive(Anil) get resolved.
3.6.4 Completeness of resolution
We show that resolution is refutation-complete, which means that if a set of
sentences is unsatisfiable, then resolution will always be able to derive a
contradiction. Resolution cannot be used to generate all logical consequences of a
set of sentences, but it can be used to establish that a given sentence is entailed by
the set of sentences. Our proof sketch follows Robinson’s original proof with some
simplifications from Genesereth and Nilsson (1987).
1. We first observe that if S is unsatisfiable, then there exists a particular set of ground instances of the clauses of S that is also unsatisfiable (Herbrand’s theorem).
2. We then appeal to the ground resolution theorem given in Chapter 7, which states that propositional resolution is complete for ground sentences.
3. We then use a lifting lemma to show that, for any propositional resolution proof
using the set of ground sentences, there is a corresponding first-order resolution
proof using the first-order sentences from which the ground sentences were
obtained.
• Saturation: If S is a set of clauses and P is a set of ground terms, then P(S), the
saturation of S with respect to P, is the set of all ground clauses obtained by applying
all possible consistent substitutions of ground terms in P with variables in S.
• Herbrand base: The saturation of a set S of clauses with respect to its Herbrand
universe is called the Herbrand base of S, written as HS(S). For example, if S
contains solely the clause just given, then HS(S) is the infinite set of clauses
The structure of a completeness proof for resolution is given below:
3.6.5 Equality
None of the inference methods described so far in this chapter handle an assertion
of the form x = y. Three distinct approaches can be taken. The first approach is to
axiomatize equality— to write down sentences about the equality relation in the
knowledge base. We need to say that equality is reflexive, symmetric, and transitive,
and we also have to say that we can substitute equals for equals in any predicate or
function. The simplest rule, demodulation, takes a unit clause x=y and some clause α
that contains the term x, and yields a new clause formed by substituting y for x
within α. It works if the term within α unifies with x; it need not be exactly equal to x.
we have θ = UNIFY(F(A, y), F(x, B)) = {x/A, y/B}, and we can conclude by paramodulation the sentence
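A UNIFY call like the one above can be sketched as follows; the lowercase-is-variable convention and the omission of the occur check are simplifying assumptions.

```python
# Minimal unification sketch. Variables are lowercase strings,
# constants are uppercase strings, and compound terms are tuples,
# e.g. ('F', 'A', 'y') for F(A, y). No occur check is performed.

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def unify(x, y, theta):
    """Return the most general unifier of x and y, or None on failure."""
    if theta is None:
        return None
    if x == y:
        return theta
    if is_var(x):
        return unify_var(x, y, theta)
    if is_var(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            theta = unify(a, b, theta)
        return theta
    return None

def unify_var(v, t, theta):
    if v in theta:                       # variable already bound: recurse
        return unify(theta[v], t, theta)
    return {**theta, v: t}               # extend the substitution

print(unify(('F', 'A', 'y'), ('F', 'x', 'B'), {}))  # {'x': 'A', 'y': 'B'}
```

This reproduces the θ = {x/A, y/B} computed in the text; a production unifier would add the occur check and dereference bindings more carefully.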
3.6.6 Resolution strategies
We know that repeated applications of the resolution inference rule will eventually find a proof if one exists. In this subsection, we examine strategies that help find proofs efficiently.
Unit preference: This strategy prefers to do resolutions where one of the sentences is a single literal (also known as a unit clause). The idea behind the strategy is that we are trying to produce an empty clause, so it might be a good idea to prefer inferences that produce shorter clauses. Resolving a unit sentence (such as P) with any other sentence (such as ¬P ∨ ¬Q ∨ R) always yields a clause (in this case, ¬Q ∨ R) that is shorter than the other sentence.
Input resolution: In this strategy, every resolution combines one of the input sentences (from the KB or the query) with some other sentence. The linear resolution strategy is a slight generalization that allows P and Q to be resolved together either if P is in the original KB or if P is an ancestor of Q in the proof tree. Linear resolution is complete.
Subsumption: The subsumption method eliminates all sentences that are subsumed
by (that is, more specific than) an existing sentence in the KB. For example, if P(x) is
in the KB, then there is no sense in adding P(A) and even less sense in adding P(A) ∨
Q(B). Subsumption helps keep the KB small and thus helps keep the search space
small. Theorem provers can be applied to the problems involved in the synthesis and
verification of both hardware and software. Thus, theorem-proving research is
carried out in the fields of hardware design, programming languages, and software
engineering—not just in AI.
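The P(x)-subsumes-P(A) ∨ Q(B) test described above can be sketched with one-way matching; the literal encoding and the naive backtracking search are illustrative simplifications of real theorem-prover subsumption checks.

```python
# Clause subsumption sketch: clause C subsumes clause D if a single
# substitution maps every literal of C onto some literal of D.
# Variables are lowercase strings; literals are tuples like ('P', 'x').

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def match(pattern, term, theta):
    """One-way matching: bind variables in `pattern` only."""
    if theta is None:
        return None
    if pattern == term:
        return theta
    if is_var(pattern):
        if pattern in theta:
            return theta if theta[pattern] == term else None
        return {**theta, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            theta = match(p, t, theta)
        return theta
    return None

def subsumes(general, specific):
    """Does one substitution map every literal of `general` into `specific`?"""
    def search(lits, theta):
        if not lits:
            return True
        for lit in specific:
            t = match(lits[0], lit, dict(theta))
            if t is not None and search(lits[1:], t):
                return True
        return False
    return search(list(general), {})

# P(x) subsumes P(A) ∨ Q(B), so the weaker clause need not be kept.
print(subsumes([('P', 'x')], [('P', 'A'), ('Q', 'B')]))  # True
```

Keeping only the most general clauses in this way shrinks both the KB and the resolution search space.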
10. Assignment 3
Assignment
2. Behavior-Based Robotics
Students implement a behavior-based simulated tank agent in the AutoTank
environment with a reactive behavior-based architecture built using the Unified
Behavior Framework (UBF).
11. Part - A
Question & Answers
Part - A Question & Answers
1. Write down the basic syntactic elements of first order logic (K1, CO3)
The basic syntactic elements of the FOL are the symbols that stand for objects,
relations and functions. The symbols come in three kinds: constant symbols,
which stand for objects; predicate symbols, which stand for relations; and
function symbols, which stand for functions.
Predicate logic is usually used as a synonym for first-order logic, but sometimes it is used to refer to other logics that have similar syntax. Syntactically, first-order logic has the same connectives as propositional logic, but it also has variables for individual objects, quantifiers, symbols for functions, and symbols for
relations. The semantics include a domain of discourse for the variables and
quantifiers to range over, along with interpretations of the relation and function
symbols.
4. How are TELL and ASK used in first-order logic? (K1, CO3)
Underlying the TELL and ASK functions used to inform and interrogate a knowledge base are the more primitive STORE and FETCH functions. STORE(s) stores a sentence s into the knowledge base, and FETCH(q) returns all unifiers such that the query q unifies with some sentence in the knowledge base.
8. What are the 3 types of symbols which are used to indicate objects, relations and functions? (K1, CO3)
The symbols come in three kinds: constant symbols, which stand for objects; predicate symbols, which stand for relations (e.g., Ruled); and function symbols, which stand for functions (e.g., BrotherOf).
12. Define Universal Instantiation (K1, CO3)
The rule of Universal Instantiation says that any sentence can be obtained by substituting a ground term (a term without variables) for the variable. To write out the inference rule formally, the notion of substitution is introduced.
Data-driven search uses knowledge and constraints found in the given data to
search along lines known to be true. Use data-driven search if:
• All or most of the data are given in the initial problem statement.
• There are a large number of potential goals, but there are only a few ways to use
the facts and the given information of a particular problem.
15. What are the four parts of knowledge in first-order logic? (K1, CO3)
Inductive logic programming (ILP) combines inductive methods with the power of first-order representations, concentrating in particular on the representation of theories as logic programs. It has gained popularity for three reasons. First, ILP offers a rigorous approach to the general knowledge-based inductive learning problem. Second, it offers complete algorithms for inducing general, first-order theories from examples, which can therefore learn successfully in domains where attribute-based algorithms are hard to apply.
12. Part - B Questions
Part - B Questions
5. What is resolution? Explain its various types. Give example KB for deriving a
conclusion from it. (K2, CO4)
13. Supportive
Online Certification
Courses
Supportive Online Certification Courses
https://nptel.ac.in/courses/106/102/106102220/
14. Real time
applications in day
to day life and
Industry
Real time applications in day to day life and Industry
Smart Cars
Self-driving cars are the most common existing example of applications of artificial intelligence in the real world, becoming increasingly reliable and ready for dispatch every single day. From Google’s self-driving car project to Tesla’s “autopilot” feature, it is a matter of time before AI is a standard-issue technology in the automotive industry. Advanced deep learning algorithms can accurately predict what objects in the vehicle’s vicinity are likely to do. The AI system collects data from the vehicle’s radar, cameras, GPS, and cloud services to produce control signals that operate the vehicle. Moreover, some high-end vehicles already come with AI parking systems. With the evolution of AI, soon enough, fully automated vehicles will be seen on most streets.
Healthcare
AI has taken a critical step in helping people with looking after patients as well. Automated bots and healthcare applications ensure proper medication and treatment of patients in the facilities.
15. Assessment
Schedule
15. ASSESSMENT SCHEDULE

S.NO   Name of the Assessment   Start Date    End Date      Portion
5      Revision 1               -             -             UNIT 5, 1 & 2
6      Revision 2               -             -             UNIT 3 & 4
7      Model                    15.11.2023    25.11.2023    ALL 5 UNITS
16. Prescribed
Text Books and
Reference Books
16. PRESCRIBED TEXT BOOKS & REFERENCE BOOKS
TEXT BOOKS:
1. Peter Norvig and Stuart Russell, “Artificial Intelligence: A Modern Approach”, Pearson, Fourth Edition, 2020.
2. Ivan Bratko, “Prolog: Programming for Artificial Intelligence”, Fourth Edition, Addison-Wesley Educational Publishers Inc., 2011.
REFERENCES:
1. Elaine Rich, Kevin Knight and B. Nair, “Artificial Intelligence”, 3rd Edition, McGraw Hill, 2017.
4. Nils J. Nilsson, “The Quest for Artificial Intelligence”, Cambridge University Press, 2009.
5. Dan W. Patterson, “Introduction to Artificial Intelligence and Expert Systems”, 1st Edition, Pearson, India, 2015.
17. Mini Project
Suggestions
Mini Project Suggestions
The project idea is to find the optimal path for a vehicle to travel so that cost and
time can be minimized. This is a business problem that needs solutions.
Thank you