Unit4 Secondhalf

FIRST ORDER PREDICATE LOGIC

1. Logic
A knowledge base consists of sentences, and these sentences are expressed
according to the syntax of the representation language, which specifies all the sentences that
are well formed. The notion of syntax is clear enough in ordinary arithmetic: "x + y = 4" is a
well-formed sentence, whereas "x4y+ =" is not.

Logic must also define the semantics, or meaning, of sentences. The semantics
defines the truth of each sentence with respect to each possible world. For example, the
semantics for arithmetic specifies that the sentence "x + y = 4" is true in a world where x is 2
and y is 2, but false in a world where x is 1 and y is 1. In standard logics, every sentence
must be either true or false in each possible world; there is no "in between." The two types of
logic covered here are propositional logic and first-order logic.

Propositional Logic
We first define the syntax of propositional logic and its semantics, the way in which the
truth of sentences is determined. We then look at entailment, the relation between a sentence
and another sentence that follows from it, and see how this leads to a simple algorithm for
logical inference.

Fig 3.1 A BNF (Backus–Naur Form) grammar of sentences in propositional
logic, along with operator precedences, from highest to lowest.

Truth tables for the five logical connectives are given below. To use the table to compute, for
example, the value of P ∨ Q when P is true and Q is false, first look on the left for the row
where P is true and Q is false (the third row). Then look in that row under the P ∨ Q column to
see that the result is true.
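The table lookup just described is easy to reproduce programmatically. The following sketch (illustrative only, not from the text) enumerates the truth-table rows for the five connectives:

```python
from itertools import product

# Enumerate the truth table for the five logical connectives.
# Implication P => Q is encoded as (not P) or Q, and the
# biconditional P <=> Q as equality of truth values.
def truth_table():
    rows = []
    for p, q in product([True, False], repeat=2):
        rows.append({
            "P": p, "Q": q,
            "not P": not p,
            "P and Q": p and q,
            "P or Q": p or q,
            "P => Q": (not p) or q,
            "P <=> Q": p == q,
        })
    return rows
```

Looking up the row where P is true and Q is false shows P ∨ Q true and P ⇒ Q false, matching the table described above.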

We used propositional logic as our representation language because it sufficed to illustrate the
basic concepts of logic and knowledge-based agents. Unfortunately, propositional logic is too
puny a language to represent knowledge of complex environments in a concise way. We
therefore examine first-order logic, which is sufficiently expressive to represent a good deal of
our commonsense knowledge.

Difference between propositional logic and predicate logic:

Besides propositional logic there are other logics as well, such as predicate
logic and the modal logics. Propositional logic admits more efficient and scalable
algorithms than the more expressive logics. Some of the differences between propositional
logic and first-order logic are:

• Propositional logic deals with simple declarative propositions, while first-order logic
additionally covers predicates and quantification.
• A proposition is a declarative statement that has either the truth value "true"
or the truth value "false", while a predicate is an expression of one or more
variables defined on some specific domain.

3.2.2 First-Order Logic


The language of first-order logic, whose syntax and semantics we define in the
next section, is built around objects and relations.
It has been so important to mathematics, philosophy, and artificial intelligence
precisely because those fields (and indeed much of everyday human existence) can be usefully
thought of as dealing with objects and the relations among them. First-order logic can also
express facts about some or all of the objects in the universe.

First-order logic keeps the foundations of propositional logic: a declarative, compositional
semantics that is context-independent and unambiguous. It builds a more expressive logic on
that foundation, borrowing representational ideas from natural language while avoiding its
drawbacks. When we look at the syntax of natural language, the most obvious elements are
nouns and noun phrases that refer to objects (squares, pits, wumpuses), and verbs and verb
phrases that refer to relations among objects (is breezy, is adjacent to, shoots). Some of these
relations are functions: relations in which there is only one "value" for a given "input." It is
easy to start listing examples of objects, relations, and functions.

• Objects: people, houses, numbers, theories, Ronald McDonald, colors, baseball games,
wars, centuries . . .
• Relations: these can be unary relations or properties such as red, round, bogus, prime,
multistoried . . ., or more general n-ary relations such as brother of, bigger than, inside, part
of, has color, occurred after, owns, comes between, . . .

• Functions: father of, best friend, third inning of, one more than, beginning of . . .

3.2.3 Syntax and Semantics of First-Order Logic


The models of a logical language are the formal structures that constitute the
possible worlds under consideration. Each model links the vocabulary of the logical sentences
to elements of the possible world, so that the truth of any sentence can be determined. Thus,
models for propositional logic link proposition symbols to predefined truth values.
Models for first-order logic are much more interesting. The domain of a model is the set of
objects or domain elements it contains.
The domain is required to be nonempty: every possible world must contain at
least one object. The objects in the model may be related in various ways. In the figure,
Richard and John are brothers. Formally speaking, a relation is just the set of tuples of objects
that are related. (A tuple is a collection of objects arranged in a fixed order and is written with
angle brackets surrounding the objects.) Thus, the brotherhood relation in this model is the set

{ ⟨Richard the Lionheart, King John⟩, ⟨King John, Richard the Lionheart⟩ }.

Fig 3.2 A model containing five objects, two binary relations, three unary
relations (indicated by labels on the objects), and one unary function, left-leg.

The crown is on King John's head, so the "on head" relation contains just one tuple, ⟨the
crown, King John⟩. The "brother" and "on head" relations are binary relations—that is, they
relate pairs of objects. The model also contains unary relations, or properties: the "person"
property is true of both Richard and John; the "king" property is true only of John
(presumably because Richard is dead at this point); and the "crown" property is true only of
the crown. For example, each person has one left leg, so the model has a unary "left leg"
function that includes the following mappings:

⟨Richard the Lionheart⟩ → Richard's left leg

⟨King John⟩ → John's left leg.
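As a concrete illustration (our own encoding, not from the text), this model can be written down as plain data: a domain, relations as sets of tuples, and functions as mappings. The identifier names are invented shorthand for the objects described above:

```python
# Domain of five objects (names shortened from the text's descriptions).
domain = {"Richard", "John", "RichardsLeftLeg", "JohnsLeftLeg", "TheCrown"}

# Relations are sets of tuples of related objects; unary relations
# (properties) are sets of 1-tuples.
relations = {
    "Brother": {("Richard", "John"), ("John", "Richard")},
    "OnHead":  {("TheCrown", "John")},
    "Person":  {("Richard",), ("John",)},
    "King":    {("John",)},
    "Crown":   {("TheCrown",)},
}

# A function maps each input object to exactly one output object.
functions = {
    "LeftLeg": {"Richard": "RichardsLeftLeg", "John": "JohnsLeftLeg"},
}
```

The truth of any atomic sentence can then be determined by set membership, e.g. Brother(Richard, John) is true because the tuple ("Richard", "John") is in the Brother relation.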


3.2.4 Symbols and interpretations of First-Order Logic
The basic syntactic elements of first-order logic are the symbols that stand for
objects, relations, and functions. The symbols, therefore, come in three kinds: constant
symbols, which stand for objects; predicate symbols, which stand for relations; and function
symbols, which stand for functions. We adopt the convention that these symbols will begin
with uppercase letters. For example, we might use the constant symbols Richard and John; the
predicate symbols Brother , OnHead, Person, King, and Crown; and the function symbol
LeftLeg. As with proposition symbols, the choice of names is entirely up to the user. Each
predicate and function symbol comes with an arity that fixes the number of arguments. An
interpretation specifies exactly which objects, relations, and functions are referred to by
the constant, predicate, and function symbols. One possible interpretation for our example,
which a logician would call the intended interpretation, is as follows:

• Richard refers to Richard the Lionheart and John refers to the evil King John.
• Brother refers to the brotherhood relation, that is, the set of tuples of objects given
above. OnHead refers to the "on head" relation that holds between the crown and King John;
Person, King, and Crown refer to the sets of objects that are persons, kings, and crowns.

• LeftLeg refers to the "left leg" function, that is, the mapping given above.
A term is a logical expression that refers to an object. Constant symbols are
therefore terms, but it is not always convenient to have a distinct symbol to name every
object. For example, in English we might use the expression “King John’s left leg” rather
than giving a name to his leg. This is what function symbols are for: instead of using a
constant symbol, we use LeftLeg (John). In the general case, a complex term is formed by a
function symbol followed by a parenthesized list of terms as arguments to the function
symbol. An atomic sentence (or atom for short) is formed from a predicate symbol
optionally followed by a parenthesized list of terms, such as

Brother(Richard, John).

This states, under the intended interpretation given earlier, that Richard the Lionheart is
the brother of King John. Atomic sentences can have complex terms as arguments. Thus,

Married(Father(Richard), Mother(John))

states that Richard the Lionheart's father is married to King John's mother (again, under a
suitable interpretation). An atomic sentence is true in a given model if the relation referred to
by the predicate symbol holds among the objects referred to by the arguments.

Fig 3.3 The syntax of first-order logic

We can use logical connectives to construct more complex sentences, with the same syntax
and semantics as in propositional calculus:

¬Brother(LeftLeg(Richard), John)

Brother(Richard, John) ∧ Brother(John, Richard)

King(Richard) ∨ King(John)

¬King(Richard) ⇒ King(John).

3.2.5 Quantifiers

First-order logic contains two standard quantifiers, called universal and existential.

Universal quantification (∀)


Rules such as “Squares neighboring the wumpus are smelly” and “All kings are persons” are
the bread and butter of first-order logic. The second rule, “All kings are persons,” is written in
first-order logic as

∀x King(x) ⇒ Person(x) .
∀ is usually pronounced “For all . . .”. (Remember that the upside-down A stands for “all.”)
Thus, the sentence says, “For all x, if x is a king, then x is a person.” The symbol x is called a
variable. By convention, variables are lowercase letters. A variable is a term all by itself, and
as such can also serve as the argument of a function—for example, LeftLeg(x). A term with no
variables is called a ground term .

The sentence ∀x P, where P is any logical expression, says that P is true for every object x.
More precisely, ∀x P is true in a given model if P is true in all possible extended
interpretations constructed from the interpretation given in the model, where each extended
interpretation specifies a domain element to which x refers:

x → Richard the Lionheart,
x → King John,
x → Richard's left leg,
x → John's left leg,
x → the crown.

The universally quantified sentence ∀x King(x) ⇒ Person(x) is true in the original model if
the sentence King(x) ⇒ Person(x) is true under each of the five extended interpretations. That
is, the universally quantified sentence is equivalent to asserting the following five sentences:

Richard the Lionheart is a king ⇒ Richard the Lionheart is a person.

King John is a king ⇒ King John is a person.

Richard's left leg is a king ⇒ Richard's left leg is a person.

John's left leg is a king ⇒ John's left leg is a person.

The crown is a king ⇒ the crown is a person.

Since King John is the only king in our model, the second sentence asserts that he is a
person, which matches the model; the other four sentences are true because their antecedents
are false.
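Checking all the extended interpretations amounts, in a finite model, to a conjunction over the domain elements. A short sketch (using an invented encoding of the model, not code from the text) makes this concrete:

```python
# Check ∀x King(x) ⇒ Person(x) by trying every domain element as the
# value of x (one extended interpretation per element). The model data
# below is an invented encoding of the five-object model in the text.
domain = {"Richard", "John", "RichardsLeftLeg", "JohnsLeftLeg", "TheCrown"}
king = {"John"}
person = {"Richard", "John"}

def implies(p, q):
    # P => Q is false only when P is true and Q is false.
    return (not p) or q

forall_holds = all(implies(x in king, x in person) for x in domain)
```

The four non-king elements satisfy the implication vacuously, so the universal sentence comes out true.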

Existential quantification (∃)

Universal quantification makes statements about every object. Similarly, we can make a
statement about some object in the universe without naming it, by using an existential
quantifier. To say, for example, that King John has a crown on his head, we write

∃x Crown(x) ∧ OnHead(x, John).

∃x is pronounced "There exists an x such that . . ." or "For some x . . .". Intuitively, the sentence
∃x P says that P is true for at least one object x. More precisely, ∃x P is true in a given model if P
is true in at least one extended interpretation that assigns x to a domain element. That is, at
least one of the following is true:

Richard the Lionheart is a crown ∧ Richard the Lionheart is on John's head;

King John is a crown ∧ King John is on John's head;

Richard's left leg is a crown ∧ Richard's left leg is on John's head;

John's left leg is a crown ∧ John's left leg is on John's head;

the crown is a crown ∧ the crown is on John's head.

The fifth assertion is true in the model, so the original existentially quantified
sentence is true in the model.

Nested quantifiers

The simplest case is where the quantifiers are of the same type. For example, “Brothers are
siblings” can be written as

∀x ∀y Brother (x, y) ⇒ Sibling(x, y).


Consecutive quantifiers of the same type can be written as one quantifier with several
variables. For example, to say that siblinghood is a symmetric relationship,we can write

∀x, y Sibling(x, y) ⇔ Sibling(y, x).

In other cases we will have mixtures. "Everybody loves somebody" means that for
every person, there is someone that person loves:

∀x ∃y Loves(x, y).

On the other hand, to say "There is someone who is loved by everyone," we write

∃y ∀x Loves(x, y).

The order of quantification is therefore very important. It becomes clearer if we insert
parentheses. ∀x (∃y Loves(x, y)) says that everyone has a particular property, namely, the
property that they love someone.
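Over a finite domain the two readings can be checked directly, with ∀ as all() and ∃ as any(); the tiny Loves relation below is invented purely to show that the two quantifier orders can disagree:

```python
# Order of quantification matters: with this Loves relation,
# ∀x ∃y Loves(x, y) holds but ∃y ∀x Loves(x, y) does not.
people = {"A", "B", "C"}
loves = {("A", "B"), ("B", "C"), ("C", "A")}  # everyone loves someone

# ∀x ∃y Loves(x, y): every person loves at least one person.
everybody_loves_somebody = all(
    any((x, y) in loves for y in people) for x in people
)

# ∃y ∀x Loves(x, y): some person is loved by every person.
someone_loved_by_everyone = any(
    all((x, y) in loves for x in people) for y in people
)
```

Here everybody loves somebody, but no single person is loved by everyone, so the first sentence is true and the second is false.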

Connections between ∀ and ∃

The two quantifiers are actually intimately connected with each other, through negation.
Asserting that everyone dislikes parsnips is the same as asserting there does not exist someone
who likes them, and vice versa:

∀x ¬Likes(x, Parsnips) is equivalent to ¬∃x Likes(x, Parsnips).

We can go one step further: "Everyone likes ice cream" means that there is no one
who does not like ice cream:

∀x Likes(x, IceCream) is equivalent to ¬∃x ¬Likes(x, IceCream).

Because ∀ is really a conjunction over the universe of objects and ∃ is a disjunction, it should
not be surprising that they obey De Morgan's rules. The De Morgan rules for
quantified and unquantified sentences are as follows:

∀x ¬P ≡ ¬∃x P        ¬(P ∨ Q) ≡ ¬P ∧ ¬Q

¬∀x P ≡ ∃x ¬P        ¬(P ∧ Q) ≡ ¬P ∨ ¬Q

∀x P ≡ ¬∃x ¬P        P ∧ Q ≡ ¬(¬P ∨ ¬Q)

∃x P ≡ ¬∀x ¬P        P ∨ Q ≡ ¬(¬P ∧ ¬Q)
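These equivalences can be spot-checked over a finite domain, where ∀ becomes all() and ∃ becomes any(); the predicate P below is an arbitrary invented choice for the check:

```python
# Spot-check the quantifier De Morgan rules over a small finite domain.
domain = range(10)

def P(x):
    return x % 2 == 0  # "x is even", chosen arbitrarily for the check

# ∀x ¬P(x) ≡ ¬∃x P(x)
lhs = all(not P(x) for x in domain)
rhs = not any(P(x) for x in domain)
assert lhs == rhs

# ¬∀x P(x) ≡ ∃x ¬P(x)
assert (not all(P(x) for x in domain)) == any(not P(x) for x in domain)
```

Of course a finite check is only an illustration; the equivalences themselves hold for every domain and every predicate.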

Equality
First-order logic includes one more way to make atomic sentences, other than using
a predicate and terms as described earlier. We can use the equality symbol to signify
that two terms refer to the same object. For example,

Father(John) = Henry
says that the object referred to by Father(John) and the object referred to by Henry
are the same. One proposal that is very popular in database systems works as follows.
First, we insist that every constant symbol refer to a distinct object, the so-called
unique-names assumption. Second, we assume that atomic sentences not known to be
true are in fact false, the closed-world assumption.
KNOWLEDGE ENGINEERING IN FIRST ORDER LOGIC
What is knowledge engineering?

The process of constructing a knowledge base in first-order logic is called knowledge
engineering. In knowledge engineering, someone who investigates a particular domain, learns
the important concepts of that domain, and generates a formal representation of its objects is
known as a knowledge engineer.

In this topic, we will walk through the knowledge-engineering process in an electronic circuit
domain, which is already familiar. This approach is mainly suitable for creating a special-purpose
knowledge base.

The knowledge-engineering process:

Following are the main steps of the knowledge-engineering process. Using these steps, we will
develop a knowledge base that will allow us to reason about a digital circuit (a one-bit full
adder).

1. Identify the task:

The first step of the process is to identify the task, and for the digital circuit, there are various
reasoning tasks.

At the first level or highest level, we will examine the functionality of the circuit:

o Does the circuit add properly?


o What will be the output of gate A2, if all the inputs are high?

At the second level, we will examine the circuit structure details such as:
o Which gate is connected to the first input terminal?
o Does the circuit have feedback loops?

2. Assemble the relevant knowledge:

In the second step, we will assemble the relevant knowledge which is required for digital
circuits. So for digital circuits, we have the following required knowledge:

o Logic circuits are made up of wires and gates.


o Signal flows through wires to the input terminal of the gate, and each gate produces the
corresponding output which flows further.
o In this logic circuit, there are four types of gates used: AND, OR, XOR, and NOT.
o All these gates have one output terminal and two input terminals (except the NOT gate,
which has one input terminal).

3. Decide on vocabulary:

The next step of the process is to select functions, predicate, and constants to represent the
circuits, terminals, signals, and gates. Firstly we will distinguish the gates from each other and
from other objects. Each gate is represented as an object which is named by a constant, such
as, Gate(X1). The functionality of each gate is determined by its type, which is taken as
constants such as AND, OR, XOR, or NOT. Circuits will be identified by a predicate: Circuit
(C1).

For the terminal, we will use predicate: Terminal(x).

For gate input, we will use the function In(1, X1) for denoting the first input terminal of the gate,
and for output terminal we will use Out (1, X1).

The function Arity(c, i, j) is used to denote that circuit c has i input terminals and j output terminals.

The connectivity between gates can be represented by predicate Connect(Out(1, X1), In(1,
X1)).

We use a unary predicate On(t), which is true if the signal at terminal t is on.

4. Encode general knowledge about the domain:

To encode the general knowledge about the logic circuit, we need some following rules:
o If two terminals are connected, then they have the same signal. It can be represented
as:

∀ t1, t2 Terminal(t1) ∧ Terminal(t2) ∧ Connect(t1, t2) → Signal(t1) = Signal(t2).
o The signal at every terminal will have either value 0 or 1. It will be represented as:

∀ t Terminal(t) → Signal(t) = 1 ∨ Signal(t) = 0.


o Connect predicates are commutative:

∀ t1, t2 Connect(t1, t2) → Connect (t2, t1).


o Representation of types of gates:

∀ g Gate(g) ∧ r = Type(g) → r = OR ∨r = AND ∨r = XOR ∨r = NOT.


o The output of an AND gate is 0 if and only if any of its inputs is 0:

∀ g Gate(g) ∧ Type(g) = AND → Signal(Out(1, g)) = 0 ⇔ ∃n Signal(In(n, g)) = 0.

o The output of an OR gate is 1 if and only if any of its inputs is 1:

∀ g Gate(g) ∧ Type(g) = OR → Signal(Out(1, g)) = 1 ⇔ ∃n Signal(In(n, g)) = 1.

o The output of an XOR gate is 1 if and only if its inputs are different:

∀ g Gate(g) ∧ Type(g) = XOR → Signal(Out(1, g)) = 1 ⇔ Signal(In(1, g)) ≠ Signal(In(2, g)).

o The output of a NOT gate is the inverse of its input:

∀ g Gate(g) ∧ Type(g) = NOT → Signal(In(1, g)) ≠ Signal(Out(1, g)).


o All the gates in the above circuit have two inputs and one output (except NOT gate).

∀ g Gate(g) ∧ Type(g) = NOT → Arity(g, 1, 1)


∀ g Gate(g) ∧ r =Type(g) ∧ (r= AND ∨r= OR ∨r= XOR) → Arity (g, 2, 1).
o All gates are logic circuits:

∀ g Gate(g) → Circuit (g).


5. Encode a description of the problem instance:

Now we encode the problem of circuit C1. First, we categorize the circuit and its gate components.
This step is easy if the ontology of the problem has already been thought out. It involves writing
simple atomic sentences about instances of the concepts in the ontology.

For the given circuit C1, we can encode the problem instance in atomic sentences as below:

Since in the circuit there are two XOR, two AND, and one OR gate so atomic sentences for these
gates will be:

1. For the XOR gates: Type(X1) = XOR, Type(X2) = XOR
2. For the AND gates: Type(A1) = AND, Type(A2) = AND
3. For the OR gate: Type(O1) = OR.

And then represent the connections between all the gates.

Note: Ontology defines a particular theory of the nature of existence.


6. Pose queries to the inference procedure and get answers:

In this step, we will find all the possible sets of values of all the terminals for the adder circuit. The
first query will be:

What combination of inputs would make the first output of circuit C1 be 0
and the second output be 1?

∃ i1, i2, i3 Signal(In(1, C1)) = i1 ∧ Signal(In(2, C1)) = i2 ∧ Signal(In(3, C1)) = i3
∧ Signal(Out(1, C1)) = 0 ∧ Signal(Out(2, C1)) = 1
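Rather than running a full inference procedure, this query can be sanity-checked by simulating the circuit directly. The sketch below assumes the standard one-bit full-adder wiring (gates X1, X2, A1, A2, O1, with Out(1) as the sum and Out(2) as the carry); the circuit figure itself is not reproduced in these notes, so that wiring is an assumption:

```python
from itertools import product

# Simulate the assumed one-bit full-adder wiring and search for the
# input triples whose first output is 0 and second output is 1.
def full_adder(i1, i2, i3):
    x1 = i1 ^ i2            # XOR gate X1
    sum_out = x1 ^ i3       # XOR gate X2 -> Out(1, C1)
    a1 = i1 & i2            # AND gate A1
    a2 = x1 & i3            # AND gate A2
    carry_out = a1 | a2     # OR gate O1 -> Out(2, C1)
    return sum_out, carry_out

answers = [bits for bits in product([0, 1], repeat=3)
           if full_adder(*bits) == (0, 1)]
```

The search yields the triples (0, 1, 1), (1, 0, 1), and (1, 1, 0): exactly the assignments with two high inputs, which is what the inference procedure should also report.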

7. Debug the knowledge base:

Now we will debug the knowledge base, and this is the last step of the complete process. In this
step, we will try to debug the issues of knowledge base.

In the knowledge base, we may have omitted assertions like 1 ≠ 0.


FORWARD CHAINING AND BACKWARD CHAINING

Inference engine:

The inference engine is the component of the intelligent system in artificial intelligence, which
applies logical rules to the knowledge base to infer new information from known facts. The first
inference engine was part of the expert system. Inference engine commonly proceeds in two
modes, which are:

a. Forward chaining
b. Backward chaining

Horn Clause and Definite clause:

Horn clauses and definite clauses are forms of sentences that enable a knowledge base to use
a more restricted and efficient inference algorithm. Logical inference algorithms use forward and
backward chaining approaches, which require the KB to be in the form of first-order definite clauses.

Definite clause: A clause which is a disjunction of literals with exactly one positive literal is
known as a definite clause or strict Horn clause.

Horn clause: A clause which is a disjunction of literals with at most one positive literal is
known as a Horn clause. Hence all definite clauses are Horn clauses.

Example: (¬p V ¬q V k) has only one positive literal, k, so it is a definite clause.

It is equivalent to p ∧ q → k.
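The classification above is mechanical: count the positive literals in the clause. A small sketch (with an invented literal encoding, representing a negated literal as a ("not", symbol) tuple):

```python
# Classify a clause (a list of literals) as Horn / definite by counting
# its positive literals. A negated literal is a ("not", symbol) tuple.
def positives(clause):
    return [lit for lit in clause
            if not (isinstance(lit, tuple) and lit[0] == "not")]

def is_horn(clause):
    # Horn clause: at most one positive literal.
    return len(positives(clause)) <= 1

def is_definite(clause):
    # Definite clause: exactly one positive literal.
    return len(positives(clause)) == 1

clause = [("not", "p"), ("not", "q"), "k"]   # the example ¬p V ¬q V k
```

For the example clause both checks succeed, since k is the single positive literal; a clause with no positive literal is Horn but not definite.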

A. Forward Chaining

Forward chaining is also known as forward deduction or forward reasoning when using
an inference engine. It is a form of reasoning that starts with the atomic sentences
in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to
extract more data until a goal is reached.

The forward-chaining algorithm starts from known facts, triggers all rules whose premises are
satisfied, and adds their conclusions to the known facts. This process repeats until the problem is
solved.
Properties of Forward-Chaining:

o It is a bottom-up approach, as it moves from data to conclusion.

o It is a process of drawing conclusions from known facts or data, starting from the
initial state and reaching the goal state.
o The forward-chaining approach is also called data-driven, as we reach the goal using
available data.
o The forward-chaining approach is commonly used in expert systems, such as CLIPS,
business rule systems, and production rule systems.

Example:

"As per the law, it is a crime for an American to sell weapons to hostile nations. Country A,
an enemy of America, has some missiles, and all the missiles were sold to it by Robert, who
is an American citizen."

Prove that "Robert is criminal."

To solve the above problem, first, we will convert all the above facts into first-order definite
clauses, and then we will use a forward-chaining algorithm to reach the goal.

Facts Conversion into FOL:


o It is a crime for an American to sell weapons to hostile nations. (Let p, q, and r be
variables.)
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
o Country A has some missiles: ∃p Owns(A, p) ∧ Missile(p). This can be written as two
definite clauses by Existential Instantiation, introducing a new constant T1.
Owns(A, T1) ......(2)
Missile(T1) .......(3)
o All of the missiles were sold to country A by Robert.
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
o Missiles are weapons.
∀p Missile(p) → Weapon(p) .......(5)
o An enemy of America is known as hostile.
∀p Enemy(p, America) → Hostile(p) ........(6)
o Country A is an enemy of America.
Enemy(A, America) .........(7)
o Robert is American.
American(Robert) ..........(8)

Forward chaining proof:

Step-1:

In the first step we will start with the known facts and will choose the sentences which do not
have implications, such as: American(Robert), Enemy(A, America), Owns(A, T1), and
Missile(T1). All these facts will be represented as below.

Step-2:

At the second step, we will see those facts which infer from available facts and with satisfied
premises.

Rule (1) does not yet have its premises satisfied, so it will not fire in the first iteration.

Facts (2) and (3) are already added.

Rule (4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added; it is inferred
from the conjunction of facts (2) and (3).

Rule (6) is satisfied with the substitution {p/A}, so Hostile(A) is added; it is inferred from
fact (7).
Step-3:

At step 3, Rule (1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add
Criminal(Robert), which is inferred from all the available facts. Hence we have reached our
goal statement.

Hence it is proved that Robert is a criminal using the forward-chaining approach.
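The iteration in Steps 1–3 can be sketched as a loop that keeps firing rules whose premises are all known. For simplicity the sketch works on ground instances (the substitutions listed above are applied by hand), so it is a propositional simplification of the first-order algorithm, not the full pattern-matching version:

```python
# Ground facts and rules of the crime KB, with variables already
# instantiated (p=Robert, q=T1, r=A).
facts = {"American(Robert)", "Missile(T1)", "Owns(A,T1)", "Enemy(A,America)"}

rules = [
    ({"Missile(T1)"}, "Weapon(T1)"),                                # rule (5)
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),          # rule (4)
    ({"Enemy(A,America)"}, "Hostile(A)"),                           # rule (6)
    ({"American(Robert)", "Weapon(T1)",
      "Sells(Robert,T1,A)", "Hostile(A)"}, "Criminal(Robert)"),     # rule (1)
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                      # repeat until no new fact is added
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # fire the rule
                changed = True
    return facts

derived = forward_chain(facts, rules)
```

After the loop, Criminal(Robert) is among the derived facts, matching the proof above.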

BACKWARD CHAINING

Backward chaining is also known as backward deduction or backward reasoning when

using an inference engine. A backward-chaining algorithm is a form of reasoning that starts
with the goal and works backward, chaining through rules to find known facts that support the
goal.

Properties of backward chaining:

o It is known as a top-down approach.

o Backward chaining is based on the Modus Ponens inference rule.
o In backward chaining, the goal is broken into sub-goals to prove the facts
true.
o It is called a goal-driven approach, as a list of goals decides which rules are selected and
used.
o The backward-chaining algorithm is used in game theory, automated theorem-proving
tools, inference engines, proof assistants, and various AI applications.
o The backward-chaining method mostly uses a depth-first search strategy for proofs.

Example: (the same crime problem as above)

Backward-chaining proof:

In backward chaining, we will start with our goal predicate, Criminal(Robert), and
then work backward through the rules.

Step-1:

At the first step, we will take the goal fact. And from the goal fact, we will infer other facts, and
at last, we will prove those facts true. So our goal fact is "Robert is Criminal," so following is the
predicate of it.

Step-2:

At the second step, we will infer other facts from the goal fact which satisfy the rules. As we can
see in Rule (1), the goal predicate Criminal(Robert) is present with the substitution {p/Robert}. So
we will add all the conjunctive facts below the first level and replace p with Robert.

Here we can see that American(Robert) is a fact, so it is proved here.


Step-3:

At step 3, we will extract the further fact Missile(q), from which Weapon(q) is inferred, as it
satisfies Rule (5). Weapon(q) is also true with the substitution of the constant T1 for q.

Step-4:

At step 4, we can infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which
satisfies Rule (4) with the substitution of A in place of r. So these two statements are proved
here.
Step-5:

At step 5, we can infer the fact Enemy(A, America) from Hostile(A), which satisfies Rule (6).
And hence all the statements are proved true using backward chaining.
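The goal-driven search can be sketched the same way over the ground rules: a goal is proved if it is a known fact, or if some rule concludes it and every premise of that rule can be proved recursively (a depth-first pass). As with the forward-chaining sketch, this is a propositional simplification with the substitutions applied by hand:

```python
# Ground facts and rules of the crime KB (variables pre-instantiated).
facts = {"American(Robert)", "Missile(T1)", "Owns(A,T1)", "Enemy(A,America)"}

rules = [
    ({"Missile(T1)"}, "Weapon(T1)"),
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),
    ({"Enemy(A,America)"}, "Hostile(A)"),
    ({"American(Robert)", "Weapon(T1)",
      "Sells(Robert,T1,A)", "Hostile(A)"}, "Criminal(Robert)"),
]

def backward_chain(goal):
    # A goal is proved if it is a known fact ...
    if goal in facts:
        return True
    # ... or some rule concludes it and all its premises can be proved.
    for premises, conclusion in rules:
        if conclusion == goal and all(backward_chain(p) for p in premises):
            return True
    return False
```

backward_chain("Criminal(Robert)") succeeds by exactly the sub-goal decomposition shown in Steps 1 through 5; only the rules whose conclusions match a sub-goal are ever examined.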

DIFFERENCE BETWEEN FORWARD CHAINING AND BACKWARD CHAINING


Forward Chaining vs. Backward Chaining:

1. Forward chaining starts from known facts and applies inference rules to extract more data
until it reaches the goal. Backward chaining starts from the goal and works backward through
inference rules to find the required facts that support the goal.

2. Forward chaining is a bottom-up approach; backward chaining is a top-down approach.

3. Forward chaining is known as a data-driven inference technique, as we reach the goal
using the available data. Backward chaining is known as a goal-driven technique, as we start
from the goal and divide it into sub-goals to extract the facts.

4. Forward-chaining reasoning applies a breadth-first search strategy; backward-chaining
reasoning applies a depth-first search strategy.

5. Forward chaining tests all the available rules; backward chaining tests only the few
required rules.

6. Forward chaining is suitable for planning, monitoring, control, and interpretation
applications; backward chaining is suitable for diagnostic, prescription, and debugging
applications.

7. Forward chaining can generate an infinite number of possible conclusions; backward
chaining generates a finite number of possible conclusions.

8. Forward chaining operates in the forward direction; backward chaining operates in the
backward direction.

9. Forward chaining is aimed at any conclusion that follows; backward chaining is aimed
only at the required data.
Resolution in FOL

Resolution

Resolution is a theorem-proving technique that proceeds by building refutation proofs, i.e.,
proofs by contradiction. It was invented by the mathematician John Alan Robinson in 1965.

Resolution is used when various statements are given and we need to prove a conclusion
from those statements. Unification is a key concept in proofs by resolution. Resolution is a single
inference rule which can efficiently operate on the conjunctive normal form or clausal form.

Clause: A disjunction of literals is called a clause. A clause containing a single literal is known
as a unit clause.

Conjunctive Normal Form: A sentence represented as a conjunction of clauses is said to


be conjunctive normal form or CNF.

The resolution inference rule:

The resolution rule for first-order logic is simply a lifted version of the propositional rule.
Resolution can resolve two clauses if they contain complementary literals, which are assumed to
be standardized apart so that they share no variables. From the clauses l1 ∨ ... ∨ lk and
m1 ∨ ... ∨ mn, where UNIFY(li, ¬mj) = θ, resolution infers

SUBST(θ, l1 ∨ ... ∨ li−1 ∨ li+1 ∨ ... ∨ lk ∨ m1 ∨ ... ∨ mj−1 ∨ mj+1 ∨ ... ∨ mn),

the disjunction of all the remaining literals with the unifier applied; here li and mj are the
complementary literals.

This rule is also called the binary resolution rule because it resolves exactly two literals.

Example:

We can resolve the two clauses given below:

[Animal(g(x)) V Loves(f(x), x)] and [¬Loves(a, b) V ¬Kills(a, b)]

where the two complementary literals are Loves(f(x), x) and ¬Loves(a, b).

These literals can be unified with the unifier θ = {a/f(x), b/x}, and resolution generates the
resolvent clause:

[Animal(g(x)) V ¬Kills(f(x), x)].
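The unifier computation itself can be sketched with a small recursive routine. This is a simplified illustration, not the algorithm as given in the text: it omits the occurs check, and it uses an invented term encoding (nested tuples for compound terms, strings beginning with "?" for variables). It finds a most general unifier equivalent, up to variable renaming, to the θ in the example:

```python
# A simplified unifier for resolution (no occurs check).
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def substitute(t, theta):
    # Apply substitution theta to term t, following binding chains.
    if is_var(t):
        return substitute(theta[t], theta) if t in theta else t
    if isinstance(t, tuple):
        return tuple(substitute(a, theta) for a in t)
    return t

def unify(t1, t2, theta=None):
    theta = dict(theta or {})
    t1, t2 = substitute(t1, theta), substitute(t2, theta)
    if t1 == t2:
        return theta
    if is_var(t1):
        theta[t1] = t2
        return theta
    if is_var(t2):
        theta[t2] = t1
        return theta
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None  # symbol clash: the terms cannot be unified

# Unifying Loves(f(x), x) with Loves(a, b):
theta = unify(("Loves", ("f", "?x"), "?x"), ("Loves", "?a", "?b"))
```

Applying the computed substitution to both literals yields the same common instance, which is what resolution needs before the complementary pair can be canceled.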

Steps for Resolution:

1. Convert the facts into first-order logic.

2. Convert the FOL statements into CNF.
3. Negate the statement to be proved (proof by contradiction).
4. Draw the resolution graph (unification).

To better understand all the above steps, we will take an example in which we will apply
resolution.

Example:
a. John likes all kinds of food.
b. Apples and vegetables are food.
c. Anything anyone eats and is not killed by is food.
d. Anil eats peanuts and is still alive.
e. Harry eats everything that Anil eats.

Prove by resolution that:

f. John likes peanuts.

Step-1: Conversion of Facts into FOL

In the first step, we will convert all the given statements into first-order logic.
Step-2: Conversion of FOL into CNF

In first-order logic resolution, it is required to convert the FOL statements into CNF, as the
CNF form makes resolution proofs easier.

o Eliminate all implications (→) and rewrite:

a. ∀x ¬food(x) V likes(John, x)
b. food(Apple) Λ food(vegetables)
c. ∀x ∀y ¬[eats(x, y) Λ ¬killed(x)] V food(y)
d. eats(Anil, Peanuts) Λ alive(Anil)
e. ∀x ¬eats(Anil, x) V eats(Harry, x)
f. ∀x ¬[¬killed(x)] V alive(x)
g. ∀x ¬alive(x) V ¬killed(x)
h. likes(John, Peanuts).

o Move negation (¬) inwards and rewrite:

a. ∀x ¬food(x) V likes(John, x)
b. food(Apple) Λ food(vegetables)
c. ∀x ∀y ¬eats(x, y) V killed(x) V food(y)
d. eats(Anil, Peanuts) Λ alive(Anil)
e. ∀x ¬eats(Anil, x) V eats(Harry, x)
f. ∀x killed(x) V alive(x)
g. ∀x ¬alive(x) V ¬killed(x)
h. likes(John, Peanuts).

o Rename variables (standardize variables apart):

a. ∀x ¬food(x) V likes(John, x)
b. food(Apple) Λ food(vegetables)
c. ∀y ∀z ¬eats(y, z) V killed(y) V food(z)
d. eats(Anil, Peanuts) Λ alive(Anil)
e. ∀w ¬eats(Anil, w) V eats(Harry, w)
f. ∀g killed(g) V alive(g)
g. ∀k ¬alive(k) V ¬killed(k)
h. likes(John, Peanuts).

o Eliminate existential quantifiers.
In this step, we would eliminate any existential quantifier ∃; this process is known
as Skolemization. Since there is no existential quantifier in this example problem,
all the statements remain the same in this step.

o Drop universal quantifiers.
In this step, we drop all universal quantifiers, since all the remaining variables are
implicitly universally quantified, so the quantifiers are not needed:

a. ¬food(x) V likes(John, x)
b. food(Apple)
c. food(vegetables)
d. ¬eats(y, z) V killed(y) V food(z)
e. eats(Anil, Peanuts)
f. alive(Anil)
g. ¬eats(Anil, w) V eats(Harry, w)
h. killed(g) V alive(g)
i. ¬alive(k) V ¬killed(k)
j. likes(John, Peanuts).

o Distribute conjunction Λ over disjunction V.
This step makes no change in this problem.

Step-3: Negate the statement to be proved


In this step, we will apply negation to the conclusion statement, which will be written as
¬likes(John, Peanuts)

Step-4: Draw Resolution graph:

Now in this step, we will solve the problem using a resolution tree and substitution. For the above
problem, it proceeds as follows.

Hence the negation of the conclusion leads to a complete contradiction with the given
set of statements.

Explanation of Resolution graph:

o In the first step of the resolution graph, ¬likes(John, Peanuts) and likes(John, x) are
resolved (canceled) with the substitution {Peanuts/x}, and we are left with ¬food(Peanuts).
o In the second step, ¬food(Peanuts) and food(z) are resolved with the substitution
{Peanuts/z}, and we are left with ¬eats(y, Peanuts) V killed(y).
o In the third step, ¬eats(y, Peanuts) and eats(Anil, Peanuts) are resolved with the
substitution {Anil/y}, and we are left with killed(Anil).
o In the fourth step, killed(Anil) and ¬killed(k) are resolved with the substitution
{Anil/k}, and we are left with ¬alive(Anil).
o In the last step, ¬alive(Anil) and alive(Anil) are resolved, producing the empty clause.
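Since every substitution in the graph is listed above, the refutation can be replayed as propositional resolution over the corresponding ground clauses. The sketch below is illustrative (literals are plain strings, with "~" marking negation, an invented encoding): it saturates the clause set and reports whether the empty clause is ever derived:

```python
from itertools import combinations

# Ground instances of the relevant clauses, plus the negated goal.
clauses = [
    frozenset({"~food(Peanuts)", "likes(John,Peanuts)"}),
    frozenset({"~eats(Anil,Peanuts)", "killed(Anil)", "food(Peanuts)"}),
    frozenset({"eats(Anil,Peanuts)"}),
    frozenset({"~alive(Anil)", "~killed(Anil)"}),
    frozenset({"alive(Anil)"}),
    frozenset({"~likes(John,Peanuts)"}),      # negated goal
]

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def refutes(clauses):
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for lit in c1:
                if negate(lit) in c2:                 # complementary pair
                    resolvent = (c1 - {lit}) | (c2 - {negate(lit)})
                    if not resolvent:                 # empty clause derived
                        return True
                    new.add(frozenset(resolvent))
        if new <= clauses:        # no new resolvents: saturation reached
            return False
        clauses |= new
```

Running refutes on the clause set derives the empty clause by exactly the five cancellation steps listed above, confirming that ¬likes(John, Peanuts) contradicts the knowledge base.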
