SCSA1702 - AI - Unit III
Unit III
Logical Agents
The representation of knowledge and the reasoning processes that bring knowledge to
life are central to the entire field of artificial intelligence.
• Knowledge and reasoning enable successful behaviors that would otherwise be very hard to
achieve.
• Knowledge and reasoning also play a crucial role in dealing with partially observable
environments.
• A knowledge-based agent can combine general knowledge with current percepts to
infer hidden aspects of the current state prior to selecting actions.
Like all our agents, it takes a percept as input and returns an action.
• The agent maintains a knowledge base, KB, which may initially contain some
background knowledge. Each time the agent program is called, it does three
things.
• First, it TELLS the knowledge base what it perceives.
• Second, it ASKS the knowledge base what action it should perform. In
the process of answering this query, extensive reasoning may be done
about the current state of the world, about the outcomes of possible action
sequences, and so on.
• Third, the agent records its choice with TELL and executes the action.
The TELL is necessary to let the knowledge base know that the
hypothetical action has actually been executed.
The details of the Representation Language are hidden inside three functions that
implement the interface between the sensors and actuators and the core representation and
reasoning system.
• MAKE-PERCEPT-SENTENCE takes a percept and a time and returns a sentence
asserting that the agent perceived the given percept at the given time.
• MAKE-ACTION-QUERY takes a time as input and returns a sentence that asks what
action should be done at the current time.
• MAKE-ACTION-SENTENCE constructs a sentence asserting that the chosen action
was executed. The details of the inference mechanisms are hidden inside TELL and ASK.
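The following is a minimal Python sketch of this agent program loop, assuming simple string-based sentences and a stub knowledge base (class names such as KnowledgeBase and KBAgent are illustrative, not from the text):

```python
# A minimal sketch of the generic knowledge-based agent described above.
# The knowledge base is a stub; a real agent would plug a logical
# inference engine in behind tell() and ask().

class KnowledgeBase:
    def __init__(self, background_knowledge=()):
        self.sentences = list(background_knowledge)

    def tell(self, sentence):
        self.sentences.append(sentence)

    def ask(self, query):
        # Placeholder: a real KB would run inference here.
        return "NoOp"

def make_percept_sentence(percept, t):
    return f"Percept({percept}, {t})"

def make_action_query(t):
    return f"ASK: what action should be done at time {t}?"

def make_action_sentence(action, t):
    return f"Action({action}, {t})"

class KBAgent:
    def __init__(self, kb):
        self.kb = kb
        self.t = 0   # a counter indicating time

    def __call__(self, percept):
        # First, TELL the KB what the agent perceives.
        self.kb.tell(make_percept_sentence(percept, self.t))
        # Second, ASK the KB what action to perform.
        action = self.kb.ask(make_action_query(self.t))
        # Third, record the chosen action with TELL, then act.
        self.kb.tell(make_action_sentence(action, self.t))
        self.t += 1
        return action
```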
Knowledge level
The knowledge-based agent is not an arbitrary program for calculating actions. It is
amenable to a description at the knowledge level, where we need specify only what the agent
knows and what its goals are, in order to fix its behavior.
Implementation level
For example: an automated taxi might have the goal of taking a passenger to Marin
County and might know that it is in San Francisco and that the Golden Gate Bridge is the only
link between the two locations. Then we can expect it to cross the Golden Gate Bridge because it
knows that that will achieve its goal. This analysis is independent of how the taxi works at the
implementation level. It doesn't matter whether its geographical knowledge is implemented as
linked lists or pixel maps, or whether it reasons by manipulating strings of symbols stored in
registers or by propagating noisy signals in a network of neurons.
• One can build a knowledge-based agent simply by Telling it what it needs to know.
• The agent's initial program, before it starts to receive percepts, is built by adding one by
one the sentences that represent the designer's knowledge of the environment.
• Designing the representation language to make it easy to express this knowledge in the
form of sentences simplifies the construction problem enormously. This is called the
declarative approach to system building.
• The procedural approach encodes desired behaviors directly as program code;
minimizing the role of explicit representation and reasoning can result in a much more
efficient system.
The Wumpus world environment
The percepts will be given to the agent in the form of a list of five symbols; for example,
if there is a stench and a breeze, but no glitter, bump, or scream, the agent will receive the
percept [Stench, Breeze, None, None, None].
In most instances of the wumpus world, it is possible for the agent to retrieve the gold
safely. Occasionally, the agent must choose between going home empty-handed and risking
death to find the gold. About 21% of the environments are utterly unfair, because the gold is in a
pit or surrounded by pits.
The agent's initial knowledge base contains the rules of the environment, as listed above;
in particular, it knows that it is in [1,1] and that [1,1] is a safe square.
• The first percept is [None, None, None, None, None], from which the agent can
conclude that its neighboring squares are safe. Figure 3.2(a) shows the agent's state of knowledge
at this point. We list the sentences in the knowledge base using letters such as B (breezy) and OK
(safe, neither pit nor wumpus) marked in the appropriate squares.
Figure 3.2 The first step taken by the agent in the wumpus world. (a) The initial situation, after
percept [None, None, None, None, None]. (b) After one move, with percept [None, Breeze,
None, None, None].
From the fact that there was no stench or breeze in [1,1], the agent can infer that [1,2] and
[2,1] are free of dangers. They are marked with an OK to indicate this. A cautious agent will
move only into a square that it knows is OK. Let us suppose the agent decides to move forward
to [2,1], giving the scene in Figure 3.2(b).
• The agent detects a breeze in [2,1], so there must be a pit in a neighboring square. The pit
cannot be in [1, 1], by the rules of the game, so there must be a pit in [2,2] or [3,1] or both. The
notation P? in Figure 3.2(b) indicates a possible pit in those squares. At this point, there is only
one known square that is OK and has not been visited yet. So the prudent agent will turn around,
go back to [1, 1] and then proceed to [1,2].
• The new percept in [1,2] is [Stench, None, None, None, None], resulting in the state of
knowledge shown in Figure 3.3(a). The stench in [1,2] means that there must be a wumpus
nearby. But the wumpus cannot be in [1,1], by the rules of the game, and it cannot be in [2,2] (or
the agent would have detected a stench when it was in [2,1]).
Therefore, the agent can infer that the wumpus is in [1,3]. The notation W! indicates this.
Moreover, the lack of a Breeze in [1,2] implies that there is no pit in [2,2]. Yet we already
inferred that there must be a pit in either [2,2] or [3,1], so this means it must be in [3,1]. This is a
fairly difficult inference, because it combines knowledge gained at different times in different
places and relies on the lack of a percept to make one crucial step. The inference is beyond the
abilities of most animals, but it is typical of the kind of reasoning that a logical agent does.
• The agent has now proved to itself that there is neither a pit nor a wumpus in [2,2], so it is OK
to move there. We will not show the agent's state of knowledge at [2,2]; we just assume that the
agent turns and moves to [2,3], giving us Figure 3.3(b). In [2,3], the agent detects a glitter, so it
should grab the gold and thereby end the game.
• In each case where the agent draws a conclusion from the available information, that
conclusion is guaranteed to be correct if the available information is correct. This is a fundamental
property of logical reasoning.
Fig 3.3 Two later stages in the progress of the agent. (a) After the third move, with percept
[Stench, None, None, None, None]. (b) After the fifth move, with percept [Stench, Breeze,
Glitter, None, None].
LOGIC
• Knowledge bases consist of sentences. These sentences are expressed according to the syntax
of the representation language, which specifies all the sentences that are well formed.
• The syntax is clear enough in ordinary arithmetic:
"x + y = 4" is a well-formed sentence, whereas "x2y+ =" is not.
semantics
• A logic must also define the semantics of the language.
• Semantics means "meaning" of sentences. In logic, the definition is more precise.
• The semantics of the language defines the truth of each sentence with respect to each possible
world.
• For example, "x + y = 4" is true in a world where x is 2 and y is 2, but false in a world where x
is 1 and y is 1.
In standard logics, every sentence must be either true or false in each possible world;
there is no "in between".
Example: let x and y be the numbers of men and women sitting at a table playing bridge;
the sentence x + y = 4 is true when there are four people in total. Formally, the possible
models are just all possible assignments of numbers to the variables x and y. Each such
assignment fixes the truth of any sentence of arithmetic whose variables are x and y.
Entailment
• Entailment means that one thing follows from another: the relation of logical entailment
between sentences holds when one sentence follows logically from another.
• KB ╞ α Knowledge base KB entails sentence α if and only if α is true in all worlds
where KB is true
– E.g., the KB containing “the Giants won” and “the Reds won” entails “Either the Giants won
or the Reds won”.
– E.g., x+y = 4 entails 4 = x+y.
– Entailment is a relationship between sentences (i.e., syntax) that is based on semantics.
Models
• Logicians typically think in terms of models, which are formally structured worlds with respect
to which truth can be evaluated.
m is a model of a sentence α if α is true in m
Figure 3.3: Possible models for the presence of pits in squares [1,2], [2,2], and [3,1], given
observations of nothing in [1,1] and a breeze in [2,1]. (a) Models of the knowledge base and α1
(no pit in [1,2]). (b) Models of the knowledge base and α2 (no pit in [2,2]).
Conclusion
• The KB is false in models that contradict what the agent knows; for example, the KB is false in
any model in which [1,2] contains a pit, because there is no breeze in [1,1].
• Now let us consider two possible conclusions:
• α1 = "There is no pit in [1,2]."
• α2 = "There is no pit in [2,2]."
• In every model in which KB is true, α1 is also true.
• Hence, KB ╞ α1 : there is no pit in [1,2].
• In some models in which KB is true, α2 is false.
• Hence, KB ⊭ α2: the agent cannot conclude that there is no pit in [2,2].
Logical Inference: The example above illustrates entailment and shows how the
definition of entailment can be applied to derive conclusions, that is, to carry out logical inference.
Model Checking: The inference algorithm illustrated in Figure 3.3 is called model checking,
because it enumerates all possible models to check that α is true in all models in which KB is
true.
If an inference algorithm i can derive α from KB, then
KB ├ i α ,
This notation is read:
"α is derived from KB by i" (or) "i derives α from KB".
Sound : An inference algorithm that derives only entailed sentences is called sound or truth
preserving.
• i is sound if whenever KB ├i α, it is also true that KB╞ α
Soundness is a highly desirable property. An unsound inference procedure essentially
makes things up as it goes along; it announces the discovery of nonexistent needles.
Complete: an inference algorithm is complete if it can derive any sentence that is entailed.
• i is complete if whenever KB╞ α, it is also true that KB ├ i α
PROPOSITIONAL LOGIC
• Propositional logic is the simplest logic.
• This section covers the syntax of propositional logic and its semantics, the way in which the
truth of sentences is determined.
Syntax -The syntax of propositional logic defines the allowable sentences.
The atomic sentences - the indivisible syntactic elements consist of a single proposition
symbol. Each such symbol stands for a proposition that can be true or false.
Rules:
i. Uppercase names are used for symbols, i.e., P, Q, R and so on.
ii. Names are arbitrary.
Complex Sentences: Complex sentences are constructed from simpler sentences using logical
connectives. There are five connectives.
If S is a sentence, ¬S is a sentence (negation)
If S1 and S2 are sentences, S1 ˄S2 is a sentence (conjunction)
If S1 and S2 are sentences, S1 ˅S2 is a sentence (disjunction)
If S1 and S2 are sentences, S1 ⇒ S2 is a sentence (implication)
If S1 and S2 are sentences, S1 ⇔ S2 is a sentence (biconditional / IF AND ONLY IF)
¬ (not). A sentence such as ¬W1,3 is called the negation of W1,3. A literal is either an atomic
sentence (a positive literal) or a negated atomic sentence (a negative literal).
∧ (and). A sentence whose main connective is ∧, such as W1,3 ∧ P3,1, is called a conjunction; its
parts are the conjuncts. (The ∧ looks like an "A" for "And.")
∨ (or). A sentence using ∨, such as (W1,3 ∧ P3,1) ∨ W2,2, is a disjunction of the disjuncts (W1,3 ∧
P3,1) and W2,2.
⇒ (implies). A sentence such as (W1,3 ∧ P3,1) ⇒ ¬W2,2 is called an implication (or conditional).
Its premise or antecedent is (W1,3 ∧ P3,1), and its conclusion or consequent is ¬W2,2. Implications are
also known as rules or if-then statements.
⇔ (if and only if). The sentence W1,3 ⇔ ¬W2,2 is a biconditional.
BNF (Backus-Naur Form): Figure 3.4 shows the BNF grammar of sentences in
propositional logic.
Semantics
• The semantics defines the rules for determining the truth of a sentence with respect to a
particular model.
• In propositional logic, a model simply fixes the truth value, true or false for every proposition
symbol.
• For example, if the sentences in the knowledge base make use of the proposition symbols Pl,2,
P2,2, and P3,1, then one possible model is
m1 = {P1,2 = false, P2,2 = false, P3,1 = true} .
• With three proposition symbols, there are 2³ = 8 possible models.
• The semantics for propositional logic must specify how to compute the truth value of any
sentence, given a model. This is done recursively. All sentences are constructed from atomic
sentences and the five connectives; therefore, we need to specify how to compute the truth of
atomic sentences and how to compute the truth of sentences formed with each of the five
connectives.
Atomic sentences
Atomic sentences are easy:
• True is true in every model and False is false in every model.
• The truth value of every other proposition symbol must be specified directly in the model.
• For example, in the model m1 given earlier, P1,2 is false.
Complex sentences
• For any sentence s and any model m, the sentence ¬s is true in m if and only if s is false in m.
• Such rules reduce the truth of a complex sentence to the truth of simpler sentences.
• The rules for each connective can be summarized in a truth table that specifies the truth value
of a complex sentence for each possible assignment of truth values to its components.
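As an illustration, here is a minimal Python sketch of this recursive computation, assuming sentences are represented as nested tuples (a representation chosen for this sketch, not from the text):

```python
# A sketch of recursive truth evaluation for propositional sentences.
# Sentences are nested tuples, e.g. ('and', 'P', ('not', 'Q'));
# a model is a dict mapping proposition symbols to True/False.

def pl_true(sentence, model):
    if isinstance(sentence, str):          # atomic sentence: look it up
        return model[sentence]
    op, *args = sentence
    if op == 'not':
        return not pl_true(args[0], model)
    if op == 'and':
        return pl_true(args[0], model) and pl_true(args[1], model)
    if op == 'or':
        return pl_true(args[0], model) or pl_true(args[1], model)
    if op == 'implies':                    # false only when premise true, conclusion false
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == 'iff':
        return pl_true(args[0], model) == pl_true(args[1], model)
    raise ValueError(f"unknown connective: {op}")

# In the model m1 given earlier, P3,1 ∨ P2,2 is true:
m1 = {'P12': False, 'P22': False, 'P31': True}
print(pl_true(('or', 'P31', 'P22'), m1))   # True
```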
For example,
(a) "5 is even implies Sam is smart" is true, regardless of whether Sam is smart.
(b) "P Q" saying that “If P is true, then Q is true”.
As a running example, the wumpus knowledge base from the text relates breezes to pits (R1
through R3) and records the percepts in [1,1] and [2,1] (R4 and R5):
R1: ¬P1,1
R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
R4: ¬B1,1
R5: B2,1
The knowledge base, then, consists of sentences R1 through R5.
INFERENCE RULES
Figure 3.6 . A truth table constructed for the knowledge base given in the text.
From the above table, KB is true if R1 through R5 are true, which occurs in just 3 of the 128
rows. In all 3 rows, P1,2 is false, so there is no pit in [1,2]. On the other hand, there might (or
might not) be a pit in [2,2].
The algorithm for deciding entailment in propositional logic is shown below.
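A minimal Python sketch of this truth-table enumeration (TT-ENTAILS in AIMA) follows; it reuses the pl_true evaluator and tuple representation from the earlier sketch:

```python
# A sketch of entailment by truth-table enumeration: enumerate all 2^n
# models over the proposition symbols and check that the query is true
# in every model where the KB is true.
from itertools import product

def tt_entails(kb_sentences, query, symbols):
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(pl_true(s, model) for s in kb_sentences):   # KB true here?
            if not pl_true(query, model):                  # then query must be true
                return False
    return True

# The wumpus KB R1..R5 from the text: 7 symbols, hence 128 rows.
symbols = ['P11', 'P12', 'P21', 'P22', 'P31', 'B11', 'B21']
KB = [('not', 'P11'),                                      # R1
      ('iff', 'B11', ('or', 'P12', 'P21')),                # R2
      ('iff', 'B21', ('or', 'P11', ('or', 'P22', 'P31'))), # R3
      ('not', 'B11'),                                      # R4
      'B21']                                               # R5
print(tt_entails(KB, ('not', 'P12'), symbols))  # True:  KB ╞ ¬P1,2
print(tt_entails(KB, ('not', 'P22'), symbols))  # False: KB ⊭ ¬P2,2
```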
FIRST-ORDER LOGIC
First-order logic is built around objects, relations, functions, and properties, as the following
examples show:
1. “Evil King John ruled England in 1200”
Objects: John, England, 1200; Relation: ruled; Properties: evil, king.
2. "One plus two equals three"
Objects: one, two, three, one plus two; Relation: equals;
Function: plus. ("One plus two" is a name for the object that is obtained by applying the
function "plus" to the objects "one" and "two." Three is another name for this object.)
3. "Squares neighboring the wumpus are smelly."
Objects: wumpus, squares; Property: smelly; Relation: neighboring.
Ontological Commitment
• The primary difference between propositional and first-order logic lies in the ontological
commitment made by each language, that is, what it assumes about the nature of reality. Special-
purpose logics make further ontological commitments.
Temporal Logic: It assumes that facts hold at particular times and that those times (intervals) are
ordered (arranged).
Higher-order logic: It is more expressive than FOL. It allows one to make assertions about all
relations.
Epistemological Commitments
The epistemological commitment of a logic concerns the possible states of knowledge it allows
with respect to each fact. In propositional and first-order logic, a sentence represents a fact and the
agent believes the sentence to be true, believes it to be false, or has no opinion. These logics
therefore have three possible states of knowledge regarding any sentence. Systems using
probability theory can have any degree of belief, ranging from 0 (total disbelief) to 1 (total belief).
Ontological Commitment and Epistemological Commitment of different logics
(language – ontological commitment – epistemological commitment):
Propositional logic – facts – true / false / unknown
First-order logic – facts, objects, relations – true / false / unknown
Temporal logic – facts, objects, relations, times – true / false / unknown
Probability theory – facts – degree of belief in [0, 1]
Fuzzy logic – facts with degree of truth in [0, 1] – known interval value
Syntax → structure of sentences
Semantics → meanings of sentences
1) Models for FOL
Models for propositional logic are just sets of truth values for the proposition symbols
(ie 0’s and 1’s).
Models for first-order logic are represented in terms of objects and predicates on
objects, i.e., properties of objects or relations between objects.
The domain of a model is the set of objects it contains. These objects are sometimes
called domain elements.
Relation → related set of tuples of objects
Tuple → is a collection of objects arranged in a fixed order and is written with angle
brackets surrounding the objects.
Example: Richard the Lionheart, King of England from 1189 to 1199; his younger brother, the evil
King John, who ruled from 1199 to 1215; the left legs of Richard and John; and a crown.
• { (Richard the Lionheart, King John), (King John, Richard the Lionheart) }
In this example, Richard the Lionheart and King John are the objects standing in the
brotherhood relation.
2) Atomic Sentences
• Constant symbols and function symbols refer to objects; predicate symbols refer to relations.
An atomic sentence states a fact.
• An Atomic Sentence is formed from a predicate symbol followed by parenthesized list of
terms.
Atomic sentence = predicate (term1,...,term n) or term1 = term2
Term = function (term1,...,term n) or constant or variable
• Eg. 1) William is the brother of Richard
Brother ( William, Richard)
2) Richard’s father is married to William’s mother
Married (Father(Richard), Mother(William))
• An atomic sentence is true in a given model, under a given interpretation, if the relation
referred to by the predicate symbol holds among the objects referred to by the arguments.
3) Complex Sentences
Logical connectives are used to construct more complex sentences, with the same syntax and
semantics as in propositional logic.
• Complex sentences are made from atomic sentences using connectives.
¬S, S1 ∧ S2, S1 ∨ S2, S1 ⇒ S2, S1 ⇔ S2
E.g. 1) Sibling(King John, Richard) ⇒ Sibling(Richard, King John)
2) Either Richard is king or John is King.
King(Richard) V King (John)
3) Richard is not king; therefore John is king.
¬King(Richard) ⇒ King(John)
4) Quantifiers
• To express properties of entire collections of objects, instead of enumerating the objects by
name.
• First-order logic contains two standard quantifiers, called Universal (∀) and Existential (Ǝ).
Universal quantification(∀)
This quantifier is usually denoted ∀ and pronounced "For all".
The quantified expression is true for all objects x (the variable) in the universe.
Eg. "All kings are persons“ is written in first-order logic as
∀x King (x) Person(x). (ie) "For all x, if x is a king, then x is a person”. X →
Variable.
The symbol x is called variable. Eg. LeftLeg(x)
Ground Term- A term with no variable is called a ground term.
The sentence ∀x P, where P is any logical expression, says that P is true for every
object x.
Extended Interpretations
• ∀x P is true in a given model under a given interpretation if P is true in all possible extended
interpretations constructed from the given interpretation, where each extended interpretation
specifies a domain element to which x refers.
• We can extend the interpretation in five ways:
• x → Richard the Lionheart,
• x → King John,
• x → Richard's left leg,
• x → John's left leg,
• x → the crown.
• The universally quantified sentence
∀x King(x) ⇒ Person(x) is true under the original interpretation if the sentence
King(x) ⇒ Person(x) is true in each of these five extended interpretations. For instance, for the
extension x → the crown, the antecedent King(x) is false, so the implication as a whole is true.
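The following small Python sketch mirrors this, evaluating the quantified sentence by trying every extension of x over the five-object domain from the example (the set contents are assumptions of this sketch):

```python
# A sketch of evaluating ∀x King(x) ⇒ Person(x) by trying every extended
# interpretation x → object over a small finite domain, as described above.
domain = ["Richard", "John", "Richard's left leg", "John's left leg", "the crown"]
kings = {"Richard", "John"}
persons = {"Richard", "John"}

def implies(p, q):
    return (not p) or q

# ∀x King(x) ⇒ Person(x): true iff the implication holds for every x.
# For x = "the crown" the antecedent King(x) is false, so the
# implication is (vacuously) true for that extension.
print(all(implies(x in kings, x in persons) for x in domain))   # True

# An existential (Ǝy) check would use any(...) instead of all(...).
```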
Nested Quantifiers
• Sentences that use both kinds of quantifier, universal and existential, one nested inside the
other, are said to contain nested quantifiers. Quantifiers of the same type may also be nested.
• Eg: (a) Same type: "Brothers are siblings" (x and y are brothers):
∀x ∀y Brother(x,y) ⇒ Sibling(x,y) (an implication).
"Siblinghood is a symmetric relationship":
∀x,y Sibling(x,y) ⇔ Sibling(y,x) (a biconditional).
Example:
“Everybody loves somebody”
∀x Ǝy Loves(x,y)
"There is someone who is loved by everyone”, → Ǝy ∀x Loves(x,y)
“Everyone will be loved by some body”, → ∀x (Ǝy Loves( x, y))
“Some one will be loved by everybody” → Ǝx(∀y Loves(x,y))
Connections between ∀ and Ǝ
• The two quantifiers are actually intimately connected with each other, through negation.
• Eg: "Everyone likes ice cream“means that there is no one who does not like ice cream:
∀x Likes(x,IceCream) Ǝ⌐x ⌐Likes ( x, IceCream).
All X Nobody Dislikes
Because ∀ is really a conjunction over the universe of objects and Ǝ is a disjunction, they obey
De Morgan's rules. The De Morgan rules for quantified and unquantified sentences are:
∀x ¬P ≡ ¬Ǝx P        ¬(P ∧ Q) ≡ ¬P ∨ ¬Q
¬∀x P ≡ Ǝx ¬P        ¬(P ∨ Q) ≡ ¬P ∧ ¬Q
∀x P ≡ ¬Ǝx ¬P        P ∧ Q ≡ ¬(¬P ∨ ¬Q)
Ǝx P ≡ ¬∀x ¬P        P ∨ Q ≡ ¬(¬P ∧ ¬Q)
Assertions and queries in first-order logic
Sentences are added to a knowledge base using TELL, exactly as in propositional logic.
Such sentences are called assertions. For example, we can assert that John is a king and that
kings are persons:
TELL(KB, King(John)) (KB – knowledge base)
TELL(KB, ∀x King(x) ⇒ Person(x))
We can ask questions of the knowledge base using ASK.
For example, ASK(KB, King(John))
Questions asked using ASK are called queries or goals
(a) ASK(KB, Person(John)), i.e., to find whether John is a person. It is true.
(b) ASK(KB, Ǝx Person(x)). It may be true or false.
That is, we ask the KB whether there is some x that is a person; the answer is given by
providing such an x, in the form of a substitution list (or binding list).
If there is more than one answer, a list of substitutions can be returned.
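A minimal sketch of how such a substitution list can be computed, using a simple pattern matcher over ground facts (the '?x' variable convention and the helper names are assumptions of this sketch, not a standard API):

```python
# A sketch of how ASK can return a substitution (binding) list: match a
# query containing variables (strings starting with '?') against ground
# facts in the KB.

facts = [("King", "John"), ("Person", "John"), ("Person", "Richard")]

def match(query, fact, bindings):
    if len(query) != len(fact):
        return None
    bindings = dict(bindings)
    for q, f in zip(query, fact):
        if isinstance(q, str) and q.startswith('?'):   # variable
            if q in bindings and bindings[q] != f:
                return None                            # conflicting binding
            bindings[q] = f
        elif q != f:
            return None                                # constants disagree
    return bindings

def ask(query):
    # One substitution per matching fact; more than one answer is possible.
    return [b for fact in facts
            if (b := match(query, fact, {})) is not None]

print(ask(("Person", "John")))   # [{}] -- true, no variables to bind
print(ask(("Person", "?x")))     # [{'?x': 'John'}, {'?x': 'Richard'}]
```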
The kinship domain
• The domain of family relationships is called kinship domain.
• Eg. "E is the mother of C" and "C is the father of W" and rules such as "One's
grandmother is the mother of one's parent.“ (ie) “W’s grandmother is the mother of W’s
parent”.
• Kinship domain consists of
(a) Objects – people
(b) Unary predicate – Male and Female
(c) Binary predicate – Parent, Brother, Sister
(d) Function – Father, Mother (every person has exactly one of each)
(e) Relation – Brotherhood, Sisterhood
For example :
• one's mother is one's female parent:
∀m,c Mother(c) = m ⇔ Female(m) ∧ Parent(m, c), i.e., m is the mother of c if and only if m is
female and m is a parent of c.
• One's husband is one's male spouse:
∀w,h Husband(h, w) ⇔ Male(h) ∧ Spouse(h, w).
SYNTACTIC SUGAR : An extension to or abbreviation of the standard syntax that does not
change the semantics.
SETS :
• The domain of mathematical set representation, which consists of
(a) Constant – Empty Set { }
(b) Predicates – Member and subset
(c) Functions – Intersection, Union, and Adjoin
s1 ∩ s2, s1 ∪ s2, and {x|s2} (the set made by adjoining element x to set s2)
One possible set of axioms is as follows:
1. The only sets are the empty set and those made by adjoining something to a set:
∀s Set(s) ⇔ (s = { }) ∨ (Ǝx,s2 Set(s2) ∧ s = {x|s2})
2. The empty set has no elements adjoined into it; in other words, there is no way to decompose
EmptySet into a smaller set and an element:
¬Ǝx,s {x|s} = { }
3. Adjoining an element already in the set has no effect:
∀x,s x ∈ s ⇔ s = {x|s}
4. The only members of a set are the elements that were adjoined into it:
∀x,s x ∈ s ⇔ [Ǝy,s2 (s = {y|s2} ∧ (x = y ∨ x ∈ s2))]
5. A set is a subset of another set if and only if all of the first set's members are members of the
second set:
∀s1,s2 s1 ⊆ s2 ⇔ (∀x x ∈ s1 ⇒ x ∈ s2)
6. Two sets are equal if and only if each is a subset of the other:
∀s1,s2 (s1 = s2) ⇔ (s1 ⊆ s2 ∧ s2 ⊆ s1)
7. An object is in the intersection of two sets if and only if it is a member of both sets:
∀x,s1,s2 x ∈ (s1 ∩ s2) ⇔ (x ∈ s1 ∧ x ∈ s2)
8. An object is in the union of two sets if and only if it is a member of either set:
∀x,s1,s2 x ∈ (s1 ∪ s2) ⇔ (x ∈ s1 ∨ x ∈ s2)
Lists
Lists are similar to sets, except that an element can appear multiple times in a list and the
elements are ordered. The constant list is Nil, which has no elements. Cons, Append, First, and
Rest are the functions; Find is the predicate.
The empty list is []. The term Cons(x, y), where y is a nonempty list, is written [x|y].
The term Cons(x, Nil) is written as [x].
A list of several elements, such as [A, B, C], corresponds to the nested term Cons(A, Cons(B,
Cons(C, Nil))).
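A small Python sketch of this Cons representation (the function names mirror the text; the tuple encoding is an assumption of this sketch):

```python
# A sketch of the Cons list representation: Nil is the empty list and
# Cons(x, y) is written [x|y] in the text.
Nil = None

def cons(x, y):
    return (x, y)

# [A, B, C] corresponds to Cons(A, Cons(B, Cons(C, Nil))):
abc = cons('A', cons('B', cons('C', Nil)))

def first(lst):        # First([x|y]) = x
    return lst[0]

def rest(lst):         # Rest([x|y]) = y
    return lst[1]

def find(x, lst):      # Find is the membership predicate
    while lst is not Nil:
        if first(lst) == x:
            return True
        lst = rest(lst)
    return False

print(find('B', abc))  # True
```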
SITUATION CALCULUS
The Situation Calculus is a logic formalism designed for representing and reasoning
about dynamical domains.
o McCarthy, Hayes 1969
o Reiter 1991
In first-order logic, sentences are either true or false and stay that way; nothing
corresponds to any sort of change.
SitCalc represents changing scenarios as a set of second-order logic (SOL) formulae.
A domain is encoded in SOL by three kinds of formulae:
o Action precondition axioms and action effects axioms
o Successor state axioms, one for each fluent
o The foundational axioms of the situation calculus
Situation Calculus : An Example
World:
o robot
o items
o locations (x,y)
o moves around the world
o picks up or drops items
o some items are too heavy for the robot to pick up
o some items are fragile so that they break when they are dropped
o robot can repair any broken item that it is holding
Actions
o move(x, y): robot is moving to a new location (x, y)
o pickup(o): robot picks up an object o
o drop(o): robot drops the object o that it holds
Situations
o Initial situation S0: no actions have yet occurred
o A new situation, resulting from the performance of an action a in current situation
s, is denoted using the function symbol do(a, s).
do(move(2, 3), S0): denotes the new situation after the performance of
action move(2, 3) in initial situation S0.
do(pickup(Ball ), do(move(2, 3), S0))
do(a, s) = do(a′, s′) ⇔ a = a′ ∧ s = s′
Fluents: “properties of the world”
o Relational fluents
Statements whose truth value may change
They take a situation as a final argument
is_carrying(o, s): robot is carrying object o in situation s
o E.g. Suppose that the robot initially carries nothing
is_carrying(Ball, S0) : FALSE
is_carrying(Ball, do(pickup(Ball ), S0)) : TRUE
Action Preconditions Axioms
o Some actions may not be executable in a given situation
o Poss(a,s): special binary predicate
denotes the executability of action a in situation s
o Examples:
o Poss(drop(o), s) ↔ is_carrying(o, s)
o Poss(pickup(o), s) ↔ (∀z ¬is_carrying(z, s)) ∧ ¬heavy(o)
Action Effects Axioms
o Specify the effects of an action on the fluents
o Examples:
o Poss(pickup(o), s) → is_carrying(o, do(pickup(o), s))
o Poss(drop(o), s) ∧ fragile(o) → broken(o, do(drop(o), s))
o Is that enough? No, because of the frame problem
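To make the term structure concrete, here is a small Python sketch that represents situations as nested do(action, situation) terms and evaluates the is_carrying fluent; the frame behavior (fluents persist across unrelated actions) is hand-coded, which is exactly what successor-state axioms capture declaratively:

```python
# A sketch of situation-calculus terms: the initial situation S0 and
# nested do(action, situation) terms, with a successor-state style
# evaluation of the is_carrying fluent.

S0 = "S0"

def do(action, situation):
    return (action, situation)        # do(a, s) as a nested term

def pickup(o):
    return ("pickup", o)

def drop(o):
    return ("drop", o)

def is_carrying(o, s):
    """True iff the robot carries o in situation s (it carries nothing in S0)."""
    if s == S0:
        return False
    action, prev = s
    if action == ("pickup", o):       # picking o up makes the fluent true
        return True
    if action == ("drop", o):         # dropping o makes it false
        return False
    return is_carrying(o, prev)       # otherwise the fluent persists (frame)

s1 = do(pickup("Ball"), S0)
print(is_carrying("Ball", S0))        # False
print(is_carrying("Ball", s1))        # True
print(is_carrying("Ball", do(("move", (2, 3)), s1)))  # True: persists across move
```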
ONTOLOGY
Ontology is the study of what exists. Representing the abstract concepts of existence is
sometimes called ontological engineering; it is related to the knowledge engineering process.
In "toy" domains, the choice of representation is not that important; it is easy to come up
with a consistent vocabulary.
Real-world problems such as shopping on the Internet or controlling a robot in a changing
physical environment require more general and flexible representations. Concepts such as
actions, time, physical objects, and beliefs occur in many different domains, so we must
represent them in a general way.
Figure 3.9 The upper ontology of the world
The general framework of concepts is called an upper ontology, because of the
convention of drawing graphs with the general concepts at the top and the more specific
concepts below them, as in above Figure 3.9.
Characteristics of General purpose ontology.
1. A general-purpose ontology should be applicable in more or less any special-purpose domain.
2. In any demanding domain, different areas of knowledge must be unified.
Forward chaining is one of the two main methods of reasoning when using an inference
engine and can be described logically as repeated application of modus ponens. Forward
chaining is a popular implementation strategy for expert systems, business and production rule
systems. The opposite of forward chaining is backward chaining.
Forward chaining starts with the available data and uses inference rules to extract more
data (from an end user, for example) until a goal is reached. An inference engine using forward
chaining searches the inference rules until it finds one where the antecedent (If clause) is known
to be true. When such a rule is found, the engine can conclude, or infer, the
consequent (Then clause), resulting in the addition of new information to its data.
Inference engines will iterate through this process until a goal is reached.
Example 1: suppose that the goal is to conclude the color of a pet named Fritz, given that he
croaks and eats flies, and that the rule base contains the following four rules:
1. If X croaks and X eats flies - Then X is a frog
2. If X chirps and X sings - Then X is a canary
3. If X is a frog - Then X is green
4. If X is a canary - Then X is yellow
Let us illustrate forward chaining by following the pattern of a computer as it evaluates the rules.
Assume the following facts:
Fritz croaks
Fritz eats flies
With forward reasoning, the inference engine can derive that Fritz is green in a series of steps:
1. Since the base facts indicate that "Fritz croaks" and "Fritz eats flies", the antecedent of rule #1
is satisfied by substituting Fritz for X, and the inference engine concludes:
Fritz is a frog
2. The antecedent of rule #3 is then satisfied by substituting Fritz for X, and the inference engine
concludes:
Fritz is green
The name "forward chaining" comes from the fact that the inference engine starts with
the data and reasons its way to the answer, as opposed to backward chaining, which works the
other way around. In the derivation, the rules are used in the opposite order as compared
to backward chaining. In this example, rules #2 and #4 were not used in determining that Fritz is
green.
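A minimal Python sketch of this forward-chaining loop on the Fritz rule base follows (the rules are pre-instantiated with Fritz for X, so no unification is needed in this sketch):

```python
# A sketch of forward chaining over the Fritz rule base: repeatedly fire
# any rule whose antecedents are all known facts, adding its consequent,
# until the goal is derived or nothing new can be inferred.

rules = [
    ({"Fritz croaks", "Fritz eats flies"}, "Fritz is a frog"),    # rule 1
    ({"Fritz chirps", "Fritz sings"},      "Fritz is a canary"),  # rule 2
    ({"Fritz is a frog"},                  "Fritz is green"),     # rule 3
    ({"Fritz is a canary"},                "Fritz is yellow"),    # rule 4
]

def forward_chain(facts, rules, goal):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)      # infer the Then-clause
                changed = True
                if consequent == goal:
                    return True
    return goal in facts

print(forward_chain({"Fritz croaks", "Fritz eats flies"}, rules,
                    "Fritz is green"))     # True
```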
Example:2
The law says that it is a crime for an American to sell weapons to hostile nations.
The country Nono, an enemy of America, has some missiles, and all of its missiles were
sold to it by Colonel West, who is American.
To prove:
“West is a criminal”.
Proof :
First, represent these facts as first-order definite clauses. The forward-chaining algorithm
solves this problem.
Step 1: "it is a crime for an American to sell weapons to hostile nations":
American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x)
Step 2: "Nono … has some missiles". The sentence Ǝx Owns(Nono, x) ∧ Missile(x) is
transformed into two definite clauses by Existential Instantiation:
Owns(Nono, M1) and Missile(M1)
where M1 is a new constant.
Step 3: "… all of its missiles were sold to it by Colonel West":
Missile(x) ∧ Owns(Nono, x) ⇒ Sells(West, x, Nono)
Step 4: Missiles are weapons:
Missile(x) ⇒ Weapon(x)
Step 5: An enemy of America counts as "hostile":
Enemy(x, America) ⇒ Hostile(x)
Step 6: West, who is American …
American(West)
Step 7: The country Nono, an enemy of America …
Enemy(Nono, America)
This knowledge base contains no function symbols and is therefore an instance of the
class of Datalog knowledge bases, that is, sets of first-order definite clauses with no function
symbols. The diagrammatic representation of forward chaining is shown in Figure 3.10 below.
BACKWARD CHAINING
Backward chaining works in the opposite direction: it starts from the goal and reasons backward
through the rules to the known facts. For example, suppose a new pet, Fritz, is delivered in an
opaque box along with two facts about
Fritz:
Fritz croaks
Fritz eats flies
The goal is to decide whether Fritz is green, based on a rule base containing the following four
rules:
1. If X croaks and X eats flies – Then X is a frog
2. If X chirps and X sings – Then X is a canary
3. If X is a frog – Then X is green
4. If X is a canary – Then X is yellow
With backward reasoning, an inference engine can determine whether Fritz is green in
four steps. To start, the query is phrased as a goal assertion that is to be proved: "Fritz is
green".
1. Fritz is substituted for X in rule #3 to see if its consequent matches the goal, so
rule #3 becomes:
If Fritz is a frog – Then Fritz is green
Since the consequent matches the goal ("Fritz is green"), the rules engine now
needs to see if the antecedent ("If Fritz is a frog") can be proved. The antecedent
therefore becomes the new goal:
Fritz is a frog
2. Again substituting Fritz for X, rule #1 becomes:
If Fritz croaks and Fritz eats flies – Then Fritz is a frog
Since the consequent matches the current goal ("Fritz is a frog"), the inference
engine now needs to see if the antecedent ("If Fritz croaks and eats flies") can be proved.
The antecedent therefore becomes the new goal:
Fritz croaks and Fritz eats flies
3. Since this goal is a conjunction of two statements, the inference engine breaks it into
two sub-goals, both of which must be proved:
Fritz croaks
Fritz eats flies
4. To prove both of these sub-goals, the inference engine sees that both of these sub-goals
were given as initial facts. Therefore, the conjunction is true:
Fritz croaks and Fritz eats flies; therefore the antecedent of rule #1 is true and the
consequent must be true:
Fritz is a frog
Therefore the antecedent of rule #3 is true and the consequent must be true:
Fritz is green
This derivation therefore allows the inference engine to prove that Fritz is green. Rules #2 and #4
were not used.
Difference
Backward chaining (a la Prolog) is more like finding what initial conditions form a path
to your goal. At a very basic level it is a backward search from your goal to find conditions that
will fulfil it.
Backward chaining is used for interrogative applications (finding items that fulfil certain
criteria) - one commercial example of a backward chaining application might be finding which
insurance policies are covered by a particular reinsurance contract.
Forward chaining (a la CLIPS) matches conditions and then generates inferences from
those conditions. These conditions can in turn match other rules. Basically, this takes a set of
initial conditions and then draws all inferences it can from those conditions.
Algorithm
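A minimal Python sketch of the backward-chaining procedure on the same Fritz rule base (again pre-instantiated for Fritz; the rule base here is acyclic, so no loop check is included):

```python
# A sketch of backward chaining: to prove a goal, either find it among
# the facts or find a rule whose consequent is the goal and recursively
# prove every antecedent (a conjunction of sub-goals).

rules = [
    ({"Fritz croaks", "Fritz eats flies"}, "Fritz is a frog"),    # rule 1
    ({"Fritz chirps", "Fritz sings"},      "Fritz is a canary"),  # rule 2
    ({"Fritz is a frog"},                  "Fritz is green"),     # rule 3
    ({"Fritz is a canary"},                "Fritz is yellow"),    # rule 4
]
facts = {"Fritz croaks", "Fritz eats flies"}

def backward_chain(goal):
    if goal in facts:                       # sub-goal given as an initial fact
        return True
    for antecedents, consequent in rules:
        if consequent == goal:              # rule whose Then-clause matches the goal
            # the antecedents become the new sub-goals
            if all(backward_chain(a) for a in antecedents):
                return True
    return False

print(backward_chain("Fritz is green"))     # True, via rules 3 and 1
print(backward_chain("Fritz is yellow"))    # False: Fritz neither chirps nor sings
```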
RESOLUTION
In 1930, the Austrian logician Kurt Gödel proved the first completeness theorem
for first-order logic, showing that any entailed sentence has a finite proof. In 1931, Gödel
proved an even more famous incompleteness theorem.
The theorem states that a logical system that includes the principle of induction
(without which very little of discrete mathematics can be constructed) is necessarily
incomplete. Hence, there are sentences that are entailed but have no finite proof
within the system.
Resolution-based theorem provers have been applied widely to derive
mathematical theorems, including several for which no proof was known
previously. Theorem provers have also been used to verify hardware designs and
to generate logically correct programs, among other applications.
The main topics in this section are:
• Conjunctive normal form for first-order logic
• The resolution inference rule
• Completeness of resolution
• Dealing with equality
• Resolution strategies
• Theorem provers
Example
This makes use of Skolemization and involves clauses that are not definite clauses. This
results in a somewhat more complex proof structure. In English, the problem is as follows:
Example:
1. Jack owns a dog.
2. Every dog owner is an animal lover.
3. No animal lover kills an animal.
4. Either Jack or Curiosity killed the cat, who is named Tuna.
5. Did Curiosity kill the cat?
Conjunctive Normal Form
Every sentence of first-order logic can be converted into an inferentially equivalent conjunctive
normal form (CNF) sentence: a conjunction of clauses, each of which is a disjunction of literals.
Resolution Strategies
Unit resolution: prefer resolutions in which one of the clauses is a single literal (a unit clause);
this yields shorter sentences.
• Set of support: identify a subset of the KB (hopefully small); every resolution step takes a
clause from the set of support, resolves it with another sentence, and adds the result to the set of support.
• Input resolution: always combine a sentence from the query or KB with another sentence.
Linear resolution: resolve P and Q if P is in the original KB or is an ancestor of Q in the proof
tree.
• Subsumption: eliminate all sentences more specific than a sentence already in the KB
Demodulation: use a unit equality clause x = y to rewrite terms in other clauses.
Paramodulation: a more general rule for equality that applies to non-unit equality clauses as well.
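As an illustration of the resolution inference rule at the propositional level, here is a small Python sketch of resolution by refutation over clause sets (the clause encoding, with '-' marking negation, is an assumption of this sketch):

```python
# A sketch of propositional resolution by refutation: to show KB ╞ α,
# add ¬α to the CNF clauses and try to derive the empty clause. Clauses
# are frozensets of literals; a literal is a string, negation prefixed '-'.

def negate(lit):
    return lit[1:] if lit.startswith('-') else '-' + lit

def resolve(ci, cj):
    """All resolvents of two clauses (one complementary pair removed)."""
    out = []
    for lit in ci:
        if negate(lit) in cj:
            out.append((ci - {lit}) | (cj - {negate(lit)}))
    return out

def pl_resolution(clauses):
    clauses = set(clauses)
    while True:
        new = set()
        for ci in clauses:
            for cj in clauses:
                if ci != cj:
                    for r in resolve(ci, cj):
                        if not r:              # empty clause: contradiction
                            return True
                        new.add(frozenset(r))
        if new <= clauses:                     # nothing new: no contradiction
            return False
        clauses |= new

# KB in CNF: B11 ⇔ (P12 ∨ P21) and ¬B11. Query ¬P12: add its negation P12.
kb = [frozenset({'-B11', 'P12', 'P21'}),   # B11 ⇒ (P12 ∨ P21)
      frozenset({'-P12', 'B11'}),          # P12 ⇒ B11
      frozenset({'-P21', 'B11'}),          # P21 ⇒ B11
      frozenset({'-B11'})]
print(pl_resolution(kb + [frozenset({'P12'})]))   # True: KB ╞ ¬P1,2
```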
TRUTH MAINTENANCE SYSTEM
Many of the inferences drawn by a knowledge representation system will have only
default status, rather than being absolutely certain. Inevitably, some of these inferred facts will
turn out to be wrong and will have to be retracted in the face of new information. This process is
called belief revision.
Truth Maintenance Systems (TMS) have been developed as a means of implementing
Non-Monotonic Reasoning Systems.
Basically, TMSs:
all do some form of dependency-directed backtracking;
assertions are connected via a network of dependencies.
Justification-Based Truth Maintenance Systems (JTMS)
This is a simple TMS in that it does not know anything about the structure of the
assertions themselves.
Each supported belief (assertion) has a justification.
Each justification has two parts:
o An IN-List -- which supports beliefs held.
o An OUT-List -- which supports beliefs not held.
An assertion is connected to its justification by an arrow.
One assertion can feed another justification thus creating the network.
Assertions may be labelled with a belief status.
An assertion is valid if every assertion in the IN-List is believed and none in the OUT-
List are believed.
An assertion is non-monotonic if the OUT-List is not empty or if any assertion in the IN-
List is non-monotonic.
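A small Python sketch of this validity check, propagating belief labels through IN-lists and OUT-lists (the example network is invented for illustration):

```python
# A sketch of the JTMS validity check described above: an assertion is
# believed (IN) if some justification has all its IN-list nodes believed
# and all its OUT-list nodes not believed.

# justifications: assertion -> list of (in_list, out_list)
justifications = {
    "Q": [({"P"}, set())],          # Q is justified if P is believed
    "R": [({"Q"}, {"S"})],          # R holds if Q is IN and S is OUT
}
premises = {"P"}                    # assertions believed outright

def believed(node, seen=frozenset()):
    if node in premises:
        return True
    if node in seen:                # guard against circular justifications
        return False
    for in_list, out_list in justifications.get(node, []):
        if (all(believed(n, seen | {node}) for n in in_list)
                and not any(believed(n, seen | {node}) for n in out_list)):
            return True
    return False

print(believed("R"))   # True: Q is IN (via P) and S is OUT
# Retracting a belief forces re-labelling (dependency-directed):
premises.discard("P")
print(believed("R"))   # False: the justification for Q no longer holds
```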
Logic-Based Truth Maintenance Systems (LTMS)
Similar to JTMS except:
Nodes (assertions) assume no relationships among them except ones explicitly stated in
justifications.
A JTMS can represent P and ¬P simultaneously; an LTMS would signal a contradiction
here.
If this happens, the network has to be reconstructed.
Assumption-Based Truth Maintenance Systems (ATMS)
JTMS and LTMS pursue a single line of reasoning at a time and backtrack (dependency-
directed) when needed -- depth first search.
ATMS maintain alternative paths in parallel -- breadth-first search
Backtracking is avoided at the expense of maintaining multiple contexts.
However, as reasoning proceeds, contradictions arise and the ATMS can be pruned:
o Simply find assertions with no valid justification.