21CSC206T Unit-4
20/02/2025 1
Knowledge and Reasoning
Table of Contents
• Unit-4
• Knowledge Representation – Knowledge based agents – The Wumpus world –
Propositional Logic - syntax, semantics and knowledge base building - inferences –
reasoning patterns in propositional logic – predicate logic – representing facts in logic:
Syntax and semantics – Unification – Unification Algorithm - Knowledge representation
using rules - Knowledge representation using semantic nets - Knowledge representation
using frames - Inferences - Uncertain Knowledge and reasoning methods.
Knowledge Representation & Reasoning
• The second most important concept in AI
• If we are going to act rationally in our environment, then we must have some way of
describing that environment and drawing inferences from that representation.
• how do we describe what we know about the world ?
• how do we describe it concisely ?
• how do we describe it so that we can get hold of the right piece of knowledge when
we need it ?
• how do we generate new pieces of knowledge ?
• how do we deal with uncertain knowledge ?
Knowledge Representation & Reasoning
Knowledge
Declarative Procedural
• Declarative knowledge deals with factual questions (e.g., What is the capital of India?)
• Procedural knowledge deals with “How”
• Procedural knowledge can be embedded in declarative knowledge
Planning
Given a set of goals, construct a sequence of actions that achieves
those goals:
• often very large search space
• but most parts of the world are independent of most other
parts
• often start with goals and connect them to actions
• no necessary connection between order of planning and order
of execution
• what happens if the world changes as we execute the plan
and/or our actions don’t produce the expected results?
Learning
• If a system is going to act truly appropriately, then it must
be able to change its actions in the light of experience:
• how do we generate new facts from old ?
• how do we generate new concepts ?
• how do we learn to distinguish different situations in
new environments ?
What is knowledge representation?
Knowledge and Reasoning
Table of Contents
• Knowledge and reasoning-Approaches and issues of knowledge reasoning-Knowledge
base agents
• Logic Basics-Logic-Propositional logic-syntax ,semantics and inferences-Propositional
logic- Reasoning patterns
• Unification and Resolution-Knowledge representation using rules-Knowledge
representation using semantic nets
• Knowledge representation using frames-Inferences-
• Uncertain Knowledge and reasoning-Methods-Bayesian probability and belief network
• Probabilistic reasoning-Probabilistic reasoning over time-Probabilistic reasoning over
time
• Other uncertain techniques-Data mining-Fuzzy logic-Dempster-Shafer theory
A KNOWLEDGE-BASED AGENT
• A knowledge-based agent includes a knowledge base and an inference system.
• A knowledge base is a set of representations of facts of the world.
• Each individual representation is called a sentence.
• The sentences are expressed in a knowledge representation language.
• The agent operates as follows:
1. It TELLs the knowledge base what it perceives.
2. It ASKs the knowledge base what action it should perform.
3. It performs the chosen action.
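The TELL/ASK cycle above can be sketched as a toy Python skeleton. This is only a sketch: the set-based KB and the trivial "glitter → grab" action policy are invented placeholders, not the slides' design; a real agent would back ASK with an inference mechanism.

```python
# Toy sketch of the TELL/ASK cycle (the set-based KB and the trivial
# action policy are invented placeholders, not a real inference system).
class KnowledgeBase:
    def __init__(self):
        self.sentences = set()

    def tell(self, sentence):
        """TELL: store a sentence (a representation of a fact) in the KB."""
        self.sentences.add(sentence)

    def ask(self, query):
        """ASK: stand-in for inference -- here, plain membership."""
        return query in self.sentences

class KnowledgeBasedAgent:
    def __init__(self):
        self.kb = KnowledgeBase()

    def step(self, percept):
        self.kb.tell(("percept", percept))                      # 1. TELL what it perceives
        action = "grab" if percept == "glitter" else "forward"  # 2. ASK what to do (trivial policy)
        self.kb.tell(("action", action))                        # record the choice, then
        return action                                           # 3. perform the chosen action

agent = KnowledgeBasedAgent()
print(agent.step("glitter"))  # grab
```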
Requirements for a Knowledge-Based Agent
1. “what it already knows” [McCarthy '59]
A knowledge base of beliefs.
2. “it must first be capable of being told” [McCarthy '59]
A way to put new beliefs into the knowledge base.
3. “automatically deduces for itself a sufficiently wide class of
immediate consequences” [McCarthy '59]
A reasoning mechanism to derive new beliefs from ones already
in the knowledge base.
ARCHITECTURE OF A KNOWLEDGE-BASED
AGENT
• Knowledge Level.
• The most abstract level: describe agent by saying what it knows.
• Example: A taxi agent might know that the Golden Gate Bridge connects San
Francisco with Marin County.
• Logical Level.
• The level at which the knowledge is encoded into sentences.
• Example: Links(GoldenGateBridge, SanFrancisco, MarinCounty).
• Implementation Level.
• The physical representation of the sentences in the logical level.
• Example: '(links goldengatebridge sanfrancisco marincounty)
THE WUMPUS WORLD ENVIRONMENT
• The Wumpus computer game
• The agent explores a cave consisting of rooms connected by passageways.
• Lurking somewhere in the cave is the Wumpus, a beast that eats any agent that
enters its room.
• Some rooms contain bottomless pits that trap any agent that wanders into the
room.
• Occasionally, there is a heap of gold in a room.
• The goal is to collect the gold and exit the world without being eaten
A TYPICAL WUMPUS WORLD
• The agent always starts in the
field [1,1].
• The task of the agent is to
find the gold, return to the
field [1,1] and climb out of the
cave.
AGENT IN A WUMPUS WORLD: PERCEPTS
• The agent perceives
• a stench in the square containing the Wumpus and in the adjacent squares (not
diagonally)
• a breeze in the squares adjacent to a pit
• a glitter in the square where the gold is
• a bump, if it walks into a wall
• a woeful scream everywhere in the cave, if the wumpus is killed
• The percepts are given as a five-symbol list. If there is a stench and a breeze, but no
glitter, no bump, and no scream, the percept is
[Stench, Breeze, None, None, None]
WUMPUS WORLD ACTIONS
• go forward
• turn right 90 degrees
• turn left 90 degrees
• grab: Pick up an object that is in the same square as the agent
• shoot: Fire an arrow in a straight line in the direction the agent is facing. The arrow
continues until it either hits and kills the wumpus or hits the outer wall. The agent
has only one arrow, so only the first Shoot action has any effect
• climb is used to leave the cave. This action is only effective in the start square
• die: This action automatically and irretrievably happens if the agent enters a square
with a pit or a live wumpus
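The percept rules above can be sketched as a small Python function that computes the five-symbol percept list for a square, given assumed wumpus, pit, and gold positions (a toy model of my own; the adjacency test encodes the slides' "not diagonally" rule, and Bump/Scream are assumed absent since they depend on the last action):

```python
# Sketch: computing the five-symbol percept list for a square, given
# assumed positions of the wumpus, pits, and gold (toy 4x4 world).
def adjacent(a, b):
    (x1, y1), (x2, y2) = a, b
    return abs(x1 - x2) + abs(y1 - y2) == 1  # orthogonal only, no diagonals

def percept(square, wumpus, pits, gold):
    stench = "Stench" if square == wumpus or adjacent(square, wumpus) else "None"
    breeze = "Breeze" if any(adjacent(square, p) for p in pits) else "None"
    glitter = "Glitter" if square == gold else "None"
    # Bump and Scream depend on the last action/event; assumed absent here.
    return [stench, breeze, glitter, "None", "None"]

print(percept((1, 2), wumpus=(1, 3), pits=[(3, 1)], gold=(2, 3)))
# ['Stench', 'None', 'None', 'None', 'None']
```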
ILLUSTRATIVE EXAMPLE: WUMPUS WORLD
•Performance measure
• gold +1000,
• death -1000
(falling into a pit or being eaten by the wumpus)
• -1 per step, -10 for using the arrow
•Environment
• Rooms / squares connected by doors.
• Squares adjacent to wumpus are smelly
• Squares adjacent to pit are breezy
• Glitter iff gold is in the same square
• Shooting kills wumpus if you are facing it
• Shooting uses up the only arrow
• Grabbing picks up gold if in same square
• Releasing drops the gold in same square
• Randomly generated at start of game. Wumpus only senses current room.
•Sensors: Stench, Breeze, Glitter, Bump, Scream [perceptual inputs]
•Actuators: Left turn, Right turn, Forward, Grab, Release, Shoot
WUMPUS WORLD CHARACTERIZATION
Fully Observable: No - only local perception
Discrete: Yes
EXPLORING A WUMPUS WORLD
The knowledge base of the agent
consists of the rules of the
Wumpus world plus the percept
“nothing” in [1,1]
Boolean percept
feature values:
<0, 0, 0, 0, 0>
EXPLORING A WUMPUS WORLD
[Figure: the agent's maps of the 4x4 grid at T=0 and T=1 - V marks visited
squares, A/B the agent's square with a breeze, P? possible pit locations;
percept shown: Stench, none, none, none, none]
EXPLORING A WUMPUS WORLD
We reasoned about the possible states the Wumpus world can be in,
given our percepts and our knowledge of the rules of the Wumpus
world, i.e., the content of KB at T=3.
What follows is what holds true in all those worlds that satisfy what is
known at that time T=3 about the particular Wumpus world we are in.
Example property: P_in_(3,1)
Entailment: Models(KB) ⊆ Models(P_in_(3,1))
SUMMARY OF KNOWLEDGE BASED AGENTS
• Intelligent agents need knowledge about the world for making good decisions.
• The knowledge of an agent is stored in a knowledge base in the form of sentences in
a knowledge representation language.
• A knowledge-based agent needs a knowledge base and an inference mechanism. It
operates by storing sentences in its knowledge base, inferring new sentences with the
inference mechanism, and using them to deduce which actions to take.
• A representation language is defined by its syntax and semantics, which specify the
structure of sentences and how they relate to the facts of the world.
• The interpretation of a sentence is the fact to which it refers. If this fact is part of the
actual world, then the sentence is true.
What is a Logic?
• A language with concrete rules
• No ambiguity in representation (may be other errors!)
• Allows unambiguous communication and processing
• Very unlike natural languages e.g. English
• Many ways to translate between languages
• A statement can be represented in different logics
• And perhaps differently in same logic
• Expressiveness of a logic
• How much can we say in this language?
• Not to be confused with logical reasoning
• Logics are languages, reasoning is a process (may use logic)
Syntax and Semantics
• Syntax
• Rules for constructing legal sentences in the logic
• Which symbols we can use (English: letters, punctuation)
• How we are allowed to combine symbols
• Semantics
• How we interpret (read) sentences in the logic
• Assigns a meaning to each sentence
• Example: “All lecturers are seven foot tall”
• A valid sentence (syntax)
• And we can understand the meaning (semantics)
• This sentence happens to be false (there is a counterexample)
Propositional Logic
• Syntax
• Propositions, e.g. “it is wet”
• Connectives: and, or, not, implies, iff (equivalent)
Predicate Logic
• Propositional logic combines atoms
• An atom contains no propositional connectives
• Have no structure (today_is_wet, john_likes_apples)
• Predicates allow us to talk about objects
• Properties: is_wet(today)
• Relations: likes(john, apples)
• True or false
• In predicate logic each atom is a predicate
• e.g. first order logic, higher-order logic
First Order Logic
• More expressive logic than propositional
• Used in this course (Lecture 6 on representation in FOL)
• Constants are objects: john, apples
• Predicates are properties and relations:
• likes(john, apples)
• Functions transform objects:
• likes(john, fruit_of(apple_tree))
• Variables represent any object: likes(X, apples)
• Quantifiers qualify values of variables
• True for all objects (Universal): ∀X. likes(X, apples)
• Exists at least one object (Existential): ∃X. likes(X, apples)
Example: FOL Sentence
• “Every rose has a thorn”
• For all X
• if (X is a rose)
• then there exists Y
• (X has Y) and (Y is a thorn)
• In FOL: ∀X (rose(X) ⇒ ∃Y (has(X, Y) ∧ thorn(Y)))
Example: FOL Sentence
• “On Mondays and Wednesdays I go to John’s house for dinner”
Propositional logic
• Propositional logic consists of:
• The logical values true and false (T and F)
• Propositions: “Sentences,” which
• Are atomic (that is, they must be treated as indivisible units, with
no internal structure), and
• Have a single logical value, either true or false
• Operators, both unary and binary; when applied to logical values, yield
logical values
• The usual operators are and, or, not, and implies
Truth tables
• Logic, like arithmetic, has operators, which apply to one, two, or more
values (operands)
• A truth table lists the results for each possible arrangement of operands
• Order is important: x op y may or may not give the same result as y op x
• The rows in a truth table list all possible sequences of truth values for n
operands, and specify a result for each sequence
• Hence, there are 2^n rows in a truth table for n operands
Unary operators
• There are four possible unary operators:

X | Identity (X) | Negation ¬X | Constant true (T) | Constant false (F)
T |      T       |      F      |         T         |         F
F |      F       |      T      |         T         |         F

• Only negation is widely used (and has its own symbol, ¬, for the operation)
Binary operators
• There are sixteen possible binary operators:
X Y | outputs of the sixteen operators
T T | T T T T T T T T F F F F F F F F
T F | T T T T F F F F T T T T F F F F
F T | T T F F T T F F T T F F T T F F
F F | T F T F T F T F T F T F T F T F
• All these operators have names, but I haven’t tried to fit them in
• Only a few of these operators are normally used in logic
Useful binary operators
• Here are the binary operators that are traditionally used:
AND OR IMPLIES BICONDITIONAL
X Y X∧Y X∨Y X⇒Y X⇔Y
T T T T T T
T F F T F F
F T F T T F
F F F F T T
• Notice in particular that material implication (⇒) only approximately means the same as the English
word “implies”
• All the other operators can be constructed from a combination of these (along with unary not, ¬)
Logical expressions
• All logical expressions can be computed with some combination of and (∧),
or (∨), and not (¬) operators
• For example, logical implication can be computed this way:
X Y ¬X ¬X ∨ Y X⇒Y
T T F T T
T F F F F
F T T T T
F F T T T
• Notice that ¬X ∨ Y is equivalent to X ⇒ Y
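This equivalence can be verified by brute force in Python, enumerating all four rows (a small sketch; `implies` is defined directly from the implication truth table above):

```python
# Verifying the table above by enumeration: (not X) or Y agrees with
# X => Y (defined from its truth table) on all four rows.
from itertools import product

IMPLIES = {(True, True): True, (True, False): False,
           (False, True): True, (False, False): True}

def implies(x, y):
    return IMPLIES[(x, y)]

for x, y in product([True, False], repeat=2):
    assert ((not x) or y) == implies(x, y)
print("¬X ∨ Y ≡ X ⇒ Y on all rows")
```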
• The biconditional is the conjunction of two implications: X ⇔ Y ≡ (X ⇒ Y) ∧ (Y ⇒ X)

X Y | X ⇒ Y | Y ⇒ X | (X ⇒ Y) ∧ (Y ⇒ X) = X ⇔ Y
T T |   T   |   T   |   T
T F |   F   |   T   |   F
F T |   T   |   F   |   F
F F |   T   |   T   |   T
Another example
• Exclusive or (xor) is true if exactly one of its operands is true
X Y ¬X ¬Y ¬X ∧ Y X ∧ ¬Y (¬X∧Y)∨(X∧¬Y) X xor Y
T T F F F F F F
T F F T F T T T
F T T F T F T T
F F T T F F F F
Given a propositional formula, check if it is valid
• Example: (X ⇒ Y) ∨ (X ∨ Y)
• The formula is valid if its column is true (1) in every row:

X Y | X⇒Y | X∨Y | (X⇒Y)∨(X∨Y)
0 0 |  1  |  0  |  1
0 1 |  1  |  1  |  1
1 0 |  0  |  1  |  1
1 1 |  1  |  1  |  1

• Every row yields 1, so the formula is valid
World
• A world is a collection of propositions and logical expressions relating those
propositions
• Example:
• Propositions: JohnLovesMary, MaryIsFemale, MaryIsRich
• Expressions:
MaryIsFemale ∧ MaryIsRich ⇒ JohnLovesMary
• A proposition “says something” about the world, but since it is atomic (you
can’t look inside it to see component parts), propositions tend to be very
specialized and inflexible
Models
A model is an assignment of a truth value to each proposition, for example:
• JohnLovesMary: T, MaryIsFemale: T, MaryIsRich: F
• An expression is satisfiable if there is a model for which the expression is true
• For example, the above model satisfies the expression
MaryIsFemale ∧ MaryIsRich ⇒ JohnLovesMary
• An expression is valid if it is satisfied by every model
• This expression is not valid:
MaryIsFemale ∧ MaryIsRich ⇒ JohnLovesMary
because it is not satisfied by this model:
JohnLovesMary: F, MaryIsFemale: T, MaryIsRich: T
• But this expression is valid:
MaryIsFemale ∧ MaryIsRich ⇒ MaryIsFemale
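These definitions can be checked by brute-force model enumeration (a sketch using the slide's proposition names; an expression is satisfiable if it is true in some model, valid if true in all):

```python
# Sketch: deciding satisfiability and validity by enumerating all
# models of the three propositions above.
from itertools import product

props = ["JohnLovesMary", "MaryIsFemale", "MaryIsRich"]

def models():
    """Yield every assignment of truth values to the propositions."""
    for values in product([True, False], repeat=len(props)):
        yield dict(zip(props, values))

def expr(m):  # MaryIsFemale ∧ MaryIsRich ⇒ JohnLovesMary
    return (not (m["MaryIsFemale"] and m["MaryIsRich"])) or m["JohnLovesMary"]

satisfiable = any(expr(m) for m in models())  # true in some model?
valid = all(expr(m) for m in models())        # true in every model?
print(satisfiable, valid)  # True False
```

As the slide argues, the expression is satisfiable but not valid: the model with MaryIsFemale and MaryIsRich true but JohnLovesMary false falsifies it.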
Inference rules in propositional logic
• Here are just a few of the rules you can apply when reasoning in propositional logic:
Implication elimination
• A particularly important rule allows you to get rid of
the implication operator, ⇒ :
• X ⇒ Y ≡ ¬X ∨ Y
• We will use this later on as a necessary tool for
simplifying logical expressions
• The symbol ≡ means “is logically equivalent to”
Conjunction elimination
• Another important rule for simplifying logical expressions
allows you to get rid of the conjunction (and) operator, ∧ :
• This rule simply says that if you have an and operator at the
top level of a fact (logical expression), you can break the
expression up into two separate facts:
• MaryIsFemale ∧ MaryIsRich
• becomes:
• MaryIsFemale
• MaryIsRich
Inference by computer
• To do inference (reasoning) by computer is basically a search process, taking logical expressions and applying
inference rules to them
• Which logical expressions to use?
• Which inference rules to apply?
• Usually you are trying to “prove” some particular statement
• Example:
• it_is_raining ∨ it_is_sunny
• it_is_sunny ⇒ I_stay_dry
• A => B = NOT A V B
• NOT(it_is_sunny) V I_stay_dry
• it_is_rainy ⇒ I_take_umbrella
• NOT(it_is_rainy) V I_take_umbrella
• I_take_umbrella ⇒ I_stay_dry
• NOT(I_take_umbrella) V I_stay_dry
• To prove: I_stay_dry
Reasoning Patterns
• Inference in propositional logic is NP-complete!
• However, inference in propositional logic shows
monotonicity:
• Adding more rules to a knowledge base does not
invalidate earlier inferences
Forward and backward reasoning
• Situation: You have a collection of logical expressions (premises), and
you are trying to prove some additional logical expression (the
conclusion)
• You can:
• Do forward reasoning: Start applying inference rules to the logical
expressions you have, and stop if one of your results is the
conclusion you want
• Do backward reasoning: Start from the conclusion you want, and
try to choose inference rules that will get you back to the logical
expressions you have
• With the tools we have discussed so far, neither is feasible
Example
• Given:
• it_is_raining ∨ it_is_sunny
• it_is_sunny ⇒ I_stay_dry
• it_is_raining ⇒ I_take_umbrella
• I_take_umbrella ⇒ I_stay_dry
• You can conclude:
• it_is_sunny ∨ it_is_raining
• I_take_umbrella ∨ it_is_sunny
• ¬I_stay_dry ⇒ I_take_umbrella
• Etc., etc. ... there are just too many things you can conclude!
Predicate calculus
• Predicate calculus is also known as “First Order Logic” (FOL)
• Predicate calculus includes:
• All of propositional logic
• Logical values true, false
• Variables x, y, a, b,...
• Connectives ¬, ⇒, ∧, ∨, ⇔
• Constants KingJohn, 2, Villanova,...
• Predicates Brother, >,...
• Functions Sqrt, MotherOf,...
• Quantifiers ∀, ∃
Constants, functions, and predicates
• A constant represents a “thing”--it has no truth value, and it
does not occur “bare” in a logical expression
• Examples: DavidMatuszek, 5, Earth, goodIdea
• Given zero or more arguments, a function produces a
constant as its value:
• Examples: motherOf(DavidMatuszek), add(2, 2),
thisPlanet()
• A predicate is like a function, but produces a truth value
• Examples: greatInstructor(DavidMatuszek),
isPlanet(Earth), greater(3, add(2, 2))
Universal quantification
• The universal quantifier, ∀, is read as “for each”
or “for every”
• Example: ∀x, x² ≥ 0 (for all x, x² is greater than or equal to zero)
• Typically, ⇒ is the main connective with ∀:
∀x, at(x,Villanova) ⇒ smart(x)
means “Everyone at Villanova is smart”
• Common mistake: using ∧ as the main connective with ∀:
∀x, at(x,Villanova) ∧ smart(x)
means “Everyone is at Villanova and everyone is smart”
• If there are no values satisfying the condition, the result is true
• Example: ∀x, isPersonFromMars(x) ⇒ smart(x) is true
Existential quantification
• The existential quantifier, ∃, is read “for some” or “there exists”
• Example: ∃x, x² < 0 (there exists an x such that x² is less than zero)
• Typically, ∧ is the main connective with ∃:
∃x, at(x,Villanova) ∧ smart(x)
means “There is someone who is at Villanova and is smart”
• Common mistake: using ⇒ as the main connective with ∃:
∃x, at(x,Villanova) ⇒ smart(x)
This is true if there is someone at Villanova who is smart...
...but it is also true if there is someone who is not at Villanova
By the rules of material implication, the result of F ⇒ T is T
Properties of quantifiers
• ∀x ∀y Loves(x,y) is the same as ∀y ∀x Loves(x,y)
• ∃x ∃y Loves(x,y) is the same as ∃y ∃x Loves(x,y)
Parentheses
• Parentheses are often used with quantifiers
• Unfortunately, everyone uses them differently, so don’t be upset at any
usage you see
• Examples:
• (∀x) person(x) ⇒ likes(x,iceCream)
• (∀x) (person(x) ⇒ likes(x,iceCream))
• (∀x) [ person(x) ⇒ likes(x,iceCream) ]
• ∀x, person(x) ⇒ likes(x,iceCream)
• ∀x (person(x) ⇒ likes(x,iceCream))
• I prefer parentheses that show the scope of the quantifier
• ∃x (x > 0) ∧ ∃x (x < 0)
More rules
• Now there are numerous additional rules we can apply!
• Here are two exceptionally important rules:
• ¬∀x, p(x) ⇒ ∃x, ¬p(x)
“If not every x satisfies p(x), then there exists an x that does not satisfy
p(x)”
• ¬∃x, p(x) ⇒ ∀x, ¬p(x)
“If there does not exist an x that satisfies p(x), then no x satisfies
p(x)”
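These two quantifier-negation rules can be spot-checked over a finite domain, where ∀ and ∃ become Python's all() and any() (a toy check under an assumed domain and predicate, not a proof):

```python
# Over a finite domain, the quantifier-negation rules reduce to the
# duality of Python's all() and any().
domain = range(-3, 4)          # assumed finite domain of discourse
p = lambda x: x >= 0           # an assumed predicate p(x)

lhs = not all(p(x) for x in domain)       # ¬∀x p(x)
rhs = any(not p(x) for x in domain)       # ∃x ¬p(x)
print(lhs, rhs)  # True True

lhs2 = not any(x > 10 for x in domain)    # ¬∃x (x > 10)
rhs2 = all(not (x > 10) for x in domain)  # ∀x ¬(x > 10)
print(lhs2, rhs2)  # True True
```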
• In any case, the search space is just too large to be feasible
• This was the case until 1965, when J. A. Robinson discovered resolution
Logic by computer was infeasible
• Why is logic so hard?
• You start with a large collection of facts (predicates)
• You start with a large collection of possible transformations (rules)
• Some of these rules apply to a single fact to yield a new fact
• Some of these rules apply to a pair of facts to yield a new fact
• So at every step you must:
• Choose some rule to apply
• Choose one or two facts to which you might be able to apply the rule
• If there are n facts
• There are n potential ways to apply a single-operand rule
• There are n * (n - 1) potential ways to apply a two-operand rule
• Add the new fact to your ever-expanding fact base
• The search space is huge!
The magic of resolution
• Here’s how resolution works:
• You transform each of your facts into a particular form, called a clause
(this is the tricky part)
• You apply a single rule, the resolution principle, to a pair of clauses
• Clauses are closed with respect to resolution--that is, when you
resolve two clauses, you get a new clause
• You add the new clause to your fact base
• So the number of facts you have grows linearly
• You still have to choose a pair of facts to resolve
• You never have to choose a rule, because there’s only one
The fact base
• A fact base is a collection of “facts,” expressed in predicate calculus, that are presumed to be true (valid)
• These facts are implicitly “anded” together
• Example fact base:
• seafood(X) ⇒ likes(John, X) (where X is a variable)
• seafood(shrimp)
• pasta(X) ⇒ ¬likes(Mary, X) (where X is a different variable)
• pasta(spaghetti)
• That is,
• (seafood(X) ⇒ likes(John, X)) ∧ seafood(shrimp) ∧
(pasta(Y) ⇒ ¬likes(Mary, Y)) ∧ pasta(spaghetti)
• Notice that we had to change some Xs to Ys
• The scope of a variable is the single fact in which it occurs
Clause form
• A clause is a disjunction ("or") of zero or more literals, some or all of
which may be negated
• Example:
sinks(X) ∨ dissolves(X, water) ∨ ¬denser(X, water)
• Notice that clauses use only “or” and “not”—they do not use “and,”
“implies,” or either of the quantifiers “for all” or “there exists”
• The impressive part is that any predicate calculus expression can be
put into clause form
• Existential quantifiers, ∃, are the trickiest ones
Unification
• From the pair of facts (not yet clauses, just facts):
• seafood(X) ⇒ likes(John, X) (where X is a variable)
• seafood(shrimp)
• We ought to be able to conclude
• likes(John, shrimp)
• We can do this by unifying the variable X with the constant shrimp
• This is the same “unification” as is done in Prolog
• This unification turns seafood(X) ⇒ likes(John, X) into
seafood(shrimp) ⇒ likes(John, shrimp)
• Together with the given fact seafood(shrimp), the final deductive
step is easy
The resolution principle
• Here it is:
• From X ∨ someLiterals
and ¬X ∨ someOtherLiterals
----------------------------------------------
conclude: someLiterals ∨ someOtherLiterals
• That’s all there is to it!
• Example:
• broke(Bob) ∨ well-fed(Bob)
¬broke(Bob) ∨ ¬hungry(Bob)
--------------------------------------
well-fed(Bob) ∨ ¬hungry(Bob)
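The rule can be sketched in a few lines of Python, representing clauses as frozensets of literal strings with "~" marking negation (an encoding I am assuming for illustration):

```python
# The resolution rule on propositional clauses: clauses are frozensets
# of literals; a literal is a string, with negation marked by "~".
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of the two clauses (one per clashing literal)."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

c1 = frozenset({"broke(Bob)", "well-fed(Bob)"})
c2 = frozenset({"~broke(Bob)", "~hungry(Bob)"})
print(resolve(c1, c2))  # one resolvent: {well-fed(Bob), ~hungry(Bob)}
```

Note that resolving P with ¬P ∨ Q yields Q, so modus ponens falls out as a special case.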
A common error
• You can only do one resolution at a time
• Example:
• broke(Bob) ∨ well-fed(Bob) ∨ happy(Bob)
¬broke(Bob) ∨ ¬hungry(Bob) ∨ ¬happy(Bob)
• You can resolve on broke to get:
• well-fed(Bob) ∨ happy(Bob) ∨ ¬hungry(Bob) ∨ ¬happy(Bob) ≡ T
• Or you can resolve on happy to get:
• broke(Bob) ∨ well-fed(Bob) ∨ ¬broke(Bob) ∨ ¬hungry(Bob) ≡ T
• Note that both legal resolutions yield a tautology (a trivially true statement, containing X
∨ ¬X), which is correct but useless
• But you cannot resolve on both at once to get:
• well-fed(Bob) ∨ ¬hungry(Bob)
Contradiction
• A special case occurs when the result of a resolution (the resolvent) is
empty, or “NIL”
• Example:
• hungry(Bob)
¬hungry(Bob)
----------------
NIL
• In this case, the fact base is inconsistent
• This will turn out to be a very useful observation in doing resolution
theorem proving
A first example
• “Everywhere that John goes, Rover goes. John is at school.”
• at(John, X) ⇒ at(Rover, X) (not yet in clause form)
• at(John, school) (already in clause form)
• We use implication elimination to change the first of these into clause
form:
• ¬at(John, X) ∨ at(Rover, X)
• at(John, school)
• We can resolve these on at(-, -), but to do so we have to unify X
with school; this gives:
• at(Rover, school)
Refutation resolution
• The previous example was easy because it had very few clauses
• When we have a lot of clauses, we want to focus our search on the
thing we would like to prove
• We can do this as follows:
• Assume that our fact base is consistent (we can’t derive NIL)
• Add the negation of the thing we want to prove to the fact base
• Show that the fact base is now inconsistent
• Conclude the thing we want to prove
Example of refutation resolution
• “Everywhere that John goes, Rover goes. John is at school. Prove that Rover is at
school.”
1. ¬at(John, X) ∨ at(Rover, X)
2. at(John, school)
3. ¬at(Rover, school) (this is the added clause)
• Resolve #1 and #3:
4. ¬at(John, X)
• Resolve #2 and #4:
5. NIL
• Conclude the negation of the added clause: at(Rover, school)
• This seems a roundabout approach for such a simple example, but it works well
for larger problems
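The refutation procedure can be sketched as a naive saturation loop over propositional clauses (same "~"-string encoding as before, an assumption for illustration; real provers use far better search strategies). The Rover example is shown here in propositionalized form:

```python
# Naive refutation-by-saturation sketch: resolve pairs of clauses until
# the empty clause (NIL) appears or no new clauses can be derived.
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    return [(c1 - {l}) | (c2 - {negate(l)}) for l in c1 if negate(l) in c2]

def refutes(clauses):
    """True iff NIL is derivable, i.e. the clause set is inconsistent."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolvents(a, b):
                    if not r:
                        return True      # derived NIL
                    new.add(frozenset(r))
        if new <= clauses:
            return False                 # saturated without NIL
        clauses |= new

# Propositionalized Rover example: rule, fact, and the negated goal.
kb = [frozenset({"~atJohnSchool", "atRoverSchool"}),
      frozenset({"atJohnSchool"}),
      frozenset({"~atRoverSchool"})]    # negation of "Rover is at school"
print(refutes(kb))  # True
```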
A second example
• Start with:
• it_is_raining ∨ it_is_sunny
• it_is_sunny ⇒ I_stay_dry
• it_is_raining ⇒ I_take_umbrella
• I_take_umbrella ⇒ I_stay_dry
• Convert to clause form:
1. it_is_raining ∨ it_is_sunny
2. ¬it_is_sunny ∨ I_stay_dry
3. ¬it_is_raining ∨ I_take_umbrella
4. ¬I_take_umbrella ∨ I_stay_dry
• To prove I stay dry, add the negated goal:
5. ¬I_stay_dry
• Proof:
6. (5, 2) ¬it_is_sunny
7. (6, 1) it_is_raining
8. (5, 4) ¬I_take_umbrella
9. (8, 3) ¬it_is_raining
10. (9, 7) NIL
▪ Therefore, ¬(¬I_stay_dry), i.e., I_stay_dry
Converting sentences to CNF
1. Eliminate all ↔ connectives
(P ↔ Q) ⇒ ((P → Q) ∧ (Q → P))
2. Eliminate all → connectives
(P → Q) ⇒ (¬P ∨ Q)
3. Reduce the scope of each negation symbol to a single predicate
¬¬P ⇒ P
¬(P ∨ Q) ⇒ ¬P ∧ ¬Q
¬(P ∧ Q) ⇒ ¬P ∨ ¬Q
¬(∀x)P ⇒ (∃x)¬P
¬(∃x)P ⇒ (∀x)¬P
4. Standardize variables: rename all variables so that each quantifier has its own
unique variable name
Converting sentences to clausal form Skolem
constants and functions
5. Eliminate existential quantification by introducing Skolem
constants/functions
(∃x)P(x) ⇒ P(c)
c is a Skolem constant (a brand-new constant symbol that is not used in any
other sentence)
(∀x)(∃y)P(x,y) ⇒ (∀x)P(x, f(x))
since ∃ is within the scope of a universally quantified variable, use a Skolem
function f to construct a new value that depends on the universally
quantified variable
f must be a brand-new function name not occurring in any other sentence in
the KB.
E.g., (∀x)(∃y)loves(x,y) ⇒ (∀x)loves(x,f(x))
In this case, f(x) specifies the person that x loves
Converting sentences to clausal form
6. Remove universal quantifiers by (1) moving them all to the left end; (2)
making the scope of each the entire sentence; and (3) dropping the
“prefix” part
Ex: (∀x)P(x) ⇒ P(x)
7. Put into conjunctive normal form (conjunction of disjunctions) using
distributive and associative laws
(P ∧ Q) ∨ R ⇒ (P ∨ R) ∧ (Q ∨ R)
(P ∨ Q) ∨ R ⇒ (P ∨ Q ∨ R)
8. Split conjuncts into separate clauses
9. Standardize variables so each clause contains only variable names that do
not occur in any other clause
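Steps 1-3 can be sketched in Python for the propositional case, with formulas as nested tuples such as ("->", "P", "Q") (an assumed encoding; quantifier steps 4-6 and the distribution/splitting steps 7-9 are omitted for brevity, and negations are assumed to be applied only after implications are eliminated):

```python
# Sketch of CNF-conversion steps 1-3 (propositional case).
def eliminate_implications(f):
    """Step 1-2: rewrite <-> and -> using not/or/and."""
    if isinstance(f, str):
        return f
    op, *args = f
    args = [eliminate_implications(a) for a in args]
    if op == "<->":
        p, q = args
        return ("and", ("or", ("not", p), q), ("or", ("not", q), p))
    if op == "->":
        p, q = args
        return ("or", ("not", p), q)
    return (op, *args)

def to_nnf(f):
    """Step 3: push negations down to atoms (double negation, De Morgan)."""
    if isinstance(f, str):
        return f
    op, *args = f
    if op == "not":
        g = args[0]
        if isinstance(g, str):
            return f                       # negation of an atom: done
        gop, *gargs = g
        if gop == "not":
            return to_nnf(gargs[0])        # ¬¬P => P
        if gop == "and":                   # ¬(P ∧ Q) => ¬P ∨ ¬Q
            return ("or", *[to_nnf(("not", a)) for a in gargs])
        if gop == "or":                    # ¬(P ∨ Q) => ¬P ∧ ¬Q
            return ("and", *[to_nnf(("not", a)) for a in gargs])
    return (op, *[to_nnf(a) for a in args])

print(to_nnf(eliminate_implications(("->", "P", ("and", "Q", "R")))))
# ('or', ('not', 'P'), ('and', 'Q', 'R'))
```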
An example
(∀x)(P(x) → ((∀y)(P(y) → P(f(x,y))) ∧ ¬(∀y)(Q(x,y) → P(y))))
2. Eliminate →
(∀x)(¬P(x) ∨ ((∀y)(¬P(y) ∨ P(f(x,y))) ∧ ¬(∀y)(¬Q(x,y) ∨ P(y))))
3. Reduce scope of negation
(∀x)(¬P(x) ∨ ((∀y)(¬P(y) ∨ P(f(x,y))) ∧(∃y)(Q(x,y) ∧ ¬P(y))))
4. Standardize variables
(∀x)(¬P(x) ∨ ((∀y)(¬P(y) ∨ P(f(x,y))) ∧(∃z)(Q(x,z) ∧ ¬P(z))))
5. Eliminate existential quantification
(∀x)(¬P(x) ∨((∀y)(¬P(y) ∨ P(f(x,y))) ∧(Q(x,g(x)) ∧ ¬P(g(x)))))
6. Drop universal quantification symbols
(¬P(x) ∨ ((¬P(y) ∨ P(f(x,y))) ∧(Q(x,g(x)) ∧ ¬P(g(x)))))
Example
7. Convert to conjunction of disjunctions
(¬P(x) ∨ ¬P(y) ∨ P(f(x,y))) ∧ (¬P(x) ∨ Q(x,g(x))) ∧
(¬P(x) ∨ ¬P(g(x)))
8. Create separate clauses
¬P(x) ∨ ¬P(y) ∨ P(f(x,y))
¬P(x) ∨ Q(x,g(x))
¬P(x) ∨ ¬P(g(x))
9. Standardize variables
¬P(x) ∨ ¬P(y) ∨ P(f(x,y))
¬P(z) ∨ Q(z,g(z))
¬P(w) ∨ ¬P(g(w))
Resolution
• Resolution is a sound and complete inference procedure for FOL
• Reminder: Resolution rule for propositional logic:
• P1 ∨ P2 ∨ ... ∨ Pn
• ¬P1 ∨ Q2 ∨ ... ∨ Qm
• Resolvent: P2 ∨ ... ∨ Pn ∨ Q2 ∨ ... ∨ Qm
• Examples
• P and ¬ P ∨ Q : derive Q (Modus Ponens)
• (¬ P ∨ Q) and (¬ Q ∨ R) : derive ¬ P ∨ R
• P and ¬ P : derive False [contradiction!]
• (P ∨ Q) and (¬ P ∨ ¬ Q) : derive True
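The propositional resolution rule above can be turned into a refutation procedure directly. The following is a minimal, unoptimised sketch (clause and literal encodings are choices of this illustration, not part of the slides): clauses are frozensets of literal strings, with '~P' denoting the negation of 'P'.

```python
# Propositional resolution by refutation: to show KB |= Q, add ~Q to the
# clause set and search for the empty clause (a contradiction).

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """All resolvents of two clauses."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def entails(kb, query):
    clauses = {frozenset(c) for c in kb} | {frozenset({negate(query)})}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:          # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:             # nothing new: no refutation exists
            return False
        clauses |= new

# P together with ~P ∨ Q entails Q (Modus Ponens as resolution)
print(entails([{'P'}, {'~P', 'Q'}], 'Q'))
```

The loop terminates because only finitely many clauses can be built from a finite set of literals; in full first-order logic (next slide) this guarantee is lost, which is why entailment there is only semidecidable.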
Resolution in first-order logic
• Given sentences
P1 ∨ ... ∨ Pn
Q1 ∨ ... ∨ Qm
• in conjunctive normal form:
• each Pi and Qi is a literal, i.e., a positive or negated predicate symbol with its
terms,
• if Pj and ¬Qk unify with substitution list θ, then derive the resolvent sentence:
subst(θ, P1 ∨... ∨ Pj-1 ∨ Pj+1 ... Pn ∨ Q1 ∨ …Qk-1 ∨ Qk+1 ∨... ∨ Qm)
• Example
• from clause P(x, f(a)) ∨ P(x, f(y)) ∨ Q(y)
• and clause ¬P(z, f(a)) ∨ ¬Q(z)
• derive resolvent P(z, f(y)) ∨ Q(y) ∨ ¬Q(z)
• using θ = {x/z}
Resolution refutation
• Given a consistent set of axioms KB and goal sentence Q, show that KB |=
Q
• Proof by contradiction: Add ¬Q to KB and try to prove false.
i.e., (KB |- Q) ↔ (KB ∧ ¬Q |- False)
• Resolution is refutation complete: it can establish that a given sentence Q
is entailed by KB, but can’t (in general) be used to generate all logical
consequences of a set of sentences
• Also, it cannot be used to prove that Q is not entailed by KB.
• Resolution won’t always give an answer since entailment is only
semidecidable
• And you can’t just run two proofs in parallel, one trying to prove Q and the other
trying to prove ¬Q, since KB might not entail either one
Refutation resolution proof tree

¬allergies(w) ∨ sneeze(w)          ¬cat(y) ∨ ¬allergic-to-cats(z) ∨ allergies(z)
        {w/z}
¬cat(y) ∨ sneeze(z) ∨ ¬allergic-to-cats(z)          cat(Felix)
        {y/Felix}
sneeze(z) ∨ ¬allergic-to-cats(z)          allergic-to-cats(Lise)
        {z/Lise}
sneeze(Lise)          ¬sneeze(Lise)   (negated query)
        {}
false
Unification
• Unification is a “pattern-matching” procedure
• Takes two atomic sentences, called literals, as input
• Returns “Failure” if they do not match and a substitution list, θ, if they do
• That is, unify(p,q) = θ means subst(θ, p) = subst(θ, q) for two atomic
sentences, p and q
• θ is called the most general unifier (mgu)
• All variables in the given two literals are implicitly universally
quantified
• To make literals match, replace (universally quantified) variables by
terms
Unification algorithm
procedure unify(p, q, θ)
Scan p and q left-to-right and find the first corresponding
terms where p and q “disagree” (i.e., p and q not equal)
If there is no disagreement, return θ (success!)
Let r and s be the terms in p and q, respectively,
where disagreement first occurs
If variable(r) then {
Let θ = union(θ, {r/s})
Return unify(subst(θ, p), subst(θ, q), θ)
} else if variable(s) then {
Let θ = union(θ, {s/r})
Return unify(subst(θ, p), subst(θ, q), θ)
} else return “Failure”
end
Unification: Remarks
• Unify is an efficient algorithm that returns the most
general unifier (mgu), i.e., the shortest-length substitution
list that makes the two literals match.
• In general, there is not a unique minimum-length
substitution list, but unify returns one of minimum length
• A variable can never be replaced by a term containing that
variable
Example: x/f(x) is illegal.
• This “occurs check” should be done in the above pseudo-
code before making the recursive calls
Unification examples
• Example:
• parents(x, father(x), mother(Bill))
• parents(Bill, father(Bill), y)
• {x/Bill, y/mother(Bill)}
• Example:
• parents(x, father(x), mother(Bill))
• parents(Bill, father(y), z)
• {x/Bill, y/Bill, z/mother(Bill)}
• Example:
• parents(x, father(x), mother(Jane))
• parents(Bill, father(y), mother(y))
• Failure
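The pseudo-code and examples above can be sketched in Python. This is an illustrative sketch under some representation assumptions: variables are lowercase strings, constants are capitalised strings, and compound terms are tuples such as ('father', 'x'); the occurs check from the remarks slide is included.

```python
# Unification with occurs check. Returns the mgu as a dict of bindings,
# or None on failure.

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def occurs(v, t, theta):
    """Does variable v occur in term t (under the bindings in theta)?"""
    if t == v:
        return True
    if is_var(t) and t in theta:
        return occurs(v, theta[t], theta)
    if isinstance(t, tuple):
        return any(occurs(v, arg, theta) for arg in t[1:])
    return False

def unify_var(v, t, theta):
    if v in theta:
        return unify(theta[v], t, theta)
    if is_var(t) and t in theta:
        return unify(v, theta[t], theta)
    if occurs(v, t, theta):            # occurs check: x/f(x) is illegal
        return None
    return {**theta, v: t}

def unify(p, q, theta=None):
    if theta is None:
        theta = {}
    if p == q:
        return theta
    if is_var(p):
        return unify_var(p, q, theta)
    if is_var(q):
        return unify_var(q, p, theta)
    if (isinstance(p, tuple) and isinstance(q, tuple)
            and len(p) == len(q) and p[0] == q[0]):   # same functor, arity
        for a, b in zip(p[1:], q[1:]):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None

# First example from the slide above:
print(unify(('parents', 'x', ('father', 'x'), ('mother', 'Bill')),
            ('parents', 'Bill', ('father', 'Bill'), 'y')))
```

On the slide's three examples this returns {'x': 'Bill', 'y': ('mother', 'Bill')}, then {'x': 'Bill', 'y': 'Bill', 'z': ('mother', 'Bill')} for the second, and None (failure) for the third.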
Resolution example
Practice example : Did Curiosity kill the cat
• Jack owns a dog. Every dog owner is an animal lover. No animal lover
kills an animal. Either Jack or Curiosity killed the cat, who is named
Tuna. Did Curiosity kill the cat?
• These can be represented as follows:
A. (∃x) Dog(x) ∧ Owns(Jack,x)
B. (∀x) ((∃y) Dog(y) ∧ Owns(x, y)) → AnimalLover(x)
C. (∀x) AnimalLover(x) → ((∀y) Animal(y) → ¬Kills(x,y))
D. Kills(Jack,Tuna) ∨ Kills(Curiosity,Tuna)
E. Cat(Tuna)
F. (∀x) Cat(x) → Animal(x) GOAL
G. Kills(Curiosity, Tuna)
• Convert to clause form
where D is a Skolem constant naming Jack's dog
A1. (Dog(D))
A2. (Owns(Jack,D))
B. (¬Dog(y), ¬Owns(x, y), AnimalLover(x))
C. (¬AnimalLover(a), ¬Animal(b), ¬Kills(a,b))
D. (Kills(Jack,Tuna), Kills(Curiosity,Tuna))
E. Cat(Tuna)
F. (¬Cat(z), Animal(z))
• Add the negation of query:
¬G: (¬Kills(Curiosity, Tuna))
• The resolution refutation proof
R1: ¬G, D, {} (Kills(Jack, Tuna))
R2: R1, C, {a/Jack, b/Tuna} (~AnimalLover(Jack),
~Animal(Tuna))
R3: R2, B, {x/Jack} (~Dog(y), ~Owns(Jack, y),
~Animal(Tuna))
R4: R3, A1, {y/D} (~Owns(Jack, D),
~Animal(Tuna))
R5: R4, A2, {} (~Animal(Tuna))
R6: R5, F, {z/Tuna} (~Cat(Tuna))
R7: R6, E, {} FALSE
• The proof tree
¬G D
{}
R1: K(J,T) C
{a/J,b/T}
R2: ¬AL(J) ∨ ¬A(T) B
{x/J}
R3: ¬D(y) ∨ ¬O(J,y) ∨ ¬A(T) A1
{y/D}
R4: ¬O(J,D), ¬A(T) A2
{}
R5: ¬A(T) F
{z/T}
R6: ¬C(T) E
{}
R7: FALSE
Knowledge and Reasoning
Table of Contents
• Knowledge and reasoning-Approaches and issues of knowledge reasoning-
Knowledge base agents
• Logic Basics-Logic-Propositional logic-syntax ,semantics and inferences-
Propositional logic- Reasoning patterns
• Unification and Resolution
• Knowledge representation using rules-Knowledge representation using semantic nets
• Knowledge representation using frames-Inferences-
• Uncertain Knowledge and reasoning-Methods-Bayesian probability and belief
network
• Probabilistic reasoning-Probabilistic reasoning over time
• Other uncertain techniques-Data mining-Fuzzy logic-Dempster-Shafer theory
Production Rules
• Condition-Action Pairs
• IF this condition (or premise or antecedent)
occurs,
THEN some action (or result, or conclusion,
or consequence) will (or should) occur
• IF the traffic light is red AND you have
stopped, THEN a right turn is OK
Production Rules
• Each production rule in a knowledge base represents
an autonomous chunk of expertise
• When combined and fed to the inference engine, the
set of rules behaves synergistically
• Rules can be viewed as a simulation of the cognitive
behaviour of human experts
• Rules represent a model of actual human behaviour
• Predominant technique used in expert systems, often in
conjunction with frames
Forms of Rules
• IF premise, THEN conclusion
• IF your income is high, THEN your chance of
being audited by the Inland Revenue is high
• Conclusion, IF premise
• Your chance of being audited is high, IF your
income is high
Forms of Rules
• Inclusion of ELSE
• IF your income is high, OR your deductions are unusual,
THEN your chance of being audited is high, OR ELSE
your chance of being audited is low
• More complex rules
• IF credit rating is high AND salary is more than £30,000,
OR assets are more than £75,000, AND pay history is not
"poor,"
THEN approve a loan up to £10,000, and list the loan in
category "B.”
• Action part may have more information: THEN "approve the loan" and "refer to an agent"
Characteristics of Rules
            First Part                      Second Part
Names       Premise                         Conclusion
            Antecedent                      Consequence
            Situation                       Action
            IF                              THEN
Nature      Conditions, similar to          Resolutions, similar to
            declarative knowledge           procedural knowledge
Size        Can have many IFs               Usually only one conclusion
Rule-based Inference
• Production rules are typically used as part of a
production system
• Production systems provide pattern-directed control of
the reasoning process
• Production systems have:
• Productions: set of production rules
• Working Memory (WM): description of current state
of the world
• Recognise-act cycle
Production Systems
• Production rules: C1 → A1, C2 → A2, C3 → A3, …, Cn → An
• Working memory: reflects the current state of the environment
• Conflict set: the rules whose conditions match working memory
• Conflict resolution: selects one rule from the conflict set to fire
Recognise-Act Cycle
• Patterns in WM matched against production rule
conditions
• Matching (activated) rules form the conflict set
• One of the matching rules is selected (conflict
resolution) and fired
• Action of rule is performed
• Contents of WM updated
• Cycle repeats with updated WM
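The recognise-act cycle can be sketched in a few lines of Python. This is a minimal illustration, not a real production-system engine: rules are dicts with invented 'if'/'then'/'salience' keys, working memory is a set of fact strings, and conflict resolution simply picks the highest-salience activated rule.

```python
# A minimal recognise-act cycle: match rules against working memory (WM),
# form the conflict set, select one rule (here: highest salience), fire it,
# update WM, and repeat until no rule is activated.

def run(rules, wm, max_cycles=100):
    for _ in range(max_cycles):
        # 1. Match: rules whose conditions all hold in WM and whose
        #    action would add something new form the conflict set.
        conflict_set = [r for r in rules
                        if r['if'] <= wm and r['then'] not in wm]
        if not conflict_set:
            break                      # no activated rules: halt
        # 2. Conflict resolution: pick the rule with the highest salience.
        rule = max(conflict_set, key=lambda r: r['salience'])
        # 3. Act: fire the rule and update working memory.
        wm = wm | {rule['then']}
    return wm

rules = [
    {'if': {'light is red', 'car stopped'}, 'then': 'right turn OK',
     'salience': 10},
    {'if': {'light is red'}, 'then': 'brake', 'salience': 20},
]
print(run(rules, {'light is red', 'car stopped'}))
```

With these two rules the higher-salience 'brake' rule fires first, then the right-turn rule; a real engine would also implement refraction and the other conflict-resolution strategies listed on the next slides.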
Conflict Resolution
• Reasoning in a production system can be viewed as a
type of search
• Selection strategy for rules from the conflict set
controls search
• Production system maintains the conflict set as an
agenda
• Ordered list of activated rules (those with their
conditions satisfied) which have not yet been executed
• Conflict resolution strategy determines where a newly-
activated rule is inserted
Salience
• Rules may be given a precedence order by assigning a
salience value
• Newly activated rules are placed in the agenda above all
rules of lower salience, and below all rules with higher
salience
• Rules with higher salience are executed first
• Conflict resolution strategy applies between rules of the
same salience
• If salience and the conflict resolution strategy can’t
determine which rule is to be executed next, a rule is
chosen at random from the most highly ranked rules
Conflict Resolution Strategies
• Depth-first: newly activated rules placed above other rules in the
agenda
• Breadth-first: newly activated rules placed below other rules
• Specificity: rules ordered by the number of conditions in the LHS
(simple-first or complex-first)
• Least recently fired: fire the rule that was last fired the longest time
ago
• Refraction: don’t fire a rule unless the WM patterns that match its
conditions have been modified
• Recency: rules ordered by the timestamps on the facts that match
their conditions
Salience
• Salience facilitates the modularization of expert
systems in which modules work at different levels of
abstraction
• Over-use of salience can complicate a system
• Explicit ordering to rule execution
• Makes behaviour of modified systems less
predictable
• Rule of thumb: if two rules have the same salience, are
in the same module, and are activated concurrently,
then the order in which they are executed should not
matter
Common Types of Rules
• Knowledge rules, or declarative rules, state all the
facts and relationships about a problem
• Inference rules, or procedural rules, advise on how to
solve a problem, given that certain facts are known
• Inference rules contain rules about rules (metarules)
• Knowledge rules are stored in the knowledge base
• Inference rules become part of the inference engine
Major Advantages of Rules
• Easy to understand (natural form of knowledge)
• Easy to derive inference and explanations
• Easy to modify and maintain
• Easy to combine with uncertainty
• Rules are frequently independent
Major Limitations of Rules
• Complex knowledge requires many rules
• Search limitations in systems with many
rules
Semantic Networks
• A semantic network is a structure for
representing knowledge as a pattern of
interconnected nodes and arcs
• Nodes in the net represent concepts: entities,
attributes, events, values
• Arcs in the network represent relationships that
hold between the concepts
Semantic Networks
• Semantic networks can show inheritance
• Relationship types – is-a, has-a
• Semantic Nets - visual representation of
relationships
• Can be combined with other representation
methods
Semantic Networks

Animal: can breathe, can eat, has skin
  Bird is-a Animal: can fly, has wings, has feathers
    Canary is-a Bird: can sing, is yellow
    Ostrich is-a Bird: runs fast, cannot fly, is tall
  Fish is-a Animal: can swim, has fins, has gills
    Salmon is-a Fish: swims upstream, is pink, is edible
Semantic Networks

ANIMAL: moves, breathes
  DOG is-a ANIMAL: barks, has tail
    SHEEPDOG is-a DOG: works sheep
      COLLIE is-a SHEEPDOG: size: medium
    HOUND is-a DOG: tracks
      BEAGLE is-a HOUND: size: small
SNOOPY: instance of BEAGLE and FICTIONAL CHARACTER; friend of CHARLIE BROWN
LASSIE: instance of COLLIE and FICTIONAL CHARACTER
Semantic Networks
What does or should a node represent?
• A class of objects?
• An instance of a class?
• The canonical instance of a class?
• The set of all instances of a class?
Semantic Networks
• Semantics of links that define new objects and links
that relate existing objects, particularly those dealing
with ‘intrinsic’ characteristics of a given object
• How does one deal with the problems of comparison
between objects (or classes of objects) through their
attributes?
• Essentially the problem of comparing object
instances
• What mechanisms are there to handle
quantification in semantic network formalisms?
Transitive inference, but…
• Clyde is an elephant, an elephant is a mammal: Clyde is
a mammal.
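The Clyde inference above is just transitive closure over the is-a links. A minimal Python sketch (node names and the extra property lists are illustrative additions, not from the slides):

```python
# A tiny semantic net: 'isa' holds the is-a arcs, 'props' the attributes
# attached to each node. An instance inherits every property along its
# is-a chain, and class membership is the transitive closure of is-a.

isa = {'Clyde': 'elephant', 'elephant': 'mammal', 'mammal': 'animal'}
props = {'animal': ['can breathe'],
         'mammal': ['has head'],
         'elephant': ['is grey']}

def is_a(x, kind):
    """Transitive inference: follow is-a links upward from x."""
    while x in isa:
        x = isa[x]
        if x == kind:
            return True
    return False

def properties(x):
    """Collect x's own properties plus everything inherited."""
    out = list(props.get(x, []))
    while x in isa:
        x = isa[x]
        out += props.get(x, [])
    return out

print(is_a('Clyde', 'mammal'))    # Clyde is an elephant, hence a mammal
print(properties('Clyde'))
```

This yields True for the Clyde-is-a-mammal query and the inherited list ['is grey', 'has head', 'can breathe']; the "but…" in the slide title hints at the classic complication, exceptions such as the ostrich that cannot fly, which plain transitive inheritance does not handle.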
Frames
• A frame is a knowledge representation formalism based on the
idea of a frame of reference.
• A frame is a data structure that includes all the knowledge about
a particular object
• Frames are organised in a hierarchy, a form of object-oriented
programming for AI and expert systems
• Each frame describes one object
• Special terminology
Frames
• There are two types of frame:
• Class Frame
• Individual or Instance Frame
• A frame carries with it a set of slots that can
represent objects that are normally associated
with a subject of the frame.
Frames
• The slots can then point to other slots or frames.
That gives frame systems the ability to carry out
inheritance and simple kinds of data
manipulation.
• The use of procedures - also called demons in
the literature - helps in the incorporation of
substantial amounts of procedural knowledge
into a particular frame-oriented knowledge base
Frame-based model of semantic
memory
• Knowledge is organised in a data structure
• Slots in structure are instantiated with particular
values for a given instance of data
• ...translation to OO terminology:
• frames == classes or objects
• slots == variables/methods
General Knowledge as Frames
DOG
  Fixed:     legs: 4
  Default:   diet: carnivorous; sound: bark
  Variable:  size; colour

COLLIE
  Fixed:     breed of: DOG; type: sheepdog
  Default:   size: 65cm
  Variable:  colour
General Knowledge as Frames
MAMMAL:
subclass: ANIMAL
has_part: head
ELEPHANT
subclass: MAMMAL
colour: grey
size: large
Nellie
instance: ELEPHANT
likes: apples
Logic underlies Frames
• ∀x mammal(x) ⇒ has_part(x, head)
• ∀x elephant(x) ⇒ mammal(x)
• elephant(clyde)
∴ mammal(clyde)
∴ has_part(clyde, head)
Logic underlies Frames
MAMMAL:
subclass: ANIMAL
has_part: head
*furry: yes
ELEPHANT
subclass: MAMMAL
has_trunk: yes
*colour: grey
*size: large
*furry: no
Clyde
instance: ELEPHANT
colour: pink
owner: Fred
Nellie
instance: ELEPHANT
size: small
Frames (Contd.)
• Can represent subclass and instance relationships
(both sometimes called ISA or “is a”)
• Properties (e.g. colour and size) can be referred to as
slots and slot values (e.g. grey, large) as slot fillers
• Objects can inherit all properties of parent class
(therefore Nellie is grey and large)
• But can inherit properties which are only typical
(usually called default, here starred), and can be
overridden
• For example, mammal is typically furry, but this is not
so for an elephant
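Slot lookup with inheritance and overridable defaults, as in the ELEPHANT example above, can be sketched directly. This is an illustrative sketch: frames are plain dicts, and the reserved 'instance'/'subclass' slot names are a convention of this example.

```python
# Frames as dicts. get_slot first checks the frame's own slots (a local
# value overrides any inherited default), then walks up the instance /
# subclass links until the slot is found or the hierarchy is exhausted.

frames = {
    'MAMMAL':   {'has_part': 'head', 'furry': 'yes'},
    'ELEPHANT': {'subclass': 'MAMMAL', 'colour': 'grey',
                 'size': 'large', 'furry': 'no'},
    'Clyde':    {'instance': 'ELEPHANT', 'colour': 'pink'},
    'Nellie':   {'instance': 'ELEPHANT', 'size': 'small'},
}

def get_slot(frame, slot):
    while frame is not None:
        f = frames[frame]
        if slot in f:
            return f[slot]             # local value wins over defaults
        frame = f.get('instance') or f.get('subclass')
    return None                        # slot not found anywhere

print(get_slot('Clyde', 'colour'))     # local value overrides 'grey'
print(get_slot('Nellie', 'colour'))    # inherited default
print(get_slot('Clyde', 'furry'))      # ELEPHANT overrides MAMMAL default
```

Clyde comes out pink while Nellie inherits the default grey, and both are non-furry because ELEPHANT overrides the MAMMAL default, exactly the override behaviour described above.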
Frames (Contd.)
• Provide a concise, structural representation of
knowledge in a natural manner
• Frame encompasses complex objects, entire situations
or a management problem as a single entity
• Frame knowledge is partitioned into slots
• Slot can describe declarative knowledge or procedural
knowledge
• Hierarchy of Frames: Inheritance
Capabilities of Frames
• Ability to clearly document information about a domain
model; for example, a plant's machines and their
associated attributes
• Related ability to constrain allowable values of an
attribute
• Modularity of information, permitting ease of system
expansion and maintenance
• More readable and consistent syntax for referencing
domain objects in the rules
Capabilities of Frames
• Platform for building graphic interface with object
graphics
• Mechanism to restrict the scope of facts considered
during forward or backward chaining
• Access to a mechanism that supports the inheritance of
information down a class hierarchy
• Used as underlying model in standards for accessing
KBs (Open Knowledge Base Connectivity - OKBC)
Summary
• Frames have been used in conjunction with other, less
well-grounded representation formalisms, like production
systems, to build pre-operational or operational expert systems
• Frames cannot be used efficiently to organise a whole
computation
Types of Inference
• Deduction
• Induction
• Abduction
Deduction
• Reasoning from a rule and a cause to the effect: given A → B and A, conclude B.
Induction
• Reasoning from observed cases to a general rule: from repeated observations of A accompanied by B, conjecture A → B.
Abduction
• Reasoning from a rule and an observed effect to a plausible cause: given A → B and B, hypothesise A as an explanation.
Uncertain knowledge and reasoning
• In real life, it is not always possible to determine the state of the environment as it might not be clear. Due to
partially observable or non-deterministic environments, agents may need to handle uncertainty and deal with it.
• Uncertain data: Data that is missing, unreliable, inconsistent or noisy
• Uncertain knowledge: When the available knowledge has multiple causes leading to multiple effects or
incomplete knowledge of causality in the domain
• Uncertain knowledge representation: representations that provide only a restricted model of the real system,
or have limited expressiveness
• Inference: In case of incomplete or default reasoning methods, conclusions drawn might not be completely
accurate. Let’s understand this better with the help of an example.
• IF primary infection is bacteremia
• AND site of infection is sterile
• AND entry point is the gastrointestinal tract
• THEN organism is bacteroid (0.7)
• In such uncertain situations, the agent does not guarantee a solution but acts on its own assumptions
and probabilities and gives some degree of belief that it will reach the required solution.
Uncertain knowledge and reasoning
• For example, in medical diagnosis consider the rule Toothache → Cavity. This is
not complete, as not all patients with toothache have cavities. So we can write a
more general rule: Toothache → Cavity ∨ GumProblems ∨ Abscess… To make this
rule complete, we would have to list all possible causes of toothache, but this is not
feasible for the following reasons:
• Laziness- It will require a lot of effort to list the complete set of antecedents and
consequents to make the rules complete.
• Theoretical ignorance- Medical science does not have complete theory for the domain
• Practical ignorance- It might not be practical that all tests have been or can be
conducted for the patients.
• Such uncertain situations can be dealt with using
Probability theory
Truth Maintenance systems
Fuzzy logic.
Uncertain knowledge and reasoning
Probability
• Probability is the degree of likeliness that an event will occur. It provides a certain degree of belief in case
of uncertain situations. It is defined over a set of events U and assigns value P(e) i.e. probability of
occurrence of event e in the range [0,1]. Here each sentence is labeled with a real number in the range of
0 to 1, 0 means the sentence is false and 1 means it is true.
• Conditional Probability (or posterior probability) is the probability of event A given that B has already
occurred:
• P(A|B) = P(A ∧ B) / P(B)
• For example, P(It will rain tomorrow| It is raining today) represents conditional probability of it raining
tomorrow as it is raining today.
• P(A|B) + P(NOT(A)|B) = 1
• Joint probability is the probability of 2 independent events happening simultaneously like rolling two dice
or tossing two coins together. For example, Probability of getting 2 on one dice and 6 on the other is
equal to 1/36. Joint probability has a wide use in various fields such as physics, astronomy, and comes
into play when there are two independent events. The full joint probability distribution specifies the
probability of each complete assignment of values to random variables.
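The dice figure in the joint-probability paragraph can be checked by enumeration. A small sketch (interpreting "2 on one dice and 6 on the other" as a 2 on the first die and a 6 on the second, which is what makes the slide's 1/36 come out):

```python
# Joint probability of two independent events by enumeration:
# P(first die = 2 and second die = 6) over 36 equally likely outcomes.
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # all 36 ordered pairs
event = [o for o in outcomes if o == (2, 6)]
p = Fraction(len(event), len(outcomes))
print(p)

# Independence: the joint probability factors into the two marginals.
assert p == Fraction(1, 6) * Fraction(1, 6)
```

The enumeration gives 1/36, agreeing with the product of the two marginal probabilities, which is exactly what independence means here.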
Uncertain knowledge and reasoning
Bayes Theorem
• Bayes' theorem relates the conditional and prior probabilities of two events. In
classification (as in naive Bayes, which assumes every pair of features is independent),
it calculates the probability P(A|B), where A is a class of possible outcomes and B is a
given instance which has to be classified.
• P(A|B) = P(B|A) * P(A) / P(B)
• P(A|B) = Probability that A is happening, given that B has
occurred (posterior probability)
• P(A) = prior probability of class
• P(B) = prior probability of predictor
• P(B|A) = likelihood of the predictor given the class
Uncertain knowledge and reasoning
Problem:
• Calculate the probability that the alarm has sounded, but neither a burglary nor an earthquake
has occurred, and David and Sophia both called Harry.
Solution:
• The Bayesian network for the above problem is given below. The network structure shows that
Burglary and Earthquake are the parent nodes of Alarm and directly affect the probability of the
alarm going off, while David's and Sophia's calls depend on the alarm.
• The network also represents that David and Sophia do not directly perceive the burglary, do not
notice minor earthquakes, and do not confer before calling.
• The conditional distributions for each node are given as conditional probabilities table or CPT.
• Each row in the CPT must sum to 1, because the entries in a row represent an exhaustive set of
cases for the variable.
• In a CPT, a boolean variable with k boolean parents requires 2^k probability values, one per
combination of parent values. Hence, if there are two parents, the CPT will contain 4 probability values.
30-03-2021 18CSC305J_AI_UNIT3 153
Bayesian probability and belief network
List of all events occurring in this network:
• Burglary (B)
• Earthquake(E)
• Alarm(A)
• David Calls(D)
• Sophia calls(S)
We can write the events of problem statement in the form of probability: P[D, S, A, B, E], can rewrite
the above probability statement using joint probability distribution:
• P[D, S, A, B, E] = P[D | S, A, B, E] · P[S, A, B, E]
  = P[D | S, A, B, E] · P[S | A, B, E] · P[A, B, E]
  = P[D | A] · P[S | A, B, E] · P[A, B, E]
  = P[D | A] · P[S | A] · P[A | B, E] · P[B, E]
  = P[D | A] · P[S | A] · P[A | B, E] · P[B | E] · P[E]
Bayesian probability and belief network
Let's take the observed probabilities for the Burglary and Earthquake components:
• P(B=True) = 0.002, the probability of a burglary
• P(B=False) = 0.998, the probability of no burglary
• P(E=True) = 0.001, the probability of a minor earthquake
• P(E=False) = 0.999, the probability that no earthquake occurred

The conditional probability that Sophia calls depends on its parent node "Alarm":

A         P(S=True)    P(S=False)
True      0.75         0.25
False     0.02         0.98

(The corresponding value for David used in the computation below is P(D=True | A=True) = 0.91.)
Bayesian probability and belief network
• From the formula of joint distribution, we can write the problem
statement in the form of probability distribution:
• P(S, D, A, ¬B, ¬E) = P (S|A) *P (D|A)*P (A|¬B ^ ¬E) *P (¬B) *P (¬E).
= 0.75* 0.91* 0.001* 0.998*0.999
= 0.00068045.
Hence, a Bayesian network can answer any query about the domain by
using Joint distribution.
• The semantics of Bayesian Network:
• There are two ways to understand the semantics of the Bayesian network,
which is given below:
1. To understand the network as the representation of the Joint probability
distribution.
• It is helpful to understand how to construct the network.
2. To understand the network as an encoding of a collection of conditional
independence statements.
• It is helpful in designing inference procedure.
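The query P(S, D, A, ¬B, ¬E) above is just a product of CPT entries from the factorisation. A quick check in Python, using the values quoted on these slides (P(D=True|A=True) = 0.91 and P(A=True|B=False, E=False) = 0.001 are taken from the worked answer):

```python
# P(S, D, A, ¬B, ¬E) = P(S|A) · P(D|A) · P(A|¬B,¬E) · P(¬B) · P(¬E)

p_s_given_a = 0.75      # Sophia calls, given the alarm sounded
p_d_given_a = 0.91      # David calls, given the alarm sounded
p_a = 0.001             # alarm, given no burglary and no earthquake
p_not_b = 0.998         # no burglary
p_not_e = 0.999         # no earthquake

p = p_s_given_a * p_d_given_a * p_a * p_not_b * p_not_e
print(round(p, 8))
```

The product comes to about 0.00068045, matching the slide's answer.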
Bayes' theorem in Artificial intelligence
Bayes' theorem:
• Bayes' theorem is also known as Bayes' rule, Bayes' law, or Bayesian
reasoning, which determines the probability of an event with uncertain
knowledge.
• In probability theory, it relates the conditional probability and marginal
probabilities of two random events.
• Bayes' theorem was named after the British mathematician Thomas Bayes.
The Bayesian inference is an application of Bayes' theorem, which is
fundamental to Bayesian statistics.
• It is a way to calculate the value of P(B|A) with the knowledge of P(A|B).
• Bayes' theorem allows updating the probability prediction of an event by
observing new information of the real world.
Bayes' theorem in Artificial intelligence
• P(A|B) = P(B|A) · P(A) / P(B)
• where P(A|B) is the posterior probability, P(B|A) the likelihood, P(A) the prior probability, and P(B) the marginal probability of the evidence.
Applying Bayes' theorem in Artificial intelligence
Applying Bayes' rule:
• Bayes' rule allows us to compute the single term P(B|A) in terms of P(A|B), P(B), and P(A). This is very useful in cases
where we have good estimates of these three terms and want to determine the fourth. Suppose we want to perceive the
effect of some unknown cause and to compute that cause; then Bayes' rule becomes:
• P(cause | effect) = P(effect | cause) · P(cause) / P(effect)
Example-1:
Question: what is the probability that a patient has meningitis, given that the patient has a stiff neck?
• Given Data:
A doctor is aware that disease meningitis causes a patient to have a stiff neck, and it occurs 80% of the time. He
is also aware of some more facts, which are given as follows:
The Known probability that a patient has meningitis disease is 1/30,000.
The Known probability that a patient has a stiff neck is 2%.
Let a be the proposition that the patient has a stiff neck and b the proposition that the patient has meningitis. Then
we can calculate the following:
P(a|b) = 0.8
P(b) = 1/30000 = 3.3 × 10^-5
P(a) = 0.02
P(b|a) = (P(a|b) · P(b)) / P(a) = 1.33 × 10^-3 ≈ 0.00133
• Hence, we can assume that 1 patient out of 750 patients
30-03-2021 has meningitis disease with a
18CSC305J_AI_UNIT3 stiff neck. 162
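The meningitis computation above is a one-line application of Bayes' rule, easy to verify:

```python
# P(b|a) = P(a|b) · P(b) / P(a), with the figures from the example above.

p_a_given_b = 0.8        # stiff neck, given meningitis
p_b = 1 / 30000          # prior probability of meningitis
p_a = 0.02               # prior probability of a stiff neck

p_b_given_a = p_a_given_b * p_b / p_a
print(round(p_b_given_a, 5))    # posterior probability of meningitis
print(round(1 / p_b_given_a))   # roughly 1 patient in this many
```

This confirms the posterior of about 0.00133, i.e. roughly 1 in 750 stiff-neck patients.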
Applying Bayes' theorem in Artificial
intelligence
Example-2:
Question: From a standard deck of playing cards, a single card is drawn. The probability that the card is
king is 4/52, then calculate posterior probability P(King|Face), which means the drawn face card is a
king card.
Solution:
• P(King|Face) = P(Face|King) · P(King) / P(Face)
• Every king is a face card, so P(Face|King) = 1. P(King) = 4/52. There are 12 face cards in a standard deck, so P(Face) = 12/52.
• P(King|Face) = (1 × 4/52) / (12/52) = 4/12 = 1/3