
Unit: 03 Knowledge Representation

Knowledge based Agents:


Knowledge-based agents are agents that maintain an internal state of knowledge, reason over that knowledge, update it after new observations, and take actions accordingly. These agents represent the world in some formal representation and act intelligently.

Knowledge-based agents are composed of two main parts:

Knowledge-base
Inference system

A knowledge-based agent must be able to do the following:

An agent should be able to represent states, actions, etc.
An agent should be able to incorporate new percepts.
An agent can update the internal representation of the world.
An agent can deduce hidden properties of the world from its internal representation.
An agent can deduce appropriate actions.

Knowledgebase (KB)

The knowledge base is the central component of a knowledge-based agent; it is also known as the KB. It is a collection of sentences (here 'sentence' is a technical term and is not identical to a sentence in English). These sentences are expressed in a language called a knowledge representation language. The knowledge base of a KBA stores facts about the world. A knowledge base is required so that an agent can update its knowledge, learn from experience, and take actions according to that knowledge.

Operations Performed by KBA


The following three operations are performed by a KBA in order to exhibit intelligent behaviour:
1. TELL: This operation tells the knowledge base what the agent perceives from the environment.
2. ASK: This operation asks the knowledge base what action it should perform.
3. PERFORM: It performs the selected action.
Following is the structural outline of a generic knowledge-based agent program:

function KB-AGENT(percept) returns an action
    persistent: KB, a knowledge base
                t, a counter, initially 0, indicating time
    TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
    action = ASK(KB, MAKE-ACTION-QUERY(t))
    TELL(KB, MAKE-ACTION-SENTENCE(action, t))
    t = t + 1
    return action


The knowledge-based agent takes percept as input and returns an action as output. The
agent maintains the knowledge base, KB, and it initially has some background knowledge
of the real world. It also has a counter to indicate the time for the whole process, and this
counter is initialized with zero.
The MAKE-PERCEPT-SENTENCE function constructs a sentence asserting that the agent perceived the given percept at the given time.
The MAKE-ACTION-QUERY generates a sentence to ask which action should be done
at the current time.
MAKE-ACTION-SENTENCE generates a sentence which asserts that the chosen action
was executed.
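The same loop can be written as a minimal Python sketch. The KnowledgeBase class and the sentence-building helpers below are illustrative placeholders, not a real inference engine:

class KnowledgeBase:
    def __init__(self):
        self.sentences = []

    def tell(self, sentence):
        self.sentences.append(sentence)          # store a new sentence

    def ask(self, query):
        # A real agent would run logical inference here; this stub
        # just returns a default action.
        return "NoOp"

def make_percept_sentence(percept, t):
    return ("Percept", percept, t)

def make_action_query(t):
    return ("BestAction", t)

def make_action_sentence(action, t):
    return ("Action", action, t)

class KBAgent:
    def __init__(self):
        self.kb = KnowledgeBase()
        self.t = 0                                # time counter, initially 0

    def __call__(self, percept):
        self.kb.tell(make_percept_sentence(percept, self.t))
        action = self.kb.ask(make_action_query(self.t))
        self.kb.tell(make_action_sentence(action, self.t))
        self.t += 1
        return action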

Various levels of knowledge-based agent:


A knowledge-based agent can be viewed at different levels which are given below:
1. Knowledge level
Knowledge level is the first level of a knowledge-based agent. At this level, we specify what the agent knows and what its goals are; with these specifications, we can fix its behaviour. For example, suppose an automated taxi agent needs to go from station A to station B, and it knows the way from A to B; this comes at the knowledge level.

2. Logical level:
At this level, we consider how the knowledge is represented and stored. Sentences are encoded into some logic; that is, at the logical level an encoding of knowledge into logical sentences occurs. At the logical level we can expect the automated taxi agent to reason its way to destination B.

3. Implementation level:
This is the physical representation of logic and knowledge. At the implementation level, the agent performs actions as per the logical and knowledge levels. At this level, an automated taxi agent actually implements its knowledge and logic so that it can reach the destination.

Wumpus world:
The Wumpus world is a simple example world used to illustrate the worth of a knowledge-based agent and to demonstrate knowledge representation. It was inspired by the video game Hunt the Wumpus by Gregory Yob (1973).
The Wumpus world is a cave with a 4×4 grid of rooms connected by passageways, so there are 16 rooms in total, each connected to its neighbours. A knowledge-based agent explores this world. The cave has a room containing a beast called the Wumpus, which eats anyone who enters that room. The Wumpus can be shot by the agent, but the agent has only a single arrow. Some rooms are bottomless pits; if the agent falls into a pit, it is stuck there forever. The exciting thing about this cave is that one room may contain a heap of gold. The agent's goal is to find the gold and climb out of the cave without falling into a pit or being eaten by the Wumpus. The agent receives a reward if it comes out with the gold, and a penalty if it is eaten by the Wumpus or falls into a pit.


The rooms adjacent to the Wumpus room are smelly: a room next to the Wumpus contains a stench.
The rooms adjacent to a pit have a breeze, so if the agent reaches a room next to a pit, it will perceive the breeze.
There is glitter in a room if and only if the room contains the gold.
The Wumpus can be killed by the agent if the agent is facing it; when killed, the Wumpus emits a horrible scream which can be heard anywhere in the cave.

PEAS description of Wumpus world:


To explain the Wumpus world, the PEAS description is given below:
Performance measure:
+1000 reward points if the agent comes out of the cave with the gold.
-1000 points penalty for being eaten by the Wumpus or falling into a pit.
-1 for each action, and -10 for using the arrow.
The game ends when the agent dies or comes out of the cave.
Environment:
A 4×4 grid of rooms.
The agent initially is in square [1, 1], facing right.
The locations of the Wumpus and the gold are chosen randomly, excluding the first square [1,1].
Each square of the cave can be a pit with probability 0.2, except the first square.
Actuators:
Turn left
Turn right
Move forward
Grab
Release
Shoot

Sensors:


The agent will perceive a stench if it is in a room adjacent to the Wumpus (not diagonally).
The agent will perceive a breeze if it is in a room directly adjacent to a pit.
The agent will perceive glitter in the room where the gold is present.
The agent will perceive a bump if it walks into a wall.
When the Wumpus is shot, it emits a horrible scream which can be perceived anywhere in the cave.
These percepts can be represented as a five-element list, with a different indicator for each sensor.
Example: if the agent perceives a stench and a breeze, but no glitter, no bump, and no scream, the percept is represented as:
[Stench, Breeze, None, None, None]

The Wumpus world Properties:

Partially observable: The Wumpus world is partially observable because the agent can only perceive its immediate surroundings, such as adjacent rooms.
Deterministic: It is deterministic, as the result and outcome of every action are exactly determined.
Sequential: The order of actions matters, so it is sequential.
Static: It is static, as the Wumpus and the pits do not move.
Discrete: The environment is discrete.
One agent: The environment is single-agent, as we have only one agent, and the Wumpus is not considered an agent.

Exploring the Wumpus world:


At room [2,2], no stench and no breeze are present, so suppose the agent decides to move to [2,3]. At room [2,3] the agent perceives glitter, so it should grab the gold and climb out of the cave.

Logic:
Logic plays a fundamental role in artificial intelligence (AI) by providing a formal framework
for representing and reasoning about knowledge. Here are some key aspects of logic in AI:


Symbolic Representation: Logic allows AI systems to represent knowledge using symbols and rules. These symbols can represent facts, relationships, actions, and other relevant information.

Inference and Reasoning: Logic provides rules for drawing conclusions from known facts
and making inferences based on logical deductions. AI systems use logical reasoning to
derive new knowledge from existing knowledge.

Predicate Logic: Predicate logic is a common formalism used in AI for representing and
reasoning about statements involving quantifiers (such as "for all" and "there exists") and
predicates (relations between objects).

First-Order Logic (FOL): FOL extends predicate logic to include variables, quantifiers,
and functions. It allows for more expressive representations of knowledge and more
complex reasoning.

Automated Reasoning: AI systems use automated reasoning techniques, such as theorem proving and model checking, to perform logical inference efficiently. These techniques can be used to verify the correctness of logical statements and to solve logical puzzles.

Knowledge Representation: Logic provides a formal basis for representing knowledge in AI systems. By encoding knowledge in a logical language, AI systems can manipulate and reason about this knowledge to perform tasks such as planning, problem-solving, and decision-making.

Expert Systems: Expert systems are AI systems that use logical rules to emulate the
reasoning of human experts in specific domains. These systems encode expert knowledge
in the form of logical rules and use inference mechanisms to make decisions and provide
advice.

Overall, logic provides a powerful framework for representing and reasoning about knowledge in
AI systems, enabling them to perform complex tasks and make intelligent decisions.

Logic is a formal language for expressing real-world sentences:

Syntax – the notation used to express sentences
Semantics – the meaning of the sentences
Truth value – a sentence's truth value (True or False)
Model – a possible world in which sentences are evaluated
Entailment:
If I were to tell you that either I am going to be promoted or I will get a big bonus because I just landed a huge account for the company I work for, then you would think a few things:
If he doesn’t get promoted, then at least he’ll get compensated with a bonus
If he doesn’t get a bonus, then at least he’ll get compensated with a promotion
His company won’t both not promote him and not give him a bonus.


All of these claims follow from the original claim. They "follow" in the sense that if the original claim is true, then these conclusions must be true. There is some sense in which they "mean the same thing": they describe the same world, or claim the same thing to be true about the world. A lot of logical consequence is like this: a relationship of "following", or "entailment", between statements that mean essentially the same thing. Other logical consequences are cases where one statement entails another (if the first is true, the second must be true), not because they essentially mean the same thing, but because the first statement makes a "stronger" claim than the other.
Logical entailment is denoted by α ⊨ β, which means that β follows from α.

Eg:

α1 = "There is no pit in [1,2]"
In every model in which the KB is true, α1 is also true, so:
KB ⊨ α1

Another example:

α2 = "There is no pit in [2,2]"
In some models in which the KB is true, α2 is false, so:
KB ⊭ α2

Entailment is like the needle being in the haystack; inference is like finding it. This distinction is
embodied in some formal notation: if an inference algorithm i can derive α from KB, we write

KB ⊢i α , which is pronounced “α is derived from KB by i” or “i derives α from KB.”

An inference algorithm that derives only entailed sentences is called sound.

An inference algorithm that can derive every entailed sentence is called complete.
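In propositional logic, entailment can be checked mechanically by enumerating every model, which is what makes such a procedure both sound and complete (though exponential in the number of symbols). A minimal illustrative Python sketch, with the KB and α given as Boolean functions of a model (the names are assumptions for illustration):

from itertools import product

def entails(kb, alpha, symbols):
    # KB |= alpha iff alpha is true in every model where KB is true.
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False        # found a model of KB where alpha fails
    return True

# Toy example: does (P or Q) and (not P or R) entail (Q or R)?
kb = lambda m: (m["P"] or m["Q"]) and (not m["P"] or m["R"])
alpha = lambda m: m["Q"] or m["R"]
print(entails(kb, alpha, ["P", "Q", "R"]))      # True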

Logical Reasoning with Entailment:


Grounding:

It is the connection between logical reasoning processes and the real environment in which the agent exists. In particular, how do we know that the KB is true in the real world? A simple answer is that the agent's sensors create the connection. For example, the wumpus-world agent has a smell sensor. The agent program creates a suitable sentence whenever there is a smell. Then, whenever that sentence is in the knowledge base, it is true in the real world.

Propositional logic in Artificial intelligence:


Propositional logic (PL) is the simplest form of logic where all the statements are made by
propositions. A proposition is a declarative statement which is either true or false. It is a
technique of knowledge representation in logical and mathematical form.
Example:
It is Sunday. (true or false depending on the day)
The Sun rises from the West. (false proposition)
3 + 3 = 7 (false proposition)
5 is a prime number. (true proposition)

Propositional Logic syntax:


Atomic propositions are the simplest propositions. An atomic proposition consists of a single proposition symbol. These are sentences that must be either true or false.

Compound propositions are constructed by combining simpler or atomic propositions, using parentheses and logical connectives.

Logical connectives are used to connect two simpler propositions or to represent a sentence logically. We can create compound propositions with the help of logical connectives. There are mainly five connectives, which are given as follows:

1. Negation: A sentence such as ¬P is called the negation of P. A literal can be either a positive literal or a negative literal.

2. Conjunction: A sentence which has the ∧ connective, such as P ∧ Q, is called a conjunction.

Example: "Rohan is intelligent and hardworking." It can be written as:

P = Rohan is intelligent, Q = Rohan is hardworking → P ∧ Q.

3. Disjunction: A sentence which has the ∨ connective, such as P ∨ Q, is called a disjunction, where P and Q are propositions.

Example: "Ritika is a doctor or an engineer." Here P = Ritika is a doctor, Q = Ritika is an engineer, so we can write it as P ∨ Q.


4. Implication: A sentence such as P → Q is called an implication. Implications are also known as if-then rules. Example: "If it is raining, then the street is wet." Let P = It is raining and Q = The street is wet; it is represented as P → Q.

5. Biconditional: A sentence such as P ⇔ Q is a biconditional sentence. Example: "If I am breathing, then I am alive." P = I am breathing, Q = I am alive; it can be represented as P ⇔ Q.

Truth Table:
In propositional logic, we need to know the truth values of propositions in all possible scenarios. We can combine all the possible combinations with logical connectives, and the representation of these combinations in a tabular format is called a truth table. Following are the truth tables for all the logical connectives:
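Since the tables are easy to regenerate, here is a small Python sketch that prints the truth table for the five connectives; implication and biconditional are derived from Boolean primitives:

from itertools import product

row = "{!s:6} {!s:6} {!s:6} {!s:6} {!s:6} {!s:6} {!s:6}"
print(row.format("P", "Q", "¬P", "P∧Q", "P∨Q", "P→Q", "P↔Q"))
for p, q in product([True, False], repeat=2):
    implies = (not p) or q      # false only when P is true and Q is false
    iff = (p == q)              # true exactly when P and Q agree
    print(row.format(p, q, not p, p and q, p or q, implies, iff))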

Standard Logical Equivalences:

Logical equivalence is the condition of equality that exists between two statements or sentences in propositional logic. The relationship between the two statements translates verbally into "if and only if." In mathematics, logical equivalence is typically symbolized by a double arrow (⟺ or ⟷) or triple lines (≡).

This expression provides an example of logical equivalence between two simple statements:

A ∨ B ⟺ B ∨ A

The expression includes the statements A ∨ B and B ∨ A, which are connected by the IFF (if and only if) connective. Each statement uses the OR Boolean function (∨) to indicate an inclusive disjunction between variables A and B: the statement returns a true value if either variable is true or if both variables are true, but a false value if both variables are false. The expression in its entirety states that the statement "variable A or variable B" is logically equivalent to the statement "variable B or variable A."

Some of the standard logical equivalences:
Commutativity: P ∧ Q ≡ Q ∧ P and P ∨ Q ≡ Q ∨ P
Associativity: (P ∧ Q) ∧ R ≡ P ∧ (Q ∧ R) and (P ∨ Q) ∨ R ≡ P ∨ (Q ∨ R)
Double negation: ¬(¬P) ≡ P
Contraposition: P → Q ≡ ¬Q → ¬P
Implication elimination: P → Q ≡ ¬P ∨ Q
Biconditional elimination: P ↔ Q ≡ (P → Q) ∧ (Q → P)
De Morgan's laws: ¬(P ∧ Q) ≡ ¬P ∨ ¬Q and ¬(P ∨ Q) ≡ ¬P ∧ ¬Q
Distributivity: P ∧ (Q ∨ R) ≡ (P ∧ Q) ∨ (P ∧ R) and P ∨ (Q ∧ R) ≡ (P ∨ Q) ∧ (P ∨ R)

Inference rules:

Inference rules are templates for generating valid arguments. Inference rules are applied to derive proofs in artificial intelligence, and a proof is a sequence of conclusions that leads to the desired goal. In inference rules, the implication connective plays an important role. Following are some terminologies related to inference rules:

Implication: One of the logical connectives, which can be represented as P → Q. It is a Boolean expression.
Converse: The converse of an implication swaps its two sides; the converse of P → Q is Q → P.
Contrapositive: The negation of the converse; it can be represented as ¬Q → ¬P.
Inverse: The negation of the implication; it can be represented as ¬P → ¬Q.


Types of Inference rules:

Modus Ponens:

The Modus Ponens rule is one of the most important rules of inference. It states that if P and P → Q are true, then we can infer that Q will be true. It can be represented as:

P, P → Q ⊢ Q

Example:

Statement-1: "If I am sleepy then I go to bed" ==> P→ Q

Statement-2: "I am sleepy" ==> P

Conclusion: "I go to bed." ==> Q.

Hence, we can say that, if P→ Q is true and P is true then Q will be true.

AND Elimination:

This rule states that if P ∧ Q is true, then P and Q will individually also be true. It can be represented as:

P ∧ Q ⊢ P    and    P ∧ Q ⊢ Q


Prove logically that there is no pit in [1,2]


Knowledgebase:

Proof:

Proof by resolution:
The idea of resolution is simple: if we know that
o p is true or q is true
o and we also know that p is false or r is true
o then it must be the case that q is true or r is true.
This line of reasoning is formalized in the Resolution Tautology:
(p ∨ q) ∧ (¬p ∨ r) → (q ∨ r)
Eg: Given the following hypotheses:
If it rains, Joe brings his umbrella (r -> u)
If Joe has an umbrella, he doesn't get wet (u -> NOT w)
If it doesn't rain, Joe doesn't get wet (NOT r -> NOT w)
Prove that Joe doesn't get wet (NOT w).
We first put each hypothesis in CNF:
a. r -> u == (NOT r OR u)
b. u -> NOT w == (NOT u OR NOT w)


c. NOT r -> NOT w == (r OR NOT w)


We then use resolution on the hypotheses to derive the conclusion (NOT w):
1. NOT r OR u Premise

2. NOT u OR NOT w Premise

3. r OR NOT w Premise

4. NOT r OR NOT w L1, L2, resolution

5. NOT w OR NOT w L3, L4, resolution

6. NOT w L5, idempotence
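The resolution step itself is mechanical. Below is a minimal Python sketch of binary resolution applied to this example, representing each clause as a frozenset of literals where "~p" denotes the negation of "p" (the representation is an assumption for illustration):

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    # Return every resolvent of two clauses: for each complementary
    # pair of literals, join the remainders of both clauses.
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

# Premises in CNF: (~r | u), (~u | ~w), (r | ~w)
a = frozenset({"~r", "u"})
b = frozenset({"~u", "~w"})
c = frozenset({"r", "~w"})

step4 = resolve(a, b)[0]        # {~r, ~w}
step5 = resolve(c, step4)[0]    # {~w}; idempotence is automatic with sets
print(sorted(step4), sorted(step5))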

Prove logically that there is a pit in [3,1]


The breeze rule for square [2,1] gives:
B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)

By biconditional elimination:
(B2,1 → P1,1 ∨ P2,2 ∨ P3,1) ∧ (P1,1 ∨ P2,2 ∨ P3,1 → B2,1)
The agent perceives a breeze in [2,1]:
B2,1
By Modus Ponens:
(P1,1 ∨ P2,2 ∨ P3,1)
There is no pit in [1,1]:
¬P1,1
By resolution:
P2,2 ∨ P3,1
There is no pit in [2,2]:
¬P2,2
Hence, P3,1: there is a pit in [3,1].

Conjunctive Normal Form:


Resolution works best when the formula is of a special form: an ∧ (AND) of ∨s (ORs) of possibly negated (¬) variables, called literals.
Eg:
(y ∨ ¬z) ∧ (¬y) ∧ (y ∨ z) is in CNF
(x ∨ ¬y ∧ z) is not in CNF

To convert a formula into CNF:

Convert double implications into single implications.
Open up the implications to get ORs.
Get rid of double negations.
Use De Morgan's laws.
Use distributivity.


Eg: F ∨ (G ∧ H) can be written as

o (F ∨ G) ∧ (F ∨ H)

Eg: Convert to CNF: A → (B ∧ C)

¬A ∨ (B ∧ C)

(¬A ∨ B) ∧ (¬A ∨ C)
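Assuming the sympy library is available, its to_cnf helper performs exactly these rewriting steps and can be used to check conversions such as the two examples above:

from sympy import symbols
from sympy.logic.boolalg import to_cnf

A, B, C, F, G, H = symbols("A B C F G H")

# Distribute OR over AND: F | (G & H) becomes (F | G) & (F | H)
print(to_cnf(F | (G & H)))

# Convert A -> (B & C): open the implication, then distribute
print(to_cnf(A >> (B & C)))     # (B | ~A) & (C | ~A)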

Horn Clause in AI
The term "horn clause" refers to a disjunction of literals in which, at most, one literal is not
negated. A horn clause is a clause that has exactly one literal that is not negated.The
logician Alfred Horn first recognized the importance of Horn clauses in 1951. Horn clauses
are a type of logical formula used in logic programming, formal specification, universal
algebra, and model theory due to their helpful qualities in these areas and others.

Types of Horn Clauses:

 Definite clause / strict Horn clause – has precisely one positive literal.
 Unit clause – a definite clause containing no negative literals.
 Goal clause – a Horn clause without a positive literal.

Horn clauses perform a fundamental role in both constructive and computational logic.

Syntax: a Horn clause has the form ¬p1 ∨ ¬p2 ∨ ... ∨ ¬pn ∨ q, which can equivalently be written as the implication (p1 ∧ p2 ∧ ... ∧ pn) → q.

Inference Engines:


An inference engine is the component of an AI system that is responsible for drawing conclusions based on the evidence and information provided to it. In other words, it is responsible for making deductions and inferences based on what it knows.
The inference engine is often compared to the human brain, as it is responsible for making the
same kinds of deductions and inferences that we do. However, the inference engine is not limited
by the same constraints as the human brain. It can process information much faster and is not
subject to the same biases and errors that we are.
The inference engine is a critical component of AI systems because it is responsible for
making the decisions that the system needs to make in order to function. Without an inference
engine, an AI system would be little more than a collection of data.

Types of Inference engines:

 Forward chaining: Forward chaining is a form of reasoning for an AI expert system that starts
with simple facts and applies inference rules to extract more data until the goal is reached.
 Backward chaining: Backward chaining is another strategy used to shape an AI expert system
that starts with the end goal and works backward through the AI’s rules to find facts that support
the goal.

Forward Chaining:
Forward chaining is also known as a forward deduction or forward reasoning method
when using an inference engine. The forward-chaining algorithm starts from known facts,
triggers all rules whose premises are satisfied and adds their conclusion to the known facts. This
process repeats until the problem is solved. In this type of chaining, the inference engine starts by
evaluating existing facts, derivations, and conditions before deducing new information. An
endpoint, or goal, is achieved through the manipulation of knowledge that exists in the
knowledge base.


Forward Chaining Properties

 Forward chaining follows a bottom-up strategy, going from bottom to top.
 It uses known facts to start from the initial state (facts) and works toward the goal state, or conclusion.
 The forward chaining method is also known as data-driven because we achieve our objective by
employing available data.
 The forward chaining method is widely used in expert systems such as CLIPS, business rule
systems and manufacturing rule systems.
 It uses a breadth-first search as it has to go through all the facts first.
 It can be used to draw multiple conclusions.
Eg:

Knowledgebase:
1. John’s credit score is 780.
2. A person with a credit score greater than 700 has never defaulted on their loan.
3. John has an annual income of $100,000.
4. A person with a credit score greater than 750 is a low-risk borrower.
5. A person with a credit score between 600 to 750 is a medium-risk borrower.
6. A person with a credit score less than 600 is a high-risk borrower.
7. A low-risk borrower can be given a loan amount up to 4X of his annual income at a 10 percent
interest rate.
8. A medium-risk borrower can be given a loan amount of up to 3X of his annual income at a 12
percent interest rate.
9. A high-risk borrower can be given a loan amount of up to 1X of his annual income at a 16 percent
interest rate.
Question:

1. What max loan amount can be sanctioned for John?


2. What will the interest rate be?

Solution:
To deduce the conclusion, we apply forward chaining on the knowledge base. We start from the
facts which are given in the knowledge base and go through each one of them to deduce


intermediate conclusions until we are able to reach the final conclusion or have sufficient
evidence to negate the same.
John’ CS = 780 AND CS > 750 are Low Risk Borrower → John is a Low Risk Borrower

Loan Amount for Low Risk Borrower is 4X annual income AND John’s annual income is
$100k

→ Max loan amount that can be sanctioned is $400k at a 10% interest rate.
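A minimal Python sketch of this data-driven loop, with the relevant rules written as premise/conclusion pairs over a fact dictionary (the fact names and structure are assumptions for illustration):

# Each rule is (premise over the facts, new facts it adds).
facts = {"credit_score": 780, "annual_income": 100_000}

rules = [
    (lambda f: f["credit_score"] > 750,
     lambda f: {"risk": "low"}),
    (lambda f: f.get("risk") == "low",
     lambda f: {"max_loan": 4 * f["annual_income"], "interest": 0.10}),
]

changed = True
while changed:                       # iterate to a fixed point
    changed = False
    for premise, conclude in rules:
        if premise(facts):
            new = conclude(facts)
            if not set(new) <= set(facts):   # rule adds something new
                facts.update(new)
                changed = True

print(facts["max_loan"], facts["interest"])  # 400000 0.1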

Backward Chaining
Backward chaining is also known as a backward deduction or backward reasoning method when using an
inference engine. In this, the inference engine knows the final decision or goal. The system starts from the
goal and works backward to determine what facts must be asserted so that the goal can be achieved.

For example, it starts directly with the conclusion (hypothesis) and validates it by backtracking
through a sequence of facts. Backward chaining can be used in debugging, diagnostics and prescription
applications.

Properties of Backward Chaining

 Backward chaining uses a top-down strategy, going from top to bottom.


 The modus ponens inference rule is used as the basis for the backward chaining process. This rule states that if both the conditional statement (p → q) and the antecedent (p) are true, then we can infer the consequent (q).
 In backward chaining, the goal is broken into sub-goals to prove the facts are true.
 It is called a goal-driven approach, as a list of goals decides which rules are selected and used.
 The backward chaining algorithm is used in game theory, automated theorem-proving tools,
inference engines, proof assistants and various AI applications.
 The backward-chaining method mostly uses a depth-first search strategy for proofs.
Eg:
Knowledgebase:


 John is taller than Kim


 John is a boy
 Kim is a girl
 John and Kim study in the same class
 Everyone else other than John in the class is shorter than Kim
Question:

 Is John the tallest boy in class?

Now, to apply backward chaining, we start from the goal and assume that John is the tallest boy
in class. From there, we go backward through the knowledge base comparing that assumption to
each known fact to determine whether it is true that John is the tallest boy in class or not.
Height(John) > Height(anyone in the class)
AND
John and Kim both are in the same class
AND
Height(Kim) > Height(anyone in the class except John)
AND
John is a boy
SO
Height(John) > Height(Kim)
This aligns with the facts in the knowledge base; hence the goal is proved true.
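A minimal goal-driven sketch of the same idea in Python, with the knowledge base reduced to ground facts and the goal proved by recursively checking a rule's premises (the atom names are assumptions for illustration):

facts = {"taller(John, Kim)", "boy(John)", "girl(Kim)",
         "same_class(John, Kim)", "others_shorter_than(Kim)"}

# Each goal maps to the premise sets that would establish it.
rules = {
    "tallest_boy(John)": [
        {"boy(John)", "same_class(John, Kim)",
         "taller(John, Kim)", "others_shorter_than(Kim)"},
    ],
}

def prove(goal):
    # A goal holds if it is a known fact, or if every premise of
    # some rule concluding it can itself be proved.
    if goal in facts:
        return True
    return any(all(prove(p) for p in premises)
               for premises in rules.get(goal, []))

print(prove("tallest_boy(John)"))    # True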

AO* algorithm – Artificial intelligence


The AO* algorithm performs a best-first search. It divides a given difficult problem into a smaller group of problems that are then solved using the AND-OR graph concept. AND-OR graphs are specialized graphs used for problems that can be divided into smaller subproblems. The AND side of the graph represents a set of tasks that must all be completed to achieve the main goal, while the OR side represents alternative methods for accomplishing the same main goal.

The figure referred to here is a simple AND-OR graph in which buying a car is broken down into smaller problems or tasks that can be accomplished to achieve the main goal. One option is to steal a car, which would accomplish the main goal; the other is to use your own money to purchase a car, which would also accomplish it. The AND symbol indicates the AND part of the graph, meaning that all subproblems joined by the AND must be solved before the preceding node or problem is finished.

Representation of Horn clauses using AND-OR graph for forward chaining

Consider an example set of inference rules and facts. In the corresponding AND-OR graph, the premises of each rule are joined by an AND arc, while alternative rules that conclude the same fact form OR branches.

Effective Propositional Model Checking:


Propositional model checking is a formal verification method used to systematically explore
the state space of a system to ensure it meets certain specifications or properties. In artificial
intelligence (AI), effective propositional model checking can help verify the correctness and
reliability of AI systems, especially in safety-critical applications.
Here are some key points about effective propositional model checking in AI:


1. State Space Exploration


State Space Explosion: One of the main challenges in model checking is the exponential
growth of the state space as the system size increases. Techniques like symbolic model
checking and abstraction can help manage this complexity.
Symbolic Model Checking: Uses symbolic representations like Binary Decision Diagrams
(BDDs) or SAT solvers to represent and manipulate large sets of states efficiently.
2. Temporal Logic Specifications
Linear Temporal Logic (LTL) and Computation Tree Logic (CTL) are commonly used to
specify properties of systems in model checking.
LTL: Focuses on sequences of states and allows expressing properties over these sequences.
CTL: Allows expressing properties over tree-like structures of possible execution paths.
3. Verification Techniques
Bounded Model Checking (BMC): Checks the existence of a counterexample within a given
bound. Uses SAT solvers to find counterexamples efficiently.
Explicit State Model Checking: Enumerates all possible states and transitions, though it
often struggles with large state spaces.
Abstraction and Refinement: Abstracts the state space to a smaller model and refines it
iteratively to ensure properties are preserved.
4. Applications in AI
Verification of AI Algorithms: Ensures the correctness of algorithms, such as those used in
autonomous systems or decision-making processes.
Safety and Reliability: Crucial for AI applications in healthcare, autonomous driving, and
other safety-critical domains.
Debugging and Testing: Helps identify and correct errors in AI systems during development.
5. Tools and Frameworks
Model Checking Tools: Tools like NuSMV, SPIN, and PRISM are widely used for model
checking.
Integration with AI Development: Incorporating model checking into the AI development
pipeline can help catch errors early and ensure robust system design.
6. Challenges and Future Directions
Scalability: As AI systems become more complex, improving the scalability of model
checking techniques is crucial.


Automation: Enhancing the automation of model checking processes to reduce the need for
manual intervention.
Combining with Other Methods: Integrating model checking with other verification and
validation methods, such as testing and runtime verification, for comprehensive assurance.
By leveraging these techniques and tools, propositional model checking can effectively
contribute to the development of reliable and trustworthy AI systems.

Agents Based on Propositional Logic:

Agents based on propositional logic in artificial intelligence (AI) use logical frameworks to
represent knowledge and make decisions. These agents rely on the formalism of
propositional logic to encode facts, rules, and derive conclusions to guide their actions.
Here’s an overview of how such agents function, their structure, and their applications.

1. Fundamentals of Propositional Logic:

 Propositional Variables: Basic units of information that can be either true or false (e.g., P, Q, R).
 Logical Connectives: Operators that combine propositional variables into complex formulas,
including:
 AND (∧)
 OR (∨)
 NOT (¬)
 IMPLIES (→)
 EQUIVALENT (↔)

Propositional Formulas: Expressions formed using variables and connectives (e.g., P ∧ Q).


2. Knowledge Representation:

 Knowledge Base (KB): A collection of propositional formulas representing the agent's


knowledge about the environment.
 Encoding Facts: Individual pieces of information (e.g., "it is raining" could be represented by R).
 Encoding Rules: Logical implications that define relationships between facts (e.g., "if it is raining, then the ground is wet" could be represented by R → W).

3. Reasoning Mechanisms

 Inference: The process of deriving new propositions from existing ones using inference rules such as Modus Ponens (if P → Q and P are true, then Q is true).
 Entailment: Determining whether a particular proposition logically follows from the KB.

4. Decision Making:

 Action Selection: Based on the current knowledge base, the agent uses logical reasoning to
decide which actions to take to achieve its goals.
 Goal Formulation: Goals are represented as propositional formulas that the agent aims to make
true.

5. Types of Propositional Logic Agents:

 Simple Reflex Agents:


o Condition-Action Rules: Directly map observations to actions (e.g., "if it is raining, then carry an umbrella" can be represented as R → U).
o Stateless: Do not maintain an internal state; rely only on the current perception.
 Model-Based Reflex Agents:
o State Representation: Maintain an internal model of the world represented using
propositional logic.
o Updating the Model: Update their knowledge base with new observations to reflect
changes in the environment.
o Decision Rules: Use the internal model and logical rules to make decisions.

6. Advantages and Challenges:

 Advantages:
o Formal Verification: Propositional logic allows for formal verification of the agent's
behavior.
o Clarity and Precision: Logical representations are clear and precise, making it easier to
understand and debug the agent's reasoning.
 Challenges:
o Expressiveness: Propositional logic is less expressive than first-order logic and may not
capture complex relationships.


o Scalability: Reasoning with large knowledge bases can become computationally intensive.

7. Applications:

 Expert Systems: Encode expert knowledge in specific domains (e.g., medical diagnosis,
financial analysis) and make decisions based on logical inference.
 Automated Planning: Use propositional logic to represent states and actions, enabling the
generation of plans to achieve specified goals.
 Game Playing: Represent and reason about game states, strategies, and moves using logical
formulas.

8. Tools and Techniques:

 Logic Programming: Languages like Prolog are designed for logic-based programming and are
often used to implement propositional logic agents.
 SAT Solvers: Tools that determine the satisfiability of propositional formulas and are used for
decision-making and planning in logic-based agents.

Example: Simple Reflex Agent:

Consider a simple reflex agent that decides to turn on a light if it is dark:

 Propositional Variables: D (it is dark), L (light is on).
 Rule: D → L (if it is dark, then turn on the light).

Example: Model-Based Reflex Agent:

A model-based reflex agent that keeps track of whether a room is occupied:

 Propositional Variables: O (room is occupied), L (light is on).
 Knowledge Base: O → L (if the room is occupied, the light should be on).
 Observation Update: If the agent observes O (the room is occupied), it updates its knowledge base and decides L (turn on the light).
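A minimal Python sketch of both agents: the rule D → L is a stateless function, while the model-based agent keeps O in an internal model (the action names are assumptions for illustration):

def simple_reflex_agent(is_dark):
    # Stateless rule D -> L: act on the current percept only.
    return "turn_on_light" if is_dark else "no_op"

class ModelBasedReflexAgent:
    def __init__(self):
        self.occupied = False            # internal model of the room (O)

    def perceive(self, room_occupied):
        self.occupied = room_occupied    # update the model

    def act(self):
        # Rule O -> L applied to the internal model, not the raw percept.
        return "turn_on_light" if self.occupied else "turn_off_light"

agent = ModelBasedReflexAgent()
agent.perceive(True)
print(simple_reflex_agent(True), agent.act())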

By leveraging propositional logic, AI agents can effectively represent and reason about their
environment, leading to intelligent and reliable behavior in various applications.

First-order Logic Syntax:

First-order logic (FOL), also known as predicate logic, extends propositional logic by adding
quantifiers and predicates, allowing for more expressive representations of knowledge. It is
widely used in artificial intelligence (AI) for knowledge representation, reasoning, and the
development of intelligent agents.


Syntax of First-Order Logic

1. Basic Components

 Constants: Represent specific objects or individuals in the domain (e.g., alice, paris).
 Variables: Represent arbitrary elements of the domain (e.g., x, y, z).
 Predicates: Represent properties of objects or relationships between objects (e.g.,
Likes (alice, ice_cream), Loves(x, y)).
 Functions: Map tuples of objects to an object (e.g., MotherOf(x), which might
denote the mother of x).

2. Atomic Formulas

 Predicates with Arguments: The simplest form of an atomic formula (e.g., Loves(alice,
bob), Has(x, book)).

3. Logical Connectives

 Negation (¬): Represents the negation of a formula (e.g., ¬Loves(alice, bob)).
 Conjunction (∧): Represents the logical AND of two formulas (e.g., Loves(alice, bob) ∧ Likes(bob, chocolate)).
 Disjunction (∨): Represents the logical OR of two formulas (e.g., Loves(alice, bob) ∨ Likes(bob, chocolate)).
 Implication (→): Represents logical implication (e.g., Loves(alice, bob) → Happy(alice)).
 Biconditional (↔): Represents logical equivalence (e.g., Loves(alice, bob) ↔ Loves(bob, alice)).

4. Quantifiers

 Universal Quantifier (∀): Asserts that a formula holds for all elements in the domain (e.g., ∀x (Loves(x, ice_cream)) means "everyone loves ice cream").
 Existential Quantifier (∃): Asserts that there is at least one element in the domain for which the formula holds (e.g., ∃x (Loves(x, chocolate)) means "someone loves chocolate").

Structure of First-Order Logic Formulas

1. Atomic Formulas: The simplest form, consisting of a predicate and its arguments.
o Example: Loves(alice, bob)
2. Complex Formulas: Built from atomic formulas using logical connectives and
quantifiers.
o Example: ∀x (Human(x) → Mortal(x)) means "all humans are mortal."


Examples of First-Order Logic Formulas

1. Simple Relationships:
o Parent(alice, bob): "Alice is a parent of Bob."
o Friends(alice, charlie): "Alice and Charlie are friends."
2. Using Quantifiers:
o ∀x (Human(x) → Mortal(x)): "All humans are mortal."
o ∃x (Human(x) ∧ Loves(x, chocolate)): "There exists a human who loves
chocolate."
3. Combining Connectives and Quantifiers:
o ∀x ∀y (Parent(x, y) → Loves(x, y)): "Every parent loves their child."
o ∃x (Student(x) ∧ ∀y (Class(y) → Attends(x, y))): "There exists a student
who attends every class."

Semantics of First-Order Logic

The meaning (semantics) of a first-order logic formula is determined by interpreting the constants, functions, predicates, and quantifiers over a domain of discourse.

 Domain: The set of objects over which the variables can range.
 Interpretation: Assigns meanings to the constants, functions, and predicates. For example:
o Constants are mapped to specific objects in the domain.
o Functions are mapped to operations on objects in the domain.
o Predicates are mapped to relations among objects in the domain.

Example Interpretation

 Domain: {alice, bob, charlie}


 Interpretation:
o Parent(alice, bob) is true.
o Friends(alice, charlie) is true.
o Human(alice), Human(bob), and Human(charlie) are all true.
o Loves(x, y) is true if x and y are friends.

Applications in AI

 Knowledge Representation: Representing complex relationships and rules in expert systems.


 Automated Reasoning: Deriving new knowledge and making inferences using logical rules.
 Natural Language Processing: Understanding and generating human language based on logical
representations of meaning.
 Planning and Problem Solving: Representing actions, goals, and constraints in planning systems.

First-order logic provides a powerful and flexible framework for representing and reasoning
about knowledge in AI, enabling the development of sophisticated intelligent systems.

USING FIRST ORDER LOGIC


Sentences are added to a knowledge base using TELL, exactly as in propositional logic. Such
sentences are called assertions. For example, we can assert that John is a king, Richard is a person, and all
kings are persons:

TELL(KB, King(John))

TELL(KB, Person(Richard))

TELL(KB, ∀x King(x) ⇒ Person(x))

We can ask questions of the knowledge base using ASK. For example, ASK(KB, King(John)) returns true. Questions asked with ASK are called queries or goals. If we want to know what value of x makes a sentence true, we need a different function, ASKVARS, which we call with ASKVARS(KB, Person(x)) and which yields a stream of answers. In this case there will be two answers: {x/John} and {x/Richard}. Such an answer is called a substitution or binding list.

The kinship domain

The first example we consider is the domain of family relationships, or kinship. This domain
includes facts such as “Elizabeth is the mother of Charles” and “Charles is the father of William” and
rules such as “One’s grandmother is the mother of one’s parent.” Clearly, the objects in our domain are
people. We have two unary predicates, Male and Female. Kinship relations—parenthood, brotherhood,
marriage, and so on—are represented by binary predicates: Parent, Sibling, Brother , Sister , Child ,
Daughter, Son, Spouse, Wife, Husband, Grandparent , Grandchild , Cousin, Aunt, and Uncle. We use
functions for Mother and Father , because every person has exactly one of each of these.

For example, one's mother is one's female parent:

∀m, c Mother(c) = m ⇔ Female(m) ∧ Parent(m, c)

One's husband is one's male spouse:

∀w, h Husband(h, w) ⇔ Male(h) ∧ Spouse(h, w)

Male and female are disjoint categories:

∀x Male(x) ⇔ ¬Female(x)

Parent and child are inverse relations:

∀p, c Parent(p, c) ⇔ Child(c, p)

A grandparent is a parent of one's parent:

∀g, c Grandparent(g, c) ⇔ ∃p Parent(g, p) ∧ Parent(p, c)

A sibling is another child of one's parents:

∀x, y Sibling(x, y) ⇔ x ≠ y ∧ ∃p Parent(p, x) ∧ Parent(p, y)

Each of these sentences can be viewed as an axiom of the kinship domain. Axioms are commonly associated with purely mathematical domains. Our kinship axioms are also definitions; they have the form ∀x, y P(x, y) ⇔ ... The axioms define the Mother function and the Husband, Male, Parent, Grandparent, and


Sibling predicates in terms of other predicates. For example, consider the assertion that
siblinghood is symmetric: ∀ x, y Sibling(x, y) ⇔ Sibling(y, x) .

Natural numbers in first-order logic


The natural numbers can be described in first-order logic. The language of natural numbers has:
a single constant 0, defined by the predicate NatNum(0)
a successor function S(n), which gives the next number after n in the series of natural numbers
The successor axiom is expressed with a quantifier:
∀n NatNum(n) ⇒ NatNum(S(n))
Zero is not the successor of any number:
∀n 0 ≠ S(n)
Two different natural numbers cannot have the same successor:
∀m, n m ≠ n ⇒ S(m) ≠ S(n)
Addition (+) is a function defined on two natural numbers, using equality, in FOL:
∀m, n NatNum(m) ∧ NatNum(n) ⇒ +(S(m), n) = S(+(m, n))
0 is an additive identity:
∀m NatNum(m) ⇒ +(0, m) = m

A set is a collection of objects; any one of the objects in a set is called a member or an element of the set.
The basic statement in set theory is element inclusion: an element a is included in some set S. Formally written as:

a ∈ S

If an element is not included, we write:

a ∉ S

Statements are either true or false, depending on the context. If a statement S is true in a given context C, we say the statement is valid in C. Formally, we write this as:

C ⊨ S

If the statement is not valid in that context, we write:

C ⊭ S

The operators to compose new sets out of existing ones are:

1. A special set is the empty set ∅, which contains no elements at all: ∀x x ∉ ∅
2. Union: create a set S containing all elements from A, from B, or from both. Formally: S = A ∪ B ⇔ ∀x (x ∈ S ⇔ x ∈ A ∨ x ∈ B)
3. Intersection: create a set S containing all elements that are both in A and in B. Formally: S = A ∩ B ⇔ ∀x (x ∈ S ⇔ x ∈ A ∧ x ∈ B)
4. Exclusion: create a set S from the elements of A that are not in B. Formally: S = A \ B ⇔ ∀x (x ∈ S ⇔ x ∈ A ∧ x ∉ B)
These sets can be interpreted as quantified statements, as shown above.


Subsets:

∀ s1, s2 s1 ⊆ s2 ⇔ (∀ x x ∈ s1 ⇒ x ∈ s2) .

Equality of two sets:

∀ s1, s2 (s1 = s2) ⇔ (s1 ⊆ s2 ∧ s2 ⊆ s1) .

First Order Logic with Wumpus World


The wumpus agent receives a percept vector with five elements. The corresponding first-order sentence stored in the knowledge base must include both the percept and the time at which it occurred; otherwise, the agent will get confused about when it saw what. We use integers for time steps. A typical percept sentence would be Percept([Stench, Breeze, Glitter, None, None], 5). Here, Percept is a binary predicate, and Stench and so on are constants placed in a list.
The actions in the wumpus world can be represented by logical terms: Turn(Right ),
Turn(Left ), Forward , Shoot , Grab, Climb .
To determine which is best, the agent program executes the query ASKVARS(∃ a
BestAction(a, 5)) , which returns a binding list such as {a/Grab}.
The agent program can then return Grab as the action to take.
The raw percept data implies certain facts about the current state. For example:
∀t, s, g, m, c Percept([s, Breeze, g, m, c], t) ⇒ Breeze(t)
∀t, s, b, m, c Percept([s, b, Glitter, m, c], t) ⇒ Glitter(t)
These rules exhibit a trivial form of the reasoning process called perception.
Simple “reflex” behavior can also be implemented by quantified implication sentences.
For example, we have ∀ t Glitter (t) ⇒ BestAction(Grab, t) .
Given the percept and rules from the preceding paragraphs, this would yield the desired
conclusion BestAction(Grab, 5)—that is, Grab is the right thing to do.
For example, if the agent is at a square and perceives a breeze, then that square is breezy:
∀ s, t At(Agent, s, t) ∧ Breeze(t) ⇒ Breezy(s) . It is useful to know that a square is breezy
because we know that the pits cannot move about. Notice that Breezy has no time
argument.
Having discovered which places are breezy (or smelly) and, very importantly, not breezy (or not smelly), the agent can deduce where the pits are (and where the Wumpus is). First-order logic needs just one axiom: ∀s Breezy(s) ⇔ ∃r Adjacent(r, s) ∧ Pit(r).

Inference in First-Order Logic


Inference in First-Order Logic is used to deduce new facts or sentences from existing
sentences. Before understanding the FOL inference rule, let's understand some basic
terminologies used in FOL.


Substitution is a fundamental operation performed on terms and formulas. It occurs in all inference systems in first-order logic. Substitution is complex in the presence of quantifiers in FOL. If we write F[a/x], it refers to substituting the constant "a" in place of the variable "x" in F.
Equality: First-order logic does not only use predicates and terms for making atomic sentences; it also supports equality. We can use the equality symbol to specify that two terms refer to the same object.
Example: Brother(John) = Smith.
Here, the object referred to by Brother(John) is the same as the object referred to by Smith. The equality symbol can also be used with negation to state that two terms are not the same object.
Example: ¬(x = y), which is equivalent to x ≠ y.

FOL inference rules for quantifiers:

As in propositional logic, we also have inference rules in first-order logic. Following are some basic inference rules in FOL:

o Universal Instantiation
o Existential Instantiation
Universal Instantiation:

o Universal instantiation, also called universal elimination or UI, is a valid inference rule. It can be applied multiple times to add new sentences.
o The new KB is logically equivalent to the previous KB.
o As per UI, we can infer any sentence obtained by substituting a ground term for the variable.
o The UI rule states that we can infer any sentence P(g) by substituting a ground term g (a term without variables) for a universally quantified variable v in ∀v P(v).
Example: 1.
o From "Every person likes ice-cream" (∀x Likes(x, IceCream)) we can infer
"John likes ice-cream": Likes(John, IceCream)
Example: 2.
o "All kings who are greedy are Evil." So let our knowledge base contains this detail as in
the form of FOL:
o ∀x king(x) ∧ greedy (x) → Evil (x),
So from this information, we can infer any of the following statements using Universal Instantiation:

o King(John) ∧ Greedy (John) → Evil (John),


o King(Richard) ∧ Greedy (Richard) → Evil (Richard),
o King(Father(John)) ∧ Greedy (Father(John)) → Evil (Father(John)),
Existential Instantiation:


o Existential instantiation, also called Existential Elimination, is a valid inference rule in first-order logic.
o It can be applied only once to replace the existential sentence.
o The new KB is not logically equivalent to old KB, but it will be satisfiable if old KB was
satisfiable.
o It can be represented as: from ∃x P(x), infer P(K) for a new constant symbol K.

Example:

From the given sentence: ∃x Crown(x) ∧ OnHead(x, John),

So we can infer: Crown(K) ∧ OnHead( K, John), as long as K does not appear in the knowledge base.

o The K used above is a constant symbol, which is called a Skolem constant.

o Existential instantiation is a special case of the Skolemization process.

Generalized Modus Ponens Rule:


For the inference process in FOL, we have a single inference rule called Generalized Modus Ponens. It is a lifted version of Modus Ponens.
Generalized Modus Ponens can be summarized as: "P implies Q and P is asserted to be true, therefore Q must be true."
According to Generalized Modus Ponens, for atomic sentences pi, pi′, q, where there is a substitution θ such that SUBST(θ, pi′) = SUBST(θ, pi), the rule can be written as:

(p1′, p2′, ..., pn′, (p1 ∧ p2 ∧ ... ∧ pn ⇒ q)) ⊢ SUBST(θ, q)
Example:
We will use this rule for "All greedy kings are evil": we need to find some x such that x is a king and x is greedy, so that we can infer that x is evil.

1. p1′ is King(John)        p1 is King(x)
2. p2′ is Greedy(y)         p2 is Greedy(x)
3. θ is {x/John, y/John}    q is Evil(x)
4. SUBST(θ, q) is Evil(John).

Unification:
o Unification is a process of making two different logical atomic expressions identical by finding a
substitution. Unification depends on the substitution process.
o It takes two literals as input and makes them identical using substitution.


o Let Ψ1 and Ψ2 be two atomic sentences and σ be a unifier such that Ψ1σ = Ψ2σ; then the unification is expressed as UNIFY(Ψ1, Ψ2).
o Example: Find the MGU for Unify{King(x), King(John)}
Let Ψ1 = King(x), Ψ2 = King(John).

The substitution θ = {John/x} is a unifier for these atoms: applying this substitution makes both expressions identical.

o The UNIFY algorithm is used for unification, which takes two atomic sentences and returns a
unifier for those sentences (If any exist).
o Unification is a key component of all first-order inference algorithms.
o It returns fail if the expressions do not match with each other.
o The simplest, most general such substitution is called the Most General Unifier, or MGU.
Conditions for Unification:
Following are some basic conditions for unification:
o Predicate symbol must be same, atoms or expression with different predicate symbol can never be
unified.
o Number of Arguments in both expressions must be identical.
o Unification will fail if there are two similar variables present in the same expression.
Unification Algorithm:
Algorithm: Unify(Ψ1, Ψ2)
Step. 1: If Ψ1 or Ψ2 is a variable or constant, then:
a) If Ψ1 and Ψ2 are identical, then return NIL.
b) Else if Ψ1is a variable,
a. then if Ψ1 occurs in Ψ2, then return FAILURE
b. Else return { (Ψ2/ Ψ1)}.
c) Else if Ψ2 is a variable,
a. If Ψ2 occurs in Ψ1 then return FAILURE,
b. Else return {( Ψ1/ Ψ2)}.
d) Else return FAILURE.
Step.2: If the initial Predicate symbol in Ψ1 and Ψ2 are not same, then return FAILURE.
Step. 3: IF Ψ1 and Ψ2 have a different number of arguments, then return FAILURE.
Step. 4: Set Substitution set(SUBST) to NIL.
Step. 5: For i=1 to the number of elements in Ψ1.
a) Call Unify function with the ith element of Ψ1 and ith element of Ψ2, and put the
result into S.
b) If S = failure then returns Failure
c) If S ≠ NIL then do,
a. Apply S to the remainder of both Ψ1 and Ψ2.
b. SUBST= APPEND(S, SUBST).
Step.6: Return SUBST.
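A compact Python sketch of this algorithm, representing compound terms as tuples and variables as strings prefixed with "?" (the representation is an assumption for illustration; it includes the occurs check):

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def substitute(t, s):
    # Walk variable bindings and apply them through compound terms.
    if is_var(t):
        return substitute(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return tuple(substitute(a, s) for a in t)
    return t

def occurs(v, t, s):
    # Occurs check: does variable v appear inside term t?
    t = substitute(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t)

def unify(x, y, s=None):
    # Return the most general unifier of x and y, or None on failure.
    if s is None:
        s = {}
    x, y = substitute(x, s), substitute(y, s)
    if x == y:
        return s
    if is_var(x):
        return None if occurs(x, y, s) else {**s, x: y}
    if is_var(y):
        return unify(y, x, s)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):          # same predicate symbol and arity
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None                          # different predicate symbols fail

print(unify(("knows", "Richard", "?x"),
            ("knows", "Richard", "John")))      # {'?x': 'John'}
print(unify(("p", "?x", "?x"),
            ("p", "?z", ("f", "?z"))))          # None (occurs check)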

For each pair of the following atomic sentences find the most general unifier (If exist).

1. Find the MGU of {p(f(a), g(Y)) and p(X, X)}

Sol: S0 => Here, Ψ1 = p(f(a), g(Y)), and Ψ2 = p(X, X)
SUBST θ = {f(a)/X}
S1 => Ψ1 = p(f(a), g(Y)), and Ψ2 = p(f(a), f(a))
Now g(Y) must unify with f(a), but the function symbols g and f differ, so unification fails.
Unification is not possible for these expressions.

2. Find the MGU of {p(b, X, f(g(Z))) and p(Z, f(Y), f(Y))}

Here, Ψ1 = p(b, X, f(g(Z))), and Ψ2 = p(Z, f(Y), f(Y))
S0 => {p(b, X, f(g(Z))); p(Z, f(Y), f(Y))}
SUBST θ = {b/Z}
S1 => {p(b, X, f(g(b))); p(b, f(Y), f(Y))}
SUBST θ = {f(Y)/X}
S2 => {p(b, f(Y), f(g(b))); p(b, f(Y), f(Y))}
SUBST θ = {g(b)/Y}
S3 => {p(b, f(g(b)), f(g(b))); p(b, f(g(b)), f(g(b)))} Unified successfully.
And Unifier = {b/Z, f(Y)/X, g(b)/Y}.

3. Find the MGU of {p(X, X) and p(Z, f(Z))}

Here, Ψ1 = p(X, X), and Ψ2 = p(Z, f(Z))
S0 => {p(X, X); p(Z, f(Z))}
SUBST θ = {X/Z}
S1 => {p(Z, Z); p(Z, f(Z))}
Now Z must unify with f(Z), but Z occurs inside f(Z) (the occurs check), so unification fails.

4. UNIFY(knows(Richard, x), knows(Richard, John))


Here, Ψ1 = knows(Richard, x), and Ψ2 = knows(Richard, John)
S0 => { knows(Richard, x); knows(Richard, John)}
SUBST θ= {John/x}
S1 => { knows(Richard, John); knows(Richard, John)}, Successfully Unified.
Unifier: {John/x}.

Forward Chaining:

Forward chaining is a method used in artificial intelligence (AI) for reasoning and inferencing. It
is commonly employed in expert systems and rule-based systems to derive conclusions or make
decisions based on a set of known facts and inference rules. Forward chaining works by starting
with known facts and applying inference rules to extract more facts until a goal is reached or no
more rules can be applied.

How Forward Chaining Works

1. Initialization: Begin with a set of known facts (initial data) and a set of inference rules.
2. Inference: Apply the inference rules to the known facts to generate new facts.
3. Iteration: Repeat the inference process, adding new facts to the set of known facts, until
no new facts can be derived or a specific goal is achieved.


Components of Forward Chaining

 Facts: Assertions about the world, usually expressed as predicates or propositions (e.g.,
Human(Socrates)).
 Rules: Implications that define how new facts can be inferred from existing facts. A rule
has the form If condition(s) Then conclusion.
 Working Memory: A dynamic collection of known facts that is updated as new facts are
inferred.

Example of Forward Chaining

Consider a simple medical diagnosis system:

Facts

 Fever(John)
 Cough(John)
 SoreThroat(John)

Rules

1. If Fever(x) And Cough(x) Then Flu(x)


2. If SoreThroat(x) And Cough(x) Then Cold(x)
3. If Flu(x) Then BedRest(x)

Process

1. Initial Facts:
o Fever(John)
o Cough(John)
o SoreThroat(John)
2. First Iteration:
o Apply Rule 1: Fever(John) And Cough(John) ⇒ Flu(John)
o Add Flu(John) to the working memory.
3. Second Iteration:
o Apply Rule 2: SoreThroat(John) And Cough(John) ⇒ Cold(John)
o Add Cold(John) to the working memory.
4. Third Iteration:
o Apply Rule 3: Flu(John) ⇒ BedRest(John)
o Add BedRest(John) to the working memory.
5. Final Facts:
o Fever(John)
o Cough(John)
o SoreThroat(John)
o Flu(John)


o Cold(John)
o BedRest(John)
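A minimal sketch of this walkthrough as a fixed-point loop in Python, with the rules written as (premises, conclusion) pairs over ground atoms (the atoms follow the example above; a full system would also match variables like x):

facts = {"Fever(John)", "Cough(John)", "SoreThroat(John)"}

rules = [
    ({"Fever(John)", "Cough(John)"}, "Flu(John)"),
    ({"SoreThroat(John)", "Cough(John)"}, "Cold(John)"),
    ({"Flu(John)"}, "BedRest(John)"),
]

while True:
    # Fire every rule whose premises are all known facts.
    new = {conclusion for premises, conclusion in rules
           if premises <= facts and conclusion not in facts}
    if not new:              # fixed point: nothing new can be derived
        break
    facts |= new

print(sorted(facts))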

Advantages of Forward Chaining

 Data-Driven: Begins with available data and applies rules to derive conclusions, which
is useful in situations where all data may not be initially known.
 Incremental Reasoning: Can handle new facts as they become available, making it
suitable for dynamic environments.
 Simplicity: Easy to implement and understand, especially for rule-based systems.

Disadvantages of Forward Chaining

 Efficiency: Can be less efficient than backward chaining in some cases, as it may
generate many intermediate facts that are not useful.
 Comprehensiveness: May need a large number of rules and facts to derive complex
conclusions, potentially leading to combinatorial explosion.

Applications of Forward Chaining

 Expert Systems: Used for diagnostics, troubleshooting, and decision-making in domains


like medicine, engineering, and finance.
 Production Systems: Applied in industrial automation and manufacturing for process
control and optimization.
 Business Rules Engines: Used to automate business processes by applying rules to data
to make decisions or trigger actions.

Comparison with Backward Chaining

 Forward Chaining: Starts with known facts and applies rules to infer new facts until a
goal is reached.
 Backward Chaining: Starts with a goal and works backwards by finding rules that can
achieve that goal, checking if the conditions for those rules are met.

Both forward and backward chaining are fundamental inference methods in rule-based AI
systems, each with its own strengths and appropriate use cases.

Backward chaining:

Backward chaining is a reasoning method used in artificial intelligence (AI) for goal-driven
inference. It starts with a goal (or a set of goals) and works backward through a set of inference
rules to determine the necessary conditions (facts) that must be true to achieve that goal. This
approach is often used in expert systems and logic programming to derive solutions or make
decisions.


How Backward Chaining Works

1. Initialization: Start with a goal that needs to be achieved.


2. Search for Rules: Identify rules that can conclude the goal.
3. Check Conditions: For each rule, check if the conditions (premises) of the rule are
already known facts.
4. Recursive Process: If the conditions are not known facts, treat them as new sub-goals
and repeat the process.
5. Terminate: The process continues until all sub-goals are satisfied or no applicable rules
are found.

Components of Backward Chaining

 Goal: The target proposition or statement that the system is trying to prove.
 Facts: Known propositions or statements about the domain.
 Rules: Implications that define how facts and sub-goals relate to each other. A rule
typically has the form If condition(s) Then conclusion.
 Working Memory: A dynamic collection of known facts that is updated as new facts are
inferred.

Example of Backward Chaining

Consider a simple animal identification system:

Goal

 Identify if the animal is a bird.

Facts

 HasFeathers(Tweety)
 LaysEggs(Tweety)

Rules

1. If HasFeathers(x) Then Bird(x)


2. If LaysEggs(x) Then Bird(x)

Process

1. Initial Goal: Determine Bird(Tweety).


2. Find Applicable Rules:
o Rule 1: If HasFeathers(Tweety) Then Bird(Tweety)
o Rule 2: If LaysEggs(Tweety) Then Bird(Tweety)
3. Check Conditions:


o Rule 1: Check HasFeathers(Tweety), which is a known fact.
o Rule 2: Check LaysEggs(Tweety), which is also a known fact.
4. Satisfy Goal: Since both conditions are met, conclude Bird(Tweety).

Advantages of Backward Chaining

 Goal-Driven: Efficient for problems where the goal is specified, as it focuses directly on
achieving the goal.
 Reduced Search Space: By focusing on the goal and working backward, it often
examines fewer possibilities compared to forward chaining.
 Natural for Certain Applications: Well-suited for applications like diagnostics, where
the goal is to identify the cause of observed symptoms.

Disadvantages of Backward Chaining

 Complex Rule Management: Can become complex if there are many rules and sub-
goals to manage.
 Efficiency: May require extensive backtracking if multiple rules can achieve the same
goal or if the rules are deeply nested.
 Not Suitable for All Problems: Less effective in situations where data is continuously
changing or when it's difficult to specify a clear goal.

Applications of Backward Chaining

 Expert Systems: Used in medical diagnosis, legal reasoning, and other domains where
conclusions need to be drawn from known data and rules.
 Logic Programming: Employed in languages like Prolog for solving problems by
defining goals and rules.
 Automated Theorem Proving: Used to prove mathematical theorems by working
backward from the theorem to be proved.

Comparison with Forward Chaining

 Backward Chaining: Starts with a goal and works backward to find supporting facts.
Efficient for goal-driven problems.
 Forward Chaining: Starts with known facts and applies rules to derive new facts until a
goal is reached. Suitable for data-driven problems.

Example in Prolog

Prolog is a logic programming language that uses backward chaining for reasoning. Here's a
simple example:



% Facts
has_feathers(tweety).
lays_eggs(tweety).

% Rules
bird(X) :- has_feathers(X).
bird(X) :- lays_eggs(X).

% Query
?- bird(tweety).

In this example, the query bird(tweety) starts with the goal of proving bird(tweety). Prolog uses
backward chaining to check the rules and finds that both has_feathers(tweety) and
lays_eggs(tweety) are true, thus confirming the goal.

Backward chaining is a powerful inference method in AI, enabling systems to achieve goals by
systematically working backward through known rules and facts.
