Unit 3 AI
A knowledge-based agent has two main components: a knowledge base (KB), which stores sentences about the world, and an inference system, which derives new sentences from the KB and decides what to do.
function KB-AGENT(percept) returns an action
    persistent: KB, a knowledge base
                t, a counter, initially 0, indicating time
    TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
    action = ASK(KB, MAKE-ACTION-QUERY(t))
    TELL(KB, MAKE-ACTION-SENTENCE(action, t))
    t = t + 1
    return action
The knowledge-based agent takes a percept as input and returns an action as output. The agent maintains the knowledge base, KB, which initially contains some background knowledge of the real world. It also has a counter t to indicate the time for the whole process; this counter is initialized to zero.
MAKE-PERCEPT-SENTENCE constructs a sentence asserting that the agent perceived the given percept at the given time.
MAKE-ACTION-QUERY constructs a sentence that asks which action should be done at the current time.
MAKE-ACTION-SENTENCE constructs a sentence asserting that the chosen action was executed.
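To make this loop concrete, here is a minimal, runnable Python sketch of the same agent program. The ToyKB class, the sentence-building helpers, and the always-"Forward" policy are illustrative placeholders only; a real agent would store logical sentences and run an inference procedure inside ask.

# A toy, runnable sketch of the KB-AGENT loop (assumed representation:
# sentences are plain strings, and ASK uses a trivial fixed policy).

def make_percept_sentence(percept, t):
    return f"Percept({percept}, {t})"

def make_action_query(t):
    return f"Action?({t})"

def make_action_sentence(action, t):
    return f"Action({action}, {t})"

class ToyKB:
    def __init__(self):
        self.sentences = set()          # background knowledge would go here

    def tell(self, sentence):
        self.sentences.add(sentence)

    def ask(self, query):
        return "Forward"                # a real KB would infer the best action

class KBAgent:
    def __init__(self, kb):
        self.kb = kb
        self.t = 0                      # time counter, initially 0

    def __call__(self, percept):
        self.kb.tell(make_percept_sentence(percept, self.t))
        action = self.kb.ask(make_action_query(self.t))
        self.kb.tell(make_action_sentence(action, self.t))
        self.t += 1
        return action

agent = KBAgent(ToyKB())
print(agent(["Stench", "Breeze", None, None, None]))   # Forward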
2. Logical level:
At this level, we consider how knowledge is represented and stored: sentences are encoded into different logics, i.e., an encoding of knowledge into logical sentences occurs. At the logical level, we can expect the automated taxi agent to reach destination B.
3. Implementation level:
This is the physical representation of logic and knowledge. At the implementation level, the agent performs actions according to the logical and knowledge levels. At this level, the automated taxi agent actually implements its knowledge and logic so that it can reach the destination.
Wumpus world:
The Wumpus world is a simple world example to illustrate the worth of a knowledge-based
agent and to represent knowledge representation. It was inspired by a video game Hunt the
Wumpus by Gregory Yob in 1973.
The Wumpus world is a cave consisting of a 4×4 grid of rooms connected by passageways, so there are 16 rooms in total. A knowledge-based agent moves through this world. The cave has a room containing a beast called the Wumpus, which eats anyone who enters its room. The Wumpus can be shot by the agent, but the agent has only a single arrow. The Wumpus world also contains some rooms with bottomless pits; if the agent falls into a pit, it is stuck there forever. The interesting thing about this cave is that in one room there is a possibility of finding a heap of gold. So the agent's goal is to find the gold and climb out of the cave without falling into a pit or being eaten by the Wumpus. The agent gets a reward if it comes out with the gold, and it gets a penalty if it is eaten by the Wumpus or falls into a pit.
The rooms adjacent to the Wumpus room are smelly, so the agent perceives a stench in them.
The rooms adjacent to a pit are breezy, so if the agent reaches a room next to a pit, it will perceive a breeze.
There will be glitter in a room if and only if the room has gold.
The Wumpus can be killed by the agent if the agent is facing it, and when killed the Wumpus emits a horrible scream which can be heard anywhere in the cave.
Sensors:
The agent will perceive the stench if he is in the room adjacent to the
Wumpus. (Not diagonally).
The agent will perceive breeze if he is in the room directly adjacent to the
Pit.
The agent will perceive the glitter in the room where the gold is present.
The agent will perceive the bump if he walks into a wall.
When the Wumpus is shot, it emits a horrible scream which can be
perceived anywhere in the cave.
These percepts can be represented as a five-element list, with a different indicator for each sensor.
For example, if the agent perceives a stench and a breeze, but no glitter, no bump, and no scream, the percept can be represented as:
[Stench, Breeze, None, None, None]
At room [2,2] there is no stench and no breeze, so suppose the agent decides to move to [2,3]. At room [2,3] the agent perceives glitter, so it should grab the gold and climb out of the cave.
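The percept list and the simple "no stench and no breeze means the neighbours are safe" inference can be sketched in Python as follows; the helper names, the 4×4 coordinate convention, and the safe_rooms rule are my own simplifications of the rules above.

def percept(stench, breeze, glitter, bump, scream):
    # Build the five-element percept list, e.g. [Stench, Breeze, None, None, None].
    return ["Stench" if stench else None,
            "Breeze" if breeze else None,
            "Glitter" if glitter else None,
            "Bump" if bump else None,
            "Scream" if scream else None]

def neighbours(room):
    # Rooms directly adjacent (not diagonally) inside the 4x4 grid.
    x, y = room
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(i, j) for i, j in candidates if 1 <= i <= 4 and 1 <= j <= 4]

def safe_rooms(visited_percepts):
    # If a visited room has neither stench nor breeze, its neighbours hold
    # neither the Wumpus nor a pit, so they are safe to enter.
    safe = set()
    for room, p in visited_percepts.items():
        if p[0] is None and p[1] is None:
            safe.update(neighbours(room))
    return safe

visited = {(2, 2): percept(False, False, False, False, False)}
print(safe_rooms(visited))   # includes (2, 3), so moving to [2,3] is safe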
Logic:
Logic plays a fundamental role in artificial intelligence (AI) by providing a formal framework
for representing and reasoning about knowledge. Here are some key aspects of logic in AI:
Inference and Reasoning: Logic provides rules for drawing conclusions from known facts
and making inferences based on logical deductions. AI systems use logical reasoning to
derive new knowledge from existing knowledge.
Predicate Logic: Predicate logic is a common formalism used in AI for representing and
reasoning about statements involving quantifiers (such as "for all" and "there exists") and
predicates (relations between objects).
First-Order Logic (FOL): FOL extends predicate logic to include variables, quantifiers,
and functions. It allows for more expressive representations of knowledge and more
complex reasoning.
Expert Systems: Expert systems are AI systems that use logical rules to emulate the
reasoning of human experts in specific domains. These systems encode expert knowledge
in the form of logical rules and use inference mechanisms to make decisions and provide
advice.
Overall, logic provides a powerful framework for representing and reasoning about knowledge in
AI systems, enabling them to perform complex tasks and make intelligent decisions.
All of these claims follow from the original claim. They "follow" in the sense that if the original claim is in fact true, then the conclusion must be true. There is a sense in which they "mean the same thing": they describe the same world, or claim the same thing to be true about the world. A lot of logical consequence is like this: it is a relationship of "following" or "entailment" between statements that mean essentially the same thing. Other logical consequences are cases where one statement entails another (if the first is true, the second must be true), but not because they mean essentially the same thing; instead, it is because the first statement makes a "stronger" claim than the second.
Logical entailment is denoted by α ⊨ β, which means that β follows from α: in every model in which α is true, β is also true.
E.g., the sentence x = 0 entails the sentence xy = 0.
Entailment is like the needle being in the haystack; inference is like finding it. This distinction is embodied in some formal notation: if an inference algorithm i can derive α from KB, we write KB ⊢i α, which is pronounced "α is derived from KB by i" or "i derives α from KB".
Grounding:
It is the connection between logical reasoning processes and the real environment in which the
agent exists. In particular, how do we know that KB is true in the real world? A simple answer
is that the agent’s sensors create the connection. For example, the wumpus-world agent has a
smell sensor. The agent program creates a suitable sentence whenever there is a smell. Then,
whenever that sentence is in the knowledge base, it is true in the real world.
Atomic propositions are the simplest propositions; each consists of a single proposition symbol. These are sentences that must be either true or false.
Logical connectives are used to connect two simpler propositions or to represent a sentence logically. We can create compound propositions with the help of logical connectives. There are mainly five connectives, which are given as follows:
1. Negation: A sentence such as ¬P is called the negation of P. A literal can be either a positive literal or a negative literal.
2. Conjunction: A sentence that has the ∧ connective, such as P ∧ Q, is called a conjunction.
3. Disjunction: A sentence that has the ∨ connective, such as P ∨ Q, is called a disjunction.
4. Implication: A sentence such as P → Q is called an implication; P is the premise and Q is the conclusion.
5. Biconditional: A sentence such as P ↔ Q is a biconditional, read as "P if and only if Q".
Truth Table:
In propositional logic, we need to know the truth values of propositions in all possible scenarios.
We can combine all the possible combinations with logical connectives, and the representation of these combinations in a tabular format is called a truth table. Following are the truth tables for all the logical connectives:
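Since these tables are easy to check by enumeration, here is a short Python sketch (an illustration, not part of the original notes) that prints the combined truth table for the five connectives.

from itertools import product

def implies(p, q):
    return (not p) or q

def iff(p, q):
    return p == q

print(f"{'P':<7}{'Q':<7}{'¬P':<7}{'P∧Q':<7}{'P∨Q':<7}{'P→Q':<7}{'P↔Q':<7}")
for p, q in product([True, False], repeat=2):
    row = [p, q, not p, p and q, p or q, implies(p, q), iff(p, q)]
    print("".join(f"{str(v):<7}" for v in row))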
and only if." In mathematics, logical equivalence is typically symbolized by a double arrow (⟺
or ⟷) or triple lines (≡).
This expression provides an example of logical equivalence between two simple statements:
A∨B⟺B∨A
The expression includes the statements A ∨ B and B ∨ A, which are connected by the IFF (if and only if) operator. Each statement uses the OR Boolean function (∨) to indicate an inclusive disjunction between variables A and B. This means that each statement returns a true value if either variable is true or if both variables are true, but it returns a false value if both variables are false. The expression as a whole states that the statement "variable A or variable B" is logically equivalent to the statement "variable B or variable A."
Inference rules:
Inference rules are templates for generating valid arguments. Inference rules are applied to derive proofs in artificial intelligence, and a proof is a sequence of conclusions that leads to the desired goal. In inference rules, the implication among all the connectives plays an important role. Following are some terminologies related to inference rules:
Implication: One of the logical connectives, written P → Q (read "if P then Q").
Converse: The converse of the implication swaps the two sides: the right-hand side proposition goes to the left-hand side and vice versa. It can be written as Q → P.
Contrapositive: The negation of the converse is termed the contrapositive, and it can be represented as ¬Q → ¬P.
Inverse: The negation of the implication is called the inverse. It can be represented as ¬P → ¬Q.
Proof:
Modus Ponens:
The Modus Ponens rule is one of the most important rules of inference. It states that if P and P → Q are true, then we can infer that Q is true. It can be represented as:
P, P → Q ⊢ Q
Example:
Statement 1: "If I am sleepy, then I go to bed." (P → Q)
Statement 2: "I am sleepy." (P)
Conclusion: "I go to bed." (Q)
Hence, we can say that if P → Q is true and P is true, then Q will be true.
AND Elimination:
This rule states that if P ∧ Q is true, then P is true and Q is true; either conjunct can be inferred on its own. It can be represented as:
P ∧ Q ⊢ P  and  P ∧ Q ⊢ Q
Proof: immediate from the truth table of ∧, since P ∧ Q is true only in the row where both P and Q are true.
Proof by resolution,
The idea of resolution is simple: if we know that
o p is true or q is true
o and we also know that p is false or r is true
o then it must be the case that q is true or r is true.
This line of reasoning is formalized in the resolution tautology:
(p ∨ q) ∧ (¬p ∨ r) → (q ∨ r)
Eg: Given the following hypotheses:
If it rains, Joe brings his umbrella (r -> u)
If Joe has an umbrella, he doesn't get wet (u -> NOT w)
If it doesn't rain, Joe doesn't get wet (NOT r -> NOT w)
Prove that Joe doesn't get wet (NOT w).
We first put each hypothesis in CNF:
a. r -> u == (NOT r OR u)
b. u -> NOT w == (NOT u OR NOT w)
c. NOT r -> NOT w == (r OR NOT w)
Then we resolve the clauses:
1. NOT r OR u          Premise (a)
2. NOT u OR NOT w      Premise (b)
3. r OR NOT w          Premise (c)
4. NOT r OR NOT w      Resolve 1 and 2 on u
5. NOT w               Resolve 3 and 4 on r
Hence NOT w follows: Joe doesn't get wet.
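The same refutation can be carried out mechanically. Below is a minimal propositional resolution sketch in Python; the clause representation (frozensets of string literals, with "~" marking negation) and the entails helper are my own illustrative choices, not a standard library.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    # All clauses obtained by resolving c1 with c2 on a complementary literal.
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

def entails(kb_clauses, goal_lit):
    # Refutation: add the negated goal and search for the empty clause.
    clauses = set(kb_clauses) | {frozenset({negate(goal_lit)})}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a != b:
                    for r in resolve(a, b):
                        if not r:              # empty clause: contradiction
                            return True
                        new.add(r)
        if new.issubset(clauses):              # no progress: not entailed
            return False
        clauses |= new

kb = {frozenset({"~r", "u"}),    # r -> u
      frozenset({"~u", "~w"}),   # u -> NOT w
      frozenset({"r", "~w"})}    # NOT r -> NOT w
print(entails(kb, "~w"))         # True: Joe doesn't get wet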
Horn Clause in AI
The term "horn clause" refers to a disjunction of literals in which, at most, one literal is not
negated. A horn clause is a clause that has exactly one literal that is not negated.The
logician Alfred Horn first recognized the importance of Horn clauses in 1951. Horn clauses
are a type of logical formula used in logic programming, formal specification, universal
algebra, and model theory due to their helpful qualities in these areas and others.
Definite clause / Strict Horn clause – It has precisely one positive literal.
Unit clause - Definite clause containing no negative literals.
Goal clause – Horn clause lacking a literal positive.
Horn clauses perform a fundamental role in both constructive and computational logic.
Syntax:
Inference Engines:
Forward chaining: Forward chaining is a form of reasoning for an AI expert system that starts
with simple facts and applies inference rules to extract more data until the goal is reached.
Backward chaining: Backward chaining is another strategy used to shape an AI expert system
that starts with the end goal and works backward through the AI’s rules to find facts that support
the goal.
Forward Chaining:
Forward chaining is also known as a forward deduction or forward reasoning method
when using an inference engine. The forward-chaining algorithm starts from known facts,
triggers all rules whose premises are satisfied and adds their conclusion to the known facts. This
process repeats until the problem is solved. In this type of chaining, the inference engine starts by
evaluating existing facts, derivations, and conditions before deducing new information. An
endpoint, or goal, is achieved through the manipulation of knowledge that exists in the
knowledge base.
Knowledgebase:
1. John’s credit score is 780.
2. A person with a credit score greater than 700 has never defaulted on their loan.
3. John has an annual income of $100,000.
4. A person with a credit score greater than 750 is a low-risk borrower.
5. A person with a credit score between 600 to 750 is a medium-risk borrower.
6. A person with a credit score less than 600 is a high-risk borrower.
7. A low-risk borrower can be given a loan amount up to 4X of his annual income at a 10 percent
interest rate.
8. A medium-risk borrower can be given a loan amount of up to 3X of his annual income at a 12
percent interest rate.
9. A high-risk borrower can be given a loan amount of up to 1X of his annual income at a 16 percent
interest rate.
Question: What is the maximum loan amount that can be sanctioned for John, and at what interest rate?
Solution:
To deduce the conclusion, we apply forward chaining on the knowledge base. We start from the
facts which are given in the knowledge base and go through each one of them to deduce
intermediate conclusions until we are able to reach the final conclusion or have sufficient
evidence to negate the same.
John's credit score = 780 AND a person with a credit score > 750 is a low-risk borrower → John is a low-risk borrower.
The loan amount for a low-risk borrower is 4X annual income AND John's annual income is $100k
→ The maximum loan amount that can be sanctioned is $400k at a 10% interest rate.
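The same deduction can be expressed directly in code. The sketch below encodes rules 4–9 of the knowledge base; the function names and the return format are my own.

def risk_category(credit_score):
    # Rules 4-6: classify the borrower by credit score.
    if credit_score > 750:
        return "low"
    if credit_score >= 600:
        return "medium"
    return "high"

def max_loan_offer(credit_score, annual_income):
    # Rules 7-9: (income multiple, interest rate) per risk category.
    terms = {"low": (4, 0.10), "medium": (3, 0.12), "high": (1, 0.16)}
    multiple, rate = terms[risk_category(credit_score)]
    return multiple * annual_income, rate

print(max_loan_offer(780, 100_000))   # (400000, 0.1) -- John's case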
Backward Chaining
Backward chaining is also known as a backward deduction or backward reasoning method when using an
inference engine. In this, the inference engine knows the final decision or goal. The system starts from the
goal and works backward to determine what facts must be asserted so that the goal can be achieved.
For example, it starts directly with the conclusion (hypothesis) and validates it by backtracking
through a sequence of facts. Backward chaining can be used in debugging, diagnostics and prescription
applications.
Now, to apply backward chaining, we start from the goal and assume that John is the tallest boy
in class. From there, we go backward through the knowledge base comparing that assumption to
each known fact to determine whether it is true that John is the tallest boy in class or not.
Height (John) > Height (anyone in the class)
AND
John and Kim both are in the same class
AND
Height (Kim) > Height (anyone in the class except John)
AND
John is boy
SO
Height(John) > Height(Kim)
which aligns with the knowledge base facts. Hence the goal is proved true.
The above figure is an example of a simple AND-OR graph: buying a car is broken down into smaller problems or tasks that can be accomplished to achieve the main goal. One alternative is to steal a car, which accomplishes the main goal; the other is to use your own money to purchase a car, which also accomplishes the main goal. The AND symbol is used to indicate the AND part of the graph, meaning that all subproblems joined by the AND must be resolved before the parent node or problem can be completed.
Automation: Enhancing the automation of model checking processes to reduce the need for
manual intervention.
Combining with Other Methods: Integrating model checking with other verification and
validation methods, such as testing and runtime verification, for comprehensive assurance.
By leveraging these techniques and tools, propositional model checking can effectively
contribute to the development of reliable and trustworthy AI systems.
Agents based on propositional logic in artificial intelligence (AI) use logical frameworks to
represent knowledge and make decisions. These agents rely on the formalism of
propositional logic to encode facts, rules, and derive conclusions to guide their actions.
Here’s an overview of how such agents function, their structure, and their applications.
1. Basic Components
Propositional Variables: Basic units of information that can be either true or false (e.g., P, Q, R).
Logical Connectives: Operators that combine propositional variables into complex formulas,
including:
AND (∧)
OR (∨)
NOT (¬)
IMPLIES (→)
EQUIVALENT (↔)
Propositional Formulas: Expressions formed using variables and connectives (e.g., P ∧ Q).
2. Knowledge Representation:
Knowledge Base (KB): A set of propositional formulas that encodes the agent's facts and rules about its environment.
3. Reasoning Mechanisms
Inference: The process of deriving new propositions from existing ones using inference rules such as Modus Ponens (if P → Q and P are true, then Q is true).
Entailment: Determining whether a particular proposition logically follows from the KB.
4. Decision Making:
Action Selection: Based on the current knowledge base, the agent uses logical reasoning to
decide which actions to take to achieve its goals.
Goal Formulation: Goals are represented as propositional formulas that the agent aims to make
true.
Advantages:
o Formal Verification: Propositional logic allows for formal verification of the agent's
behavior.
o Clarity and Precision: Logical representations are clear and precise, making it easier to
understand and debug the agent's reasoning.
Challenges:
o Expressiveness: Propositional logic is less expressive than first-order logic and may not
capture complex relationships.
7. Applications:
Expert Systems: Encode expert knowledge in specific domains (e.g., medical diagnosis,
financial analysis) and make decisions based on logical inference.
Automated Planning: Use propositional logic to represent states and actions, enabling the
generation of plans to achieve specified goals.
Game Playing: Represent and reason about game states, strategies, and moves using logical
formulas.
Logic Programming: Languages like Prolog are designed for logic-based programming and are
often used to implement propositional logic agents.
SAT Solvers: Tools that determine the satisfiability of propositional formulas and are used for
decision-making and planning in logic-based agents.
By leveraging propositional logic, AI agents can effectively represent and reason about their
environment, leading to intelligent and reliable behavior in various applications.
First-order logic (FOL), also known as predicate logic, extends propositional logic by adding
quantifiers and predicates, allowing for more expressive representations of knowledge. It is
widely used in artificial intelligence (AI) for knowledge representation, reasoning, and the
development of intelligent agents.
1. Basic Components
Constants: Represent specific objects or individuals in the domain (e.g., alice, paris).
Variables: Represent arbitrary elements of the domain (e.g., x, y, z).
Predicates: Represent properties of objects or relationships between objects (e.g.,
Likes (alice, ice_cream), Loves(x, y)).
Functions: Map tuples of objects to an object (e.g., MotherOf(x), which might
denote the mother of x).
2. Atomic Formulas
Predicates with Arguments: The simplest form of an atomic formula (e.g., Loves(alice,
bob), Has(x, book)).
3. Logical Connectives
The same connectives as in propositional logic (¬, ∧, ∨, →, ↔) are used to combine atomic formulas into complex ones.
4. Quantifiers
Universal Quantifier (∀): Asserts that a formula holds for all elements in the domain (e.g., ∀x (Loves(x, ice_cream)) means "everyone loves ice cream").
Existential Quantifier (∃): Asserts that there is at least one element in the domain for which the formula holds (e.g., ∃x (Loves(x, chocolate)) means "someone loves chocolate").
1. Atomic Formulas: The simplest form, consisting of a predicate and its arguments.
o Example: Loves(alice, bob)
2. Complex Formulas: Built from atomic formulas using logical connectives and
quantifiers.
o Example: ∀x (Human(x) → Mortal(x)) means "all humans are mortal."
1. Simple Relationships:
o Parent(alice, bob): "Alice is a parent of Bob."
o Friends(alice, charlie): "Alice and Charlie are friends."
2. Using Quantifiers:
o ∀x (Human(x) → Mortal(x)): "All humans are mortal."
o ∃x (Human(x) ∧ Loves(x, chocolate)): "There exists a human who loves
chocolate."
3. Combining Connectives and Quantifiers:
o ∀x ∀y (Parent(x, y) → Loves(x, y)): "Every parent loves their child."
o ∃x (Student(x) ∧ ∀y (Class(y) → Attends(x, y))): "There exists a student
who attends every class."
Domain: The set of objects over which the variables can range.
Interpretation: Assigns meanings to the constants, functions, and predicates. For example:
o Constants are mapped to specific objects in the domain.
o Functions are mapped to operations on objects in the domain.
o Predicates are mapped to relations among objects in the domain.
Example Interpretation: the constant alice denotes a particular person, the function MotherOf maps each person to their mother, and the predicate Loves denotes the relation of one person loving another.
Applications in AI
Knowledge representation, automated theorem proving, logic programming (e.g., Prolog), planning, and natural language understanding all build on first-order logic.
First-order logic provides a powerful and flexible framework for representing and reasoning
about knowledge in AI, enabling the development of sophisticated intelligent systems.
Sentences are added to a knowledge base using TELL, exactly as in propositional logic. Such
sentences are called assertions. For example, we can assert that John is a king, Richard is a person, and all
kings are persons:
TELL(KB, King(John))
TELL(KB, Person(Richard))
TELL(KB, ∀ x King(x) ⇒ Person(x))
We can ask questions of the knowledge base using ASK. For example, ASK(KB, King(John)) returns true. Questions asked with ASK are called queries or goals. If we want to know what value of x makes a sentence true, we will need a different function, ASKVARS, which we call with ASKVARS(KB, Person(x)) and which yields a stream of answers. In this case there will be two answers: {x/John} and {x/Richard}. Such an answer is called a substitution or binding list.
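A toy Python sketch of this TELL/ASK/ASKVARS interface is shown below. The tuple representation of facts and the single hard-coded rule King(x) ⇒ Person(x) are illustrative simplifications; a real implementation would store arbitrary first-order sentences and run a general inference procedure.

kb = set()   # facts are tuples such as ("King", "John")

def TELL(kb, fact):
    kb.add(fact)
    if fact[0] == "King":                    # the one illustrative rule:
        kb.add(("Person", fact[1]))          # King(x) => Person(x)

def ASK(kb, query):
    return query in kb

def ASKVARS(kb, predicate):
    # Yield a substitution {x: value} for every fact matching the predicate.
    for pred, arg in kb:
        if pred == predicate:
            yield {"x": arg}

TELL(kb, ("King", "John"))
TELL(kb, ("Person", "Richard"))
print(ASK(kb, ("King", "John")))         # True
print(list(ASKVARS(kb, "Person")))       # [{'x': 'John'}, {'x': 'Richard'}] (in some order)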
The first example we consider is the domain of family relationships, or kinship. This domain
includes facts such as “Elizabeth is the mother of Charles” and “Charles is the father of William” and
rules such as “One’s grandmother is the mother of one’s parent.” Clearly, the objects in our domain are
people. We have two unary predicates, Male and Female. Kinship relations—parenthood, brotherhood,
marriage, and so on—are represented by binary predicates: Parent, Sibling, Brother , Sister , Child ,
Daughter, Son, Spouse, Wife, Husband, Grandparent , Grandchild , Cousin, Aunt, and Uncle. We use
functions for Mother and Father , because every person has exactly one of each of these.
We can define the Sibling predicate in terms of other predicates. For example, consider the assertion that siblinghood is symmetric: ∀ x, y Sibling(x, y) ⇔ Sibling(y, x).
0 is an additive identity:
∀ m NatNum(m) ⇒ +(0, m) = m
A set is a collection of objects; any one of the objects in a set is called a member or an element of
the set.
The basic statement in set theory is element inclusion: an element a is included in some set S. Formally, this is written as: a ∈ S.
Statements are either true or false, depending on the context. For example, given the above sets,
the first statement is true, whereas the second is false. If a statement S is true in a given context C,
we say the statement is valid in C. Formally, we write this as:
Subsets:
∀ s1, s2 s1 ⊆ s2 ⇔ (∀ x x ∈ s1 ⇒ x ∈ s2) .
As in propositional logic, we also have inference rules in first-order logic; following are some basic inference rules in FOL:
o Universal Instantiation
o Existential Instantiation
Universal Instantiation:
o Universal instantiation, also called universal elimination or UI, is a valid inference rule. It can be applied multiple times to add new sentences.
o The new KB is logically equivalent to the previous KB.
o As per UI, we can infer any sentence obtained by substituting a ground term for the variable.
o The UI rule states that we can infer any sentence by substituting a ground term g (a term without variables) for the universally quantified variable v. It can be represented as: from ∀ v α, infer SUBST({v/g}, α).
Example: 1.
o If "Every person likes ice-cream", written ∀x P(x), then we can infer
"John likes ice-cream", written P(John).
Example: 2.
o "All kings who are greedy are Evil." So let our knowledge base contains this detail as in
the form of FOL:
o ∀x king(x) ∧ greedy (x) → Evil (x),
So from this information, we can infer any of the following statements using Universal Instantiation:
Existential Instantiation:
o Existential instantiation is also called Existential Elimination, which is a valid inference rule in first-order logic.
o It can be applied only once to replace the existential sentence.
o The new KB is not logically equivalent to the old KB, but it will be satisfiable if the old KB was satisfiable.
o It can be represented as: from ∃ v α, infer SUBST({v/K}, α), where K is a new constant symbol (a Skolem constant) that does not appear elsewhere in the knowledge base.
Example:
From ∃x Crown(x) ∧ OnHead(x, John), we can infer Crown(K) ∧ OnHead(K, John), as long as K does not appear elsewhere in the knowledge base.
Example:
We can use these rules for the "kings are evil" example: we find some x such that x is a king and x is greedy, and from that we can infer that x is evil.
Unification:
o Unification is a process of making two different logical atomic expressions identical by finding a
substitution. Unification depends on the substitution process.
o It takes two literals as input and makes them identical using substitution.
o Let Ψ1 and Ψ2 be two atomic sentences and σ be a unifier such that Ψ1σ = Ψ2σ; then this can be expressed as UNIFY(Ψ1, Ψ2) = σ.
o Example: Find the MGU for Unify{King(x), King(John)}
Let Ψ1 = King(x), Ψ2 = King(John),
Substitution θ = {John/x} is a unifier for these atoms, and applying this substitution makes both expressions identical.
o The UNIFY algorithm is used for unification, which takes two atomic sentences and returns a
unifier for those sentences (If any exist).
o Unification is a key component of all first-order inference algorithms.
o It returns fail if the expressions do not match with each other.
o The simplest and most general such substitution is called the Most General Unifier (MGU).
Conditions for Unification:
Following are some basic conditions for unification:
o The predicate symbols must be the same; atoms or expressions with different predicate symbols can never be unified.
o The number of arguments in both expressions must be identical.
o Unification will fail if two similar variables are present in the same expression.
Unification Algorithm:
Algorithm: Unify(Ψ1, Ψ2)
Step. 1: If Ψ1 or Ψ2 is a variable or constant, then:
a) If Ψ1 and Ψ2 are identical, then return NIL.
b) Else if Ψ1is a variable,
a. then if Ψ1 occurs in Ψ2, then return FAILURE
b. Else return { (Ψ2/ Ψ1)}.
c) Else if Ψ2 is a variable,
a. If Ψ2 occurs in Ψ1 then return FAILURE,
b. Else return {( Ψ1/ Ψ2)}.
d) Else return FAILURE.
Step.2: If the initial Predicate symbol in Ψ1 and Ψ2 are not same, then return FAILURE.
Step. 3: IF Ψ1 and Ψ2 have a different number of arguments, then return FAILURE.
Step. 4: Set Substitution set(SUBST) to NIL.
Step. 5: For i=1 to the number of elements in Ψ1.
a) Call Unify function with the ith element of Ψ1 and ith element of Ψ2, and put the
result into S.
b) If S = FAILURE, then return FAILURE.
c) If S ≠ NIL, then:
a. Apply S to the remainder of both Ψ1 and Ψ2.
b. SUBST= APPEND(S, SUBST).
Step.6: Return SUBST.
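The algorithm can be realized compactly in Python. In the sketch below, compound expressions are tuples whose first element is the predicate symbol, variables are lowercase strings, and constants are capitalized strings; these conventions, and the omission of the occurs-check, are my own simplifications.

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def substitute(t, subst):
    # Apply a substitution to a variable, constant, or compound term.
    if is_var(t):
        return subst.get(t, t)
    if isinstance(t, tuple):
        return tuple(substitute(a, subst) for a in t)
    return t

def unify(x, y, subst=None):
    # Return the most general unifier of x and y as a dict, or None on failure.
    if subst is None:
        subst = {}
    x, y = substitute(x, subst), substitute(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return {**subst, x: y}               # note: no occurs-check in this sketch
    if is_var(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):               # predicate symbol, then each argument
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                               # different predicates or arities

print(unify(("King", "x"), ("King", "John")))                    # {'x': 'John'}
print(unify(("Knows", "John", "x"), ("Knows", "y", "Bill")))     # {'y': 'John', 'x': 'Bill'}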
For each of the following pairs of atomic sentences, find the most general unifier (if it exists).
Forward Chaining:
Forward chaining is a method used in artificial intelligence (AI) for reasoning and inferencing. It
is commonly employed in expert systems and rule-based systems to derive conclusions or make
decisions based on a set of known facts and inference rules. Forward chaining works by starting
with known facts and applying inference rules to extract more facts until a goal is reached or no
more rules can be applied.
1. Initialization: Begin with a set of known facts (initial data) and a set of inference rules.
2. Inference: Apply the inference rules to the known facts to generate new facts.
3. Iteration: Repeat the inference process, adding new facts to the set of known facts, until
no new facts can be derived or a specific goal is achieved.
Facts: Assertions about the world, usually expressed as predicates or propositions (e.g.,
Human(Socrates)).
Rules: Implications that define how new facts can be inferred from existing facts. A rule
has the form If condition(s) Then conclusion.
Working Memory: A dynamic collection of known facts that is updated as new facts are
inferred.
Facts
Fever(John)
Cough(John)
SoreThroat(John)
Rules
1. If Fever(x) And Cough(x) Then Flu(x)
2. If SoreThroat(x) And Cough(x) Then Cold(x)
3. If Flu(x) Then BedRest(x)
Process
1. Initial Facts:
o Fever(John)
o Cough(John)
o SoreThroat(John)
2. First Iteration:
o Apply Rule 1: Fever(John) And Cough(John) ⇒ Flu(John)
o Add Flu(John) to the working memory.
3. Second Iteration:
o Apply Rule 2: SoreThroat(John) And Cough(John) ⇒ Cold(John)
o Add Cold(John) to the working memory.
4. Third Iteration:
o Apply Rule 3: Flu(John) ⇒ BedRest(John)
o Add BedRest(John) to the working memory.
5. Final Facts:
o Fever(John)
o Cough(John)
o SoreThroat(John)
o Flu(John)
o Cold(John)
o BedRest(John)
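This data-driven loop is straightforward to express in code. The following Python sketch re-derives the facts above; the rule representation (a set of ground premises paired with a conclusion) is my own choice for illustration.

rules = [
    ({"Fever(John)", "Cough(John)"}, "Flu(John)"),          # Rule 1
    ({"SoreThroat(John)", "Cough(John)"}, "Cold(John)"),     # Rule 2
    ({"Flu(John)"}, "BedRest(John)"),                        # Rule 3
]

def forward_chain(facts, rules):
    # Repeatedly fire every rule whose premises hold until nothing new is added.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

working_memory = {"Fever(John)", "Cough(John)", "SoreThroat(John)"}
print(sorted(forward_chain(working_memory, rules)))
# ['BedRest(John)', 'Cold(John)', 'Cough(John)', 'Fever(John)', 'Flu(John)', 'SoreThroat(John)']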
Advantages:
Data-Driven: Begins with available data and applies rules to derive conclusions, which is useful in situations where all data may not be initially known.
Incremental Reasoning: Can handle new facts as they become available, making it suitable for dynamic environments.
Simplicity: Easy to implement and understand, especially for rule-based systems.
Limitations:
Efficiency: Can be less efficient than backward chaining in some cases, as it may generate many intermediate facts that are not useful.
Comprehensiveness: May need a large number of rules and facts to derive complex conclusions, potentially leading to combinatorial explosion.
Forward Chaining: Starts with known facts and applies rules to infer new facts until a
goal is reached.
Backward Chaining: Starts with a goal and works backwards by finding rules that can
achieve that goal, checking if the conditions for those rules are met.
Both forward and backward chaining are fundamental inference methods in rule-based AI
systems, each with its own strengths and appropriate use cases.
Backward chaining:
Backward chaining is a reasoning method used in artificial intelligence (AI) for goal-driven
inference. It starts with a goal (or a set of goals) and works backward through a set of inference
rules to determine the necessary conditions (facts) that must be true to achieve that goal. This
approach is often used in expert systems and logic programming to derive solutions or make
decisions.
Goal: The target proposition or statement that the system is trying to prove.
Facts: Known propositions or statements about the domain.
Rules: Implications that define how facts and sub-goals relate to each other. A rule
typically has the form If condition(s) Then conclusion.
Working Memory: A dynamic collection of known facts that is updated as new facts are
inferred.
Goal
Bird(Tweety)
Facts
HasFeathers(Tweety)
LaysEggs(Tweety)
Rules
1. If HasFeathers(x) Then Bird(x)
2. If LaysEggs(x) Then Bird(x)
Process
To prove the goal Bird(Tweety), the engine looks for a rule whose conclusion matches the goal. Rule 1 concludes Bird(x), so the goal reduces to the sub-goal HasFeathers(Tweety); this is a known fact, so the goal is proved. (Rule 2 would also prove the goal via LaysEggs(Tweety).)
Advantages:
Goal-Driven: Efficient for problems where the goal is specified, as it focuses directly on achieving the goal.
Reduced Search Space: By focusing on the goal and working backward, it often examines fewer possibilities compared to forward chaining.
Natural for Certain Applications: Well-suited for applications like diagnostics, where the goal is to identify the cause of observed symptoms.
Limitations:
Complex Rule Management: Can become complex if there are many rules and sub-goals to manage.
Efficiency: May require extensive backtracking if multiple rules can achieve the same goal or if the rules are deeply nested.
Not Suitable for All Problems: Less effective in situations where data is continuously changing or when it's difficult to specify a clear goal.
Applications:
Expert Systems: Used in medical diagnosis, legal reasoning, and other domains where
conclusions need to be drawn from known data and rules.
Logic Programming: Employed in languages like Prolog for solving problems by
defining goals and rules.
Automated Theorem Proving: Used to prove mathematical theorems by working
backward from the theorem to be proved.
Backward Chaining: Starts with a goal and works backward to find supporting facts.
Efficient for goal-driven problems.
Forward Chaining: Starts with known facts and applies rules to derive new facts until a
goal is reached. Suitable for data-driven problems.
Example in Prolog
Prolog is a logic programming language that uses backward chaining for reasoning. Here's a
simple example:
% Facts
has_feathers(tweety).
lays_eggs(tweety).
% Rules
bird(X) :- has_feathers(X).
bird(X) :- lays_eggs(X).
% Query
?- bird(tweety).
In this example, the query bird(tweety) starts with the goal of proving bird(tweety). Prolog uses backward chaining: it tries the first rule, which reduces the goal to has_feathers(tweety); that fact is known, so the goal is confirmed (the second rule would also succeed via lays_eggs(tweety)).
Backward chaining is a powerful inference method in AI, enabling systems to achieve goals by
systematically working backward through known rules and facts.