
AI Unit 3 Digital Notes

This document outlines the course structure for 'Artificial Intelligence' offered by RMK Group of Educational Institutions, detailing objectives, prerequisites, syllabus, and course outcomes. It includes a comprehensive table of contents, lecture plans, and assessment methods for the course. The document emphasizes the importance of knowledge-based agents, logical reasoning, and various AI problem-solving strategies.


Please read this disclaimer before proceeding:
This document is confidential and intended solely for the educational purpose of
RMK Group of Educational Institutions. If you have received this document
through email in error, please notify the system manager. This document
contains proprietary information and is intended only for the respective group /
learning community. If you are not the addressee, you should not disseminate,
distribute or copy it through e-mail. Please notify the sender immediately by
e-mail if you have received this document by mistake, and delete this document
from your system. If you are not the intended recipient, you are notified that
disclosing, copying, distributing or taking any action in reliance on the contents
of this information is strictly prohibited.

22AI301 ARTIFICIAL INTELLIGENCE
(Lab Integrated)
Department: CSE
Batch/Year: 2023-2027/II
Created by: Dr. R. Sasikumar
Dr. M. Arun Manicka Raja
Dr. G. Indra
Dr. M. Raja Suguna
Mr. K. Mohanasundaram
Mrs. S. Logesswari

Date: 25.01.2024

Table of Contents

1. Contents
2. Course Objectives
3. Pre-Requisites
4. Syllabus
5. Course Outcomes
6. CO-PO/PSO Mapping
7. Lecture Plan
8. Activity Based Learning
9. Lecture Notes
10. Assignments
11. Part A Q & A
12. Part B Qs
13. Supportive Online Certification Courses
14. Real-time Applications in Day-to-Day Life and to Industry
15. Assessment Schedule
16. Prescribed Text Books & Reference Books
17. Mini Project Suggestions

1. COURSE OBJECTIVES

To explain the foundations of AI and various intelligent agents

To discuss problem-solving search strategies and game playing

To describe logical agents and first-order logic

To illustrate problem-solving strategies with a knowledge representation mechanism for hard problems

To explain the basics of learning and expert systems

3. PRE-REQUISITES

PRE-REQUISITE CHART

21CS202 – Python Programming (Lab Integrated)

21CS201 – Data Structures

21MA402 – Probability and Statistics

21CS502 – Artificial Intelligence
4. SYLLABUS

22AI301 - ARTIFICIAL INTELLIGENCE    L T P C
                                     3 0 2 4

Unit-I ARTIFICIAL INTELLIGENCE AND INTELLIGENT AGENTS 9+6


Introduction to AI–Foundations of Artificial Intelligence-Intelligent Agents-Agents and
Environment-Concept of rationality – Nature of environments – Structure of agents –
Problem Solving Agents–Example Problems – Search Algorithms – Uninformed Search

Strategies
Lab Programs:
1. Implement basic search strategies – 8-Puzzle, 8 - Queens problem.
2. Implement Breadth First Search & Depth first Search Algorithm
3. Implement Water Jug problem.
4. Solve Tic-Tac-Toe problem.

Unit II : PROBLEM SOLVING 9+6


Heuristic Search Strategies – Heuristic Functions – Game Playing – Minimax Algorithm –
Optimal Decisions in Games – Alpha-Beta Search – Monte Carlo Search for Games –
Constraint Satisfaction Problems – Constraint Propagation – Backtracking Search for CSP –
Local Search for CSP – Structure of CSP
Lab Programs:
1. Implement A* and memory bounded A* algorithms.
2. Implement Minimax algorithm & Alpha-Beta pruning for game playing.
3. Constraint Satisfaction Problem
4. Mini Project – Chess, Sudoku

Unit III : LOGICAL AGENTS 9+6


Knowledge-based agents – Logic - Propositional logic – Propositional theorem proving –
Propositional model checking – Agents based on propositional logic First-Order Logic –
Syntax and semantics – Using First-Order Logic - Knowledge representation and engineering
– Inferences in first-order logic – Propositional Vs First Order Inference - Unification and
First-Order Inference - Forward chaining – Backward chaining – Resolution.
Lab Programs:
1. Implement Unification algorithm for the given logic.
2. Implement forward chaining and backward chaining using Python.
5. COURSE OUTCOMES

Course Outcome Statements in the Cognitive Domain:

Course Code | Course Outcome Statement                                                                | Cognitive Level | Course Outcome
21AI401     | Explain the foundations of AI and various Intelligent Agents                           | Apply (K3)      | CO1
21AI401     | Apply search strategies in problem solving and game playing                            | Apply (K3)      | CO2
21AI401     | Explain logical agents and first-order logic                                           | Apply (K3)      | CO3
21AI401     | Apply problem-solving strategies with knowledge representation mechanism for solving hard problems | Apply (K4) | CO4
21AI401     | Describe the basics of learning and expert systems                                     | Apply (K4)      | CO5
6. CO-PO/PSO MAPPING

Programme Outcomes (POs), Programme Specific Outcomes (PSOs)

Course Outcomes (COs) | Level | PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 | PSO1 PSO2 PSO3
21AI401.1             | K2    |  3   3   1   -   -   -   -   -   -   -    -    -   |  -    -    -
21AI401.2             | K3    |  3   2   1   -   -   -   -   -   -   -    -    -   |  -    -    -
21AI401.3             | K3    |  3   2   1   -   -   -   -   -   -   -    -    -   |  -    -    -
21AI401.4             | K3    |  3   3   2   -   -   -   -   -   -   -    -    -   |  -    -    -
21AI401.5             | K2    |  3   2   2   -   -   -   -   -   -   -    -    -   |  -    -    -
LECTURE PLAN – UNIT III

ASSESSMENT COMPONENTS          MODE OF DELIVERY

AC 1. Unit Test                MD 1. Oral Presentation
AC 2. Assignment               MD 2. Tutorial
AC 3. Course Seminar           MD 3. Seminar
AC 4. Course Quiz              MD 4. Hands On
AC 5. Case Study               MD 5. Videos
AC 6. Record Work              MD 6. Field Visit
AC 7. Lab / Mini Project
AC 8. Lab Model Exam
AC 9. Project Review

7. LECTURE PLAN – UNIT III

UNIT III LOGICAL AGENTS

Sl.No | Topic                                                                          | No. of Periods | Proposed Lecture Period | Actual Lecture Period | Pertaining CO(s) | Taxonomy Level | Mode of Delivery
1     | Knowledge-based agents – Logic – Propositional logic                           | 1              | 19.09.2023              |                       | CO3              | K3             | MD1, MD5
2     | Propositional theorem proving – Propositional model checking                   | 1              | 20.09.2023              |                       | CO3              | K3             | MD1, MD5
3     | Agents based on propositional logic – First-order logic – Syntax and semantics | 1              | 21.09.2023              |                       | CO3              | K3             | MD1, MD5
4     | Using first-order logic                                                        | 1              | 22.09.2023              |                       | CO3              | K3             | MD1, MD5
5     | Knowledge representation and engineering                                       | 1              | 23.09.2023              |                       | CO3              | K3             | MD1, MD5
6     | Inference in first-order logic                                                 | 1              | 25.09.2023              |                       | CO3              | K3             | MD1, MD5
7     | Propositional vs first-order inference – Unification and first-order inference | 1              | 26.09.2023              |                       | CO3              | K3             | MD1, MD5
8     | Forward chaining                                                               | 1              | 27.09.2023              |                       | CO3              | K3             | MD1, MD5
9     | Backward chaining & resolution                                                 | 1              | 28.09.2023              |                       | CO3              | K3             | MD1, MD5
8. Activity Based Learning
1. Play a game where the other person thinks of an animal you must identify. Does it
have feathers? Does it have fur? Does it walk on four legs? Is it black? In a small
number of questions we narrow down the possibilities until we know what it is.
The more we know about the features of animals, the easier it is to narrow it
down. A machine learning algorithm learns about the features of the specific
things it classifies. Each feature rules out some possibilities, but leaves others.
Returning to our robot that makes expressions: was it a loud sound? Yes. Was it
sudden? No. … The robot should look unhappy. The right combination of features
allows the algorithm to narrow the possibilities down to one thing. The more data
the algorithm was trained on, the more patterns it can spot. The more patterns it
spots, the more rules about features it can create. The more rules about features
it has, the more questions it can base its decision on.

2. The Intelligent Piece of Paper Activity


This activity aims to introduce the topic of what a computer program is and how
everything computers do simply involves following instructions written by (creative)
computer programmers. It also aims to start a discussion about what intelligence is
and whether something that just blindly follows rules can be considered intelligent.

Materials

• A whiteboard or flipchart to write on, so all can see.

• Two flip chart / whiteboard pens

• A copy of the intelligent piece of paper (possibly laminated)

• (optional) A musical greeting card that plays some appropriately horrible song.
Choose one that is recognizable across age groups.

Activity Document Link

9. LECTURE NOTES-UNIT III
SYLLABUS
Knowledge Based Agents-Logic-Propositional logic- Propositional theorem proving-
Propositional model Checking- Agents based on propositional Logic-First Order Logic –
Propositional Vs First Order Inference – Unification and First Order Inference – Forward
Chaining- Backward Chaining – Resolution

3.1 Knowledge-based agents


Intelligent agents need knowledge about the world in order to reach good
decisions. Knowledge is contained in agents in the form of sentences in
a knowledge representation language that are stored in a knowledge base.

Knowledge base (KB): a set of sentences; the central component of a
knowledge-based agent. Each sentence is expressed in a language called
a knowledge representation language and represents some assertion about the world.
Axiom: Sometimes we dignify a sentence with the name axiom, when the
sentence is taken as given without being derived from other sentences.
TELL: The operation to add new sentences to the knowledge base.
ASK: The operation to query what is known.
Inference: Both TELL and ASK may involve inference, i.e. deriving new sentences from old.

The outline of a knowledge-based program:


A knowledge-based agent is composed of a knowledge base and an inference mechanism.
It operates by storing sentences about the world in its knowledge base, using the
inference mechanism to infer new sentences, and using these sentences to decide what
action to take.

The knowledge-based agent is not an arbitrary program for calculating actions; it is
amenable to a description at the knowledge level, where we specify only what the agent
knows and what its goals are in order to fix its behavior. This analysis is independent
of the implementation level.

Declarative approach: A knowledge-based agent can be built simply by telling it
what it needs to know. Starting with an empty knowledge base, the agent designer can
TELL sentences one by one until the agent knows how to operate in its environment.

Procedural approach: encodes desired behaviors directly as program code.


A successful agent often combines both declarative and procedural elements in its design.
A fundamental property of logical reasoning: The conclusion is guaranteed to be correct if
the available information is correct.

3.1.1 Logic
A representation language is defined by its syntax, which specifies the structure of
sentences, and its semantics, which defines the truth of each sentence in each possible
world or model.

Syntax: The sentences in the KB are expressed according to the syntax of the representation
language, which specifies all the sentences that are well formed.

Semantics: The semantics defines the truth of each sentence with respect to
each possible world.
Models: We use the term model in place of "possible world" when we need to be
precise. Possible worlds might be thought of as (potentially) real environments that the
agent might or might not be in; models are mathematical abstractions, each of which
simply fixes the truth or falsehood of every relevant sentence.

If a sentence α is true in model m, we say that m satisfies α, or m is a model of α.

The notation M(α) means the set of all models of α.
The relationship of entailment between sentences is crucial to our understanding of
reasoning. A sentence α entails another sentence β if β is true in all worlds where α is true.
Equivalent definitions include the validity of the sentence α ⇒ β and the unsatisfiability of
the sentence α ∧ ¬β.
Logical entailment: The relation between a sentence and another sentence that
follows from it.
Mathematical notation: α ⊨ β: α entails the sentence β.
Formal definition of entailment:
α ⊨ β if and only if M(α) ⊆ M(β)
i.e. α ⊨ β if and only if, in every model in which α is true, β is also true.
(Notice: if α ⊨ β, then α is a stronger assertion than β: it rules out more possible
worlds.)
Logical inference: The definition of entailment can be applied to derive conclusions,
e.g. by applying this analysis to the wumpus world.
The KB is false in models that contradict what the agent knows (e.g. the KB is false in
any model in which [1,2] contains a pit, because there is no breeze in [1,1]).

Consider two possible conclusions α1 and α2.

We see: in every model in which KB is true, α1 is also true. Hence KB ⊨ α1, so the agent
can conclude that there is no pit in [1,2].
We see: in some models in which KB is true, α2 is false. Hence KB ⊭ α2, so the agent
cannot conclude that there is no pit in [2,2].
The inference algorithm used is called model checking: enumerate all possible models
to check that α is true in all models in which KB is true, i.e. M(KB) ⊆ M(α).

If an inference algorithm i can derive α from KB, we write KB ⊢i α, pronounced "α is
derived from KB by i" or "i derives α from KB."

Sound / truth-preserving: An inference algorithm that derives only entailed sentences.
Soundness is a highly desirable property (e.g. model checking is a sound procedure
when it is applicable).

Completeness: An inference algorithm is complete if it can derive any sentence that is
entailed. Completeness is also a desirable property.
Inference is the process of deriving new sentences from old ones. Sound inference
algorithms derive only sentences that are entailed; complete algorithms derive all
sentences that are entailed.

If KB is true in the real world, then any sentence α derived from KB by a sound
inference procedure is also true in the real world.

Grounding: The connection between logical reasoning process and the real environment in
which the agent exists.

In particular, how do we know that KB is true in the real world?


3.1.1.1 Propositional logic
Propositional logic is a simple language consisting of proposition symbols and logical
connectives. It can handle propositions that are known true, known false, or
completely unknown.

Syntax
The syntax defines the allowable sentences.
Atomic sentences: consist of a single proposition symbol; each such symbol stands for
a proposition that can be true or false (e.g. W1,3 stands for the proposition that the
wumpus is in [1,3]).

Complex sentences: constructed from simpler sentences, using parentheses and logical
connectives.

Semantics
The semantics defines the rules for determining the truth of a sentence with respect
to a particular model.

The semantics for propositional logic must specify how to compute the truth value of
any sentence, given a model.
For atomic sentences: The truth value of every proposition symbol must be
specified directly in the model.

For complex sentences (with respect to a model m):
¬P is true iff P is false in m;
P ∧ Q is true iff both P and Q are true in m;
P ∨ Q is true iff either P or Q is true in m;
P ⇒ Q is true unless P is true and Q is false in m;
P ⇔ Q is true iff P and Q are both true or both false in m.


A simple inference procedure:
To decide whether KB ⊨ α for some sentence α:
Algorithm 1: Model-checking approach.
Enumerate the models (assignments of true or false to every relevant
proposition symbol) and check that α is true in every model in which KB is true.

TT-ENTAILS?: A general algorithm for deciding entailment in
propositional logic; it performs a recursive enumeration of a finite space of
assignments to symbols.
Sound and complete.
Time complexity: O(2^n)

Space complexity: O(n), if KB and α contain n symbols in all.
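As a concrete illustration of the model-checking approach, here is a minimal Python sketch (not the textbook's TT-ENTAILS? pseudocode itself). Sentences are represented, as a simplifying assumption, by Boolean functions over a model dictionary; the KB and query mirror the wumpus-world example used later in these notes (B11 = breeze in [1,1], P12/P21 = pits in [1,2]/[2,1]):

```python
from itertools import product

def tt_entails(kb, alpha, symbols):
    """KB entails alpha iff alpha is true in every model in which KB is true.
    Enumerates all 2^n truth assignments over `symbols`."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False  # found a model of KB in which alpha is false
    return True

# KB: (B11 <=> (P12 or P21)) and not B11 ; query alpha: not P12
kb = lambda m: (m["B11"] == (m["P12"] or m["P21"])) and not m["B11"]
alpha = lambda m: not m["P12"]
print(tt_entails(kb, alpha, ["B11", "P12", "P21"]))  # True
```

The only model satisfying this KB makes B11, P12 and P21 all false, so ¬P12 holds in every model of the KB, matching the O(2^n)-time, O(n)-space behaviour described above.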


3.1.1.2 Propositional theorem proving
We can determine entailment by model checking (enumerating models, introduced
above) or theorem proving.
Theorem proving: Applying rules of inference directly to the sentences in our
knowledge base to construct a proof of the desired sentence without consulting
models.
Inference rules are patterns of sound inference that can be used to find proofs.
The resolution rule yields a complete inference algorithm for knowledge bases that
are expressed in conjunctive normal form. Forward chaining and backward
chaining are very natural reasoning algorithms for knowledge bases in Horn form.
Logical equivalence:
Two sentences α and β are logically equivalent if they are true in the same set of
models (written α ≡ β).
Also: α ≡ β if and only if α ⊨ β and β ⊨ α.
Validity: A sentence is valid if it is true in all models.
Valid sentences are also known as tautologies; they are necessarily true. Every valid
sentence is logically equivalent to True.
The deduction theorem: For any sentences α and β, α ⊨ β if and only if the sentence
(α ⇒ β) is valid.

Satisfiability: A sentence is satisfiable if it is true in, or satisfied by, some model.


Satisfiability can be checked by enumerating the possible models until one is found that
satisfies the sentence.

The SAT problem: The problem of determining the satisfiability of sentences in


propositional logic.

Validity and satisfiability are connected:

α is valid iff ¬α is unsatisfiable;
α is satisfiable iff ¬α is not valid;
α ⊨ β if and only if the sentence (α ∧ ¬β) is unsatisfiable.
Proving β from α by checking the unsatisfiability of (α ∧ ¬β) corresponds to proof by
refutation / proof by contradiction.
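These connections can be checked mechanically. A toy sketch (sentences represented, as an assumption for illustration, by Boolean functions over a model dictionary; brute-force, so only feasible for a handful of symbols):

```python
from itertools import product

def satisfiable(sentence, symbols):
    """Brute-force SAT check: is there any assignment making `sentence` true?"""
    return any(sentence(dict(zip(symbols, v)))
               for v in product([True, False], repeat=len(symbols)))

# alpha entails beta  iff  (alpha AND NOT beta) is unsatisfiable
alpha = lambda m: m["P"] and m["Q"]   # P ∧ Q
beta  = lambda m: m["P"]              # P
print(not satisfiable(lambda m: alpha(m) and not beta(m), ["P", "Q"]))  # True
```

Here (P ∧ Q) ∧ ¬P has no model, so P ∧ Q ⊨ P: exactly the refutation test described above.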
Inference and proofs
Inference rules (such as Modus Ponens and And-Elimination) can be applied to
derive a proof.
· Modus Ponens:

Whenever any sentences of the form α ⇒ β and α are given, the
sentence β can be inferred.

· And-Elimination:

From a conjunction, any of the conjuncts can be inferred.

All of the logical equivalences (in Figure 7.11), such as De Morgan's rules, can be
used as inference rules; e.g. the equivalence for biconditional elimination yields
two inference rules.

Proofs can also be found by search. We just need to define a proof problem as follows:
INITIAL STATE: the initial knowledge base;
ACTIONS: the set of actions consists of all the inference rules applied to all the
sentences that match the top half of the inference rule;
RESULT: the result of an action is to add the sentence in the bottom half of the
inference rule;
GOAL: the goal is a state that contains the sentence we are trying to prove.
In many practical cases, finding a proof can be more efficient than enumerating
models, because the proof can ignore irrelevant propositions, no matter how many of
them there are.
Monotonicity: A property of logical systems which says that the set of entailed sentences
can only increase as information is added to the knowledge base.
For any sentences α and β:
if KB ⊨ α then KB ∧ β ⊨ α.
Monotonicity means that inference rules can be applied whenever suitable premises
are found in the knowledge base; whatever else is in the knowledge base cannot
invalidate any conclusion already inferred.
Proof by resolution
Resolution: An inference rule that yields a complete inference algorithm when
coupled with any complete search algorithm.
Clause: A disjunction of literals (e.g. A ∨ B). A single literal can be viewed as a unit
clause (a disjunction of one literal).
Unit resolution inference rule: Takes a clause and a literal and produces a new
clause, where each l_i is a literal and l_i and m are complementary literals (one is the
negation of the other).

Full resolution rule: Takes two clauses and produces a new clause, where l_i and
m_j are complementary literals.

Notice: The resulting clause should contain only one copy of each literal. The
removal of multiple copies of a literal is called factoring,
e.g. resolving (A ∨ B) with (A ∨ ¬B) obtains (A ∨ A), which reduces to just A.
The resolution rule is sound and complete.
Conjunctive normal form
Conjunctive normal form (CNF): A sentence expressed as a conjunction of
clauses is said to be in CNF. Every sentence of propositional logic is logically
equivalent to a conjunction of clauses; after converting a sentence into CNF, it can be
used as input to a resolution procedure.
A resolution algorithm, e.g.:
KB = (B1,1 ⟺ (P1,2 ∨ P2,1)) ∧ ¬B1,1
α = ¬P1,2
Notice: Any clause in which two complementary literals appear can be discarded,
because it is always equivalent to True,
e.g. B1,1 ∨ ¬B1,1 ∨ P1,2 = True ∨ P1,2 = True.
PL-RESOLUTION is complete.
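The refutation procedure can be sketched in a few lines of Python (a simplified sketch, not the textbook's PL-RESOLUTION pseudocode verbatim; clauses are assumed to be frozensets of literal strings, with "~" marking negation):

```python
def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(ci, cj):
    """All resolvents of two clauses; tautologies are discarded as noted above."""
    out = []
    for lit in ci:
        if complement(lit) in cj:
            resolvent = (ci - {lit}) | (cj - {complement(lit)})
            if not any(complement(l) in resolvent for l in resolvent):
                out.append(frozenset(resolvent))
    return out

def pl_resolution(kb_clauses, neg_query_clauses):
    """KB |= alpha iff KB plus the clauses of ~alpha resolve to the empty clause."""
    clauses = set(kb_clauses) | set(neg_query_clauses)
    while True:
        new = set()
        for ci in clauses:
            for cj in clauses:
                if ci is not cj:
                    for r in resolve(ci, cj):
                        if not r:
                            return True   # empty clause: contradiction derived
                        new.add(r)
        if new <= clauses:
            return False                  # fixed point reached, no contradiction
        clauses |= new

# The KB above in CNF: (~B11 v P12 v P21), (~P12 v B11), (~P21 v B11), (~B11)
kb = [frozenset({"~B11", "P12", "P21"}), frozenset({"~P12", "B11"}),
      frozenset({"~P21", "B11"}), frozenset({"~B11"})]
print(pl_resolution(kb, [frozenset({"P12"})]))   # alpha = ~P12, so ~alpha = P12 -> True
```

Adding P12 (the negated query) lets {P12} resolve with (~P12 ∨ B11) to give B11, which resolves with ~B11 to the empty clause, so KB ⊨ ¬P1,2.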

Horn clauses and definite clauses:

Definite clause: A disjunction of literals of which exactly one is positive (e.g.
¬L1,1 ∨ ¬Breeze ∨ B1,1). Every definite clause can be written as an implication whose
premise is a conjunction of positive literals and whose conclusion is a single positive
literal.

Horn clause: A disjunction of literals of which at most one is positive. (All definite
clauses are Horn clauses.) In Horn form, the premise is called the body and the
conclusion is called the head. A sentence consisting of a single positive literal is
called a fact; it too can be written in implication form.

Horn clauses are closed under resolution: if you resolve two Horn clauses, you get back
a Horn clause. Inference with Horn clauses can be done through the forward-chaining
and backward-chaining algorithms.

Deciding entailment with Horn clauses can be done in time that is linear in the size of
the knowledge base.

Goal clause: A clause with no positive literals.

Forward and backward chaining
Forward-chaining algorithm: PL-FC-ENTAILS?(KB, q) runs in linear time.
Forward chaining is sound and complete.

e.g. a knowledge base of Horn clauses with A and B as known facts.

Fixed point: The algorithm reaches a fixed point where no new inferences are
possible.
Data-driven reasoning: Reasoning in which the focus of attention starts with the
known data. It can be used within an agent to derive conclusions from incoming
percepts, often without a specific query in mind. (Forward chaining is an example.)
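A minimal sketch of linear-time forward chaining over definite clauses (in the spirit of PL-FC-ENTAILS?, not the textbook pseudocode verbatim; the KB below is a small hypothetical example with facts A and B):

```python
from collections import deque

def pl_fc_entails(kb, q):
    """kb: list of (premises, conclusion) definite clauses; facts have empty
    premises. Each clause keeps a count of unsatisfied premises; when a count
    hits zero, its conclusion joins the agenda. Runs in linear time."""
    count = {i: len(prem) for i, (prem, _) in enumerate(kb)}
    inferred = set()
    agenda = deque(c for prem, c in kb if not prem)   # known facts
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p not in inferred:
            inferred.add(p)
            for i, (prem, c) in enumerate(kb):
                if p in prem:
                    count[i] -= 1
                    if count[i] == 0:
                        agenda.append(c)
    return False

# Hypothetical Horn KB: facts A, B; rules A^B=>L, A^P=>L, B^L=>M, L^M=>P, P=>Q
kb = [((), "A"), ((), "B"), (("A", "B"), "L"), (("A", "P"), "L"),
      (("B", "L"), "M"), (("L", "M"), "P"), (("P",), "Q")]
print(pl_fc_entails(kb, "Q"))   # True
```

Starting from the data (A, B), the algorithm fires L, then M, then P, then Q, and stops at a fixed point once nothing new can be inferred.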

Backward-chaining algorithm: works backward from the query.

If the query q is known to be true, no work is needed; otherwise the algorithm finds
those implications in the KB whose conclusion is q. If all the premises of one of those
implications can be proved true (by backward chaining), then q is true. It runs in
linear time. In the corresponding AND-OR graph, it works back down the graph until
it reaches a set of known facts.
(The backward-chaining algorithm is essentially identical to the AND-OR-GRAPH-SEARCH
algorithm.)
Backward chaining is a form of goal-directed reasoning.
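The goal-directed direction can be sketched recursively (a simplified illustration, not the textbook algorithm; the `seen` set guards against looping goals, and the small Horn KB below is hypothetical):

```python
def bc_entails(kb, goal, seen=frozenset()):
    """Backward chaining over definite clauses (premises, conclusion):
    the goal holds if some clause concludes it and all of that clause's
    premises can themselves be proved. Facts have empty premises."""
    if goal in seen:
        return False          # already trying to prove this goal: avoid a loop
    return any(all(bc_entails(kb, p, seen | {goal}) for p in prem)
               for prem, concl in kb if concl == goal)

# Hypothetical Horn KB: facts A, B; rules A^B=>L, A^P=>L, B^L=>M, L^M=>P, P=>Q
kb = [((), "A"), ((), "B"), (("A", "B"), "L"), (("A", "P"), "L"),
      (("B", "L"), "M"), (("L", "M"), "P"), (("P",), "Q")]
print(bc_entails(kb, "Q"))   # True
```

This naive version may re-prove subgoals; the linear-time behaviour cited above requires memoizing goals already proved, which is omitted here for brevity.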
Effective propositional model checking
The set of possible models, given a fixed propositional vocabulary, is finite, so
entailment can be checked by enumerating models. Efficient model-checking
inference algorithms for propositional logic include backtracking and local search
methods, and can often solve large problems quickly.
Two families of algorithms for the SAT problem based on model checking:
a. based on backtracking
b. based on local hill-climbing search

1. A complete backtracking algorithm

Davis–Putnam algorithm (DPLL):
DPLL embodies three improvements over the scheme of TT-ENTAILS?: early
termination, pure symbol heuristic, unit clause heuristic.
Tricks that enable SAT solvers to scale up to large problems: component
analysis, variable and value ordering, intelligent backtracking, random restarts, clever
indexing.
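Two of the three improvements can be seen in a compact sketch (an illustrative simplification of DPLL, assuming clauses as frozensets of literal strings with "~" for negation; the pure-symbol heuristic is omitted for brevity):

```python
def dpll(clauses, model=None):
    """Returns a satisfying model (dict) or None."""
    model = model or {}
    def value(lit):
        sym = lit.lstrip("~")
        if sym not in model:
            return None
        return (not model[sym]) if lit.startswith("~") else model[sym]
    remaining = []
    for clause in clauses:
        vals = {lit: value(lit) for lit in clause}
        if any(v is True for v in vals.values()):
            continue                      # early termination: clause already true
        if all(v is False for v in vals.values()):
            return None                   # early termination: clause already false
        remaining.append(frozenset(l for l, v in vals.items() if v is None))
    if not remaining:
        return model                      # every clause satisfied
    for clause in remaining:              # unit clause heuristic
        if len(clause) == 1:
            lit = next(iter(clause))
            return dpll(clauses, {**model, lit.lstrip("~"): not lit.startswith("~")})
    sym = next(iter(remaining[0])).lstrip("~")   # branch on an unassigned symbol
    return (dpll(clauses, {**model, sym: True})
            or dpll(clauses, {**model, sym: False}))

sat = [frozenset({"A", "B"}), frozenset({"~A", "C"}), frozenset({"~C"})]
print(dpll(sat))   # a satisfying model: C False, A False, B True
```

On this example unit propagation alone fixes C, then A, then B, with no branching at all, which is why these heuristics matter in practice.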
Local search algorithms
Local search algorithms can be applied directly to the SAT problem, provided that
we choose the right evaluation function (e.g. one that counts the number of
unsatisfied clauses).
These algorithms take steps in the space of complete assignments, flipping the truth
value of one symbol at a time.
The space usually contains many local minima, and various forms of randomness are
required to escape them. Local search methods such as WALKSAT can be used to
find solutions. Such algorithms are sound but not complete.
WALKSAT: one of the simplest and most effective algorithms.

On every iteration, the algorithm picks an unsatisfied clause and chooses randomly
between two ways to pick a symbol to flip:

either a. a "min-conflicts" step that minimizes the number of unsatisfied clauses in the
new state; or b. a "random walk" step that picks the symbol randomly.
When the algorithm returns a model, the input sentence is indeed satisfiable.

When the algorithm returns failure, there are two possible causes:

either a. the sentence is unsatisfiable, or b. we need to give the algorithm more time.
If we set max_flips = ∞ and p > 0, the algorithm will
either a. eventually return a model if one exists, or b. never terminate if the sentence is
unsatisfiable. Thus WALKSAT is useful when we expect a solution to exist, but cannot
always detect unsatisfiability.
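The two kinds of step can be sketched directly (an illustrative simplification of WALKSAT, again assuming clauses as frozensets of literal strings; the seed is only there to make the demo reproducible):

```python
import random

def walksat(clauses, p=0.5, max_flips=10000):
    """Returns a satisfying model (dict) or None on failure."""
    symbols = {l.lstrip("~") for c in clauses for l in c}
    model = {s: random.choice([True, False]) for s in symbols}
    def satisfied(c, m):
        return any(m[l.lstrip("~")] != l.startswith("~") for l in c)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c, model)]
        if not unsat:
            return model
        clause = random.choice(unsat)
        if random.random() < p:
            sym = random.choice(list(clause)).lstrip("~")   # random-walk step
        else:
            # min-conflicts step: flip the symbol leaving fewest unsatisfied clauses
            def unsat_after_flip(s):
                m2 = {**model, s: not model[s]}
                return sum(not satisfied(c, m2) for c in clauses)
            sym = min((l.lstrip("~") for l in clause), key=unsat_after_flip)
        model[sym] = not model[sym]
    return None   # failure: unsatisfiable, or we need more flips

random.seed(0)   # reproducible demo only
clauses = [frozenset({"A", "B"}), frozenset({"~A", "C"}), frozenset({"~C"})]
model = walksat(clauses)
print(model)
```

This instance has exactly one satisfying assignment (A false, B true, C false), so a returned model must be that one; on an unsatisfiable input the loop would simply exhaust max_flips, illustrating why WALKSAT cannot prove unsatisfiability.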
The landscape of random SAT problems

Underconstrained problem: When we look at satisfiability problems in CNF, an
underconstrained problem is one with relatively few clauses constraining the variables.
An overconstrained problem has many clauses relative to the number of variables and is
likely to have no solutions.
The notation CNFk(m, n) denotes a k-CNF sentence with m clauses and n symbols
(k literals per clause). Given a source of random sentences, the clauses are chosen
uniformly, independently and without replacement from among all clauses with k
different literals, each of which is positive or negative at random.
Hardness: problems right at the threshold > overconstrained problems >
underconstrained problems.

Satisfiability threshold conjecture: For every k ≥ 3, there is a threshold ratio r_k
such that, as n goes to infinity, the probability that CNFk(rn, n) is satisfiable
becomes 1 for all values of r below the threshold, and 0 for all values above.
(The conjecture remains unproven.)
3.1.1.4 Agents based on propositional logic

1. The current state of the world

We can associate propositions with time stamps to avoid contradiction,
e.g. ¬Stench³, Stench⁴.
Fluent: refers to an aspect of the world that changes (e.g. L^t_{x,y}, the agent
being at [x,y] at time t).
Atemporal variables: symbols associated with permanent aspects of the world do not
need a time superscript.
Effect axioms: specify the outcome of an action at the next time step.

Frame problem: some information is lost because the effect axioms fail to state
what remains unchanged as the result of an action.
Solution: add frame axioms explicitly asserting all the propositions that remain the
same.

Representational frame problem: the proliferation of frame axioms is inefficient;
the set of frame axioms will be O(mn) in a world with m different actions and n
fluents.
Solution: because the world exhibits locality (for humans, each action typically
changes no more than some number k of those fluents), define the transition model
with a set of axioms of size O(mk) rather than size O(mn).

Inferential frame problem: the problem of projecting forward the results of a
t-step plan of action in time O(kt) rather than O(nt).
Solution: change one's focus from writing axioms about actions to writing axioms
about fluents.
For each fluent F, we will have an axiom that defines the truth value of F^{t+1} in terms
of fluents at time t and the actions that may have occurred at time t.
The truth value of F^{t+1} can be set in one of two ways:
either a. the action at time t causes F to be true at t+1,
or b. F was already true at time t and the action at time t does not cause it to be
false.
An axiom of this form is called a successor-state axiom and has this schema:
F^{t+1} ⟺ ActionCausesF^t ∨ (F^t ∧ ¬ActionCausesNotF^t)

Qualification problem: specifying all the unusual exceptions that could cause the
action to fail.
2. A hybrid agent
Hybrid agent: combines the ability to deduce various aspects of the state of the world
with condition-action rules and with problem-solving algorithms.
The agent maintains and updates a knowledge base as well as a current plan.
The initial KB contains the atemporal axioms (those that don't depend on t).
At each time step, the new percept sentence is added along with all the axioms that
depend on t (such as the successor-state axioms). Then the agent uses logical
inference, by ASKing questions of the KB, to work out which squares are safe and
which have yet to be visited. The main body of the agent program constructs a plan
based on a decreasing priority of goals:
1. If there is a glitter, construct a plan to grab the gold, follow a route back to the
initial location and climb out of the cave;
2. Otherwise, if there is no current plan, plan a route (with A* search) to the closest
safe square not yet visited, making sure the route goes through only safe squares;
3. If there are no safe squares to explore and the agent still has an arrow, try to make
a safe square by shooting at one of the possible wumpus locations;
4. If this fails, look for a square to explore that is not provably unsafe;
5. If there is no such square, the mission is impossible; retreat to the initial
location and climb out of the cave.

Weakness: the computational expense goes up as time goes by.


3. Logical state estimation
To get a constant update time, we need to cache the results of inference.
Belief state: some representation of the set of all possible current states of the
world (used to replace the past history of percepts and all their ramifications).

We use a logical sentence involving the proposition symbols associated with the
current time step, plus the temporal symbols. Logical state estimation involves
maintaining a logical sentence that describes the set of possible states consistent
with the observation history. Each update step requires inference using the transition
model of the environment, which is built from successor-state axioms that specify
how each fluent changes.
State estimation: the process of updating the belief state as new percepts arrive.
Exact state estimation may require logical formulas whose size is exponential in the
number of symbols. One common scheme for approximate state estimation is to
represent the belief state as a conjunction of literals (a 1-CNF formula).
The agent simply tries to prove X^t and ¬X^t for each symbol X^t, given the belief state
at t−1. The conjunction of provable literals becomes the new belief state, and the
previous belief state is discarded. (This scheme may lose some information as time
goes along.)
The set of possible states represented by the 1-CNF belief state includes all states
that are in fact possible given the full percept history. The 1-CNF belief state acts as
a simple outer envelope, or conservative approximation.
4. Making plans by propositional inference
We can make plans by logical inference instead of A* search in Figure 7.20.
Basic idea:
1. Construct a sentence that includes:
a) Init0: a colection of assertions about the initial state;
b) Transition1, …, Transitiont: The successor-state axioms for al possible actions at
each time up to some maximum time t;
c) HaveGoldt𝖠ClimbedOutt: The assertion that the goal is achieved at time t.
2. Present the whole sentence to a SAT solver. If the solver finds a satisfying model,
the goal is achievable; else the planning is impossible.
3. Assuming a model is found, extract from the model those variables that represent
actions and are assigned true.
Together they represent a plan to achieve the goals.
Decisions within a logical agent can be made by SAT solving: finding possible models specifying future action sequences that reach the goal. This approach works only for fully observable or sensorless environments.
SATPLAN: a propositional planning algorithm. (It cannot be used in a partially observable environment.)
SATPLAN finds models for a sentence containing the initial state, the goal, the successor-state axioms, and the action exclusion axioms.
(Because the agent does not know how many steps it will take to reach the goal, the algorithm tries each possible number of steps t, up to some maximum conceivable plan length Tmax.)
• Precondition axioms: state that an action occurrence requires its preconditions to be satisfied; added to avoid generating plans with illegal actions.
• Action exclusion axioms: added to avoid the creation of plans with multiple simultaneous actions that interfere with each other.

Propositional logic does not scale to environments of unbounded size because it lacks the expressive power to deal concisely with time, space and universal patterns of relationships among objects.
3.2 FIRST ORDER PREDICATE LOGIC

3.2.1 Logic
The knowledge bases consist of sentences and these sentences are
expressed according to the syntax of the representation language, which specifies all
the sentences that are well formed. The notion of syntax is clear enough in ordinary arithmetic: “x + y = 4” is a well-formed sentence, whereas “x4y+ =” is not.
Logic must also define the semantics or meaning of sentences. The
semantics defines the truth of each sentence with respect to each possible world.
For example, the semantics for arithmetic specifies that the sentence “x + y =4” is
true in a world where x is 2 and y is 2, but false in a world where x is 1 and y is 1.
In standard logics, every sentence must be either true or false in each possible world; there is no “in between.” The two types of logic considered here are propositional logic and first-order logic.
Propositional Logic

The syntax of propositional logic specifies the allowable sentences, and its semantics is the way in which the truth of sentences is determined. Then we look at entailment, the relation between a sentence and another sentence that follows from it, and see how this leads to a simple algorithm for logical inference.
Fig 3.1 A BNF (Backus–Naur Form) grammar of sentences in propositional logic, along with operator precedences, from highest to lowest.
Truth tables for the five logical connectives are given below. To use the table to compute, for example, the value of P ∨ Q when P is true and Q is false, first look on the left for the row where P is true and Q is false (the third row). Then look in that row under the P ∨ Q column to see that the result is true.
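Since the table itself appears as a figure, the five connectives can be recomputed directly; a small Python sketch (the dictionary keys are illustrative):

```python
def truth_table():
    """Build the truth table for the five connectives, row by row,
    in the usual order: (F,F), (F,T), (T,F), (T,T)."""
    rows = []
    for P in (False, True):
        for Q in (False, True):
            rows.append({
                "P": P, "Q": Q,
                "¬P": not P,
                "P∧Q": P and Q,
                "P∨Q": P or Q,
                "P⇒Q": (not P) or Q,   # implication is false only when P true, Q false
                "P⇔Q": P == Q,
            })
    return rows

for row in truth_table():
    print(row)
```

The row ordering matches the lookup described above: the third row is the one with P true and Q false, and there P ∨ Q comes out true while P ⇒ Q comes out false.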
We used propositional logic as our representation language because it sufficed to illustrate the basic concepts of logic and knowledge-based agents. Unfortunately, propositional logic is too puny a language to represent knowledge of complex environments in a concise way. We therefore examine first-order logic, which is sufficiently expressive to represent a good deal of our commonsense knowledge.
Difference between Propositional logic & predicate logic:

Besides propositional logic, there are other logics as well, such as predicate logic and other modal logics. Propositional logic provides more efficient and scalable algorithms than the other logics. There are a few differences between propositional logic and first-order logic, some of which are mentioned below.

• Propositional logic deals with simple declarative propositions, while first-order


logic additionaly covers predicates and quantification.
• A proposition is a collection of declarative statements that has either a truth
value “true” or a truth value “false”. While a predicate logic is an expression
of one or more variables defined on some specific domain.

3.2.2 First-Order Logic
The language of first-order logic, whose syntax and semantics we define
in the next section, is built around objects and relations.

It has been so important to mathematics, philosophy, and artificial intelligence precisely because those fields, and indeed much of everyday human existence, can be usefully thought of as dealing with objects and the relations among them. First-order logic can also express facts about some or all of the objects in the universe.

The foundation of propositional logic (a declarative, compositional semantics that is context-independent and unambiguous) is retained, and a more expressive logic is built on that foundation, borrowing representational ideas from natural language while avoiding its drawbacks. When we look at the syntax of natural
language, the most obvious elements are nouns and noun phrases that refer to
objects (squares, pits, wumpuses) and verbs and verb phrases that refer to relations
among objects (is breezy, is adjacent to, shoots). Some of these relations are
functions—relations in which there is only one “value” for a given “input.” It is easy
to start listing examples of objects, relations, and functions.

• Objects: people, houses, numbers, theories, Ronald McDonald, colors, baseball games, wars, centuries . . .
• Relations: these can be unary relations or properties such as red, round, bogus, prime, multistoried . . . , or more general n-ary relations such as brother of, bigger than, inside, part of, has color, occurred after, owns, comes between, . . .
• Functions: father of, best friend, third inning of, one more than, beginning of . . .

3.2.3 Syntax and Semantics of First-Order Logic

The models of a logical language are the formal structures that constitute
the possible worlds under consideration. Each model links the vocabulary of the
logical sentences to elements of the possible world, so that the truth of any
sentence can be determined. Thus, models for propositional logic link proposition
symbols to predefined truth values.

Models for first-order logic are much more interesting. The domain of a model is the
set of objects or domain elements it contains.
The domain is required to be nonempty: every possible world must
contain at least one object. The objects in the model may be related in various ways.
In the figure, Richard and John are brothers. Formally speaking, a relation is just the
set of tuples of objects that are related. (A tuple is a collection of objects arranged in
a fixed order and is written with angle brackets surrounding the objects.) Thus, the
brotherhood relation in this model is the set

{ ⟨Richard the Lionheart, King John⟩, ⟨King John, Richard the Lionheart⟩ }.

Fig 3.2 A model containing five objects, two binary relations, three unary relations (indicated by labels on the objects), and one unary function, left-leg.

The crown is on King John’s head, so the “on head” relation contains just one tuple, ⟨the crown, King John⟩. The “brother” and “on head” relations are binary relations, that is, they relate pairs of objects. The model also contains unary relations, or properties: the “person” property is true of both Richard and John; the “king” property is true only of John (presumably because Richard is dead at this point); and the “crown” property is true only of the crown. For example, each person has one left leg, so the model has a unary “left leg” function that includes the following mappings:

⟨Richard the Lionheart⟩ → Richard’s left leg
⟨King John⟩ → John’s left leg .

3.2.4 Symbols and interpretations of First-Order Logic
The basic syntactic elements of first-order logic are the symbols that stand
for objects, relations, and functions. The symbols, therefore, come in three kinds:
constant symbols, which stand for objects; predicate symbols, which stand for
relations; and function symbols, which stand for functions. We adopt the convention
that these symbols will begin with uppercase letters. For example, we might use the
constant symbols Richard and John; the predicate symbols Brother , OnHead,
Person, King, and Crown; and the function symbol LeftLeg. As with proposition
symbols, the choice of names is entirely up to the user. Each predicate and function
symbol comes with an arity that fixes the number of arguments An interpretation is
that specifies exactly which objects, relations and functions are referred to by the
constant, predicate, and function symbols. One possible interpretation for our
example which a logician would cal the intended interpretation is as folows:

• Richard refers to Richard the Lionheart and John refers to the evil King John.
• Brother refers to the brotherhood relation, that is, the set of tuples of objects.
OnHead refers to the “on head” relation that holds between the crown and King
John; Person, King, and Crown refer to the sets of objects that are persons, kings,
and crowns.

• LeftLeg refers to the “left leg” function, that is, the mapping given above.
A term is a logical expression that refers to an object. Constant symbols are therefore terms, but it is not always convenient to have a distinct symbol to name every object. For example, in English we might use the expression “King John’s left leg” rather than giving a name to his leg. This is what function symbols are for: instead of using a constant symbol, we use LeftLeg(John). In the general case, a complex term is formed by a function symbol followed by a parenthesized list of terms as arguments to the function symbol. An atomic sentence (or atom for short) is formed from a predicate symbol optionally followed by a parenthesized list of terms, such as

Brother (Richard , John).
This states, under the intended interpretation given earlier, that Richard the Lionheart is the brother of King John.

Atomic sentences can have complex terms as arguments. Thus,

Married(Father(Richard), Mother(John))

states that Richard the Lionheart’s father is married to King John’s mother (again, under a suitable interpretation). An atomic sentence is true in a given model if the relation referred to by the predicate symbol holds among the objects referred to by the arguments.

Fig 3.3 The syntax of first-order logic

We can use logical connectives to construct more complex sentences, with the same
syntax and semantics as in propositional calculus.

¬Brother(LeftLeg(Richard), John)
Brother(Richard, John) ∧ Brother(John, Richard)
King(Richard) ∨ King(John)
¬King(Richard) ⇒ King(John) .

3.2.5 Quantifiers

First-order logic contains two standard quantifiers, called universal and existential.

Universal quantification (∀)

Rules such as “Squares neighboring the wumpus are smelly” and “All kings are
persons” are the bread and butter of first-order logic. The second rule, “All kings are
persons,” is written in first-order logic as

∀x King(x) ⇒ Person(x).

∀ is usually pronounced “For all . . .”. (Remember that the upside-down A stands for “all.”) Thus, the sentence says, “For all x, if x is a king, then x is a person.” The symbol x is called a variable. By convention, variables are lowercase letters. A variable is a term all by itself, and as such can also serve as the argument of a function, for example LeftLeg(x). A term with no variables is called a ground term.

The sentence ∀x P, where P is any logical expression, says that P is true for every object x. More precisely, ∀x P is true in a given model if P is true in all possible extended interpretations constructed from the interpretation given in the model, where each extended interpretation specifies a domain element to which x refers:

x → Richard the Lionheart,
x → King John,
x → Richard’s left leg,
x → John’s left leg,
x → the crown.

The universally quantified sentence ∀x King(x) ⇒ Person(x) is true in the original model if the sentence King(x) ⇒ Person(x) is true under each of the five extended interpretations. That is, the universally quantified sentence is equivalent to asserting the following five sentences:

Richard the Lionheart is a king ⇒ Richard the Lionheart is a person.
King John is a king ⇒ King John is a person.
Richard’s left leg is a king ⇒ Richard’s left leg is a person.
John’s left leg is a king ⇒ John’s left leg is a person.
The crown is a king ⇒ the crown is a person.

Since, in our model, King John is the only king, the second sentence asserts that he is a person, which matches our intuition.

Existential quantification (∃)

Universal quantification makes statements about every object. Similarly, we can make a statement about some object in the universe without naming it, by using an existential quantifier. To say, for example, that King John has a crown on his head, we write

∃x Crown(x) ∧ OnHead(x, John) .

∃x is pronounced “There exists an x such that . . .” or “For some x . . .”. Intuitively, the sentence ∃x P says that P is true for at least one object x. More precisely, ∃x P is true in a given model if P is true in at least one extended interpretation that assigns x to a domain element. That is, at least one of the following is true:

Richard the Lionheart is a crown ∧ Richard the Lionheart is on John’s head;
King John is a crown ∧ King John is on John’s head;
Richard’s left leg is a crown ∧ Richard’s left leg is on John’s head;
John’s left leg is a crown ∧ John’s left leg is on John’s head;
The crown is a crown ∧ the crown is on John’s head.

The fifth assertion is true in the model, so the original existentially quantified sentence is true in the model.
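Over a finite domain, both quantifier semantics reduce to iteration over all extended interpretations. A Python sketch using the five-object model of Fig 3.2 (the set names `king`, `person`, `crown`, and `on_head` are illustrative encodings of the relations, not names from the text):

```python
# The five-object domain of Fig 3.2 and its relations, encoded as sets.
domain = ["Richard", "John", "RichardsLeftLeg", "JohnsLeftLeg", "TheCrown"]
king = {"John"}
person = {"Richard", "John"}
crown = {"TheCrown"}
on_head = {("TheCrown", "John")}

def forall(pred):
    """∀x pred(x): true if pred holds in every extended interpretation."""
    return all(pred(x) for x in domain)

def exists(pred):
    """∃x pred(x): true if pred holds in at least one extended interpretation."""
    return any(pred(x) for x in domain)

# ∀x King(x) ⇒ Person(x): vacuously true for the four non-kings.
print(forall(lambda x: (x not in king) or (x in person)))       # True
# ∃x Crown(x) ∧ OnHead(x, John): witnessed by the crown.
print(exists(lambda x: x in crown and (x, "John") in on_head))  # True
```

The universal sentence succeeds because the implication is vacuously true whenever the antecedent King(x) is false, exactly as in the five expanded sentences listed earlier; the existential sentence succeeds on the fifth assertion alone.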

Nested quantifiers
The simplest case is where the quantifiers are of the same type. For example,
“Brothers are siblings” can be written as

∀x ∀y Brother (x, y) ⇒ Sibling(x, y).
Consecutive quantifiers of the same type can be written as one quantifier with
several variables. For example, to say that siblinghood is a symmetric relationship,
we can write

∀x, y Sibling(x, y) ⇔ Sibling(y, x).
In other cases we will have mixtures. “Everybody loves somebody” means that for
every person, there is someone that person loves:

∀x ∃y Loves(x, y) .

On the other hand, to say “There is someone who is loved by everyone,” we write

∃y ∀x Loves(x, y) .

The order of quantification is therefore very important. It becomes clearer if we insert parentheses: ∀x (∃y Loves(x, y)) says that everyone has a particular property, namely, the property that they love someone.

Connections between ∀ and ∃

The two quantifiers are actually intimately connected with each other, through negation. Asserting that everyone dislikes parsnips is the same as asserting there does not exist someone who likes them, and vice versa:

∀x ¬Likes(x, Parsnips) is equivalent to ¬∃x Likes(x, Parsnips).

We can go one step further: “Everyone likes ice cream” means that there is no one who does not like ice cream:

∀x Likes(x, IceCream) is equivalent to ¬∃x ¬Likes(x, IceCream).

Because ∀ is really a conjunction over the universe of objects and ∃ is a disjunction, it should not be surprising that they obey De Morgan’s rules. The De Morgan rules for quantified and unquantified sentences are as follows:

∀x ¬P ≡ ¬∃x P        ¬(P ∨ Q) ≡ ¬P ∧ ¬Q
¬∀x P ≡ ∃x ¬P        ¬(P ∧ Q) ≡ ¬P ∨ ¬Q
∀x P ≡ ¬∃x ¬P        P ∧ Q ≡ ¬(¬P ∨ ¬Q)
∃x P ≡ ¬∀x ¬P        P ∨ Q ≡ ¬(¬P ∧ ¬Q)
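The quantifier forms of De Morgan’s rules can be checked mechanically over a small finite domain by enumerating every possible extension of the predicate P; a brief Python sketch:

```python
from itertools import product

# Verify  ∀x ¬P(x) ≡ ¬∃x P(x)  and  ¬∀x P(x) ≡ ∃x ¬P(x)
# over a three-element domain, for all 2^3 possible extensions of P.
domain = range(3)
for extension in product([False, True], repeat=len(domain)):
    P = dict(zip(domain, extension))
    # ∀x ¬P(x) is a conjunction; ¬∃x P(x) negates a disjunction.
    assert all(not P[x] for x in domain) == (not any(P[x] for x in domain))
    # ¬∀x P(x) negates a conjunction; ∃x ¬P(x) is a disjunction.
    assert (not all(P[x] for x in domain)) == any(not P[x] for x in domain)
print("De Morgan dualities hold for all", 2 ** len(domain), "extensions")
```

This is exactly the observation above: over a finite domain, ∀ expands to a conjunction and ∃ to a disjunction, so the quantifier dualities are the propositional De Morgan laws applied object by object.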

Equality
First-order logic includes one more way to make atomic sentences, other than using
a predicate and terms as described earlier. We can use the equality symbol to signify
that two terms refer to the same object. For example,

Father(John) = Henry

says that the object referred to by Father(John) and the object referred to by Henry are the same. One proposal that is very popular in database systems works as follows. First, we insist that every constant symbol refer to a distinct object: the so-called unique-names assumption. Second, we assume that atomic sentences not known to be true are in fact false: the closed-world assumption.

3.3 UNIFICATION

Inference rules for quantifiers

Let us begin with universal quantifiers. Suppose our knowledge base contains the standard folkloric axiom stating that all greedy kings are evil:

∀x King(x) ∧ Greedy(x) ⇒ Evil(x) .

Then it seems quite permissible to infer any of the following sentences:

King(John) ∧ Greedy(John) ⇒ Evil(John)
King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)
King(Father(John)) ∧ Greedy(Father(John)) ⇒ Evil(Father(John)) .
...
The rule of Universal Instantiation (UI for short) says that we can infer any sentence obtained by substituting a ground term (a term without variables) for the variable. To write out the inference rule formally, we use the notion of substitutions. In the rule for Existential Instantiation, the variable is replaced by a single new constant symbol. For example, from the sentence

∃x Crown(x) ∧ OnHead(x, John)

we can infer the sentence

Crown(C1) ∧ OnHead(C1, John)


as long as C1 does not appear elsewhere in the knowledge base. The first idea is
that, just as an existentially quantified sentence can be replaced by one
instantiation, a universally quantified sentence can be replaced by the set of all
possible instantiations. For example, suppose our knowledge base contains just the
sentences,

∀x King(x) ∧ Greedy(x) ⇒ Evil(x)

King(John)

Greedy(John)

Brother (Richard, John) .

Then we apply UI to the first sentence using all possible ground-term substitutions from the vocabulary of the knowledge base, in this case {x/John} and {x/Richard}. We obtain

King(John) ∧ Greedy(John) ⇒ Evil(John)
King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard) ,

and we discard the universally quantified sentence.

3.3.1 A first-order inference rule
The inference that John is evil—that is, that {x/John} solves the query
Evil(x)—works like this: to use the rule that greedy kings are evil, find some x such
that x is a king and x is greedy, and then infer that this x is evil. More generally, if
there is some substitution θ that makes each of the conjuncts of the premise of the implication identical to sentences already in the knowledge base, then we can assert the conclusion of the implication after applying θ. In this case, the substitution θ = {x/John} achieves that aim. We can actually make the inference step do even more work. Suppose that instead of knowing Greedy(John), we know that everyone is greedy:

∀y Greedy(y)

Then we would still like to be able to conclude that Evil(John), because we know that John is a king (given) and John is greedy (because everyone is greedy). Applying the substitution {x/John, y/John} to the implication premises King(x) and Greedy(x) and the knowledge-base sentences King(John) and Greedy(y) will make them identical. Thus, we can infer the conclusion of the implication. This inference process can be captured as a single inference rule that we call Generalized Modus Ponens.

3.3.2 Unification
Lifted inference rules require finding substitutions that make different
logical expressions look identical. This process is called unification and is a key
component of all first-order inference algorithms. The UNIFY algorithm takes two
sentences and returns a unifier for them if one exists:

UNIFY(p, q) = θ  where  SUBST(θ, p) = SUBST(θ, q) .

Suppose we have a query AskVars(Knows(John, x)). Answers to this query can be found by finding all sentences in the knowledge base that unify with Knows(John, x). Here are the results of unification with four different sentences that might be in the knowledge base:

UNIFY(Knows(John, x), Knows(John, Jane)) = {x/Jane}
UNIFY(Knows(John, x), Knows(y, Bill)) = {x/Bill, y/John}
UNIFY(Knows(John, x), Knows(y, Mother(y))) = {y/John, x/Mother(John)}
UNIFY(Knows(John, x), Knows(x, Elizabeth)) = fail .

The last unification fails because x cannot take on the values John and Elizabeth at
the same time. Now, remember that Knows(x, Elizabeth) means “Everyone knows
Elizabeth,” so we should be able to infer that John knows Elizabeth. The problem
arises only because the two sentences happen to use the same variable name, x.
The problem can be avoided by standardizing apart one of the two sentences being
unified, which means renaming its variables to avoid name clashes. For example, we
can rename x in Knows(x, Elizabeth) to x17 (a new variable name) without changing
its meaning. Now the unification will work:

UNIFY(Knows(John, x), Knows(x17, Elizabeth)) = {x/Elizabeth, x17/John} .


There is one more complication: we said that UNIFY should return a substitution that
makes the two arguments look the same. But there could be more than one such
unifier.

For example, UNIFY(Knows(John, x), Knows(y, z)) could return {y/John, x/z} or
{y/John, x/John, z/John}. The first unifier gives Knows(John, z) as the result of
unification, whereas the second gives Knows(John, John). The second result could
be obtained from the first by an additional substitution {z/John}; we say that the
first unifier is more general than the second, because it places fewer restrictions on
the values of the variables. It turns out that, for every unifiable pair of expressions,
there is a single most general unifier (or MGU) that is unique up to
renaming and substitution of variables. (For example, {x/John} and {y/John} are
considered equivalent, as are {x/John, y/John} and {x/John, y/x}.) In this case it is

{y/John, x/z}.
The algorithm works by comparing the structures of the inputs, element by element. The substitution θ that is the argument to UNIFY is built up along the way and is used to make sure that later comparisons are consistent with bindings that were established earlier. In a compound expression such as F(A, B), the OP field picks out the function symbol F and the ARGS field picks out the argument list (A, B).

The Unification Algorithm

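A minimal Python sketch of the recursive structure just described. The term representation is an assumption, not from the text: lowercase strings act as variables, capitalized strings as constants, and tuples as compound expressions with the OP symbol first.

```python
def is_var(t):
    """Lowercase strings play the role of variables in this sketch."""
    return isinstance(t, str) and t[0].islower()

def unify(x, y, theta):
    """Return a unifier extending theta, or None on failure."""
    if theta is None:
        return None
    if x == y:
        return theta
    if is_var(x):
        return unify_var(x, y, theta)
    if is_var(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):        # unify OP and ARGS element by element
            theta = unify(xi, yi, theta)
        return theta
    return None

def unify_var(var, t, theta):
    if var in theta:
        return unify(theta[var], t, theta)
    if is_var(t) and t in theta:
        return unify(var, theta[t], theta)
    if occurs(var, t, theta):
        return None                     # occur check
    return {**theta, var: t}

def occurs(var, t, theta):
    """True if var appears inside t (following existing bindings)."""
    if var == t:
        return True
    if is_var(t) and t in theta:
        return occurs(var, theta[t], theta)
    if isinstance(t, tuple):
        return any(occurs(var, ti, theta) for ti in t)
    return False

print(unify(('Knows', 'John', 'x'), ('Knows', 'y', ('Mother', 'y')), {}))
# {'y': 'John', 'x': ('Mother', 'y')}  (y is bound to John)
print(unify(('Knows', 'John', 'x'), ('Knows', 'x', 'Elizabeth'), {}))
# None (fail)
```

Applying the resulting substitution recursively to x yields Mother(John), matching the answer {y/John, x/Mother(John)} given earlier. The second call fails for the reason described above: both sentences happen to use the variable x, so one of them would first need to be standardized apart.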
3.3.3 Storage and retrieval
Underlying the TELL and ASK functions used to inform and interrogate a knowledge base are the more primitive STORE and FETCH functions. STORE(s) stores a sentence s into the knowledge base and FETCH(q) returns all unifiers such that the query q unifies with some sentence in the knowledge base. The problem we used to illustrate unification, finding all facts that unify with Knows(John, x), is an instance of fetching. The simplest way to implement STORE and FETCH is to keep all the facts in one long list and unify each query against every element of the list. We can make FETCH more efficient by ensuring that unifications are attempted only with sentences that have some chance of unifying.

Predicate indexing is useful when there are many predicate symbols but only a few clauses for each symbol. Sometimes, however, a predicate has many clauses. For example, suppose that the tax authorities want to keep track of who employs whom, using a predicate Employs(x, y). This would be a very large bucket with perhaps millions of employers and tens of millions of employees. Answering a query such as Employs(x, Richard) with predicate indexing would require scanning the entire bucket.

For other queries, such as Employs(IBM , y), we would need to have indexed the
facts by combining the predicate with the first argument. Therefore, facts can be
stored under multiple index keys, rendering them instantly accessible to various
queries that they might unify with.
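Predicate indexing itself is easy to sketch: keep one bucket per predicate symbol, and let FETCH attempt matching only within the query’s bucket. The following Python sketch simplifies unification to matching a query against ground facts; the class and function names (`KB`, `store`, `fetch`, `match`) are illustrative, not the text’s STORE/FETCH themselves.

```python
from collections import defaultdict

def match(query, fact):
    """Match a query tuple (which may contain lowercase variables) against
    a ground fact; return the binding dict, or None on failure."""
    if len(query) != len(fact):
        return None
    theta = {}
    for q, f in zip(query, fact):
        if isinstance(q, str) and q[0].islower():   # variable position
            if theta.get(q, f) != f:                # inconsistent rebinding
                return None
            theta[q] = f
        elif q != f:                                # constant mismatch
            return None
    return theta

class KB:
    def __init__(self):
        self.index = defaultdict(list)   # predicate symbol -> bucket of facts

    def store(self, fact):               # fact: ('Employs', 'IBM', 'Richard')
        self.index[fact[0]].append(fact)

    def fetch(self, query):
        for fact in self.index[query[0]]:   # scan only the matching bucket
            theta = match(query, fact)
            if theta is not None:
                yield theta

kb = KB()
kb.store(('Employs', 'IBM', 'Richard'))
kb.store(('Employs', 'IBM', 'Sam'))
kb.store(('Knows', 'John', 'Jane'))
print(list(kb.fetch(('Employs', 'x', 'Richard'))))  # [{'x': 'IBM'}]
```

The Knows fact is never examined by the Employs query, which is the whole point of the index; the Employs(x, Richard) problem discussed above would instead call for an additional index keyed on (predicate, second argument).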

Given a sentence to be stored, it is possible to construct indices for all possible queries that unify with it. For the fact Employs(IBM, Richard), the queries are

Employs(IBM, Richard)   Does IBM employ Richard?
Employs(x, Richard)     Who employs Richard?
Employs(IBM, y)         Whom does IBM employ?
Employs(x, y)           Who employs whom?

These queries form a subsumption lattice, as shown in Figure 3.4(a). The lattice has
some interesting properties. For example, the child of any node in the lattice is
obtained from its parent by a single substitution; and the “highest” common
descendant of any two nodes is the result of applying their most general unifier. The
portion of the lattice above any ground fact can be constructed systematically. A
sentence with repeated constants has a slightly different lattice, as shown in Figure
3.4(b). Function symbols and variables in the sentences to be stored introduce still
more interesting lattice structures. For most AI systems, the number of facts to be
stored is small enough that efficient indexing is considered a solved problem. For
commercial databases, where facts number in the billions, the problem has been the
subject of intensive study and technology development.

3.4 FORWARD CHAINING
The idea is simple: start with the atomic sentences in the knowledge base and apply Modus Ponens in the forward direction, adding new atomic sentences, until no further inferences can be made. Here, we explain how the algorithm is applied to first-order definite clauses. Definite clauses such as Situation ⇒ Response are especially useful for systems that make inferences in response to newly arrived information. Many systems can be defined this way, and forward chaining can be implemented very efficiently.

3.4.1 First-order definite clauses

A definite clause either is atomic or is an implication whose antecedent is a conjunction of positive literals and whose consequent is a single positive literal. The following are first-order definite clauses:

King(x) ∧ Greedy(x) ⇒ Evil(x) .
King(John) .
Greedy(y) .

Unlike propositional literals, first-order literals can include variables, in which case those variables are assumed to be universally quantified. (Typically, we omit universal quantifiers when writing definite clauses.) Not every knowledge base can be converted into a set of definite clauses because of the single-positive-literal restriction, but many can. Consider the following problem:

The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.

We will prove that “West is a criminal”. First, we will represent these facts as first-
order definite clauses. The next section shows how the forward-chaining algorithm
solves the problem.

“. . . it is a crime for an American to sell weapons to hostile nations”:

American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x) ------ 1

“Nono . . . has some missiles.” The sentence ∃x Owns(Nono, x) ∧ Missile(x) is transformed into two definite clauses by Existential Instantiation, introducing a new constant M1:

Owns(Nono, M1) --------------- 2
Missile(M1) -------------------- 3
“All of its missiles were sold to it by Colonel West”:

Missile(x) ∧ Owns(Nono, x) ⇒ Sells(West, x, Nono) . ------------ 4

We will also need to know that missiles are weapons:

Missile(x) ⇒ Weapon(x) --------------- 5

and we must know that an enemy of America counts as “hostile”:

Enemy(x, America) ⇒ Hostile(x) . ------------ 6

“West, who is American . . .”:

American(West) .---------------- 7

“The country Nono, an enemy of America . . .”:

Enemy(Nono,America) . -------- 8
This knowledge base contains no function symbols and is therefore an instance of the class of Datalog knowledge bases. Datalog is a language that is restricted to first-order definite clauses with no function symbols. Datalog gets its name because it can represent the type of statements typically made in relational databases.

3.4.2 A simple forward-chaining algorithm

Starting from the known facts, it triggers all the rules whose premises are satisfied, adding their conclusions to the known facts. The process repeats until the query is answered (assuming that just one answer is required) or no new facts are added. Notice that a fact is not “new” if it is just a renaming of a known fact. One sentence is a renaming of another if they are identical except for the names of the variables. For example, Likes(x, IceCream) and Likes(y, IceCream) are renamings of each other because they differ only in the choice of x or y; their meanings are identical: everyone likes ice cream.

We use our crime problem to illustrate how FOL-FC-ASK works. The implication sentences are (1), (4), (5), and (6). Two iterations are required:

• On the first iteration, rule (1) has unsatisfied premises.
Rule (4) is satisfied with {x/M1}, and Sells(West, M1, Nono) is added.
Rule (5) is satisfied with {x/M1}, and Weapon(M1) is added.
Rule (6) is satisfied with {x/Nono}, and Hostile(Nono) is added.

• On the second iteration, rule (1) is satisfied with {x/West, y/M1, z/Nono}, and Criminal(West) is added.
Figure 3.5 shows the proof tree that is generated. Notice that no new inferences are possible at this point because every sentence that could be concluded by forward chaining is already contained explicitly in the KB. Such a knowledge base is called a fixed point of the inference process. Fixed points reached by forward chaining with first-order definite clauses are similar to those for propositional forward chaining; the principal difference is that a first-order fixed point can include universally quantified atomic sentences. FOL-FC-ASK is easy to analyze.
First, it is sound, because every inference is just an application of Generalized Modus Ponens, which is sound. Second, it is complete for definite clause knowledge bases; that is, it answers every query whose answers are entailed by any knowledge base of definite clauses. For general definite clauses with function symbols, FOL-FC-ASK can generate infinitely many new facts, so we need to be more careful. For the case in which an answer to the query sentence q is entailed, we must appeal to Herbrand’s theorem to establish that the algorithm will find a proof.
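The two iterations above can be reproduced with a small forward chainer over ground facts. This is a simplified sketch of the idea behind FOL-FC-ASK, not the algorithm from the figure; the tuple encoding of atoms and the function names are assumptions.

```python
# Facts and rules of the crime example, as Datalog-style tuples;
# lowercase symbols are variables, capitalized symbols are constants.
facts = {('American', 'West'), ('Enemy', 'Nono', 'America'),
         ('Owns', 'Nono', 'M1'), ('Missile', 'M1')}
rules = [  # (premises, conclusion)
    ([('American', 'x'), ('Weapon', 'y'), ('Sells', 'x', 'y', 'z'),
      ('Hostile', 'z')], ('Criminal', 'x')),
    ([('Missile', 'x'), ('Owns', 'Nono', 'x')], ('Sells', 'West', 'x', 'Nono')),
    ([('Missile', 'x')], ('Weapon', 'x')),
    ([('Enemy', 'x', 'America')], ('Hostile', 'x')),
]

def substitute(atom, theta):
    return tuple(theta.get(t, t) for t in atom)

def match_premises(premises, theta, facts):
    """Yield every substitution extending theta that makes all premises facts."""
    if not premises:
        yield theta
        return
    first, rest = premises[0], premises[1:]
    for fact in facts:
        if len(fact) == len(first) and fact[0] == first[0]:
            new, ok = dict(theta), True
            for p, f in zip(first[1:], fact[1:]):
                if p[0].islower():                   # variable argument
                    if new.setdefault(p, f) != f:
                        ok = False
                        break
                elif p != f:                         # constant mismatch
                    ok = False
                    break
            if ok:
                yield from match_premises(rest, new, facts)

def forward_chain(facts, rules):
    while True:
        new = set()
        for premises, conclusion in rules:
            for theta in match_premises(premises, {}, facts):
                atom = substitute(conclusion, theta)
                if atom not in facts:
                    new.add(atom)
        if not new:
            return facts                             # fixed point reached
        facts = facts | new

result = forward_chain(facts, rules)
print(('Criminal', 'West') in result)  # True
```

On the first pass this adds Sells(West, M1, Nono), Weapon(M1), and Hostile(Nono); on the second, Criminal(West); a third pass adds nothing, so the fixed point has been reached.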

A conceptually straightforward but very inefficient forward-chaining algorithm is shown below. On each iteration, it adds to the KB all the atomic sentences that can be inferred in one step from the implication sentences and the atomic sentences already in the KB. The function STANDARDIZE-VARIABLES replaces all variables in its arguments with new ones that have not been used before.

The step-by-step construction of the proof tree (Steps 1 to 3), the forward-chaining algorithm (FOL-FC-ASK), a worked example, and its solution are shown in the accompanying figures.
3.4.3 Efficient forward chaining
There are three possible sources of inefficiency. First, the “inner loop” of
the algorithm involves finding all possible unifiers such that the premise of a rule
unifies with a suitable set of facts in the knowledge base. This is often called pattern
matching and can be very expensive. Second, the algorithm rechecks every rule on
every iteration to see whether its premises are satisfied, even if very few additions
are made to the knowledge base on each iteration. Finally, the algorithm might
generate many facts that are irrelevant to the goal.

Matching rules against known facts

The problem of matching the premise of a rule against the facts in the knowledge base might seem simple enough. For example, suppose we want to apply the rule Missile(x) ⇒ Weapon(x).

Then we need to find all the facts that unify with Missile(x); in a suitably indexed knowledge base, this can be done in constant time per fact. Now consider a rule such as Missile(x) ∧ Owns(Nono, x) ⇒ Sells(West, x, Nono).

Again, we can find all the objects owned by Nono in constant time per object; then, for each object, we could check whether it is a missile. If the knowledge base contains many objects owned by Nono and very few missiles, however, it would be better to find all the missiles first and then check whether they are owned by Nono. This is the conjunct ordering problem: find an ordering to solve the conjuncts of the rule premise so that the total cost is minimized. It turns out that finding the optimal ordering is NP-hard, but good heuristics are available. The connection between pattern matching and constraint satisfaction is actually very close. We can view each conjunct as a constraint on the variables that it contains; for example, Missile(x) is a unary constraint on x. Extending this idea, we can express every finite-domain CSP as a single definite clause together with some associated ground facts.

Consider the map-coloring problem shown again in Figure 3.6(a). An equivalent formulation as a single definite clause is given in Figure 3.6(b). Clearly, the conclusion Colorable() can be inferred only if the CSP has a solution. Because CSPs in general include 3-SAT problems as special cases, we can conclude that matching a definite clause against a set of facts is NP-hard.

It might seem rather depressing that forward chaining has an NP-hard matching problem in its inner loop. There are three ways to cheer ourselves up:

• We can remind ourselves that most rules in real-world knowledge bases are small and simple (like the rules in our crime example) rather than large and complex (like the CSP formulation in Figure 3.6).

• We can consider subclasses of rules for which matching is efficient. Essentially every Datalog clause can be viewed as defining a CSP, so matching will be tractable just when the corresponding CSP is tractable.

• We can try to eliminate redundant rule-matching attempts in the forward-chaining algorithm, as described next.

Incremental forward chaining
When we showed how forward chaining works on the crime example, we cheated; in particular, we omitted some of the rule matching done by the algorithm. For example, on the second iteration, the rule

Missile(x) ⇒ Weapon(x)

matches against Missile(M1) (again), and of course the conclusion Weapon(M1) is already known, so nothing happens. Such redundant rule matching can be avoided if we make the following observation: every new fact inferred on iteration t must be derived from at least one new fact inferred on iteration t − 1. This is true because any inference that does not require a new fact from iteration t − 1 could have been done at iteration t − 1 already. Typically, only a small fraction of the rules in the knowledge base are actually triggered by the addition of a given fact. This means that a great deal of redundant work is done in repeatedly constructing partial matches that have some unsatisfied premises. Our crime example is rather too small to show this effectively, but notice that a partial match is constructed on the first iteration between the rule

American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x)

and the fact American(West). This partial match is then discarded and rebuilt on the second iteration (when the rule succeeds). It would be better to retain and gradually complete the partial matches as new facts arrive, rather than discarding them.
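The incremental observation can be sketched with a minimal propositional encoding of the crime example. This is only a sketch: the atom names such as `Missile_M1` are invented stand-ins for the ground facts, and real systems retain partial matches (as in the Rete algorithm) rather than rechecking whole premises.

```python
# Incremental forward chaining sketch: on each iteration, a rule is
# considered only if at least one premise was newly derived on the
# previous iteration, avoiding redundant re-matching.

rules = [
    ({"Missile_M1"}, "Weapon_M1"),
    ({"Missile_M1", "Owns_Nono_M1"}, "Sells_West_M1_Nono"),
    ({"Enemy_Nono_America"}, "Hostile_Nono"),
    ({"American_West", "Weapon_M1", "Sells_West_M1_Nono",
      "Hostile_Nono"}, "Criminal_West"),
]
facts = {"American_West", "Missile_M1", "Owns_Nono_M1",
         "Enemy_Nono_America"}

new = set(facts)            # facts derived on the previous iteration
while new:
    delta = set()
    for premises, conclusion in rules:
        if premises & new:  # skip rules untouched by any new fact
            if premises <= facts and conclusion not in facts:
                delta.add(conclusion)
    facts |= delta
    new = delta
```

On the second iteration the rule Missile(x) ⇒ Weapon(x) is never reconsidered, because none of its premises is newly derived.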

Irrelevant facts
The final source of inefficiency in forward chaining appears to be intrinsic to the
approach and also arises in the propositional context. Forward chaining makes all
allowable inferences based on the known facts, even if they are irrelevant to the goal
at hand.

In our crime example, there were no rules capable of drawing irrelevant conclusions, so the lack of directedness was not a problem. In other cases (e.g., if many rules describe the eating habits of Americans and the prices of missiles), FOL-FC-ASK will generate many irrelevant conclusions.

One way to avoid drawing irrelevant conclusions is to use backward chaining. Another solution is to restrict forward chaining to a selected subset of rules. A third approach has emerged in the field of deductive databases, which are large-scale databases, like relational databases, but which use forward chaining as the standard inference tool rather than SQL queries.

The idea is to rewrite the rule set, using information from the goal, so that only relevant variable bindings (those belonging to a so-called magic set) are considered during forward inference. For example, if the goal is Criminal(West), the rule that concludes Criminal(x) will be rewritten to include an extra conjunct that constrains the value of x:

Magic(x) ∧ American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x).

The fact Magic(West) is also added to the KB. In this way, even if the knowledge base contains facts about millions of Americans, only Colonel West will be considered during the forward inference process. The complete process for defining magic sets and rewriting the knowledge base is too complex to go into here, but the basic idea is to perform a sort of “generic” backward inference from the goal in order to work out which variable bindings need to be constrained. The magic sets approach can therefore be thought of as a kind of hybrid between forward inference and backward preprocessing.
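The effect of the Magic(x) guard can be shown with a toy sketch. The data and names below are invented for illustration; a real deductive database performs the rule rewriting and magic-fact seeding automatically.

```python
# Toy illustration of the magic-set idea: an extra Magic(x) guard on
# the Criminal rule means forward inference only explores bindings
# relevant to the goal Criminal(West).

american = {"West", "Alice", "Bob", "Carol"}  # imagine millions more
magic = {"West"}                              # seeded from the goal

# Without the guard, every American is a candidate binding for x.
unguarded = [x for x in american]

# With the guard, only bindings in the magic set are explored further.
guarded = [x for x in american if x in magic]
```

Only the binding x = West survives the guard, no matter how many American facts the KB holds.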

5. BACKWARD CHAINING
The second major family of logical inference algorithms uses the backward
chaining approach. These algorithms work backward from the goal, chaining through
rules to find known facts that support the proof. We describe the basic algorithm,
and then we describe how it is used in logic programming, which is the most widely
used form of automated reasoning. We also see that backward chaining has some
disadvantages compared with forward chaining, and we look at ways to overcome
them. Finally, we look at the close connection between logic programming and
constraint satisfaction problems.

1. A backward-chaining algorithm

In the backward-chaining algorithm for definite clauses, FOL-BC-ASK(KB, goal) will be proved if the knowledge base contains a clause of the form lhs ⇒ goal, where lhs (left-hand side) is a list of conjuncts. An atomic fact like American(West) is considered a clause whose lhs is the empty list. Now a query that contains variables might be proved in multiple ways. For example, the query Person(x) could be proved with the substitution {x/John} as well as with {x/Richard}. So we implement FOL-BC-ASK as a generator: a function that returns multiple times, each time giving one possible result. Backward chaining is a kind of AND/OR search; the OR part because the goal query can be proved by any rule in the knowledge base, and the AND part because all the conjuncts in the lhs of a clause must be proved. FOL-BC-OR works by fetching all clauses that might unify with the goal, standardizing the variables in the clause to be brand-new variables, and then, if the rhs of the clause does indeed unify with the goal, proving every conjunct in the lhs, using FOL-BC-AND.
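The generator behaviour can be sketched for the Person(x) case mentioned above. This handles only the trivial variable-against-constant case, not full unification, and the function name merely echoes FOL-BC-ASK; it is not the full algorithm.

```python
# Sketch of FOL-BC-ASK as a generator: each yield is one possible
# substitution for the query. Facts are illustrative.

facts = [("Person", "John"), ("Person", "Richard")]

def fol_bc_ask(predicate, var):
    """Yield one substitution per fact matching the goal predicate.
    Only the variable-against-constant case is handled here."""
    for pred, constant in facts:
        if pred == predicate:
            yield {var: constant}

answers = list(fol_bc_ask("Person", "x"))
```

Calling the generator for Person(x) yields {x/John} on the first resumption and {x/Richard} on the second, which is exactly the multiple-result behaviour the text describes.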

Backward chaining, as we have written it, is clearly a depth-first search algorithm. This means that its space requirements are linear in the size of the proof (neglecting, for now, the space required to accumulate the solutions).

It also means that backward chaining (unlike forward chaining) suffers from
problems with repeated states and incompleteness.
The proof tree (Fig 3.7) constructed by backward chaining proves that West is a criminal. The tree should be read depth first, left to right. To prove Criminal(West), we have to prove the four conjuncts below it. Some of these are in the knowledge base, and others require further backward chaining. Bindings for each successful unification are shown next to the corresponding subgoal. Note that once one subgoal in a conjunction succeeds, its substitution is applied to subsequent subgoals. Thus, by the time FOL-BC-ASK gets to the last conjunct, originally Hostile(z), z is already bound to Nono.

Fig 3.7 Backward chaining to prove that West is a criminal

A simple backward-chaining algorithm

Comparison of forward and backward chaining:

1. Forward chaining starts from known facts and applies inference rules to extract more data until it reaches the goal. Backward chaining starts from the goal and works backward through inference rules to find the required facts that support the goal.

2. Forward chaining is a bottom-up approach. Backward chaining is a top-down approach.

3. Forward chaining is known as a data-driven inference technique, as we reach the goal using the available data. Backward chaining is known as a goal-driven technique, as we start from the goal and divide it into sub-goals to extract the facts.

4. Forward chaining reasoning applies a breadth-first search strategy. Backward chaining reasoning applies a depth-first search strategy.

5. Forward chaining tests all the available rules. Backward chaining tests only the few required rules.

6. Forward chaining is suitable for planning, monitoring, control, and interpretation applications. Backward chaining is suitable for diagnostic, prescription, and debugging applications.

7. Forward chaining can generate an infinite number of possible conclusions. Backward chaining generates a finite number of possible conclusions.

8. Forward chaining operates in the forward direction. Backward chaining operates in the backward direction.

9. Forward chaining is aimed at any conclusion. Backward chaining is aimed only at the required data.

Example:

Goal state: Z

Facts: A, E, B, C

Rules: F ∧ B ⇒ Z, C ∧ D ⇒ F, A ⇒ D

Solution: To prove Z, backward chaining needs F and B. B is a known fact. To prove F, we need C and D; C is known, and D follows from the fact A via A ⇒ D. All sub-goals succeed, so Z is proved.
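This example can be solved by a small recursive backward chainer. It is a minimal sketch: a real implementation would also track substitutions and guard against looping rule sets.

```python
# Backward chaining on the example: goal Z, facts {A, E, B, C},
# rules F∧B⇒Z, C∧D⇒F, A⇒D.

facts = {"A", "E", "B", "C"}
rules = {"Z": [["F", "B"]], "F": [["C", "D"]], "D": [["A"]]}

def prove(goal):
    """True if the goal is a known fact or the premises of some
    rule for it can all be proved recursively."""
    if goal in facts:
        return True
    return any(all(prove(p) for p in premises)
               for premises in rules.get(goal, []))
```

`prove("Z")` succeeds by proving D from A, then F from C ∧ D, then Z from F ∧ B.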

6. RESOLUTION
The last of our three families of logical systems is based on resolution.
1. Conjunctive normal form for first-order logic
As in the propositional case, first-order resolution requires that sentences
be in conjunctive normal form (CNF)—that is, a conjunction of clauses, where each
clause is a disjunction of literals. Literals can contain variables, which are assumed to be universally quantified. For example, the sentence

∀x American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x)

becomes, in CNF,

¬American(x) ∨ ¬Weapon(y) ∨ ¬Sells(x, y, z) ∨ ¬Hostile(z) ∨ Criminal(x).
Every sentence of first-order logic can be converted into an inferentially equivalent
CNF sentence. In particular, the CNF sentence will be unsatisfiable just when the
original sentence is unsatisfiable, so we have a basis for doing proofs by
contradiction on the CNF sentences. The principal difference arises from the need to eliminate existential quantifiers. We illustrate the procedure by translating the sentence “Everyone who loves all animals is loved by someone”, or

∀x [∀y Animal(y) ⇒ Loves(x, y)] ⇒ [∃y Loves(y, x)].

The steps are as follows:

• Eliminate implications:

∀x [¬∀y ¬Animal(y) ∨ Loves(x, y)] ∨ [∃y Loves(y, x)].

• Move ¬ inwards: In addition to the usual rules for negated connectives, we need rules for negated quantifiers. Thus, we have

¬∀x p becomes ∃x ¬p
¬∃x p becomes ∀x ¬p.

Our sentence goes through the following transformations:

∀x [∃y ¬(¬Animal(y) ∨ Loves(x, y))] ∨ [∃y Loves(y, x)].
∀x [∃y ¬¬Animal(y) ∧ ¬Loves(x, y)] ∨ [∃y Loves(y, x)].
∀x [∃y Animal(y) ∧ ¬Loves(x, y)] ∨ [∃y Loves(y, x)].

Notice how a universal quantifier (∀y) in the premise of the implication has become an existential quantifier.

• Standardize variables: For sentences like (∃x P(x)) ∨ (∃x Q(x)) which use the same variable name twice, change the name of one of the variables. This avoids confusion later when we drop the quantifiers. Thus, we have

∀x [∃y Animal(y) ∧ ¬Loves(x, y)] ∨ [∃z Loves(z, x)].


• Skolemize: Skolemization is the process of removing existential quantifiers by elimination. In the simple case, it is just like the Existential Instantiation rule. If we blindly apply the rule to the two matching parts we get

∀x [Animal(A) ∧ ¬Loves(x, A)] ∨ Loves(B, x),

which has the wrong meaning entirely: it says that everyone either fails to love a particular animal A or is loved by some particular entity B. In fact, our original sentence allows each person to fail to love a different animal or to be loved by a different person. Thus, we want the Skolem entities to depend on x:

∀x [Animal(F(x)) ∧ ¬Loves(x, F(x))] ∨ Loves(G(x), x).

Here F and G are Skolem functions. The general rule is that the arguments of the Skolem function are all the universally quantified variables in whose scope the existential quantifier appears.

• Drop universal quantifiers: At this point, all remaining variables must be universally quantified. Moreover, the sentence is equivalent to one in which all the universal quantifiers have been moved to the left. We can therefore drop the universal quantifiers:

[Animal(F(x)) ∧ ¬Loves(x, F(x))] ∨ Loves(G(x), x).
• Distribute ∨ over ∧:

[Animal(F(x)) ∨ Loves(G(x), x)] ∧ [¬Loves(x, F(x)) ∨ Loves(G(x), x)].

This step may also require flattening out nested conjunctions and disjunctions. The sentence is now in CNF and consists of two clauses. It is quite unreadable. (It may help to explain that the Skolem function F(x) refers to the animal potentially unloved by x, whereas G(x) refers to someone who might love x.)

3.6.2 The resolution inference rule

Two clauses, which are assumed to be standardized apart so that they share no variables, can be resolved if they contain complementary literals. Propositional literals are complementary if one is the negation of the other; first-order literals are complementary if one unifies with the negation of the other. This rule is called the binary resolution rule because it resolves exactly two literals. The binary resolution rule by itself does not yield a complete inference procedure. The full resolution rule resolves subsets of literals in each clause that are unifiable. An alternative approach is to extend factoring (the removal of redundant literals) to the first-order case.

Steps for Resolution:

• Conversion of facts into first-order logic.

• Convert FOL statements into CNF.

• Negate the statement which needs to be proved (proof by contradiction).

• Draw the resolution graph (unification).

3.6.3 Example proofs


Example 1: The first is the crime example from forward chaining. The sentences in CNF are

¬American(x) ∨ ¬Weapon(y) ∨ ¬Sells(x, y, z) ∨ ¬Hostile(z) ∨ Criminal(x)
¬Missile(x) ∨ ¬Owns(Nono, x) ∨ Sells(West, x, Nono)
¬Enemy(x, America) ∨ Hostile(x)
¬Missile(x) ∨ Weapon(x)
Owns(Nono, M1)
Missile(M1)
American(West)
Enemy(Nono, America)

We also include the negated goal ¬Criminal(West).

Proof: A resolution proof that West is a criminal.
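A brute-force refutation over the ground instances of these clauses can be sketched as follows. Clauses are frozensets of string literals with "-" marking negation; this is an illustrative saturation loop, not an efficient prover, and it works only because the clauses are already ground.

```python
# Ground resolution refutation for the crime example.

clauses = {
    frozenset({"-American(West)", "-Weapon(M1)",
               "-Sells(West,M1,Nono)", "-Hostile(Nono)",
               "Criminal(West)"}),
    frozenset({"-Missile(M1)", "-Owns(Nono,M1)",
               "Sells(West,M1,Nono)"}),
    frozenset({"-Enemy(Nono,America)", "Hostile(Nono)"}),
    frozenset({"-Missile(M1)", "Weapon(M1)"}),
    frozenset({"Owns(Nono,M1)"}),
    frozenset({"Missile(M1)"}),
    frozenset({"American(West)"}),
    frozenset({"Enemy(Nono,America)"}),
    frozenset({"-Criminal(West)"}),     # negated goal
}

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def saturate(clauses):
    """Resolve all pairs repeatedly until the empty clause appears
    (unsatisfiable) or no new resolvents can be produced."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for lit in c1:
                    if negate(lit) in c2:
                        resolvent = (c1 - {lit}) | (c2 - {negate(lit)})
                        if not resolvent:
                            return True    # derived the empty clause
                        new.add(resolvent)
        if new <= clauses:
            return False
        clauses |= new
```

Running `saturate` on these clauses derives the empty clause, confirming that the KB together with ¬Criminal(West) is unsatisfiable.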

Notice the structure: a single “spine” beginning with the goal clause, resolving against clauses from the knowledge base until the empty clause is generated. This is characteristic of resolution on Horn clause knowledge bases. In fact, the clauses along the main spine correspond exactly to the consecutive values of the goals variable in the backward-chaining algorithm. Our second example makes use of Skolemization and involves clauses that are not definite clauses. This results in a somewhat more complex proof structure. In English, the problem is as follows:

1. John likes all kinds of food.

2. Apples and vegetables are food.

3. Anything anyone eats and is not killed by is food.

4. Anil eats peanuts and is still alive.

5. Harry eats everything that Anil eats.

Prove by resolution that:

John likes peanuts.

Step-1: Conversion of Facts into FOL

In the first step we will convert all the given statements into first-order logic.

Step-2: Conversion of FOL into CNF

In first-order logic resolution, it is required to convert the FOL into CNF, as the CNF form makes resolution proofs easier.

• Eliminate existential quantifiers by elimination

In this step, we will eliminate the existential quantifier ∃; this process is known as Skolemization. In this example problem, since there is no existential quantifier, all the statements will remain the same in this step.

• Drop universal quantifiers

In this step we will drop all universal quantifiers, since all the statements are implicitly universally quantified and we do not need to write them.

• Distribute ∧ over ∨

This step will not make any change in this problem.

Step-3: Negate the statement to be proved

In this step, we will apply negation to the conclusion statement, which will be written as ¬likes(John, Peanuts).

Step-4: Draw the resolution graph

Now in this step, we will solve the problem by a resolution tree using substitution. For the above problem, it will be given as follows:

Hence the negation of the conclusion has been proved to be a complete contradiction with the given set of statements.

Explanation of the resolution graph:

o In the first step of the resolution graph, ¬likes(John, Peanuts) and likes(John, x) get resolved (canceled) by the substitution {Peanuts/x}, and we are left with ¬food(Peanuts).

o In the second step of the resolution graph, ¬food(Peanuts) and food(z) get resolved (canceled) by the substitution {Peanuts/z}, and we are left with ¬eats(y, Peanuts) ∨ killed(y).

o In the third step of the resolution graph, ¬eats(y, Peanuts) and eats(Anil, Peanuts) get resolved by the substitution {Anil/y}, and we are left with killed(Anil).

o In the fourth step of the resolution graph, killed(Anil) and ¬killed(k) get resolved by the substitution {Anil/k}, and we are left with ¬alive(Anil).

o In the last step of the resolution graph, ¬alive(Anil) and alive(Anil) get resolved.

3.6.4 Completeness of resolution
We show that resolution is refutation-complete, which means that if a set of
sentences is unsatisfiable, then resolution will always be able to derive a
contradiction. Resolution cannot be used to generate all logical consequences of a
set of sentences, but it can be used to establish that a given sentence is entailed by
the set of sentences. Our proof sketch follows Robinson’s original proof with some
simplifications from Genesereth and Nilsson (1987).

1. First, we observe that if S is unsatisfiable, then there exists a particular set of ground instances of the clauses of S such that this set is also unsatisfiable (Herbrand’s theorem).

2. We then appeal to the ground resolution theorem given in Chapter 7, which states that propositional resolution is complete for ground sentences.

3. We then use a lifting lemma to show that, for any propositional resolution proof using the set of ground sentences, there is a corresponding first-order resolution proof using the first-order sentences from which the ground sentences were obtained.

To carry out the first step, we need three new concepts:

• Herbrand universe: If S is a set of clauses, then HS, the Herbrand universe of S, is the set of all ground terms constructable from the following:

a. The function symbols in S, if any.

b. The constant symbols in S, if any; if none, then the constant symbol A.

For example, if S contains just the clause ¬P(x, F(x, A)) ∨ ¬Q(x, A) ∨ R(x, B), then HS is the following infinite set of ground terms: {A, B, F(A, A), F(A, B), F(B, A), F(B, B), F(A, F(A, A)), . . .}.

• Saturation: If S is a set of clauses and P is a set of ground terms, then P(S), the saturation of S with respect to P, is the set of all ground clauses obtained by applying all possible consistent substitutions of ground terms in P for the variables in S.

• Herbrand base: The saturation of a set S of clauses with respect to its Herbrand universe is called the Herbrand base of S, written as HS(S). For example, if S contains solely the clause just given, then HS(S) is the infinite set of clauses

{¬P(A, F(A, A)) ∨ ¬Q(A, A) ∨ R(A, B),

¬P(B, F(B, A)) ∨ ¬Q(B, A) ∨ R(B, B),

¬P(F(A, A), F(F(A, A), A)) ∨ ¬Q(F(A, A), A) ∨ R(F(A, A), B),

¬P(F(A, B), F(F(A, B), A)) ∨ ¬Q(F(A, B), A) ∨ R(F(A, B), B), . . . }


These definitions allow us to state a form of Herbrand’s theorem (Herbrand, 1930):
If a set S of clauses is unsatisfiable, then there exists a finite subset of HS(S) that is
also unsatisfiable.

The structure of a completeness proof for resolution is given below:

3.6.5 Equality
None of the inference methods described so far in this chapter handle an assertion
of the form x = y. Three distinct approaches can be taken. The first approach is to
axiomatize equality— to write down sentences about the equality relation in the
knowledge base. We need to say that equality is reflexive, symmetric, and transitive,
and we also have to say that we can substitute equals for equals in any predicate or
function. The simplest rule, demodulation, takes a unit clause x = y and some clause α that contains the term x, and yields a new clause formed by substituting y for x within α. It works if the term within α unifies with x; it need not be exactly equal to x.

For example, from P(F(x, B), x) ∨ Q(x) and F(A, y) = y ∨ R(y), we have θ = UNIFY(F(A, y), F(x, B)) = {x/A, y/B}, and we can conclude by paramodulation the sentence

P(B, A) ∨ Q(A) ∨ R(B).

Paramodulation yields a complete inference procedure for first-order logic with equality.

3.6.6 Resolution strategies

We know that repeated applications of the resolution inference rule will eventually find a proof if one exists. In this subsection, we examine strategies that help find proofs efficiently.

Unit preference: This strategy prefers to do resolutions where one of the sentences is a single literal (also known as a unit clause). The idea behind the strategy is that we are trying to produce an empty clause, so it might be a good idea to prefer inferences that produce shorter clauses. Resolving a unit sentence (such as P) with any other sentence (such as ¬P ∨ ¬Q ∨ R) always yields a clause (in this case, ¬Q ∨ R) that is shorter than the other clause.


Set of support: Preferences that try certain resolutions first are helpful, but in general it is more effective to try to eliminate some potential resolutions altogether. For example, we can insist that every resolution step involve at least one element of a special set of clauses, the set of support. The resolvent is then added into the set of support. If the set of support is small relative to the whole knowledge base, the search space will be reduced dramatically.

Input resolution: In this strategy, every resolution combines one of the input
sentences (from the KB or the query) with some other sentence. The linear
resolution strategy is a slight generalization that allows P and Q to be resolved
together either if P is in the original KB or if P is an ancestor of Q in the proof tree.
Linear resolution is complete.

Subsumption: The subsumption method eliminates all sentences that are subsumed by (that is, more specific than) an existing sentence in the KB. For example, if P(x) is in the KB, then there is no sense in adding P(A), and even less sense in adding P(A) ∨ Q(B). Subsumption helps keep the KB small and thus helps keep the search space small.

Theorem provers can be applied to the problems involved in the synthesis and verification of both hardware and software. Thus, theorem-proving research is carried out in the fields of hardware design, programming languages, and software engineering, not just in AI.
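The subsumption test described above can be sketched, for the ground case, as a simple subset check. The general first-order case also requires finding a subsuming substitution, which is omitted here.

```python
# Ground-case subsumption sketch: a clause subsumes another if its
# literals are a subset of the other's, so the larger clause adds
# nothing and can be discarded.

def subsumes(c1, c2):
    """True if clause c1 subsumes (is more general than) clause c2."""
    return set(c1) <= set(c2)

kb = [{"P(A)"}]
candidate = {"P(A)", "Q(B)"}

# P(A) subsumes P(A) ∨ Q(B), so the candidate is redundant.
redundant = any(subsumes(c, candidate) for c in kb)
```

Discarding such redundant clauses keeps the KB, and hence the resolution search space, small.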

3.7 KNOWLEDGE REPRESENTATION

Humans are best at understanding, reasoning, and interpreting knowledge. Humans know things, which is knowledge, and as per their knowledge they perform various actions in the real world. How machines do all these things comes under knowledge representation and reasoning. Hence we can describe knowledge representation as follows:

• Knowledge representation and reasoning (KR, KRR) is the part of artificial intelligence concerned with how AI agents think and how thinking contributes to the intelligent behavior of agents.

• It is responsible for representing information about the real world so that a computer can understand it and can utilize this knowledge to solve complex real-world problems, such as diagnosing a medical condition or communicating with humans in natural language.

• It is also a way of describing how we can represent knowledge in artificial intelligence. Knowledge representation is not just storing data in some database; it also enables an intelligent machine to learn from that knowledge and experience so that it can behave intelligently like a human.

10. Assignment 3

Assignment

1. Nearest Neighbor Classification


This assignment engages students in basic Machine Learning concepts and
implementation, including classification and similarity-based search, with minimal
background knowledge.

Nearest neighbor classification

2. Behavior-Based Robotics
Students implement a behavior-based simulated tank agent in the AutoTank
environment with a reactive behavior-based architecture built using the Unified
Behavior Framework (UBF).

Behavior based robotics

11. Part - A Question & Answers

Part - A Question & Answers

1. Write down the basic syntactic elements of first order logic (K1, CO3)
The basic syntactic elements of the FOL are the symbols that stand for objects,
relations and functions. The symbols come in three kinds: constant symbols,
which stand for objects; predicate symbols, which stand for relations; and
function symbols, which stand for functions.

2. Define ontological engineering (K1, CO3)


Ontological engineering is about how to create a choice of representations, concentrating on general concepts, such as Actions, Time, Physical Objects, and Beliefs, that occur in many different domains. Representing these abstract concepts is sometimes called ontological engineering; it is related to the knowledge engineering process, but operates on a grander scale. The prospect of representing everything in the world is daunting.

3. Distinguish between predicate and propositional logic. (K1, CO3)


Propositional logic (also called sentential logic) is the logic that includes sentence letters (A, B, C) and logical connectives, but not quantifiers. The semantics of propositional logic uses truth assignments to the letters to determine whether a compound propositional sentence is true.

Predicate logic is usually used as a synonym for first-order logic, but sometimes it is used to refer to other logics that have similar syntax. Syntactically, first-order logic has the same connectives as propositional logic, but it also has variables for individual objects, quantifiers, symbols for functions, and symbols for relations. The semantics include a domain of discourse for the variables and quantifiers to range over, along with interpretations of the relation and function symbols.

4. How are TELL and ASK used in first-order logic? (K1, CO3)

Underlying the TELL and ASK functions used to inform and interrogate a knowledge base are the more primitive STORE and FETCH functions. STORE(s) stores a sentence s into the knowledge base, and FETCH(q) returns all unifiers such that the query q unifies with some sentence in the knowledge base.

5. Define Declarative and procedural knowledge. (K1, CO3)


Declarative knowledge involves knowing THAT something is the case - that J is the
tenth letter of the alphabet, that Paris is the capital of France. Declarative knowledge
is conscious; it can often be verbalized. Metalinguistic knowledge, or knowledge
about a linguistic form, is declarative knowledge.

Procedural knowledge involves knowing HOW to do something - ride a bike, for


example. We may not be able to explain how we do it. Procedural knowledge
involves implicit learning, which a learner may not be aware of, and may involve
being able to use a particular form to understand or produce language without
necessarily being able to explain it.

6. What is structured knowledge representation? (K1, CO3)


Structured knowledge representations were explored as a general representation for the symbolic representation of declarative knowledge. One of the results was a theory for schema systems.

7. Define the first order definite clause. (K1, CO3)


A Horn clause with exactly one positive literal is a definite clause; a definite clause
with no negative literals is sometimes called a fact; and a Horn clause without a
positive literal is sometimes called a goal clause (note that the empty clause
consisting of no literals is a goal clause).

8. What are the 3 types of symbol which is used to indicate objects,
relations and functions? (K1, CO3)

i) Constant symbols for objects

ii) Predicate symbols for relations

iii) Function symbols for functions


9. With an example, show objects, properties, functions and relations. Example: “EVIL KING JOHN BROTHER OF RICHARD RULED ENGLAND IN 1200” (K1, CO4)

Objects : John, Richard, England, 1200

Relation : Ruled

Properties : Evil, King

Functions : BROTHER OF

10. Define Unification. (K1, CO3)


Lifted inference rules require finding substitutions that make different logical expressions look identical. This is called unification.

11. What factors justify whether the reasoning is to be done in forward or backward reasoning? (K1, CO3)

Selection of forward reasoning or backward reasoning depends on which direction offers the smaller branching factor and justifies its reasoning process to the user. Most of the search techniques can be used to search either forward or backward. One exception is the means-ends analysis technique, which proceeds by reducing differences between the current and goal states, sometimes reasoning forward and sometimes backward.

12. Define Universal Instantiation (K1, CO3)
The rule of Universal Instantiation says that any sentence can be obtained by substituting a ground term (a term without variables) for the variable. To write out the inference rule formally, the notion of substitutions is introduced.

13. Define Existential Instantiation (K1, CO3)


In the rule for Existential Instantiation, the variable is replaced by a single new constant symbol. The formal statement is as follows: for any sentence α, variable v, and constant symbol k that does not appear elsewhere in the knowledge base, from ∃v α we may infer Subst({v/k}, α).

14. What is data-driven search? (forward chaining) (K1, CO3)


In data-driven search, sometimes called forward chaining, the problem solver begins with the given facts and a set of legal moves or rules for changing the state. Search proceeds by applying rules to facts to produce new facts. This process continues until (hopefully) it generates a path that satisfies the goal condition.

Data-driven search uses knowledge and constraints found in the given data to search along lines known to be true. Use data-driven search if:

• All or most of the data are given in the initial problem statement.
• There are a large number of potential goals, but there are only a few ways to use the facts and the given information of a particular problem.
• It is difficult to form a goal or hypothesis.

15. What are the four parts of knowledge in first-order logic? (K1, CO3)

The four parts of knowledge in first-order logic are,

• A set of clauses known as set of support;

• A set of usable axioms that are outside the set of support;

• A set of equations known as rewrites or demodulators;

• A set of parameters and clause that defines the control strategy


16. State the main characteristics of Inductive logic programming
(K1, CO3)

Inductive logic programming (ILP) combines inductive methods with the power of first-order representations, concentrating in particular on the representation of theories as logic programs. It has gained popularity for three reasons. First, ILP offers a rigorous approach to the general knowledge-based inductive learning problem. Second, it offers complete algorithms for inducing general, first-order theories from examples, which can therefore learn successfully in domains where attribute-based algorithms are hard to apply.

12. Part - B Questions

Part - B Questions

1. Explain about propositional logic and theorem proving. (K2, CO3)

2. Describe in detail about proof and model checking. (K2, CO3)

3. Discuss about the backward chaining algorithm with an example. (K3, CO3)

4. Elaborate in detail about the unification process in a KB with its algorithm. (K2, CO3)

5. What is resolution? Explain its various types. Give an example KB for deriving a conclusion from it. (K2, CO4)

6. Explain the Wumpus world problem in terms of propositions. (K3, CO3)

7. Discuss about the forward chaining algorithm with an example. (K3, CO3)

13. Supportive Online Certification Courses

Supportive Online Certification Courses

1. Udacity: Artificial Intelligence


https://www.udacity.com/course/ai-artificial-intelligence-
nanodegree--nd898

2. NPTEL: Artificial Intelligence

https://nptel.ac.in/courses/106/102/106102220/

14. Real time applications in day to day life and Industry

Real time applications in day to day life and in industry

Smart Cars

Self-driving cars are the most common existing example of applications of artificial intelligence in the real world, becoming increasingly reliable and ready for dispatch every single day. From Google’s self-driving car project to Tesla’s “autopilot” feature, it is a matter of time before AI is a standard-issue technology in the automotive industry.

Advanced deep learning algorithms can accurately predict what objects in the vehicle’s vicinity are likely to do. The AI system collects data from the vehicle’s radar, cameras, GPS, and cloud services to produce control signals that operate the vehicle. Moreover, some high-end vehicles come with AI parking systems already. With the evolution of AI, soon enough, fully automated vehicles will be seen on most streets.

Smart Car

Healthcare: TELEDOC HEALTH ROBOT


The healthcare sector has been amongst the top adopters of AI technology. It boils down to the power of AI to crunch numbers fast and learn from historical data, which is critical in the healthcare industry.

AI has also taken a critical step in helping people look after patients. Automated bots and healthcare applications ensure proper medication and treatment of patients in the facilities.

In certain cases, AI applications have also been known to provide operating assistance to doctors.

Healthcare

15. Assessment Schedule

Tentative schedule for the assessments during the 2022-2023 odd semester:

S.No  Name of the Assessment  Start Date    End Date      Portion
1     Unit Test 1             09.09.2023    15.09.2023    Unit 1
2     IAT 1                   09.09.2023    15.09.2023    Units 1 & 2
3     Unit Test 2             26.10.2023    01.11.2023    Unit 3
4     IAT 2                   26.10.2023    01.11.2023    Units 3 & 4
5     Revision 1              15.11.2023    25.11.2023    Units 5, 1 & 2
6     Revision 2              15.11.2023    25.11.2023    Units 3 & 4
7     Model                   15.11.2023    25.11.2023    All 5 units
16. Prescribed Text Books and Reference Books

TEXT BOOKS:
1. Peter Norvig and Stuart Russell, Artificial Intelligence: A Modern Approach, Fourth Edition, Pearson, 2020.

2. Ivan Bratko, Prolog: Programming for Artificial Intelligence, Fourth Edition, Addison-Wesley Educational Publishers Inc., 2011.

REFERENCES:
1. Elaine Rich, Kevin Knight and B. Nair, Artificial Intelligence, 3rd Edition, McGraw Hill, 2017.

2. Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans, Pelican Books, 2020.

3. Ernest Friedman-Hill, Jess in Action: Rule-Based Systems in Java, Manning Publications, 2003.

4. Nils J. Nilsson, The Quest for Artificial Intelligence, Cambridge University Press, 2009.

5. Dan W. Patterson, Introduction to Artificial Intelligence and Expert Systems, 1st Edition, Pearson, India, 2015.
17. Mini Project Suggestions

1. Handwritten Digits Recognition

AI Project Idea – Digits written by humans vary a lot in curves and sizes, as they are hand-drawn and everyone's writing is different.

Building a handwritten digit recognition system that can identify digits drawn by humans is a great way to start with artificial intelligence.
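As a first experiment, the core idea — compare a new drawing against stored examples — can be sketched with a toy nearest-neighbour classifier. The 3x3 bitmaps and labels below are made up for illustration; a real project would use a dataset such as MNIST and a proper learning model.

```python
# Toy sketch of nearest-neighbour digit recognition.
# The "digits" are hand-crafted 3x3 bitmaps, flattened to 9-tuples;
# all data here is hypothetical, purely for illustration.

def hamming(a, b):
    """Count the positions where two flattened bitmaps differ."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical training set: one reference bitmap per label.
training = {
    "1": (0, 1, 0,
          0, 1, 0,
          0, 1, 0),
    "0": (1, 1, 1,
          1, 0, 1,
          1, 1, 1),
}

def classify(bitmap):
    """Return the label whose reference bitmap is closest to the input."""
    return min(training, key=lambda label: hamming(training[label], bitmap))

# A slightly noisy "1" (one extra pixel) is still recognised:
print(classify((0, 1, 0,
                1, 1, 0,
                0, 1, 0)))  # 1
```

Real systems replace the Hamming distance and single prototypes with learned features and many training examples, but the classify-by-closest-example structure is the same.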

2. Optimal Path Detection

AI Project Idea – One of the challenging tasks in AI is to find the optimal path from a starting place to a destination.

The project idea is to find the optimal path for a vehicle to travel so that cost and time are minimized. This is a business problem that needs solutions.
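A natural starting point for this project is a classical shortest-path search such as Dijkstra's algorithm. The small road graph below is hypothetical; its edge weights could stand for travel time or fuel cost.

```python
import heapq

# Minimal Dijkstra sketch on a hypothetical road graph.
# graph maps a node to a list of (neighbour, edge_weight) pairs.

def shortest_path_cost(graph, start, goal):
    """Return the minimum total edge weight from start to goal."""
    pq = [(0, start)]          # priority queue of (cost-so-far, node)
    best = {start: 0}          # cheapest known cost to each node
    while pq:
        cost, node = heapq.heappop(pq)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue           # stale queue entry, skip it
        for neigh, w in graph.get(node, []):
            new = cost + w
            if new < best.get(neigh, float("inf")):
                best[neigh] = new
                heapq.heappush(pq, (new, neigh))
    return float("inf")        # goal unreachable

graph = {
    "A": [("B", 4), ("C", 2)],
    "C": [("B", 1), ("D", 7)],
    "B": [("D", 3)],
}
print(shortest_path_cost(graph, "A", "D"))  # 6, via A -> C -> B -> D
```

For larger maps, the project could extend this with an A* heuristic (e.g. straight-line distance) to prune the search.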

Thank you


