
1st Semester

Fundamentals of AI and ML (24BTPHY106)

Module-3
Knowledge, reasoning, and planning
Logical Agents - Knowledge-Based Agents — The Wumpus World — Logic — Propositional Logic: A Very
Simple Logic — Propositional Theorem Proving — Effective Propositional Model Checking — Agents
Based on Propositional Logic
First-Order Logic - Representation Revisited — Syntax and Semantics of First-Order Logic — Using First-
Order Logic — Knowledge Engineering in First-Order Logic
Inference in First-Order Logic
Propositional vs. First-Order Inference — Unification and First-Order Inference — Forward Chaining —
Backward Chaining — Resolution

Logical agents

Logical agents are a type of intelligent agent in artificial intelligence (AI) that use formal logic to
represent knowledge and make decisions. These agents rely on logical reasoning to infer new
knowledge from existing information and to determine the best actions to achieve their goals.

Definition

A logical agent is an AI system that employs formal logic as the basis for representing knowledge,
reasoning, and decision-making. Logical agents utilize symbolic representations and logical operations
to derive conclusions and make decisions.

In AI, knowledge-based agents use a process of reasoning over an internal representation of knowledge
to decide what actions to take.

Knowledge-Based Agent

o An intelligent agent needs knowledge about the real world in order to make decisions and reason so that it can act efficiently.
o Knowledge-based agents are agents that can maintain an internal state of knowledge, reason over that knowledge, update it after new observations, and take actions. These agents can represent the world with some formal representation and act intelligently.
o Knowledge-based agents are composed of two main parts:
o Knowledge-base and
o Inference system.

A knowledge-based agent must be able to do the following:

o An agent should be able to represent states, actions, etc.
o An agent should be able to incorporate new percepts.
o An agent can update the internal representation of the world.
o An agent can deduce hidden properties of the world from its internal representation.
o An agent can deduce appropriate actions.

The architecture of knowledge-based agent:

The generalized architecture of a knowledge-based agent is as follows. The knowledge-based agent (KBA) takes input from the environment by perceiving it. The input is passed to the inference engine of the agent, which communicates with the KB to decide what to do according to the knowledge stored in the KB. The learning element of the KBA regularly updates the KB by learning new knowledge.

Knowledge base: The knowledge base is a central component of a knowledge-based agent; it is also known as the KB. It is a collection of sentences (here 'sentence' is a technical term and is not identical to a sentence in English). These sentences are expressed in a language called a knowledge representation language. The knowledge base of a KBA stores facts about the world.

Why use a knowledge base?

A knowledge base is required so that an agent can update its knowledge, learn from experience, and take action according to that knowledge.

Inference system

Inference means deriving new sentences from old ones. The inference system allows us to add new sentences to the knowledge base. A sentence is a proposition about the world. The inference system applies logical rules to the KB to deduce new information.

The inference system generates new facts so that the agent can update the KB. An inference system mainly works using two methods, which are:

o Forward chaining
o Backward chaining

Operations Performed by KBA

Following are the three operations performed by a KBA in order to exhibit intelligent behavior:

1. TELL: This operation tells the knowledge base what it perceives from the environment.

2. ASK : This operation asks the knowledge base what action it should perform.

3. Perform: It performs the selected action.

A generic knowledge-based agent:

The knowledge-based agent takes a percept as input and returns an action as output. The agent maintains the knowledge base, KB, which initially contains some background knowledge of the real world. It also has a counter to indicate the time for the whole process, and this counter is initialized to zero.

Each time the function is called, it performs three operations:

o Firstly, it TELLs the KB what it perceives.
o Secondly, it ASKs the KB what action it should take.
o Thirdly, the agent program TELLs the KB which action was chosen.

MAKE-PERCEPT-SENTENCE generates a sentence asserting that the agent perceived the given percept at the given time.

The MAKE-ACTION-QUERY generates a sentence to ask which action should be done at the current
time.

MAKE-ACTION-SENTENCE generates a sentence which asserts that the chosen action was
executed.
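This TELL/ASK cycle can be sketched in Python. This is a minimal, illustrative sketch only: the KnowledgeBase class, the string-based sentences, and the hard-coded ASK answer are invented stand-ins for a real inference procedure.

class KnowledgeBase:
    def __init__(self):
        self.sentences = set()

    def tell(self, sentence):
        self.sentences.add(sentence)

    def ask(self, query):
        # Placeholder: a real KB would run logical inference over its sentences.
        return "Forward"


def make_percept_sentence(percept, t):
    return f"Percept({percept}, {t})"

def make_action_query(t):
    return f"BestAction?({t})"

def make_action_sentence(action, t):
    return f"Action({action}, {t})"


kb = KnowledgeBase()
t = 0  # time counter, initialized to zero

def kb_agent(percept):
    global t
    kb.tell(make_percept_sentence(percept, t))   # TELL the KB what was perceived
    action = kb.ask(make_action_query(t))        # ASK the KB which action to take
    kb.tell(make_action_sentence(action, t))     # TELL the KB which action was chosen
    t += 1
    return action

print(kb_agent("[None, None, None, None, None]"))   # -> Forward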

Various levels of knowledge-based agent:

A knowledge-based agent can be viewed at different levels which are given below:

1. Knowledge level
The knowledge level is the first level of a knowledge-based agent. At this level, we need to specify what the agent knows and what the agent's goals are. With these specifications, we can fix its behavior. For example, suppose an automated taxi agent needs to go from station A to station B, and it knows the way from A to B; this is described at the knowledge level.

2. Logical level:

At this level, we understand how the knowledge is represented and stored. At this level, sentences are encoded into different logics; that is, an encoding of knowledge into logical sentences occurs. At the logical level, we can expect the automated taxi agent to reach destination B.

3. Implementation level:

This is the physical representation of logic and knowledge. At the implementation level, the agent performs actions according to the logical and knowledge levels. At this level, an automated taxi agent actually implements its knowledge and logic so that it can reach the destination.

The Wumpus World

Wumpus world:

The Wumpus world is a simple world used to illustrate the worth of a knowledge-based agent and to demonstrate knowledge representation. It was inspired by the video game Hunt the Wumpus by Gregory Yob (1973).

The Wumpus world is a cave with a 4×4 grid of rooms connected by passageways, so there are 16 rooms in total. A knowledge-based agent explores this world. The cave has a room containing a beast called the Wumpus, which eats anyone who enters that room. The Wumpus can be shot by the agent, but the agent has only a single arrow. Some rooms contain bottomless pits; if the agent falls into a pit, it is stuck there forever. The exciting thing about this cave is that one of the rooms contains a heap of gold. The agent's goal is to find the gold and climb out of the cave without falling into a pit or being eaten by the Wumpus. The agent receives a reward if it comes out with the gold, and a penalty if it is eaten by the Wumpus or falls into a pit.

Following is a sample diagram representing the Wumpus world. It shows some rooms with pits, one room with the Wumpus, and the agent at square (1, 1) of the world.

A typical wumpus world. The agent is in the bottom left corner, facing east (rightward).

There are also some components which can help the agent to navigate the cave. These components are
given as follows:

a. The rooms adjacent to the Wumpus room are smelly, so they contain a stench.
b. The rooms adjacent to a pit are breezy, so if the agent is near a pit it will perceive a breeze.
c. There will be glitter in a room if and only if the room contains the gold.
d. The Wumpus can be killed by the agent if the agent is facing it; when shot, the Wumpus emits a horrible scream which can be heard anywhere in the cave.

PEAS description of Wumpus world:

To explain the Wumpus world, the PEAS description is given below:

Performance measure:

o +1000 reward points if the agent comes out of the cave with the gold.
o -1000 points penalty for being eaten by the Wumpus or falling into the pit.
o -1 for each action, and -10 for using an arrow.
o The game ends if either the agent dies or it comes out of the cave.

Environment:

o A 4*4 grid of rooms.


o The agent is initially in square [1,1], facing right (east).
o The locations of the Wumpus and the gold are chosen randomly from the squares other than the start square [1,1].
o Each square other than the start square can contain a pit with probability 0.2.

Actuators:

o Left turn,
o Right turn
o Move forward

o Grab
o Release
o Shoot.

Sensors:

o The agent will perceive the stench if he is in the room adjacent to the Wumpus. (Not
diagonally).
o The agent will perceive breeze if he is in the room directly adjacent to the Pit.
o The agent will perceive the glitter in the room where the gold is present.
o The agent will perceive the bump if he walks into a wall.
o When the Wumpus is shot, it emits a horrible scream which can be perceived anywhere in the
cave.
o These percepts can be represented as a five-element list, with a different indicator for each sensor.
o For example, if the agent perceives a stench and a breeze, but no glitter, no bump, and no scream, the percept is represented as [Stench, Breeze, None, None, None], as sketched in the code below.
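A percept of this form, together with the performance measure given earlier, can be written directly in Python; the function and field names below are illustrative only.

percept = ["Stench", "Breeze", None, None, None]   # [stench, breeze, glitter, bump, scream]

def performance(got_gold, died, num_actions, arrows_used):
    # Apply the Wumpus-world performance measure described above.
    score = 0
    score += 1000 if got_gold else 0
    score -= 1000 if died else 0
    score -= num_actions        # -1 for each action
    score -= 10 * arrows_used   # -10 for using the arrow
    return score

print(performance(got_gold=True, died=False, num_actions=14, arrows_used=0))   # -> 986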

The Wumpus world Properties:

o Partially observable: The Wumpus world is partially observable because the agent can only perceive its immediate surroundings, such as the adjacent rooms.

o Deterministic: It is deterministic, because the outcome of every action is exactly determined by the current state.

o Sequential: The order is important, so it is sequential.

o Static: It is static as Wumpus and Pits are not moving.

o Discrete: The environment is discrete.


o One agent: The environment is single-agent, as we have only one agent and the Wumpus is not considered an agent.

Exploring the Wumpus world:

Now we will explore the Wumpus world and will determine how the agent will find its goal by
applying logical reasoning.

Agent's First step:

Initially, the agent is in the first room, square [1,1], and we already know that this room is safe for the agent; to represent in diagram (a) below that the room is safe, we add the symbol OK. The symbol A is used to represent the agent, B the breeze, G glitter or gold, V a visited room, P a pit, and W the Wumpus.

In room [1,1] the agent perceives neither a breeze nor a stench, which means the adjacent squares are also OK.

The first step taken by the agent in the wumpus world. (a) The initial situation, after percept
[None,None,None,None,None]. (b) After moving to [2,1] and perceiving
[None,Breeze,None,None,None].

Agent's second Step:

Now the agent needs to move forward, so it will move either to [1,2] or to [2,1]. Suppose the agent moves to room [2,1]. In this room the agent perceives a breeze, which means a pit is nearby. The pit can be in [3,1] or [2,2], so we add the symbol P? to mark these as possible pit rooms.

Now the agent will stop and think, and will not make any harmful move. The agent will go back to room [1,1]. Rooms [1,1] and [2,1] have now been visited by the agent, so we use the symbol V to mark the visited squares.

Agent's third step:

At the third step, the agent moves to room [1,2], which is OK. In room [1,2] the agent perceives a stench, which means there must be a Wumpus nearby. But the Wumpus cannot be in room [1,1] by the rules of the game, and it cannot be in [2,2] either, because the agent did not detect any stench when it was at [2,1]. Therefore the agent infers that the Wumpus is in room [1,3]. In the current square there is no breeze, which means [2,2] contains no pit and no Wumpus. So [2,2] is safe, we mark it OK, and the agent moves on to [2,2].

Two later stages in the progress of the agent. (a) After moving to [1,1] and then [1,2], and perceiving
[Stench,None,None,None,None]. (b) After moving to [2,2] and then [2,3], and perceiving
[Stench,Breeze,Glitter,None,None].

Agent's fourth step:

In room [2,2] there is no stench and no breeze, so suppose the agent decides to move to [2,3]. In room [2,3] the agent perceives glitter, so it should grab the gold and climb out of the cave.

LOGIC

A representation language is defined by its syntax, which specifies the structure of sentences, and its
semantics, which defines the truth of each sentence in each possible world or model.

Syntax: The sentences in KB are expressed according to the syntax of the representation language,
which specifies all the sentences that are well formed.

Semantics: The semantics defines the truth of each sentence with respect to each possible world.

Models: We use the term model in place of "possible world" when we need to be precise. Possible worlds might be thought of as (potentially) real environments that the agent might or might not be in; models are mathematical abstractions, each of which simply fixes the truth or falsehood of every relevant sentence.

PROPOSITIONAL LOGIC

Propositional logic (PL) is the simplest form of logic where all the statements are made by
propositions. A proposition is a declarative statement which is either true or false. It is a technique of
knowledge representation in logical and mathematical form.

A Very Simple Logic
Example:
a) It is Sunday.
b) The Sun rises in the West. (false proposition)
c) 3 + 3 = 7 (false proposition)
d) 5 is a prime number.

Following are some basic facts about propositional logic:


o Propositional logic is also called Boolean logic, as it works on the values 0 and 1.
o In propositional logic, we use symbolic variables to represent the logic, and we can use any symbol to represent a proposition, such as A, B, C, P, Q, R, etc.
o Propositions can be either true or false, but not both.
o Propositional logic consists of propositions and logical connectives.
o These connectives are also called logical operators.
o The propositions and connectives are the basic elements of propositional logic.
o A connective is a logical operator which connects two sentences.
o A propositional formula which is always true is called a tautology; it is also called a valid sentence.
o A propositional formula which is always false is called a contradiction.
o A propositional formula which can take both true and false values is called a contingency.
o Statements which are questions, commands, or opinions, such as "Where is Rohini?", "How are you?", and "What is your name?", are not propositions.

Syntax of propositional logic:

The syntax of propositional logic defines the allowable sentences for the knowledge representation.
There are two types of Propositions:

1. Atomic Propositions
2. Compound propositions

o Atomic Proposition: Atomic propositions are simple propositions. Each consists of a single proposition symbol. These are sentences which must be either true or false.

Example:
a) "2 + 2 is 4" is an atomic proposition, and it is a true fact.
b) "The Sun is cold" is also an atomic proposition, and it is a false fact.

o Compound proposition: Compound propositions are constructed by combining simpler or


atomic propositions, using parenthesis and logical connectives.

Example:

a) "It is raining today, and the street is wet."
b) "Ankit is a doctor, and his clinic is in Mumbai."

Logical Connectives:
Logical connectives are used to connect two simpler propositions or representing a sentence logically.
We can create compound propositions with the help of logical connectives. There are mainly five
connectives, which are given as follows:

1. Negation: A sentence such as ¬ P is called negation of P. A literal can be either Positive literal
or negative literal.
2. Conjunction: A sentence which has ∧ connective such as, P ∧ Q is called a conjunction.
Example: Rohan is intelligent and hardworking. It can be written as,
P= Rohan is intelligent,
Q= Rohan is hardworking. → P∧ Q.
3. Disjunction: A sentence which has the ∨ connective, such as P ∨ Q, is called a disjunction, where P and Q are propositions.
Example: "Ritika is a doctor or an engineer."
Here P = Ritika is a doctor, Q = Ritika is an engineer, so we can write it as P ∨ Q.
4. Implication: A sentence such as P → Q, is called an implication. Implications are also known
as if-then rules. It can be represented as
If it is raining, then the street is wet.
Let P= It is raining, and Q= Street is wet, so it is represented as P → Q
5. Biconditional: A sentence such as P ⇔ Q is a biconditional sentence. Example: "I am breathing if and only if I am alive."
P = I am breathing, Q = I am alive; it can be represented as P ⇔ Q.

Following is the summarized table for Propositional Logic Connectives:

Connective        Symbol   Technical term    Example
AND               ∧        Conjunction       P ∧ Q
OR                ∨        Disjunction       P ∨ Q
NOT               ¬        Negation          ¬P
Implies           →        Implication       P → Q
If and only if    ⇔        Biconditional     P ⇔ Q

Truth Table:
In propositional logic, we need to know the truth values of propositions in all possible scenarios. We can combine all the possible combinations of truth values using the logical connectives, and the representation of these combinations in a tabular format is called a truth table. Following are the truth tables for all the logical connectives:
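The tables themselves appear as figures in the original notes; the short Python sketch below regenerates a combined truth table for the five connectives (the lambda definitions are simply the standard truth-functional meanings).

from itertools import product

connectives = {
    "P AND Q":  lambda p, q: p and q,          # conjunction
    "P OR Q":   lambda p, q: p or q,           # disjunction
    "NOT P":    lambda p, q: not p,            # negation
    "P -> Q":   lambda p, q: (not p) or q,     # implication
    "P <-> Q":  lambda p, q: p == q,           # biconditional
}

print("P      Q      " + "  ".join(f"{name:8}" for name in connectives))
for p, q in product([True, False], repeat=2):
    row = f"{str(p):6} {str(q):6} "
    row += "  ".join(f"{str(fn(p, q)):8}" for fn in connectives.values())
    print(row)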
Logical equivalence:

Logical equivalence is one of the features of propositional logic. Two propositions are said to be
logically equivalent if and only if the columns in the truth table are identical to each other.

Let's take two propositions A and B; for logical equivalence, we write A ⇔ B. Comparing the truth-table columns for ¬A ∨ B and A → B, we can see that they are identical; hence ¬A ∨ B is logically equivalent to A → B.
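Since the truth table itself appears as a figure in the original notes, here is a small check of the same equivalence: A → B is read off its truth table, ¬A ∨ B is built from NOT and OR, and the two columns agree for every assignment.

from itertools import product

IMPLIES = {(True, True): True, (True, False): False,
           (False, True): True, (False, False): True}   # truth table of A → B

for a, b in product([True, False], repeat=2):
    lhs = (not a) or b            # ¬A ∨ B
    rhs = IMPLIES[(a, b)]         # A → B
    print(a, b, lhs, rhs, lhs == rhs)   # the last column is True in every row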

Limitations of Propositional logic:

We cannot represent relations like ALL, some, or none with propositional logic. Example:

a. All the girls are intelligent.


b. Some apples are sweet.

o Propositional logic has limited expressive power.


o In propositional logic, we cannot describe statements in terms of their properties or logical
relationships.

Propositional theorem proving in AI


Propositional theorem proving in AI is one of the earliest applications of propositional calculus. It involves using logical reasoning to prove mathematical theorems, relying on the principles of propositional logic. In simple terms, it is the ability of computers to automatically prove or disprove mathematical statements and propositions using formal logic.

FIRST-ORDER LOGIC IN ARTIFICIAL INTELLIGENCE

In propositional logic, we can only represent facts, which are either true or false. Propositional logic is not sufficient to represent complex sentences or natural language statements; it has very limited expressive power. Consider the following sentences, which we cannot represent using propositional logic: "Some humans are intelligent", or "Sachin likes cricket."

First-Order logic (FOL):


o First-order logic is another way of knowledge representation in artificial intelligence. It is an
extension to propositional logic.
o FOL is sufficiently expressive to represent the natural language statements in a concise way.
o First-order logic is also known as predicate logic or first-order predicate logic. First-order logic is a powerful language that expresses information about objects in a more natural way and can also express the relationships between those objects.
o First-order logic (like natural language) does not only assume that the world contains facts, as propositional logic does, but also assumes the following things in the world:
o Objects: A, B, people, numbers, colors, wars, theories, squares, pits, wumpus, ......
o Relations: these can be unary relations such as red, round, is adjacent, or n-ary relations such as the sister of, brother of, has color, comes between
o Functions: father of, best friend, third inning of, end of, ......
o As a natural language, first-order logic also has two main parts:
a. Syntax
b. Semantics

Syntax of First-Order logic:

The syntax of FOL determines which collection of symbols is a logical expression in first-order logic.
The basic syntactic elements of first-order logic are symbols. We write statements in short-hand
notation in FOL.

Basic Elements of First-order logic:

Following are the basic elements of FOL syntax:

Constants:    1, 2, A, John, Mumbai, cat, ...
Variables:    x, y, z, a, b, ...
Predicates:   Brother, Father, >, ...
Functions:    sqrt, LeftLegOf, ...
Connectives:  ∧, ∨, ¬, ⇒, ⇔
Equality:     =
Quantifiers:  ∀, ∃

Atomic sentences:

o Atomic sentences are the most basic sentences of first-order logic. These sentences are formed
from a predicate symbol followed by a parenthesis with a sequence of terms.
o We can represent atomic sentences as

Predicate (term1, term2, ......, term n).

Example: Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).


Chinky is a cat: => cat (Chinky).

Complex Sentences:

o Complex sentences are made by combining atomic sentences using connectives.

First-order logic statements can be divided into two parts:

o Subject: Subject is the main part of the statement.


o Predicate: A predicate can be defined as a relation, which binds two atoms together in a
statement.

Consider the statement: "x is an integer.", it consists of two parts, the first part x is the subject of
the statement and second part "is an integer," is known as a predicate.

Quantifiers in First-order logic:


o A quantifier is a language element which generates quantification, and quantification specifies the quantity of specimens in the universe of discourse.
o Quantifiers are the symbols that permit us to determine or identify the range and scope of a variable in a logical expression. There are two types of quantifier:
a. Universal Quantifier (for all, everyone, everything)
b. Existential Quantifier (for some, at least one)

Universal Quantifier:

Universal quantifier is a symbol of logical representation, which specifies that the statement within its
range is true for everything or every instance of a particular thing.
The Universal quantifier is represented by a symbol ∀, which resembles an inverted A.
If x is a variable, then ∀x is read as
o For all x
o For each x
o For every x.

Example:

All men drink coffee.

Let x be a variable that refers to a man; then the statement can be represented over the universe of discourse (UOD) as:

∀x man(x) → drink(x, coffee).

It will be read as: For all x, if x is a man, then x drinks coffee.

Existential Quantifier:

Existential quantifiers are the type of quantifiers, which express that the statement within its scope is
true for at least one instance of something.

It is denoted by the logical operator ∃, which resembles a reversed E. When it is used with a predicate variable, it is called an existential quantifier.

o If x is a variable, then the existential quantifier is written ∃x or ∃(x), and it is read as:

o There exists an 'x.'
o For some 'x.'
o For at least one 'x.'

Example

∃x: boys(x) ∧ intelligent(x)

It will be read as: There exists an x such that x is a boy and x is intelligent.

o The main connective for universal quantifier ∀ is implication →.


o The main connective for existential quantifier ∃ is and ∧.

Some Examples of FOL using quantifier:

1. All birds fly.


In this question the predicate is "fly(bird)."
Since all birds fly, it will be represented as follows:
∀x bird(x) → fly(x).

2. Every man respects his parent.


In this question, the predicate is "respect(x, y)," where x = man and y = parent. Since the statement is about every man, we will use ∀, and it will be represented as follows:
∀x man(x) → respects(x, parent).

3. Some boys play cricket.


In this question, the predicate is "play(x, y)," where x = boys and y = game. Since there are some boys, we will use ∃, and (with ∧ as the main connective) it will be represented as:
∃x boys(x) ∧ play(x, cricket).
4. Not all students like both Mathematics and Science.

In this question, the predicate is "like(x, y)," where x = student and y = subject.
Since not all students do, we will use ∀ with negation, giving the following representation:

¬∀x [ student(x) → like(x, Mathematics) ∧ like(x, Science) ].

5.Only one student failed in Mathematics.


In this question, the predicate is "failed(x, y)," where x = student and y = subject.
Since there is only one student who failed in Mathematics, we will use the following representation:

∃x [ student(x) ∧ failed(x, Mathematics) ∧ ∀y [ ¬(x = y) ∧ student(y) → ¬failed(y, Mathematics) ] ].
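The quantified sentences above can be checked over a small finite universe of discourse, as in the sketch below; the individuals and facts are invented purely for illustration.

universe = ["sparrow", "penguin", "rohan", "ajay"]
is_bird       = {"sparrow", "penguin"}
flies         = {"sparrow"}
is_boy        = {"rohan", "ajay"}
plays_cricket = {"rohan"}

# ∀x bird(x) → fly(x)   ("All birds fly")
all_birds_fly = all((x not in is_bird) or (x in flies) for x in universe)

# ∃x boys(x) ∧ play(x, cricket)   ("Some boys play cricket")
some_boys_play = any((x in is_boy) and (x in plays_cricket) for x in universe)

print(all_birds_fly)    # False in this universe, because the penguin does not fly
print(some_boys_play)   # True, because rohan plays cricket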

Representation Revisited

Programming languages (such as C++ or Java or Python) are the largest class of formal languages in common use. Data structures within programs can be used to represent facts; for example, a program could use a 4×4 array to represent the contents of the wumpus world. Thus, the programming language statement World[2,2] ← Pit is a fairly natural way to assert that there is a pit in square [2,2]. Putting together a string of such statements is sufficient for running a simulation of the wumpus world. What
programming languages lack is a general mechanism for deriving facts from other facts; each update
to a data structure is done by a domain-specific procedure whose details are derived by the programmer
from his or her own knowledge of the domain. This procedural approach can be contrasted with the
declarative nature of propositional logic, in which knowledge and inference are separate, and inference
is entirely domain independent. SQL databases take a mix of declarative and procedural knowledge.
A second drawback of data structures in programs (and of databases) is the lack of any easy
way to say, for example, “There is a pit in [2,2] or [3,1]” or “If the wumpus is in [1,1] then he is not
in [2,2].” Programs can store a single value for each variable, and some systems allow the value to be “unknown,” but they lack the expressiveness required to directly handle partial information.
Propositional logic is a declarative language because its semantics is based on a truth relation
between sentences and possible worlds. It also has sufficient expressive power to deal with partial
information, using disjunction and negation. Propositional logic has a third property that is desirable
in representation languages, namely, compositionality.

The language of thought

Natural languages (such as English or Spanish) are very expressive indeed. We managed to write almost this whole book in natural language, with only occasional lapses into
other languages (mainly mathematics and diagrams). There is a long tradition in linguistics and the
philosophy of language that views natural language as a declarative knowledge representation
language. If we could uncover the rules for natural language, we could use them in representation and
reasoning systems and gain the benefit of the billions of pages that have been written in natural
language.

The modern view of natural language is that it serves as a medium for communication rather than pure
representation. When a speaker points and says, “Look!” the listener comes to know that, say,
Superman has finally appeared over the rooftops. Yet we would not want to say that the sentence
“Look!” represents that fact. Rather, the meaning of the sentence depends both on the sentence itself
and on the context in which the sentence was spoken. Clearly, one could not store a sentence such as
“Look!” in a knowledge base and expect to recover its meaning without also storing a representation
of the context—which raises the question of how the context itself can be represented.

Combining the best of formal and natural languages

We can adopt the foundation of propositional logic—a declarative, compositional semantics that is
context-independent and unambiguous—and build a more expressive logic on that foundation,
borrowing representational ideas from natural language while avoiding its drawbacks.
When we look at the syntax of natural language, the most obvious elements are nouns and noun phrases
that refer to objects (squares, pits, wumpuses) and verbs and verb phrases along with adjectives and
adverbs that refer to relations among objects (is breezy, is adjacent to, shoots). Some of these relations
are functions—relations in which there is only one “value” for a given “input.” It is easy to start listing
examples of objects, relations, and functions:

• Objects: people, houses, numbers, theories, Ronald McDonald, colors, baseball games, wars,
centuries . . .

• Relations: these can be unary relations or properties such as red, round, bogus, prime, multistoried .
. ., or more general n-ary relations such as brother of, bigger than, inside, part of, has color, occurred
after, owns, comes between, . . .

• Functions: father of, best friend, third inning of, one more than, beginning of . . .
Indeed, almost any assertion can be thought of as referring to objects and properties or relations.

Some examples follow:

• “One plus two equals three.”
Objects: one, two, three, one plus two; Relation: equals; Function: plus. (“One plus two” is a name for
the object that is obtained by applying the function “plus” to the objects “one” and “two.” “Three” is
another name for this object.)

• “Squares neighboring the wumpus are smelly.”


Objects: wumpus, squares; Property: smelly; Relation: neighboring.

• “Evil King John ruled England in 1200.”


Objects: John, England, 1200; Relation: ruled during; Properties: evil, king.
The language of first-order logic, whose syntax and semantics we define in the next section, is built
around objects and relations. It has been important to mathematics, philosophy, and artificial
intelligence precisely because those fields—and indeed, much of everyday human existence—can be
usefully thought of as dealing with objects and the relations among them.
First-order logic can also express facts about some or all of the objects in the universe. This enables
one to represent general laws or rules, such as the statement “Squares neighboring the wumpus are
smelly.”

Using First-Order Logic

In knowledge representation a domain is just some part of the world about which we wish to express
some knowledge.
We begin with a brief description of the TELL/ASK interface for first-order knowledge bases. Then
we look at the domains of family relationships, numbers, sets, and lists, and at the wumpus world.
Assertions and queries in first-order logic
Sentences are added to a knowledge base using TELL, exactly as in propositional logic. Such
sentences are called assertions.
• For example, we can assert that John is a king, Richard is a person, and all kings are persons:
TELL(KB, King(John)) .
TELL(KB, Person(Richard)) .

TELL(KB, ∀x King(x) ⇒ Person(x)) .
• We can ask questions of the knowledge base using ASK.
For example, ASK(KB, King(John)) returns true.
• Questions asked with ASK are called queries or goals. Generally speaking, any query that is
logically entailed by the knowledge base should be answered affirmatively.
For example, given the three assertions above, the query ASK(KB, Person(John)) should also
return true.
• We can ask quantified queries, such as ASK(KB, ∃x Person(x)) . The answer is true, but this
is perhaps not as helpful as we would like. It is rather like answering “Can you tell me the
time?” with “Yes.”
• If we want to know what value of x makes the sentence true, we will need a different function,
which we call ASKVARS,
ASKVARS(KB, Person(x))
and which yields a stream of answers. In this case there will be two answers: {x/John} and
{x/Richard}. Such an answer is called a substitution or binding list.
• ASKVARS is usually reserved for knowledge bases consisting solely of Horn clauses, because
in such knowledge bases every way of making the query true will bind the variables to specific
values. That is not the case with first-order logic; in a KB that has been told only that
King(John) ∨ King(Richard) there is no single binding to x that makes the query ∃x King(x)
true, even though the query is in fact true.
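A toy version of this TELL/ASK/ASKVARS interface can be sketched in Python; the FOLKB class below stores ground facts and hard-codes the single rule ∀x King(x) ⇒ Person(x), so it illustrates only the interface, not a general first-order reasoner.

class FOLKB:
    def __init__(self):
        self.facts = set()                     # ground atoms, e.g. ("King", "John")

    def tell(self, predicate, *args):
        self.facts.add((predicate, *args))

    def _closure(self):
        # Apply the one built-in rule: every King is also a Person.
        closed = set(self.facts)
        closed |= {("Person", f[1]) for f in self.facts if f[0] == "King"}
        return closed

    def ask(self, predicate, *args):
        return (predicate, *args) in self._closure()

    def askvars(self, predicate):
        # Return a substitution {x: value} for every object satisfying the predicate.
        return [{"x": f[1]} for f in self._closure() if f[0] == predicate]


kb = FOLKB()
kb.tell("King", "John")
kb.tell("Person", "Richard")

print(kb.ask("Person", "John"))   # True: entailed via King(John) and the rule
print(kb.askvars("Person"))       # the two bindings {x/John} and {x/Richard}, in some order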

Numbers, sets, and lists


Numbers are perhaps the most vivid example of how a large theory can be built up from a tiny kernel
of axioms. We describe here the theory of natural numbers or nonnegative integers. We need a
predicate NatNum that will be true of natural numbers; we need one constant symbol, 0; and we need
one function symbol, S (successor). The Peano axioms define natural numbers and addition. Natural
numbers are defined recursively:

NatNum(0).
∀n NatNum(n) ⇒ NatNum(S(n)).

Addition is then defined in terms of the successor function:

∀m NatNum(m) ⇒ +(0, m) = m.
∀m, n NatNum(m) ∧ NatNum(n) ⇒ +(S(m), n) = S(+(m, n)).

The second addition axiom reduces addition to repeated application of the successor function.
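These recursive definitions can be mirrored directly in code; the sketch below writes natural numbers as repeated applications of the successor S to 0, so that 3 is S(S(S(0))).

def S(n):
    return ("S", n)          # successor, represented as a tagged tuple

ZERO = 0

def add(m, n):
    # +(0, m) = m ;  +(S(m), n) = S(+(m, n))
    if m == ZERO:
        return n
    return S(add(m[1], n))

def to_int(n):
    return 0 if n == ZERO else 1 + to_int(n[1])

two   = S(S(ZERO))
three = S(S(S(ZERO)))
print(to_int(add(two, three)))   # -> 5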

The use of infix notation is an example of syntactic sugar, that is, an extension to or abbreviation of
the standard syntax that does not change the semantics. Any sentence that uses sugar can be
“desugared” to produce an equivalent sentence in ordinary first-order logic.

KNOWLEDGE ENGINEERING IN FIRST-ORDER LOGIC

What is knowledge-engineering?

The process of constructing a knowledge base in first-order logic is called knowledge engineering.


In knowledge engineering, someone who investigates a particular domain, learns the important concepts of that domain, and generates a formal representation of the objects is known as a knowledge engineer.

The knowledge-engineering process:

Following are the main steps of the knowledge-engineering process. Using these steps, we will develop a knowledge base that allows us to reason about a digital circuit (a one-bit full adder), shown below.

1. Identify the task:
The first step of the process is to identify the task, and for the digital circuit, there are various reasoning
tasks.

At the first level or highest level, we will examine the functionality of the circuit:

o Does the circuit add properly?


o What will be the output of gate A2, if all the inputs are high?

At the second level, we will examine the circuit structure details such as:

o Which gate is connected to the first input terminal?


o Does the circuit have feedback loops?

2. Assemble the relevant knowledge:

In the second step, we will assemble the relevant knowledge which is required for digital circuits. So
for digital circuits, we have the following required knowledge:
o Logic circuits are made up of wires and gates.
o Signal flows through wires to the input terminal of the gate, and each gate produces the
corresponding output which flows further.
o In this logic circuit, there are four types of gates used: AND, OR, XOR, and NOT.
o All these gates have one output terminal and two input terminals (except the NOT gate, which has one input terminal).
3. Decide on vocabulary:

The next step of the process is to select functions, predicate, and constants to represent the circuits,
terminals, signals, and gates. Firstly we will distinguish the gates from each other and from other
objects. Each gate is represented as an object which is named by a constant, such as, Gate(X1). The
functionality of each gate is determined by its type, which is taken as constants such as AND, OR,
XOR, or NOT. Circuits will be identified by a predicate: Circuit (C1).
For the terminal, we will use predicate: Terminal(x).

For gate input, we will use the function In(1, X1) for denoting the first input terminal of the gate, and
for output terminal we will use Out (1, X1).

The function Arity(c, i, j) is used to denote that circuit c has i inputs and j outputs.
The connectivity between gates can be represented by the predicate Connect(Out(1, X1), In(1, X1)).
We use a unary predicate On(t), which is true if the signal at a terminal is on.

Encode general knowledge about the domain:

To encode the general knowledge about the logic circuit, we need some following rules:
o If two terminals are connected, then they have the same signal. It can be represented as:
∀ t1, t2 Terminal(t1) ∧ Terminal(t2) ∧ Connect(t1, t2) → Signal(t1) = Signal(t2).
o The signal at every terminal will have either value 0 or 1. It will be represented as:
∀ t Terminal(t) → Signal(t) = 1 ∨ Signal(t) = 0.

o Connect predicates are commutative:


∀ t1, t2 Connect(t1, t2) → Connect (t2, t1).

Representation of types of gates:

∀ g Gate(g) ∧ r = Type(g) → r = OR ∨r = AND ∨r = XOR ∨r = NOT.


o The output of an AND gate will be zero if and only if any of its inputs is zero.

∀ g Gate(g) ∧ Type(g) = AND →Signal (Out(1, g))= 0 ⇔ ∃n Signal (In(n, g))= 0.


o The output of an OR gate is 1 if and only if any of its inputs is 1:

∀ g Gate(g) ∧ Type(g) = OR → Signal (Out(1, g))= 1 ⇔ ∃n Signal (In(n, g))= 1


o Output of XOR gate is 1 if and only if its inputs are different:

∀ g Gate(g) ∧ Type(g) = XOR → Signal (Out(1, g)) = 1 ⇔ Signal (In(1, g)) ≠ Signal (In(2,
g)).
o The output of a NOT gate is the inverse of its input:

∀ g Gate(g) ∧ Type(g) = NOT → Signal(In(1, g)) ≠ Signal(Out(1, g)).

o All the gates in the above circuit have two inputs and one output (except the NOT gate):
∀ g Gate(g) ∧ Type(g) = NOT → Arity(g, 1, 1)
∀ g Gate(g) ∧ r =Type(g) ∧ (r= AND ∨r= OR ∨r= XOR) → Arity (g, 2, 1).
o All gates are logic circuits:

∀ g Gate(g) → Circuit (g).

Encode a description of the problem instance:

Now we encode the problem of circuit C1. First, we categorize the circuit and its gate components. This step is easy if an ontology for the problem has already been worked out. It involves writing simple atomic sentences about instances of the concepts in the ontology.

For the given circuit C1, we can encode the problem instance in atomic sentences as below:

Since in the circuit there are two XOR, two AND, and one OR gate so atomic sentences for these gates
will be:

1. For the XOR gates: Type(X1) = XOR, Type(X2) = XOR
2. For the AND gates: Type(A1) = AND, Type(A2) = AND
3. For the OR gate: Type(O1) = OR.

Pose queries to the inference procedure and get answers:

In this step, we will find all the possible sets of values of all the terminals for the adder circuit. The first query will be:

What should be the combination of inputs which would make the first output of circuit C1 be 0 and the second output be 1?

∃ i1, i2, i3 Signal (In(1, C1))=i1 ∧ Signal (In(2, C1))=i2 ∧ Signal (In(3, C1))= i3
∧ Signal (Out(1, C1)) =0 ∧ Signal (Out(2, C1))=1
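The answer to this query can be sketched by simulating the adder directly. Assuming the standard one-bit full-adder wiring (two XOR gates X1 and X2 for the sum, two AND gates A1 and A2 and the OR gate O1 for the carry), we enumerate all input combinations and keep those whose first output (sum) is 0 and whose second output (carry) is 1.

from itertools import product

def full_adder(i1, i2, i3):
    x1 = i1 ^ i2        # XOR gate X1
    s  = x1 ^ i3        # XOR gate X2  -> Out(1, C1), the sum bit
    a1 = i1 & i2        # AND gate A1
    a2 = x1 & i3        # AND gate A2
    c  = a1 | a2        # OR gate O1   -> Out(2, C1), the carry bit
    return s, c

answers = [(i1, i2, i3) for i1, i2, i3 in product([0, 1], repeat=3)
           if full_adder(i1, i2, i3) == (0, 1)]
print(answers)   # -> [(0, 1, 1), (1, 0, 1), (1, 1, 0)]: exactly two inputs high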

Debug the knowledge base:

Now we will debug the knowledge base; this is the last step of the complete process. In this step, we will try to debug issues in the knowledge base.

In the knowledge base, we may have omitted assertions like 1 ≠ 0.

INFERENCE IN FIRST-ORDER LOGIC

Inference in First-Order Logic is used to deduce new facts or sentences from existing sentences. Before
understanding the FOL inference rule, let's understand some basic terminologies used in FOL.

Substitution:

Substitution is a fundamental operation performed on terms and formulas. It occurs in all inference systems in first-order logic. Substitution is more complex in the presence of quantifiers in FOL. If we write F[a/x], it refers to substituting the constant "a" in place of the variable "x" in F.

First-order logic does not only use predicates and terms for making atomic sentences; it also uses equality. For this, we can use the equality symbol, which specifies that two terms refer to the same object.

Example: Brother(John) = Smith.

As in the above example, the object referred to by Brother(John) is the same as the object referred to by Smith. The equality symbol can also be used with negation to express that two terms are not the same object.

Example: ¬(x = y), which is equivalent to x ≠ y.
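Substitution itself is easy to sketch in code: terms are represented as nested tuples (e.g. Brother(x) as ("Brother", "x")), and applying the substitution {x/John} replaces every occurrence of the variable.

def substitute(term, theta):
    if isinstance(term, tuple):                           # compound term: recurse into its arguments
        return (term[0],) + tuple(substitute(t, theta) for t in term[1:])
    return theta.get(term, term)                          # variables get replaced, constants pass through

print(substitute(("Brother", "x"), {"x": "John"}))        # -> ('Brother', 'John')
print(substitute(("P", "x", ("f", "y")), {"x": "a"}))     # -> ('P', 'a', ('f', 'y'))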

FOL inference rules for quantifier:

As in propositional logic, we also have inference rules in first-order logic. Following are some basic inference rules in FOL:

o Universal Generalization
o Universal Instantiation
o Existential Instantiation
o Existential introduction

1. Universal Generalization:
Universal generalization is a valid inference rule which states that if premise P(c) is true for any
arbitrary element c in the universe of discourse, then we can have a conclusion as ∀ x P(x).

It can be represented as: from P(c), infer ∀x P(x).

This rule can be used if we want to show that every element has a similar property.

In this rule, x must not appear as a free variable.

Example: Let P(c) be "A byte contains 8 bits." Then ∀x P(x), "All bytes contain 8 bits," will also be true.

Universal Instantiation:

o Universal instantiation, also called universal elimination or UI, is a valid inference rule. It can be applied multiple times to add new sentences.
o The new KB is logically equivalent to the previous KB.
o As per UI, we can infer any sentence obtained by substituting a ground term for the variable.
o The UI rule states that we can infer any sentence P(c) from ∀x P(x) by substituting a ground term c (a constant within the domain of x) for any object in the universe of discourse.

o It can be represented as: from ∀x P(x), infer P(c).

Example 1:

If "Every person likes ice-cream" => ∀x P(x), then we can infer that
"John likes ice-cream" => P(John).

Example: 2.

Let's take a famous example:

"All kings who are greedy are evil." Let our knowledge base contain this detail in the form of FOL:

∀x King(x) ∧ Greedy(x) → Evil(x).

From this information, we can infer any of the following statements using Universal Instantiation:

o King(John) ∧ Greedy (John) → Evil (John),


o King(Richard) ∧ Greedy (Richard) → Evil (Richard),
o King(Father(John)) ∧ Greedy (Father(John)) → Evil (Father(John)),

Existential Instantiation:

o Existential instantiation is also called Existential Elimination; it is a valid inference rule in first-order logic.
o It can be applied only once to replace the existential sentence.
o The new KB is not logically equivalent to the old KB, but it will be satisfiable if the old KB was satisfiable.
o This rule states that one can infer P(c) from a formula of the form ∃x P(x), for a new constant symbol c.
o The restriction with this rule is that the c used in the rule must be a new constant symbol that does not occur elsewhere in the knowledge base.

o It can be represented as: from ∃x P(x), infer P(c), for a new constant symbol c.

Existential introduction

o An existential introduction is also known as an existential generalization, which is a valid


inference rule in first-order logic.
o This rule states that if there is some element c in the universe of discourse which has a property
P, then we can infer that there exists something in the universe which has the property P.

o It can be represented as: from P(c), infer ∃x P(x).


o Example: Let's say that,

"Priyanka got good marks in English."


"Therefore, someone got good marks in English."

Generalized Modus Ponens Rule:

For the inference process in FOL, we have a single inference rule, which is called Generalized Modus Ponens. It is a lifted version of Modus Ponens.

Generalized Modus Ponens can be summarized as, " P implies Q and P is asserted to be true, therefore
Q must be True."

According to Generalized Modus Ponens, for atomic sentences pi, pi', and q, where there is a substitution θ such that SUBST(θ, pi') = SUBST(θ, pi) for all i, the rule can be represented as:

p1', p2', ..., pn',  (p1 ∧ p2 ∧ ... ∧ pn ⇒ q)
---------------------------------------------
                 SUBST(θ, q)

Example:

We will use this rule for "All greedy kings are evil": we will find some x such that x is a king and x is greedy, and infer that x is evil.

Here, let us say:
p1' is King(John),   p1 is King(x)
p2' is Greedy(y),    p2 is Greedy(x)
θ is {x/John, y/John},   q is Evil(x)
SUBST(θ, q) is Evil(John).

Propositional vs. First-Order Inference


Propositional Logic:
Propositional logic, also known as sentential logic, deals with propositions or statements that are either
true or false. It focuses on the relationships between propositions using logical connectives (such as AND,
OR, NOT, IMPLIES) and truth-functional operators. However, it doesn't deal with the internal structure
of propositions or the quantification over variables.
Example propositions:
P: It is raining.
Q: The ground is wet.
Example expressions in propositional logic:
P AND Q (It is raining and the ground is wet.)
NOT P (It is not raining.)
P IMPLIES Q (If it is raining, then the ground is wet.)

First-Order Logic:
First-order logic (FOL), also known as first-order predicate logic or first-order predicate calculus, is a
more expressive logic that allows for quantification over variables, relationships between objects, and
internal structure within propositions. It includes predicates (relations), functions, quantifiers (such as ∀
for "for all" and ∃ for "exists"), and variables.
Example predicates and quantified expressions in first-order logic:
Loves(x, y) (Person x loves person y.)
∀x ∃y Loves(x, y) (Everyone loves someone.)

Key Differences:
Expressiveness: Propositional logic deals with simple true/false propositions and their combinations
using logical operators. First-order logic goes beyond this by allowing the representation of complex
relationships, quantification over variables, and functions that operate on objects.
Quantification: Propositional logic lacks quantifiers (like "for all" and "exists") which are essential for
expressing general statements and relationships involving variables. First-order logic includes quantifiers
to make statements about entire classes of objects.
Structure: In propositional logic, propositions are atomic and not further decomposed. In first-order
logic, propositions can contain variables, predicates, and functions, allowing for more detailed
representation of relationships and properties.
Scope: Propositional logic is often used for simple reasoning tasks and truth tables. First-order logic is
more suitable for representing complex relationships, making inferences, and expressing higher-level
concepts.
In summary, propositional logic deals with true/false propositions and their logical relationships, while first-order logic extends this by allowing quantification, variable binding, and the representation of more intricate relationships between objects.

UNIFICATION IN FIRST-ORDER LOGIC.

o Unification is a process of making two different logical atomic expressions identical by finding
a substitution. Unification depends on the substitution process.
o It takes two literals as input and makes them identical using substitution.
o Let Ψ1 and Ψ2 be two atomic sentences and 𝜎 be a unifier such that, Ψ1𝜎 = Ψ2𝜎, then it can be
expressed as UNIFY(Ψ1, Ψ2).
o Example: Find the MGU for Unify{King(x), King(John)}

Let Ψ1 = King(x), Ψ2 = King(John),

Substitution θ = {John/x} is a unifier for these atoms and applying this substitution, and both
expressions will be identical.

o The UNIFY algorithm is used for unification, which takes two atomic sentences and returns a
unifier for those sentences (If any exist).
o Unification is a key component of all first-order inference algorithms.
o It returns fail if the expressions do not match with each other.
o The simplest substitution that makes two expressions identical is called the Most General Unifier, or MGU.

E.g. Let's say there are two different expressions, P(x, y), and P(a, f(z)).

In this example, we need to make both above statements identical to each other. For this, we will
perform the substitution.

P(x,y).........(i)
P(a, f(z))......... (ii)

o Substitute x with a, and y with f(z) in the first expression, and it will be represented as a/x and
f(z)/y.
o With both the substitutions, the first expression will be identical to the second expression and
the substitution set will be: [a/x, f(z)/y].

Conditions for Unification:

Following are some basic conditions for unification:

o The predicate symbols must be the same; atoms or expressions with different predicate symbols can never be unified.
o The number of arguments in both expressions must be identical.
o Unification will fail if the same variable would have to be bound to two different terms at the same time.
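The UNIFY procedure itself can be sketched as below. In this sketch, variables are lowercase strings, constants start with an uppercase letter, compound expressions are tuples whose first element is the predicate or function symbol, and the occurs check is omitted for brevity; failure is signalled by returning None.

def is_variable(x):
    return isinstance(x, str) and x[:1].islower()

def unify(x, y, theta=None):
    theta = {} if theta is None else theta
    if x == y:
        return theta
    if is_variable(x):
        return unify_var(x, y, theta)
    if is_variable(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple):
        if x[0] != y[0] or len(x) != len(y):      # predicate symbol and arity must match
            return None
        for xi, yi in zip(x[1:], y[1:]):
            theta = unify(xi, yi, theta)
            if theta is None:
                return None
        return theta
    return None

def unify_var(var, value, theta):
    if var in theta:
        return unify(theta[var], value, theta)
    return {**theta, var: value}

print(unify(("King", "x"), ("King", "John")))            # -> {'x': 'John'}, the MGU
print(unify(("P", "x", "y"), ("P", "A", ("f", "z"))))    # -> {'x': 'A', 'y': ('f', 'z')}
print(unify(("P", "x"), ("Q", "x")))                     # -> None: predicate symbols differ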

Forward Chaining and backward chaining


The inference engine is the component of an intelligent system in artificial intelligence which applies logical rules to the knowledge base to infer new information from known facts. The first inference engines were part of expert systems. An inference engine commonly proceeds in one of two modes:

A. Forward chaining

B. Backward chaining

Forward Chaining

Forward chaining is also known as forward deduction or forward reasoning when using an inference engine. Forward chaining is a form of reasoning which starts with the atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to extract more data until a goal is reached.

The forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.

Properties of Forward-Chaining:
o It is a bottom-up approach, as it moves from the known facts up to the conclusion.
o It is a process of reaching a conclusion based on known facts or data, starting from the initial state and moving toward the goal state.
o The forward-chaining approach is also called data-driven, as we reach the goal using the available data.
o The forward-chaining approach is commonly used in expert systems such as CLIPS, and in business and production rule systems.
Consider the following famous example which we will use in both approaches:
Example:
"As per the law, it is a crime for an American to sell weapons to hostile nations. Country A, an
enemy of America, has some missiles, and all the missiles were sold to it by Robert, who is an
American citizen."
Prove that "Robert is a criminal."
To solve the above problem, first, we will convert all the above facts into first-order definite clauses, and
then we will use a forward-chaining algorithm to reach the goal.
Facts Conversion into FOL:
o It is a crime for an American to sell weapons to hostile nations. (Let's say p, q, and r are
variables)
American (p) ∧ weapon(q) ∧ sells (p, q, r) ∧ hostile(r) → Criminal(p) ...(1)
o Country A has some missiles: ∃p Owns(A, p) ∧ Missile(p). It can be written as two definite clauses by using Existential Instantiation, introducing a new constant T1:
Owns(A, T1) ......(2)
Missile(T1) .......(3)
o All of the missiles were sold to country A by Robert.

∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)


o Missiles are weapons.
Missile(p) → Weapon(p) .......(5)
o Enemy of America is known as hostile.
Enemy(p, America) →Hostile(p) ........(6)
o Country A is an enemy of America.
Enemy (A, America) .........(7)
o Robert is American
American(Robert). ..........(8)
Forward chaining proof:
Step-1:
In the first step we will start with the known facts and will choose the sentences which do not have
implications, such as: American(Robert), Enemy(A, America), Owns(A, T1), and Missile(T1). All
these facts will be represented as below.

Step-2:

At the second step, we will infer those facts whose premises are satisfied by the facts already available.

Rule (1) does not yet have its premises satisfied, so it will not be added in the first iteration.

Rules (2) and (3) are already added.

Rule (4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added; it is inferred from the conjunction of facts (2) and (3).

Rule (6) is satisfied with the substitution {p/A}, so Hostile(A) is added; it is inferred from fact (7).

Step-3:
At step 3, we can check that Rule (1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add Criminal(Robert), which follows from all the available facts. Hence we have reached our goal statement.
Hence it is proved that Robert is a criminal using the forward chaining approach.
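The forward-chaining proof above can be reproduced with a small Python sketch. Facts are ground atoms, each rule is a list of premise patterns plus a conclusion pattern (lowercase symbols are variables), and the naive loop simply re-scans the rules until nothing new can be added; this illustrates the idea rather than an efficient implementation.

from itertools import product

facts = {("American", "Robert"), ("Enemy", "A", "America"),
         ("Owns", "A", "T1"), ("Missile", "T1")}

rules = [
    ([("American", "p"), ("Weapon", "q"), ("Sells", "p", "q", "r"), ("Hostile", "r")],
     ("Criminal", "p")),                                                      # rule (1)
    ([("Missile", "p"), ("Owns", "A", "p")], ("Sells", "Robert", "p", "A")),  # rule (4)
    ([("Missile", "p")], ("Weapon", "p")),                                    # rule (5)
    ([("Enemy", "p", "America")], ("Hostile", "p")),                          # rule (6)
]

def is_var(s):
    return s.islower()

def match(pattern, fact, theta):
    # Try to extend substitution theta so that pattern matches the ground fact.
    if pattern[0] != fact[0] or len(pattern) != len(fact):
        return None
    theta = dict(theta)
    for p, f in zip(pattern[1:], fact[1:]):
        if is_var(p):
            if theta.get(p, f) != f:
                return None
            theta[p] = f
        elif p != f:
            return None
    return theta

def forward_chain(facts, rules):
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for combo in product(tuple(facts), repeat=len(premises)):
                theta = {}
                for prem, fact in zip(premises, combo):
                    theta = match(prem, fact, theta)
                    if theta is None:
                        break
                if theta is None:
                    continue
                new = (conclusion[0],) + tuple(theta.get(a, a) for a in conclusion[1:])
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

print(("Criminal", "Robert") in forward_chain(set(facts), rules))   # -> True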

Backward Chaining
Backward-chaining is also known as a backward deduction or backward reasoning method when
using an inference engine. A backward chaining algorithm is a form of reasoning, which starts with
the goal and works backward, chaining through rules to find known facts that support the goal.
Properties of backward chaining:

○ It is known as a top-down approach.
○ Backward chaining is based on the modus ponens inference rule.
○ In backward chaining, the goal is broken into sub-goals to prove the facts true.
○ It is called a goal-driven approach, as a list of goals decides which rules are selected and used.
○ The backward-chaining algorithm is used in game theory, automated theorem-proving tools, inference engines, proof assistants, and various AI applications.
○ The backward-chaining method mostly uses a depth-first search strategy for proofs.
Example
In backward-chaining, we will use the same above example, and will rewrite all the rules.
American (p) ∧ weapon(q) ∧ sells (p, q, r) ∧ hostile(r) → Criminal(p) ...(1)

Owns(A, T1) ........(2)

Missile(T1) ………(3)
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ………(4)

Missile(p) → Weapon(p) .......(5)

Enemy(p, America) →Hostile(p) ........(6)

Enemy (A, America) .........(7)

American(Robert). ..........(8)

Backward-Chaining proof
In Backward chaining, we will start with our goal predicate, which is Criminal(Robert), and then
infer further rules.
Step-1:
At the first step, we will take the goal fact, and from the goal fact we will infer other facts; at last, we will prove those facts true. So our goal fact is "Robert is a criminal," and the following is its predicate.
Step-2:
At the second step, we will infer other facts from the goal fact which satisfy the rules. As we can see in Rule (1), the goal predicate Criminal(Robert) is present with the substitution {p/Robert}. So we will add all the conjunctive facts below the first level and replace p with Robert.
Here we can see that American(Robert) is a fact, so it is proved here.

Step-3:
At step 3, we will extract the further sub-goal Missile(q) from Weapon(q), as it satisfies Rule (5). Weapon(q) is then true with the substitution of the constant T1 for q.

Step-4:

At step 4, we can infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which satisfies Rule (4) with the substitution of A in place of r. So these two statements are proved here.

Step-5:
At step 5, we can infer the fact Enemy(A, America) from Hostile(A), which satisfies Rule (6). Hence all the statements are proved true using backward chaining.
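The same goal-driven search can be sketched in Python. For brevity the rules below are written with the relevant constants already plugged in (the variables have been pre-instantiated with Robert, T1, and A), so each goal is proved either because it is a known fact or by recursively proving, depth-first, the premises of a rule that concludes it.

facts = {"American(Robert)", "Enemy(A, America)", "Owns(A, T1)", "Missile(T1)"}

rules = {
    "Criminal(Robert)":     [["American(Robert)", "Weapon(T1)",
                              "Sells(Robert, T1, A)", "Hostile(A)"]],
    "Sells(Robert, T1, A)": [["Missile(T1)", "Owns(A, T1)"]],
    "Weapon(T1)":           [["Missile(T1)"]],
    "Hostile(A)":           [["Enemy(A, America)"]],
}

def backward_chain(goal, depth=0):
    print("  " * depth + "goal:", goal)
    if goal in facts:                         # the goal is a known fact
        return True
    for premises in rules.get(goal, []):      # try each rule that concludes the goal
        if all(backward_chain(p, depth + 1) for p in premises):
            return True
    return False

print(backward_chain("Criminal(Robert)"))     # -> True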

RESOLUTION
Resolution is a fundamental inference rule used in automated reasoning and logic-based AI systems.
It is a technique employed in propositional logic and first-order logic to derive new logical statements
or facts from existing ones. Resolution plays a crucial role in logic programming, theorem proving,
and knowledge representation.
The concept of resolution involves the following steps:
1. Clause Representation: Logical statements or facts are typically represented as clauses in first-
order logic. A clause is a disjunction of literals, where a literal is either an atomic proposition or its
negation. For example, (p ∨ q) represents the clause "p or q."
2. Unification: Before applying resolution, the clauses involved must be unified. Unification is the
process of finding substitutions for variables such that two clauses can be made identical. Variables in
the clauses are matched and instantiated to specific terms.
3. Resolution Rule: The resolution rule states that if two clauses contain complementary literals (one
positive and one negative), they can be resolved to derive a new clause. The resolution process
combines the two clauses by removing the complementary literals.
4. Generating New Clauses: The resolution process is repeated by selecting pairs of clauses and
applying the resolution rule until no new clauses can be derived. This iterative process continues until
all possible combinations are exhausted.

5. Contradiction Detection: If, during the resolution process, an empty clause (a clause with no literals) is derived, it indicates a contradiction in the set of initial clauses. This contradiction implies that the original set of statements is inconsistent or unsatisfiable.
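A propositional version of this procedure can be sketched in a few lines: clauses are frozensets of literals (negation written with a leading "~"), the negated query is added, and pairs of clauses are resolved until either the empty clause appears (entailment proved) or no new clauses can be generated.

from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    # Every clause obtained by resolving c1 and c2 on a complementary pair of literals.
    return [(c1 - {lit}) | (c2 - {negate(lit)}) for lit in c1 if negate(lit) in c2]

def resolution_entails(clauses, query):
    clauses = set(clauses) | {frozenset({negate(query)})}   # add the negated goal
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:                                    # empty clause: contradiction found
                    return True
                new.add(frozenset(r))
        if new.issubset(clauses):                            # nothing new: the query is not entailed
            return False
        clauses |= new

# Example KB: (P ∨ Q), (¬P ∨ R), (¬Q ∨ R).  Does it entail R?
kb = {frozenset({"P", "Q"}), frozenset({"~P", "R"}), frozenset({"~Q", "R"})}
print(resolution_entails(kb, "R"))    # -> True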

