M03 - 03 Logical Agents

Logical agents utilize a knowledge base (KB) to represent knowledge and make decisions through inference to achieve goals. The Wumpus World serves as a practical example of a task environment where agents must navigate dangers and collect gold, using logical reasoning based on percepts and actions. Propositional logic is introduced as a framework for constructing and evaluating knowledge bases, with specific rules governing the truth of sentences within this context.


Logical Agents

Logical Agents
• An agent can represent knowledge of its world, its goals and the
current situation.
• A logical agent maintains a collection of sentences in logic; using
these sentences, it decides what to do by inferring new knowledge
(conclusions).
• Each conclusion identifies an action, or set of actions, that is
appropriate for achieving the agent's goals.
• Knowledge and reasoning are important to logical agents because
they enable successful behaviors that achieve a desired goal.
KNOWLEDGE-BASED AGENTS

• The central component of a knowledge-based agent is its
knowledge base (KB)
• KB contains a set of sentences in a formal
language
• Sentences are expressed using a knowledge
representation language
• Two generic functions:
– TELL - add new sentences (facts) to the KB
• "Tell it what it needs to know"
– ASK - query what is known from the KB
• "Ask what to do next"
• The agent maintains a knowledge base, KB, which may initially contain
some background knowledge.
• Each time the agent program is called, it does three things.
• First, it TELLs the knowledge base what it perceives.
• Second, it ASKs the knowledge base what action it should perform. In the
process of answering this query, extensive reasoning may be done about
the current state of the world, about the outcomes of possible action
sequences, and so on.
• Third, the agent program TELLs the knowledge base which action was
chosen, and the agent executes the action.
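The TELL–ASK–TELL loop described above can be sketched in Python. This is a minimal illustrative sketch: the sentence constructors and the toy KB below are stand-ins invented for the example, not part of any standard library or of the text's pseudocode.

```python
# A minimal sketch of the TELL-ASK-TELL agent loop; the sentence
# constructors and ToyKB are illustrative stand-ins only.
def make_percept_sentence(percept, t):
    return ("Percept", tuple(percept), t)

def make_action_query(t):
    return ("Action?", t)

def make_action_sentence(action, t):
    return ("Action", action, t)

class ToyKB:
    """Records every sentence it is TOLD and always answers 'Forward'."""
    def __init__(self):
        self.sentences = []
    def tell(self, sentence):
        self.sentences.append(sentence)
    def ask(self, query):
        return "Forward"

class KnowledgeBasedAgent:
    def __init__(self, kb):
        self.kb = kb
        self.t = 0
    def step(self, percept):
        self.kb.tell(make_percept_sentence(percept, self.t))  # 1. TELL what it perceives
        action = self.kb.ask(make_action_query(self.t))       # 2. ASK what to do
        self.kb.tell(make_action_sentence(action, self.t))    # 3. TELL which action was chosen
        self.t += 1
        return action

agent = KnowledgeBasedAgent(ToyKB())
print(agent.step(["Stench", "Breeze", None, None, None]))  # Forward
```

A real agent would replace ToyKB with a knowledge base that does genuine inference in ASK; the point here is only the three-step control structure.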
• The knowledge-based agent is not an arbitrary program for calculating
actions.
• It is amenable to a description at the knowledge level, where we need
to specify only what the agent knows and what its goals are, in order to
fix its behavior.
• For example, an automated taxi might have the goal of taking a
passenger from Hebbal to BMSIT&M and might know that the NES
Yelahanka is the only link between the two locations. Then we can expect
it to cross the NES Yelahanka because it knows that that will achieve its
goal.
• Notice that this analysis is independent of how the taxi works at the
implementation level. It doesn’t matter whether its geographical
knowledge is implemented as linked lists or pixel maps, or whether it
reasons by manipulating strings of symbols stored in registers or by
propagating noisy signals in a network of neurons.
• A knowledge-based agent can be built simply by TELLing it what it needs
to know.
• Starting with an empty knowledge base, the agent designer can TELL
sentences one by one until the agent knows how to operate in its
environment.
• This is called the declarative approach to system building.
• In contrast, the procedural approach encodes desired behaviors directly
as program code.
THE WUMPUS WORLD
• The wumpus world is a cave consisting of rooms
connected by passageways.
• Lurking somewhere in the cave is the terrible
wumpus, a beast that eats anyone who enters its
room.
• The wumpus can be shot by an agent, but the
agent has only one arrow.
• Some rooms contain bottomless pits that will trap
anyone who wanders into these rooms (except for
the wumpus, which is too big to fall in).
• The only mitigating feature of this bleak
environment is the possibility of finding a heap of
gold.
• Although the wumpus world is rather tame by
modern computer game standards, it illustrates
some important points about intelligence.
The task environment of WUMPUS WORLD
• Performance measure: +1000 for climbing out of the cave with the gold, –
1000 for falling into a pit or being eaten by the wumpus, –1 for each action
taken and –10 for using up the arrow. The game ends either when the
agent dies or when the agent climbs out of the cave.
• Environment: A 4×4 grid of rooms. The agent always starts in the square
labeled [1,1], facing to the right. The locations of the gold and the wumpus
are chosen randomly, with a uniform distribution, from the squares other
than the start square. In addition, each square other than the start can be
a pit, with probability 0.2.
• Actuators: The agent can move Forward, TurnLeft by 90◦, or TurnRight by
90◦. The agent dies a miserable death if it enters a square containing a pit
or a live wumpus. (It is safe, albeit smelly, to enter a square with a dead
wumpus.) If an agent tries to move forward and bumps into a wall, then the
agent does not move. The action Grab can be used to pick up the gold if it
is in the same square as the agent. The action Shoot can be used to fire an
arrow in a straight line in the direction the agent is facing. The arrow
continues until it either hits (and hence kills) the wumpus or hits a wall.
The agent has only one arrow, so only the first Shoot action has any effect.
Finally, the action Climb can be used to climb out of the cave, but only
from square [1,1].
• Sensors: The agent has five sensors, each of which gives a single bit of
information:
– In the square containing the wumpus and in the directly (not
diagonally) adjacent squares, the agent will perceive a Stench.
– In the squares directly adjacent to a pit, the agent will perceive a
Breeze.
– In the square where the gold is, the agent will perceive a Glitter.
– When an agent walks into a wall, it will perceive a Bump.
– When the wumpus is killed, it emits a woeful Scream that can be
perceived anywhere in the cave.
• The percepts will be given to the agent program in the form of a list of five
symbols; for example, if there is a stench and a breeze, but no glitter,
bump, or scream, the agent program will get [Stench, Breeze, None, None,
None].
Logic
• Knowledge bases consist of sentences.
• These sentences are expressed according to the syntax of the
representation language (syntax of logic), which specifies all the sentences
that are well formed.
• Example: “x + y = 4” is a well-formed sentence, whereas “x4y+ =” is not (The
notion of syntax is clear enough in ordinary arithmetic).
• The semantics defines the truth of each sentence with respect to each
possible world.
• Example: the semantics for arithmetic specifies that the sentence “x + y =4”
is true in a world where x is 2 and y is 2, but false in a world where x is 1
and y is 1.
• In standard logics, every sentence must be either true or false in each
possible world—there is no “in between.”
• We use the term model in place of “possible world.”
• The sentence x + y = 4 is true in a world where
– x is 1 and y is 3
– x is 2 and y is 2
– x is 3 and y is 1
– But false in all other cases
• Logical entailment between sentences means that one
sentence follows logically from another.
• Mathematical notation: α |= β (the sentence α entails the
sentence β).
• The formal definition of entailment is this:
– α |= β if and only if, in every model in which α is true,
β is also true.
– Using the notation just introduced, we can write
α |= β if and only if M(α) ⊆ M(β)
• Example: α : x + y = 4 and β : x × y ≥ 2, where x and y each
range over {1, 2, 3}.

The nine possible models are (1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 1), (3, 2) and (3, 3).
α is true when x is 1 and y is 3, when x is 2 and y is 2, and when x is 3 and y is 1; β is also true in each of these models, so M(α) ⊆ M(β) and α |= β.
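The subset test M(α) ⊆ M(β) can be checked by brute-force enumeration. A minimal Python sketch, assuming the intended example is α: x + y = 4 and β: x·y ≥ 2 with x and y ranging over {1, 2, 3}:

```python
# Brute-force check of M(alpha) <= M(beta) over the nine models (x, y).
domain = (1, 2, 3)
models_alpha = {(x, y) for x in domain for y in domain if x + y == 4}   # M(alpha)
models_beta  = {(x, y) for x in domain for y in domain if x * y >= 2}   # M(beta)

print(sorted(models_alpha))         # [(1, 3), (2, 2), (3, 1)]
print(models_alpha <= models_beta)  # True, so alpha |= beta
```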
• We can apply the same kind of
analysis to the wumpus-world
reasoning example.
• Consider the situation in
Figure: the agent has detected
nothing in [1,1] and a breeze in
[2,1].
• These percepts, combined with
the agent's knowledge of the
rules of the wumpus world,
constitute the KB.
• The agent is interested in
whether the adjacent squares
[1,2], [2,2], and [3,1] contain
pits.
• Each of the three squares
might or might not contain
pits.
• So there are 2³ = 8 possible
models
• Let us consider two possible conclusions:
– α1 = "There is no pit in [1,2]."
– α2 = "There is no pit in [2,2]."
• KB |= α1: α1 is true in every model in which the KB is true.
• KB does not entail α2: α2 is false in some model in which the
KB is true, so the agent cannot conclude that [2,2] is pit-free.
PROPOSITIONAL LOGIC: A VERY SIMPLE LOGIC
Syntax:
• defines valid sentences
• The atomic sentences consist of a single proposition
symbol.
• Each such symbol stands for a proposition that can be
true or false.
• We use symbols that start with an uppercase letter
and may contain other letters or subscripts, for
example: P, Q, R, W1,3 and North.
• There are two proposition symbols with fixed
meanings: True is the always-true proposition and
False is the always-false proposition.
• Complex sentences are constructed from simpler
sentences, using parentheses and logical connectives.
• Five Connectives:
– ¬ (not): negation
– ∧ (and): conjunction
– ∨ (or): disjunction
– ⇒ (implies): implication
– ⇔ (if and only if): biconditional
Semantics:
• The semantics defines the rules for determining the
truth of a sentence with respect to a particular model.
• In propositional logic, a model simply fixes the truth
value true or false for every proposition symbol.
• one possible model is m1 = {P1,2 = false, P2,2 = false,
P3,1 = true}.
• With three proposition symbols, there are 2³ = 8
possible models.
• All sentences are constructed from atomic sentences
and the five connectives; therefore, we need to specify
how to compute the truth of atomic sentences and
how to compute the truth of sentences formed with
each of the five connectives.
• Atomic sentences are easy:
– True is true in every model and False is false in every model.
– The truth value of every other proposition symbol must be
specified directly in the model.
– For example, in the model m1 given earlier, P1,2 is false.
• For complex sentences, we have five rules, which hold
for any sub sentences P and Q in any model m
• the sentence ¬P1,2 ∧ (P2,2 ∨ P3,1), evaluated in m1, gives
true ∧ (false ∨ true) = true ∧ true = true.
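This recursive evaluation can be sketched in Python. The encoding of sentences as nested tuples and the name pl_true are illustrative choices for this sketch, not definitions from the text:

```python
def pl_true(sentence, model):
    """Evaluate a propositional sentence, encoded as nested tuples, in a model."""
    if isinstance(sentence, str):                 # atomic: look up the symbol
        return model[sentence]
    op = sentence[0]
    if op == "not":
        return not pl_true(sentence[1], model)
    if op == "and":
        return pl_true(sentence[1], model) and pl_true(sentence[2], model)
    if op == "or":
        return pl_true(sentence[1], model) or pl_true(sentence[2], model)
    if op == "implies":                           # false only when P true, Q false
        return (not pl_true(sentence[1], model)) or pl_true(sentence[2], model)
    if op == "iff":                               # true when both sides agree
        return pl_true(sentence[1], model) == pl_true(sentence[2], model)
    raise ValueError("unknown connective: " + op)

m1 = {"P12": False, "P22": False, "P31": True}
# The sentence from the text, ~P1,2 & (P2,2 | P3,1), evaluated in m1:
print(pl_true(("and", ("not", "P12"), ("or", "P22", "P31")), m1))  # True
```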
• The truth table for ⇒ may not quite fit one’s intuitive understanding
of “P implies Q” or “if P then Q.” For one thing, propositional logic
does not require any relation of causation or relevance between P and
Q.
• The sentence “5 is odd implies Tokyo is the capital of Japan” is a true
sentence of propositional logic (under the normal interpretation), even
though it is a decidedly odd sentence of English.
• Another point of confusion is that any implication is true whenever
its antecedent is false. For example, “5 is even implies Sam is smart”
is true, regardless of whether Sam is smart. This seems bizarre, but it
makes sense if you think of “P ⇒ Q” as saying, “If P is true, then I am
claiming that Q is true. Otherwise I am making no claim.”
• The only way for this sentence to be false is if P is true but Q is false.
• The biconditional, P ⇔ Q, is true whenever both P ⇒ Q
and Q ⇒ P are true.
• In English, this is often written as “P if and only if Q.”
• Many of the rules of the wumpus world are best
written using ⇔.
• For example, a square is breezy if a neighboring
square has a pit, and a square is breezy only if a
neighboring square has a pit.
• So we need a biconditional, B1,1 ⇔ (P1,2 ∨ P2,1) , where
B1,1 means that there is a breeze in [1,1].
A simple Knowledge base:
• We can construct a knowledge base for the wumpus
world.
• We focus first on the immutable aspects of the
wumpus world.
• We need the following symbols for each [x, y] location:
– Px,y is true if there is a pit in [x, y].
– Wx,y is true if there is a wumpus in [x, y], dead or
alive.
– Bx,y is true if the agent perceives a breeze in [x, y].
– Sx,y is true if the agent perceives a stench in [x, y].
• There is no pit in [1,1]:
– R1 : ¬P1,1.
• A square is breezy if and only if there is a pit in a
neighboring square.
– R2 : B1,1 ⇔ (P1,2 ∨ P2,1).
– R3 : B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1).
• The preceding sentences are true in all wumpus worlds.
• Now we include the breeze percepts for the first two
squares visited in the specific world the agent is in,
leading up to the situation in Figure 7.3(b).
– R4 : ¬B1,1.
– R5 : B2,1.
• The KB consists of R1 through R5
• It can also be considered as a single
sentence: the conjunction R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5,
• because asserting the conjunction asserts that
all five sentences are true.
A simple Inference Procedure:
• Our goal now is to decide whether KB |= α for some
sentence α.
• For example, is ¬P1,2 entailed by our KB?
• In wumpus-world example:
• the relevant proposition symbols are B1,1, B2,1, P1,1,
P1,2, P2,1, P2,2, and P3,1.
• With seven symbols, there are 2⁷ = 128 possible
models; in three of these, KB is true.
• In those three models, ¬P1,2 is true, hence there is no
pit in [1,2].
• On the other hand, P2,2 is true in two of the three
models and false in one, so we cannot yet tell whether
there is a pit in [2,2].
• TT-ENTAILS? performs a recursive enumeration of a
finite space of assignments to symbols.
• The algorithm is sound because it implements directly
the definition of entailment, and complete because it
works for any KB and α and always terminates—there
are only finitely many models to examine.
• The time complexity of the algorithm is O(2ⁿ).
• The space complexity is only O(n).
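The truth-table enumeration can be sketched directly, hard-coding R1 through R5 over the seven symbols. This is a minimal Python sketch of the same idea as TT-ENTAILS?, not the book's pseudocode:

```python
from itertools import product

SYMS = ["B11", "B21", "P11", "P12", "P21", "P22", "P31"]

def kb(m):
    """R1-R5 from the text, as one Python predicate over a model dict."""
    return (not m["P11"]                                          # R1: no pit in [1,1]
            and (m["B11"] == (m["P12"] or m["P21"]))              # R2
            and (m["B21"] == (m["P11"] or m["P22"] or m["P31"]))  # R3
            and not m["B11"]                                      # R4: no breeze in [1,1]
            and m["B21"])                                         # R5: breeze in [2,1]

def tt_entails(kb_fn, alpha, symbols):
    """KB |= alpha iff alpha holds in every model where the KB holds (2^n models)."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb_fn(model) and not alpha(model):
            return False
    return True

print(tt_entails(kb, lambda m: not m["P12"], SYMS))  # True:  KB |= ~P1,2
print(tt_entails(kb, lambda m: not m["P22"], SYMS))  # False: KB does not entail ~P2,2
```

Enumerating the 128 assignments confirms the text's count: exactly three models satisfy the KB, ¬P1,2 holds in all three, and P2,2 differs across them.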
PROPOSITIONAL THEOREM PROVING
Equivalence, Validity and Satisfiability:
• Logical equivalence: two sentences α and β are
logically equivalent if they are true in the same set of
models.
• We write this as α ≡ β.
• α ≡ β if and only if α |= β and β |= α.
• Validity: A sentence is valid if it is true in all models.
For example, the sentence P ∨ ¬P is valid. Valid
sentences are also known as tautologies. They are
necessarily true.
• A sentence is satisfiable if it is true in, or satisfied by,
some model. (Example: P ∧ Q is satisfiable; P ∧ ¬P is not satisfiable.)
• For example, the knowledge base given earlier, (R1 ∧
R2 ∧ R3 ∧ R4 ∧ R5), is satisfiable because there are
three models in which it is true.
Inference and Proofs:
• Standard patterns of inference that can be applied to derive
chains of conclusions that lead to the desired goal. These
patterns of inferences are called inference rules.
• The useful inference rules are:
– Modus Ponens
– And-Elimination
• Modus Ponens:
α ⇒ β,    α
───────────
β
• The above notation means that, whenever any sentences of the
form α ⇒ β and α are given, then the sentence β can be inferred.
• For example, if (WumpusAhead ∧ WumpusAlive) ⇒ Shoot and
(WumpusAhead ∧ WumpusAlive) are given, then Shoot can be
inferred.
• And-Elimination:
α ∧ β
─────
α
• For example, from (WumpusAhead ∧ WumpusAlive), WumpusAlive
can be inferred.
• All of the logical equivalences in Figure 7.11 can be used as
inference rules.
• For example, the equivalence for biconditional elimination yields
the two inference rules: from α ⇔ β infer α ⇒ β, and from
α ⇔ β infer β ⇒ α.
• Not all inference rules work in both directions like this. For
example, we cannot run Modus Ponens in the opposite direction
to obtain α ⇒ β and α from β.
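The two rules above can be sketched as toy functions over sentences encoded as nested tuples. The encoding and the function names are illustrative choices for this sketch, not from the text:

```python
def modus_ponens(implication, fact):
    """From an implication and its antecedent, infer the consequent; None otherwise."""
    op, antecedent, consequent = implication
    if op == "implies" and fact == antecedent:
        return consequent
    return None

def and_elimination(conjunction):
    """From a conjunction, infer both conjuncts."""
    op, left, right = conjunction
    if op != "and":
        raise ValueError("not a conjunction")
    return [left, right]

rule = ("implies", ("and", "WumpusAhead", "WumpusAlive"), "Shoot")
fact = ("and", "WumpusAhead", "WumpusAlive")
print(modus_ponens(rule, fact))  # Shoot
print(and_elimination(fact))     # ['WumpusAhead', 'WumpusAlive']
```

Note that modus_ponens returns None when the fact does not match the antecedent, mirroring the point that the rule cannot be run in the opposite direction.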
• Let us see how these inference rules and equivalences can be
used in the wumpus world.
• We start with the knowledge base containing R1 through R5 and
show how to prove ¬P1,2, that is, there is no pit in [1,2].
• First, we apply biconditional elimination to R2 to obtain
– R6 : (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1).
• Then we apply And-Elimination to R6 to obtain
– R7 : (P1,2 ∨ P2,1) ⇒ B1,1.
• Logical equivalence for contrapositives gives
– R8 : ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1).
• Now we can apply Modus Ponens with R8 and the percept R4 (i.e., ¬B1,1) to obtain
– R9 : ¬(P1,2 ∨ P2,1).
• Finally, we apply De Morgan's rule, giving the conclusion
– R10 : ¬P1,2 ∧ ¬P2,1.
• That is, neither [1,2] nor [2,1] contains a pit.
(Recall the KB: R1 : ¬P1,1, R2 : B1,1 ⇔ (P1,2 ∨ P2,1),
R3 : B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1), R4 : ¬B1,1, R5 : B2,1.)


• One final property of logical systems is monotonicity, which says
that the set of entailed sentences can only increase as information
is added to the knowledge base.
• For any sentences α and β, if KB |= α then KB ∧ β |= α.
Proof by Resolution:
• the agent returns from [2,1] to [1,1] and then goes to [1,2], where it perceives
a stench, but no breeze. We add the following facts to the knowledge base:
• R11 : ¬B1,2.
• R12 : B1,2 ⇔ (P1,1 ∨ P2,2 ∨ P1,3).
• By the same process that led to R10 earlier, we can now derive the absence of
pits in [2,2] and [1,3] (remember that [1,1] is already known to be pitless):
• R13 : ¬P2,2.
• R14 : ¬P1,3.
• We can also apply biconditional elimination to R3,
followed by Modus Ponens with R5, to obtain the
fact that there is a pit in [1,1], [2,2], or [3,1]:
• R15 : P1,1 ∨ P2,2 ∨ P3,1 .
• Now comes the first application of the resolution
rule: the literal ¬P2,2 in R13 resolves with the literal
P2,2 in R15 to give the resolvent R16 : P1,1 ∨ P3,1
• In English; if there’s a pit in one of [1,1], [2,2], and
[3,1] and it’s not in [2,2], then it’s in [1,1] or [3,1].
• Similarly, the literal ¬P1,1 in R1 resolves with the
literal P1,1 in R16 to give R17 : P3,1.
• In English: if there's a pit in [1,1] or [3,1] and it's
not in [1,1], then it's in [3,1].
• These last two inference steps are examples of the
unit resolution inference rule:
ℓ1 ∨ · · · ∨ ℓk,    m
───────────────────────────────
ℓ1 ∨ · · · ∨ ℓi−1 ∨ ℓi+1 ∨ · · · ∨ ℓk
• where each ℓ is a literal and ℓi and m are complementary literals
(i.e., one is the negation of the other).
• Thus, the unit resolution rule takes a clause — a disjunction of
literals — and a literal and produces a new clause.
• Note that a single literal can be viewed as a disjunction of one
literal, also known as a unit clause.
• The unit resolution rule can be generalized to the full resolution
rule,
ℓ1 ∨ · · · ∨ ℓk,    m1 ∨ · · · ∨ mn
──────────────────────────────────────────────
ℓ1 ∨ · · · ∨ ℓi−1 ∨ ℓi+1 ∨ · · · ∨ ℓk ∨ m1 ∨ · · · ∨ mj−1 ∨ mj+1 ∨ · · · ∨ mn
• where ℓi and mj are complementary literals.


• This says that resolution takes two clauses and produces a new
clause containing all the literals of the two original clauses except
the two complementary literals.
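The full resolution rule can be sketched over clauses represented as frozensets of literals. The encoding ("-P" marking a negated literal) is an illustrative choice for this sketch:

```python
def negate(lit):
    """Complement of a literal: 'P' <-> '-P'."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolve(c1, c2):
    """All resolvents: for each complementary pair, the union of the remaining literals."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

r13 = frozenset({"-P22"})                   # R13: no pit in [2,2]
r15 = frozenset({"P11", "P22", "P31"})      # R15: pit in [1,1], [2,2] or [3,1]
r16 = resolve(r13, r15)[0]                  # R16: pit in [1,1] or [3,1]
print(sorted(r16))                          # ['P11', 'P31']
r17 = resolve(frozenset({"-P11"}), r16)[0]  # resolving with R1 gives R17
print(sorted(r17))                          # ['P31']
```

This reproduces the two inference steps from the text: R13 with R15 yields R16, and R1 with R16 yields R17 : P3,1.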
Conjunctive Normal Form (CNF):
• A sentence expressed as a conjunction of clauses is said to be in
conjunctive normal form or CNF
• Exercise: Convert the sentence B1,1 ⇔ (P1,2 ∨ P2,1) into CNF.
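One way to check a hand conversion of this exercise: the standard CNF of B1,1 ⇔ (P1,2 ∨ P2,1) is (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1). The snippet below verifies the equivalence by enumerating all 2³ = 8 models:

```python
from itertools import product

# Compare B <=> (P | Q) against its CNF over all eight truth assignments.
same = all(
    (b == (p or q)) ==
    ((not b or p or q) and (not p or b) and (not q or b))
    for b, p, q in product([True, False], repeat=3)
)
print(same)  # True: the two sentences are logically equivalent
```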
Resolution Algorithm:
Horn clauses and definite clauses
Forward and backward chaining
Exercises:
• Ex. 7.2, 7.4, 7.5, 7.6, 7.7, 7.10, 7.20.
